Git Mailing List Archive mirror
 help / color / mirror / Atom feed
* [PATCH RFC v2 0/4] Add an external testing library for unit tests
@ 2023-05-17 23:56 steadmon
  2023-05-17 23:56 ` [PATCH RFC v2 1/4] common-main: split common_exit() into a new file steadmon
                   ` (5 more replies)
  0 siblings, 6 replies; 32+ messages in thread
From: steadmon @ 2023-05-17 23:56 UTC (permalink / raw)
  To: git
  Cc: Josh Steadmon, calvinwan, szeder.dev, phillip.wood123, chooglen,
	avarab, gitster, sandals, Calvin Wan, Phillip Wood

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Unit tests additionally provide stability to the
codebase and can simplify debugging through isolation. Turning parts of
Git into libraries[1] gives us the ability to run unit tests on the
libraries and to write unit tests in C. Writing unit tests in pure C,
rather than with our current shell/test-tool helper setup, simplifies
test setup, simplifies passing data around (no shell-isms required), and
reduces testing runtime by not spawning a separate process for every
test invocation.

Unit testing in C requires a separate testing harness that we ideally
would like to be TAP-style and to come with a non-restrictive license.
Fortunately, there already exists a C TAP harness library[2] with an MIT
license (at least for the files included in this series). Phillip Wood
has also proposed an alternative implementation[3]. I have not had a
chance to review that patch in much detail, but I have hopefully made
the Makefile integration here somewhat pluggable so that we can easily
switch to his version if it proves superior. For now, I have continued
with the C TAP library.

The first patch is a small cleanup to allow linking common_exit()
separately from common-main.o. In the second patch, I've added a rough
draft project plan listing some goals. Patch 3 adds the C TAP libraries.
Patch 4 is a modified version of Calvin's previous implemenation with
better integration to our Makefiles.

[1] https://lore.kernel.org/git/CAJoAoZ=Cig_kLocxKGax31sU7Xe4==BGzC__Bg2_pr7krNq6MA@mail.gmail.com/
[2] https://github.com/rra/c-tap-harness/
[3]: https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/

Signed-off-by: Josh Steadmon <steadmon@google.com>
---
Calvin Wan (1):
      Add C TAP harness

Josh Steadmon (3):
      common-main: split common_exit() into a new file
      unit tests: Add a project plan document
      unit test: add basic example and build rules

 .gitignore                             |    2 +
 Documentation/Makefile                 |    1 +
 Documentation/technical/unit-tests.txt |   47 +
 Makefile                               |   25 +-
 common-exit.c                          |   26 +
 common-main.c                          |   24 -
 t/Makefile                             |   10 +
 t/runtests.c                           | 1789 ++++++++++++++++++++++++++++++++
 t/strbuf-test.c                        |   54 +
 t/tap/basic.c                          | 1029 ++++++++++++++++++
 t/tap/basic.h                          |  198 ++++
 t/tap/macros.h                         |  109 ++
 12 files changed, 3289 insertions(+), 25 deletions(-)
---
base-commit: 69c786637d7a7fe3b2b8f7d989af095f5f49c3a8
change-id: 20230517-unit-tests-v2-94f50a7ccd8a

Best regards,
-- 
Josh Steadmon <steadmon@google.com>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH RFC v2 1/4] common-main: split common_exit() into a new file
  2023-05-17 23:56 [PATCH RFC v2 0/4] Add an external testing library for unit tests steadmon
@ 2023-05-17 23:56 ` steadmon
  2023-05-18 17:17   ` Junio C Hamano
  2023-05-17 23:56 ` [PATCH RFC v2 2/4] unit tests: Add a project plan document steadmon
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 32+ messages in thread
From: steadmon @ 2023-05-17 23:56 UTC (permalink / raw)
  To: git
  Cc: Josh Steadmon, calvinwan, szeder.dev, phillip.wood123, chooglen,
	avarab, gitster, sandals

It is convenient to have common_exit() in its own object file so that
standalone programs may link to it (and any other object files that
depend on it) while still having their own independent main() function.
So let's move it to a new common-exit.c file and update the Makefile
accordingly.

Change-Id: I41b90059eb9031f40c9f65374b4b047e7ba3aac0
---
 Makefile      |  1 +
 common-exit.c | 26 ++++++++++++++++++++++++++
 common-main.c | 24 ------------------------
 3 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/Makefile b/Makefile
index e440728c24..8ee7c7e5a8 100644
--- a/Makefile
+++ b/Makefile
@@ -987,6 +987,7 @@ LIB_OBJS += combine-diff.o
 LIB_OBJS += commit-graph.o
 LIB_OBJS += commit-reach.o
 LIB_OBJS += commit.o
+LIB_OBJS += common-exit.o
 LIB_OBJS += compat/nonblock.o
 LIB_OBJS += compat/obstack.o
 LIB_OBJS += compat/terminal.o
diff --git a/common-exit.c b/common-exit.c
new file mode 100644
index 0000000000..1aaa538be3
--- /dev/null
+++ b/common-exit.c
@@ -0,0 +1,26 @@
+#include "git-compat-util.h"
+#include "trace2.h"
+
+static void check_bug_if_BUG(void)
+{
+	if (!bug_called_must_BUG)
+		return;
+	BUG("on exit(): had bug() call(s) in this process without explicit BUG_if_bug()");
+}
+
+/* We wrap exit() to call common_exit() in git-compat-util.h */
+int common_exit(const char *file, int line, int code)
+{
+	/*
+	 * For non-POSIX systems: Take the lowest 8 bits of the "code"
+	 * to e.g. turn -1 into 255. On a POSIX system this is
+	 * redundant, see exit(3) and wait(2), but as it doesn't harm
+	 * anything there we don't need to guard this with an "ifdef".
+	 */
+	code &= 0xff;
+
+	check_bug_if_BUG();
+	trace2_cmd_exit_fl(file, line, code);
+
+	return code;
+}
diff --git a/common-main.c b/common-main.c
index f319317353..a8627b4b25 100644
--- a/common-main.c
+++ b/common-main.c
@@ -62,27 +62,3 @@ int main(int argc, const char **argv)
 	/* Not exit(3), but a wrapper calling our common_exit() */
 	exit(result);
 }
-
-static void check_bug_if_BUG(void)
-{
-	if (!bug_called_must_BUG)
-		return;
-	BUG("on exit(): had bug() call(s) in this process without explicit BUG_if_bug()");
-}
-
-/* We wrap exit() to call common_exit() in git-compat-util.h */
-int common_exit(const char *file, int line, int code)
-{
-	/*
-	 * For non-POSIX systems: Take the lowest 8 bits of the "code"
-	 * to e.g. turn -1 into 255. On a POSIX system this is
-	 * redundant, see exit(3) and wait(2), but as it doesn't harm
-	 * anything there we don't need to guard this with an "ifdef".
-	 */
-	code &= 0xff;
-
-	check_bug_if_BUG();
-	trace2_cmd_exit_fl(file, line, code);
-
-	return code;
-}

-- 
2.40.1.606.ga4b1b128d6-goog


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC v2 2/4] unit tests: Add a project plan document
  2023-05-17 23:56 [PATCH RFC v2 0/4] Add an external testing library for unit tests steadmon
  2023-05-17 23:56 ` [PATCH RFC v2 1/4] common-main: split common_exit() into a new file steadmon
@ 2023-05-17 23:56 ` steadmon
  2023-05-18 13:13   ` Phillip Wood
  2023-05-18 20:15   ` Glen Choo
  2023-05-17 23:56 ` [PATCH RFC v2 3/4] Add C TAP harness steadmon
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 32+ messages in thread
From: steadmon @ 2023-05-17 23:56 UTC (permalink / raw)
  To: git
  Cc: Josh Steadmon, calvinwan, szeder.dev, phillip.wood123, chooglen,
	avarab, gitster, sandals

Describe what we hope to accomplish by implementing unit tests, and
explain some open questions and milestones.

Change-Id: I182cdc1c15bdd1cbef6ffcf3d216b386f951e9fc
---
 Documentation/Makefile                 |  1 +
 Documentation/technical/unit-tests.txt | 47 ++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/Documentation/Makefile b/Documentation/Makefile
index b629176d7d..3f2383a12c 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -122,6 +122,7 @@ TECH_DOCS += technical/scalar
 TECH_DOCS += technical/send-pack-pipeline
 TECH_DOCS += technical/shallow
 TECH_DOCS += technical/trivial-merge
+TECH_DOCS += technical/unit-tests
 SP_ARTICLES += $(TECH_DOCS)
 SP_ARTICLES += technical/api-index
 
diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
new file mode 100644
index 0000000000..7c575e6ef7
--- /dev/null
+++ b/Documentation/technical/unit-tests.txt
@@ -0,0 +1,47 @@
+= Unit Testing
+
+In our current testing environment, we spend a significant amount of effort
+crafting end-to-end tests for error conditions that could easily be captured by
+unit tests (or we simply forgo some hard-to-setup and rare error conditions).
+Unit tests additionally provide stability to the codebase and can simplify
+debugging through isolation. Writing unit tests in pure C, rather than with our
+current shell/test-tool helper setup, simplifies test setup, simplifies passing
+data around (no shell-isms required), and reduces testing runtime by not
+spawning a separate process for every test invocation.
+
+Unit testing in C requires a separate testing harness that we ideally would
+like to be TAP-style and to come with a non-restrictive license. Fortunately,
+there already exists a https://github.com/rra/c-tap-harness/[C TAP harness
+library] with an MIT license (at least for the files needed for our purposes).
+We might also consider implementing
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[our
+own TAP harness] just for Git.
+
+We believe that a large body of unit tests, living alongside the existing test
+suite, will improve code quality for the Git project.
+
+== Open questions
+
+=== TAP harness
+
+We'll need to decide on a TAP harness. The C TAP library is easy to integrate,
+but has a few drawbacks:
+* (copy objections from lore thread)
+* We may need to carry local patches against C TAP. We'll need to decide how to
+  manage these. We could vendor the code in and modify them directly, or use a
+  submodule (but then we'll need to decide on where to host the submodule with
+  our patches on top).
+
+Phillip Wood has also proposed a new implementation of a TAP harness (linked
+above). While it hasn't been thoroughly reviewed yet, it looks to support a few
+nice features that C TAP does not, e.g. lazy test plans and skippable tests.
+
+== Milestones
+
+* Settle on final TAP harness
+* Add useful tests of library-ish code
+* Integrate with CI
+* Integrate with
+  https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
+  work]
+* Run along with regular `make test` target

-- 
2.40.1.606.ga4b1b128d6-goog


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC v2 3/4] Add C TAP harness
  2023-05-17 23:56 [PATCH RFC v2 0/4] Add an external testing library for unit tests steadmon
  2023-05-17 23:56 ` [PATCH RFC v2 1/4] common-main: split common_exit() into a new file steadmon
  2023-05-17 23:56 ` [PATCH RFC v2 2/4] unit tests: Add a project plan document steadmon
@ 2023-05-17 23:56 ` steadmon
  2023-05-18 13:15   ` Phillip Wood
  2023-05-17 23:56 ` [PATCH RFC v2 4/4] unit test: add basic example and build rules steadmon
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 32+ messages in thread
From: steadmon @ 2023-05-17 23:56 UTC (permalink / raw)
  To: git
  Cc: Josh Steadmon, calvinwan, szeder.dev, phillip.wood123, chooglen,
	avarab, gitster, sandals, Calvin Wan, Phillip Wood

From: Calvin Wan <calvinwan@google.com>

Introduces the C TAP harness from https://github.com/rra/c-tap-harness/

There is also more complete documentation at
https://www.eyrie.org/~eagle/software/c-tap-harness/

Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Change-Id: I611e22988e99b9407a4f60effaa7fbdb96ffb115
---
 t/runtests.c   | 1789 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 t/tap/basic.c  | 1029 ++++++++++++++++++++++++++++++++
 t/tap/basic.h  |  198 +++++++
 t/tap/macros.h |  109 ++++
 4 files changed, 3125 insertions(+)

diff --git a/t/runtests.c b/t/runtests.c
new file mode 100644
index 0000000000..4a55a801a6
--- /dev/null
+++ b/t/runtests.c
@@ -0,0 +1,1789 @@
+/*
+ * Run a set of tests, reporting results.
+ *
+ * Test suite driver that runs a set of tests implementing a subset of the
+ * Test Anything Protocol (TAP) and reports the results.
+ *
+ * Any bug reports, bug fixes, and improvements are very much welcome and
+ * should be sent to the e-mail address below.  This program is part of C TAP
+ * Harness <https://www.eyrie.org/~eagle/software/c-tap-harness/>.
+ *
+ * Copyright 2000-2001, 2004, 2006-2019, 2022 Russ Allbery <eagle@eyrie.org>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: MIT
+ */
+
+/*
+ * Usage:
+ *
+ *      runtests [-hv] [-b <build-dir>] [-s <source-dir>] -l <test-list>
+ *      runtests [-hv] [-b <build-dir>] [-s <source-dir>] <test> [<test> ...]
+ *      runtests -o [-h] [-b <build-dir>] [-s <source-dir>] <test>
+ *
+ * In the first case, expects a list of executables located in the given file,
+ * one line per executable, possibly followed by a space-separated list of
+ * options.  For each one, runs it as part of a test suite, reporting results.
+ * In the second case, use the same infrastructure, but run only the tests
+ * listed on the command line.
+ *
+ * Test output should start with a line containing the number of tests
+ * (numbered from 1 to this number), optionally preceded by "1..", although
+ * that line may be given anywhere in the output.  Each additional line should
+ * be in the following format:
+ *
+ *      ok <number>
+ *      not ok <number>
+ *      ok <number> # skip
+ *      not ok <number> # todo
+ *
+ * where <number> is the number of the test.  An optional comment is permitted
+ * after the number if preceded by whitespace.  ok indicates success, not ok
+ * indicates failure.  "# skip" and "# todo" are a special cases of a comment,
+ * and must start with exactly that formatting.  They indicate the test was
+ * skipped for some reason (maybe because it doesn't apply to this platform)
+ * or is testing something known to currently fail.  The text following either
+ * "# skip" or "# todo" and whitespace is the reason.
+ *
+ * As a special case, the first line of the output may be in the form:
+ *
+ *      1..0 # skip some reason
+ *
+ * which indicates that this entire test case should be skipped and gives a
+ * reason.
+ *
+ * Any other lines are ignored, although for compliance with the TAP protocol
+ * all lines other than the ones in the above format should be sent to
+ * standard error rather than standard output and start with #.
+ *
+ * This is a subset of TAP as documented in Test::Harness::TAP or
+ * TAP::Parser::Grammar, which comes with Perl.
+ *
+ * If the -o option is given, instead run a single test and display all of its
+ * output.  This is intended for use with failing tests so that the person
+ * running the test suite can get more details about what failed.
+ *
+ * If built with the C preprocessor symbols C_TAP_SOURCE and C_TAP_BUILD
+ * defined, C TAP Harness will export those values in the environment so that
+ * tests can find the source and build directory and will look for tests under
+ * both directories.  These paths can also be set with the -b and -s
+ * command-line options, which will override anything set at build time.
+ *
+ * If the -v option is given, or the C_TAP_VERBOSE environment variable is set,
+ * display the full output of each test as it runs rather than showing a
+ * summary of the results of each test.
+ */
+
+/* Required for fdopen(), getopt(), and putenv(). */
+#if defined(__STRICT_ANSI__) || defined(PEDANTIC)
+#    ifndef _XOPEN_SOURCE
+#        define _XOPEN_SOURCE 500
+#    endif
+#endif
+
+#include <ctype.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <limits.h>
+#include <stdarg.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <strings.h>
+#include <sys/stat.h>
+#include <sys/time.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <time.h>
+#include <unistd.h>
+
+/* sys/time.h must be included before sys/resource.h on some platforms. */
+#include <sys/resource.h>
+
+/* AIX 6.1 (and possibly later) doesn't have WCOREDUMP. */
+#ifndef WCOREDUMP
+#    define WCOREDUMP(status) ((unsigned) (status) &0x80)
+#endif
+
+/*
+ * POSIX requires that these be defined in <unistd.h>, but they're not always
+ * available.  If one of them has been defined, all the rest almost certainly
+ * have.
+ */
+#ifndef STDIN_FILENO
+#    define STDIN_FILENO  0
+#    define STDOUT_FILENO 1
+#    define STDERR_FILENO 2
+#endif
+
+/*
+ * Used for iterating through arrays.  Returns the number of elements in the
+ * array (useful for a < upper bound in a for loop).
+ */
+#define ARRAY_SIZE(array) (sizeof(array) / sizeof((array)[0]))
+
+/*
+ * The source and build versions of the tests directory.  This is used to set
+ * the C_TAP_SOURCE and C_TAP_BUILD environment variables (and the SOURCE and
+ * BUILD environment variables set for backward compatibility) and find test
+ * programs, if set.  Normally, this should be set as part of the build
+ * process to the test subdirectories of $(abs_top_srcdir) and
+ * $(abs_top_builddir) respectively.
+ */
+#ifndef C_TAP_SOURCE
+#    define C_TAP_SOURCE NULL
+#endif
+#ifndef C_TAP_BUILD
+#    define C_TAP_BUILD NULL
+#endif
+
+/* Test status codes. */
+enum test_status {
+    TEST_FAIL,
+    TEST_PASS,
+    TEST_SKIP,
+    TEST_INVALID
+};
+
+/* Really, just a boolean, but this is more self-documenting. */
+enum test_verbose {
+    CONCISE = 0,
+    VERBOSE = 1
+};
+
+/* Indicates the state of our plan. */
+enum plan_status {
+    PLAN_INIT,    /* Nothing seen yet. */
+    PLAN_FIRST,   /* Plan seen before any tests. */
+    PLAN_PENDING, /* Test seen and no plan yet. */
+    PLAN_FINAL    /* Plan seen after some tests. */
+};
+
+/* Error exit statuses for test processes. */
+#define CHILDERR_DUP    100 /* Couldn't redirect stderr or stdout. */
+#define CHILDERR_EXEC   101 /* Couldn't exec child process. */
+#define CHILDERR_STDIN  102 /* Couldn't open stdin file. */
+#define CHILDERR_STDERR 103 /* Couldn't open stderr file. */
+
+/* Structure to hold data for a set of tests. */
+struct testset {
+    char *file;                /* The file name of the test. */
+    char **command;            /* The argv vector to run the command. */
+    enum plan_status plan;     /* The status of our plan. */
+    unsigned long count;       /* Expected count of tests. */
+    unsigned long current;     /* The last seen test number. */
+    unsigned int length;       /* The length of the last status message. */
+    unsigned long passed;      /* Count of passing tests. */
+    unsigned long failed;      /* Count of failing lists. */
+    unsigned long skipped;     /* Count of skipped tests (passed). */
+    unsigned long allocated;   /* The size of the results table. */
+    enum test_status *results; /* Table of results by test number. */
+    unsigned int aborted;      /* Whether the set was aborted. */
+    unsigned int reported;     /* Whether the results were reported. */
+    int status;                /* The exit status of the test. */
+    unsigned int all_skipped;  /* Whether all tests were skipped. */
+    char *reason;              /* Why all tests were skipped. */
+};
+
+/* Structure to hold a linked list of test sets. */
+struct testlist {
+    struct testset *ts;
+    struct testlist *next;
+};
+
+/*
+ * Usage message.  Should be used as a printf format with four arguments: the
+ * path to runtests, given three times, and the usage_description.  This is
+ * split into variables to satisfy the pedantic ISO C90 limit on strings.
+ */
+static const char usage_message[] = "\
+Usage: %s [-hv] [-b <build-dir>] [-s <source-dir>] <test> ...\n\
+       %s [-hv] [-b <build-dir>] [-s <source-dir>] -l <test-list>\n\
+       %s -o [-h] [-b <build-dir>] [-s <source-dir>] <test>\n\
+\n\
+Options:\n\
+    -b <build-dir>      Set the build directory to <build-dir>\n\
+%s";
+static const char usage_extra[] = "\
+    -l <list>           Take the list of tests to run from <test-list>\n\
+    -o                  Run a single test rather than a list of tests\n\
+    -s <source-dir>     Set the source directory to <source-dir>\n\
+    -v                  Show the full output of each test\n\
+\n\
+runtests normally runs each test listed on the command line.  With the -l\n\
+option, it instead runs every test listed in a file.  With the -o option,\n\
+it instead runs a single test and shows its complete output.\n";
+
+/*
+ * Header used for test output.  %s is replaced by the file name of the list
+ * of tests.
+ */
+static const char banner[] = "\n\
+Running all tests listed in %s.  If any tests fail, run the failing\n\
+test program with runtests -o to see more details.\n\n";
+
+/* Header for reports of failed tests. */
+static const char header[] = "\n\
+Failed Set                 Fail/Total (%) Skip Stat  Failing Tests\n\
+-------------------------- -------------- ---- ----  ------------------------";
+
+/* Include the file name and line number in malloc failures. */
+#define xcalloc(n, type) \
+    ((type *) x_calloc((n), sizeof(type), __FILE__, __LINE__))
+#define xmalloc(size)     ((char *) x_malloc((size), __FILE__, __LINE__))
+#define xstrdup(p)        x_strdup((p), __FILE__, __LINE__)
+#define xstrndup(p, size) x_strndup((p), (size), __FILE__, __LINE__)
+#define xreallocarray(p, n, type) \
+    ((type *) x_reallocarray((p), (n), sizeof(type), __FILE__, __LINE__))
+
+/*
+ * __attribute__ is available in gcc 2.5 and later, but only with gcc 2.7
+ * could you use the __format__ form of the attributes, which is what we use
+ * (to avoid confusion with other macros).
+ */
+#ifndef __attribute__
+#    if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 7)
+#        define __attribute__(spec) /* empty */
+#    endif
+#endif
+
+/*
+ * We use __alloc_size__, but it was only available in fairly recent versions
+ * of GCC.  Suppress warnings about the unknown attribute if GCC is too old.
+ * We know that we're GCC at this point, so we can use the GCC variadic macro
+ * extension, which will still work with versions of GCC too old to have C99
+ * variadic macro support.
+ */
+#if !defined(__attribute__) && !defined(__alloc_size__)
+#    if defined(__GNUC__) && !defined(__clang__)
+#        if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 3)
+#            define __alloc_size__(spec, args...) /* empty */
+#        endif
+#    endif
+#endif
+
+/*
+ * Suppress the argument to __malloc__ in Clang (not supported in at least
+ * version 13) and GCC versions prior to 11.
+ */
+#if !defined(__attribute__) && !defined(__malloc__)
+#    if defined(__clang__) || __GNUC__ < 11
+#        define __malloc__(dalloc) __malloc__
+#    endif
+#endif
+
+/*
+ * LLVM and Clang pretend to be GCC but don't support all of the __attribute__
+ * settings that GCC does.  For them, suppress warnings about unknown
+ * attributes on declarations.  This unfortunately will affect the entire
+ * compilation context, but there's no push and pop available.
+ */
+#if !defined(__attribute__) && (defined(__llvm__) || defined(__clang__))
+#    pragma GCC diagnostic ignored "-Wattributes"
+#endif
+
+/* Declare internal functions that benefit from compiler attributes. */
+static void die(const char *, ...)
+    __attribute__((__nonnull__, __noreturn__, __format__(printf, 1, 2)));
+static void sysdie(const char *, ...)
+    __attribute__((__nonnull__, __noreturn__, __format__(printf, 1, 2)));
+static void *x_calloc(size_t, size_t, const char *, int)
+    __attribute__((__alloc_size__(1, 2), __malloc__(free), __nonnull__));
+static void *x_malloc(size_t, const char *, int)
+    __attribute__((__alloc_size__(1), __malloc__(free), __nonnull__));
+static void *x_reallocarray(void *, size_t, size_t, const char *, int)
+    __attribute__((__alloc_size__(2, 3), __malloc__(free), __nonnull__(4)));
+static char *x_strdup(const char *, const char *, int)
+    __attribute__((__malloc__(free), __nonnull__));
+static char *x_strndup(const char *, size_t, const char *, int)
+    __attribute__((__malloc__(free), __nonnull__));
+
+
+/*
+ * Report a fatal error and exit.
+ */
+static void
+die(const char *format, ...)
+{
+    va_list args;
+
+    fflush(stdout);
+    fprintf(stderr, "runtests: ");
+    va_start(args, format);
+    vfprintf(stderr, format, args);
+    va_end(args);
+    fprintf(stderr, "\n");
+    exit(1);
+}
+
+
+/*
+ * Report a fatal error, including the results of strerror, and exit.
+ */
+static void
+sysdie(const char *format, ...)
+{
+    int oerrno;
+    va_list args;
+
+    oerrno = errno;
+    fflush(stdout);
+    fprintf(stderr, "runtests: ");
+    va_start(args, format);
+    vfprintf(stderr, format, args);
+    va_end(args);
+    fprintf(stderr, ": %s\n", strerror(oerrno));
+    exit(1);
+}
+
+
+/*
+ * Allocate zeroed memory, reporting a fatal error and exiting on failure.
+ */
+static void *
+x_calloc(size_t n, size_t size, const char *file, int line)
+{
+    void *p;
+
+    n = (n > 0) ? n : 1;
+    size = (size > 0) ? size : 1;
+    p = calloc(n, size);
+    if (p == NULL)
+        sysdie("failed to calloc %lu bytes at %s line %d",
+               (unsigned long) size, file, line);
+    return p;
+}
+
+
+/*
+ * Allocate memory, reporting a fatal error and exiting on failure.
+ */
+static void *
+x_malloc(size_t size, const char *file, int line)
+{
+    void *p;
+
+    p = malloc(size);
+    if (p == NULL)
+        sysdie("failed to malloc %lu bytes at %s line %d",
+               (unsigned long) size, file, line);
+    return p;
+}
+
+
+/*
+ * Reallocate memory, reporting a fatal error and exiting on failure.
+ *
+ * We should technically use SIZE_MAX here for the overflow check, but
+ * SIZE_MAX is C99 and we're only assuming C89 + SUSv3, which does not
+ * guarantee that it exists.  They do guarantee that UINT_MAX exists, and we
+ * can assume that UINT_MAX <= SIZE_MAX.  And we should not be allocating
+ * anything anywhere near that large.
+ *
+ * (In theory, C89 and C99 permit size_t to be smaller than unsigned int, but
+ * I disbelieve in the existence of such systems and they will have to cope
+ * without overflow checks.)
+ */
+static void *
+x_reallocarray(void *p, size_t n, size_t size, const char *file, int line)
+{
+    n = (n > 0) ? n : 1;
+    size = (size > 0) ? size : 1;
+
+    if (n > 0 && UINT_MAX / n <= size)
+        sysdie("realloc too large at %s line %d", file, line);
+    p = realloc(p, n * size);
+    if (p == NULL)
+        sysdie("failed to realloc %lu bytes at %s line %d",
+               (unsigned long) (n * size), file, line);
+    return p;
+}
+
+
+/*
+ * Copy a string, reporting a fatal error and exiting on failure.
+ */
+static char *
+x_strdup(const char *s, const char *file, int line)
+{
+    char *p;
+    size_t len;
+
+    len = strlen(s) + 1;
+    p = (char *) malloc(len);
+    if (p == NULL)
+        sysdie("failed to strdup %lu bytes at %s line %d", (unsigned long) len,
+               file, line);
+    memcpy(p, s, len);
+    return p;
+}
+
+
+/*
+ * Copy the first n characters of a string, reporting a fatal error and
+ * existing on failure.
+ *
+ * Avoid using the system strndup function since it may not exist (on Mac OS
+ * X, for example), and there's no need to introduce another portability
+ * requirement.
+ */
+char *
+x_strndup(const char *s, size_t size, const char *file, int line)
+{
+    const char *p;
+    size_t len;
+    char *copy;
+
+    /* Don't assume that the source string is nul-terminated. */
+    for (p = s; (size_t) (p - s) < size && *p != '\0'; p++)
+        ;
+    len = (size_t) (p - s);
+    copy = (char *) malloc(len + 1);
+    if (copy == NULL)
+        sysdie("failed to strndup %lu bytes at %s line %d",
+               (unsigned long) len, file, line);
+    memcpy(copy, s, len);
+    copy[len] = '\0';
+    return copy;
+}
+
+
+/*
+ * Form a new string by concatenating multiple strings.  The arguments must be
+ * terminated by (const char *) 0.
+ *
+ * This function only exists because we can't assume asprintf.  We can't
+ * simulate asprintf with snprintf because we're only assuming SUSv3, which
+ * does not require that snprintf with a NULL buffer return the required
+ * length.  When those constraints are relaxed, this should be ripped out and
+ * replaced with asprintf or a more trivial replacement with snprintf.
+ */
+static char *
+concat(const char *first, ...)
+{
+    va_list args;
+    char *result;
+    const char *string;
+    size_t offset;
+    size_t length = 0;
+
+    /*
+     * Find the total memory required.  Ensure we don't overflow length.  We
+     * aren't guaranteed to have SIZE_MAX, so use UINT_MAX as an acceptable
+     * substitute (see the x_nrealloc comments).
+     */
+    va_start(args, first);
+    for (string = first; string != NULL; string = va_arg(args, const char *)) {
+        if (length >= UINT_MAX - strlen(string)) {
+            errno = EINVAL;
+            sysdie("strings too long in concat");
+        }
+        length += strlen(string);
+    }
+    va_end(args);
+    length++;
+
+    /* Create the string. */
+    result = xmalloc(length);
+    va_start(args, first);
+    offset = 0;
+    for (string = first; string != NULL; string = va_arg(args, const char *)) {
+        memcpy(result + offset, string, strlen(string));
+        offset += strlen(string);
+    }
+    va_end(args);
+    result[offset] = '\0';
+    return result;
+}
+
+
+/*
+ * Given a struct timeval, return the number of seconds it represents as a
+ * double.  Use difftime() to convert a time_t to a double.
+ */
+static double
+tv_seconds(const struct timeval *tv)
+{
+    return difftime(tv->tv_sec, 0) + (double) tv->tv_usec * 1e-6;
+}
+
+
+/*
+ * Given two struct timevals, return the difference in seconds.
+ */
+static double
+tv_diff(const struct timeval *tv1, const struct timeval *tv0)
+{
+    return tv_seconds(tv1) - tv_seconds(tv0);
+}
+
+
+/*
+ * Given two struct timevals, return the sum in seconds as a double.
+ */
+static double
+tv_sum(const struct timeval *tv1, const struct timeval *tv2)
+{
+    return tv_seconds(tv1) + tv_seconds(tv2);
+}
+
+
+/*
+ * Given a pointer to a string, skip any leading whitespace and return a
+ * pointer to the first non-whitespace character.
+ */
+static const char *
+skip_whitespace(const char *p)
+{
+    while (isspace((unsigned char) (*p)))
+        p++;
+    return p;
+}
+
+
+/*
+ * Given a pointer to a string, skip any non-whitespace characters and return
+ * a pointer to the first whitespace character, or to the end of the string.
+ */
+static const char *
+skip_non_whitespace(const char *p)
+{
+    while (*p != '\0' && !isspace((unsigned char) (*p)))
+        p++;
+    return p;
+}
+
+
+/*
+ * Start a program, connecting its stdout to a pipe on our end and its stderr
+ * to /dev/null, and storing the file descriptor to read from in the two
+ * argument.  Returns the PID of the new process.  Errors are fatal.
+ */
+static pid_t
+test_start(char *const *command, int *fd)
+{
+    int fds[2], infd, errfd;
+    pid_t child;
+
+    /* Create a pipe used to capture the output from the test program. */
+    if (pipe(fds) == -1) {
+        puts("ABORTED");
+        fflush(stdout);
+        sysdie("can't create pipe");
+    }
+
+    /* Fork a child process, massage the file descriptors, and exec. */
+    child = fork();
+    switch (child) {
+    case -1:
+        puts("ABORTED");
+        fflush(stdout);
+        sysdie("can't fork");
+
+    /* In the child.  Set up our standard output. */
+    case 0:
+        close(fds[0]);
+        close(STDOUT_FILENO);
+        if (dup2(fds[1], STDOUT_FILENO) < 0)
+            _exit(CHILDERR_DUP);
+        close(fds[1]);
+
+        /* Point standard input at /dev/null. */
+        close(STDIN_FILENO);
+        infd = open("/dev/null", O_RDONLY);
+        if (infd < 0)
+            _exit(CHILDERR_STDIN);
+        if (infd != STDIN_FILENO) {
+            if (dup2(infd, STDIN_FILENO) < 0)
+                _exit(CHILDERR_DUP);
+            close(infd);
+        }
+
+        /* Point standard error at /dev/null. */
+        close(STDERR_FILENO);
+        errfd = open("/dev/null", O_WRONLY);
+        if (errfd < 0)
+            _exit(CHILDERR_STDERR);
+        if (errfd != STDERR_FILENO) {
+            if (dup2(errfd, STDERR_FILENO) < 0)
+                _exit(CHILDERR_DUP);
+            close(errfd);
+        }
+
+        /* Now, exec our process. */
+        if (execv(command[0], command) == -1)
+            _exit(CHILDERR_EXEC);
+        break;
+
+    /* In parent.  Close the extra file descriptor. */
+    default:
+        close(fds[1]);
+        break;
+    }
+    *fd = fds[0];
+    return child;
+}
+
+
+/*
+ * Back up over the output saying what test we were executing.
+ */
+static void
+test_backspace(struct testset *ts)
+{
+    unsigned int i;
+
+    if (!isatty(STDOUT_FILENO))
+        return;
+    for (i = 0; i < ts->length; i++)
+        putchar('\b');
+    for (i = 0; i < ts->length; i++)
+        putchar(' ');
+    for (i = 0; i < ts->length; i++)
+        putchar('\b');
+    ts->length = 0;
+}
+
+
+/*
+ * Allocate or resize the array of test results to be large enough to contain
+ * the test number in.
+ */
+static void
+resize_results(struct testset *ts, unsigned long n)
+{
+    unsigned long i;
+    size_t s;
+
+    /* If there's already enough space, return quickly. */
+    if (n <= ts->allocated)
+        return;
+
+    /*
+     * If no space has been allocated, do the initial allocation.  Otherwise,
+     * resize.  Start with 32 test cases and then add 1024 with each resize to
+     * try to reduce the number of reallocations.
+     */
+    if (ts->allocated == 0) {
+        s = (n > 32) ? n : 32;
+        ts->results = xcalloc(s, enum test_status);
+    } else {
+        s = (n > ts->allocated + 1024) ? n : ts->allocated + 1024;
+        ts->results = xreallocarray(ts->results, s, enum test_status);
+    }
+
+    /* Set the results for the newly-allocated test array. */
+    for (i = ts->allocated; i < s; i++)
+        ts->results[i] = TEST_INVALID;
+    ts->allocated = s;
+}
+
+
+/*
+ * Report an invalid test number and set the appropriate flags.  Pulled into a
+ * separate function since we do this in several places.
+ */
+static void
+invalid_test_number(struct testset *ts, long n, enum test_verbose verbose)
+{
+    if (!verbose)
+        test_backspace(ts);
+    printf("ABORTED (invalid test number %ld)\n", n);
+    ts->aborted = 1;
+    ts->reported = 1;
+}
+
+
+/*
+ * Read the plan line of test output, which should contain the range of test
+ * numbers.  We may initialize the testset structure here if we haven't yet
+ * seen a test.  Return true if initialization succeeded and the test should
+ * continue, false otherwise.
+ */
+static int
+test_plan(const char *line, struct testset *ts, enum test_verbose verbose)
+{
+    long n;
+
+    /*
+     * Accept a plan without the leading 1.. for compatibility with older
+     * versions of runtests.  This will only be allowed if we've not yet seen
+     * a test result.
+     */
+    line = skip_whitespace(line);
+    if (strncmp(line, "1..", 3) == 0)
+        line += 3;
+
+    /*
+     * Get the count and check it for validity.
+     *
+     * If we have something of the form "1..0 # skip foo", the whole file was
+     * skipped; record that.  If we do skip the whole file, zero out all of
+     * our statistics, since they're no longer relevant.
+     *
+     * strtol is called with a second argument to advance the line pointer
+     * past the count to make it simpler to detect the # skip case.
+     */
+    n = strtol(line, (char **) &line, 10);
+    if (n == 0) {
+        line = skip_whitespace(line);
+        if (*line == '#') {
+            line = skip_whitespace(line + 1);
+            if (strncasecmp(line, "skip", 4) == 0) {
+                line = skip_whitespace(line + 4);
+                if (*line != '\0') {
+                    ts->reason = xstrdup(line);
+                    ts->reason[strlen(ts->reason) - 1] = '\0';
+                }
+                ts->all_skipped = 1;
+                ts->aborted = 1;
+                ts->count = 0;
+                ts->passed = 0;
+                ts->skipped = 0;
+                ts->failed = 0;
+                return 0;
+            }
+        }
+    }
+    if (n <= 0) {
+        puts("ABORTED (invalid test count)");
+        ts->aborted = 1;
+        ts->reported = 1;
+        return 0;
+    }
+
+    /*
+     * If we are doing lazy planning, check the plan against the largest test
+     * number that we saw and fail now if we saw a check outside the plan
+     * range.
+     */
+    if (ts->plan == PLAN_PENDING && (unsigned long) n < ts->count) {
+        invalid_test_number(ts, (long) ts->count, verbose);
+        return 0;
+    }
+
+    /*
+     * Otherwise, allocated or resize the results if needed and update count,
+     * and then record that we've seen a plan.
+     */
+    resize_results(ts, (unsigned long) n);
+    ts->count = (unsigned long) n;
+    if (ts->plan == PLAN_INIT)
+        ts->plan = PLAN_FIRST;
+    else if (ts->plan == PLAN_PENDING)
+        ts->plan = PLAN_FINAL;
+    return 1;
+}
+
+
+/*
+ * Given a single line of output from a test, parse it and return the success
+ * status of that test.  Anything printed to stdout not matching the form
+ * /^(not )?ok \d+/ is ignored.  Sets ts->current to the test number that just
+ * reported status.
+ */
+static void
+test_checkline(const char *line, struct testset *ts, enum test_verbose verbose)
+{
+    enum test_status status = TEST_PASS;
+    const char *bail;
+    char *end;
+    long number;
+    unsigned long current;
+    int outlen;
+
+    /* Before anything, check for a test abort. */
+    bail = strstr(line, "Bail out!");
+    if (bail != NULL) {
+        bail = skip_whitespace(bail + strlen("Bail out!"));
+        if (*bail != '\0') {
+            size_t length;
+
+            length = strlen(bail);
+            if (bail[length - 1] == '\n')
+                length--;
+            if (!verbose)
+                test_backspace(ts);
+            printf("ABORTED (%.*s)\n", (int) length, bail);
+            ts->reported = 1;
+        }
+        ts->aborted = 1;
+        return;
+    }
+
+    /*
+     * If the given line isn't newline-terminated, it was too big for an
+     * fgets(), which means ignore it.
+     */
+    if (line[strlen(line) - 1] != '\n')
+        return;
+
+    /* If the line begins with a hash mark, ignore it. */
+    if (line[0] == '#')
+        return;
+
+    /* If we haven't yet seen a plan, look for one. */
+    if (ts->plan == PLAN_INIT && isdigit((unsigned char) (*line))) {
+        if (!test_plan(line, ts, verbose))
+            return;
+    } else if (strncmp(line, "1..", 3) == 0) {
+        if (ts->plan == PLAN_PENDING) {
+            if (!test_plan(line, ts, verbose))
+                return;
+        } else {
+            if (!verbose)
+                test_backspace(ts);
+            puts("ABORTED (multiple plans)");
+            ts->aborted = 1;
+            ts->reported = 1;
+            return;
+        }
+    }
+
+    /* Parse the line, ignoring something we can't parse. */
+    if (strncmp(line, "not ", 4) == 0) {
+        status = TEST_FAIL;
+        line += 4;
+    }
+    if (strncmp(line, "ok", 2) != 0)
+        return;
+    line = skip_whitespace(line + 2);
+    errno = 0;
+    number = strtol(line, &end, 10);
+    if (errno != 0 || end == line)
+        current = ts->current + 1;
+    else if (number <= 0) {
+        invalid_test_number(ts, number, verbose);
+        return;
+    } else
+        current = (unsigned long) number;
+    if (current > ts->count && ts->plan == PLAN_FIRST) {
+        invalid_test_number(ts, (long) current, verbose);
+        return;
+    }
+
+    /* We have a valid test result.  Tweak the results array if needed. */
+    if (ts->plan == PLAN_INIT || ts->plan == PLAN_PENDING) {
+        ts->plan = PLAN_PENDING;
+        resize_results(ts, current);
+        if (current > ts->count)
+            ts->count = current;
+    }
+
+    /*
+     * Handle directives.  We should probably do something more interesting
+     * with unexpected passes of todo tests.
+     */
+    while (isdigit((unsigned char) (*line)))
+        line++;
+    line = skip_whitespace(line);
+    if (*line == '#') {
+        line = skip_whitespace(line + 1);
+        if (strncasecmp(line, "skip", 4) == 0)
+            status = TEST_SKIP;
+        if (strncasecmp(line, "todo", 4) == 0)
+            status = (status == TEST_FAIL) ? TEST_SKIP : TEST_FAIL;
+    }
+
+    /* Make sure that the test number is in range and not a duplicate. */
+    if (ts->results[current - 1] != TEST_INVALID) {
+        if (!verbose)
+            test_backspace(ts);
+        printf("ABORTED (duplicate test number %lu)\n", current);
+        ts->aborted = 1;
+        ts->reported = 1;
+        return;
+    }
+
+    /* Good results.  Increment our various counters. */
+    switch (status) {
+    case TEST_PASS:
+        ts->passed++;
+        break;
+    case TEST_FAIL:
+        ts->failed++;
+        break;
+    case TEST_SKIP:
+        ts->skipped++;
+        break;
+    case TEST_INVALID:
+        break;
+    }
+    ts->current = current;
+    ts->results[current - 1] = status;
+    if (!verbose && isatty(STDOUT_FILENO)) {
+        test_backspace(ts);
+        if (ts->plan == PLAN_PENDING)
+            outlen = printf("%lu/?", current);
+        else
+            outlen = printf("%lu/%lu", current, ts->count);
+        ts->length = (outlen >= 0) ? (unsigned int) outlen : 0;
+        fflush(stdout);
+    }
+}
+
+
+/*
+ * Print out a range of test numbers, returning the number of characters it
+ * took up.  Takes the first number, the last number, the number of characters
+ * already printed on the line, and the limit of number of characters the line
+ * can hold.  Add a comma and a space before the range if chars indicates that
+ * something has already been printed on the line, and print ... instead if
+ * chars plus the space needed would go over the limit (use a limit of 0 to
+ * disable this).
+ */
+static unsigned int
+test_print_range(unsigned long first, unsigned long last, unsigned long chars,
+                 unsigned int limit)
+{
+    unsigned int needed = 0;
+    unsigned long n;
+
+    for (n = first; n > 0; n /= 10)
+        needed++;
+    if (last > first) {
+        for (n = last; n > 0; n /= 10)
+            needed++;
+        needed++;
+    }
+    if (chars > 0)
+        needed += 2;
+    if (limit > 0 && chars + needed > limit) {
+        needed = 0;
+        if (chars <= limit) {
+            if (chars > 0) {
+                printf(", ");
+                needed += 2;
+            }
+            printf("...");
+            needed += 3;
+        }
+    } else {
+        if (chars > 0)
+            printf(", ");
+        if (last > first)
+            printf("%lu-", first);
+        printf("%lu", last);
+    }
+    return needed;
+}
+
+
+/*
+ * Summarize a single test set.  The second argument is 0 if the set exited
+ * cleanly, a positive integer representing the exit status if it exited
+ * with a non-zero status, and a negative integer representing the signal
+ * that terminated it if it was killed by a signal.
+ */
+static void
+test_summarize(struct testset *ts, int status)
+{
+    unsigned long i;
+    unsigned long missing = 0;
+    unsigned long failed = 0;
+    unsigned long first = 0;
+    unsigned long last = 0;
+
+    if (ts->aborted) {
+        fputs("ABORTED", stdout);
+        if (ts->count > 0)
+            printf(" (passed %lu/%lu)", ts->passed, ts->count - ts->skipped);
+    } else {
+        for (i = 0; i < ts->count; i++) {
+            if (ts->results[i] == TEST_INVALID) {
+                if (missing == 0)
+                    fputs("MISSED ", stdout);
+                if (first && i == last)
+                    last = i + 1;
+                else {
+                    if (first)
+                        test_print_range(first, last, missing - 1, 0);
+                    missing++;
+                    first = i + 1;
+                    last = i + 1;
+                }
+            }
+        }
+        if (first)
+            test_print_range(first, last, missing - 1, 0);
+        first = 0;
+        last = 0;
+        for (i = 0; i < ts->count; i++) {
+            if (ts->results[i] == TEST_FAIL) {
+                if (missing && !failed)
+                    fputs("; ", stdout);
+                if (failed == 0)
+                    fputs("FAILED ", stdout);
+                if (first && i == last)
+                    last = i + 1;
+                else {
+                    if (first)
+                        test_print_range(first, last, failed - 1, 0);
+                    failed++;
+                    first = i + 1;
+                    last = i + 1;
+                }
+            }
+        }
+        if (first)
+            test_print_range(first, last, failed - 1, 0);
+        if (!missing && !failed) {
+            fputs(!status ? "ok" : "dubious", stdout);
+            if (ts->skipped > 0) {
+                if (ts->skipped == 1)
+                    printf(" (skipped %lu test)", ts->skipped);
+                else
+                    printf(" (skipped %lu tests)", ts->skipped);
+            }
+        }
+    }
+    if (status > 0)
+        printf(" (exit status %d)", status);
+    else if (status < 0)
+        printf(" (killed by signal %d%s)", -status,
+               WCOREDUMP(ts->status) ? ", core dumped" : "");
+    putchar('\n');
+}
+
+
+/*
+ * Given a test set, analyze the results, classify the exit status, handle a
+ * few special error messages, and then pass it along to test_summarize() for
+ * the regular output.  Returns true if the test set ran successfully and all
+ * tests passed or were skipped, false otherwise.
+ */
+static int
+test_analyze(struct testset *ts)
+{
+    if (ts->reported)
+        return 0;
+    if (ts->all_skipped) {
+        if (ts->reason == NULL)
+            puts("skipped");
+        else
+            printf("skipped (%s)\n", ts->reason);
+        return 1;
+    } else if (WIFEXITED(ts->status) && WEXITSTATUS(ts->status) != 0) {
+        switch (WEXITSTATUS(ts->status)) {
+        case CHILDERR_DUP:
+            if (!ts->reported)
+                puts("ABORTED (can't dup file descriptors)");
+            break;
+        case CHILDERR_EXEC:
+            if (!ts->reported)
+                puts("ABORTED (execution failed -- not found?)");
+            break;
+        case CHILDERR_STDIN:
+        case CHILDERR_STDERR:
+            if (!ts->reported)
+                puts("ABORTED (can't open /dev/null)");
+            break;
+        default:
+            test_summarize(ts, WEXITSTATUS(ts->status));
+            break;
+        }
+        return 0;
+    } else if (WIFSIGNALED(ts->status)) {
+        test_summarize(ts, -WTERMSIG(ts->status));
+        return 0;
+    } else if (ts->plan != PLAN_FIRST && ts->plan != PLAN_FINAL) {
+        puts("ABORTED (no valid test plan)");
+        ts->aborted = 1;
+        return 0;
+    } else {
+        test_summarize(ts, 0);
+        return (ts->failed == 0);
+    }
+}
+
+
+/*
+ * Runs a single test set, accumulating and then reporting the results.
+ * Returns true if the test set was successfully run and all tests passed,
+ * false otherwise.
+ */
+static int
+test_run(struct testset *ts, enum test_verbose verbose)
+{
+    pid_t testpid, child;
+    int outfd, status;
+    unsigned long i;
+    FILE *output;
+    char buffer[BUFSIZ];
+
+    /* Run the test program. */
+    testpid = test_start(ts->command, &outfd);
+    output = fdopen(outfd, "r");
+    if (!output) {
+        puts("ABORTED");
+        fflush(stdout);
+        sysdie("fdopen failed");
+    }
+
+    /*
+     * Pass each line of output to test_checkline(), and print the line if
+     * verbosity is requested.
+     */
+    while (!ts->aborted && fgets(buffer, sizeof(buffer), output)) {
+        if (verbose)
+            printf("%s", buffer);
+        test_checkline(buffer, ts, verbose);
+    }
+    if (ferror(output) || ts->plan == PLAN_INIT)
+        ts->aborted = 1;
+    if (!verbose)
+        test_backspace(ts);
+
+    /*
+     * Consume the rest of the test output, close the output descriptor,
+     * retrieve the exit status, and pass that information to test_analyze()
+     * for eventual output.
+     */
+    while (fgets(buffer, sizeof(buffer), output))
+        if (verbose)
+            printf("%s", buffer);
+    fclose(output);
+    child = waitpid(testpid, &ts->status, 0);
+    if (child == (pid_t) -1) {
+        if (!ts->reported) {
+            puts("ABORTED");
+            fflush(stdout);
+        }
+        sysdie("waitpid for %u failed", (unsigned int) testpid);
+    }
+    if (ts->all_skipped)
+        ts->aborted = 0;
+    status = test_analyze(ts);
+
+    /* Convert missing tests to failed tests. */
+    for (i = 0; i < ts->count; i++) {
+        if (ts->results[i] == TEST_INVALID) {
+            ts->failed++;
+            ts->results[i] = TEST_FAIL;
+            status = 0;
+        }
+    }
+    return status;
+}
+
+
+/* Summarize a list of test failures. */
+static void
+test_fail_summary(const struct testlist *fails)
+{
+    struct testset *ts;
+    unsigned int chars;
+    unsigned long i, first, last, total;
+    double failed;
+
+    puts(header);
+
+    /* Failed Set                 Fail/Total (%) Skip Stat  Failing (25)
+       -------------------------- -------------- ---- ----  -------------- */
+    for (; fails; fails = fails->next) {
+        ts = fails->ts;
+        total = ts->count - ts->skipped;
+        failed = (double) ts->failed;
+        printf("%-26.26s %4lu/%-4lu %3.0f%% %4lu ", ts->file, ts->failed,
+               total, total ? (failed * 100.0) / (double) total : 0,
+               ts->skipped);
+        if (WIFEXITED(ts->status))
+            printf("%4d  ", WEXITSTATUS(ts->status));
+        else
+            printf("  --  ");
+        if (ts->aborted) {
+            puts("aborted");
+            continue;
+        }
+        chars = 0;
+        first = 0;
+        last = 0;
+        for (i = 0; i < ts->count; i++) {
+            if (ts->results[i] == TEST_FAIL) {
+                if (first != 0 && i == last)
+                    last = i + 1;
+                else {
+                    if (first != 0)
+                        chars += test_print_range(first, last, chars, 19);
+                    first = i + 1;
+                    last = i + 1;
+                }
+            }
+        }
+        if (first != 0)
+            test_print_range(first, last, chars, 19);
+        putchar('\n');
+    }
+}
+
+
+/*
+ * Check whether a given file path is a valid test.  Currently, this checks
+ * whether it is executable and is a regular file.  Returns true or false.
+ */
+static int
+is_valid_test(const char *path)
+{
+    struct stat st;
+
+    if (access(path, X_OK) < 0)
+        return 0;
+    if (stat(path, &st) < 0)
+        return 0;
+    if (!S_ISREG(st.st_mode))
+        return 0;
+    return 1;
+}
+
+
+/*
+ * Given the name of a test, a pointer to the testset struct, and the source
+ * and build directories, find the test.  We try first relative to the current
+ * directory, then in the build directory (if not NULL), then in the source
+ * directory.  In each of those directories, we first try a "-t" extension and
+ * then a ".t" extension.  When we find an executable program, we return the
+ * path to that program.  If none of those paths are executable, just fill in
+ * the name of the test as is.
+ *
+ * The caller is responsible for freeing the path member of the testset
+ * struct.
+ */
+static char *
+find_test(const char *name, const char *source, const char *build)
+{
+    char *path = NULL;
+    const char *bases[3], *suffix, *base;
+    unsigned int i, j;
+    const char *suffixes[3] = {"-t", ".t", ""};
+
+    /* Possible base directories. */
+    bases[0] = ".";
+    bases[1] = build;
+    bases[2] = source;
+
+    /* Try each suffix with each base. */
+    for (i = 0; i < ARRAY_SIZE(suffixes); i++) {
+        suffix = suffixes[i];
+        for (j = 0; j < ARRAY_SIZE(bases); j++) {
+            base = bases[j];
+            if (base == NULL)
+                continue;
+            path = concat(base, "/", name, suffix, (const char *) 0);
+            if (is_valid_test(path))
+                return path;
+            free(path);
+            path = NULL;
+        }
+    }
+    if (path == NULL)
+        path = xstrdup(name);
+    return path;
+}
+
+
+/*
+ * Parse a single line of a test list and store the test name and command to
+ * execute it in the given testset struct.
+ *
+ * Normally, each line is just the name of the test, which is located in the
+ * test directory and turned into a command to run.  However, each line may
+ * have whitespace-separated options, which change the command that's run.
+ * Current supported options are:
+ *
+ * valgrind
+ *     Run the test under valgrind if C_TAP_VALGRIND is set.  The contents
+ *     of that environment variable are taken as the valgrind command (with
+ *     options) to run.  The command is parsed with a simple split on
+ *     whitespace and no quoting is supported.
+ *
+ * libtool
+ *     If running under valgrind, use libtool to invoke valgrind.  This avoids
+ *     running valgrind on the wrapper shell script generated by libtool.  If
+ *     set, C_TAP_LIBTOOL must be set to the full path to the libtool program
+ *     to use to run valgrind and thus the test.  Ignored if the test isn't
+ *     being run under valgrind.
+ */
+static void
+parse_test_list_line(const char *line, struct testset *ts, const char *source,
+                     const char *build)
+{
+    const char *p, *end, *option, *libtool;
+    const char *valgrind = NULL;
+    unsigned int use_libtool = 0;
+    unsigned int use_valgrind = 0;
+    size_t len, i;
+
+    /* Determine the name of the test. */
+    p = skip_non_whitespace(line);
+    ts->file = xstrndup(line, p - line);
+
+    /* Check if any test options are set. */
+    p = skip_whitespace(p);
+    while (*p != '\0') {
+        end = skip_non_whitespace(p);
+        if (strncmp(p, "libtool", end - p) == 0) {
+            use_libtool = 1;
+        } else if (strncmp(p, "valgrind", end - p) == 0) {
+            valgrind = getenv("C_TAP_VALGRIND");
+            use_valgrind = (valgrind != NULL);
+        } else {
+            option = xstrndup(p, end - p);
+            die("unknown test list option %s", option);
+        }
+        p = skip_whitespace(end);
+    }
+
+    /* Construct the argv to run the test.  First, find the length. */
+    len = 1;
+    if (use_valgrind && valgrind != NULL) {
+        p = skip_whitespace(valgrind);
+        while (*p != '\0') {
+            len++;
+            p = skip_whitespace(skip_non_whitespace(p));
+        }
+        if (use_libtool)
+            len += 2;
+    }
+
+    /* Now, build the command. */
+    ts->command = xcalloc(len + 1, char *);
+    i = 0;
+    if (use_valgrind && valgrind != NULL) {
+        if (use_libtool) {
+            libtool = getenv("C_TAP_LIBTOOL");
+            if (libtool == NULL)
+                die("valgrind with libtool requested, but C_TAP_LIBTOOL is not"
+                    " set");
+            ts->command[i++] = xstrdup(libtool);
+            ts->command[i++] = xstrdup("--mode=execute");
+        }
+        p = skip_whitespace(valgrind);
+        while (*p != '\0') {
+            end = skip_non_whitespace(p);
+            ts->command[i++] = xstrndup(p, end - p);
+            p = skip_whitespace(end);
+        }
+    }
+    if (i != len - 1)
+        die("internal error while constructing command line");
+    ts->command[i++] = find_test(ts->file, source, build);
+    ts->command[i] = NULL;
+}
+
+
+/*
+ * Read a list of tests from a file, returning the list of tests as a struct
+ * testlist, or NULL if there were no tests (such as a file containing only
+ * comments).  Reports an error to standard error and exits if the list of
+ * tests cannot be read.
+ */
+static struct testlist *
+read_test_list(const char *filename, const char *source, const char *build)
+{
+    FILE *file;
+    unsigned int line;
+    size_t length;
+    char buffer[BUFSIZ];
+    const char *start;
+    struct testlist *listhead, *current;
+
+    /* Create the initial container list that will hold our results. */
+    listhead = xcalloc(1, struct testlist);
+    current = NULL;
+
+    /*
+     * Open our file of tests to run and read it line by line, creating a new
+     * struct testlist and struct testset for each line.
+     */
+    file = fopen(filename, "r");
+    if (file == NULL)
+        sysdie("can't open %s", filename);
+    line = 0;
+    while (fgets(buffer, sizeof(buffer), file)) {
+        line++;
+        length = strlen(buffer) - 1;
+        if (buffer[length] != '\n') {
+            fprintf(stderr, "%s:%u: line too long\n", filename, line);
+            exit(1);
+        }
+        buffer[length] = '\0';
+
+        /* Skip comments, leading spaces, and blank lines. */
+        start = skip_whitespace(buffer);
+        if (strlen(start) == 0)
+            continue;
+        if (start[0] == '#')
+            continue;
+
+        /* Allocate the new testset structure. */
+        if (current == NULL)
+            current = listhead;
+        else {
+            current->next = xcalloc(1, struct testlist);
+            current = current->next;
+        }
+        current->ts = xcalloc(1, struct testset);
+        current->ts->plan = PLAN_INIT;
+
+        /* Parse the line and store the results in the testset struct. */
+        parse_test_list_line(start, current->ts, source, build);
+    }
+    fclose(file);
+
+    /* If there were no tests, current is still NULL. */
+    if (current == NULL) {
+        free(listhead);
+        return NULL;
+    }
+
+    /* Return the results. */
+    return listhead;
+}
+
+
+/*
+ * Build a list of tests from command line arguments.  Takes the argv and argc
+ * representing the command line arguments and returns a newly allocated test
+ * list, or NULL if there were no tests.  The caller is responsible for
+ * freeing.
+ */
+static struct testlist *
+build_test_list(char *argv[], int argc, const char *source, const char *build)
+{
+    int i;
+    struct testlist *listhead, *current;
+
+    /* Create the initial container list that will hold our results. */
+    listhead = xcalloc(1, struct testlist);
+    current = NULL;
+
+    /* Walk the list of arguments and create test sets for them. */
+    for (i = 0; i < argc; i++) {
+        if (current == NULL)
+            current = listhead;
+        else {
+            current->next = xcalloc(1, struct testlist);
+            current = current->next;
+        }
+        current->ts = xcalloc(1, struct testset);
+        current->ts->plan = PLAN_INIT;
+        current->ts->file = xstrdup(argv[i]);
+        current->ts->command = xcalloc(2, char *);
+        current->ts->command[0] = find_test(current->ts->file, source, build);
+        current->ts->command[1] = NULL;
+    }
+
+    /* If there were no tests, current is still NULL. */
+    if (current == NULL) {
+        free(listhead);
+        return NULL;
+    }
+
+    /* Return the results. */
+    return listhead;
+}
+
+
+/* Free a struct testset. */
+static void
+free_testset(struct testset *ts)
+{
+    size_t i;
+
+    free(ts->file);
+    for (i = 0; ts->command[i] != NULL; i++)
+        free(ts->command[i]);
+    free(ts->command);
+    free(ts->results);
+    free(ts->reason);
+    free(ts);
+}
+
+
+/*
+ * Run a batch of tests.  Takes two additional parameters: the root of the
+ * source directory and the root of the build directory.  Test programs will
+ * be first searched for in the current directory, then the build directory,
+ * then the source directory.  Returns true iff all tests passed, and always
+ * frees the test list that's passed in.
+ */
+static int
+test_batch(struct testlist *tests, enum test_verbose verbose)
+{
+    size_t length, i;
+    size_t longest = 0;
+    unsigned int count = 0;
+    struct testset *ts;
+    struct timeval start, end;
+    struct rusage stats;
+    struct testlist *failhead = NULL;
+    struct testlist *failtail = NULL;
+    struct testlist *current, *next;
+    int succeeded;
+    unsigned long total = 0;
+    unsigned long passed = 0;
+    unsigned long skipped = 0;
+    unsigned long failed = 0;
+    unsigned long aborted = 0;
+
+    /* Walk the list of tests to find the longest name. */
+    for (current = tests; current != NULL; current = current->next) {
+        length = strlen(current->ts->file);
+        if (length > longest)
+            longest = length;
+    }
+
+    /*
+     * Add two to longest and round up to the nearest tab stop.  This is how
+     * wide the column for printing the current test name will be.
+     */
+    longest += 2;
+    if (longest % 8)
+        longest += 8 - (longest % 8);
+
+    /* Start the wall clock timer. */
+    gettimeofday(&start, NULL);
+
+    /* Now, plow through our tests again, running each one. */
+    for (current = tests; current != NULL; current = current->next) {
+        ts = current->ts;
+
+        /* Print out the name of the test file. */
+        fputs(ts->file, stdout);
+        if (verbose)
+            fputs("\n\n", stdout);
+        else
+            for (i = strlen(ts->file); i < longest; i++)
+                putchar('.');
+        if (isatty(STDOUT_FILENO))
+            fflush(stdout);
+
+        /* Run the test. */
+        succeeded = test_run(ts, verbose);
+        fflush(stdout);
+        if (verbose)
+            putchar('\n');
+
+        /* Record cumulative statistics. */
+        aborted += ts->aborted;
+        total += ts->count + ts->all_skipped;
+        passed += ts->passed;
+        skipped += ts->skipped + ts->all_skipped;
+        failed += ts->failed;
+        count++;
+
+        /* If the test fails, we shuffle it over to the fail list. */
+        if (!succeeded) {
+            if (failhead == NULL) {
+                failhead = xcalloc(1, struct testlist);
+                failtail = failhead;
+            } else {
+                failtail->next = xcalloc(1, struct testlist);
+                failtail = failtail->next;
+            }
+            failtail->ts = ts;
+            failtail->next = NULL;
+        }
+    }
+    total -= skipped;
+
+    /* Stop the timer and get our child resource statistics. */
+    gettimeofday(&end, NULL);
+    getrusage(RUSAGE_CHILDREN, &stats);
+
+    /* Summarize the failures and free the failure list. */
+    if (failhead != NULL) {
+        test_fail_summary(failhead);
+        while (failhead != NULL) {
+            next = failhead->next;
+            free(failhead);
+            failhead = next;
+        }
+    }
+
+    /* Free the memory used by the test lists. */
+    while (tests != NULL) {
+        next = tests->next;
+        free_testset(tests->ts);
+        free(tests);
+        tests = next;
+    }
+
+    /* Print out the final test summary. */
+    putchar('\n');
+    if (aborted != 0) {
+        if (aborted == 1)
+            printf("Aborted %lu test set", aborted);
+        else
+            printf("Aborted %lu test sets", aborted);
+        printf(", passed %lu/%lu tests", passed, total);
+    } else if (failed == 0)
+        fputs("All tests successful", stdout);
+    else
+        printf("Failed %lu/%lu tests, %.2f%% okay", failed, total,
+               (double) (total - failed) * 100.0 / (double) total);
+    if (skipped != 0) {
+        if (skipped == 1)
+            printf(", %lu test skipped", skipped);
+        else
+            printf(", %lu tests skipped", skipped);
+    }
+    puts(".");
+    printf("Files=%u,  Tests=%lu", count, total);
+    printf(",  %.2f seconds", tv_diff(&end, &start));
+    printf(" (%.2f usr + %.2f sys = %.2f CPU)\n", tv_seconds(&stats.ru_utime),
+           tv_seconds(&stats.ru_stime),
+           tv_sum(&stats.ru_utime, &stats.ru_stime));
+    return (failed == 0 && aborted == 0);
+}
+
+
+/*
+ * Run a single test case.  This involves just running the test program after
+ * having done the environment setup and finding the test program.
+ */
+static void
+test_single(const char *program, const char *source, const char *build)
+{
+    char *path;
+
+    path = find_test(program, source, build);
+    if (execl(path, path, (char *) 0) == -1)
+        sysdie("cannot exec %s", path);
+}
+
+
+/*
+ * Main routine.  Set the C_TAP_SOURCE, C_TAP_BUILD, SOURCE, and BUILD
+ * environment variables and then, given a file listing tests, run each test
+ * listed.
+ */
+int
+main(int argc, char *argv[])
+{
+    int option;
+    int status = 0;
+    int single = 0;
+    enum test_verbose verbose = CONCISE;
+    char *c_tap_source_env = NULL;
+    char *c_tap_build_env = NULL;
+    char *source_env = NULL;
+    char *build_env = NULL;
+    const char *program;
+    const char *shortlist;
+    const char *list = NULL;
+    const char *source = C_TAP_SOURCE;
+    const char *build = C_TAP_BUILD;
+    struct testlist *tests;
+
+    program = argv[0];
+    while ((option = getopt(argc, argv, "b:hl:os:v")) != EOF) {
+        switch (option) {
+        case 'b':
+            build = optarg;
+            break;
+        case 'h':
+            printf(usage_message, program, program, program, usage_extra);
+            exit(0);
+        case 'l':
+            list = optarg;
+            break;
+        case 'o':
+            single = 1;
+            break;
+        case 's':
+            source = optarg;
+            break;
+        case 'v':
+            verbose = VERBOSE;
+            break;
+        default:
+            exit(1);
+        }
+    }
+    argv += optind;
+    argc -= optind;
+    if ((list == NULL && argc < 1) || (list != NULL && argc > 0)) {
+        fprintf(stderr, usage_message, program, program, program, usage_extra);
+        exit(1);
+    }
+
+    /*
+     * If C_TAP_VERBOSE is set in the environment, that also turns on verbose
+     * mode.
+     */
+    if (getenv("C_TAP_VERBOSE") != NULL)
+        verbose = VERBOSE;
+
+    /*
+     * Set C_TAP_SOURCE and C_TAP_BUILD environment variables.  Also set
+     * SOURCE and BUILD for backward compatibility, although we're trying to
+     * migrate to the ones with a C_TAP_* prefix.
+     */
+    if (source != NULL) {
+        c_tap_source_env = concat("C_TAP_SOURCE=", source, (const char *) 0);
+        if (putenv(c_tap_source_env) != 0)
+            sysdie("cannot set C_TAP_SOURCE in the environment");
+        source_env = concat("SOURCE=", source, (const char *) 0);
+        if (putenv(source_env) != 0)
+            sysdie("cannot set SOURCE in the environment");
+    }
+    if (build != NULL) {
+        c_tap_build_env = concat("C_TAP_BUILD=", build, (const char *) 0);
+        if (putenv(c_tap_build_env) != 0)
+            sysdie("cannot set C_TAP_BUILD in the environment");
+        build_env = concat("BUILD=", build, (const char *) 0);
+        if (putenv(build_env) != 0)
+            sysdie("cannot set BUILD in the environment");
+    }
+
+    /* Run the tests as instructed. */
+    if (single)
+        test_single(argv[0], source, build);
+    else if (list != NULL) {
+        shortlist = strrchr(list, '/');
+        if (shortlist == NULL)
+            shortlist = list;
+        else
+            shortlist++;
+        printf(banner, shortlist);
+        tests = read_test_list(list, source, build);
+        status = test_batch(tests, verbose) ? 0 : 1;
+    } else {
+        tests = build_test_list(argv, argc, source, build);
+        status = test_batch(tests, verbose) ? 0 : 1;
+    }
+
+    /* For valgrind cleanliness, free all our memory. */
+    if (source_env != NULL) {
+        putenv((char *) "C_TAP_SOURCE=");
+        putenv((char *) "SOURCE=");
+        free(c_tap_source_env);
+        free(source_env);
+    }
+    if (build_env != NULL) {
+        putenv((char *) "C_TAP_BUILD=");
+        putenv((char *) "BUILD=");
+        free(c_tap_build_env);
+        free(build_env);
+    }
+    exit(status);
+}
\ No newline at end of file
diff --git a/t/tap/basic.c b/t/tap/basic.c
new file mode 100644
index 0000000000..704282b9c1
--- /dev/null
+++ b/t/tap/basic.c
@@ -0,0 +1,1029 @@
+/*
+ * Some utility routines for writing tests.
+ *
+ * Here are a variety of utility routines for writing tests compatible with
+ * the TAP protocol.  All routines of the form ok() or is*() take a test
+ * number and some number of appropriate arguments, check to be sure the
+ * results match the expected output using the arguments, and print out
+ * something appropriate for that test number.  Other utility routines help in
+ * constructing more complex tests, skipping tests, reporting errors, setting
+ * up the TAP output format, or finding things in the test environment.
+ *
+ * This file is part of C TAP Harness.  The current version plus supporting
+ * documentation is at <https://www.eyrie.org/~eagle/software/c-tap-harness/>.
+ *
+ * Written by Russ Allbery <eagle@eyrie.org>
+ * Copyright 2009-2019, 2021 Russ Allbery <eagle@eyrie.org>
+ * Copyright 2001-2002, 2004-2008, 2011-2014
+ *     The Board of Trustees of the Leland Stanford Junior University
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: MIT
+ */
+
+#include <errno.h>
+#include <limits.h>
+#include <stdarg.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#ifdef _WIN32
+#    include <direct.h>
+#else
+#    include <sys/stat.h>
+#endif
+#include <sys/types.h>
+#include <unistd.h>
+
+#include <tap/basic.h>
+
+/* Windows provides mkdir and rmdir under different names. */
+#ifdef _WIN32
+#    define mkdir(p, m) _mkdir(p)
+#    define rmdir(p)    _rmdir(p)
+#endif
+
+/*
+ * The test count.  Always contains the number that will be used for the next
+ * test status.  This is exported to callers of the library.
+ */
+unsigned long testnum = 1;
+
+/*
+ * Status information stored so that we can give a test summary at the end of
+ * the test case.  We store the planned final test and the count of failures.
+ * We can get the highest test count from testnum.
+ */
+static unsigned long _planned = 0;
+static unsigned long _failed = 0;
+
+/*
+ * Store the PID of the process that called plan() and only summarize
+ * results when that process exits, so as to not misreport results in forked
+ * processes.
+ */
+static pid_t _process = 0;
+
+/*
+ * If true, we're doing lazy planning and will print out the plan based on the
+ * last test number at the end of testing.
+ */
+static int _lazy = 0;
+
+/*
+ * If true, the test was aborted by calling bail().  Currently, this is only
+ * used to ensure that we pass a false value to any cleanup functions even if
+ * all tests to that point have passed.
+ */
+static int _aborted = 0;
+
+/*
+ * Registered cleanup functions.  These are stored as a linked list and run in
+ * registered order by finish when the test program exits.  Each function is
+ * passed a boolean value indicating whether all tests were successful.
+ */
+struct cleanup_func {
+    test_cleanup_func func;
+    test_cleanup_func_with_data func_with_data;
+    void *data;
+    struct cleanup_func *next;
+};
+static struct cleanup_func *cleanup_funcs = NULL;
+
+/*
+ * Registered diag files.  Any output found in these files will be printed out
+ * as if it were passed to diag() before any other output we do.  This allows
+ * background processes to log to a file and have that output interleaved with
+ * the test output.
+ */
+struct diag_file {
+    char *name;
+    FILE *file;
+    char *buffer;
+    size_t bufsize;
+    struct diag_file *next;
+};
+static struct diag_file *diag_files = NULL;
+
+/*
+ * Print a specified prefix and then the test description.  Handles turning
+ * the argument list into a va_args structure suitable for passing to
+ * print_desc, which has to be done in a macro.  Assumes that format is the
+ * argument immediately before the variadic arguments.
+ */
+#define PRINT_DESC(prefix, format)  \
+    do {                            \
+        if (format != NULL) {       \
+            va_list args;           \
+            printf("%s", prefix);   \
+            va_start(args, format); \
+            vprintf(format, args);  \
+            va_end(args);           \
+        }                           \
+    } while (0)
+
+
+/*
+ * Form a new string by concatenating multiple strings.  The arguments must be
+ * terminated by (const char *) 0.
+ *
+ * This function only exists because we can't assume asprintf.  We can't
+ * simulate asprintf with snprintf because we're only assuming SUSv3, which
+ * does not require that snprintf with a NULL buffer return the required
+ * length.  When those constraints are relaxed, this should be ripped out and
+ * replaced with asprintf or a more trivial replacement with snprintf.
+ */
+static char *
+concat(const char *first, ...)
+{
+    va_list args;
+    char *result;
+    const char *string;
+    size_t offset;
+    size_t length = 0;
+
+    /*
+     * Find the total memory required.  Ensure we don't overflow length.  See
+     * the comment for breallocarray for why we're using UINT_MAX here.
+     */
+    va_start(args, first);
+    for (string = first; string != NULL; string = va_arg(args, const char *)) {
+        if (length >= UINT_MAX - strlen(string))
+            bail("strings too long in concat");
+        length += strlen(string);
+    }
+    va_end(args);
+    length++;
+
+    /* Create the string. */
+    result = bcalloc_type(length, char);
+    va_start(args, first);
+    offset = 0;
+    for (string = first; string != NULL; string = va_arg(args, const char *)) {
+        memcpy(result + offset, string, strlen(string));
+        offset += strlen(string);
+    }
+    va_end(args);
+    result[offset] = '\0';
+    return result;
+}
+
+
+/*
+ * Helper function for check_diag_files to handle a single line in a diag
+ * file.
+ *
+ * The general scheme here used is as follows: read one line of output.  If we
+ * get NULL, check for an error.  If there was one, bail out of the test
+ * program; otherwise, return, and the enclosing loop will check for EOF.
+ *
+ * If we get some data, see if it ends in a newline.  If it doesn't end in a
+ * newline, we have one of two cases: our buffer isn't large enough, in which
+ * case we resize it and try again, or we have incomplete data in the file, in
+ * which case we rewind the file and will try again next time.
+ *
+ * Returns a boolean indicating whether the last line was incomplete.
+ */
+static int
+handle_diag_file_line(struct diag_file *file, fpos_t where)
+{
+    int size;
+    size_t length;
+
+    /* Read the next line from the file. */
+    size = file->bufsize > INT_MAX ? INT_MAX : (int) file->bufsize;
+    if (fgets(file->buffer, size, file->file) == NULL) {
+        if (ferror(file->file))
+            sysbail("cannot read from %s", file->name);
+        return 0;
+    }
+
+    /*
+     * See if the line ends in a newline.  If not, see which error case we
+     * have.
+     */
+    length = strlen(file->buffer);
+    if (file->buffer[length - 1] != '\n') {
+        int incomplete = 0;
+
+        /* Check whether we ran out of buffer space and resize if so. */
+        if (length < file->bufsize - 1)
+            incomplete = 1;
+        else {
+            file->bufsize += BUFSIZ;
+            file->buffer =
+                breallocarray_type(file->buffer, file->bufsize, char);
+        }
+
+        /*
+         * On either incomplete lines or too small of a buffer, rewind
+         * and read the file again (on the next pass, if incomplete).
+         * It's simpler than trying to double-buffer the file.
+         */
+        if (fsetpos(file->file, &where) < 0)
+            sysbail("cannot set position in %s", file->name);
+        return incomplete;
+    }
+
+    /* We saw a complete line.  Print it out. */
+    printf("# %s", file->buffer);
+    return 0;
+}
+
+
+/*
+ * Check all registered diag_files for any output.  We only print out the
+ * output if we see a complete line; otherwise, we wait for the next newline.
+ */
+static void
+check_diag_files(void)
+{
+    struct diag_file *file;
+    fpos_t where;
+    int incomplete;
+
+    /*
+     * Walk through each file and read each line of output available.
+     */
+    for (file = diag_files; file != NULL; file = file->next) {
+        clearerr(file->file);
+
+        /* Store the current position in case we have to rewind. */
+        if (fgetpos(file->file, &where) < 0)
+            sysbail("cannot get position in %s", file->name);
+
+        /* Continue until we get EOF or an incomplete line of data. */
+        incomplete = 0;
+        while (!feof(file->file) && !incomplete) {
+            incomplete = handle_diag_file_line(file, where);
+        }
+    }
+}
+
+
+/*
+ * Our exit handler.  Called on completion of the test to report a summary of
+ * results provided we're still in the original process.  This also handles
+ * printing out the plan if we used plan_lazy(), although that's suppressed if
+ * we never ran a test (due to an early bail, for example), and running any
+ * registered cleanup functions.
+ */
+static void
+finish(void)
+{
+    int success, primary;
+    struct cleanup_func *current;
+    unsigned long highest = testnum - 1;
+    struct diag_file *file, *tmp;
+
+    /* Check for pending diag_file output. */
+    check_diag_files();
+
+    /* Free the diag_files. */
+    file = diag_files;
+    while (file != NULL) {
+        tmp = file;
+        file = file->next;
+        fclose(tmp->file);
+        free(tmp->name);
+        free(tmp->buffer);
+        free(tmp);
+    }
+    diag_files = NULL;
+
+    /*
+     * Determine whether all tests were successful, which is needed before
+     * calling cleanup functions since we pass that fact to the functions.
+     */
+    if (_planned == 0 && _lazy)
+        _planned = highest;
+    success = (!_aborted && _planned == highest && _failed == 0);
+
+    /*
+     * If there are any registered cleanup functions, we run those first.  We
+     * always run them, even if we didn't run a test.  Don't do anything
+     * except free the diag_files and call cleanup functions if we aren't the
+     * primary process (the process in which plan or plan_lazy was called),
+     * and tell the cleanup functions that fact.
+     */
+    primary = (_process == 0 || getpid() == _process);
+    while (cleanup_funcs != NULL) {
+        if (cleanup_funcs->func_with_data) {
+            void *data = cleanup_funcs->data;
+
+            cleanup_funcs->func_with_data(success, primary, data);
+        } else {
+            cleanup_funcs->func(success, primary);
+        }
+        current = cleanup_funcs;
+        cleanup_funcs = cleanup_funcs->next;
+        free(current);
+    }
+    if (!primary)
+        return;
+
+    /* Don't do anything further if we never planned a test. */
+    if (_planned == 0)
+        return;
+
+    /* If we're aborting due to bail, don't print summaries. */
+    if (_aborted)
+        return;
+
+    /* Print out the lazy plan if needed. */
+    fflush(stderr);
+    if (_lazy)
+        printf("1..%lu\n", _planned);
+
+    /* Print out a summary of the results. */
+    if (_planned > highest)
+        diag("Looks like you planned %lu test%s but only ran %lu", _planned,
+             (_planned > 1 ? "s" : ""), highest);
+    else if (_planned < highest)
+        diag("Looks like you planned %lu test%s but ran %lu extra", _planned,
+             (_planned > 1 ? "s" : ""), highest - _planned);
+    else if (_failed > 0)
+        diag("Looks like you failed %lu test%s of %lu", _failed,
+             (_failed > 1 ? "s" : ""), _planned);
+    else if (_planned != 1)
+        diag("All %lu tests successful or skipped", _planned);
+    else
+        diag("%lu test successful or skipped", _planned);
+}
+
+
+/*
+ * Initialize things.  Turns on line buffering on stdout and then prints out
+ * the number of tests in the test suite.  We intentionally don't check for
+ * pending diag_file output here, since it should really come after the plan.
+ */
+void
+plan(unsigned long count)
+{
+    if (setvbuf(stdout, NULL, _IOLBF, BUFSIZ) != 0)
+        sysdiag("cannot set stdout to line buffered");
+    fflush(stderr);
+    printf("1..%lu\n", count);
+    testnum = 1;
+    _planned = count;
+    _process = getpid();
+    if (atexit(finish) != 0) {
+        sysdiag("cannot register exit handler");
+        diag("cleanups will not be run");
+    }
+}
+
+
+/*
+ * Initialize things for lazy planning, where we'll automatically print out a
+ * plan at the end of the program.  Turns on line buffering on stdout as well.
+ */
+void
+plan_lazy(void)
+{
+    if (setvbuf(stdout, NULL, _IOLBF, BUFSIZ) != 0)
+        sysdiag("cannot set stdout to line buffered");
+    testnum = 1;
+    _process = getpid();
+    _lazy = 1;
+    if (atexit(finish) != 0)
+        sysbail("cannot register exit handler to display plan");
+}
+
+
+/*
+ * Skip the entire test suite and exits.  Should be called instead of plan(),
+ * not after it, since it prints out a special plan line.  Ignore diag_file
+ * output here, since it's not clear if it's allowed before the plan.
+ */
+void
+skip_all(const char *format, ...)
+{
+    fflush(stderr);
+    printf("1..0 # skip");
+    PRINT_DESC(" ", format);
+    putchar('\n');
+    exit(0);
+}
+
+
+/*
+ * Takes a boolean success value and assumes the test passes if that value
+ * is true and fails if that value is false.
+ */
+int
+ok(int success, const char *format, ...)
+{
+    fflush(stderr);
+    check_diag_files();
+    printf("%sok %lu", success ? "" : "not ", testnum++);
+    if (!success)
+        _failed++;
+    PRINT_DESC(" - ", format);
+    putchar('\n');
+    return success;
+}
+
+
+/*
+ * Same as ok(), but takes the format arguments as a va_list.
+ */
+int
+okv(int success, const char *format, va_list args)
+{
+    fflush(stderr);
+    check_diag_files();
+    printf("%sok %lu", success ? "" : "not ", testnum++);
+    if (!success)
+        _failed++;
+    if (format != NULL) {
+        printf(" - ");
+        vprintf(format, args);
+    }
+    putchar('\n');
+    return success;
+}
+
+
+/*
+ * Skip a test.
+ */
+void
+skip(const char *reason, ...)
+{
+    fflush(stderr);
+    check_diag_files();
+    printf("ok %lu # skip", testnum++);
+    PRINT_DESC(" ", reason);
+    putchar('\n');
+}
+
+
+/*
+ * Report the same status on the next count tests.
+ */
+int
+ok_block(unsigned long count, int success, const char *format, ...)
+{
+    unsigned long i;
+
+    fflush(stderr);
+    check_diag_files();
+    for (i = 0; i < count; i++) {
+        printf("%sok %lu", success ? "" : "not ", testnum++);
+        if (!success)
+            _failed++;
+        PRINT_DESC(" - ", format);
+        putchar('\n');
+    }
+    return success;
+}
+
+
+/*
+ * Skip the next count tests.
+ */
+void
+skip_block(unsigned long count, const char *reason, ...)
+{
+    unsigned long i;
+
+    fflush(stderr);
+    check_diag_files();
+    for (i = 0; i < count; i++) {
+        printf("ok %lu # skip", testnum++);
+        PRINT_DESC(" ", reason);
+        putchar('\n');
+    }
+}
+
+
+/*
+ * Takes two boolean values and requires the truth value of both match.
+ */
+int
+is_bool(int left, int right, const char *format, ...)
+{
+    int success;
+
+    fflush(stderr);
+    check_diag_files();
+    success = (!!left == !!right);
+    if (success)
+        printf("ok %lu", testnum++);
+    else {
+        diag(" left: %s", !!left ? "true" : "false");
+        diag("right: %s", !!right ? "true" : "false");
+        printf("not ok %lu", testnum++);
+        _failed++;
+    }
+    PRINT_DESC(" - ", format);
+    putchar('\n');
+    return success;
+}
+
+
+/*
+ * Takes two integer values and requires they match.
+ */
+int
+is_int(long left, long right, const char *format, ...)
+{
+    int success;
+
+    fflush(stderr);
+    check_diag_files();
+    success = (left == right);
+    if (success)
+        printf("ok %lu", testnum++);
+    else {
+        diag(" left: %ld", left);
+        diag("right: %ld", right);
+        printf("not ok %lu", testnum++);
+        _failed++;
+    }
+    PRINT_DESC(" - ", format);
+    putchar('\n');
+    return success;
+}
+
+
+/*
+ * Takes two strings and requires they match (using strcmp).  NULL arguments
+ * are permitted and handled correctly.
+ */
+int
+is_string(const char *left, const char *right, const char *format, ...)
+{
+    int success;
+
+    fflush(stderr);
+    check_diag_files();
+
+    /* Compare the strings, being careful of NULL. */
+    if (left == NULL)
+        success = (right == NULL);
+    else if (right == NULL)
+        success = 0;
+    else
+        success = (strcmp(left, right) == 0);
+
+    /* Report the results. */
+    if (success)
+        printf("ok %lu", testnum++);
+    else {
+        diag(" left: %s", left == NULL ? "(null)" : left);
+        diag("right: %s", right == NULL ? "(null)" : right);
+        printf("not ok %lu", testnum++);
+        _failed++;
+    }
+    PRINT_DESC(" - ", format);
+    putchar('\n');
+    return success;
+}
+
+
+/*
+ * Takes two unsigned longs and requires they match.  On failure, reports them
+ * in hex.
+ */
+int
+is_hex(unsigned long left, unsigned long right, const char *format, ...)
+{
+    int success;
+
+    fflush(stderr);
+    check_diag_files();
+    success = (left == right);
+    if (success)
+        printf("ok %lu", testnum++);
+    else {
+        diag(" left: %lx", (unsigned long) left);
+        diag("right: %lx", (unsigned long) right);
+        printf("not ok %lu", testnum++);
+        _failed++;
+    }
+    PRINT_DESC(" - ", format);
+    putchar('\n');
+    return success;
+}
+
+
+/*
+ * Takes pointers to a regions of memory and requires that len bytes from each
+ * match.  Otherwise reports any bytes which didn't match.
+ */
+int
+is_blob(const void *left, const void *right, size_t len, const char *format,
+        ...)
+{
+    int success;
+    size_t i;
+
+    fflush(stderr);
+    check_diag_files();
+    success = (memcmp(left, right, len) == 0);
+    if (success)
+        printf("ok %lu", testnum++);
+    else {
+        const unsigned char *left_c = (const unsigned char *) left;
+        const unsigned char *right_c = (const unsigned char *) right;
+
+        for (i = 0; i < len; i++) {
+            if (left_c[i] != right_c[i])
+                diag("offset %lu: left %02x, right %02x", (unsigned long) i,
+                     left_c[i], right_c[i]);
+        }
+        printf("not ok %lu", testnum++);
+        _failed++;
+    }
+    PRINT_DESC(" - ", format);
+    putchar('\n');
+    return success;
+}
+
+
+/*
+ * Bail out with an error.
+ */
+void
+bail(const char *format, ...)
+{
+    va_list args;
+
+    _aborted = 1;
+    fflush(stderr);
+    check_diag_files();
+    fflush(stdout);
+    printf("Bail out! ");
+    va_start(args, format);
+    vprintf(format, args);
+    va_end(args);
+    printf("\n");
+    exit(255);
+}
+
+
+/*
+ * Bail out with an error, appending strerror(errno).
+ */
+void
+sysbail(const char *format, ...)
+{
+    va_list args;
+    int oerrno = errno;
+
+    _aborted = 1;
+    fflush(stderr);
+    check_diag_files();
+    fflush(stdout);
+    printf("Bail out! ");
+    va_start(args, format);
+    vprintf(format, args);
+    va_end(args);
+    printf(": %s\n", strerror(oerrno));
+    exit(255);
+}
+
+
+/*
+ * Report a diagnostic to stderr.  Always returns 1 to allow embedding in
+ * compound statements.
+ */
+int
+diag(const char *format, ...)
+{
+    va_list args;
+
+    fflush(stderr);
+    check_diag_files();
+    fflush(stdout);
+    printf("# ");
+    va_start(args, format);
+    vprintf(format, args);
+    va_end(args);
+    printf("\n");
+    return 1;
+}
+
+
+/*
+ * Report a diagnostic to stderr, appending strerror(errno).  Always returns 1
+ * to allow embedding in compound statements.
+ */
+int
+sysdiag(const char *format, ...)
+{
+    va_list args;
+    int oerrno = errno;
+
+    fflush(stderr);
+    check_diag_files();
+    fflush(stdout);
+    printf("# ");
+    va_start(args, format);
+    vprintf(format, args);
+    va_end(args);
+    printf(": %s\n", strerror(oerrno));
+    return 1;
+}
+
+
+/*
+ * Register a new file for diag_file processing.
+ */
+void
+diag_file_add(const char *name)
+{
+    struct diag_file *file, *prev;
+
+    file = bcalloc_type(1, struct diag_file);
+    file->name = bstrdup(name);
+    file->file = fopen(file->name, "r");
+    if (file->file == NULL)
+        sysbail("cannot open %s", name);
+    file->buffer = bcalloc_type(BUFSIZ, char);
+    file->bufsize = BUFSIZ;
+    if (diag_files == NULL)
+        diag_files = file;
+    else {
+        for (prev = diag_files; prev->next != NULL; prev = prev->next)
+            ;
+        prev->next = file;
+    }
+}
+
+
+/*
+ * Remove a file from diag_file processing.  If the file is not found, do
+ * nothing, since there are some situations where it can be removed twice
+ * (such as if it's removed from a cleanup function, since cleanup functions
+ * are called after freeing all the diag_files).
+ */
+void
+diag_file_remove(const char *name)
+{
+    struct diag_file *file;
+    struct diag_file **prev = &diag_files;
+
+    for (file = diag_files; file != NULL; file = file->next) {
+        if (strcmp(file->name, name) == 0) {
+            *prev = file->next;
+            fclose(file->file);
+            free(file->name);
+            free(file->buffer);
+            free(file);
+            return;
+        }
+        prev = &file->next;
+    }
+}
+
+
+/*
+ * Allocate cleared memory, reporting a fatal error with bail on failure.
+ */
+void *
+bcalloc(size_t n, size_t size)
+{
+    void *p;
+
+    p = calloc(n, size);
+    if (p == NULL)
+        sysbail("failed to calloc %lu", (unsigned long) (n * size));
+    return p;
+}
+
+
+/*
+ * Allocate memory, reporting a fatal error with bail on failure.
+ */
+void *
+bmalloc(size_t size)
+{
+    void *p;
+
+    p = malloc(size);
+    if (p == NULL)
+        sysbail("failed to malloc %lu", (unsigned long) size);
+    return p;
+}
+
+
+/*
+ * Reallocate memory, reporting a fatal error with bail on failure.
+ */
+void *
+brealloc(void *p, size_t size)
+{
+    p = realloc(p, size);
+    if (p == NULL)
+        sysbail("failed to realloc %lu bytes", (unsigned long) size);
+    return p;
+}
+
+
+/*
+ * The same as brealloc, but determine the size by multiplying an element
+ * count by a size, similar to calloc.  The multiplication is checked for
+ * integer overflow.
+ *
+ * We should technically use SIZE_MAX here for the overflow check, but
+ * SIZE_MAX is C99 and we're only assuming C89 + SUSv3, which does not
+ * guarantee that it exists.  They do guarantee that UINT_MAX exists, and we
+ * can assume that UINT_MAX <= SIZE_MAX.
+ *
+ * (In theory, C89 and C99 permit size_t to be smaller than unsigned int, but
+ * I disbelieve in the existence of such systems and they will have to cope
+ * without overflow checks.)
+ */
+void *
+breallocarray(void *p, size_t n, size_t size)
+{
+    if (n > 0 && UINT_MAX / n <= size)
+        bail("reallocarray too large");
+    if (n == 0)
+        n = 1;
+    p = realloc(p, n * size);
+    if (p == NULL)
+        sysbail("failed to realloc %lu bytes", (unsigned long) (n * size));
+    return p;
+}
+
+
+/*
+ * Copy a string, reporting a fatal error with bail on failure.
+ */
+char *
+bstrdup(const char *s)
+{
+    char *p;
+    size_t len;
+
+    len = strlen(s) + 1;
+    p = (char *) malloc(len);
+    if (p == NULL)
+        sysbail("failed to strdup %lu bytes", (unsigned long) len);
+    memcpy(p, s, len);
+    return p;
+}
+
+
+/*
+ * Copy up to n characters of a string, reporting a fatal error with bail on
+ * failure.  Don't use the system strndup function, since it may not exist and
+ * the TAP library doesn't assume any portability support.
+ */
+char *
+bstrndup(const char *s, size_t n)
+{
+    const char *p;
+    char *copy;
+    size_t length;
+
+    /* Don't assume that the source string is nul-terminated. */
+    for (p = s; (size_t) (p - s) < n && *p != '\0'; p++)
+        ;
+    length = (size_t) (p - s);
+    copy = (char *) malloc(length + 1);
+    if (copy == NULL)
+        sysbail("failed to strndup %lu bytes", (unsigned long) length);
+    memcpy(copy, s, length);
+    copy[length] = '\0';
+    return copy;
+}
+
+
+/*
+ * Locate a test file.  Given the partial path to a file, look under
+ * C_TAP_BUILD and then C_TAP_SOURCE for the file and return the full path to
+ * the file.  Returns NULL if the file doesn't exist.  A non-NULL return
+ * should be freed with test_file_path_free().
+ */
+char *
+test_file_path(const char *file)
+{
+    char *base;
+    char *path = NULL;
+    const char *envs[] = {"C_TAP_BUILD", "C_TAP_SOURCE", NULL};
+    int i;
+
+    for (i = 0; envs[i] != NULL; i++) {
+        base = getenv(envs[i]);
+        if (base == NULL)
+            continue;
+        path = concat(base, "/", file, (const char *) 0);
+        if (access(path, R_OK) == 0)
+            break;
+        free(path);
+        path = NULL;
+    }
+    return path;
+}
+
+
+/*
+ * Free a path returned from test_file_path().  This function exists primarily
+ * for Windows, where memory must be freed from the same library domain that
+ * it was allocated from.
+ */
+void
+test_file_path_free(char *path)
+{
+    free(path);
+}
+
+
+/*
+ * Create a temporary directory, tmp, under C_TAP_BUILD if set and the current
+ * directory if it does not.  Returns the path to the temporary directory in
+ * newly allocated memory, and calls bail on any failure.  The return value
+ * should be freed with test_tmpdir_free.
+ *
+ * This function uses sprintf because it attempts to be independent of all
+ * other portability layers.  The use immediately after a memory allocation
+ * should be safe without using snprintf or strlcpy/strlcat.
+ */
+char *
+test_tmpdir(void)
+{
+    const char *build;
+    char *path = NULL;
+
+    build = getenv("C_TAP_BUILD");
+    if (build == NULL)
+        build = ".";
+    path = concat(build, "/tmp", (const char *) 0);
+    if (access(path, X_OK) < 0)
+        if (mkdir(path, 0777) < 0)
+            sysbail("error creating temporary directory %s", path);
+    return path;
+}
+
+
+/*
+ * Free a path returned from test_tmpdir() and attempt to remove the
+ * directory.  If we can't delete the directory, don't worry; something else
+ * that hasn't yet cleaned up may still be using it.
+ */
+void
+test_tmpdir_free(char *path)
+{
+    if (path != NULL)
+        rmdir(path);
+    free(path);
+}
+
+static void
+register_cleanup(test_cleanup_func func,
+                 test_cleanup_func_with_data func_with_data, void *data)
+{
+    struct cleanup_func *cleanup, **last;
+
+    cleanup = bcalloc_type(1, struct cleanup_func);
+    cleanup->func = func;
+    cleanup->func_with_data = func_with_data;
+    cleanup->data = data;
+    cleanup->next = NULL;
+    last = &cleanup_funcs;
+    while (*last != NULL)
+        last = &(*last)->next;
+    *last = cleanup;
+}
+
+/*
+ * Register a cleanup function that is called when testing ends.  All such
+ * registered functions will be run by finish.
+ */
+void
+test_cleanup_register(test_cleanup_func func)
+{
+    register_cleanup(func, NULL, NULL);
+}
+
+/*
+ * Same as above, but also allows an opaque pointer to be passed to the cleanup
+ * function.
+ */
+void
+test_cleanup_register_with_data(test_cleanup_func_with_data func, void *data)
+{
+    register_cleanup(NULL, func, data);
+}
diff --git a/t/tap/basic.h b/t/tap/basic.h
new file mode 100644
index 0000000000..afea8cb210
--- /dev/null
+++ b/t/tap/basic.h
@@ -0,0 +1,198 @@
+/*
+ * Basic utility routines for the TAP protocol.
+ *
+ * This file is part of C TAP Harness.  The current version plus supporting
+ * documentation is at <https://www.eyrie.org/~eagle/software/c-tap-harness/>.
+ *
+ * Written by Russ Allbery <eagle@eyrie.org>
+ * Copyright 2009-2019, 2022 Russ Allbery <eagle@eyrie.org>
+ * Copyright 2001-2002, 2004-2008, 2011-2012, 2014
+ *     The Board of Trustees of the Leland Stanford Junior University
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: MIT
+ */
+
+#ifndef TAP_BASIC_H
+#define TAP_BASIC_H 1
+
+#include <stdarg.h> /* va_list */
+#include <stddef.h> /* size_t */
+#include <stdlib.h> /* free */
+#include <tap/macros.h>
+
+/*
+ * Used for iterating through arrays.  ARRAY_SIZE returns the number of
+ * elements in the array (useful for a < upper bound in a for loop) and
+ * ARRAY_END returns a pointer to the element past the end (ISO C99 makes it
+ * legal to refer to such a pointer as long as it's never dereferenced).
+ */
+// #define ARRAY_SIZE(array) (sizeof(array) / sizeof((array)[0]))
+// #define ARRAY_END(array)  (&(array)[ARRAY_SIZE(array)])
+
+BEGIN_DECLS
+
+/*
+ * The test count.  Always contains the number that will be used for the next
+ * test status.
+ */
+extern unsigned long testnum;
+
+/* Print out the number of tests and set standard output to line buffered. */
+void plan(unsigned long count);
+
+/*
+ * Prepare for lazy planning, in which the plan will be printed automatically
+ * at the end of the test program.
+ */
+void plan_lazy(void);
+
+/* Skip the entire test suite.  Call instead of plan. */
+void skip_all(const char *format, ...)
+    __attribute__((__noreturn__, __format__(printf, 1, 2)));
+
+/*
+ * Basic reporting functions.  The okv() function is the same as ok() but
+ * takes the test description as a va_list to make it easier to reuse the
+ * reporting infrastructure when writing new tests.  ok() and okv() return the
+ * value of the success argument.
+ */
+int ok(int success, const char *format, ...)
+    __attribute__((__format__(printf, 2, 3)));
+int okv(int success, const char *format, va_list args)
+    __attribute__((__format__(printf, 2, 0)));
+void skip(const char *reason, ...) __attribute__((__format__(printf, 1, 2)));
+
+/*
+ * Report the same status on, or skip, the next count tests.  ok_block()
+ * returns the value of the success argument.
+ */
+int ok_block(unsigned long count, int success, const char *format, ...)
+    __attribute__((__format__(printf, 3, 4)));
+void skip_block(unsigned long count, const char *reason, ...)
+    __attribute__((__format__(printf, 2, 3)));
+
+/*
+ * Compare two values.  Returns true if the test passes and false if it fails.
+ * is_bool takes an int since the bool type isn't fully portable yet, but
+ * interprets both arguments for their truth value, not for their numeric
+ * value.
+ */
+int is_bool(int, int, const char *format, ...)
+    __attribute__((__format__(printf, 3, 4)));
+int is_int(long, long, const char *format, ...)
+    __attribute__((__format__(printf, 3, 4)));
+int is_string(const char *, const char *, const char *format, ...)
+    __attribute__((__format__(printf, 3, 4)));
+int is_hex(unsigned long, unsigned long, const char *format, ...)
+    __attribute__((__format__(printf, 3, 4)));
+int is_blob(const void *, const void *, size_t, const char *format, ...)
+    __attribute__((__format__(printf, 4, 5)));
+
+/* Bail out with an error.  sysbail appends strerror(errno). */
+void bail(const char *format, ...)
+    __attribute__((__noreturn__, __nonnull__, __format__(printf, 1, 2)));
+void sysbail(const char *format, ...)
+    __attribute__((__noreturn__, __nonnull__, __format__(printf, 1, 2)));
+
+/* Report a diagnostic to stderr prefixed with #. */
+int diag(const char *format, ...)
+    __attribute__((__nonnull__, __format__(printf, 1, 2)));
+int sysdiag(const char *format, ...)
+    __attribute__((__nonnull__, __format__(printf, 1, 2)));
+
+/*
+ * Register or unregister a file that contains supplementary diagnostics.
+ * Before any other output, all registered files will be read, line by line,
+ * and each line will be reported as a diagnostic as if it were passed to
+ * diag().  Nul characters are not supported in these files and will result in
+ * truncated output.
+ */
+void diag_file_add(const char *file) __attribute__((__nonnull__));
+void diag_file_remove(const char *file) __attribute__((__nonnull__));
+
+/* Allocate memory, reporting a fatal error with bail on failure. */
+void *bcalloc(size_t, size_t)
+    __attribute__((__alloc_size__(1, 2), __malloc__(free),
+                   __warn_unused_result__));
+void *bmalloc(size_t) __attribute__((__alloc_size__(1), __malloc__(free),
+                                     __warn_unused_result__));
+void *breallocarray(void *, size_t, size_t)
+    __attribute__((__alloc_size__(2, 3), __malloc__(free),
+                   __warn_unused_result__));
+void *brealloc(void *, size_t)
+    __attribute__((__alloc_size__(2), __malloc__(free),
+                   __warn_unused_result__));
+char *bstrdup(const char *)
+    __attribute__((__malloc__(free), __nonnull__, __warn_unused_result__));
+char *bstrndup(const char *, size_t)
+    __attribute__((__malloc__(free), __nonnull__, __warn_unused_result__));
+
+/*
+ * Macros that cast the return value from b* memory functions, making them
+ * usable in C++ code and providing some additional type safety.
+ */
+#define bcalloc_type(n, type) ((type *) bcalloc((n), sizeof(type)))
+#define breallocarray_type(p, n, type) \
+    ((type *) breallocarray((p), (n), sizeof(type)))
+
+/*
+ * Find a test file under C_TAP_BUILD or C_TAP_SOURCE, returning the full
+ * path.  The returned path should be freed with test_file_path_free().
+ */
+void test_file_path_free(char *path);
+char *test_file_path(const char *file)
+    __attribute__((__malloc__(test_file_path_free), __nonnull__,
+                   __warn_unused_result__));
+
+/*
+ * Create a temporary directory relative to C_TAP_BUILD and return the path.
+ * The returned path should be freed with test_tmpdir_free().
+ */
+void test_tmpdir_free(char *path);
+char *test_tmpdir(void)
+    __attribute__((__malloc__(test_tmpdir_free), __warn_unused_result__));
+
+/*
+ * Register a cleanup function that is called when testing ends.  All such
+ * registered functions will be run during atexit handling (and are therefore
+ * subject to all the same constraints and caveats as atexit functions).
+ *
+ * The function must return void and will be passed two arguments: an int that
+ * will be true if the test completed successfully and false otherwise, and an
+ * int that will be true if the cleanup function is run in the primary process
+ * (the one that called plan or plan_lazy) and false otherwise.  If
+ * test_cleanup_register_with_data is used instead, a generic pointer can be
+ * provided and will be passed to the cleanup function as a third argument.
+ *
+ * test_cleanup_register_with_data is the better API and should have been the
+ * only API.  test_cleanup_register was an API error preserved for backward
+ * cmpatibility.
+ */
+typedef void (*test_cleanup_func)(int, int);
+typedef void (*test_cleanup_func_with_data)(int, int, void *);
+
+void test_cleanup_register(test_cleanup_func) __attribute__((__nonnull__));
+void test_cleanup_register_with_data(test_cleanup_func_with_data, void *)
+    __attribute__((__nonnull__));
+
+END_DECLS
+
+#endif /* TAP_BASIC_H */
diff --git a/t/tap/macros.h b/t/tap/macros.h
new file mode 100644
index 0000000000..0eabcb5847
--- /dev/null
+++ b/t/tap/macros.h
@@ -0,0 +1,109 @@
+/*
+ * Helpful macros for TAP header files.
+ *
+ * This is not, strictly speaking, related to TAP, but any TAP add-on is
+ * probably going to need these macros, so define them in one place so that
+ * everyone can pull them in.
+ *
+ * This file is part of C TAP Harness.  The current version plus supporting
+ * documentation is at <https://www.eyrie.org/~eagle/software/c-tap-harness/>.
+ *
+ * Copyright 2008, 2012-2013, 2015, 2022 Russ Allbery <eagle@eyrie.org>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * SPDX-License-Identifier: MIT
+ */
+
+#ifndef TAP_MACROS_H
+#define TAP_MACROS_H 1
+
+/*
+ * __attribute__ is available in gcc 2.5 and later, but only with gcc 2.7
+ * could you use the __format__ form of the attributes, which is what we use
+ * (to avoid confusion with other macros), and only with gcc 2.96 can you use
+ * the attribute __malloc__.  2.96 is very old, so don't bother trying to get
+ * the other attributes to work with GCC versions between 2.7 and 2.96.
+ */
+#ifndef __attribute__
+#    if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 96)
+#        define __attribute__(spec) /* empty */
+#    endif
+#endif
+
+/*
+ * We use __alloc_size__, but it was only available in fairly recent versions
+ * of GCC.  Suppress warnings about the unknown attribute if GCC is too old.
+ * We know that we're GCC at this point, so we can use the GCC variadic macro
+ * extension, which will still work with versions of GCC too old to have C99
+ * variadic macro support.
+ */
+#if !defined(__attribute__) && !defined(__alloc_size__)
+#    if defined(__GNUC__) && !defined(__clang__)
+#        if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 3)
+#            define __alloc_size__(spec, args...) /* empty */
+#        endif
+#    endif
+#endif
+
+/* Suppress __warn_unused_result__ if gcc is too old. */
+#if !defined(__attribute__) && !defined(__warn_unused_result__)
+#    if __GNUC__ < 3 || (__GNUC__ == 3 && __GNUC_MINOR__ < 4)
+#        define __warn_unused_result__ /* empty */
+#    endif
+#endif
+
+/*
+ * Suppress the argument to __malloc__ in Clang (not supported in at least
+ * version 13) and GCC versions prior to 11.
+ */
+#if !defined(__attribute__) && !defined(__malloc__)
+#    if defined(__clang__) || __GNUC__ < 11
+#        define __malloc__(dalloc) __malloc__
+#    endif
+#endif
+
+/*
+ * LLVM and Clang pretend to be GCC but don't support all of the __attribute__
+ * settings that GCC does.  For them, suppress warnings about unknown
+ * attributes on declarations.  This unfortunately will affect the entire
+ * compilation context, but there's no push and pop available.
+ */
+#if !defined(__attribute__) && (defined(__llvm__) || defined(__clang__))
+#    pragma GCC diagnostic ignored "-Wattributes"
+#endif
+
+/* Used for unused parameters to silence gcc warnings. */
+// #define UNUSED __attribute__((__unused__))
+
+/*
+ * BEGIN_DECLS is used at the beginning of declarations so that C++
+ * compilers don't mangle their names.  END_DECLS is used at the end.
+ */
+#undef BEGIN_DECLS
+#undef END_DECLS
+#ifdef __cplusplus
+#    define BEGIN_DECLS extern "C" {
+#    define END_DECLS   }
+#else
+#    define BEGIN_DECLS /* empty */
+#    define END_DECLS   /* empty */
+#endif
+
+#endif /* TAP_MACROS_H */

-- 
2.40.1.606.ga4b1b128d6-goog


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH RFC v2 4/4] unit test: add basic example and build rules
  2023-05-17 23:56 [PATCH RFC v2 0/4] Add an external testing library for unit tests steadmon
                   ` (2 preceding siblings ...)
  2023-05-17 23:56 ` [PATCH RFC v2 3/4] Add C TAP harness steadmon
@ 2023-05-17 23:56 ` steadmon
  2023-05-18 13:32   ` Phillip Wood
  2023-06-09 23:25 ` [RFC PATCH v3 0/1] Add a project document for adding unit tests Josh Steadmon
  2023-06-09 23:25 ` [RFC PATCH v3 1/1] unit tests: Add a project plan document Josh Steadmon
  5 siblings, 1 reply; 32+ messages in thread
From: steadmon @ 2023-05-17 23:56 UTC (permalink / raw)
  To: git
  Cc: Josh Steadmon, calvinwan, szeder.dev, phillip.wood123, chooglen,
	avarab, gitster, sandals, Calvin Wan

Integrate a simple strbuf unit test with Git's Makefiles.

You can build and run the unit tests with `make unit-tests` (or just
build them with `make build-unit-tests`). By default we use the basic
test runner from the C-TAP project, but users who prefer prove as a test
runner can set `DEFAULT_UNIT_TEST_TARGET=prove-unit-tests` instead.

We modify the `#include`s in the C TAP libraries so that we can build
them without having to include the t/ directory in our include search
path.

Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Josh Steadmon <steadmon@google.com>
Change-Id: Ie61eafd2bd8f8dc5b30449af1e436889f91da3b7
---
 .gitignore      |  2 ++
 Makefile        | 24 +++++++++++++++++++++++-
 t/Makefile      | 10 ++++++++++
 t/strbuf-test.c | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 t/tap/basic.c   |  2 +-
 t/tap/basic.h   |  2 +-
 6 files changed, 91 insertions(+), 3 deletions(-)

diff --git a/.gitignore b/.gitignore
index e875c59054..464e301345 100644
--- a/.gitignore
+++ b/.gitignore
@@ -245,3 +245,5 @@ Release/
 /git.VC.db
 *.dSYM
 /contrib/buildsystems/out
+/t/runtests
+/t/unit-tests/
diff --git a/Makefile b/Makefile
index 8ee7c7e5a8..aa94e3ba45 100644
--- a/Makefile
+++ b/Makefile
@@ -661,6 +661,7 @@ BUILTIN_OBJS =
 BUILT_INS =
 COMPAT_CFLAGS =
 COMPAT_OBJS =
+CTAP_OBJS =
 XDIFF_OBJS =
 GENERATED_H =
 EXTRA_CPPFLAGS =
@@ -682,6 +683,8 @@ TEST_BUILTINS_OBJS =
 TEST_OBJS =
 TEST_PROGRAMS_NEED_X =
 THIRD_PARTY_SOURCES =
+UNIT_TEST_PROGRAMS =
+UNIT_TEST_DIR = t/unit-tests
 
 # Having this variable in your environment would break pipelines because
 # you cause "cd" to echo its destination to stdout.  It can also take
@@ -1318,6 +1321,10 @@ BUILTIN_OBJS += builtin/verify-tag.o
 BUILTIN_OBJS += builtin/worktree.o
 BUILTIN_OBJS += builtin/write-tree.o
 
+CTAP_OBJS += t/tap/basic.o
+UNIT_TEST_RUNNER = t/runtests
+UNIT_TEST_PROGRAMS += $(UNIT_TEST_DIR)/strbuf-test-t
+
 # THIRD_PARTY_SOURCES is a list of patterns compatible with the
 # $(filter) and $(filter-out) family of functions. They specify source
 # files which are taken from some third-party source where we want to be
@@ -2673,6 +2680,7 @@ OBJECTS += $(TEST_OBJS)
 OBJECTS += $(XDIFF_OBJS)
 OBJECTS += $(FUZZ_OBJS)
 OBJECTS += $(REFTABLE_OBJS) $(REFTABLE_TEST_OBJS)
+OBJECTS += $(CTAP_OBJS)
 
 ifndef NO_CURL
 	OBJECTS += http.o http-walker.o remote-curl.o
@@ -3654,7 +3662,7 @@ clean: profile-clean coverage-clean cocciclean
 	$(RM) $(OBJECTS)
 	$(RM) $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(REFTABLE_TEST_LIB)
 	$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS)
-	$(RM) $(TEST_PROGRAMS)
+	$(RM) $(TEST_PROGRAMS) $(UNIT_TEST_RUNNER) $(UNIT_TEST_PROGRAMS)
 	$(RM) $(FUZZ_PROGRAMS)
 	$(RM) $(SP_OBJ)
 	$(RM) $(HCC)
@@ -3832,3 +3840,17 @@ $(FUZZ_PROGRAMS): all
 		$(XDIFF_OBJS) $(EXTLIBS) git.o $@.o $(LIB_FUZZING_ENGINE) -o $@
 
 fuzz-all: $(FUZZ_PROGRAMS)
+
+$(UNIT_TEST_DIR):
+	$(QUIET)mkdir $(UNIT_TEST_DIR)
+
+$(UNIT_TEST_PROGRAMS): $(UNIT_TEST_DIR) $(CTAP_OBJS) $(GITLIBS)
+	$(QUIET_CC)$(CC) -o $@ t/$(patsubst %-t,%,$(notdir $@)).c $(CTAP_OBJS) $(LIBS)
+
+$(UNIT_TEST_RUNNER): $(patsubst %,%.c,$(UNIT_TEST_RUNNER))
+	$(QUIET_CC)$(CC) -o $@ $^
+
+.PHONY: build-unit-tests unit-tests
+build-unit-tests: $(UNIT_TEST_PROGRAMS)
+unit-tests: $(UNIT_TEST_PROGRAMS) $(UNIT_TEST_RUNNER)
+	$(MAKE) -C t/ unit-tests
diff --git a/t/Makefile b/t/Makefile
index 3e00cdd801..9df1a4e34b 100644
--- a/t/Makefile
+++ b/t/Makefile
@@ -17,6 +17,7 @@ TAR ?= $(TAR)
 RM ?= rm -f
 PROVE ?= prove
 DEFAULT_TEST_TARGET ?= test
+DEFAULT_UNIT_TEST_TARGET ?= run-unit-tests
 TEST_LINT ?= test-lint
 
 ifdef TEST_OUTPUT_DIRECTORY
@@ -41,6 +42,7 @@ TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
 TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
 CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
 CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
+UNIT_TESTS = $(sort $(wildcard unit-tests/*))
 
 # `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
 # checks all tests in all scripts via a single invocation, so tell individual
@@ -65,6 +67,14 @@ prove: pre-clean check-chainlint $(TEST_LINT)
 $(T):
 	@echo "*** $@ ***"; '$(TEST_SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
 
+unit-tests: $(DEFAULT_UNIT_TEST_TARGET)
+
+run-unit-tests:
+	./runtests $(UNIT_TESTS)
+
+prove-unit-tests:
+	@echo "*** prove - unit tests ***"; $(PROVE) $(GIT_PROVE_OPTS) $(UNIT_TESTS)
+
 pre-clean:
 	$(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'
 
diff --git a/t/strbuf-test.c b/t/strbuf-test.c
new file mode 100644
index 0000000000..8f8d4e11db
--- /dev/null
+++ b/t/strbuf-test.c
@@ -0,0 +1,54 @@
+#include "tap/basic.h"
+
+#include "../git-compat-util.h"
+#include "../strbuf.h"
+
+int strbuf_init_test()
+{
+	struct strbuf *buf = malloc(sizeof(void*));
+	strbuf_init(buf, 0);
+
+	if (buf->buf[0] != '\0')
+		return 0;
+	if (buf->alloc != 0)
+		return 0;
+	if (buf->len != 0)
+		return 0;
+	return 1;
+}
+
+int strbuf_init_test2() {
+	struct strbuf *buf = malloc(sizeof(void*));
+	strbuf_init(buf, 100);
+
+	if (buf->buf[0] != '\0')
+		return 0;
+	if (buf->alloc != 101)
+		return 0;
+	if (buf->len != 0)
+		return 0;
+	return 1;
+}
+
+
+int strbuf_grow_test() {
+	struct strbuf *buf = malloc(sizeof(void*));
+	strbuf_grow(buf, 100);
+
+	if (buf->buf[0] != '\0')
+		return 0;
+	if (buf->alloc != 101)
+		return 0;
+	if (buf->len != 0)
+		return 0;
+	return 1;
+}
+
+int main(void)
+{
+	plan(3);
+	ok(strbuf_init_test(), "strbuf_init initializes properly");
+	ok(strbuf_init_test2(), "strbuf_init with hint initializes properly");
+	ok(strbuf_grow_test(), "strbuf_grow grows properly");
+	return 0;
+}
diff --git a/t/tap/basic.c b/t/tap/basic.c
index 704282b9c1..37c2d6f082 100644
--- a/t/tap/basic.c
+++ b/t/tap/basic.c
@@ -52,7 +52,7 @@
 #include <sys/types.h>
 #include <unistd.h>
 
-#include <tap/basic.h>
+#include "basic.h"
 
 /* Windows provides mkdir and rmdir under different names. */
 #ifdef _WIN32
diff --git a/t/tap/basic.h b/t/tap/basic.h
index afea8cb210..a0c0ef2c87 100644
--- a/t/tap/basic.h
+++ b/t/tap/basic.h
@@ -36,7 +36,7 @@
 #include <stdarg.h> /* va_list */
 #include <stddef.h> /* size_t */
 #include <stdlib.h> /* free */
-#include <tap/macros.h>
+#include "macros.h"
 
 /*
  * Used for iterating through arrays.  ARRAY_SIZE returns the number of

-- 
2.40.1.606.ga4b1b128d6-goog


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC v2 2/4] unit tests: Add a project plan document
  2023-05-17 23:56 ` [PATCH RFC v2 2/4] unit tests: Add a project plan document steadmon
@ 2023-05-18 13:13   ` Phillip Wood
  2023-05-18 20:15   ` Glen Choo
  1 sibling, 0 replies; 32+ messages in thread
From: Phillip Wood @ 2023-05-18 13:13 UTC (permalink / raw)
  To: steadmon, git; +Cc: calvinwan, szeder.dev, chooglen, avarab, gitster, sandals

On 18/05/2023 00:56, steadmon@google.com wrote:
> Describe what we hope to accomplish by implementing unit tests, and
> explain some open questions and milestones.

Thanks for adding this.

> Change-Id: I182cdc1c15bdd1cbef6ffcf3d216b386f951e9fc
> ---
>   Documentation/Makefile                 |  1 +
>   Documentation/technical/unit-tests.txt | 47 ++++++++++++++++++++++++++++++++++
>   2 files changed, 48 insertions(+)
> 
> diff --git a/Documentation/Makefile b/Documentation/Makefile
> index b629176d7d..3f2383a12c 100644
> --- a/Documentation/Makefile
> +++ b/Documentation/Makefile
> @@ -122,6 +122,7 @@ TECH_DOCS += technical/scalar
>   TECH_DOCS += technical/send-pack-pipeline
>   TECH_DOCS += technical/shallow
>   TECH_DOCS += technical/trivial-merge
> +TECH_DOCS += technical/unit-tests
>   SP_ARTICLES += $(TECH_DOCS)
>   SP_ARTICLES += technical/api-index
>   
> diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
> new file mode 100644
> index 0000000000..7c575e6ef7
> --- /dev/null
> +++ b/Documentation/technical/unit-tests.txt
> @@ -0,0 +1,47 @@
> += Unit Testing
> +
> +In our current testing environment, we spend a significant amount of effort
> +crafting end-to-end tests for error conditions that could easily be captured by
> +unit tests (or we simply forgo some hard-to-setup and rare error conditions).
> +Unit tests additionally provide stability to the codebase and can simplify
> +debugging through isolation. Writing unit tests in pure C, rather than with our
> +current shell/test-tool helper setup, simplifies test setup, simplifies passing
> +data around (no shell-isms required), and reduces testing runtime by not
> +spawning a separate process for every test invocation.
> +
> +Unit testing in C requires a separate testing harness that we ideally would
> +like to be TAP-style and to come with a non-restrictive license.

As we're already using prove as a TAP harness for our existing tests I'd 
prefer not to add another harness unless we really need to. prove allows 
us to run tests in parallel and has options for rerunning only the tests 
that failed last time and for running slow tests first to reduce the 
overall run time.

  I haven't looked at runtests in detail but at a quick glance I'm not 
sure which of those features it supports. I'm also worried about its 
windows compatibility. I see it sets some environment variables that 
some features of the test library require. I'm not sure if we plan to 
use those features, if we do I think we could probably set those paths 
when the tests are compiled.

Are you able to expand on why it needs a non-restrictive license? From a 
technical point of view surely anything that is GPLv2 compatible would 
be fine as that is the license we're already using for our code.

> Fortunately,
> +there already exists a https://github.com/rra/c-tap-harness/[C TAP harness
> +library] with an MIT license (at least for the files needed for our purposes).
> +We might also consider implementing
> +https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[our
> +own TAP harness] just for Git.

If we do decide to go that route I'm very happy for you or one of your 
colleagues to take that patch forward.

> +We believe that a large body of unit tests, living alongside the existing test
> +suite, will improve code quality for the Git project.

This is slightly off-topic and can be addressed later. One thing that 
occurred to me was that if we end up with hundreds of unit files it 
would be good to link them into a single executable as we do with 
test-tool to avoid wasting time and disc space having to link hundreds 
of individual programs. We'd have to figure out how to run the 
individual tests though if we do that.

> +
> +== Open questions
> +
> +=== TAP harness
> +
> +We'll need to decide on a TAP harness. The C TAP library is easy to integrate,
> +but has a few drawbacks:
> +* (copy objections from lore thread)
> +* We may need to carry local patches against C TAP. We'll need to decide how to
> +  manage these. We could vendor the code in and modify them directly, or use a
> +  submodule (but then we'll need to decide on where to host the submodule with
> +  our patches on top).
> +
> +Phillip Wood has also proposed a new implementation of a TAP harness (linked
> +above). While it hasn't been thoroughly reviewed yet, it looks to support a few
> +nice features that C TAP does not, e.g. lazy test plans and skippable tests.

strictly speaking both those are supported in terms of TAP output by 
c-tap-harness but they're not very friendly to use. For me the big 
difference is that my library provides a set of check* macros and 
functions that automatically print diagnostic messages when a check 
fails and the test framework maintains the pass/fail state based on 
those checks.

Best Wishes

Phillip

> +== Milestones
> +
> +* Settle on final TAP harness
> +* Add useful tests of library-ish code
> +* Integrate with CI
> +* Integrate with
> +  https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
> +  work]
> +* Run along with regular `make test` target
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC v2 3/4] Add C TAP harness
  2023-05-17 23:56 ` [PATCH RFC v2 3/4] Add C TAP harness steadmon
@ 2023-05-18 13:15   ` Phillip Wood
  2023-05-18 20:50     ` Josh Steadmon
  0 siblings, 1 reply; 32+ messages in thread
From: Phillip Wood @ 2023-05-18 13:15 UTC (permalink / raw)
  To: steadmon, git
  Cc: calvinwan, szeder.dev, chooglen, avarab, gitster, sandals,
	Calvin Wan, Phillip Wood

On 18/05/2023 00:56, steadmon@google.com wrote:
> From: Calvin Wan <calvinwan@google.com>
> 
> Introduces the C TAP harness from https://github.com/rra/c-tap-harness/
> 
> There is also more complete documentation at
> https://www.eyrie.org/~eagle/software/c-tap-harness/
> 
> Signed-off-by: Calvin Wan <calvinwan@google.com>
> Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>

Is that a mistake? I don't think I've contributed anything to this patch 
(unless you count complaining about it :-/)

Best Wishes

Phillip

> Change-Id: I611e22988e99b9407a4f60effaa7fbdb96ffb115
> ---
>   t/runtests.c   | 1789 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>   t/tap/basic.c  | 1029 ++++++++++++++++++++++++++++++++
>   t/tap/basic.h  |  198 +++++++
>   t/tap/macros.h |  109 ++++
>   4 files changed, 3125 insertions(+)
> 
> diff --git a/t/runtests.c b/t/runtests.c
> new file mode 100644
> index 0000000000..4a55a801a6
> --- /dev/null
> +++ b/t/runtests.c
> @@ -0,0 +1,1789 @@
> +/*
> + * Run a set of tests, reporting results.
> + *
> + * Test suite driver that runs a set of tests implementing a subset of the
> + * Test Anything Protocol (TAP) and reports the results.
> + *
> + * Any bug reports, bug fixes, and improvements are very much welcome and
> + * should be sent to the e-mail address below.  This program is part of C TAP
> + * Harness <https://www.eyrie.org/~eagle/software/c-tap-harness/>.
> + *
> + * Copyright 2000-2001, 2004, 2006-2019, 2022 Russ Allbery <eagle@eyrie.org>
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: MIT
> + */
> +
> +/*
> + * Usage:
> + *
> + *      runtests [-hv] [-b <build-dir>] [-s <source-dir>] -l <test-list>
> + *      runtests [-hv] [-b <build-dir>] [-s <source-dir>] <test> [<test> ...]
> + *      runtests -o [-h] [-b <build-dir>] [-s <source-dir>] <test>
> + *
> + * In the first case, expects a list of executables located in the given file,
> + * one line per executable, possibly followed by a space-separated list of
> + * options.  For each one, runs it as part of a test suite, reporting results.
> + * In the second case, use the same infrastructure, but run only the tests
> + * listed on the command line.
> + *
> + * Test output should start with a line containing the number of tests
> + * (numbered from 1 to this number), optionally preceded by "1..", although
> + * that line may be given anywhere in the output.  Each additional line should
> + * be in the following format:
> + *
> + *      ok <number>
> + *      not ok <number>
> + *      ok <number> # skip
> + *      not ok <number> # todo
> + *
> + * where <number> is the number of the test.  An optional comment is permitted
> + * after the number if preceded by whitespace.  ok indicates success, not ok
> + * indicates failure.  "# skip" and "# todo" are a special cases of a comment,
> + * and must start with exactly that formatting.  They indicate the test was
> + * skipped for some reason (maybe because it doesn't apply to this platform)
> + * or is testing something known to currently fail.  The text following either
> + * "# skip" or "# todo" and whitespace is the reason.
> + *
> + * As a special case, the first line of the output may be in the form:
> + *
> + *      1..0 # skip some reason
> + *
> + * which indicates that this entire test case should be skipped and gives a
> + * reason.
> + *
> + * Any other lines are ignored, although for compliance with the TAP protocol
> + * all lines other than the ones in the above format should be sent to
> + * standard error rather than standard output and start with #.
> + *
> + * This is a subset of TAP as documented in Test::Harness::TAP or
> + * TAP::Parser::Grammar, which comes with Perl.
> + *
> + * If the -o option is given, instead run a single test and display all of its
> + * output.  This is intended for use with failing tests so that the person
> + * running the test suite can get more details about what failed.
> + *
> + * If built with the C preprocessor symbols C_TAP_SOURCE and C_TAP_BUILD
> + * defined, C TAP Harness will export those values in the environment so that
> + * tests can find the source and build directory and will look for tests under
> + * both directories.  These paths can also be set with the -b and -s
> + * command-line options, which will override anything set at build time.
> + *
> + * If the -v option is given, or the C_TAP_VERBOSE environment variable is set,
> + * display the full output of each test as it runs rather than showing a
> + * summary of the results of each test.
> + */
> +
> +/* Required for fdopen(), getopt(), and putenv(). */
> +#if defined(__STRICT_ANSI__) || defined(PEDANTIC)
> +#    ifndef _XOPEN_SOURCE
> +#        define _XOPEN_SOURCE 500
> +#    endif
> +#endif
> +
> +#include <ctype.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <limits.h>
> +#include <stdarg.h>
> +#include <stddef.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <strings.h>
> +#include <sys/stat.h>
> +#include <sys/time.h>
> +#include <sys/types.h>
> +#include <sys/wait.h>
> +#include <time.h>
> +#include <unistd.h>
> +
> +/* sys/time.h must be included before sys/resource.h on some platforms. */
> +#include <sys/resource.h>
> +
> +/* AIX 6.1 (and possibly later) doesn't have WCOREDUMP. */
> +#ifndef WCOREDUMP
> +#    define WCOREDUMP(status) ((unsigned) (status) &0x80)
> +#endif
> +
> +/*
> + * POSIX requires that these be defined in <unistd.h>, but they're not always
> + * available.  If one of them has been defined, all the rest almost certainly
> + * have.
> + */
> +#ifndef STDIN_FILENO
> +#    define STDIN_FILENO  0
> +#    define STDOUT_FILENO 1
> +#    define STDERR_FILENO 2
> +#endif
> +
> +/*
> + * Used for iterating through arrays.  Returns the number of elements in the
> + * array (useful for a < upper bound in a for loop).
> + */
> +#define ARRAY_SIZE(array) (sizeof(array) / sizeof((array)[0]))
> +
> +/*
> + * The source and build versions of the tests directory.  This is used to set
> + * the C_TAP_SOURCE and C_TAP_BUILD environment variables (and the SOURCE and
> + * BUILD environment variables set for backward compatibility) and find test
> + * programs, if set.  Normally, this should be set as part of the build
> + * process to the test subdirectories of $(abs_top_srcdir) and
> + * $(abs_top_builddir) respectively.
> + */
> +#ifndef C_TAP_SOURCE
> +#    define C_TAP_SOURCE NULL
> +#endif
> +#ifndef C_TAP_BUILD
> +#    define C_TAP_BUILD NULL
> +#endif
> +
> +/* Test status codes. */
> +enum test_status {
> +    TEST_FAIL,
> +    TEST_PASS,
> +    TEST_SKIP,
> +    TEST_INVALID
> +};
> +
> +/* Really, just a boolean, but this is more self-documenting. */
> +enum test_verbose {
> +    CONCISE = 0,
> +    VERBOSE = 1
> +};
> +
> +/* Indicates the state of our plan. */
> +enum plan_status {
> +    PLAN_INIT,    /* Nothing seen yet. */
> +    PLAN_FIRST,   /* Plan seen before any tests. */
> +    PLAN_PENDING, /* Test seen and no plan yet. */
> +    PLAN_FINAL    /* Plan seen after some tests. */
> +};
> +
> +/* Error exit statuses for test processes. */
> +#define CHILDERR_DUP    100 /* Couldn't redirect stderr or stdout. */
> +#define CHILDERR_EXEC   101 /* Couldn't exec child process. */
> +#define CHILDERR_STDIN  102 /* Couldn't open stdin file. */
> +#define CHILDERR_STDERR 103 /* Couldn't open stderr file. */
> +
> +/* Structure to hold data for a set of tests. */
> +struct testset {
> +    char *file;                /* The file name of the test. */
> +    char **command;            /* The argv vector to run the command. */
> +    enum plan_status plan;     /* The status of our plan. */
> +    unsigned long count;       /* Expected count of tests. */
> +    unsigned long current;     /* The last seen test number. */
> +    unsigned int length;       /* The length of the last status message. */
> +    unsigned long passed;      /* Count of passing tests. */
> +    unsigned long failed;      /* Count of failing lists. */
> +    unsigned long skipped;     /* Count of skipped tests (passed). */
> +    unsigned long allocated;   /* The size of the results table. */
> +    enum test_status *results; /* Table of results by test number. */
> +    unsigned int aborted;      /* Whether the set was aborted. */
> +    unsigned int reported;     /* Whether the results were reported. */
> +    int status;                /* The exit status of the test. */
> +    unsigned int all_skipped;  /* Whether all tests were skipped. */
> +    char *reason;              /* Why all tests were skipped. */
> +};
> +
> +/* Structure to hold a linked list of test sets. */
> +struct testlist {
> +    struct testset *ts;
> +    struct testlist *next;
> +};
> +
> +/*
> + * Usage message.  Should be used as a printf format with four arguments: the
> + * path to runtests, given three times, and the usage_description.  This is
> + * split into variables to satisfy the pedantic ISO C90 limit on strings.
> + */
> +static const char usage_message[] = "\
> +Usage: %s [-hv] [-b <build-dir>] [-s <source-dir>] <test> ...\n\
> +       %s [-hv] [-b <build-dir>] [-s <source-dir>] -l <test-list>\n\
> +       %s -o [-h] [-b <build-dir>] [-s <source-dir>] <test>\n\
> +\n\
> +Options:\n\
> +    -b <build-dir>      Set the build directory to <build-dir>\n\
> +%s";
> +static const char usage_extra[] = "\
> +    -l <list>           Take the list of tests to run from <test-list>\n\
> +    -o                  Run a single test rather than a list of tests\n\
> +    -s <source-dir>     Set the source directory to <source-dir>\n\
> +    -v                  Show the full output of each test\n\
> +\n\
> +runtests normally runs each test listed on the command line.  With the -l\n\
> +option, it instead runs every test listed in a file.  With the -o option,\n\
> +it instead runs a single test and shows its complete output.\n";
> +
> +/*
> + * Header used for test output.  %s is replaced by the file name of the list
> + * of tests.
> + */
> +static const char banner[] = "\n\
> +Running all tests listed in %s.  If any tests fail, run the failing\n\
> +test program with runtests -o to see more details.\n\n";
> +
> +/* Header for reports of failed tests. */
> +static const char header[] = "\n\
> +Failed Set                 Fail/Total (%) Skip Stat  Failing Tests\n\
> +-------------------------- -------------- ---- ----  ------------------------";
> +
> +/* Include the file name and line number in malloc failures. */
> +#define xcalloc(n, type) \
> +    ((type *) x_calloc((n), sizeof(type), __FILE__, __LINE__))
> +#define xmalloc(size)     ((char *) x_malloc((size), __FILE__, __LINE__))
> +#define xstrdup(p)        x_strdup((p), __FILE__, __LINE__)
> +#define xstrndup(p, size) x_strndup((p), (size), __FILE__, __LINE__)
> +#define xreallocarray(p, n, type) \
> +    ((type *) x_reallocarray((p), (n), sizeof(type), __FILE__, __LINE__))
> +
> +/*
> + * __attribute__ is available in gcc 2.5 and later, but only with gcc 2.7
> + * could you use the __format__ form of the attributes, which is what we use
> + * (to avoid confusion with other macros).
> + */
> +#ifndef __attribute__
> +#    if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 7)
> +#        define __attribute__(spec) /* empty */
> +#    endif
> +#endif
> +
> +/*
> + * We use __alloc_size__, but it was only available in fairly recent versions
> + * of GCC.  Suppress warnings about the unknown attribute if GCC is too old.
> + * We know that we're GCC at this point, so we can use the GCC variadic macro
> + * extension, which will still work with versions of GCC too old to have C99
> + * variadic macro support.
> + */
> +#if !defined(__attribute__) && !defined(__alloc_size__)
> +#    if defined(__GNUC__) && !defined(__clang__)
> +#        if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 3)
> +#            define __alloc_size__(spec, args...) /* empty */
> +#        endif
> +#    endif
> +#endif
> +
> +/*
> + * Suppress the argument to __malloc__ in Clang (not supported in at least
> + * version 13) and GCC versions prior to 11.
> + */
> +#if !defined(__attribute__) && !defined(__malloc__)
> +#    if defined(__clang__) || __GNUC__ < 11
> +#        define __malloc__(dalloc) __malloc__
> +#    endif
> +#endif
> +
> +/*
> + * LLVM and Clang pretend to be GCC but don't support all of the __attribute__
> + * settings that GCC does.  For them, suppress warnings about unknown
> + * attributes on declarations.  This unfortunately will affect the entire
> + * compilation context, but there's no push and pop available.
> + */
> +#if !defined(__attribute__) && (defined(__llvm__) || defined(__clang__))
> +#    pragma GCC diagnostic ignored "-Wattributes"
> +#endif
> +
> +/* Declare internal functions that benefit from compiler attributes. */
> +static void die(const char *, ...)
> +    __attribute__((__nonnull__, __noreturn__, __format__(printf, 1, 2)));
> +static void sysdie(const char *, ...)
> +    __attribute__((__nonnull__, __noreturn__, __format__(printf, 1, 2)));
> +static void *x_calloc(size_t, size_t, const char *, int)
> +    __attribute__((__alloc_size__(1, 2), __malloc__(free), __nonnull__));
> +static void *x_malloc(size_t, const char *, int)
> +    __attribute__((__alloc_size__(1), __malloc__(free), __nonnull__));
> +static void *x_reallocarray(void *, size_t, size_t, const char *, int)
> +    __attribute__((__alloc_size__(2, 3), __malloc__(free), __nonnull__(4)));
> +static char *x_strdup(const char *, const char *, int)
> +    __attribute__((__malloc__(free), __nonnull__));
> +static char *x_strndup(const char *, size_t, const char *, int)
> +    __attribute__((__malloc__(free), __nonnull__));
> +
> +
> +/*
> + * Report a fatal error and exit.
> + */
> +static void
> +die(const char *format, ...)
> +{
> +    va_list args;
> +
> +    fflush(stdout);
> +    fprintf(stderr, "runtests: ");
> +    va_start(args, format);
> +    vfprintf(stderr, format, args);
> +    va_end(args);
> +    fprintf(stderr, "\n");
> +    exit(1);
> +}
> +
> +
> +/*
> + * Report a fatal error, including the results of strerror, and exit.
> + */
> +static void
> +sysdie(const char *format, ...)
> +{
> +    int oerrno;
> +    va_list args;
> +
> +    oerrno = errno;
> +    fflush(stdout);
> +    fprintf(stderr, "runtests: ");
> +    va_start(args, format);
> +    vfprintf(stderr, format, args);
> +    va_end(args);
> +    fprintf(stderr, ": %s\n", strerror(oerrno));
> +    exit(1);
> +}
> +
> +
> +/*
> + * Allocate zeroed memory, reporting a fatal error and exiting on failure.
> + */
> +static void *
> +x_calloc(size_t n, size_t size, const char *file, int line)
> +{
> +    void *p;
> +
> +    n = (n > 0) ? n : 1;
> +    size = (size > 0) ? size : 1;
> +    p = calloc(n, size);
> +    if (p == NULL)
> +        sysdie("failed to calloc %lu bytes at %s line %d",
> +               (unsigned long) size, file, line);
> +    return p;
> +}
> +
> +
> +/*
> + * Allocate memory, reporting a fatal error and exiting on failure.
> + */
> +static void *
> +x_malloc(size_t size, const char *file, int line)
> +{
> +    void *p;
> +
> +    p = malloc(size);
> +    if (p == NULL)
> +        sysdie("failed to malloc %lu bytes at %s line %d",
> +               (unsigned long) size, file, line);
> +    return p;
> +}
> +
> +
> +/*
> + * Reallocate memory, reporting a fatal error and exiting on failure.
> + *
> + * We should technically use SIZE_MAX here for the overflow check, but
> + * SIZE_MAX is C99 and we're only assuming C89 + SUSv3, which does not
> + * guarantee that it exists.  They do guarantee that UINT_MAX exists, and we
> + * can assume that UINT_MAX <= SIZE_MAX.  And we should not be allocating
> + * anything anywhere near that large.
> + *
> + * (In theory, C89 and C99 permit size_t to be smaller than unsigned int, but
> + * I disbelieve in the existence of such systems and they will have to cope
> + * without overflow checks.)
> + */
> +static void *
> +x_reallocarray(void *p, size_t n, size_t size, const char *file, int line)
> +{
> +    n = (n > 0) ? n : 1;
> +    size = (size > 0) ? size : 1;
> +
> +    if (n > 0 && UINT_MAX / n <= size)
> +        sysdie("realloc too large at %s line %d", file, line);
> +    p = realloc(p, n * size);
> +    if (p == NULL)
> +        sysdie("failed to realloc %lu bytes at %s line %d",
> +               (unsigned long) (n * size), file, line);
> +    return p;
> +}
> +
> +
> +/*
> + * Copy a string, reporting a fatal error and exiting on failure.
> + */
> +static char *
> +x_strdup(const char *s, const char *file, int line)
> +{
> +    char *p;
> +    size_t len;
> +
> +    len = strlen(s) + 1;
> +    p = (char *) malloc(len);
> +    if (p == NULL)
> +        sysdie("failed to strdup %lu bytes at %s line %d", (unsigned long) len,
> +               file, line);
> +    memcpy(p, s, len);
> +    return p;
> +}
> +
> +
> +/*
> + * Copy the first n characters of a string, reporting a fatal error and
> + * existing on failure.
> + *
> + * Avoid using the system strndup function since it may not exist (on Mac OS
> + * X, for example), and there's no need to introduce another portability
> + * requirement.
> + */
> +char *
> +x_strndup(const char *s, size_t size, const char *file, int line)
> +{
> +    const char *p;
> +    size_t len;
> +    char *copy;
> +
> +    /* Don't assume that the source string is nul-terminated. */
> +    for (p = s; (size_t) (p - s) < size && *p != '\0'; p++)
> +        ;
> +    len = (size_t) (p - s);
> +    copy = (char *) malloc(len + 1);
> +    if (copy == NULL)
> +        sysdie("failed to strndup %lu bytes at %s line %d",
> +               (unsigned long) len, file, line);
> +    memcpy(copy, s, len);
> +    copy[len] = '\0';
> +    return copy;
> +}
> +
> +
> +/*
> + * Form a new string by concatenating multiple strings.  The arguments must be
> + * terminated by (const char *) 0.
> + *
> + * This function only exists because we can't assume asprintf.  We can't
> + * simulate asprintf with snprintf because we're only assuming SUSv3, which
> + * does not require that snprintf with a NULL buffer return the required
> + * length.  When those constraints are relaxed, this should be ripped out and
> + * replaced with asprintf or a more trivial replacement with snprintf.
> + */
> +static char *
> +concat(const char *first, ...)
> +{
> +    va_list args;
> +    char *result;
> +    const char *string;
> +    size_t offset;
> +    size_t length = 0;
> +
> +    /*
> +     * Find the total memory required.  Ensure we don't overflow length.  We
> +     * aren't guaranteed to have SIZE_MAX, so use UINT_MAX as an acceptable
> +     * substitute (see the x_nrealloc comments).
> +     */
> +    va_start(args, first);
> +    for (string = first; string != NULL; string = va_arg(args, const char *)) {
> +        if (length >= UINT_MAX - strlen(string)) {
> +            errno = EINVAL;
> +            sysdie("strings too long in concat");
> +        }
> +        length += strlen(string);
> +    }
> +    va_end(args);
> +    length++;
> +
> +    /* Create the string. */
> +    result = xmalloc(length);
> +    va_start(args, first);
> +    offset = 0;
> +    for (string = first; string != NULL; string = va_arg(args, const char *)) {
> +        memcpy(result + offset, string, strlen(string));
> +        offset += strlen(string);
> +    }
> +    va_end(args);
> +    result[offset] = '\0';
> +    return result;
> +}
> +
> +
> +/*
> + * Given a struct timeval, return the number of seconds it represents as a
> + * double.  Use difftime() to convert a time_t to a double.
> + */
> +static double
> +tv_seconds(const struct timeval *tv)
> +{
> +    return difftime(tv->tv_sec, 0) + (double) tv->tv_usec * 1e-6;
> +}
> +
> +
> +/*
> + * Given two struct timevals, return the difference in seconds.
> + */
> +static double
> +tv_diff(const struct timeval *tv1, const struct timeval *tv0)
> +{
> +    return tv_seconds(tv1) - tv_seconds(tv0);
> +}
> +
> +
> +/*
> + * Given two struct timevals, return the sum in seconds as a double.
> + */
> +static double
> +tv_sum(const struct timeval *tv1, const struct timeval *tv2)
> +{
> +    return tv_seconds(tv1) + tv_seconds(tv2);
> +}
> +
> +
> +/*
> + * Given a pointer to a string, skip any leading whitespace and return a
> + * pointer to the first non-whitespace character.
> + */
> +static const char *
> +skip_whitespace(const char *p)
> +{
> +    while (isspace((unsigned char) (*p)))
> +        p++;
> +    return p;
> +}
> +
> +
> +/*
> + * Given a pointer to a string, skip any non-whitespace characters and return
> + * a pointer to the first whitespace character, or to the end of the string.
> + */
> +static const char *
> +skip_non_whitespace(const char *p)
> +{
> +    while (*p != '\0' && !isspace((unsigned char) (*p)))
> +        p++;
> +    return p;
> +}
> +
> +
> +/*
> + * Start a program, connecting its stdout to a pipe on our end and its stderr
> + * to /dev/null, and storing the file descriptor to read from in the two
> + * argument.  Returns the PID of the new process.  Errors are fatal.
> + */
> +static pid_t
> +test_start(char *const *command, int *fd)
> +{
> +    int fds[2], infd, errfd;
> +    pid_t child;
> +
> +    /* Create a pipe used to capture the output from the test program. */
> +    if (pipe(fds) == -1) {
> +        puts("ABORTED");
> +        fflush(stdout);
> +        sysdie("can't create pipe");
> +    }
> +
> +    /* Fork a child process, massage the file descriptors, and exec. */
> +    child = fork();
> +    switch (child) {
> +    case -1:
> +        puts("ABORTED");
> +        fflush(stdout);
> +        sysdie("can't fork");
> +
> +    /* In the child.  Set up our standard output. */
> +    case 0:
> +        close(fds[0]);
> +        close(STDOUT_FILENO);
> +        if (dup2(fds[1], STDOUT_FILENO) < 0)
> +            _exit(CHILDERR_DUP);
> +        close(fds[1]);
> +
> +        /* Point standard input at /dev/null. */
> +        close(STDIN_FILENO);
> +        infd = open("/dev/null", O_RDONLY);
> +        if (infd < 0)
> +            _exit(CHILDERR_STDIN);
> +        if (infd != STDIN_FILENO) {
> +            if (dup2(infd, STDIN_FILENO) < 0)
> +                _exit(CHILDERR_DUP);
> +            close(infd);
> +        }
> +
> +        /* Point standard error at /dev/null. */
> +        close(STDERR_FILENO);
> +        errfd = open("/dev/null", O_WRONLY);
> +        if (errfd < 0)
> +            _exit(CHILDERR_STDERR);
> +        if (errfd != STDERR_FILENO) {
> +            if (dup2(errfd, STDERR_FILENO) < 0)
> +                _exit(CHILDERR_DUP);
> +            close(errfd);
> +        }
> +
> +        /* Now, exec our process. */
> +        if (execv(command[0], command) == -1)
> +            _exit(CHILDERR_EXEC);
> +        break;
> +
> +    /* In parent.  Close the extra file descriptor. */
> +    default:
> +        close(fds[1]);
> +        break;
> +    }
> +    *fd = fds[0];
> +    return child;
> +}
> +
> +
> +/*
> + * Back up over the output saying what test we were executing.
> + */
> +static void
> +test_backspace(struct testset *ts)
> +{
> +    unsigned int i;
> +
> +    if (!isatty(STDOUT_FILENO))
> +        return;
> +    for (i = 0; i < ts->length; i++)
> +        putchar('\b');
> +    for (i = 0; i < ts->length; i++)
> +        putchar(' ');
> +    for (i = 0; i < ts->length; i++)
> +        putchar('\b');
> +    ts->length = 0;
> +}
> +
> +
> +/*
> + * Allocate or resize the array of test results to be large enough to contain
> + * the test number in.
> + */
> +static void
> +resize_results(struct testset *ts, unsigned long n)
> +{
> +    unsigned long i;
> +    size_t s;
> +
> +    /* If there's already enough space, return quickly. */
> +    if (n <= ts->allocated)
> +        return;
> +
> +    /*
> +     * If no space has been allocated, do the initial allocation.  Otherwise,
> +     * resize.  Start with 32 test cases and then add 1024 with each resize to
> +     * try to reduce the number of reallocations.
> +     */
> +    if (ts->allocated == 0) {
> +        s = (n > 32) ? n : 32;
> +        ts->results = xcalloc(s, enum test_status);
> +    } else {
> +        s = (n > ts->allocated + 1024) ? n : ts->allocated + 1024;
> +        ts->results = xreallocarray(ts->results, s, enum test_status);
> +    }
> +
> +    /* Set the results for the newly-allocated test array. */
> +    for (i = ts->allocated; i < s; i++)
> +        ts->results[i] = TEST_INVALID;
> +    ts->allocated = s;
> +}
> +
> +
> +/*
> + * Report an invalid test number and set the appropriate flags.  Pulled into a
> + * separate function since we do this in several places.
> + */
> +static void
> +invalid_test_number(struct testset *ts, long n, enum test_verbose verbose)
> +{
> +    if (!verbose)
> +        test_backspace(ts);
> +    printf("ABORTED (invalid test number %ld)\n", n);
> +    ts->aborted = 1;
> +    ts->reported = 1;
> +}
> +
> +
> +/*
> + * Read the plan line of test output, which should contain the range of test
> + * numbers.  We may initialize the testset structure here if we haven't yet
> + * seen a test.  Return true if initialization succeeded and the test should
> + * continue, false otherwise.
> + */
> +static int
> +test_plan(const char *line, struct testset *ts, enum test_verbose verbose)
> +{
> +    long n;
> +
> +    /*
> +     * Accept a plan without the leading 1.. for compatibility with older
> +     * versions of runtests.  This will only be allowed if we've not yet seen
> +     * a test result.
> +     */
> +    line = skip_whitespace(line);
> +    if (strncmp(line, "1..", 3) == 0)
> +        line += 3;
> +
> +    /*
> +     * Get the count and check it for validity.
> +     *
> +     * If we have something of the form "1..0 # skip foo", the whole file was
> +     * skipped; record that.  If we do skip the whole file, zero out all of
> +     * our statistics, since they're no longer relevant.
> +     *
> +     * strtol is called with a second argument to advance the line pointer
> +     * past the count to make it simpler to detect the # skip case.
> +     */
> +    n = strtol(line, (char **) &line, 10);
> +    if (n == 0) {
> +        line = skip_whitespace(line);
> +        if (*line == '#') {
> +            line = skip_whitespace(line + 1);
> +            if (strncasecmp(line, "skip", 4) == 0) {
> +                line = skip_whitespace(line + 4);
> +                if (*line != '\0') {
> +                    ts->reason = xstrdup(line);
> +                    ts->reason[strlen(ts->reason) - 1] = '\0';
> +                }
> +                ts->all_skipped = 1;
> +                ts->aborted = 1;
> +                ts->count = 0;
> +                ts->passed = 0;
> +                ts->skipped = 0;
> +                ts->failed = 0;
> +                return 0;
> +            }
> +        }
> +    }
> +    if (n <= 0) {
> +        puts("ABORTED (invalid test count)");
> +        ts->aborted = 1;
> +        ts->reported = 1;
> +        return 0;
> +    }
> +
> +    /*
> +     * If we are doing lazy planning, check the plan against the largest test
> +     * number that we saw and fail now if we saw a check outside the plan
> +     * range.
> +     */
> +    if (ts->plan == PLAN_PENDING && (unsigned long) n < ts->count) {
> +        invalid_test_number(ts, (long) ts->count, verbose);
> +        return 0;
> +    }
> +
> +    /*
> +     * Otherwise, allocated or resize the results if needed and update count,
> +     * and then record that we've seen a plan.
> +     */
> +    resize_results(ts, (unsigned long) n);
> +    ts->count = (unsigned long) n;
> +    if (ts->plan == PLAN_INIT)
> +        ts->plan = PLAN_FIRST;
> +    else if (ts->plan == PLAN_PENDING)
> +        ts->plan = PLAN_FINAL;
> +    return 1;
> +}
> +
> +
> +/*
> + * Given a single line of output from a test, parse it and return the success
> + * status of that test.  Anything printed to stdout not matching the form
> + * /^(not )?ok \d+/ is ignored.  Sets ts->current to the test number that just
> + * reported status.
> + */
> +static void
> +test_checkline(const char *line, struct testset *ts, enum test_verbose verbose)
> +{
> +    enum test_status status = TEST_PASS;
> +    const char *bail;
> +    char *end;
> +    long number;
> +    unsigned long current;
> +    int outlen;
> +
> +    /* Before anything, check for a test abort. */
> +    bail = strstr(line, "Bail out!");
> +    if (bail != NULL) {
> +        bail = skip_whitespace(bail + strlen("Bail out!"));
> +        if (*bail != '\0') {
> +            size_t length;
> +
> +            length = strlen(bail);
> +            if (bail[length - 1] == '\n')
> +                length--;
> +            if (!verbose)
> +                test_backspace(ts);
> +            printf("ABORTED (%.*s)\n", (int) length, bail);
> +            ts->reported = 1;
> +        }
> +        ts->aborted = 1;
> +        return;
> +    }
> +
> +    /*
> +     * If the given line isn't newline-terminated, it was too big for an
> +     * fgets(), which means ignore it.
> +     */
> +    if (line[strlen(line) - 1] != '\n')
> +        return;
> +
> +    /* If the line begins with a hash mark, ignore it. */
> +    if (line[0] == '#')
> +        return;
> +
> +    /* If we haven't yet seen a plan, look for one. */
> +    if (ts->plan == PLAN_INIT && isdigit((unsigned char) (*line))) {
> +        if (!test_plan(line, ts, verbose))
> +            return;
> +    } else if (strncmp(line, "1..", 3) == 0) {
> +        if (ts->plan == PLAN_PENDING) {
> +            if (!test_plan(line, ts, verbose))
> +                return;
> +        } else {
> +            if (!verbose)
> +                test_backspace(ts);
> +            puts("ABORTED (multiple plans)");
> +            ts->aborted = 1;
> +            ts->reported = 1;
> +            return;
> +        }
> +    }
> +
> +    /* Parse the line, ignoring something we can't parse. */
> +    if (strncmp(line, "not ", 4) == 0) {
> +        status = TEST_FAIL;
> +        line += 4;
> +    }
> +    if (strncmp(line, "ok", 2) != 0)
> +        return;
> +    line = skip_whitespace(line + 2);
> +    errno = 0;
> +    number = strtol(line, &end, 10);
> +    if (errno != 0 || end == line)
> +        current = ts->current + 1;
> +    else if (number <= 0) {
> +        invalid_test_number(ts, number, verbose);
> +        return;
> +    } else
> +        current = (unsigned long) number;
> +    if (current > ts->count && ts->plan == PLAN_FIRST) {
> +        invalid_test_number(ts, (long) current, verbose);
> +        return;
> +    }
> +
> +    /* We have a valid test result.  Tweak the results array if needed. */
> +    if (ts->plan == PLAN_INIT || ts->plan == PLAN_PENDING) {
> +        ts->plan = PLAN_PENDING;
> +        resize_results(ts, current);
> +        if (current > ts->count)
> +            ts->count = current;
> +    }
> +
> +    /*
> +     * Handle directives.  We should probably do something more interesting
> +     * with unexpected passes of todo tests.
> +     */
> +    while (isdigit((unsigned char) (*line)))
> +        line++;
> +    line = skip_whitespace(line);
> +    if (*line == '#') {
> +        line = skip_whitespace(line + 1);
> +        if (strncasecmp(line, "skip", 4) == 0)
> +            status = TEST_SKIP;
> +        if (strncasecmp(line, "todo", 4) == 0)
> +            status = (status == TEST_FAIL) ? TEST_SKIP : TEST_FAIL;
> +    }
> +
> +    /* Make sure that the test number is in range and not a duplicate. */
> +    if (ts->results[current - 1] != TEST_INVALID) {
> +        if (!verbose)
> +            test_backspace(ts);
> +        printf("ABORTED (duplicate test number %lu)\n", current);
> +        ts->aborted = 1;
> +        ts->reported = 1;
> +        return;
> +    }
> +
> +    /* Good results.  Increment our various counters. */
> +    switch (status) {
> +    case TEST_PASS:
> +        ts->passed++;
> +        break;
> +    case TEST_FAIL:
> +        ts->failed++;
> +        break;
> +    case TEST_SKIP:
> +        ts->skipped++;
> +        break;
> +    case TEST_INVALID:
> +        break;
> +    }
> +    ts->current = current;
> +    ts->results[current - 1] = status;
> +    if (!verbose && isatty(STDOUT_FILENO)) {
> +        test_backspace(ts);
> +        if (ts->plan == PLAN_PENDING)
> +            outlen = printf("%lu/?", current);
> +        else
> +            outlen = printf("%lu/%lu", current, ts->count);
> +        ts->length = (outlen >= 0) ? (unsigned int) outlen : 0;
> +        fflush(stdout);
> +    }
> +}
> +
> +
> +/*
> + * Print out a range of test numbers, returning the number of characters it
> + * took up.  Takes the first number, the last number, the number of characters
> + * already printed on the line, and the limit of number of characters the line
> + * can hold.  Add a comma and a space before the range if chars indicates that
> + * something has already been printed on the line, and print ... instead if
> + * chars plus the space needed would go over the limit (use a limit of 0 to
> + * disable this).
> + */
> +static unsigned int
> +test_print_range(unsigned long first, unsigned long last, unsigned long chars,
> +                 unsigned int limit)
> +{
> +    unsigned int needed = 0;
> +    unsigned long n;
> +
> +    for (n = first; n > 0; n /= 10)
> +        needed++;
> +    if (last > first) {
> +        for (n = last; n > 0; n /= 10)
> +            needed++;
> +        needed++;
> +    }
> +    if (chars > 0)
> +        needed += 2;
> +    if (limit > 0 && chars + needed > limit) {
> +        needed = 0;
> +        if (chars <= limit) {
> +            if (chars > 0) {
> +                printf(", ");
> +                needed += 2;
> +            }
> +            printf("...");
> +            needed += 3;
> +        }
> +    } else {
> +        if (chars > 0)
> +            printf(", ");
> +        if (last > first)
> +            printf("%lu-", first);
> +        printf("%lu", last);
> +    }
> +    return needed;
> +}
> +
> +
> +/*
> + * Summarize a single test set.  The second argument is 0 if the set exited
> + * cleanly, a positive integer representing the exit status if it exited
> + * with a non-zero status, and a negative integer representing the signal
> + * that terminated it if it was killed by a signal.
> + */
> +static void
> +test_summarize(struct testset *ts, int status)
> +{
> +    unsigned long i;
> +    unsigned long missing = 0;
> +    unsigned long failed = 0;
> +    unsigned long first = 0;
> +    unsigned long last = 0;
> +
> +    if (ts->aborted) {
> +        fputs("ABORTED", stdout);
> +        if (ts->count > 0)
> +            printf(" (passed %lu/%lu)", ts->passed, ts->count - ts->skipped);
> +    } else {
> +        for (i = 0; i < ts->count; i++) {
> +            if (ts->results[i] == TEST_INVALID) {
> +                if (missing == 0)
> +                    fputs("MISSED ", stdout);
> +                if (first && i == last)
> +                    last = i + 1;
> +                else {
> +                    if (first)
> +                        test_print_range(first, last, missing - 1, 0);
> +                    missing++;
> +                    first = i + 1;
> +                    last = i + 1;
> +                }
> +            }
> +        }
> +        if (first)
> +            test_print_range(first, last, missing - 1, 0);
> +        first = 0;
> +        last = 0;
> +        for (i = 0; i < ts->count; i++) {
> +            if (ts->results[i] == TEST_FAIL) {
> +                if (missing && !failed)
> +                    fputs("; ", stdout);
> +                if (failed == 0)
> +                    fputs("FAILED ", stdout);
> +                if (first && i == last)
> +                    last = i + 1;
> +                else {
> +                    if (first)
> +                        test_print_range(first, last, failed - 1, 0);
> +                    failed++;
> +                    first = i + 1;
> +                    last = i + 1;
> +                }
> +            }
> +        }
> +        if (first)
> +            test_print_range(first, last, failed - 1, 0);
> +        if (!missing && !failed) {
> +            fputs(!status ? "ok" : "dubious", stdout);
> +            if (ts->skipped > 0) {
> +                if (ts->skipped == 1)
> +                    printf(" (skipped %lu test)", ts->skipped);
> +                else
> +                    printf(" (skipped %lu tests)", ts->skipped);
> +            }
> +        }
> +    }
> +    if (status > 0)
> +        printf(" (exit status %d)", status);
> +    else if (status < 0)
> +        printf(" (killed by signal %d%s)", -status,
> +               WCOREDUMP(ts->status) ? ", core dumped" : "");
> +    putchar('\n');
> +}
> +
> +
> +/*
> + * Given a test set, analyze the results, classify the exit status, handle a
> + * few special error messages, and then pass it along to test_summarize() for
> + * the regular output.  Returns true if the test set ran successfully and all
> + * tests passed or were skipped, false otherwise.
> + */
> +static int
> +test_analyze(struct testset *ts)
> +{
> +    if (ts->reported)
> +        return 0;
> +    if (ts->all_skipped) {
> +        if (ts->reason == NULL)
> +            puts("skipped");
> +        else
> +            printf("skipped (%s)\n", ts->reason);
> +        return 1;
> +    } else if (WIFEXITED(ts->status) && WEXITSTATUS(ts->status) != 0) {
> +        switch (WEXITSTATUS(ts->status)) {
> +        case CHILDERR_DUP:
> +            if (!ts->reported)
> +                puts("ABORTED (can't dup file descriptors)");
> +            break;
> +        case CHILDERR_EXEC:
> +            if (!ts->reported)
> +                puts("ABORTED (execution failed -- not found?)");
> +            break;
> +        case CHILDERR_STDIN:
> +        case CHILDERR_STDERR:
> +            if (!ts->reported)
> +                puts("ABORTED (can't open /dev/null)");
> +            break;
> +        default:
> +            test_summarize(ts, WEXITSTATUS(ts->status));
> +            break;
> +        }
> +        return 0;
> +    } else if (WIFSIGNALED(ts->status)) {
> +        test_summarize(ts, -WTERMSIG(ts->status));
> +        return 0;
> +    } else if (ts->plan != PLAN_FIRST && ts->plan != PLAN_FINAL) {
> +        puts("ABORTED (no valid test plan)");
> +        ts->aborted = 1;
> +        return 0;
> +    } else {
> +        test_summarize(ts, 0);
> +        return (ts->failed == 0);
> +    }
> +}
> +
> +
> +/*
> + * Runs a single test set, accumulating and then reporting the results.
> + * Returns true if the test set was successfully run and all tests passed,
> + * false otherwise.
> + */
> +static int
> +test_run(struct testset *ts, enum test_verbose verbose)
> +{
> +    pid_t testpid, child;
> +    int outfd, status;
> +    unsigned long i;
> +    FILE *output;
> +    char buffer[BUFSIZ];
> +
> +    /* Run the test program. */
> +    testpid = test_start(ts->command, &outfd);
> +    output = fdopen(outfd, "r");
> +    if (!output) {
> +        puts("ABORTED");
> +        fflush(stdout);
> +        sysdie("fdopen failed");
> +    }
> +
> +    /*
> +     * Pass each line of output to test_checkline(), and print the line if
> +     * verbosity is requested.
> +     */
> +    while (!ts->aborted && fgets(buffer, sizeof(buffer), output)) {
> +        if (verbose)
> +            printf("%s", buffer);
> +        test_checkline(buffer, ts, verbose);
> +    }
> +    if (ferror(output) || ts->plan == PLAN_INIT)
> +        ts->aborted = 1;
> +    if (!verbose)
> +        test_backspace(ts);
> +
> +    /*
> +     * Consume the rest of the test output, close the output descriptor,
> +     * retrieve the exit status, and pass that information to test_analyze()
> +     * for eventual output.
> +     */
> +    while (fgets(buffer, sizeof(buffer), output))
> +        if (verbose)
> +            printf("%s", buffer);
> +    fclose(output);
> +    child = waitpid(testpid, &ts->status, 0);
> +    if (child == (pid_t) -1) {
> +        if (!ts->reported) {
> +            puts("ABORTED");
> +            fflush(stdout);
> +        }
> +        sysdie("waitpid for %u failed", (unsigned int) testpid);
> +    }
> +    if (ts->all_skipped)
> +        ts->aborted = 0;
> +    status = test_analyze(ts);
> +
> +    /* Convert missing tests to failed tests. */
> +    for (i = 0; i < ts->count; i++) {
> +        if (ts->results[i] == TEST_INVALID) {
> +            ts->failed++;
> +            ts->results[i] = TEST_FAIL;
> +            status = 0;
> +        }
> +    }
> +    return status;
> +}
> +
> +
> +/* Summarize a list of test failures. */
> +static void
> +test_fail_summary(const struct testlist *fails)
> +{
> +    struct testset *ts;
> +    unsigned int chars;
> +    unsigned long i, first, last, total;
> +    double failed;
> +
> +    puts(header);
> +
> +    /* Failed Set                 Fail/Total (%) Skip Stat  Failing (25)
> +       -------------------------- -------------- ---- ----  -------------- */
> +    for (; fails; fails = fails->next) {
> +        ts = fails->ts;
> +        total = ts->count - ts->skipped;
> +        failed = (double) ts->failed;
> +        printf("%-26.26s %4lu/%-4lu %3.0f%% %4lu ", ts->file, ts->failed,
> +               total, total ? (failed * 100.0) / (double) total : 0,
> +               ts->skipped);
> +        if (WIFEXITED(ts->status))
> +            printf("%4d  ", WEXITSTATUS(ts->status));
> +        else
> +            printf("  --  ");
> +        if (ts->aborted) {
> +            puts("aborted");
> +            continue;
> +        }
> +        chars = 0;
> +        first = 0;
> +        last = 0;
> +        for (i = 0; i < ts->count; i++) {
> +            if (ts->results[i] == TEST_FAIL) {
> +                if (first != 0 && i == last)
> +                    last = i + 1;
> +                else {
> +                    if (first != 0)
> +                        chars += test_print_range(first, last, chars, 19);
> +                    first = i + 1;
> +                    last = i + 1;
> +                }
> +            }
> +        }
> +        if (first != 0)
> +            test_print_range(first, last, chars, 19);
> +        putchar('\n');
> +    }
> +}
> +
> +
> +/*
> + * Check whether a given file path is a valid test.  Currently, this checks
> + * whether it is executable and is a regular file.  Returns true or false.
> + */
> +static int
> +is_valid_test(const char *path)
> +{
> +    struct stat st;
> +
> +    if (access(path, X_OK) < 0)
> +        return 0;
> +    if (stat(path, &st) < 0)
> +        return 0;
> +    if (!S_ISREG(st.st_mode))
> +        return 0;
> +    return 1;
> +}
> +
> +
> +/*
> + * Given the name of a test, a pointer to the testset struct, and the source
> + * and build directories, find the test.  We try first relative to the current
> + * directory, then in the build directory (if not NULL), then in the source
> + * directory.  In each of those directories, we first try a "-t" extension and
> + * then a ".t" extension.  When we find an executable program, we return the
> + * path to that program.  If none of those paths are executable, just fill in
> + * the name of the test as is.
> + *
> + * The caller is responsible for freeing the path member of the testset
> + * struct.
> + */
> +static char *
> +find_test(const char *name, const char *source, const char *build)
> +{
> +    char *path = NULL;
> +    const char *bases[3], *suffix, *base;
> +    unsigned int i, j;
> +    const char *suffixes[3] = {"-t", ".t", ""};
> +
> +    /* Possible base directories. */
> +    bases[0] = ".";
> +    bases[1] = build;
> +    bases[2] = source;
> +
> +    /* Try each suffix with each base. */
> +    for (i = 0; i < ARRAY_SIZE(suffixes); i++) {
> +        suffix = suffixes[i];
> +        for (j = 0; j < ARRAY_SIZE(bases); j++) {
> +            base = bases[j];
> +            if (base == NULL)
> +                continue;
> +            path = concat(base, "/", name, suffix, (const char *) 0);
> +            if (is_valid_test(path))
> +                return path;
> +            free(path);
> +            path = NULL;
> +        }
> +    }
> +    if (path == NULL)
> +        path = xstrdup(name);
> +    return path;
> +}
> +
> +
> +/*
> + * Parse a single line of a test list and store the test name and command to
> + * execute it in the given testset struct.
> + *
> + * Normally, each line is just the name of the test, which is located in the
> + * test directory and turned into a command to run.  However, each line may
> + * have whitespace-separated options, which change the command that's run.
> + * Current supported options are:
> + *
> + * valgrind
> + *     Run the test under valgrind if C_TAP_VALGRIND is set.  The contents
> + *     of that environment variable are taken as the valgrind command (with
> + *     options) to run.  The command is parsed with a simple split on
> + *     whitespace and no quoting is supported.
> + *
> + * libtool
> + *     If running under valgrind, use libtool to invoke valgrind.  This avoids
> + *     running valgrind on the wrapper shell script generated by libtool.  If
> + *     set, C_TAP_LIBTOOL must be set to the full path to the libtool program
> + *     to use to run valgrind and thus the test.  Ignored if the test isn't
> + *     being run under valgrind.
> + */
> +static void
> +parse_test_list_line(const char *line, struct testset *ts, const char *source,
> +                     const char *build)
> +{
> +    const char *p, *end, *option, *libtool;
> +    const char *valgrind = NULL;
> +    unsigned int use_libtool = 0;
> +    unsigned int use_valgrind = 0;
> +    size_t len, i;
> +
> +    /* Determine the name of the test. */
> +    p = skip_non_whitespace(line);
> +    ts->file = xstrndup(line, p - line);
> +
> +    /* Check if any test options are set. */
> +    p = skip_whitespace(p);
> +    while (*p != '\0') {
> +        end = skip_non_whitespace(p);
> +        if (strncmp(p, "libtool", end - p) == 0) {
> +            use_libtool = 1;
> +        } else if (strncmp(p, "valgrind", end - p) == 0) {
> +            valgrind = getenv("C_TAP_VALGRIND");
> +            use_valgrind = (valgrind != NULL);
> +        } else {
> +            option = xstrndup(p, end - p);
> +            die("unknown test list option %s", option);
> +        }
> +        p = skip_whitespace(end);
> +    }
> +
> +    /* Construct the argv to run the test.  First, find the length. */
> +    len = 1;
> +    if (use_valgrind && valgrind != NULL) {
> +        p = skip_whitespace(valgrind);
> +        while (*p != '\0') {
> +            len++;
> +            p = skip_whitespace(skip_non_whitespace(p));
> +        }
> +        if (use_libtool)
> +            len += 2;
> +    }
> +
> +    /* Now, build the command. */
> +    ts->command = xcalloc(len + 1, char *);
> +    i = 0;
> +    if (use_valgrind && valgrind != NULL) {
> +        if (use_libtool) {
> +            libtool = getenv("C_TAP_LIBTOOL");
> +            if (libtool == NULL)
> +                die("valgrind with libtool requested, but C_TAP_LIBTOOL is not"
> +                    " set");
> +            ts->command[i++] = xstrdup(libtool);
> +            ts->command[i++] = xstrdup("--mode=execute");
> +        }
> +        p = skip_whitespace(valgrind);
> +        while (*p != '\0') {
> +            end = skip_non_whitespace(p);
> +            ts->command[i++] = xstrndup(p, end - p);
> +            p = skip_whitespace(end);
> +        }
> +    }
> +    if (i != len - 1)
> +        die("internal error while constructing command line");
> +    ts->command[i++] = find_test(ts->file, source, build);
> +    ts->command[i] = NULL;
> +}
> +
> +
> +/*
> + * Read a list of tests from a file, returning the list of tests as a struct
> + * testlist, or NULL if there were no tests (such as a file containing only
> + * comments).  Reports an error to standard error and exits if the list of
> + * tests cannot be read.
> + */
> +static struct testlist *
> +read_test_list(const char *filename, const char *source, const char *build)
> +{
> +    FILE *file;
> +    unsigned int line;
> +    size_t length;
> +    char buffer[BUFSIZ];
> +    const char *start;
> +    struct testlist *listhead, *current;
> +
> +    /* Create the initial container list that will hold our results. */
> +    listhead = xcalloc(1, struct testlist);
> +    current = NULL;
> +
> +    /*
> +     * Open our file of tests to run and read it line by line, creating a new
> +     * struct testlist and struct testset for each line.
> +     */
> +    file = fopen(filename, "r");
> +    if (file == NULL)
> +        sysdie("can't open %s", filename);
> +    line = 0;
> +    while (fgets(buffer, sizeof(buffer), file)) {
> +        line++;
> +        length = strlen(buffer) - 1;
> +        if (buffer[length] != '\n') {
> +            fprintf(stderr, "%s:%u: line too long\n", filename, line);
> +            exit(1);
> +        }
> +        buffer[length] = '\0';
> +
> +        /* Skip comments, leading spaces, and blank lines. */
> +        start = skip_whitespace(buffer);
> +        if (strlen(start) == 0)
> +            continue;
> +        if (start[0] == '#')
> +            continue;
> +
> +        /* Allocate the new testset structure. */
> +        if (current == NULL)
> +            current = listhead;
> +        else {
> +            current->next = xcalloc(1, struct testlist);
> +            current = current->next;
> +        }
> +        current->ts = xcalloc(1, struct testset);
> +        current->ts->plan = PLAN_INIT;
> +
> +        /* Parse the line and store the results in the testset struct. */
> +        parse_test_list_line(start, current->ts, source, build);
> +    }
> +    fclose(file);
> +
> +    /* If there were no tests, current is still NULL. */
> +    if (current == NULL) {
> +        free(listhead);
> +        return NULL;
> +    }
> +
> +    /* Return the results. */
> +    return listhead;
> +}
> +
> +
> +/*
> + * Build a list of tests from command line arguments.  Takes the argv and argc
> + * representing the command line arguments and returns a newly allocated test
> + * list, or NULL if there were no tests.  The caller is responsible for
> + * freeing.
> + */
> +static struct testlist *
> +build_test_list(char *argv[], int argc, const char *source, const char *build)
> +{
> +    int i;
> +    struct testlist *listhead, *current;
> +
> +    /* Create the initial container list that will hold our results. */
> +    listhead = xcalloc(1, struct testlist);
> +    current = NULL;
> +
> +    /* Walk the list of arguments and create test sets for them. */
> +    for (i = 0; i < argc; i++) {
> +        if (current == NULL)
> +            current = listhead;
> +        else {
> +            current->next = xcalloc(1, struct testlist);
> +            current = current->next;
> +        }
> +        current->ts = xcalloc(1, struct testset);
> +        current->ts->plan = PLAN_INIT;
> +        current->ts->file = xstrdup(argv[i]);
> +        current->ts->command = xcalloc(2, char *);
> +        current->ts->command[0] = find_test(current->ts->file, source, build);
> +        current->ts->command[1] = NULL;
> +    }
> +
> +    /* If there were no tests, current is still NULL. */
> +    if (current == NULL) {
> +        free(listhead);
> +        return NULL;
> +    }
> +
> +    /* Return the results. */
> +    return listhead;
> +}
> +
> +
> +/* Free a struct testset. */
> +static void
> +free_testset(struct testset *ts)
> +{
> +    size_t i;
> +
> +    free(ts->file);
> +    for (i = 0; ts->command[i] != NULL; i++)
> +        free(ts->command[i]);
> +    free(ts->command);
> +    free(ts->results);
> +    free(ts->reason);
> +    free(ts);
> +}
> +
> +
> +/*
> + * Run a batch of tests.  Takes two additional parameters: the root of the
> + * source directory and the root of the build directory.  Test programs will
> + * be first searched for in the current directory, then the build directory,
> + * then the source directory.  Returns true iff all tests passed, and always
> + * frees the test list that's passed in.
> + */
> +static int
> +test_batch(struct testlist *tests, enum test_verbose verbose)
> +{
> +    size_t length, i;
> +    size_t longest = 0;
> +    unsigned int count = 0;
> +    struct testset *ts;
> +    struct timeval start, end;
> +    struct rusage stats;
> +    struct testlist *failhead = NULL;
> +    struct testlist *failtail = NULL;
> +    struct testlist *current, *next;
> +    int succeeded;
> +    unsigned long total = 0;
> +    unsigned long passed = 0;
> +    unsigned long skipped = 0;
> +    unsigned long failed = 0;
> +    unsigned long aborted = 0;
> +
> +    /* Walk the list of tests to find the longest name. */
> +    for (current = tests; current != NULL; current = current->next) {
> +        length = strlen(current->ts->file);
> +        if (length > longest)
> +            longest = length;
> +    }
> +
> +    /*
> +     * Add two to longest and round up to the nearest tab stop.  This is how
> +     * wide the column for printing the current test name will be.
> +     */
> +    longest += 2;
> +    if (longest % 8)
> +        longest += 8 - (longest % 8);
> +
> +    /* Start the wall clock timer. */
> +    gettimeofday(&start, NULL);
> +
> +    /* Now, plow through our tests again, running each one. */
> +    for (current = tests; current != NULL; current = current->next) {
> +        ts = current->ts;
> +
> +        /* Print out the name of the test file. */
> +        fputs(ts->file, stdout);
> +        if (verbose)
> +            fputs("\n\n", stdout);
> +        else
> +            for (i = strlen(ts->file); i < longest; i++)
> +                putchar('.');
> +        if (isatty(STDOUT_FILENO))
> +            fflush(stdout);
> +
> +        /* Run the test. */
> +        succeeded = test_run(ts, verbose);
> +        fflush(stdout);
> +        if (verbose)
> +            putchar('\n');
> +
> +        /* Record cumulative statistics. */
> +        aborted += ts->aborted;
> +        total += ts->count + ts->all_skipped;
> +        passed += ts->passed;
> +        skipped += ts->skipped + ts->all_skipped;
> +        failed += ts->failed;
> +        count++;
> +
> +        /* If the test fails, we shuffle it over to the fail list. */
> +        if (!succeeded) {
> +            if (failhead == NULL) {
> +                failhead = xcalloc(1, struct testlist);
> +                failtail = failhead;
> +            } else {
> +                failtail->next = xcalloc(1, struct testlist);
> +                failtail = failtail->next;
> +            }
> +            failtail->ts = ts;
> +            failtail->next = NULL;
> +        }
> +    }
> +    total -= skipped;
> +
> +    /* Stop the timer and get our child resource statistics. */
> +    gettimeofday(&end, NULL);
> +    getrusage(RUSAGE_CHILDREN, &stats);
> +
> +    /* Summarize the failures and free the failure list. */
> +    if (failhead != NULL) {
> +        test_fail_summary(failhead);
> +        while (failhead != NULL) {
> +            next = failhead->next;
> +            free(failhead);
> +            failhead = next;
> +        }
> +    }
> +
> +    /* Free the memory used by the test lists. */
> +    while (tests != NULL) {
> +        next = tests->next;
> +        free_testset(tests->ts);
> +        free(tests);
> +        tests = next;
> +    }
> +
> +    /* Print out the final test summary. */
> +    putchar('\n');
> +    if (aborted != 0) {
> +        if (aborted == 1)
> +            printf("Aborted %lu test set", aborted);
> +        else
> +            printf("Aborted %lu test sets", aborted);
> +        printf(", passed %lu/%lu tests", passed, total);
> +    } else if (failed == 0)
> +        fputs("All tests successful", stdout);
> +    else
> +        printf("Failed %lu/%lu tests, %.2f%% okay", failed, total,
> +               (double) (total - failed) * 100.0 / (double) total);
> +    if (skipped != 0) {
> +        if (skipped == 1)
> +            printf(", %lu test skipped", skipped);
> +        else
> +            printf(", %lu tests skipped", skipped);
> +    }
> +    puts(".");
> +    printf("Files=%u,  Tests=%lu", count, total);
> +    printf(",  %.2f seconds", tv_diff(&end, &start));
> +    printf(" (%.2f usr + %.2f sys = %.2f CPU)\n", tv_seconds(&stats.ru_utime),
> +           tv_seconds(&stats.ru_stime),
> +           tv_sum(&stats.ru_utime, &stats.ru_stime));
> +    return (failed == 0 && aborted == 0);
> +}
> +
> +
> +/*
> + * Run a single test case.  This involves just running the test program after
> + * having done the environment setup and finding the test program.
> + */
> +static void
> +test_single(const char *program, const char *source, const char *build)
> +{
> +    char *path;
> +
> +    path = find_test(program, source, build);
> +    if (execl(path, path, (char *) 0) == -1)
> +        sysdie("cannot exec %s", path);
> +}
> +
> +
> +/*
> + * Main routine.  Set the C_TAP_SOURCE, C_TAP_BUILD, SOURCE, and BUILD
> + * environment variables and then, given a file listing tests, run each test
> + * listed.
> + */
> +int
> +main(int argc, char *argv[])
> +{
> +    int option;
> +    int status = 0;
> +    int single = 0;
> +    enum test_verbose verbose = CONCISE;
> +    char *c_tap_source_env = NULL;
> +    char *c_tap_build_env = NULL;
> +    char *source_env = NULL;
> +    char *build_env = NULL;
> +    const char *program;
> +    const char *shortlist;
> +    const char *list = NULL;
> +    const char *source = C_TAP_SOURCE;
> +    const char *build = C_TAP_BUILD;
> +    struct testlist *tests;
> +
> +    program = argv[0];
> +    while ((option = getopt(argc, argv, "b:hl:os:v")) != EOF) {
> +        switch (option) {
> +        case 'b':
> +            build = optarg;
> +            break;
> +        case 'h':
> +            printf(usage_message, program, program, program, usage_extra);
> +            exit(0);
> +        case 'l':
> +            list = optarg;
> +            break;
> +        case 'o':
> +            single = 1;
> +            break;
> +        case 's':
> +            source = optarg;
> +            break;
> +        case 'v':
> +            verbose = VERBOSE;
> +            break;
> +        default:
> +            exit(1);
> +        }
> +    }
> +    argv += optind;
> +    argc -= optind;
> +    if ((list == NULL && argc < 1) || (list != NULL && argc > 0)) {
> +        fprintf(stderr, usage_message, program, program, program, usage_extra);
> +        exit(1);
> +    }
> +
> +    /*
> +     * If C_TAP_VERBOSE is set in the environment, that also turns on verbose
> +     * mode.
> +     */
> +    if (getenv("C_TAP_VERBOSE") != NULL)
> +        verbose = VERBOSE;
> +
> +    /*
> +     * Set C_TAP_SOURCE and C_TAP_BUILD environment variables.  Also set
> +     * SOURCE and BUILD for backward compatibility, although we're trying to
> +     * migrate to the ones with a C_TAP_* prefix.
> +     */
> +    if (source != NULL) {
> +        c_tap_source_env = concat("C_TAP_SOURCE=", source, (const char *) 0);
> +        if (putenv(c_tap_source_env) != 0)
> +            sysdie("cannot set C_TAP_SOURCE in the environment");
> +        source_env = concat("SOURCE=", source, (const char *) 0);
> +        if (putenv(source_env) != 0)
> +            sysdie("cannot set SOURCE in the environment");
> +    }
> +    if (build != NULL) {
> +        c_tap_build_env = concat("C_TAP_BUILD=", build, (const char *) 0);
> +        if (putenv(c_tap_build_env) != 0)
> +            sysdie("cannot set C_TAP_BUILD in the environment");
> +        build_env = concat("BUILD=", build, (const char *) 0);
> +        if (putenv(build_env) != 0)
> +            sysdie("cannot set BUILD in the environment");
> +    }
> +
> +    /* Run the tests as instructed. */
> +    if (single)
> +        test_single(argv[0], source, build);
> +    else if (list != NULL) {
> +        shortlist = strrchr(list, '/');
> +        if (shortlist == NULL)
> +            shortlist = list;
> +        else
> +            shortlist++;
> +        printf(banner, shortlist);
> +        tests = read_test_list(list, source, build);
> +        status = test_batch(tests, verbose) ? 0 : 1;
> +    } else {
> +        tests = build_test_list(argv, argc, source, build);
> +        status = test_batch(tests, verbose) ? 0 : 1;
> +    }
> +
> +    /* For valgrind cleanliness, free all our memory. */
> +    if (source_env != NULL) {
> +        putenv((char *) "C_TAP_SOURCE=");
> +        putenv((char *) "SOURCE=");
> +        free(c_tap_source_env);
> +        free(source_env);
> +    }
> +    if (build_env != NULL) {
> +        putenv((char *) "C_TAP_BUILD=");
> +        putenv((char *) "BUILD=");
> +        free(c_tap_build_env);
> +        free(build_env);
> +    }
> +    exit(status);
> +}
> \ No newline at end of file
> diff --git a/t/tap/basic.c b/t/tap/basic.c
> new file mode 100644
> index 0000000000..704282b9c1
> --- /dev/null
> +++ b/t/tap/basic.c
> @@ -0,0 +1,1029 @@
> +/*
> + * Some utility routines for writing tests.
> + *
> + * Here are a variety of utility routines for writing tests compatible with
> + * the TAP protocol.  All routines of the form ok() or is*() take a test
> + * number and some number of appropriate arguments, check to be sure the
> + * results match the expected output using the arguments, and print out
> + * something appropriate for that test number.  Other utility routines help in
> + * constructing more complex tests, skipping tests, reporting errors, setting
> + * up the TAP output format, or finding things in the test environment.
> + *
> + * This file is part of C TAP Harness.  The current version plus supporting
> + * documentation is at <https://www.eyrie.org/~eagle/software/c-tap-harness/>.
> + *
> + * Written by Russ Allbery <eagle@eyrie.org>
> + * Copyright 2009-2019, 2021 Russ Allbery <eagle@eyrie.org>
> + * Copyright 2001-2002, 2004-2008, 2011-2014
> + *     The Board of Trustees of the Leland Stanford Junior University
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: MIT
> + */
> +
> +#include <errno.h>
> +#include <limits.h>
> +#include <stdarg.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#ifdef _WIN32
> +#    include <direct.h>
> +#else
> +#    include <sys/stat.h>
> +#endif
> +#include <sys/types.h>
> +#include <unistd.h>
> +
> +#include <tap/basic.h>
> +
> +/* Windows provides mkdir and rmdir under different names. */
> +#ifdef _WIN32
> +#    define mkdir(p, m) _mkdir(p)
> +#    define rmdir(p)    _rmdir(p)
> +#endif
> +
> +/*
> + * The test count.  Always contains the number that will be used for the next
> + * test status.  This is exported to callers of the library.
> + */
> +unsigned long testnum = 1;
> +
> +/*
> + * Status information stored so that we can give a test summary at the end of
> + * the test case.  We store the planned final test and the count of failures.
> + * We can get the highest test count from testnum.
> + */
> +static unsigned long _planned = 0;
> +static unsigned long _failed = 0;
> +
> +/*
> + * Store the PID of the process that called plan() and only summarize
> + * results when that process exits, so as to not misreport results in forked
> + * processes.
> + */
> +static pid_t _process = 0;
> +
> +/*
> + * If true, we're doing lazy planning and will print out the plan based on the
> + * last test number at the end of testing.
> + */
> +static int _lazy = 0;
> +
> +/*
> + * If true, the test was aborted by calling bail().  Currently, this is only
> + * used to ensure that we pass a false value to any cleanup functions even if
> + * all tests to that point have passed.
> + */
> +static int _aborted = 0;
> +
> +/*
> + * Registered cleanup functions.  These are stored as a linked list and run in
> + * registered order by finish when the test program exits.  Each function is
> + * passed a boolean value indicating whether all tests were successful.
> + */
> +struct cleanup_func {
> +    test_cleanup_func func;
> +    test_cleanup_func_with_data func_with_data;
> +    void *data;
> +    struct cleanup_func *next;
> +};
> +static struct cleanup_func *cleanup_funcs = NULL;
> +
> +/*
> + * Registered diag files.  Any output found in these files will be printed out
> + * as if it were passed to diag() before any other output we do.  This allows
> + * background processes to log to a file and have that output interleaved with
> + * the test output.
> + */
> +struct diag_file {
> +    char *name;
> +    FILE *file;
> +    char *buffer;
> +    size_t bufsize;
> +    struct diag_file *next;
> +};
> +static struct diag_file *diag_files = NULL;
> +
> +/*
> + * Print a specified prefix and then the test description.  Handles turning
> + * the argument list into a va_args structure suitable for passing to
> + * print_desc, which has to be done in a macro.  Assumes that format is the
> + * argument immediately before the variadic arguments.
> + */
> +#define PRINT_DESC(prefix, format)  \
> +    do {                            \
> +        if (format != NULL) {       \
> +            va_list args;           \
> +            printf("%s", prefix);   \
> +            va_start(args, format); \
> +            vprintf(format, args);  \
> +            va_end(args);           \
> +        }                           \
> +    } while (0)
> +
> +
> +/*
> + * Form a new string by concatenating multiple strings.  The arguments must be
> + * terminated by (const char *) 0.
> + *
> + * This function only exists because we can't assume asprintf.  We can't
> + * simulate asprintf with snprintf because we're only assuming SUSv3, which
> + * does not require that snprintf with a NULL buffer return the required
> + * length.  When those constraints are relaxed, this should be ripped out and
> + * replaced with asprintf or a more trivial replacement with snprintf.
> + */
> +static char *
> +concat(const char *first, ...)
> +{
> +    va_list args;
> +    char *result;
> +    const char *string;
> +    size_t offset;
> +    size_t length = 0;
> +
> +    /*
> +     * Find the total memory required.  Ensure we don't overflow length.  See
> +     * the comment for breallocarray for why we're using UINT_MAX here.
> +     */
> +    va_start(args, first);
> +    for (string = first; string != NULL; string = va_arg(args, const char *)) {
> +        if (length >= UINT_MAX - strlen(string))
> +            bail("strings too long in concat");
> +        length += strlen(string);
> +    }
> +    va_end(args);
> +    length++;
> +
> +    /* Create the string. */
> +    result = bcalloc_type(length, char);
> +    va_start(args, first);
> +    offset = 0;
> +    for (string = first; string != NULL; string = va_arg(args, const char *)) {
> +        memcpy(result + offset, string, strlen(string));
> +        offset += strlen(string);
> +    }
> +    va_end(args);
> +    result[offset] = '\0';
> +    return result;
> +}
> +
> +
> +/*
> + * Helper function for check_diag_files to handle a single line in a diag
> + * file.
> + *
> + * The general scheme here used is as follows: read one line of output.  If we
> + * get NULL, check for an error.  If there was one, bail out of the test
> + * program; otherwise, return, and the enclosing loop will check for EOF.
> + *
> + * If we get some data, see if it ends in a newline.  If it doesn't end in a
> + * newline, we have one of two cases: our buffer isn't large enough, in which
> + * case we resize it and try again, or we have incomplete data in the file, in
> + * which case we rewind the file and will try again next time.
> + *
> + * Returns a boolean indicating whether the last line was incomplete.
> + */
> +static int
> +handle_diag_file_line(struct diag_file *file, fpos_t where)
> +{
> +    int size;
> +    size_t length;
> +
> +    /* Read the next line from the file. */
> +    size = file->bufsize > INT_MAX ? INT_MAX : (int) file->bufsize;
> +    if (fgets(file->buffer, size, file->file) == NULL) {
> +        if (ferror(file->file))
> +            sysbail("cannot read from %s", file->name);
> +        return 0;
> +    }
> +
> +    /*
> +     * See if the line ends in a newline.  If not, see which error case we
> +     * have.
> +     */
> +    length = strlen(file->buffer);
> +    if (file->buffer[length - 1] != '\n') {
> +        int incomplete = 0;
> +
> +        /* Check whether we ran out of buffer space and resize if so. */
> +        if (length < file->bufsize - 1)
> +            incomplete = 1;
> +        else {
> +            file->bufsize += BUFSIZ;
> +            file->buffer =
> +                breallocarray_type(file->buffer, file->bufsize, char);
> +        }
> +
> +        /*
> +         * On either incomplete lines or too small of a buffer, rewind
> +         * and read the file again (on the next pass, if incomplete).
> +         * It's simpler than trying to double-buffer the file.
> +         */
> +        if (fsetpos(file->file, &where) < 0)
> +            sysbail("cannot set position in %s", file->name);
> +        return incomplete;
> +    }
> +
> +    /* We saw a complete line.  Print it out. */
> +    printf("# %s", file->buffer);
> +    return 0;
> +}
> +
> +
> +/*
> + * Check all registered diag_files for any output.  We only print out the
> + * output if we see a complete line; otherwise, we wait for the next newline.
> + */
> +static void
> +check_diag_files(void)
> +{
> +    struct diag_file *file;
> +    fpos_t where;
> +    int incomplete;
> +
> +    /*
> +     * Walk through each file and read each line of output available.
> +     */
> +    for (file = diag_files; file != NULL; file = file->next) {
> +        clearerr(file->file);
> +
> +        /* Store the current position in case we have to rewind. */
> +        if (fgetpos(file->file, &where) < 0)
> +            sysbail("cannot get position in %s", file->name);
> +
> +        /* Continue until we get EOF or an incomplete line of data. */
> +        incomplete = 0;
> +        while (!feof(file->file) && !incomplete) {
> +            incomplete = handle_diag_file_line(file, where);
> +        }
> +    }
> +}
> +
> +
> +/*
> + * Our exit handler.  Called on completion of the test to report a summary of
> + * results provided we're still in the original process.  This also handles
> + * printing out the plan if we used plan_lazy(), although that's suppressed if
> + * we never ran a test (due to an early bail, for example), and running any
> + * registered cleanup functions.
> + */
> +static void
> +finish(void)
> +{
> +    int success, primary;
> +    struct cleanup_func *current;
> +    unsigned long highest = testnum - 1;
> +    struct diag_file *file, *tmp;
> +
> +    /* Check for pending diag_file output. */
> +    check_diag_files();
> +
> +    /* Free the diag_files. */
> +    file = diag_files;
> +    while (file != NULL) {
> +        tmp = file;
> +        file = file->next;
> +        fclose(tmp->file);
> +        free(tmp->name);
> +        free(tmp->buffer);
> +        free(tmp);
> +    }
> +    diag_files = NULL;
> +
> +    /*
> +     * Determine whether all tests were successful, which is needed before
> +     * calling cleanup functions since we pass that fact to the functions.
> +     */
> +    if (_planned == 0 && _lazy)
> +        _planned = highest;
> +    success = (!_aborted && _planned == highest && _failed == 0);
> +
> +    /*
> +     * If there are any registered cleanup functions, we run those first.  We
> +     * always run them, even if we didn't run a test.  Don't do anything
> +     * except free the diag_files and call cleanup functions if we aren't the
> +     * primary process (the process in which plan or plan_lazy was called),
> +     * and tell the cleanup functions that fact.
> +     */
> +    primary = (_process == 0 || getpid() == _process);
> +    while (cleanup_funcs != NULL) {
> +        if (cleanup_funcs->func_with_data) {
> +            void *data = cleanup_funcs->data;
> +
> +            cleanup_funcs->func_with_data(success, primary, data);
> +        } else {
> +            cleanup_funcs->func(success, primary);
> +        }
> +        current = cleanup_funcs;
> +        cleanup_funcs = cleanup_funcs->next;
> +        free(current);
> +    }
> +    if (!primary)
> +        return;
> +
> +    /* Don't do anything further if we never planned a test. */
> +    if (_planned == 0)
> +        return;
> +
> +    /* If we're aborting due to bail, don't print summaries. */
> +    if (_aborted)
> +        return;
> +
> +    /* Print out the lazy plan if needed. */
> +    fflush(stderr);
> +    if (_lazy)
> +        printf("1..%lu\n", _planned);
> +
> +    /* Print out a summary of the results. */
> +    if (_planned > highest)
> +        diag("Looks like you planned %lu test%s but only ran %lu", _planned,
> +             (_planned > 1 ? "s" : ""), highest);
> +    else if (_planned < highest)
> +        diag("Looks like you planned %lu test%s but ran %lu extra", _planned,
> +             (_planned > 1 ? "s" : ""), highest - _planned);
> +    else if (_failed > 0)
> +        diag("Looks like you failed %lu test%s of %lu", _failed,
> +             (_failed > 1 ? "s" : ""), _planned);
> +    else if (_planned != 1)
> +        diag("All %lu tests successful or skipped", _planned);
> +    else
> +        diag("%lu test successful or skipped", _planned);
> +}
> +
> +
> +/*
> + * Initialize things.  Turns on line buffering on stdout and then prints out
> + * the number of tests in the test suite.  We intentionally don't check for
> + * pending diag_file output here, since it should really come after the plan.
> + */
> +void
> +plan(unsigned long count)
> +{
> +    if (setvbuf(stdout, NULL, _IOLBF, BUFSIZ) != 0)
> +        sysdiag("cannot set stdout to line buffered");
> +    fflush(stderr);
> +    printf("1..%lu\n", count);
> +    testnum = 1;
> +    _planned = count;
> +    _process = getpid();
> +    if (atexit(finish) != 0) {
> +        sysdiag("cannot register exit handler");
> +        diag("cleanups will not be run");
> +    }
> +}
> +
> +
> +/*
> + * Initialize things for lazy planning, where we'll automatically print out a
> + * plan at the end of the program.  Turns on line buffering on stdout as well.
> + */
> +void
> +plan_lazy(void)
> +{
> +    if (setvbuf(stdout, NULL, _IOLBF, BUFSIZ) != 0)
> +        sysdiag("cannot set stdout to line buffered");
> +    testnum = 1;
> +    _process = getpid();
> +    _lazy = 1;
> +    if (atexit(finish) != 0)
> +        sysbail("cannot register exit handler to display plan");
> +}
> +
> +
> +/*
> + * Skip the entire test suite and exits.  Should be called instead of plan(),
> + * not after it, since it prints out a special plan line.  Ignore diag_file
> + * output here, since it's not clear if it's allowed before the plan.
> + */
> +void
> +skip_all(const char *format, ...)
> +{
> +    fflush(stderr);
> +    printf("1..0 # skip");
> +    PRINT_DESC(" ", format);
> +    putchar('\n');
> +    exit(0);
> +}
> +
> +
> +/*
> + * Takes a boolean success value and assumes the test passes if that value
> + * is true and fails if that value is false.
> + */
> +int
> +ok(int success, const char *format, ...)
> +{
> +    fflush(stderr);
> +    check_diag_files();
> +    printf("%sok %lu", success ? "" : "not ", testnum++);
> +    if (!success)
> +        _failed++;
> +    PRINT_DESC(" - ", format);
> +    putchar('\n');
> +    return success;
> +}
> +
> +
> +/*
> + * Same as ok(), but takes the format arguments as a va_list.
> + */
> +int
> +okv(int success, const char *format, va_list args)
> +{
> +    fflush(stderr);
> +    check_diag_files();
> +    printf("%sok %lu", success ? "" : "not ", testnum++);
> +    if (!success)
> +        _failed++;
> +    if (format != NULL) {
> +        printf(" - ");
> +        vprintf(format, args);
> +    }
> +    putchar('\n');
> +    return success;
> +}
> +
> +
> +/*
> + * Skip a test.
> + */
> +void
> +skip(const char *reason, ...)
> +{
> +    fflush(stderr);
> +    check_diag_files();
> +    printf("ok %lu # skip", testnum++);
> +    PRINT_DESC(" ", reason);
> +    putchar('\n');
> +}
> +
> +
> +/*
> + * Report the same status on the next count tests.
> + */
> +int
> +ok_block(unsigned long count, int success, const char *format, ...)
> +{
> +    unsigned long i;
> +
> +    fflush(stderr);
> +    check_diag_files();
> +    for (i = 0; i < count; i++) {
> +        printf("%sok %lu", success ? "" : "not ", testnum++);
> +        if (!success)
> +            _failed++;
> +        PRINT_DESC(" - ", format);
> +        putchar('\n');
> +    }
> +    return success;
> +}
> +
> +
> +/*
> + * Skip the next count tests.
> + */
> +void
> +skip_block(unsigned long count, const char *reason, ...)
> +{
> +    unsigned long i;
> +
> +    fflush(stderr);
> +    check_diag_files();
> +    for (i = 0; i < count; i++) {
> +        printf("ok %lu # skip", testnum++);
> +        PRINT_DESC(" ", reason);
> +        putchar('\n');
> +    }
> +}
> +
> +
> +/*
> + * Takes two boolean values and requires the truth value of both match.
> + */
> +int
> +is_bool(int left, int right, const char *format, ...)
> +{
> +    int success;
> +
> +    fflush(stderr);
> +    check_diag_files();
> +    success = (!!left == !!right);
> +    if (success)
> +        printf("ok %lu", testnum++);
> +    else {
> +        diag(" left: %s", !!left ? "true" : "false");
> +        diag("right: %s", !!right ? "true" : "false");
> +        printf("not ok %lu", testnum++);
> +        _failed++;
> +    }
> +    PRINT_DESC(" - ", format);
> +    putchar('\n');
> +    return success;
> +}
> +
> +
> +/*
> + * Takes two integer values and requires they match.
> + */
> +int
> +is_int(long left, long right, const char *format, ...)
> +{
> +    int success;
> +
> +    fflush(stderr);
> +    check_diag_files();
> +    success = (left == right);
> +    if (success)
> +        printf("ok %lu", testnum++);
> +    else {
> +        diag(" left: %ld", left);
> +        diag("right: %ld", right);
> +        printf("not ok %lu", testnum++);
> +        _failed++;
> +    }
> +    PRINT_DESC(" - ", format);
> +    putchar('\n');
> +    return success;
> +}
> +
> +
> +/*
> + * Takes two strings and requires they match (using strcmp).  NULL arguments
> + * are permitted and handled correctly.
> + */
> +int
> +is_string(const char *left, const char *right, const char *format, ...)
> +{
> +    int success;
> +
> +    fflush(stderr);
> +    check_diag_files();
> +
> +    /* Compare the strings, being careful of NULL. */
> +    if (left == NULL)
> +        success = (right == NULL);
> +    else if (right == NULL)
> +        success = 0;
> +    else
> +        success = (strcmp(left, right) == 0);
> +
> +    /* Report the results. */
> +    if (success)
> +        printf("ok %lu", testnum++);
> +    else {
> +        diag(" left: %s", left == NULL ? "(null)" : left);
> +        diag("right: %s", right == NULL ? "(null)" : right);
> +        printf("not ok %lu", testnum++);
> +        _failed++;
> +    }
> +    PRINT_DESC(" - ", format);
> +    putchar('\n');
> +    return success;
> +}
> +
> +
> +/*
> + * Takes two unsigned longs and requires they match.  On failure, reports them
> + * in hex.
> + */
> +int
> +is_hex(unsigned long left, unsigned long right, const char *format, ...)
> +{
> +    int success;
> +
> +    fflush(stderr);
> +    check_diag_files();
> +    success = (left == right);
> +    if (success)
> +        printf("ok %lu", testnum++);
> +    else {
> +        diag(" left: %lx", (unsigned long) left);
> +        diag("right: %lx", (unsigned long) right);
> +        printf("not ok %lu", testnum++);
> +        _failed++;
> +    }
> +    PRINT_DESC(" - ", format);
> +    putchar('\n');
> +    return success;
> +}
> +
> +
> +/*
> + * Takes pointers to a regions of memory and requires that len bytes from each
> + * match.  Otherwise reports any bytes which didn't match.
> + */
> +int
> +is_blob(const void *left, const void *right, size_t len, const char *format,
> +        ...)
> +{
> +    int success;
> +    size_t i;
> +
> +    fflush(stderr);
> +    check_diag_files();
> +    success = (memcmp(left, right, len) == 0);
> +    if (success)
> +        printf("ok %lu", testnum++);
> +    else {
> +        const unsigned char *left_c = (const unsigned char *) left;
> +        const unsigned char *right_c = (const unsigned char *) right;
> +
> +        for (i = 0; i < len; i++) {
> +            if (left_c[i] != right_c[i])
> +                diag("offset %lu: left %02x, right %02x", (unsigned long) i,
> +                     left_c[i], right_c[i]);
> +        }
> +        printf("not ok %lu", testnum++);
> +        _failed++;
> +    }
> +    PRINT_DESC(" - ", format);
> +    putchar('\n');
> +    return success;
> +}
> +
> +
> +/*
> + * Bail out with an error.
> + */
> +void
> +bail(const char *format, ...)
> +{
> +    va_list args;
> +
> +    _aborted = 1;
> +    fflush(stderr);
> +    check_diag_files();
> +    fflush(stdout);
> +    printf("Bail out! ");
> +    va_start(args, format);
> +    vprintf(format, args);
> +    va_end(args);
> +    printf("\n");
> +    exit(255);
> +}
> +
> +
> +/*
> + * Bail out with an error, appending strerror(errno).
> + */
> +void
> +sysbail(const char *format, ...)
> +{
> +    va_list args;
> +    int oerrno = errno;
> +
> +    _aborted = 1;
> +    fflush(stderr);
> +    check_diag_files();
> +    fflush(stdout);
> +    printf("Bail out! ");
> +    va_start(args, format);
> +    vprintf(format, args);
> +    va_end(args);
> +    printf(": %s\n", strerror(oerrno));
> +    exit(255);
> +}
> +
> +
> +/*
> + * Report a diagnostic to stderr.  Always returns 1 to allow embedding in
> + * compound statements.
> + */
> +int
> +diag(const char *format, ...)
> +{
> +    va_list args;
> +
> +    fflush(stderr);
> +    check_diag_files();
> +    fflush(stdout);
> +    printf("# ");
> +    va_start(args, format);
> +    vprintf(format, args);
> +    va_end(args);
> +    printf("\n");
> +    return 1;
> +}
> +
> +
> +/*
> + * Report a diagnostic to stderr, appending strerror(errno).  Always returns 1
> + * to allow embedding in compound statements.
> + */
> +int
> +sysdiag(const char *format, ...)
> +{
> +    va_list args;
> +    int oerrno = errno;
> +
> +    fflush(stderr);
> +    check_diag_files();
> +    fflush(stdout);
> +    printf("# ");
> +    va_start(args, format);
> +    vprintf(format, args);
> +    va_end(args);
> +    printf(": %s\n", strerror(oerrno));
> +    return 1;
> +}
> +
> +
> +/*
> + * Register a new file for diag_file processing.
> + */
> +void
> +diag_file_add(const char *name)
> +{
> +    struct diag_file *file, *prev;
> +
> +    file = bcalloc_type(1, struct diag_file);
> +    file->name = bstrdup(name);
> +    file->file = fopen(file->name, "r");
> +    if (file->file == NULL)
> +        sysbail("cannot open %s", name);
> +    file->buffer = bcalloc_type(BUFSIZ, char);
> +    file->bufsize = BUFSIZ;
> +    if (diag_files == NULL)
> +        diag_files = file;
> +    else {
> +        for (prev = diag_files; prev->next != NULL; prev = prev->next)
> +            ;
> +        prev->next = file;
> +    }
> +}
> +
> +
> +/*
> + * Remove a file from diag_file processing.  If the file is not found, do
> + * nothing, since there are some situations where it can be removed twice
> + * (such as if it's removed from a cleanup function, since cleanup functions
> + * are called after freeing all the diag_files).
> + */
> +void
> +diag_file_remove(const char *name)
> +{
> +    struct diag_file *file;
> +    struct diag_file **prev = &diag_files;
> +
> +    for (file = diag_files; file != NULL; file = file->next) {
> +        if (strcmp(file->name, name) == 0) {
> +            *prev = file->next;
> +            fclose(file->file);
> +            free(file->name);
> +            free(file->buffer);
> +            free(file);
> +            return;
> +        }
> +        prev = &file->next;
> +    }
> +}
> +
> +
> +/*
> + * Allocate cleared memory, reporting a fatal error with bail on failure.
> + */
> +void *
> +bcalloc(size_t n, size_t size)
> +{
> +    void *p;
> +
> +    p = calloc(n, size);
> +    if (p == NULL)
> +        sysbail("failed to calloc %lu", (unsigned long) (n * size));
> +    return p;
> +}
> +
> +
> +/*
> + * Allocate memory, reporting a fatal error with bail on failure.
> + */
> +void *
> +bmalloc(size_t size)
> +{
> +    void *p;
> +
> +    p = malloc(size);
> +    if (p == NULL)
> +        sysbail("failed to malloc %lu", (unsigned long) size);
> +    return p;
> +}
> +
> +
> +/*
> + * Reallocate memory, reporting a fatal error with bail on failure.
> + */
> +void *
> +brealloc(void *p, size_t size)
> +{
> +    p = realloc(p, size);
> +    if (p == NULL)
> +        sysbail("failed to realloc %lu bytes", (unsigned long) size);
> +    return p;
> +}
> +
> +
> +/*
> + * The same as brealloc, but determine the size by multiplying an element
> + * count by a size, similar to calloc.  The multiplication is checked for
> + * integer overflow.
> + *
> + * We should technically use SIZE_MAX here for the overflow check, but
> + * SIZE_MAX is C99 and we're only assuming C89 + SUSv3, which does not
> + * guarantee that it exists.  They do guarantee that UINT_MAX exists, and we
> + * can assume that UINT_MAX <= SIZE_MAX.
> + *
> + * (In theory, C89 and C99 permit size_t to be smaller than unsigned int, but
> + * I disbelieve in the existence of such systems and they will have to cope
> + * without overflow checks.)
> + */
> +void *
> +breallocarray(void *p, size_t n, size_t size)
> +{
> +    if (n > 0 && UINT_MAX / n <= size)
> +        bail("reallocarray too large");
> +    if (n == 0)
> +        n = 1;
> +    p = realloc(p, n * size);
> +    if (p == NULL)
> +        sysbail("failed to realloc %lu bytes", (unsigned long) (n * size));
> +    return p;
> +}
> +
> +
> +/*
> + * Copy a string, reporting a fatal error with bail on failure.
> + */
> +char *
> +bstrdup(const char *s)
> +{
> +    char *p;
> +    size_t len;
> +
> +    len = strlen(s) + 1;
> +    p = (char *) malloc(len);
> +    if (p == NULL)
> +        sysbail("failed to strdup %lu bytes", (unsigned long) len);
> +    memcpy(p, s, len);
> +    return p;
> +}
> +
> +
> +/*
> + * Copy up to n characters of a string, reporting a fatal error with bail on
> + * failure.  Don't use the system strndup function, since it may not exist and
> + * the TAP library doesn't assume any portability support.
> + */
> +char *
> +bstrndup(const char *s, size_t n)
> +{
> +    const char *p;
> +    char *copy;
> +    size_t length;
> +
> +    /* Don't assume that the source string is nul-terminated. */
> +    for (p = s; (size_t) (p - s) < n && *p != '\0'; p++)
> +        ;
> +    length = (size_t) (p - s);
> +    copy = (char *) malloc(length + 1);
> +    if (copy == NULL)
> +        sysbail("failed to strndup %lu bytes", (unsigned long) length);
> +    memcpy(copy, s, length);
> +    copy[length] = '\0';
> +    return copy;
> +}
> +
> +
> +/*
> + * Locate a test file.  Given the partial path to a file, look under
> + * C_TAP_BUILD and then C_TAP_SOURCE for the file and return the full path to
> + * the file.  Returns NULL if the file doesn't exist.  A non-NULL return
> + * should be freed with test_file_path_free().
> + */
> +char *
> +test_file_path(const char *file)
> +{
> +    char *base;
> +    char *path = NULL;
> +    const char *envs[] = {"C_TAP_BUILD", "C_TAP_SOURCE", NULL};
> +    int i;
> +
> +    for (i = 0; envs[i] != NULL; i++) {
> +        base = getenv(envs[i]);
> +        if (base == NULL)
> +            continue;
> +        path = concat(base, "/", file, (const char *) 0);
> +        if (access(path, R_OK) == 0)
> +            break;
> +        free(path);
> +        path = NULL;
> +    }
> +    return path;
> +}
> +
> +
> +/*
> + * Free a path returned from test_file_path().  This function exists primarily
> + * for Windows, where memory must be freed from the same library domain that
> + * it was allocated from.
> + */
> +void
> +test_file_path_free(char *path)
> +{
> +    free(path);
> +}
> +
> +
> +/*
> + * Create a temporary directory, tmp, under C_TAP_BUILD if set and the current
> + * directory if it does not.  Returns the path to the temporary directory in
> + * newly allocated memory, and calls bail on any failure.  The return value
> + * should be freed with test_tmpdir_free.
> + *
> + * This function uses sprintf because it attempts to be independent of all
> + * other portability layers.  The use immediately after a memory allocation
> + * should be safe without using snprintf or strlcpy/strlcat.
> + */
> +char *
> +test_tmpdir(void)
> +{
> +    const char *build;
> +    char *path = NULL;
> +
> +    build = getenv("C_TAP_BUILD");
> +    if (build == NULL)
> +        build = ".";
> +    path = concat(build, "/tmp", (const char *) 0);
> +    if (access(path, X_OK) < 0)
> +        if (mkdir(path, 0777) < 0)
> +            sysbail("error creating temporary directory %s", path);
> +    return path;
> +}
> +
> +
> +/*
> + * Free a path returned from test_tmpdir() and attempt to remove the
> + * directory.  If we can't delete the directory, don't worry; something else
> + * that hasn't yet cleaned up may still be using it.
> + */
> +void
> +test_tmpdir_free(char *path)
> +{
> +    if (path != NULL)
> +        rmdir(path);
> +    free(path);
> +}
> +
> +static void
> +register_cleanup(test_cleanup_func func,
> +                 test_cleanup_func_with_data func_with_data, void *data)
> +{
> +    struct cleanup_func *cleanup, **last;
> +
> +    cleanup = bcalloc_type(1, struct cleanup_func);
> +    cleanup->func = func;
> +    cleanup->func_with_data = func_with_data;
> +    cleanup->data = data;
> +    cleanup->next = NULL;
> +    last = &cleanup_funcs;
> +    while (*last != NULL)
> +        last = &(*last)->next;
> +    *last = cleanup;
> +}
> +
> +/*
> + * Register a cleanup function that is called when testing ends.  All such
> + * registered functions will be run by finish.
> + */
> +void
> +test_cleanup_register(test_cleanup_func func)
> +{
> +    register_cleanup(func, NULL, NULL);
> +}
> +
> +/*
> + * Same as above, but also allows an opaque pointer to be passed to the cleanup
> + * function.
> + */
> +void
> +test_cleanup_register_with_data(test_cleanup_func_with_data func, void *data)
> +{
> +    register_cleanup(NULL, func, data);
> +}
> diff --git a/t/tap/basic.h b/t/tap/basic.h
> new file mode 100644
> index 0000000000..afea8cb210
> --- /dev/null
> +++ b/t/tap/basic.h
> @@ -0,0 +1,198 @@
> +/*
> + * Basic utility routines for the TAP protocol.
> + *
> + * This file is part of C TAP Harness.  The current version plus supporting
> + * documentation is at <https://www.eyrie.org/~eagle/software/c-tap-harness/>.
> + *
> + * Written by Russ Allbery <eagle@eyrie.org>
> + * Copyright 2009-2019, 2022 Russ Allbery <eagle@eyrie.org>
> + * Copyright 2001-2002, 2004-2008, 2011-2012, 2014
> + *     The Board of Trustees of the Leland Stanford Junior University
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: MIT
> + */
> +
> +#ifndef TAP_BASIC_H
> +#define TAP_BASIC_H 1
> +
> +#include <stdarg.h> /* va_list */
> +#include <stddef.h> /* size_t */
> +#include <stdlib.h> /* free */
> +#include <tap/macros.h>
> +
> +/*
> + * Used for iterating through arrays.  ARRAY_SIZE returns the number of
> + * elements in the array (useful for a < upper bound in a for loop) and
> + * ARRAY_END returns a pointer to the element past the end (ISO C99 makes it
> + * legal to refer to such a pointer as long as it's never dereferenced).
> + */
> +// #define ARRAY_SIZE(array) (sizeof(array) / sizeof((array)[0]))
> +// #define ARRAY_END(array)  (&(array)[ARRAY_SIZE(array)])
> +
> +BEGIN_DECLS
> +
> +/*
> + * The test count.  Always contains the number that will be used for the next
> + * test status.
> + */
> +extern unsigned long testnum;
> +
> +/* Print out the number of tests and set standard output to line buffered. */
> +void plan(unsigned long count);
> +
> +/*
> + * Prepare for lazy planning, in which the plan will be printed automatically
> + * at the end of the test program.
> + */
> +void plan_lazy(void);
> +
> +/* Skip the entire test suite.  Call instead of plan. */
> +void skip_all(const char *format, ...)
> +    __attribute__((__noreturn__, __format__(printf, 1, 2)));
> +
> +/*
> + * Basic reporting functions.  The okv() function is the same as ok() but
> + * takes the test description as a va_list to make it easier to reuse the
> + * reporting infrastructure when writing new tests.  ok() and okv() return the
> + * value of the success argument.
> + */
> +int ok(int success, const char *format, ...)
> +    __attribute__((__format__(printf, 2, 3)));
> +int okv(int success, const char *format, va_list args)
> +    __attribute__((__format__(printf, 2, 0)));
> +void skip(const char *reason, ...) __attribute__((__format__(printf, 1, 2)));
> +
> +/*
> + * Report the same status on, or skip, the next count tests.  ok_block()
> + * returns the value of the success argument.
> + */
> +int ok_block(unsigned long count, int success, const char *format, ...)
> +    __attribute__((__format__(printf, 3, 4)));
> +void skip_block(unsigned long count, const char *reason, ...)
> +    __attribute__((__format__(printf, 2, 3)));
> +
> +/*
> + * Compare two values.  Returns true if the test passes and false if it fails.
> + * is_bool takes an int since the bool type isn't fully portable yet, but
> + * interprets both arguments for their truth value, not for their numeric
> + * value.
> + */
> +int is_bool(int, int, const char *format, ...)
> +    __attribute__((__format__(printf, 3, 4)));
> +int is_int(long, long, const char *format, ...)
> +    __attribute__((__format__(printf, 3, 4)));
> +int is_string(const char *, const char *, const char *format, ...)
> +    __attribute__((__format__(printf, 3, 4)));
> +int is_hex(unsigned long, unsigned long, const char *format, ...)
> +    __attribute__((__format__(printf, 3, 4)));
> +int is_blob(const void *, const void *, size_t, const char *format, ...)
> +    __attribute__((__format__(printf, 4, 5)));
> +
> +/* Bail out with an error.  sysbail appends strerror(errno). */
> +void bail(const char *format, ...)
> +    __attribute__((__noreturn__, __nonnull__, __format__(printf, 1, 2)));
> +void sysbail(const char *format, ...)
> +    __attribute__((__noreturn__, __nonnull__, __format__(printf, 1, 2)));
> +
> +/* Report a diagnostic to stderr prefixed with #. */
> +int diag(const char *format, ...)
> +    __attribute__((__nonnull__, __format__(printf, 1, 2)));
> +int sysdiag(const char *format, ...)
> +    __attribute__((__nonnull__, __format__(printf, 1, 2)));
> +
> +/*
> + * Register or unregister a file that contains supplementary diagnostics.
> + * Before any other output, all registered files will be read, line by line,
> + * and each line will be reported as a diagnostic as if it were passed to
> + * diag().  Nul characters are not supported in these files and will result in
> + * truncated output.
> + */
> +void diag_file_add(const char *file) __attribute__((__nonnull__));
> +void diag_file_remove(const char *file) __attribute__((__nonnull__));
> +
> +/* Allocate memory, reporting a fatal error with bail on failure. */
> +void *bcalloc(size_t, size_t)
> +    __attribute__((__alloc_size__(1, 2), __malloc__(free),
> +                   __warn_unused_result__));
> +void *bmalloc(size_t) __attribute__((__alloc_size__(1), __malloc__(free),
> +                                     __warn_unused_result__));
> +void *breallocarray(void *, size_t, size_t)
> +    __attribute__((__alloc_size__(2, 3), __malloc__(free),
> +                   __warn_unused_result__));
> +void *brealloc(void *, size_t)
> +    __attribute__((__alloc_size__(2), __malloc__(free),
> +                   __warn_unused_result__));
> +char *bstrdup(const char *)
> +    __attribute__((__malloc__(free), __nonnull__, __warn_unused_result__));
> +char *bstrndup(const char *, size_t)
> +    __attribute__((__malloc__(free), __nonnull__, __warn_unused_result__));
> +
> +/*
> + * Macros that cast the return value from b* memory functions, making them
> + * usable in C++ code and providing some additional type safety.
> + */
> +#define bcalloc_type(n, type) ((type *) bcalloc((n), sizeof(type)))
> +#define breallocarray_type(p, n, type) \
> +    ((type *) breallocarray((p), (n), sizeof(type)))
> +
> +/*
> + * Find a test file under C_TAP_BUILD or C_TAP_SOURCE, returning the full
> + * path.  The returned path should be freed with test_file_path_free().
> + */
> +void test_file_path_free(char *path);
> +char *test_file_path(const char *file)
> +    __attribute__((__malloc__(test_file_path_free), __nonnull__,
> +                   __warn_unused_result__));
> +
> +/*
> + * Create a temporary directory relative to C_TAP_BUILD and return the path.
> + * The returned path should be freed with test_tmpdir_free().
> + */
> +void test_tmpdir_free(char *path);
> +char *test_tmpdir(void)
> +    __attribute__((__malloc__(test_tmpdir_free), __warn_unused_result__));
> +
> +/*
> + * Register a cleanup function that is called when testing ends.  All such
> + * registered functions will be run during atexit handling (and are therefore
> + * subject to all the same constraints and caveats as atexit functions).
> + *
> + * The function must return void and will be passed two arguments: an int that
> + * will be true if the test completed successfully and false otherwise, and an
> + * int that will be true if the cleanup function is run in the primary process
> + * (the one that called plan or plan_lazy) and false otherwise.  If
> + * test_cleanup_register_with_data is used instead, a generic pointer can be
> + * provided and will be passed to the cleanup function as a third argument.
> + *
> + * test_cleanup_register_with_data is the better API and should have been the
> + * only API.  test_cleanup_register was an API error preserved for backward
> + * cmpatibility.
> + */
> +typedef void (*test_cleanup_func)(int, int);
> +typedef void (*test_cleanup_func_with_data)(int, int, void *);
> +
> +void test_cleanup_register(test_cleanup_func) __attribute__((__nonnull__));
> +void test_cleanup_register_with_data(test_cleanup_func_with_data, void *)
> +    __attribute__((__nonnull__));
> +
> +END_DECLS
> +
> +#endif /* TAP_BASIC_H */
> diff --git a/t/tap/macros.h b/t/tap/macros.h
> new file mode 100644
> index 0000000000..0eabcb5847
> --- /dev/null
> +++ b/t/tap/macros.h
> @@ -0,0 +1,109 @@
> +/*
> + * Helpful macros for TAP header files.
> + *
> + * This is not, strictly speaking, related to TAP, but any TAP add-on is
> + * probably going to need these macros, so define them in one place so that
> + * everyone can pull them in.
> + *
> + * This file is part of C TAP Harness.  The current version plus supporting
> + * documentation is at <https://www.eyrie.org/~eagle/software/c-tap-harness/>.
> + *
> + * Copyright 2008, 2012-2013, 2015, 2022 Russ Allbery <eagle@eyrie.org>
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + * SPDX-License-Identifier: MIT
> + */
> +
> +#ifndef TAP_MACROS_H
> +#define TAP_MACROS_H 1
> +
> +/*
> + * __attribute__ is available in gcc 2.5 and later, but only with gcc 2.7
> + * could you use the __format__ form of the attributes, which is what we use
> + * (to avoid confusion with other macros), and only with gcc 2.96 can you use
> + * the attribute __malloc__.  2.96 is very old, so don't bother trying to get
> + * the other attributes to work with GCC versions between 2.7 and 2.96.
> + */
> +#ifndef __attribute__
> +#    if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 96)
> +#        define __attribute__(spec) /* empty */
> +#    endif
> +#endif
> +
> +/*
> + * We use __alloc_size__, but it was only available in fairly recent versions
> + * of GCC.  Suppress warnings about the unknown attribute if GCC is too old.
> + * We know that we're GCC at this point, so we can use the GCC variadic macro
> + * extension, which will still work with versions of GCC too old to have C99
> + * variadic macro support.
> + */
> +#if !defined(__attribute__) && !defined(__alloc_size__)
> +#    if defined(__GNUC__) && !defined(__clang__)
> +#        if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 3)
> +#            define __alloc_size__(spec, args...) /* empty */
> +#        endif
> +#    endif
> +#endif
> +
> +/* Suppress __warn_unused_result__ if gcc is too old. */
> +#if !defined(__attribute__) && !defined(__warn_unused_result__)
> +#    if __GNUC__ < 3 || (__GNUC__ == 3 && __GNUC_MINOR__ < 4)
> +#        define __warn_unused_result__ /* empty */
> +#    endif
> +#endif
> +
> +/*
> + * Suppress the argument to __malloc__ in Clang (not supported in at least
> + * version 13) and GCC versions prior to 11.
> + */
> +#if !defined(__attribute__) && !defined(__malloc__)
> +#    if defined(__clang__) || __GNUC__ < 11
> +#        define __malloc__(dalloc) __malloc__
> +#    endif
> +#endif
> +
> +/*
> + * LLVM and Clang pretend to be GCC but don't support all of the __attribute__
> + * settings that GCC does.  For them, suppress warnings about unknown
> + * attributes on declarations.  This unfortunately will affect the entire
> + * compilation context, but there's no push and pop available.
> + */
> +#if !defined(__attribute__) && (defined(__llvm__) || defined(__clang__))
> +#    pragma GCC diagnostic ignored "-Wattributes"
> +#endif
> +
> +/* Used for unused parameters to silence gcc warnings. */
> +// #define UNUSED __attribute__((__unused__))
> +
> +/*
> + * BEGIN_DECLS is used at the beginning of declarations so that C++
> + * compilers don't mangle their names.  END_DECLS is used at the end.
> + */
> +#undef BEGIN_DECLS
> +#undef END_DECLS
> +#ifdef __cplusplus
> +#    define BEGIN_DECLS extern "C" {
> +#    define END_DECLS   }
> +#else
> +#    define BEGIN_DECLS /* empty */
> +#    define END_DECLS   /* empty */
> +#endif
> +
> +#endif /* TAP_MACROS_H */
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC v2 4/4] unit test: add basic example and build rules
  2023-05-17 23:56 ` [PATCH RFC v2 4/4] unit test: add basic example and build rules steadmon
@ 2023-05-18 13:32   ` Phillip Wood
  0 siblings, 0 replies; 32+ messages in thread
From: Phillip Wood @ 2023-05-18 13:32 UTC (permalink / raw)
  To: steadmon, git
  Cc: calvinwan, szeder.dev, chooglen, avarab, gitster, sandals,
	Calvin Wan

On 18/05/2023 00:56, steadmon@google.com wrote:
> Integrate a simple strbuf unit test with Git's Makefiles.
> 
> You can build and run the unit tests with `make unit-tests` (or just
> build them with `make build-unit-tests`). By default we use the basic
> test runner from the C-TAP project, but users who prefer prove as a test
> runner can set `DEFAULT_UNIT_TEST_TARGET=prove-unit-tests` instead.
> 
> We modify the `#include`s in the C TAP libraries so that we can build
> them without having to include the t/ directory in our include search
> path.

Thanks for adding some example tests, it is really helpful to see how 
the library will be used.

I tried building the units test with SANITIZE=address set and I get lots 
of link errors complaining about undefined references to __asan_*

> Signed-off-by: Calvin Wan <calvinwan@google.com>
> Signed-off-by: Josh Steadmon <steadmon@google.com>
> Change-Id: Ie61eafd2bd8f8dc5b30449af1e436889f91da3b7

> diff --git a/t/strbuf-test.c b/t/strbuf-test.c
> new file mode 100644
> index 0000000000..8f8d4e11db
> --- /dev/null
> +++ b/t/strbuf-test.c
> @@ -0,0 +1,54 @@
> +#include "tap/basic.h"
> +
> +#include "../git-compat-util.h"
> +#include "../strbuf.h"
> +
> +int strbuf_init_test()
> +{
> +	struct strbuf *buf = malloc(sizeof(void*));

Is there a reason to use dynamic allocation here. Also I think you need 
sizeof(*buf) to allocate the correct size.

> +	strbuf_init(buf, 0);
> +
> +	if (buf->buf[0] != '\0')
> +		return 0;
> +	if (buf->alloc != 0)
> +		return 0;
> +	if (buf->len != 0)
> +		return 0;
> +	return 1;
> +}

This test nicely illustrates why I'd prefer a different approach. The 
test author has to maintain the pass/fail state and there are no 
diagnostics if it fails to tell you which check failed. To be clear I 
view the lack of diagnostics as the fault of the test framework, not the 
test author. I'd prefer something like

	void strbuf_init_test(void)
	{
		struct strbuf buf;

		strbuf_init(&buf, 0);
		check_char(buf.buf[0] == '\0');
		check_uint(buf.alloc, ==, 0);
		check_uint(buf.len, ==, 0);
	}

which would be run as

	TEST(strbuf_init_test(), "strbuf_init initializes properly");

in main() and provide diagnostics like

     # check "buf.alloc == 0" failed at my-test.c:102
     #    left: 2
     #   right: 0

when a check fails.

> +int strbuf_init_test2() {
> +	struct strbuf *buf = malloc(sizeof(void*));
> +	strbuf_init(buf, 100);
> +
> +	if (buf->buf[0] != '\0')
> +		return 0;
> +	if (buf->alloc != 101)

Strictly speaking I think the API guarantees that at least 100 bytes 
will be allocated, not the exact amount as does alloc_grow() below.

> +		return 0;
> +	if (buf->len != 0)
> +		return 0;
> +	return 1;
> +}
> +
> +
> +int strbuf_grow_test() {
> +	struct strbuf *buf = malloc(sizeof(void*));
> +	strbuf_grow(buf, 100);
> +
> +	if (buf->buf[0] != '\0')
> +		return 0;
> +	if (buf->alloc != 101)
> +		return 0;
> +	if (buf->len != 0)
> +		return 0;
> +	return 1;
> +}

Best Wishes

Phillip


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC v2 1/4] common-main: split common_exit() into a new file
  2023-05-17 23:56 ` [PATCH RFC v2 1/4] common-main: split common_exit() into a new file steadmon
@ 2023-05-18 17:17   ` Junio C Hamano
  2023-07-14 23:38     ` Splitting common-main (Was: Re: [PATCH RFC v2 1/4] common-main: split common_exit() into a new file) Josh Steadmon
  0 siblings, 1 reply; 32+ messages in thread
From: Junio C Hamano @ 2023-05-18 17:17 UTC (permalink / raw)
  To: steadmon
  Cc: git, calvinwan, szeder.dev, phillip.wood123, chooglen, avarab,
	sandals

steadmon@google.com writes:

> It is convenient to have common_exit() in its own object file so that
> standalone programs may link to it (and any other object files that
> depend on it) while still having their own independent main() function.

I am not so sure if this is a good direction to go in, though.  The
common_exit() function does two things that are very specific to and
dependent on what Git runtime has supposed to have done, like
initializing trace2 subsystem and linking with usage.c to make
bug_called_must_BUG exist.

I understand that a third-party or standalone non-Git programs may
want to do _more_ than what our main() does when starting up, but it
should be doable if make our main() call out to a hook function,
whose definition in Git is a no-op, that can be replaced by their
own implementation to do whatever they want to happen in main(), no?

The reason why I am not comfortable with this patch is because I
cannot say why this split is better than other possible split.  For
example, we could instead split only our 'main' out to a separate
file, say "main.c", and put main.o together with common-main.o to
libgit.a to be found by the linker, and that arrangement will also
help your "standalone programs" having their own main() function.
Now with these two possible ways to split (and there may be other
split that may be even more convenient; I simply do not know), which
one is better, and what's the argument for each approach?

> So let's move it to a new common-exit.c file and update the Makefile
> accordingly.
>
> Change-Id: I41b90059eb9031f40c9f65374b4b047e7ba3aac0
> ---
>  Makefile      |  1 +
>  common-exit.c | 26 ++++++++++++++++++++++++++
>  common-main.c | 24 ------------------------
>  3 files changed, 27 insertions(+), 24 deletions(-)

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC v2 2/4] unit tests: Add a project plan document
  2023-05-17 23:56 ` [PATCH RFC v2 2/4] unit tests: Add a project plan document steadmon
  2023-05-18 13:13   ` Phillip Wood
@ 2023-05-18 20:15   ` Glen Choo
  2023-05-24 17:40     ` Josh Steadmon
  2023-06-01  9:19     ` Phillip Wood
  1 sibling, 2 replies; 32+ messages in thread
From: Glen Choo @ 2023-05-18 20:15 UTC (permalink / raw)
  To: steadmon, git
  Cc: Josh Steadmon, calvinwan, szeder.dev, phillip.wood123, avarab,
	gitster, sandals

steadmon@google.com writes:

> Describe what we hope to accomplish by implementing unit tests, and
> explain some open questions and milestones.

Thanks! I found this very helpful.

> diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
> new file mode 100644
> index 0000000000..7c575e6ef7
> --- /dev/null
> +++ b/Documentation/technical/unit-tests.txt
> @@ -0,0 +1,47 @@
> += Unit Testing
> +
> +In our current testing environment, we spend a significant amount of effort
> +crafting end-to-end tests for error conditions that could easily be captured by
> +unit tests (or we simply forgo some hard-to-setup and rare error conditions).
> +Unit tests additionally provide stability to the codebase and can simplify
> +debugging through isolation. Writing unit tests in pure C, rather than with our
> +current shell/test-tool helper setup, simplifies test setup, simplifies passing
> +data around (no shell-isms required), and reduces testing runtime by not
> +spawning a separate process for every test invocation.

The stated goals make sense to me, and I believe they are worth
restating. I believe this is mostly taken from Calvin's v1 cover letter

  https://lore.kernel.org/git/20230427175007.902278-1-calvinwan@google.com

so perhaps he should receive some writing credit in a commit trailer
(Helped-by?).

> +== Open questions
> +
> +=== TAP harness
> +
> +We'll need to decide on a TAP harness. The C TAP library is easy to integrate,
> +but has a few drawbacks:
> +* (copy objections from lore thread)
> +* We may need to carry local patches against C TAP. We'll need to decide how to
> +  manage these. We could vendor the code in and modify them directly, or use a
> +  submodule (but then we'll need to decide on where to host the submodule with
> +  our patches on top).
> +
> +Phillip Wood has also proposed a new implementation of a TAP harness (linked
> +above). While it hasn't been thoroughly reviewed yet, it looks to support a few
> +nice features that C TAP does not, e.g. lazy test plans and skippable tests.

A third option would be to pick another, more mature third party testing
library. As I mentioned in

  https://lore.kernel.org/git/kl6lpm76zcg7.fsf@chooglen-macbookpro.roam.corp.google.com

my primary concern is the maintainability and extensibility of a third
party library that (no offense to the original author) is not used very
widely, is relatively underdocumented, is missing features that we want,
and whose maintenance policy is relatively unknown to us. I'm not
opposed to taking in a third party testing framework, but we need to be
sure that we can rely on it instead of being something that requires
active upkeep.

I don't what sorts of testing libraries exist for C or how widely they
are used, but a quick web search gives some candidates that seem like
plausible alternatives to C TAP Harness:

- cmocka https://cmocka.org/ supports TAP, assert macros and mocking,
  and is used by other projects (their website indicates Samba, OpenVPN,
  etc). The documentation is a bit lacking IMO. It's apparently a fork
  of cmockery, which I'm not familiar with.

- Check https://libcheck.github.io/check/ supports TAP and has
  relatively good documentation, though the last release seems to have
  been 3 years ago.

- µnit https://nemequ.github.io/munit/ has a shiny website with nice
  docs (and a handy list of other unit test frameworks we can look at).
  The last release also seems to be ~3 years ago. Not sure if this
  supports TAP.

For flexibility, I also think it's reasonable for us to roll our own
testing library. I think it is perfectly fine for our unit test
framework to be suboptimal at the beginning, but owning the code makes
it relatively easy to fix bugs and extend it to our liking.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC v2 3/4] Add C TAP harness
  2023-05-18 13:15   ` Phillip Wood
@ 2023-05-18 20:50     ` Josh Steadmon
  0 siblings, 0 replies; 32+ messages in thread
From: Josh Steadmon @ 2023-05-18 20:50 UTC (permalink / raw)
  To: phillip.wood
  Cc: git, calvinwan, szeder.dev, chooglen, avarab, gitster, sandals,
	Calvin Wan

On 2023.05.18 14:15, Phillip Wood wrote:
> On 18/05/2023 00:56, steadmon@google.com wrote:
> > From: Calvin Wan <calvinwan@google.com>
> > 
> > Introduces the C TAP harness from https://github.com/rra/c-tap-harness/
> > 
> > There is also more complete documentation at
> > https://www.eyrie.org/~eagle/software/c-tap-harness/
> > 
> > Signed-off-by: Calvin Wan <calvinwan@google.com>
> > Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
> 
> Is that a mistake? I don't think I've contributed anything to this patch
> (unless you count complaining about it :-/)

Yes, sorry. I was experimenting with b4[1] for my mailing list workflow,
I think it got confused by your patch in [2]. I'll try to make sure it
doesn't reoccur for v3.

[1]: https://github.com/mricon/b4
[2]: https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC v2 2/4] unit tests: Add a project plan document
  2023-05-18 20:15   ` Glen Choo
@ 2023-05-24 17:40     ` Josh Steadmon
  2023-06-01  9:19     ` Phillip Wood
  1 sibling, 0 replies; 32+ messages in thread
From: Josh Steadmon @ 2023-05-24 17:40 UTC (permalink / raw)
  To: Glen Choo
  Cc: git, calvinwan, szeder.dev, phillip.wood123, avarab, gitster,
	sandals

On 2023.05.18 13:15, Glen Choo wrote:
> steadmon@google.com writes:
> 
> > Describe what we hope to accomplish by implementing unit tests, and
> > explain some open questions and milestones.
> 
> Thanks! I found this very helpful.
> 
> > diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
> > new file mode 100644
> > index 0000000000..7c575e6ef7
> > --- /dev/null
> > +++ b/Documentation/technical/unit-tests.txt
> > @@ -0,0 +1,47 @@
> > += Unit Testing
> > +
> > +In our current testing environment, we spend a significant amount of effort
> > +crafting end-to-end tests for error conditions that could easily be captured by
> > +unit tests (or we simply forgo some hard-to-setup and rare error conditions).
> > +Unit tests additionally provide stability to the codebase and can simplify
> > +debugging through isolation. Writing unit tests in pure C, rather than with our
> > +current shell/test-tool helper setup, simplifies test setup, simplifies passing
> > +data around (no shell-isms required), and reduces testing runtime by not
> > +spawning a separate process for every test invocation.
> 
> The stated goals make sense to me, and I believe they are worth
> restating. I believe this is mostly taken from Calvin's v1 cover letter
> 
>   https://lore.kernel.org/git/20230427175007.902278-1-calvinwan@google.com
> 
> so perhaps he should receive some writing credit in a commit trailer
> (Helped-by?).

Yeah, I missed some intended trailers while testing out b4. I'll make
sure it gets added back in V3.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH RFC v2 2/4] unit tests: Add a project plan document
  2023-05-18 20:15   ` Glen Choo
  2023-05-24 17:40     ` Josh Steadmon
@ 2023-06-01  9:19     ` Phillip Wood
  1 sibling, 0 replies; 32+ messages in thread
From: Phillip Wood @ 2023-06-01  9:19 UTC (permalink / raw)
  To: Glen Choo, steadmon, git; +Cc: szeder.dev, avarab, gitster, sandals, Calvin Wan

Hi Glen

On 18/05/2023 21:15, Glen Choo wrote:
> steadmon@google.com writes:
 >
> A third option would be to pick another, more mature third party testing
> library. As I mentioned in
> 
>    https://lore.kernel.org/git/kl6lpm76zcg7.fsf@chooglen-macbookpro.roam.corp.google.com
> 
> my primary concern is the maintainability and extensibility of a third
> party library that (no offense to the original author) is not used very
> widely, is relatively underdocumented, is missing features that we want,
> and whose maintenance policy is relatively unknown to us. I'm not
> opposed to taking in a third party testing framework, but we need to be
> sure that we can rely on it instead of being something that requires
> active upkeep.
> 
> I don't what sorts of testing libraries exist for C or how widely they
> are used, but a quick web search gives some candidates that seem like
> plausible alternatives to C TAP Harness:
> 
> - cmocka https://cmocka.org/ supports TAP, assert macros and mocking,
>    and is used by other projects (their website indicates Samba, OpenVPN,
>    etc). The documentation is a bit lacking IMO. It's apparently a fork
>    of cmockery, which I'm not familiar with.
> 
> - Check https://libcheck.github.io/check/ supports TAP and has
>    relatively good documentation, though the last release seems to have
>    been 3 years ago.
> 
> - µnit https://nemequ.github.io/munit/ has a shiny website with nice
>    docs (and a handy list of other unit test frameworks we can look at).
>    The last release also seems to be ~3 years ago. Not sure if this
>    supports TAP.

Thanks for finding those. I'd add

  - libtap https://github.com/zorgnax/libtap which seems to have nice
      assertions and TAP output. It is licensed under LGPL3 but used
      to be GPL2 and there don't seem to have been many changes since
      the license change so we could just use the older version.

I think it would be helpful to have a section in the plan which clearly 
states the features we want (TAP output, helpers that make it easy to 
write tests with good diagnostic output, ...) in our unit test framework 
and explains how the selected solution meets those criteria.

It would also be helpful to have a guide to writing effective tests that 
sets out some guidelines on where unit tests are appropriate and 
outlines our expectations in terms of style, coverage, leak cleanliness etc.

Best Wishes

Phillip

> For flexibility, I also think it's reasonable for us to roll our own
> testing library. I think it is perfectly fine for our unit test
> framework to be suboptimal at the beginning, but owning the code makes
> it relatively easy to fix bugs and extend it to our liking.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC PATCH v3 0/1] Add a project document for adding unit tests
  2023-05-17 23:56 [PATCH RFC v2 0/4] Add an external testing library for unit tests steadmon
                   ` (3 preceding siblings ...)
  2023-05-17 23:56 ` [PATCH RFC v2 4/4] unit test: add basic example and build rules steadmon
@ 2023-06-09 23:25 ` Josh Steadmon
  2023-06-09 23:25 ` [RFC PATCH v3 1/1] unit tests: Add a project plan document Josh Steadmon
  5 siblings, 0 replies; 32+ messages in thread
From: Josh Steadmon @ 2023-06-09 23:25 UTC (permalink / raw)
  To: git
  Cc: calvinwan, szeder.dev, phillip.wood123, chooglen, avarab, gitster,
	sandals

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions). Unit tests additionally provide stability to the
codebase and can simplify debugging through isolation. Turning parts of
Git into libraries[1] gives us the ability to run unit tests on the
libraries and to write unit tests in C. Writing unit tests in pure C,
rather than with our current shell/test-tool helper setup, simplifies
test setup, simplifies passing data around (no shell-isms required), and
reduces testing runtime by not spawning a separate process for every
test invocation.

This patch adds a project document describing our goals for adding unit
tests, as well as a discussion of features needed from prospective test
frameworks or harnesses. It also includes a WIP comparison of various
proposed frameworks. Later iterations of this series will probably
include a sample unit test and Makefile integration once we've settled
on a framework.

In addition to reviewing the document itself, reviewers can help this
series progress by helping to fill in the framework comparison table.

[1] https://lore.kernel.org/git/CAJoAoZ=Cig_kLocxKGax31sU7Xe4==BGzC__Bg2_pr7krNq6MA@mail.gmail.com/# You can add trailers to the cover letter. Any email addresses found in

Changes in v3:
- Expand the doc with discussion of desired features and a WIP
  comparison.
- Drop all implementation patches until a framework is selected.
- Link to v2: https://lore.kernel.org/r/20230517-unit-tests-v2-v2-0-21b5b60f4b32@google.com

Josh Steadmon (1):
      unit tests: Add a project plan document

Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 141 ++++++++++++++++++++++++++++++++
 2 files changed, 142 insertions(+)


base-commit: 69c786637d7a7fe3b2b8f7d989af095f5f49c3a8
-- 
2.41.0.162.gfafddb0af9-goog


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-05-17 23:56 [PATCH RFC v2 0/4] Add an external testing library for unit tests steadmon
                   ` (4 preceding siblings ...)
  2023-06-09 23:25 ` [RFC PATCH v3 0/1] Add a project document for adding unit tests Josh Steadmon
@ 2023-06-09 23:25 ` Josh Steadmon
  2023-06-13 22:30   ` Junio C Hamano
                     ` (2 more replies)
  5 siblings, 3 replies; 32+ messages in thread
From: Josh Steadmon @ 2023-06-09 23:25 UTC (permalink / raw)
  To: git
  Cc: calvinwan, szeder.dev, phillip.wood123, chooglen, avarab, gitster,
	sandals

In our current testing environment, we spend a significant amount of
effort crafting end-to-end tests for error conditions that could easily
be captured by unit tests (or we simply forgo some hard-to-setup and
rare error conditions).Describe what we hope to accomplish by
implementing unit tests, and explain some open questions and milestones.
Discuss desired features for test frameworks/harnesses, and provide a
preliminary comparison of several different frameworks.

Signed-off-by: Josh Steadmon <steadmon@google.com>
Coauthored-by: Calvin Wan <calvinwan@google.com>
---
 Documentation/Makefile                 |   1 +
 Documentation/technical/unit-tests.txt | 141 +++++++++++++++++++++++++
 2 files changed, 142 insertions(+)
 create mode 100644 Documentation/technical/unit-tests.txt

diff --git a/Documentation/Makefile b/Documentation/Makefile
index b629176d7d..3f2383a12c 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -122,6 +122,7 @@ TECH_DOCS += technical/scalar
 TECH_DOCS += technical/send-pack-pipeline
 TECH_DOCS += technical/shallow
 TECH_DOCS += technical/trivial-merge
+TECH_DOCS += technical/unit-tests
 SP_ARTICLES += $(TECH_DOCS)
 SP_ARTICLES += technical/api-index
 
diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
new file mode 100644
index 0000000000..dac8062a43
--- /dev/null
+++ b/Documentation/technical/unit-tests.txt
@@ -0,0 +1,141 @@
+= Unit Testing
+
+In our current testing environment, we spend a significant amount of effort
+crafting end-to-end tests for error conditions that could easily be captured by
+unit tests (or we simply forgo some hard-to-setup and rare error conditions).
+Unit tests additionally provide stability to the codebase and can simplify
+debugging through isolation. Writing unit tests in pure C, rather than with our
+current shell/test-tool helper setup, simplifies test setup, simplifies passing
+data around (no shell-isms required), and reduces testing runtime by not
+spawning a separate process for every test invocation.
+
+We believe that a large body of unit tests, living alongside the existing test
+suite, will improve code quality for the Git project.
+
+== Definitions
+
+For the purposes of this document, we'll use *test framework* to refer to
+projects that support writing test cases and running tests within the context
+of a single executable. *Test harness* will refer to projects that manage
+running multiple executables (each of which may contain multiple test cases) and
+aggregating their results.
+
+In reality, these terms are not strictly defined, and many of the projects
+discussed below contain features from both categories.
+
+
+== Choosing a framework & harness
+
+=== Desired features
+
+==== TAP support
+
+The https://testanything.org/[Test Anything Protocol] is a text-based interface
+that allows tests to communicate with a test harness. It is already used by
+Git's integration test suite. Supporting TAP output is a mandatory feature for
+any prospective test framework.
+
+==== Diagnostic output
+
+When a test case fails, the framework must generate enough diagnostic output to
+help developers find the appropriate test case in source code in order to debug
+the failure.
+
+==== Parallel execution
+
+Ideally, we will build up a significant collection of unit tests cases, most
+likely split across multiple executables. It will be necessary to run these
+tests in parallel to enable fast develop-test-debug cycles.
+
+==== Vendorable or ubiquitous
+
+If possible, we want to avoid forcing Git developers to install new tools just
+to run unit tests. So any prospective frameworks and harnesses must either be
+vendorable (meaning, we can copy their source directly into Git's repository),
+or so ubiquitous that it is reasonable to expect that most developers will have
+the tools installed already.
+
+==== Maintainable / extensible
+
+It is unlikely that any pre-existing project perfectly fits our needs, so any
+project we select will need to be actively maintained and open to accepting
+changes. Alternatively, assuming we are vendoring the source into our repo, it
+must be simple enough that Git developers can feel comfortable making changes as
+needed to our version.
+
+==== Major platform support
+
+At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
+
+==== Lazy test planning
+
+TAP supports the notion of _test plans_, which communicate which test cases are
+expected to run, or which tests actually ran. This allows test harnesses to
+detect if the TAP output has been truncated, or if some tests were skipped due
+to errors or bugs.
+
+The test framework should handle creating plans at runtime, rather than
+requiring test developers to manually create plans, which leads to both human-
+and merge-errors.
+
+==== Skippable tests
+
+Test authors may wish to skip certain test cases based on runtime circumstances,
+so the framework should support this.
+
+==== Test scheduling / re-running
+
+The test harness scheduling should be configurable so that e.g. developers can
+choose to run slow tests first, or to run only tests that failed in a previous
+run.
+
+==== Mock support
+
+Unit test authors may wish to test code that interacts with objects that may be
+inconvenient to handle in a test (e.g. interacting with a network service).
+Mocking allows test authors to provide a fake implementation of these objects
+for more convenient tests.
+
+==== Signal & exception handling
+
+The test framework must fail gracefully when test cases are themselves buggy or
+when they are interrupted by signals during runtime.
+
+==== Coverage reports
+
+It may be convenient to generate coverage reports when running unit tests
+(although it may be possible to accomplish this regardless of test framework /
+harness support).
+
+
+=== Comparison
+
+[format="csv",options="header",width="75%"]
+|=====
+Framework,"TAP support","Diagnostic output","Parallel execution","Vendorable / ubiquitous","Maintainable / extensible","Major platform support","Lazy test planning","Runtime- skippable tests","Scheduling / re-running",Mocks,"Signal & exception handling","Coverage reports"
+https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#True#,[lime-background]#True#,?,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,?,?,[red-background]#False#,?,?
+https://cmocka.org/[cmocka],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[lime-background]#True#,?,?
+https://libcheck.github.io/check/[Check],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
+https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#True#,[red-background]#False#,?,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
+https://github.com/silentbicycle/greatest[Greatest],[yellow-background]#Partial#,?,?,[lime-background]#True#,[yellow-background]#Partial#,?,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
+https://github.com/Snaipe/Criterion[Criterion],[lime-background]#True#,?,?,[red-background]#False#,?,[lime-background]#True#,?,?,?,[red-background]#False#,?,?
+https://github.com/zorgnax/libtap[libtap],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
+https://nemequ.github.io/munit/[µnit],?,?,?,?,?,?,?,?,?,?,?,?
+https://github.com/google/cmockery[cmockery],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
+https://github.com/lpabon/cmockery2[cmockery2],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
+https://github.com/ThrowTheSwitch/Unity[Unity],?,?,?,?,?,?,?,?,?,?,?,?
+https://github.com/siu/minunit[minunit],?,?,?,?,?,?,?,?,?,?,?,?
+https://cunit.sourceforge.net/[CUnit],?,?,?,?,?,?,?,?,?,?,?,?
+https://www.kindahl.net/mytap/doc/index.html[MyTAP],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
+|=====
+
+== Milestones
+
+* Settle on final framework
+* Add useful tests of library-like code
+* Integrate with Makefile
+* Integrate with CI
+* Integrate with
+  https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
+  work]
+* Run alongside regular `make test` target
-- 
2.41.0.162.gfafddb0af9-goog


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-09 23:25 ` [RFC PATCH v3 1/1] unit tests: Add a project plan document Josh Steadmon
@ 2023-06-13 22:30   ` Junio C Hamano
  2023-06-30 22:18     ` Josh Steadmon
  2023-06-29 19:42   ` Linus Arver
  2023-06-30 14:07   ` Phillip Wood
  2 siblings, 1 reply; 32+ messages in thread
From: Junio C Hamano @ 2023-06-13 22:30 UTC (permalink / raw)
  To: Josh Steadmon
  Cc: git, calvinwan, szeder.dev, phillip.wood123, chooglen, avarab,
	sandals

Josh Steadmon <steadmon@google.com> writes:

> In our current testing environment, we spend a significant amount of
> effort crafting end-to-end tests for error conditions that could easily
> be captured by unit tests (or we simply forgo some hard-to-setup and
> rare error conditions).Describe what we hope to accomplish by
> implementing unit tests, and explain some open questions and milestones.
> Discuss desired features for test frameworks/harnesses, and provide a
> preliminary comparison of several different frameworks.
>
> Signed-off-by: Josh Steadmon <steadmon@google.com>
> Coauthored-by: Calvin Wan <calvinwan@google.com>
> ---

The co-author should also signal his acceptance of the D-C-O with
his own S-o-b.  [*1*] gives a good example.

> diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
> new file mode 100644
> index 0000000000..dac8062a43
> --- /dev/null
> +++ b/Documentation/technical/unit-tests.txt
> @@ -0,0 +1,141 @@
> += Unit Testing
> +
> +In our current testing environment, we spend a significant amount of effort
> +crafting end-to-end tests for error conditions that could easily be captured by
> +unit tests (or we simply forgo some hard-to-setup and rare error conditions).
> +Unit tests additionally provide stability to the codebase and can simplify
> +debugging through isolation. Writing unit tests in pure C, rather than with our
> +current shell/test-tool helper setup, simplifies test setup, simplifies passing
> +data around (no shell-isms required), and reduces testing runtime by not
> +spawning a separate process for every test invocation.
> +
> +We believe that a large body of unit tests, living alongside the existing test
> +suite, will improve code quality for the Git project.
> +
> +== Definitions
> +
> +For the purposes of this document, we'll use *test framework* to refer to
> +projects that support writing test cases and running tests within the context
> +of a single executable. *Test harness* will refer to projects that manage
> +running multiple executables (each of which may contain multiple test cases) and
> +aggregating their results.
> +
> +In reality, these terms are not strictly defined, and many of the projects
> +discussed below contain features from both categories.
> +

OK.

> +== Choosing a framework & harness
> +
> +=== Desired features
> +
> +==== TAP support
> +
> +The https://testanything.org/[Test Anything Protocol] is a text-based interface
> +that allows tests to communicate with a test harness. It is already used by
> +Git's integration test suite. Supporting TAP output is a mandatory feature for
> +any prospective test framework.
> +
> +==== Diagnostic output
> +
> +When a test case fails, the framework must generate enough diagnostic output to
> +help developers find the appropriate test case in source code in order to debug
> +the failure.
> +
> +==== Parallel execution
> +
> +Ideally, we will build up a significant collection of unit tests cases, most
> +likely split across multiple executables. It will be necessary to run these
> +tests in parallel to enable fast develop-test-debug cycles.
> +
> +==== Vendorable or ubiquitous
> +
> +If possible, we want to avoid forcing Git developers to install new tools just
> +to run unit tests. So any prospective frameworks and harnesses must either be
> +vendorable (meaning, we can copy their source directly into Git's repository),
> +or so ubiquitous that it is reasonable to expect that most developers will have
> +the tools installed already.
> +
> +==== Maintainable / extensible
> +
> +It is unlikely that any pre-existing project perfectly fits our needs, so any
> +project we select will need to be actively maintained and open to accepting
> +changes. Alternatively, assuming we are vendoring the source into our repo, it
> +must be simple enough that Git developers can feel comfortable making changes as
> +needed to our version.
> +
> +==== Major platform support
> +
> +At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
> +
> +==== Lazy test planning
> +
> +TAP supports the notion of _test plans_, which communicate which test cases are
> +expected to run, or which tests actually ran. This allows test harnesses to
> +detect if the TAP output has been truncated, or if some tests were skipped due
> +to errors or bugs.
> +
> +The test framework should handle creating plans at runtime, rather than
> +requiring test developers to manually create plans, which leads to both human-
> +and merge-errors.
> +
> +==== Skippable tests
> +
> +Test authors may wish to skip certain test cases based on runtime circumstances,
> +so the framework should support this.
> +
> +==== Test scheduling / re-running
> +
> +The test harness scheduling should be configurable so that e.g. developers can
> +choose to run slow tests first, or to run only tests that failed in a previous
> +run.
> +
> +==== Mock support
> +
> +Unit test authors may wish to test code that interacts with objects that may be
> +inconvenient to handle in a test (e.g. interacting with a network service).
> +Mocking allows test authors to provide a fake implementation of these objects
> +for more convenient tests.
> +
> +==== Signal & exception handling
> +
> +The test framework must fail gracefully when test cases are themselves buggy or
> +when they are interrupted by signals during runtime.
> +
> +==== Coverage reports
> +
> +It may be convenient to generate coverage reports when running unit tests
> +(although it may be possible to accomplish this regardless of test framework /
> +harness support).

Good to see evaluation criteria listed.


[Reference]

*1* https://lore.kernel.org/git/836a5665b7df065811edc678cb8e70004f7b7c49.1683581621.git.me@ttaylorr.com/


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-09 23:25 ` [RFC PATCH v3 1/1] unit tests: Add a project plan document Josh Steadmon
  2023-06-13 22:30   ` Junio C Hamano
@ 2023-06-29 19:42   ` Linus Arver
  2023-06-29 20:48     ` Josh Steadmon
  2023-06-29 21:21     ` Junio C Hamano
  2023-06-30 14:07   ` Phillip Wood
  2 siblings, 2 replies; 32+ messages in thread
From: Linus Arver @ 2023-06-29 19:42 UTC (permalink / raw)
  To: Josh Steadmon, git
  Cc: calvinwan, szeder.dev, phillip.wood123, chooglen, avarab, gitster,
	sandals

Hello,

Josh Steadmon <steadmon@google.com> writes:

> In our current testing environment, we spend a significant amount of
> effort crafting end-to-end tests for error conditions that could easily
> be captured by unit tests (or we simply forgo some hard-to-setup and
> rare error conditions).Describe what we hope to accomplish by

I see a minor typo (no space before the word "Describe").

> +=== Comparison
> +
> +[format="csv",options="header",width="75%"]
> +|=====
> +Framework,"TAP support","Diagnostic output","Parallel execution","Vendorable / ubiquitous","Maintainable / extensible","Major platform support","Lazy test planning","Runtime- skippable tests","Scheduling / re-running",Mocks,"Signal & exception handling","Coverage reports"
> +https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#True#,[lime-background]#True#,?,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,?,?,[red-background]#False#,?,?
> +https://cmocka.org/[cmocka],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[lime-background]#True#,?,?
> +https://libcheck.github.io/check/[Check],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> +https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#True#,[red-background]#False#,?,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> +https://github.com/silentbicycle/greatest[Greatest],[yellow-background]#Partial#,?,?,[lime-background]#True#,[yellow-background]#Partial#,?,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> +https://github.com/Snaipe/Criterion[Criterion],[lime-background]#True#,?,?,[red-background]#False#,?,[lime-background]#True#,?,?,?,[red-background]#False#,?,?
> +https://github.com/zorgnax/libtap[libtap],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
> +https://nemequ.github.io/munit/[µnit],?,?,?,?,?,?,?,?,?,?,?,?
> +https://github.com/google/cmockery[cmockery],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
> +https://github.com/lpabon/cmockery2[cmockery2],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
> +https://github.com/ThrowTheSwitch/Unity[Unity],?,?,?,?,?,?,?,?,?,?,?,?
> +https://github.com/siu/minunit[minunit],?,?,?,?,?,?,?,?,?,?,?,?
> +https://cunit.sourceforge.net/[CUnit],?,?,?,?,?,?,?,?,?,?,?,?
> +https://www.kindahl.net/mytap/doc/index.html[MyTAP],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
> +|=====

This table is a little hard to read. Do you have your patch on GitHub or
somewhere else where this table is rendered with HTML?

It would help to explain each of the answers that are filled in
with the word "Partial", to better understand why it is the case. I
suspect this might get a little verbose, in which case I suggest just
giving each framework its own heading.

The column names here are slightly different from the headings used
under "Desired features"; I suggest making them the same.

Also, how about grouping some of these together? For example "Diagnostic
output" and "Coverage reports" feel like they could be grouped under
"Output formats". Here's one way to group these:

    1. Output formats

    TAP support
    Diagnostic output
    Coverage reports

    2. Cost of adoption

    Vendorable / ubiquitous
    Maintainable / extensible
    Major platform support

    3. Performance flexibility

    Parallel execution
    Lazy test planning
    Runtime-skippable tests
    Scheduling / re-running

    4. Developer experience

    Mocks
    Signal & exception handling

I can think of some other metrics to add to the comparison, namely:

    1. Age (how old is the framework)
    2. Size in KLOC (thousands of lines of code)
    3. Adoption rate (which notable C projects already use this framework?)
    4. Project health (how active are its developers?)

I think for 3 and 4, we could probably mine some data out of GitHub
itself.

Lastly it would be helpful if we can mark some of these categories as
must-haves. For example would lack of "Major platform support" alone
disqualify a test framework? This would help fill in the empty bits in
the comparison table because we could skip looking too deeply into a
framework if it fails to meet a must-have requirement.

Thanks,
Linus

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-29 19:42   ` Linus Arver
@ 2023-06-29 20:48     ` Josh Steadmon
  2023-06-30 19:31       ` Linus Arver
  2023-06-30 21:33       ` Josh Steadmon
  2023-06-29 21:21     ` Junio C Hamano
  1 sibling, 2 replies; 32+ messages in thread
From: Josh Steadmon @ 2023-06-29 20:48 UTC (permalink / raw)
  To: Linus Arver
  Cc: git, calvinwan, szeder.dev, phillip.wood123, chooglen, avarab,
	gitster, sandals

On 2023.06.29 12:42, Linus Arver wrote:
> Hello,
> 
> Josh Steadmon <steadmon@google.com> writes:
> 
> > In our current testing environment, we spend a significant amount of
> > effort crafting end-to-end tests for error conditions that could easily
> > be captured by unit tests (or we simply forgo some hard-to-setup and
> > rare error conditions).Describe what we hope to accomplish by
> 
> I see a minor typo (no space before the word "Describe").

Thanks, fixed for V4.

> > +=== Comparison
> > +
> > +[format="csv",options="header",width="75%"]
> > +|=====
> > +Framework,"TAP support","Diagnostic output","Parallel execution","Vendorable / ubiquitous","Maintainable / extensible","Major platform support","Lazy test planning","Runtime- skippable tests","Scheduling / re-running",Mocks,"Signal & exception handling","Coverage reports"
> > +https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#True#,[lime-background]#True#,?,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,?,?,[red-background]#False#,?,?
> > +https://cmocka.org/[cmocka],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[lime-background]#True#,?,?
> > +https://libcheck.github.io/check/[Check],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> > +https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#True#,[red-background]#False#,?,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> > +https://github.com/silentbicycle/greatest[Greatest],[yellow-background]#Partial#,?,?,[lime-background]#True#,[yellow-background]#Partial#,?,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> > +https://github.com/Snaipe/Criterion[Criterion],[lime-background]#True#,?,?,[red-background]#False#,?,[lime-background]#True#,?,?,?,[red-background]#False#,?,?
> > +https://github.com/zorgnax/libtap[libtap],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
> > +https://nemequ.github.io/munit/[µnit],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://github.com/google/cmockery[cmockery],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
> > +https://github.com/lpabon/cmockery2[cmockery2],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
> > +https://github.com/ThrowTheSwitch/Unity[Unity],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://github.com/siu/minunit[minunit],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://cunit.sourceforge.net/[CUnit],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://www.kindahl.net/mytap/doc/index.html[MyTAP],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
> > +|=====
> 
> This table is a little hard to read. Do you have your patch on GitHub or
> somewhere else where this table is rendered with HTML?

Yes, I've pushed a WIP of this to:
https://github.com/steadmon/git/blob/unit-tests-asciidoc/Documentation/technical/unit-tests.adoc

However, this doesn't render the color coding in the table, so you may
also want to just build it locally:
`make -C Documentation technical/unit-tests.html`

> It would help to explain each of the answers that are filled in
> with the word "Partial", to better understand why it is the case. I
> suspect this might get a little verbose, in which case I suggest just
> giving each framework its own heading.

Yeah that is coming in V4.

> The column names here are slightly different from the headings used
> under "Desired features"; I suggest making them the same.

Fixed for V4.

> Also, how about grouping some of these together? For example "Diagnostic
> output" and "Coverage reports" feel like they could be grouped under
> "Output formats". Here's one way to group these:
> 
>     1. Output formats
> 
>     TAP support
>     Diagnostic output
>     Coverage reports
> 
>     2. Cost of adoption
> 
>     Vendorable / ubiquitous
>     Maintainable / extensible
>     Major platform support
> 
>     3. Performance flexibility
> 
>     Parallel execution
>     Lazy test planning
>     Runtime-skippable tests
>     Scheduling / re-running
> 
>     4. Developer experience
> 
>     Mocks
>     Signal & exception handling

I didn't state it outright, but they're roughly but not perfectly
ordered by priority. Of course, other people may prioritize these
differently, and I'm not set on this ordering either. Grouping by
category does seem more useful.


> I can think of some other metrics to add to the comparison, namely:
> 
>     1. Age (how old is the framework)
>     2. Size in KLOC (thousands of lines of code)
>     3. Adoption rate (which notable C projects already use this framework?)
>     4. Project health (how active are its developers?)
> 
> I think for 3 and 4, we could probably mine some data out of GitHub
> itself.

Interesting, I'll see about adding some of these.


> Lastly it would be helpful if we can mark some of these categories as
> must-haves. For example would lack of "Major platform support" alone
> disqualify a test framework? This would help fill in the empty bits in
> the comparison table because we could skip looking too deeply into a
> framework if it fails to meet a must-have requirement.

Yeah, right now I think supporting TAP is the only non-negotiable one,
but I'll add a discussion about priorities.

> Thanks,
> Linus

Thanks for the review!

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-29 19:42   ` Linus Arver
  2023-06-29 20:48     ` Josh Steadmon
@ 2023-06-29 21:21     ` Junio C Hamano
  2023-06-30  0:11       ` Linus Arver
  1 sibling, 1 reply; 32+ messages in thread
From: Junio C Hamano @ 2023-06-29 21:21 UTC (permalink / raw)
  To: Linus Arver
  Cc: Josh Steadmon, git, calvinwan, szeder.dev, phillip.wood123,
	chooglen, avarab, sandals

Linus Arver <linusa@google.com> writes:

>> ...
>> +https://github.com/ThrowTheSwitch/Unity[Unity],?,?,?,?,?,?,?,?,?,?,?,?
>> +https://github.com/siu/minunit[minunit],?,?,?,?,?,?,?,?,?,?,?,?
>> +https://cunit.sourceforge.net/[CUnit],?,?,?,?,?,?,?,?,?,?,?,?
>> +https://www.kindahl.net/mytap/doc/index.html[MyTAP],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
>> +|=====
>
> This table is a little hard to read. Do you have your patch on GitHub or
> somewhere else where this table is rendered with HTML?

Great suggestion (veiled in a question).

> It would help to explain each of the answers that are filled in
> with the word "Partial", to better understand why it is the case. I
> suspect this might get a little verbose, in which case I suggest just
> giving each framework its own heading.
>
> The column names here are slightly different from the headings used
> under "Desired features"; I suggest making them the same.
>
> Also, how about grouping some of these together? For example "Diagnostic
> output" and "Coverage reports" feel like they could be grouped under
> "Output formats". Here's one way to group these:
>
>     1. Output formats
>
>     TAP support
>     Diagnostic output
>     Coverage reports
>
>     2. Cost of adoption
>
>     Vendorable / ubiquitous
>     Maintainable / extensible
>     Major platform support
>
>     3. Performance flexibility
>
>     Parallel execution
>     Lazy test planning
>     Runtime-skippable tests
>     Scheduling / re-running
>
>     4. Developer experience
>
>     Mocks
>     Signal & exception handling
>
> I can think of some other metrics to add to the comparison, namely:
>
>     1. Age (how old is the framework)
>     2. Size in KLOC (thousands of lines of code)
>     3. Adoption rate (which notable C projects already use this framework?)
>     4. Project health (how active are its developers?)
>
> I think for 3 and 4, we could probably mine some data out of GitHub
> itself.

Great additions (if we are mere users do we care much about #2,
though?).

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-29 21:21     ` Junio C Hamano
@ 2023-06-30  0:11       ` Linus Arver
  0 siblings, 0 replies; 32+ messages in thread
From: Linus Arver @ 2023-06-30  0:11 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Josh Steadmon, git, calvinwan, szeder.dev, phillip.wood123,
	chooglen, avarab, sandals

Junio C Hamano <gitster@pobox.com> writes:

>> I can think of some other metrics to add to the comparison, namely:
>>
>>     1. Age (how old is the framework)
>>     2. Size in KLOC (thousands of lines of code)
>>     3. Adoption rate (which notable C projects already use this framework?)
>>     4. Project health (how active are its developers?)
>>
>> I think for 3 and 4, we could probably mine some data out of GitHub
>> itself.
>
> Great additions (if we are mere users do we care much about #2,
> though?).

Sorry, I forgot to add why I think these metrics are useful: I think
they give some signal about how much influence/respect the framework has
in our industry, with the assumption that the influence/respect
positively correlates with how "good" (sound architecture, well-written,
easy to use, simple to understand, etc) the framework is. For the
frameworks hosted in GitHub, perhaps the number of GitHub Stars is a
better estimate for measuring influence/respect.

That said, I think #2 (measuring KLOC) would still be useful to know
(and is easy enough with tools like tokei [1]), mainly for the scenario
where the framework becomes abandonware. Certainly, a framework with a
lower KLOC count would have a lower maintenance burden if we ever need
to step in to help maintain the framework ourselves. To me this is one
reason why I like the idea of using Phillip Wood's framework [2]
(granted, it is currently only a proof of concept).

[1] https://github.com/XAMPPRocky/tokei
[2] https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-09 23:25 ` [RFC PATCH v3 1/1] unit tests: Add a project plan document Josh Steadmon
  2023-06-13 22:30   ` Junio C Hamano
  2023-06-29 19:42   ` Linus Arver
@ 2023-06-30 14:07   ` Phillip Wood
  2023-06-30 18:47     ` K Wan
  2023-06-30 22:35     ` Josh Steadmon
  2 siblings, 2 replies; 32+ messages in thread
From: Phillip Wood @ 2023-06-30 14:07 UTC (permalink / raw)
  To: Josh Steadmon, git
  Cc: calvinwan, szeder.dev, chooglen, avarab, gitster, sandals

Hi Josh

Thanks for putting this together, I think it is really helpful to have a 
comparison of the various options. Sorry for the slow reply, I was off 
the list for a couple of weeks.

On 10/06/2023 00:25, Josh Steadmon wrote:
> diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
> new file mode 100644
> index 0000000000..dac8062a43
> --- /dev/null
> +++ b/Documentation/technical/unit-tests.txt
> @@ -0,0 +1,141 @@
> += Unit Testing

I've deleted the sections I agree with to avoid quoting parts that are 
not relevant to my comments.

> +== Definitions
> +
> +For the purposes of this document, we'll use *test framework* to refer to
> +projects that support writing test cases and running tests within the context
> +of a single executable. *Test harness* will refer to projects that manage
> +running multiple executables (each of which may contain multiple test cases) and
> +aggregating their results.

Thanks for adding this, it is really helpful to have definitions for 
what we mean by "test framework" and "test harness" within the git 
project. It might be worth mentioning somewhere that we already use 
prove as a test harness when running our integration tests.

> +In reality, these terms are not strictly defined, and many of the projects
> +discussed below contain features from both categories.

> +
> +== Choosing a framework & harness
> +
> +=== Desired features
> +
> [...]
> +==== Parallel execution
> +
> +Ideally, we will build up a significant collection of unit tests cases, most
> +likely split across multiple executables. It will be necessary to run these
> +tests in parallel to enable fast develop-test-debug cycles.

This is a good point, though I think it is really a property of the 
harness rather than the framework so we might want to indicate in the 
table whether a framework provides parallelism itself or relies on the 
harness providing it.

 > [...]
> +==== Major platform support
> +
> +At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.

I think we'd want to be able to run unit tests on *BSD and NonStop as 
well, especially as I think some of the platform dependent code probably 
lends itself to being unit tested. I suspect a framework that covers 
Linux and MacOS would probably run on those platforms as well (I don't 
think NonStop has complete POSIX support but it is hard to imagine a 
test framework doing anything very exotic)

> [...]
> +==== Mock support
> +
> +Unit test authors may wish to test code that interacts with objects that may be
> +inconvenient to handle in a test (e.g. interacting with a network service).
> +Mocking allows test authors to provide a fake implementation of these objects
> +for more convenient tests.

Do we have any idea what sort of thing we're likely to want to mock and 
what we want that support to look like?

> +==== Signal & exception handling
> +
> +The test framework must fail gracefully when test cases are themselves buggy or
> +when they are interrupted by signals during runtime.

I had assumed that it would be enough for the test harness to detect if 
a test executable was killed by a signal or exited early due to a bug in 
the test script. That requires the framework to have robust support for 
lazy test plans but I'm not sure that we need it to catch and recover 
from things like SIGSEGV.

> +==== Coverage reports
> +
> +It may be convenient to generate coverage reports when running unit tests
> +(although it may be possible to accomplish this regardless of test framework /
> +harness support).

I agree this would be useful, though perhaps we should build it on our 
existing gcov usage.

Related to this do we want timing reports from the harness or the framework?

> +
> +=== Comparison
> +
> +[format="csv",options="header",width="75%"]
> +|=====
> +Framework,"TAP support","Diagnostic output","Parallel execution","Vendorable / ubiquitous","Maintainable / extensible","Major platform support","Lazy test planning","Runtime- skippable tests","Scheduling / re-running",Mocks,"Signal & exception handling","Coverage reports"
> +https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#True#,[lime-background]#True#,?,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,?,?,[red-background]#False#,?,?
> +https://cmocka.org/[cmocka],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[lime-background]#True#,?,?
> +https://libcheck.github.io/check/[Check],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> +https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#True#,[red-background]#False#,?,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> +https://github.com/silentbicycle/greatest[Greatest],[yellow-background]#Partial#,?,?,[lime-background]#True#,[yellow-background]#Partial#,?,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> +https://github.com/Snaipe/Criterion[Criterion],[lime-background]#True#,?,?,[red-background]#False#,?,[lime-background]#True#,?,?,?,[red-background]#False#,?,?
> +https://github.com/zorgnax/libtap[libtap],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
> +https://nemequ.github.io/munit/[µnit],?,?,?,?,?,?,?,?,?,?,?,?
> +https://github.com/google/cmockery[cmockery],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
> +https://github.com/lpabon/cmockery2[cmockery2],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
> +https://github.com/ThrowTheSwitch/Unity[Unity],?,?,?,?,?,?,?,?,?,?,?,?
> +https://github.com/siu/minunit[minunit],?,?,?,?,?,?,?,?,?,?,?,?
> +https://cunit.sourceforge.net/[CUnit],?,?,?,?,?,?,?,?,?,?,?,?
> +https://www.kindahl.net/mytap/doc/index.html[MyTAP],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
> +|=====

Thanks for going through these projects, hopefully we can use this 
information to make a decision on a framework soon.

Best Wishes

Phillip

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-30 14:07   ` Phillip Wood
@ 2023-06-30 18:47     ` K Wan
  2023-06-30 22:35     ` Josh Steadmon
  1 sibling, 0 replies; 32+ messages in thread
From: K Wan @ 2023-06-30 18:47 UTC (permalink / raw)
  To: phillip.wood
  Cc: Josh Steadmon, git, szeder.dev, chooglen, avarab, gitster,
	sandals

Hi 


Pleasure exclude my email from this discussion.

Thank you

> On Jun 30, 2023, at 6:08 AM, Phillip Wood <phillip.wood123@gmail.com> wrote:
> 
> Hi Josh
> 
> Thanks for putting this together, I think it is really helpful to have a comparison of the various options. Sorry for the slow reply, I was off the list for a couple of weeks.
> 
>> On 10/06/2023 00:25, Josh Steadmon wrote:
>> diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
>> new file mode 100644
>> index 0000000000..dac8062a43
>> --- /dev/null
>> +++ b/Documentation/technical/unit-tests.txt
>> @@ -0,0 +1,141 @@
>> += Unit Testing
> 
> I've deleted the sections I agree with to avoid quoting parts that are not relevant to my comments.
> 
>> +== Definitions
>> +
>> +For the purposes of this document, we'll use *test framework* to refer to
>> +projects that support writing test cases and running tests within the context
>> +of a single executable. *Test harness* will refer to projects that manage
>> +running multiple executables (each of which may contain multiple test cases) and
>> +aggregating their results.
> 
> Thanks for adding this, it is really helpful to have definitions for what we mean by "test framework" and "test harness" within the git project. It might be worth mentioning somewhere that we already use prove as a test harness when running our integration tests.
> 
>> +In reality, these terms are not strictly defined, and many of the projects
>> +discussed below contain features from both categories.
> 
>> +
>> +== Choosing a framework & harness
>> +
>> +=== Desired features
>> +
>> [...]
>> +==== Parallel execution
>> +
>> +Ideally, we will build up a significant collection of unit tests cases, most
>> +likely split across multiple executables. It will be necessary to run these
>> +tests in parallel to enable fast develop-test-debug cycles.
> 
> This is a good point, though I think it is really a property of the harness rather than the framework so we might want to indicate in the table whether a framework provides parallelism itself or relies on the harness providing it.
> 
> > [...]
>> +==== Major platform support
>> +
>> +At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
> 
> I think we'd want to be able to run unit tests on *BSD and NonStop as well, especially as I think some of the platform dependent code probably lends itself to being unit tested. I suspect a framework that covers Linux and MacOS would probably run on those platforms as well (I don't think NonStop has complete POSIX support but it is hard to imagine a test framework doing anything very exotic)
> 
>> [...]
>> +==== Mock support
>> +
>> +Unit test authors may wish to test code that interacts with objects that may be
>> +inconvenient to handle in a test (e.g. interacting with a network service).
>> +Mocking allows test authors to provide a fake implementation of these objects
>> +for more convenient tests.
> 
> Do we have any idea what sort of thing we're likely to want to mock and what we want that support to look like?
> 
>> +==== Signal & exception handling
>> +
>> +The test framework must fail gracefully when test cases are themselves buggy or
>> +when they are interrupted by signals during runtime.
> 
> I had assumed that it would be enough for the test harness to detect if a test executable was killed by a signal or exited early due to a bug in the test script. That requires the framework to have robust support for lazy test plans but I'm not sure that we need it to catch and recover from things like SIGSEGV.
> 
>> +==== Coverage reports
>> +
>> +It may be convenient to generate coverage reports when running unit tests
>> +(although it may be possible to accomplish this regardless of test framework /
>> +harness support).
> 
> I agree this would be useful, though perhaps we should build it on our existing gcov usage.
> 
> Related to this do we want timing reports from the harness or the framework?
> 
>> +
>> +=== Comparison
>> +
>> +[format="csv",options="header",width="75%"]
>> +|=====
>> +Framework,"TAP support","Diagnostic output","Parallel execution","Vendorable / ubiquitous","Maintainable / extensible","Major platform support","Lazy test planning","Runtime- skippable tests","Scheduling / re-running",Mocks,"Signal & exception handling","Coverage reports"
>> +https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#True#,[lime-background]#True#,?,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,?,?,[red-background]#False#,?,?
>> +https://cmocka.org/[cmocka],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[lime-background]#True#,?,?
>> +https://libcheck.github.io/check/[Check],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
>> +https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#True#,[red-background]#False#,?,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
>> +https://github.com/silentbicycle/greatest[Greatest],[yellow-background]#Partial#,?,?,[lime-background]#True#,[yellow-background]#Partial#,?,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
>> +https://github.com/Snaipe/Criterion[Criterion],[lime-background]#True#,?,?,[red-background]#False#,?,[lime-background]#True#,?,?,?,[red-background]#False#,?,?
>> +https://github.com/zorgnax/libtap[libtap],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
>> +https://nemequ.github.io/munit/[µnit],?,?,?,?,?,?,?,?,?,?,?,?
>> +https://github.com/google/cmockery[cmockery],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
>> +https://github.com/lpabon/cmockery2[cmockery2],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
>> +https://github.com/ThrowTheSwitch/Unity[Unity],?,?,?,?,?,?,?,?,?,?,?,?
>> +https://github.com/siu/minunit[minunit],?,?,?,?,?,?,?,?,?,?,?,?
>> +https://cunit.sourceforge.net/[CUnit],?,?,?,?,?,?,?,?,?,?,?,?
>> +https://www.kindahl.net/mytap/doc/index.html[MyTAP],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
>> +|=====
> 
> Thanks for going through these projects, hopefully we can use this information to make a decision on a framework soon.
> 
> Best Wishes
> 
> Phillip

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-29 20:48     ` Josh Steadmon
@ 2023-06-30 19:31       ` Linus Arver
  2023-07-06 18:24         ` Glen Choo
  2023-06-30 21:33       ` Josh Steadmon
  1 sibling, 1 reply; 32+ messages in thread
From: Linus Arver @ 2023-06-30 19:31 UTC (permalink / raw)
  To: Josh Steadmon
  Cc: git, calvinwan, szeder.dev, phillip.wood123, chooglen, avarab,
	gitster, sandals

Josh Steadmon <steadmon@google.com> writes:

> On 2023.06.29 12:42, Linus Arver wrote:
>> I can think of some other metrics to add to the comparison, namely:
>> 
>>     1. Age (how old is the framework)
>>     2. Size in KLOC (thousands of lines of code)
>>     3. Adoption rate (which notable C projects already use this framework?)
>>     4. Project health (how active are its developers?)
>> 
>> I think for 3 and 4, we could probably mine some data out of GitHub
>> itself.
>
> Interesting, I'll see about adding some of these.

Sorry, one more thing worth considering is the ability to add tests
inline with production code (where the test code can be removed in
production builds). There are a number of benefits to this and, I think
it is a useful feature to have. I saw this feature being advertised for
a C++ testing framework called doctest [1], but I assume it is also
possible in C. Could you include it as another (nice to have?) feature
under the "Developer experience" category? (Or, reject it if this
"inlined tests" style is not possible in C?)

[1] https://github.com/doctest/doctest

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-29 20:48     ` Josh Steadmon
  2023-06-30 19:31       ` Linus Arver
@ 2023-06-30 21:33       ` Josh Steadmon
  1 sibling, 0 replies; 32+ messages in thread
From: Josh Steadmon @ 2023-06-30 21:33 UTC (permalink / raw)
  To: Linus Arver, git, calvinwan, szeder.dev, phillip.wood123,
	chooglen, avarab, gitster, sandals

On 2023.06.29 13:48, Josh Steadmon wrote:
> On 2023.06.29 12:42, Linus Arver wrote:
> > I can think of some other metrics to add to the comparison, namely:
> > 
> >     1. Age (how old is the framework)
> >     2. Size in KLOC (thousands of lines of code)
> >     3. Adoption rate (which notable C projects already use this framework?)
> >     4. Project health (how active are its developers?)
> > 
> > I think for 3 and 4, we could probably mine some data out of GitHub
> > itself.
> 
> Interesting, I'll see about adding some of these.

For now, I'm going to exclude Age (because all else being equal, it's
not clear to me why we would prefer and older or younger project) and
Project Health (because it's not clear how to begin to measure this). If
you have further thoughts that could clarify, I'd be happy to
reconsider.

Thanks!

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-13 22:30   ` Junio C Hamano
@ 2023-06-30 22:18     ` Josh Steadmon
  0 siblings, 0 replies; 32+ messages in thread
From: Josh Steadmon @ 2023-06-30 22:18 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: git, calvinwan, szeder.dev, phillip.wood123, chooglen, avarab,
	sandals

On 2023.06.13 15:30, Junio C Hamano wrote:
> Josh Steadmon <steadmon@google.com> writes:
> 
> > In our current testing environment, we spend a significant amount of
> > effort crafting end-to-end tests for error conditions that could easily
> > be captured by unit tests (or we simply forgo some hard-to-setup and
> > rare error conditions).Describe what we hope to accomplish by
> > implementing unit tests, and explain some open questions and milestones.
> > Discuss desired features for test frameworks/harnesses, and provide a
> > preliminary comparison of several different frameworks.
> >
> > Signed-off-by: Josh Steadmon <steadmon@google.com>
> > Coauthored-by: Calvin Wan <calvinwan@google.com>
> > ---
> 
> The co-author should also signal his acceptance of the D-C-O with
> his own S-o-b.  [*1*] gives a good example.

Fixed in V4. Thanks.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-30 14:07   ` Phillip Wood
  2023-06-30 18:47     ` K Wan
@ 2023-06-30 22:35     ` Josh Steadmon
  1 sibling, 0 replies; 32+ messages in thread
From: Josh Steadmon @ 2023-06-30 22:35 UTC (permalink / raw)
  To: phillip.wood
  Cc: git, calvinwan, szeder.dev, chooglen, avarab, gitster, sandals

On 2023.06.30 15:07, Phillip Wood wrote:
> Hi Josh
> 
> Thanks for putting this together, I think it is really helpful to have a
> comparison of the various options. Sorry for the slow reply, I was off the
> list for a couple of weeks.

Thank you for the review! Unfortunately I didn't see it in time for any
of these to make it into v4, but I'll keep them all as TODOs.


> On 10/06/2023 00:25, Josh Steadmon wrote:
> > diff --git a/Documentation/technical/unit-tests.txt b/Documentation/technical/unit-tests.txt
> > new file mode 100644
> > index 0000000000..dac8062a43
> > --- /dev/null
> > +++ b/Documentation/technical/unit-tests.txt

> > +== Definitions
> > +
> > +For the purposes of this document, we'll use *test framework* to refer to
> > +projects that support writing test cases and running tests within the context
> > +of a single executable. *Test harness* will refer to projects that manage
> > +running multiple executables (each of which may contain multiple test cases) and
> > +aggregating their results.
> 
> Thanks for adding this, it is really helpful to have definitions for what we
> mean by "test framework" and "test harness" within the git project. It might
> be worth mentioning somewhere that we already use prove as a test harness
> when running our integration tests.

Yeah, I'll try to clarify this, probably not in V4 though. I'll note it
as a remaining TODO.

> > +==== Parallel execution
> > +
> > +Ideally, we will build up a significant collection of unit tests cases, most
> > +likely split across multiple executables. It will be necessary to run these
> > +tests in parallel to enable fast develop-test-debug cycles.
> 
> This is a good point, though I think it is really a property of the harness
> rather than the framework so we might want to indicate in the table whether
> a framework provides parallelism itself or relies on the harness providing
> it.

Same here.

> > [...]
> > +==== Major platform support
> > +
> > +At a bare minimum, unit-testing must work on Linux, MacOS, and Windows.
> 
> I think we'd want to be able to run unit tests on *BSD and NonStop as well,
> especially as I think some of the platform dependent code probably lends
> itself to being unit tested. I suspect a framework that covers Linux and
> MacOS would probably run on those platforms as well (I don't think NonStop
> has complete POSIX support but it is hard to imagine a test framework doing
> anything very exotic)

Yes, unfortunately I don't have easy access to either of these, but I'll
try to figure out how to evaluate these.

> > [...]
> > +==== Mock support
> > +
> > +Unit test authors may wish to test code that interacts with objects that may be
> > +inconvenient to handle in a test (e.g. interacting with a network service).
> > +Mocking allows test authors to provide a fake implementation of these objects
> > +for more convenient tests.
> 
> Do we have any idea what sort of thing we're likely to want to mock and what
> we want that support to look like?

Not at the moment. Another TODO for v5.

> > +==== Signal & exception handling
> > +
> > +The test framework must fail gracefully when test cases are themselves buggy or
> > +when they are interrupted by signals during runtime.
> 
> I had assumed that it would be enough for the test harness to detect if a
> test executable was killed by a signal or exited early due to a bug in the
> test script. That requires the framework to have robust support for lazy
> test plans but I'm not sure that we need it to catch and recover from things
> like SIGSEGV.

I think as long as a SIGSEGV in the test code doesn't cause the entire
test run to crash, we'll be OK. Agreed that this is really more of a
harness feature.


> > +==== Coverage reports
> > +
> > +It may be convenient to generate coverage reports when running unit tests
> > +(although it may be possible to accomplish this regardless of test framework /
> > +harness support).
> 
> I agree this would be useful, though perhaps we should build it on our
> existing gcov usage.
> 
> Related to this do we want timing reports from the harness or the framework?

I'll add this as well in V5.

> > +
> > +=== Comparison
> > +
> > +[format="csv",options="header",width="75%"]
> > +|=====
> > +Framework,"TAP support","Diagnostic output","Parallel execution","Vendorable / ubiquitous","Maintainable / extensible","Major platform support","Lazy test planning","Runtime- skippable tests","Scheduling / re-running",Mocks,"Signal & exception handling","Coverage reports"
> > +https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.],[lime-background]#True#,[lime-background]#True#,?,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,[lime-background]#True#,?,?,[red-background]#False#,?,?
> > +https://cmocka.org/[cmocka],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[lime-background]#True#,?,?
> > +https://libcheck.github.io/check/[Check],[lime-background]#True#,[lime-background]#True#,?,[red-background]#False#,[yellow-background]#Partial#,[lime-background]#True#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> > +https://github.com/rra/c-tap-harness/[C TAP],[lime-background]#True#,[red-background]#False#,?,[lime-background]#True#,[yellow-background]#Partial#,[yellow-background]#Partial#,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> > +https://github.com/silentbicycle/greatest[Greatest],[yellow-background]#Partial#,?,?,[lime-background]#True#,[yellow-background]#Partial#,?,[yellow-background]#Partial#,?,?,[red-background]#False#,?,?
> > +https://github.com/Snaipe/Criterion[Criterion],[lime-background]#True#,?,?,[red-background]#False#,?,[lime-background]#True#,?,?,?,[red-background]#False#,?,?
> > +https://github.com/zorgnax/libtap[libtap],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
> > +https://nemequ.github.io/munit/[µnit],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://github.com/google/cmockery[cmockery],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
> > +https://github.com/lpabon/cmockery2[cmockery2],?,?,?,?,?,?,?,?,?,[lime-background]#True#,?,?
> > +https://github.com/ThrowTheSwitch/Unity[Unity],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://github.com/siu/minunit[minunit],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://cunit.sourceforge.net/[CUnit],?,?,?,?,?,?,?,?,?,?,?,?
> > +https://www.kindahl.net/mytap/doc/index.html[MyTAP],[lime-background]#True#,?,?,?,?,?,?,?,?,?,?,?
> > +|=====
> 
> Thanks for going through these projects, hopefully we can use this
> information to make a decision on a framework soon.
> 
> Best Wishes
> 
> Phillip

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-06-30 19:31       ` Linus Arver
@ 2023-07-06 18:24         ` Glen Choo
  2023-07-06 19:02           ` Junio C Hamano
  0 siblings, 1 reply; 32+ messages in thread
From: Glen Choo @ 2023-07-06 18:24 UTC (permalink / raw)
  To: Linus Arver, Josh Steadmon
  Cc: git, calvinwan, szeder.dev, phillip.wood123, avarab, gitster,
	sandals

Linus Arver <linusa@google.com> writes:

> Josh Steadmon <steadmon@google.com> writes:
>
> Sorry, one more thing worth considering is the ability to add tests
> inline with production code (where the test code can be removed in
> production builds). There are a number of benefits to this and, I think
> it is a useful feature to have. I saw this feature being advertised for
> a C++ testing framework called doctest [1], but I assume it is also
> possible in C. Could you include it as another (nice to have?) feature
> under the "Developer experience" category? (Or, reject it if this
> "inlined tests" style is not possible in C?)
>
> [1] https://github.com/doctest/doctest

I suspect it's possible in C, but it's not that desirable for Git.

- Inline tests are, by nature, non-production 'noise' in the source
  file, and can hamper readability of the file. This will probably be
  exaggerated in Git because our interfaces weren't designed with unit
  tests in mind, so tests may be extremley noisy to set up.

- The described inline approach seems to be to build the same executable
  but with different flags. But for libification, we'd want to verify
  that we can build a separate executable with only a subset of files. A
  natural way to do that is to have unit tests in a separate file and
  for that to depend on only the subset we want.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-07-06 18:24         ` Glen Choo
@ 2023-07-06 19:02           ` Junio C Hamano
  2023-07-06 22:48             ` Glen Choo
  0 siblings, 1 reply; 32+ messages in thread
From: Junio C Hamano @ 2023-07-06 19:02 UTC (permalink / raw)
  To: Glen Choo
  Cc: Linus Arver, Josh Steadmon, git, calvinwan, szeder.dev,
	phillip.wood123, avarab, sandals

Glen Choo <chooglen@google.com> writes:

> - Inline tests are, by nature, non-production 'noise' in the source
>   file, and can hamper readability of the file. This will probably be
>   exaggerated in Git because our interfaces weren't designed with unit
>   tests in mind, so tests may be extremley noisy to set up.

I do agree with the first sentence, but I am not sure what you mean
by "our interfaces weren't designed with unit tests in mind".  Do
you mean that in the longer term it would be good to tweak the
interfaces with "unit tests in mind" (and add new interfaces that
way?  Or do you mean interfaces that are written with "unit tests
in mind" inherently becomes noisy when inline non-production tests
are mixed in?

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH v3 1/1] unit tests: Add a project plan document
  2023-07-06 19:02           ` Junio C Hamano
@ 2023-07-06 22:48             ` Glen Choo
  0 siblings, 0 replies; 32+ messages in thread
From: Glen Choo @ 2023-07-06 22:48 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Linus Arver, Josh Steadmon, git, calvinwan, szeder.dev,
	phillip.wood123, avarab, sandals

Junio C Hamano <gitster@pobox.com> writes:

> Glen Choo <chooglen@google.com> writes:
>
>> - Inline tests are, by nature, non-production 'noise' in the source
>>   file, and can hamper readability of the file. This will probably be
>>   exaggerated in Git because our interfaces weren't designed with unit
>>   tests in mind, so tests may be extremley noisy to set up.
>
> I do agree with the first sentence, but I am not sure what you mean
> by "our interfaces weren't designed with unit tests in mind".

[...]

> Do
> you mean that in the longer term it would be good to tweak the
> interfaces with "unit tests in mind" (and add new interfaces that
> way?

Ah. I agree, but this isn't what I meant.

> Or do you mean interfaces that are written with "unit tests
> in mind" inherently becomes noisy when inline non-production tests
> are mixed in?

I meant the opposite of this actually. Interfaces with "unit tests" in
mind result in simpler tests, so they will be less complicated to setup
and thus be less noisy. Testing the existing, pre-unit test interfaces
will be very noisy, so I don't think they will lend themselves well to
inline tests.

But of course, (per the prior point) we are trying to make the
interfaces cleaner, which should naturally make them more unit
test-friendly, so this will become less important over time. I think the
other objection - that we want unit tests to enforce cleanliness of our
build dependencies - is the more important one.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Splitting common-main (Was: Re: [PATCH RFC v2 1/4] common-main: split common_exit() into a new file)
  2023-05-18 17:17   ` Junio C Hamano
@ 2023-07-14 23:38     ` Josh Steadmon
  2023-07-15  0:34       ` Splitting common-main Junio C Hamano
  2023-08-14 13:09       ` Splitting common-main (Was: Re: [PATCH RFC v2 1/4] common-main: split common_exit() into a new file) Jeff Hostetler
  0 siblings, 2 replies; 32+ messages in thread
From: Josh Steadmon @ 2023-07-14 23:38 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: git, calvinwan, szeder.dev, phillip.wood123, chooglen, avarab,
	sandals

Hi, I'd like to revisit this as it's also relevant to a non-unit-test
issue (`make fuzz-all` is currently broken). I have some questions
inline below:

On 2023.05.18 10:17, Junio C Hamano wrote:
> steadmon@google.com writes:
> 
> > It is convenient to have common_exit() in its own object file so that
> > standalone programs may link to it (and any other object files that
> > depend on it) while still having their own independent main() function.
> 
> I am not so sure if this is a good direction to go in, though.  The
> common_exit() function does two things that are very specific to and
> dependent on what Git runtime has supposed to have done, like
> initializing trace2 subsystem and linking with usage.c to make
> bug_called_must_BUG exist.

True. We won't call common_exit() unless we're trying to exit() from a
file that also includes git-compat-util.h, but I guess that's not a
guarantee that trace2 is initialized or that usage.o is linked.

> I understand that a third-party or standalone non-Git programs may
> want to do _more_ than what our main() does when starting up, but it
> should be doable if make our main() call out to a hook function,
> whose definition in Git is a no-op, that can be replaced by their
> own implementation to do whatever they want to happen in main(), no?
> 
> The reason why I am not comfortable with this patch is because I
> cannot say why this split is better than other possible split.  For
> example, we could instead split only our 'main' out to a separate
> file, say "main.c", and put main.o together with common-main.o to
> libgit.a to be found by the linker, and that arrangement will also
> help your "standalone programs" having their own main() function.
> Now with these two possible ways to split (and there may be other
> split that may be even more convenient; I simply do not know), which
> one is better, and what's the argument for each approach?

Sorry, I don't think I'm understanding your proposal here properly,
please let me know where I'm going wrong: isn't this functionally
equivalent to my patch, just with different filenames? Now main() would
live in main.c (vs. my common-main.c), while check_bug_if_BUG() and
common_exit() would live in common-main.c (now a misnomer, vs. my
common-exit.c). I'm not following how that changes anything so I'm
pretty sure I've misunderstood.

The issue I was trying to solve (whether for a unit-test framework or
for the fuzzing engine) is that we don't have direct control over their
main(), and so we can't rename it to avoid conflicts with our main().

I guess there may be some linker magic we could do to avoid the conflict
and have (our) main() call (their, renamed) main()? I don't know offhand
if that's actually possible, just speculating. Even if possible, it
feels more complicated to me, but again that may just be due to my lack
of linker knowledge.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: Splitting common-main
  2023-07-14 23:38     ` Splitting common-main (Was: Re: [PATCH RFC v2 1/4] common-main: split common_exit() into a new file) Josh Steadmon
@ 2023-07-15  0:34       ` Junio C Hamano
  2023-08-14 13:09       ` Splitting common-main (Was: Re: [PATCH RFC v2 1/4] common-main: split common_exit() into a new file) Jeff Hostetler
  1 sibling, 0 replies; 32+ messages in thread
From: Junio C Hamano @ 2023-07-15  0:34 UTC (permalink / raw)
  To: Josh Steadmon
  Cc: git, calvinwan, szeder.dev, phillip.wood123, chooglen, avarab,
	sandals

Josh Steadmon <steadmon@google.com> writes:

> Sorry, I don't think I'm understanding your proposal here properly,
> please let me know where I'm going wrong: isn't this functionally
> equivalent to my patch, just with different filenames? Now main() would
> live in main.c (vs. my common-main.c), while check_bug_if_BUG() and
> common_exit() would live in common-main.c (now a misnomer, vs. my
> common-exit.c). I'm not following how that changes anything so I'm
> pretty sure I've misunderstood.

Sorry, the old discussion has expired out of my brain, and asking
what I had in mind back then is too late.

Your common-main.c has stuff _other than_ main(), and the remaining
main() has tons of Git specific stuff.  It may be one way to split,
but I did not find a reason to convince myself that it was a good
split.

What I was wondering as a straw-man alternative was to have main.c
that has only this and nothing else:

    $ cat >main.c <<\EOF
    #include "git-compat-util.h" /* or whatever defines git_main() */
    int main(int ac, char **av)
    {
	return git_main(ac, av);
    }
    EOF

Then in common-main.c, rename main() to git_main().

I was not saying such a split would be superiour compared to how you
moved only some stuff out of common-main.c to a separate file.  I
had trouble equally with your split and with the above strawman,
because I did not (and do not) quite see how one would evaluate the
result (go back to the message you are responding to for details).

> The issue I was trying to solve (whether for a unit-test framework or
> for the fuzzing engine) is that we don't have direct control over their
> main(), and so we can't rename it to avoid conflicts with our main().

Sure.  And if we by default use a very thin main() that calls
git_main(), it would be very easy for them to replace that main.o
file with their own implementation of main(); as long as they
eventually call git_main(), they can borrow what we do in ours.

> I guess there may be some linker magic we could do to avoid the conflict
> and have (our) main() call (their, renamed) main()?

We can throw a main.o that has the implementation of our default
"main" function into "libgit.a".

Then, when we link our "git" program (either built-in programs that
are reachable from git.o, or standalone programs like remote-curl.o
that have their own cmd_main()), we list our object files (but we do
not have to list main.o) and tuck libgit.a at the end of the linker
command line.  As the start-up runtime code needs to find symbol
"main", and the linker sees no object files listed has "main", the
linker goes in and finds main.o stored in libgit.a (which has "main"
defined) and that will end up being linked.

If on the other hand when we link somebody else's program that has
its own "main()", we list the object files that make up the program,
including the one that has their "main()", before "libgit.a" and
the linker does not bother trying to find "main" in libgit.a:main.o
so the resulting binary will use their main().

Is that what you are looking for?

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: Splitting common-main (Was: Re: [PATCH RFC v2 1/4] common-main: split common_exit() into a new file)
  2023-07-14 23:38     ` Splitting common-main (Was: Re: [PATCH RFC v2 1/4] common-main: split common_exit() into a new file) Josh Steadmon
  2023-07-15  0:34       ` Splitting common-main Junio C Hamano
@ 2023-08-14 13:09       ` Jeff Hostetler
  1 sibling, 0 replies; 32+ messages in thread
From: Jeff Hostetler @ 2023-08-14 13:09 UTC (permalink / raw)
  To: Josh Steadmon, Junio C Hamano, git, calvinwan, szeder.dev,
	phillip.wood123, chooglen, avarab, sandals



On 7/14/23 7:38 PM, Josh Steadmon wrote:
> Hi, I'd like to revisit this as it's also relevant to a non-unit-test
> issue (`make fuzz-all` is currently broken). I have some questions
> inline below:
> 
> On 2023.05.18 10:17, Junio C Hamano wrote:
>> steadmon@google.com writes:
>>
>>> It is convenient to have common_exit() in its own object file so that
>>> standalone programs may link to it (and any other object files that
>>> depend on it) while still having their own independent main() function.
>>
>> I am not so sure if this is a good direction to go in, though.  The
>> common_exit() function does two things that are very specific to and
>> dependent on what Git runtime has supposed to have done, like
>> initializing trace2 subsystem and linking with usage.c to make
>> bug_called_must_BUG exist.
> 
> True. We won't call common_exit() unless we're trying to exit() from a
> file that also includes git-compat-util.h, but I guess that's not a
> guarantee that trace2 is initialized or that usage.o is linked.
> 
>> I understand that a third-party or standalone non-Git programs may
>> want to do _more_ than what our main() does when starting up, but it
>> should be doable if make our main() call out to a hook function,
>> whose definition in Git is a no-op, that can be replaced by their
>> own implementation to do whatever they want to happen in main(), no?
>>
>> The reason why I am not comfortable with this patch is because I
>> cannot say why this split is better than other possible split.  For
>> example, we could instead split only our 'main' out to a separate
>> file, say "main.c", and put main.o together with common-main.o to
>> libgit.a to be found by the linker, and that arrangement will also
>> help your "standalone programs" having their own main() function.
>> Now with these two possible ways to split (and there may be other
>> split that may be even more convenient; I simply do not know), which
>> one is better, and what's the argument for each approach?
> 
> Sorry, I don't think I'm understanding your proposal here properly,
> please let me know where I'm going wrong: isn't this functionally
> equivalent to my patch, just with different filenames? Now main() would
> live in main.c (vs. my common-main.c), while check_bug_if_BUG() and
> common_exit() would live in common-main.c (now a misnomer, vs. my
> common-exit.c). I'm not following how that changes anything so I'm
> pretty sure I've misunderstood.
> 
> The issue I was trying to solve (whether for a unit-test framework or
> for the fuzzing engine) is that we don't have direct control over their
> main(), and so we can't rename it to avoid conflicts with our main().
> 
> I guess there may be some linker magic we could do to avoid the conflict
> and have (our) main() call (their, renamed) main()? I don't know offhand
> if that's actually possible, just speculating. Even if possible, it
> feels more complicated to me, but again that may just be due to my lack
> of linker knowledge.


I missed the original discussion and am definitely late to the party,
but an FYI there is also a `wmain()` in `compat/mingw.c` that is used
for MSVC builds on Windows.  It sets up some OS process stuff before
calling the actual `main()`.

Jeff

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2023-08-14 13:10 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-05-17 23:56 [PATCH RFC v2 0/4] Add an external testing library for unit tests steadmon
2023-05-17 23:56 ` [PATCH RFC v2 1/4] common-main: split common_exit() into a new file steadmon
2023-05-18 17:17   ` Junio C Hamano
2023-07-14 23:38     ` Splitting common-main (Was: Re: [PATCH RFC v2 1/4] common-main: split common_exit() into a new file) Josh Steadmon
2023-07-15  0:34       ` Splitting common-main Junio C Hamano
2023-08-14 13:09       ` Splitting common-main (Was: Re: [PATCH RFC v2 1/4] common-main: split common_exit() into a new file) Jeff Hostetler
2023-05-17 23:56 ` [PATCH RFC v2 2/4] unit tests: Add a project plan document steadmon
2023-05-18 13:13   ` Phillip Wood
2023-05-18 20:15   ` Glen Choo
2023-05-24 17:40     ` Josh Steadmon
2023-06-01  9:19     ` Phillip Wood
2023-05-17 23:56 ` [PATCH RFC v2 3/4] Add C TAP harness steadmon
2023-05-18 13:15   ` Phillip Wood
2023-05-18 20:50     ` Josh Steadmon
2023-05-17 23:56 ` [PATCH RFC v2 4/4] unit test: add basic example and build rules steadmon
2023-05-18 13:32   ` Phillip Wood
2023-06-09 23:25 ` [RFC PATCH v3 0/1] Add a project document for adding unit tests Josh Steadmon
2023-06-09 23:25 ` [RFC PATCH v3 1/1] unit tests: Add a project plan document Josh Steadmon
2023-06-13 22:30   ` Junio C Hamano
2023-06-30 22:18     ` Josh Steadmon
2023-06-29 19:42   ` Linus Arver
2023-06-29 20:48     ` Josh Steadmon
2023-06-30 19:31       ` Linus Arver
2023-07-06 18:24         ` Glen Choo
2023-07-06 19:02           ` Junio C Hamano
2023-07-06 22:48             ` Glen Choo
2023-06-30 21:33       ` Josh Steadmon
2023-06-29 21:21     ` Junio C Hamano
2023-06-30  0:11       ` Linus Arver
2023-06-30 14:07   ` Phillip Wood
2023-06-30 18:47     ` K Wan
2023-06-30 22:35     ` Josh Steadmon

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).