From: Eric Wong <e@80x24.org>
To: spew@80x24.org
Subject: [PATCH 3/5] mjit.c: allow working on platforms without SIGCHLD
Date: Mon, 25 Jun 2018 00:22:18 +0000 [thread overview]
Message-ID: <20180625002220.29490-4-e@80x24.org> (raw)
In-Reply-To: <20180625002220.29490-1-e@80x24.org>
Introduce USE_RUBY_WAITPID_LOCKED so platforms without SIGCHLD
(where ruby_waitpid_locked is unavailable) can fall back to plain
waitpid.  While we're at it, simplify lock management in
exec_process by avoiding early returns.
---
mjit.c | 33 ++++++++++++++++++++-------------
1 file changed, 20 insertions(+), 13 deletions(-)
diff --git a/mjit.c b/mjit.c
index 82c8ae2d7f..7af87ec799 100644
--- a/mjit.c
+++ b/mjit.c
@@ -128,6 +128,7 @@ pid_t ruby_waitpid_locked(rb_vm_t *, rb_pid_t, int *status, int options,
#define WEXITSTATUS(S) (S)
#define WIFSIGNALED(S) (0)
typedef intptr_t pid_t;
+#define USE_RUBY_WAITPID_LOCKED (0)
#endif
/* Atomically set function pointer if possible. */
@@ -141,6 +142,10 @@ typedef intptr_t pid_t;
# define MJIT_ATOMIC_SET(var, val) ATOMIC_SET(var, val)
#endif
+#ifndef USE_RUBY_WAITPID_LOCKED /* platforms with real waitpid */
+# define USE_RUBY_WAITPID_LOCKED (1)
+#endif /* USE_RUBY_WAITPID_LOCKED */
+
/* A copy of MJIT portion of MRI options since MJIT initialization. We
need them as MJIT threads still can work when the most MRI data were
freed. */
@@ -386,24 +391,23 @@ start_process(const char *path, char *const *argv)
static int
exec_process(const char *path, char *const argv[])
{
- int stat, exit_code;
+ int stat, exit_code = -2;
pid_t pid;
- rb_vm_t *vm = GET_VM();
+ rb_vm_t *vm = USE_RUBY_WAITPID_LOCKED ? GET_VM() : 0;
rb_nativethread_cond_t cond;
- rb_nativethread_lock_lock(&vm->waitpid_lock);
- pid = start_process(path, argv);
- if (pid <= 0) {
- rb_nativethread_lock_unlock(&vm->waitpid_lock);
- return -2;
+ if (vm) {
+ rb_native_cond_initialize(&cond);
+ rb_nativethread_lock_lock(&vm->waitpid_lock);
}
- rb_native_cond_initialize(&cond);
- for (;;) {
- pid_t r = ruby_waitpid_locked(vm, pid, &stat, 0, &cond);
+
+ pid = start_process(path, argv);
+ for (;pid > 0;) {
+ pid_t r = vm ? ruby_waitpid_locked(vm, pid, &stat, 0, &cond)
+ : waitpid(pid, &stat, 0);
if (r == -1) {
if (errno == EINTR) continue; /* should never happen */
fprintf(stderr, "waitpid: %s\n", strerror(errno));
- exit_code = -2;
break;
}
else if (r == pid) {
@@ -416,8 +420,11 @@ exec_process(const char *path, char *const argv[])
}
}
}
- rb_nativethread_lock_unlock(&vm->waitpid_lock);
- rb_native_cond_destroy(&cond);
+
+ if (vm) {
+ rb_native_cond_destroy(&cond);
+ rb_nativethread_lock_unlock(&vm->waitpid_lock);
+ }
return exit_code;
}
--
EW
Thread overview: 6+ messages
2018-06-25 0:22 [PATCH 0/5] SIGCHLD hijacking for Process.wait compatibility with MJIT Eric Wong
2018-06-25 0:22 ` [PATCH 1/5] hijack SIGCHLD handler for internal use Eric Wong
2018-06-25 0:22 ` [PATCH 2/5] fix SIGCHLD hijacking race conditions Eric Wong
2018-06-25 0:22 ` Eric Wong [this message]
2018-06-25 0:22 ` [PATCH 4/5] Revert "test_process.rb: skip tests for Bug 14867" Eric Wong
2018-06-25 0:22 ` [PATCH 5/5] Revert "spec: skip Process wait specs on MJIT" Eric Wong