From: Eric Wong <e@80x24.org>
To: mwrap-perl@80x24.org
Subject: [PATCH 3/3] malloc_trim: clean up idle arenas immediately
Date: Sun, 20 Nov 2022 09:14:24 +0000
Message-Id: <20221120091424.2420768-4-e@80x24.org>
In-Reply-To: <20221120091424.2420768-1-e@80x24.org>
References: <20221120091424.2420768-1-e@80x24.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There's no need to worry about race conditions if operating on idle
arenas, so clean them up immediately rather than cleaning them up after
a pthread_create (which may never come).

We'll also inform active threads about the trim earlier so they have
more cycles to react to the lazy trim request.
---
 mymalloc.h | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/mymalloc.h b/mymalloc.h
index e789cf3..4904b74 100644
--- a/mymalloc.h
+++ b/mymalloc.h
@@ -150,23 +150,30 @@ static void remote_free_finish(mstate ms)
 
 int malloc_trim(size_t pad)
 {
-	mstate ms = ms_tsd;
+	mstate m;
+	int ret = 0;
 
 	CHECK(int, 0, pthread_mutex_lock(&global_mtx));
-	{ /* be lazy for sibling threads, readers are not synchronized */
-		mstate m;
-		cds_list_for_each_entry(m, &arenas_unused, arena_node)
-			uatomic_set(&m->trim_check, 0);
-		cds_list_for_each_entry(m, &arenas_active, arena_node)
-			uatomic_set(&m->trim_check, 0);
+
+	/* be lazy for active sibling threads, readers are not synchronized */
+	cds_list_for_each_entry(m, &arenas_active, arena_node)
+		uatomic_set(&m->trim_check, 0);
+
+	/* nobody is using idle arenas, clean immediately */
+	cds_list_for_each_entry(m, &arenas_unused, arena_node) {
+		m->trim_check = 0;
+		remote_free_finish(m);
+		ret |= sys_trim(m, pad);
 	}
+
 	CHECK(int, 0, pthread_mutex_unlock(&global_mtx));
 
-	if (ms) { /* trim our own arena immediately */
-		remote_free_finish(ms);
-		return sys_trim(ms, pad);
+	m = ms_tsd;
+	if (m) { /* trim our own arena immediately */
+		remote_free_finish(m);
+		ret |= sys_trim(m, pad);
 	}
-	return 0;
+	return ret;
 }
 
 static void remote_free_enqueue(mstate fm, void *mem)
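
For readers unfamiliar with the trim_check convention used above, here is
a minimal, self-contained sketch of the lazy-trim-request pattern the
commit message describes.  It is NOT mwrap's implementation (mwrap uses
liburcu's uatomic_set()/cds_list_* and dlmalloc's sys_trim()); the names
toy_arena, toy_malloc, request_trim, do_trim and TRIM_INTERVAL are
hypothetical, and C11 atomics stand in for uatomic_set():

/*
 * Hypothetical sketch only -- not mwrap's code.  A trimming thread
 * zeroes a per-arena trim_check flag under a global mutex; each active
 * thread notices the zeroed flag the next time it allocates and trims
 * its own arena lazily.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

#define NARENAS 4
#define TRIM_INTERVAL 4096	/* arbitrary reset value for the flag */

struct toy_arena {
	atomic_int trim_check;	/* 0 means "a trim was requested" */
	size_t reclaimable;	/* bytes we could hand back to the kernel */
};

static struct toy_arena arenas[NARENAS];
static pthread_mutex_t global_mtx = PTHREAD_MUTEX_INITIALIZER;

/* stand-in for sys_trim(): pretend to release memory to the kernel */
static int do_trim(struct toy_arena *a)
{
	int trimmed = a->reclaimable != 0;

	a->reclaimable = 0;
	return trimmed;
}

/*
 * What malloc_trim() does for *active* arenas in the patch: only zero
 * the flag and let the owning thread react whenever it next allocates.
 */
static void request_trim(void)
{
	pthread_mutex_lock(&global_mtx);
	for (int i = 0; i < NARENAS; i++)
		atomic_store(&arenas[i].trim_check, 0);
	pthread_mutex_unlock(&global_mtx);
}

/* allocation path of an active thread: consume the lazy trim request */
static void *toy_malloc(struct toy_arena *a, size_t n)
{
	if (atomic_exchange(&a->trim_check, TRIM_INTERVAL) == 0)
		do_trim(a);		/* react to request_trim() */
	a->reclaimable += n;		/* pretend this is freed later */
	return malloc(n);
}

static void *worker(void *arg)
{
	struct toy_arena *a = arg;

	for (int i = 0; i < 100000; i++)
		free(toy_malloc(a, 64));
	return NULL;
}

int main(void)
{
	pthread_t tids[NARENAS];

	for (int i = 0; i < NARENAS; i++) {
		atomic_store(&arenas[i].trim_check, TRIM_INTERVAL);
		pthread_create(&tids[i], NULL, worker, &arenas[i]);
	}
	request_trim();	/* active arenas get trimmed lazily by their owners */
	for (int i = 0; i < NARENAS; i++)
		pthread_join(&tids[i], NULL);
	return 0;
}

The patch keeps that lazy path only for arenas on arenas_active; arenas
on arenas_unused have no owning thread that could race, so they are
trimmed right away while global_mtx is still held.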