Historical speck list archives
From: Dave Hansen <dave.hansen@intel.com>
To: speck@linutronix.de
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Subject: [MODERATED] [PATCH 2/5] SSB extra 3
Date: Thu,  3 May 2018 15:29:45 -0700
Message-ID: <00e5d109f9753def844137c613a841ddca17de74.1525383411.git.dave.hansen@intel.com>
In-Reply-To: <cover.1525383411.git.dave.hansen@intel.com>

From: Dave Hansen <dave.hansen@linux.intel.com>

Now that we have hooks called when we enter/exit the BPF code, track
when we enter/leave.  We "leave" lazily: the first time we leave, we
schedule some work to do the actual "leave" at some point in the
future.  This way, we do not thrash by enabling and disabling
mitigations frequently.
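
As a sketch of how these hooks bracket program execution (the actual
call sites come from the earlier patch in this series; the macro below
is illustrative only, not the real BPF_PROG_RUN definition):

  /* Illustrative only -- the real hook placement is in the earlier patch. */
  #define BPF_PROG_RUN_SKETCH(prog, ctx)  ({                          \
          u32 __ret;                                                  \
          bpf_enter_prog(prog);  /* count++, preemption goes off */   \
          __ret = (*(prog)->bpf_func)(ctx, (prog)->insnsi);           \
          bpf_leave_prog(prog);  /* lazy "leave", preemption back on */ \
          __ret;                                                      \
  })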

This means that the per-BPF-program overhead is hopefully just the
cost of incrementing and decrementing a per-cpu variable.
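
On the consumer side (the mitigation toggling itself lives in other
patches of this series and is not shown here), checking whether BPF has
run recently on a CPU can then be a single per-cpu read; the helper
name below is hypothetical:

  /* Hypothetical consumer -- the real check lives elsewhere in the series. */
  static inline bool bpf_ran_recently_on_this_cpu(void)
  {
          return this_cpu_read(bpf_prog_ran) != 0;
  }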

The per-cpu counter 'bpf_prog_active' looks superficially like a great
mechanism to use.  However, it does not track active BPF programs in
general; it appears to be elevated only while kprobe BPF handlers are
running.
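
For comparison, the tracing path uses bpf_prog_active roughly like this
(paraphrased from kernel/trace/bpf_trace.c of this era, not part of
this patch); the counter is only elevated for the duration of the
handler, so it cannot tell us that BPF ran recently on the CPU:

  /* Paraphrased shape of trace_call_bpf()'s recursion guard. */
  static unsigned int trace_call_bpf_sketch(void *ctx)
  {
          unsigned int ret = 0;

          preempt_disable();
          if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
                  /* a BPF handler is already running on this CPU: bail */
                  goto out;
          }
          /* ... run the programs attached to the event, set ret ... */
  out:
          __this_cpu_dec(bpf_prog_active);
          preempt_enable();
          return ret;
  }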

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
---
 include/linux/filter.h | 11 ++++++++++
 net/core/filter.c      | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index e92854d..83c1298 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -490,12 +490,23 @@ struct sk_filter {
 	struct bpf_prog	*prog;
 };
 
+DECLARE_PER_CPU(unsigned int, bpf_prog_ran);
+
 static inline void bpf_enter_prog(const struct bpf_prog *fp)
 {
+	unsigned int *count = &get_cpu_var(bpf_prog_ran);
+	(*count)++;
 }
 
+extern void bpf_leave_prog_deferred(const struct bpf_prog *fp);
 static inline void bpf_leave_prog(const struct bpf_prog *fp)
 {
+	unsigned int *count = this_cpu_ptr(&bpf_prog_ran);
+	if (*count == 1)
+		bpf_leave_prog_deferred(fp);
+	else
+		(*count)--;
+	put_cpu_var(bpf_prog_ran);
 }
 
 #define BPF_PROG_RUN(filter, ctx)  ({				\
diff --git a/net/core/filter.c b/net/core/filter.c
index d31aff9..ffca000 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -5649,3 +5649,61 @@ int sk_get_filter(struct sock *sk, struct sock_filter __user *ubuf,
 	release_sock(sk);
 	return ret;
 }
+
+/*
+ * 0 when no BPF code has run recently on this CPU.
+ * Incremented when entering BPF code.
+ * When ==1 at leave time, work will be scheduled.
+ * When >1, work will not be scheduled because work is already
+ * scheduled.
+ * When the work runs, the count is decremented from 1->0.
+ */
+DEFINE_PER_CPU(unsigned int, bpf_prog_ran);
+EXPORT_SYMBOL_GPL(bpf_prog_ran);
+static void bpf_done_on_this_cpu(struct work_struct *work)
+{
+	if (!this_cpu_dec_return(bpf_prog_ran))
+		return;
+
+	/*
+	 * This is unexpected.  The elevated count indicates that
+	 * we are in the *middle* of a BPF program, which should
+	 * be impossible.  BPF programs are executed inside
+	 * rcu_read_lock() where we cannot sleep and where
+	 * preemption is disabled.
+	 */
+	WARN_ON_ONCE(1);
+}
+
+DEFINE_PER_CPU(struct delayed_work, bpf_prog_delayed_work);
+static __init int bpf_init_delayed_work(void)
+{
+	int i;
+
+	for_each_possible_cpu(i) {
+		struct delayed_work *w = &per_cpu(bpf_prog_delayed_work, i);
+
+		INIT_DELAYED_WORK(w, bpf_done_on_this_cpu);
+	}
+	return 0;
+}
+subsys_initcall(bpf_init_delayed_work);
+
+/*
+ * Must be called with preempt disabled
+ *
+ * schedule_delayed_work_on() is relatively expensive, so
+ * someone doing a bunch of repeated BPF calls only pays the
+ * cost of scheduling work on the *first* BPF call.  The
+ * subsequent calls only pay the cost of incrementing a
+ * per-cpu variable, which is cheap.
+ */
+void bpf_leave_prog_deferred(const struct bpf_prog *fp)
+{
+	int cpu = smp_processor_id();
+	struct delayed_work *w = &per_cpu(bpf_prog_delayed_work, cpu);
+	unsigned long delay_jiffies = msecs_to_jiffies(10);
+
+	schedule_delayed_work_on(cpu, w, delay_jiffies);
+}
+EXPORT_SYMBOL_GPL(bpf_leave_prog_deferred);
-- 
2.9.5
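
To make the lifecycle of bpf_prog_ran concrete, an illustrative
timeline for one CPU (the ~10ms comes from the msecs_to_jiffies(10)
above; what the mitigation code does with the 0/non-0 state is up to
the other patches in the series):

  /*
   * Illustrative timeline, one CPU (not part of the patch):
   *
   *   bpf_enter_prog()        count 0 -> 1, preemption disabled
   *   <program runs>
   *   bpf_leave_prog()        count == 1: queue delayed work, count
   *                           stays at 1, preemption re-enabled
   *   bpf_enter_prog()        count 1 -> 2  (another program)
   *   bpf_leave_prog()        count 2 -> 1  (work already queued)
   *   ... ~10ms after the first leave ...
   *   bpf_done_on_this_cpu()  count 1 -> 0; the CPU no longer looks
   *                           like it is running BPF
   */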

Thread overview: 31+ messages
2018-05-03 22:29 [MODERATED] [PATCH 0/5] SSB extra 0 Dave Hansen
2018-05-03 22:29 ` [MODERATED] [PATCH 1/5] SSB extra 2 Dave Hansen
2018-05-03 22:29 ` Dave Hansen [this message]
2018-05-03 22:29 ` [MODERATED] [PATCH 3/5] SSB extra 1 Dave Hansen
2018-05-03 22:29 ` [MODERATED] [PATCH 4/5] SSB extra 5 Dave Hansen
2018-05-03 22:29 ` [MODERATED] [PATCH 5/5] SSB extra 4 Dave Hansen
2018-05-03 23:27 ` [MODERATED] Re: [PATCH 0/5] SSB extra 0 Kees Cook
2018-05-04  1:37   ` Dave Hansen
2018-05-04 22:26     ` Kees Cook
2018-05-23  7:17       ` [MODERATED] cBPF affectedness (was Re: [PATCH 0/5] SSB extra 0) Jiri Kosina
2018-05-23 13:56         ` [MODERATED] " Alexei Starovoitov
2018-05-04  9:20 ` [MODERATED] Re: [PATCH 1/5] SSB extra 2 Peter Zijlstra
2018-05-04 14:04   ` Dave Hansen
2018-05-04 15:50     ` Peter Zijlstra
2018-05-04 15:54       ` Linus Torvalds
2018-05-04 13:33 ` [PATCH 3/5] SSB extra 1 Thomas Gleixner
2018-05-04 14:22   ` [MODERATED] " Dave Hansen
2018-05-04 14:26     ` Thomas Gleixner
2018-05-04 16:04       ` [MODERATED] " Andi Kleen
2018-05-04 16:09         ` Thomas Gleixner
2018-05-04 16:28           ` [MODERATED] " Andi Kleen
2018-05-04 16:32             ` Thomas Gleixner
2018-05-04 16:43               ` [MODERATED] " Dave Hansen
2018-05-04 18:39                 ` Thomas Gleixner
2018-05-06  8:32                   ` Thomas Gleixner
2018-05-06 21:48                     ` Thomas Gleixner
2018-05-06 22:40                       ` [MODERATED] " Dave Hansen
2018-05-07  6:19                         ` Thomas Gleixner
2018-05-04 17:01 ` [MODERATED] Re: [PATCH 4/5] SSB extra 5 Konrad Rzeszutek Wilk
2018-05-21  9:56 ` [MODERATED] Re: [PATCH 5/5] SSB extra 4 Jiri Kosina
2018-05-21 13:38   ` Thomas Gleixner
