* [PATCH v3 00/35] bitops: add atomic find_bit() operations
@ 2023-12-12 2:27 Yury Norov
2023-12-12 2:27 ` [PATCH v3 01/35] lib/find: add atomic find_bit() primitives Yury Norov
` (4 more replies)
0 siblings, 5 replies; 6+ messages in thread
From: Yury Norov @ 2023-12-12 2:27 UTC (permalink / raw)
To: linux-kernel, David S. Miller, H. Peter Anvin,
James E.J. Bottomley, K. Y. Srinivasan, Md. Haris Iqbal,
Akinobu Mita, Andrew Morton, Bjorn Andersson, Borislav Petkov,
Chaitanya Kulkarni, Christian Brauner, Damien Le Moal,
Dave Hansen, David Disseldorp, Edward Cree, Eric Dumazet,
Fenghua Yu, Geert Uytterhoeven, Greg Kroah-Hartman,
Gregory Greenman, Hans Verkuil, Hans de Goede, Hugh Dickins,
Ingo Molnar, Jakub Kicinski, Jaroslav Kysela, Jason Gunthorpe,
Jens Axboe, Jiri Pirko, Jiri Slaby, Kalle Valo, Karsten Graul,
Karsten Keil, Kees Cook, Leon Romanovsky, Mark Rutland,
Martin Habets, Mauro Carvalho Chehab, Michael Ellerman,
Michal Simek, Nicholas Piggin, Oliver Neukum, Paolo Abeni,
Paolo Bonzini, Peter Zijlstra, Ping-Ke Shih, Rich Felker,
Rob Herring, Robin Murphy, Sean Christopherson, Shuai Xue,
Stanislaw Gruszka, Steven Rostedt, Thomas Bogendoerfer,
Thomas Gleixner, Valentin Schneider, Vitaly Kuznetsov,
Wenjia Zhang, Will Deacon, Yoshinori Sato,
GR-QLogic-Storage-Upstream, alsa-devel, ath10k, dmaengine, iommu,
kvm, linux-arm-kernel, linux-arm-msm, linux-block,
linux-bluetooth, linux-hyperv, linux-m68k, linux-media,
linux-mips, linux-net-drivers, linux-pci, linux-rdma, linux-s390,
linux-scsi, linux-serial, linux-sh, linux-sound, linux-usb,
linux-wireless, linuxppc-dev, mpi3mr-linuxdrv.pdl, netdev,
sparclinux, x86
Cc: Yury Norov, Jan Kara, Mirsad Todorovac, Matthew Wilcox,
Rasmus Villemoes, Andy Shevchenko, Maxim Kuvyrkov, Alexey Klimov,
Bart Van Assche, Sergey Shtylyov
Add helpers around test_and_{set,clear}_bit() that allow searching for
clear or set bits and flipping them atomically.
The target patterns may look like this:
        for (idx = 0; idx < nbits; idx++)
                if (test_and_clear_bit(idx, bitmap))
                        do_something(idx);
Or like this:
        do {
                bit = find_first_bit(bitmap, nbits);
                if (bit >= nbits)
                        return nbits;
        } while (!test_and_clear_bit(bit, bitmap));
        return bit;
In both cases, the open-coded loop may be converted to a single function
or iterator call. Correspondingly:
        for_each_test_and_clear_bit(idx, bitmap, nbits)
                do_something(idx);
Or:
        return find_and_clear_bit(bitmap, nbits);
Obviously, the less routine code people have to write themselves, the
lower the probability of making a mistake.
Those are not only handy helpers but also resolve a non-trivial
issue of using non-atomic find_bit() together with atomic
test_and_{set,clear}_bit().
The trick is that find_bit() implies that the bitmap is a regular
non-volatile piece of memory, and the compiler is allowed to use such
optimization techniques as re-fetching memory instead of caching it.
For example, find_first_bit() is implemented like this:
        for (idx = 0; idx * BITS_PER_LONG < sz; idx++) {
                val = addr[idx];
                if (val) {
                        sz = min(idx * BITS_PER_LONG + __ffs(val), sz);
                        break;
                }
        }
On register-memory architectures, like x86, the compiler may decide to
access the memory twice: the first time to compare it against 0, and the
second time to fetch its value and pass it to __ffs().
When running find_first_bit() on volatile memory, the memory may get
changed in between the two accesses, which may, for instance, lead to
passing 0 to __ffs(), whose behavior is undefined. This is a potentially
dangerous call.
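To illustrate, a sketch of the problematic transformation (not actual
compiler output, just its effective behavior) may look like this:
        /*
         * The bitmap word is effectively read twice, so a concurrent
         * clear_bit() between the two reads may hand 0 to __ffs().
         */
        if (addr[idx])          /* 1st read: word is non-zero */
                sz = min(idx * BITS_PER_LONG + __ffs(addr[idx]), sz);  /* 2nd read: may be 0 */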
find_and_clear_bit(), as a wrapper around test_and_clear_bit(),
naturally treats the underlying bitmap as volatile memory and prevents
the compiler from applying such optimizations.
KCSAN now catches exactly this type of situation and warns about such
concurrent memory modifications. We can use it to reveal improper usage
of find_bit(), and convert those users to atomic find_and_*_bit() as
appropriate.
In some cases, concurrent operations with plain find_bit() are acceptable.
For example:
- two threads running find_*_bit(): safe with respect to ffs(0), and the
  correct value is returned, because the underlying bitmap is unchanged;
- find_next_bit() in parallel with set_bit() or clear_bit(), when the
  modified bit is located before the start bit of the search: safe and
  correct;
- find_first_bit() in parallel with set_bit(): safe, but may return a
  wrong bit number;
- find_first_zero_bit() in parallel with clear_bit(): same as above.
In the last two cases, find_bit() may not return the correct bit number,
but that may be OK if the caller only requires any (not necessarily the
first) set or clear bit, respectively.
In such cases, KCSAN may be safely silenced with data_race(). But in most
cases where KCSAN detects concurrency, people should carefully review
their code and likely protect critical sections, or switch to the atomic
find_and_bit() API, as appropriate.
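For example, a benign race of the kind listed above, where any (not
necessarily the first) set bit is acceptable, may be annotated like this
(an illustrative sketch, not taken from the series):
        /* Any set bit will do here, so the race with set_bit() is benign. */
        bit = data_race(find_first_bit(bitmap, nbits));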
The 1st patch of the series adds the following atomic primitives:
        find_and_set_bit(addr, nbits);
        find_and_set_next_bit(addr, nbits, start);
        ...
Here, the find_and_{set,clear} part refers to the corresponding
test_and_{set,clear}_bit() function. Suffixes like _wrap or _lock
derive their semantics from the corresponding find() or test() functions.
For brevity, the naming omits the fact that we search for a zero bit in
the find_and_set() functions, and correspondingly for a set bit in the
find_and_clear() functions.
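For example, a trivial ID allocator built on top of the new API may look
like this (an illustrative sketch; 'idmap' and 'NR_IDS' are made-up names,
not part of this series):
        static DECLARE_BITMAP(idmap, NR_IDS);

        static int alloc_id(void)
        {
                /* find a free (zero) bit and claim it atomically */
                unsigned long id = find_and_set_bit(idmap, NR_IDS);

                return id < NR_IDS ? id : -ENOSPC;
        }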
The patch also adds iterators with atomic semantics, like
for_each_test_and_set_bit(). Here, the naming rule is to simply prefix
the corresponding atomic operation with 'for_each'.
In [1], Jan reported a 2% slowdown in a single-threaded search test when
switching the find_bit() functions to treat bitmaps as volatile arrays. On
the other hand, the kernel test robot in the same thread reported a +3.7%
gain in the will-it-scale.per_thread_ops test.
Assuming that our compilers are sane and generate better code for
properly annotated data, the above discrepancy is not surprising: when
running on non-volatile bitmaps, plain find_bit() outperforms atomic
find_and_bit(), and vice versa.
So, all users of the find_bit() API where heavy concurrency is expected
are encouraged to switch to the atomic find_and_bit() API as appropriate.
The 1st patch of this series adds the atomic find_and_bit() API, the 2nd
adds a basic test for the new API, and the following patches spread it
over the kernel.
They can be applied separately from each other on a per-subsystem basis,
or I can pull them into the bitmap tree, as appropriate.
[1] https://lore.kernel.org/lkml/634f5fdf-e236-42cf-be8d-48a581c21660@alu.unizg.hr/T/#m3e7341eb3571753f3acf8fe166f3fb5b2c12e615
---
v1: https://lore.kernel.org/netdev/20231118155105.25678-29-yury.norov@gmail.com/T/
v2: https://lore.kernel.org/all/20231204185101.ddmkvsr2xxsmoh2u@quack3/T/
v3:
- collect more reviews;
- align wording in commit messages @ Bjorn Helgaas;
- add examples where non-atomic find_bit() may safely race @ Jan Kara;
- patch #3: use if-else instead of ternary operator @ Jens Axboe;
- patch #13: align coding style @ Vitaly Kuznetsov, Sean Christopherson;
Yury Norov (35):
lib/find: add atomic find_bit() primitives
lib/find: add test for atomic find_bit() ops
lib/sbitmap: optimize __sbitmap_get_word() by using find_and_set_bit()
watch_queue: optimize post_one_notification() by using
find_and_clear_bit()
sched: add cpumask_find_and_set() and use it in __mm_cid_get()
mips: sgi-ip30: optimize heart_alloc_int() by using find_and_set_bit()
sparc: optimize alloc_msi() by using find_and_set_bit()
perf/arm: use atomic find_bit() API
drivers/perf: optimize ali_drw_get_counter_idx() by using
find_and_set_bit()
dmaengine: idxd: optimize perfmon_assign_event()
ath10k: optimize ath10k_snoc_napi_poll() with an atomic iterator
wifi: rtw88: optimize the driver by using atomic iterator
KVM: x86: hyper-v: optimize and cleanup kvm_hv_process_stimers()
PCI: hv: Optimize hv_get_dom_num() by using find_and_set_bit()
scsi: core: optimize scsi_evt_emit() by using an atomic iterator
scsi: mpi3mr: optimize the driver by using find_and_set_bit()
scsi: qedi: optimize qedi_get_task_idx() by using find_and_set_bit()
powerpc: optimize arch code by using atomic find_bit() API
iommu: optimize subsystem by using atomic find_bit() API
media: radio-shark: optimize driver by using atomic find_bit() API
sfc: optimize driver by using atomic find_bit() API
tty: nozomi: optimize interrupt_handler()
usb: cdc-acm: optimize acm_softint()
block: null_blk: replace get_tag() with a generic
find_and_set_bit_lock()
RDMA/rtrs: optimize __rtrs_get_permit() by using
find_and_set_bit_lock()
mISDN: optimize get_free_devid()
media: em28xx: cx231xx: optimize drivers by using find_and_set_bit()
ethernet: rocker: optimize ofdpa_port_internal_vlan_id_get()
serial: sc16is7xx: optimize sc16is7xx_alloc_line()
bluetooth: optimize cmtp_alloc_block_id()
net: smc: optimize smc_wr_tx_get_free_slot_index()
ALSA: use atomic find_bit() functions where applicable
m68k: optimize get_mmu_context()
microblaze: optimize get_mmu_context()
sh: mach-x3proto: optimize ilsel_enable()
arch/m68k/include/asm/mmu_context.h | 11 +-
arch/microblaze/include/asm/mmu_context_mm.h | 11 +-
arch/mips/sgi-ip30/ip30-irq.c | 12 +-
arch/powerpc/mm/book3s32/mmu_context.c | 10 +-
arch/powerpc/platforms/pasemi/dma_lib.c | 45 +--
arch/powerpc/platforms/powernv/pci-sriov.c | 12 +-
arch/sh/boards/mach-x3proto/ilsel.c | 4 +-
arch/sparc/kernel/pci_msi.c | 9 +-
arch/x86/kvm/hyperv.c | 40 +--
drivers/block/null_blk/main.c | 41 +--
drivers/dma/idxd/perfmon.c | 8 +-
drivers/infiniband/ulp/rtrs/rtrs-clt.c | 15 +-
drivers/iommu/arm/arm-smmu/arm-smmu.h | 10 +-
drivers/iommu/msm_iommu.c | 18 +-
drivers/isdn/mISDN/core.c | 9 +-
drivers/media/radio/radio-shark.c | 5 +-
drivers/media/radio/radio-shark2.c | 5 +-
drivers/media/usb/cx231xx/cx231xx-cards.c | 16 +-
drivers/media/usb/em28xx/em28xx-cards.c | 37 +--
drivers/net/ethernet/rocker/rocker_ofdpa.c | 11 +-
drivers/net/ethernet/sfc/rx_common.c | 4 +-
drivers/net/ethernet/sfc/siena/rx_common.c | 4 +-
drivers/net/ethernet/sfc/siena/siena_sriov.c | 14 +-
drivers/net/wireless/ath/ath10k/snoc.c | 9 +-
drivers/net/wireless/realtek/rtw88/pci.c | 5 +-
drivers/net/wireless/realtek/rtw89/pci.c | 5 +-
drivers/pci/controller/pci-hyperv.c | 7 +-
drivers/perf/alibaba_uncore_drw_pmu.c | 10 +-
drivers/perf/arm-cci.c | 24 +-
drivers/perf/arm-ccn.c | 10 +-
drivers/perf/arm_dmc620_pmu.c | 9 +-
drivers/perf/arm_pmuv3.c | 8 +-
drivers/scsi/mpi3mr/mpi3mr_os.c | 21 +-
drivers/scsi/qedi/qedi_main.c | 9 +-
drivers/scsi/scsi_lib.c | 7 +-
drivers/tty/nozomi.c | 5 +-
drivers/tty/serial/sc16is7xx.c | 8 +-
drivers/usb/class/cdc-acm.c | 5 +-
include/linux/cpumask.h | 12 +
include/linux/find.h | 293 +++++++++++++++++++
kernel/sched/sched.h | 14 +-
kernel/watch_queue.c | 6 +-
lib/find_bit.c | 85 ++++++
lib/sbitmap.c | 46 +--
lib/test_bitmap.c | 61 ++++
net/bluetooth/cmtp/core.c | 10 +-
net/smc/smc_wr.c | 10 +-
sound/pci/hda/hda_codec.c | 7 +-
sound/usb/caiaq/audio.c | 13 +-
49 files changed, 631 insertions(+), 419 deletions(-)
--
2.40.1
* [PATCH v3 01/35] lib/find: add atomic find_bit() primitives
2023-12-12 2:27 [PATCH v3 00/35] bitops: add atomic find_bit() operations Yury Norov
@ 2023-12-12 2:27 ` Yury Norov
2023-12-12 2:27 ` [PATCH v3 02/35] lib/find: add test for atomic find_bit() ops Yury Norov
` (3 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Yury Norov @ 2023-12-12 2:27 UTC (permalink / raw)
To: linux-kernel, David S. Miller, H. Peter Anvin,
James E.J. Bottomley, K. Y. Srinivasan, Md. Haris Iqbal,
Akinobu Mita, Andrew Morton, Bjorn Andersson, Borislav Petkov,
Chaitanya Kulkarni, Christian Brauner, Damien Le Moal,
Dave Hansen, David Disseldorp, Edward Cree, Eric Dumazet,
Fenghua Yu, Geert Uytterhoeven, Greg Kroah-Hartman,
Gregory Greenman, Hans Verkuil, Hans de Goede, Hugh Dickins,
Ingo Molnar, Jakub Kicinski, Jaroslav Kysela, Jason Gunthorpe,
Jens Axboe, Jiri Pirko, Jiri Slaby, Kalle Valo, Karsten Graul,
Karsten Keil, Kees Cook, Leon Romanovsky, Mark Rutland,
Martin Habets, Mauro Carvalho Chehab, Michael Ellerman,
Michal Simek, Nicholas Piggin, Oliver Neukum, Paolo Abeni,
Paolo Bonzini, Peter Zijlstra, Ping-Ke Shih, Rich Felker,
Rob Herring, Robin Murphy, Sean Christopherson, Shuai Xue,
Stanislaw Gruszka, Steven Rostedt, Thomas Bogendoerfer,
Thomas Gleixner, Valentin Schneider, Vitaly Kuznetsov,
Wenjia Zhang, Will Deacon, Yoshinori Sato,
GR-QLogic-Storage-Upstream, alsa-devel, ath10k, dmaengine, iommu,
kvm, linux-arm-kernel, linux-arm-msm, linux-block,
linux-bluetooth, linux-hyperv, linux-m68k, linux-media,
linux-mips, linux-net-drivers, linux-pci, linux-rdma, linux-s390,
linux-scsi, linux-serial, linux-sh, linux-sound, linux-usb,
linux-wireless, linuxppc-dev, mpi3mr-linuxdrv.pdl, netdev,
sparclinux, x86
Cc: Yury Norov, Jan Kara, Mirsad Todorovac, Matthew Wilcox,
Rasmus Villemoes, Andy Shevchenko, Maxim Kuvyrkov, Alexey Klimov,
Bart Van Assche, Sergey Shtylyov
Add helpers around test_and_{set,clear}_bit() that allow searching for
clear or set bits and flipping them atomically.
The target patterns may look like this:
        for (idx = 0; idx < nbits; idx++)
                if (test_and_clear_bit(idx, bitmap))
                        do_something(idx);
Or like this:
        do {
                bit = find_first_bit(bitmap, nbits);
                if (bit >= nbits)
                        return nbits;
        } while (!test_and_clear_bit(bit, bitmap));
        return bit;
In both cases, the open-coded loop may be converted to a single function
or iterator call. Correspondingly:
        for_each_test_and_clear_bit(idx, bitmap, nbits)
                do_something(idx);
Or:
        return find_and_clear_bit(bitmap, nbits);
Obviously, the less routine code people have to write themselves, the
lower the probability of making a mistake.
Those are not only handy helpers but also resolve a non-trivial
issue of using non-atomic find_bit() together with atomic
test_and_{set,clear}_bit().
The trick is that find_bit() implies that the bitmap is a regular
non-volatile piece of memory, and the compiler is allowed to use such
optimization techniques as re-fetching memory instead of caching it.
For example, find_first_bit() is implemented like this:
        for (idx = 0; idx * BITS_PER_LONG < sz; idx++) {
                val = addr[idx];
                if (val) {
                        sz = min(idx * BITS_PER_LONG + __ffs(val), sz);
                        break;
                }
        }
On register-memory architectures, like x86, the compiler may decide to
access the memory twice: the first time to compare it against 0, and the
second time to fetch its value and pass it to __ffs().
When running find_first_bit() on volatile memory, the memory may get
changed in between the two accesses, which may, for instance, lead to
passing 0 to __ffs(), whose behavior is undefined. This is a potentially
dangerous call.
find_and_clear_bit(), as a wrapper around test_and_clear_bit(),
naturally treats the underlying bitmap as volatile memory and prevents
the compiler from applying such optimizations.
KCSAN now catches exactly this type of situation and warns about such
concurrent memory modifications. We can use it to reveal improper usage
of find_bit(), and convert those users to atomic find_and_*_bit() as
appropriate.
In some cases, concurrent operations with plain find_bit() are acceptable.
For example:
- two threads running find_*_bit(): safe with respect to ffs(0), and the
  correct value is returned, because the underlying bitmap is unchanged;
- find_next_bit() in parallel with set_bit() or clear_bit(), when the
  modified bit is located before the start bit of the search: safe and
  correct;
- find_first_bit() in parallel with set_bit(): safe, but may return a
  wrong bit number;
- find_first_zero_bit() in parallel with clear_bit(): same as above.
In the last two cases, find_bit() may not return the correct bit number,
but that may be OK if the caller only requires any (not necessarily the
first) set or clear bit, respectively.
In such cases, KCSAN may be safely silenced with data_race(). But in most
cases where KCSAN detects concurrency, people should carefully review
their code and likely protect critical sections, or switch to the atomic
find_and_bit() API, as appropriate.
The 1st patch of the series adds the following atomic primitives:
        find_and_set_bit(addr, nbits);
        find_and_set_next_bit(addr, nbits, start);
        ...
Here, the find_and_{set,clear} part refers to the corresponding
test_and_{set,clear}_bit() function. Suffixes like _wrap or _lock
derive their semantics from the corresponding find() or test() functions.
For brevity, the naming omits the fact that we search for a zero bit in
the find_and_set() functions, and correspondingly for a set bit in the
find_and_clear() functions.
The patch also adds iterators with atomic semantics, like
for_each_test_and_set_bit(). Here, the naming rule is to simply prefix
the corresponding atomic operation with 'for_each'.
CC: Bart Van Assche <bvanassche@acm.org>
CC: Sergey Shtylyov <s.shtylyov@omp.ru>
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
include/linux/find.h | 293 +++++++++++++++++++++++++++++++++++++++++++
lib/find_bit.c | 85 +++++++++++++
2 files changed, 378 insertions(+)
diff --git a/include/linux/find.h b/include/linux/find.h
index 5e4f39ef2e72..237513356ffa 100644
--- a/include/linux/find.h
+++ b/include/linux/find.h
@@ -32,6 +32,16 @@ extern unsigned long _find_first_and_bit(const unsigned long *addr1,
extern unsigned long _find_first_zero_bit(const unsigned long *addr, unsigned long size);
extern unsigned long _find_last_bit(const unsigned long *addr, unsigned long size);
+unsigned long _find_and_set_bit(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_set_next_bit(volatile unsigned long *addr, unsigned long nbits,
+ unsigned long start);
+unsigned long _find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_set_next_bit_lock(volatile unsigned long *addr, unsigned long nbits,
+ unsigned long start);
+unsigned long _find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits);
+unsigned long _find_and_clear_next_bit(volatile unsigned long *addr, unsigned long nbits,
+ unsigned long start);
+
#ifdef __BIG_ENDIAN
unsigned long _find_first_zero_bit_le(const unsigned long *addr, unsigned long size);
unsigned long _find_next_zero_bit_le(const unsigned long *addr, unsigned
@@ -460,6 +470,267 @@ unsigned long __for_each_wrap(const unsigned long *bitmap, unsigned long size,
return bit < start ? bit : size;
}
+/**
+ * find_and_set_bit - Find a zero bit and set it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is full.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+ if (small_const_nbits(nbits)) {
+ unsigned long val, ret;
+
+ do {
+ val = *addr | ~GENMASK(nbits - 1, 0);
+ if (val == ~0UL)
+ return nbits;
+ ret = ffz(val);
+ } while (test_and_set_bit(ret, addr));
+
+ return ret;
+ }
+
+ return _find_and_set_bit(addr, nbits);
+}
+
+
+/**
+ * find_and_set_next_bit - Find a zero bit and set it, starting from @offset
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap, starting from @offset.
+ * It's also not guaranteed that if @nbits is returned, the bitmap is full.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_next_bit(volatile unsigned long *addr,
+ unsigned long nbits, unsigned long offset)
+{
+ if (small_const_nbits(nbits)) {
+ unsigned long val, ret;
+
+ do {
+ val = *addr | ~GENMASK(nbits - 1, offset);
+ if (val == ~0UL)
+ return nbits;
+ ret = ffz(val);
+ } while (test_and_set_bit(ret, addr));
+
+ return ret;
+ }
+
+ return _find_and_set_next_bit(addr, nbits, offset);
+}
+
+/**
+ * find_and_set_bit_wrap - find and set bit starting at @offset, wrapping around zero
+ * @addr: The first address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * Returns: the bit number for the next clear bit, or first clear bit up to @offset,
+ * while atomically setting it. If no bits are found, returns @nbits.
+ */
+static inline
+unsigned long find_and_set_bit_wrap(volatile unsigned long *addr,
+ unsigned long nbits, unsigned long offset)
+{
+ unsigned long bit = find_and_set_next_bit(addr, nbits, offset);
+
+ if (bit < nbits || offset == 0)
+ return bit;
+
+ bit = find_and_set_bit(addr, offset);
+ return bit < offset ? bit : nbits;
+}
+
+/**
+ * find_and_set_bit_lock - find a zero bit, then set it atomically with lock
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is full.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits)
+{
+ if (small_const_nbits(nbits)) {
+ unsigned long val, ret;
+
+ do {
+ val = *addr | ~GENMASK(nbits - 1, 0);
+ if (val == ~0UL)
+ return nbits;
+ ret = ffz(val);
+ } while (test_and_set_bit_lock(ret, addr));
+
+ return ret;
+ }
+
+ return _find_and_set_bit_lock(addr, nbits);
+}
+
+/**
+ * find_and_set_next_bit_lock - find a zero bit and set it atomically with lock
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the range. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is full.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and set bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_set_next_bit_lock(volatile unsigned long *addr,
+ unsigned long nbits, unsigned long offset)
+{
+ if (small_const_nbits(nbits)) {
+ unsigned long val, ret;
+
+ do {
+ val = *addr | ~GENMASK(nbits - 1, offset);
+ if (val == ~0UL)
+ return nbits;
+ ret = ffz(val);
+ } while (test_and_set_bit_lock(ret, addr));
+
+ return ret;
+ }
+
+ return _find_and_set_next_bit_lock(addr, nbits, offset);
+}
+
+/**
+ * find_and_set_bit_wrap_lock - find a zero bit starting at @offset and set it
+ * with lock, wrapping around zero if nothing found
+ * @addr: The first address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: The bitnumber to start searching at
+ *
+ * Returns: the bit number for the next clear bit, or first clear bit up to @offset,
+ * while atomically setting it. If no bits are found, returns @nbits.
+ */
+static inline
+unsigned long find_and_set_bit_wrap_lock(volatile unsigned long *addr,
+ unsigned long nbits, unsigned long offset)
+{
+ unsigned long bit = find_and_set_next_bit_lock(addr, nbits, offset);
+
+ if (bit < nbits || offset == 0)
+ return bit;
+
+ bit = find_and_set_bit_lock(addr, offset);
+ return bit < offset ? bit : nbits;
+}
+
+/**
+ * find_and_clear_bit - Find a set bit and clear it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the bitmap. It's also not
+ * guaranteed that if @nbits is returned, the bitmap is empty.
+ *
+ * The function does guarantee that if returned value is in range [0 .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and cleared bit, or @nbits if no bits found
+ */
+static inline unsigned long find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+ if (small_const_nbits(nbits)) {
+ unsigned long val, ret;
+
+ do {
+ val = *addr & GENMASK(nbits - 1, 0);
+ if (val == 0)
+ return nbits;
+ ret = __ffs(val);
+ } while (!test_and_clear_bit(ret, addr));
+
+ return ret;
+ }
+
+ return _find_and_clear_bit(addr, nbits);
+}
+
+/**
+ * find_and_clear_next_bit - Find a set bit next after @offset, and clear it atomically
+ * @addr: The address to base the search on
+ * @nbits: The bitmap size in bits
+ * @offset: bit offset at which to start searching
+ *
+ * This function is designed to operate in concurrent access environment.
+ *
+ * Because of concurrency and volatile nature of underlying bitmap, it's not
+ * guaranteed that the found bit is the 1st bit in the range. It's also not
+ * guaranteed that if @nbits is returned, there are no set bits after @offset.
+ *
+ * The function does guarantee that if returned value is in range [@offset .. @nbits),
+ * the acquired bit belongs to the caller exclusively.
+ *
+ * Returns: found and cleared bit, or @nbits if no bits found
+ */
+static inline
+unsigned long find_and_clear_next_bit(volatile unsigned long *addr,
+ unsigned long nbits, unsigned long offset)
+{
+ if (small_const_nbits(nbits)) {
+ unsigned long val, ret;
+
+ do {
+ val = *addr & GENMASK(nbits - 1, offset);
+ if (val == 0)
+ return nbits;
+ ret = __ffs(val);
+ } while (!test_and_clear_bit(ret, addr));
+
+ return ret;
+ }
+
+ return _find_and_clear_next_bit(addr, nbits, offset);
+}
+
/**
* find_next_clump8 - find next 8-bit clump with set bits in a memory region
* @clump: location to store copy of found clump
@@ -577,6 +848,28 @@ unsigned long find_next_bit_le(const void *addr, unsigned
#define for_each_set_bit_from(bit, addr, size) \
for (; (bit) = find_next_bit((addr), (size), (bit)), (bit) < (size); (bit)++)
+/* same as for_each_set_bit() but atomically clears each found bit */
+#define for_each_test_and_clear_bit(bit, addr, size) \
+ for ((bit) = 0; \
+ (bit) = find_and_clear_next_bit((addr), (size), (bit)), (bit) < (size); \
+ (bit)++)
+
+/* same as for_each_set_bit_from() but atomically clears each found bit */
+#define for_each_test_and_clear_bit_from(bit, addr, size) \
+ for (; (bit) = find_and_clear_next_bit((addr), (size), (bit)), (bit) < (size); (bit)++)
+
+/* same as for_each_clear_bit() but atomically sets each found bit */
+#define for_each_test_and_set_bit(bit, addr, size) \
+ for ((bit) = 0; \
+ (bit) = find_and_set_next_bit((addr), (size), (bit)), (bit) < (size); \
+ (bit)++)
+
+/* same as for_each_clear_bit_from() but atomically sets each found bit */
+#define for_each_test_and_set_bit_from(bit, addr, size) \
+ for (; \
+ (bit) = find_and_set_next_bit((addr), (size), (bit)), (bit) < (size); \
+ (bit)++)
+
#define for_each_clear_bit(bit, addr, size) \
for ((bit) = 0; \
(bit) = find_next_zero_bit((addr), (size), (bit)), (bit) < (size); \
diff --git a/lib/find_bit.c b/lib/find_bit.c
index 32f99e9a670e..c9b6b9f96610 100644
--- a/lib/find_bit.c
+++ b/lib/find_bit.c
@@ -116,6 +116,91 @@ unsigned long _find_first_and_bit(const unsigned long *addr1,
EXPORT_SYMBOL(_find_first_and_bit);
#endif
+unsigned long _find_and_set_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+ unsigned long bit;
+
+ do {
+ bit = FIND_FIRST_BIT(~addr[idx], /* nop */, nbits);
+ if (bit >= nbits)
+ return nbits;
+ } while (test_and_set_bit(bit, addr));
+
+ return bit;
+}
+EXPORT_SYMBOL(_find_and_set_bit);
+
+unsigned long _find_and_set_next_bit(volatile unsigned long *addr,
+ unsigned long nbits, unsigned long start)
+{
+ unsigned long bit;
+
+ do {
+ bit = FIND_NEXT_BIT(~addr[idx], /* nop */, nbits, start);
+ if (bit >= nbits)
+ return nbits;
+ } while (test_and_set_bit(bit, addr));
+
+ return bit;
+}
+EXPORT_SYMBOL(_find_and_set_next_bit);
+
+unsigned long _find_and_set_bit_lock(volatile unsigned long *addr, unsigned long nbits)
+{
+ unsigned long bit;
+
+ do {
+ bit = FIND_FIRST_BIT(~addr[idx], /* nop */, nbits);
+ if (bit >= nbits)
+ return nbits;
+ } while (test_and_set_bit_lock(bit, addr));
+
+ return bit;
+}
+EXPORT_SYMBOL(_find_and_set_bit_lock);
+
+unsigned long _find_and_set_next_bit_lock(volatile unsigned long *addr,
+ unsigned long nbits, unsigned long start)
+{
+ unsigned long bit;
+
+ do {
+ bit = FIND_NEXT_BIT(~addr[idx], /* nop */, nbits, start);
+ if (bit >= nbits)
+ return nbits;
+ } while (test_and_set_bit_lock(bit, addr));
+
+ return bit;
+}
+EXPORT_SYMBOL(_find_and_set_next_bit_lock);
+
+unsigned long _find_and_clear_bit(volatile unsigned long *addr, unsigned long nbits)
+{
+ unsigned long bit;
+
+ do {
+ bit = FIND_FIRST_BIT(addr[idx], /* nop */, nbits);
+ if (bit >= nbits)
+ return nbits;
+ } while (!test_and_clear_bit(bit, addr));
+
+ return bit;
+}
+EXPORT_SYMBOL(_find_and_clear_bit);
+
+unsigned long _find_and_clear_next_bit(volatile unsigned long *addr,
+ unsigned long nbits, unsigned long start)
+{
+ do {
+ start = FIND_NEXT_BIT(addr[idx], /* nop */, nbits, start);
+ if (start >= nbits)
+ return nbits;
+ } while (!test_and_clear_bit(start, addr));
+
+ return start;
+}
+EXPORT_SYMBOL(_find_and_clear_next_bit);
+
#ifndef find_first_zero_bit
/*
* Find the first cleared bit in a memory region.
--
2.40.1
* [PATCH v3 02/35] lib/find: add test for atomic find_bit() ops
2023-12-12 2:27 [PATCH v3 00/35] bitops: add atomic find_bit() operations Yury Norov
2023-12-12 2:27 ` [PATCH v3 01/35] lib/find: add atomic find_bit() primitives Yury Norov
@ 2023-12-12 2:27 ` Yury Norov
2023-12-12 2:27 ` [PATCH v3 22/35] tty: nozomi: optimize interrupt_handler() Yury Norov
` (2 subsequent siblings)
4 siblings, 0 replies; 6+ messages in thread
From: Yury Norov @ 2023-12-12 2:27 UTC (permalink / raw)
To: linux-kernel, David S. Miller, H. Peter Anvin,
James E.J. Bottomley, K. Y. Srinivasan, Md. Haris Iqbal,
Akinobu Mita, Andrew Morton, Bjorn Andersson, Borislav Petkov,
Chaitanya Kulkarni, Christian Brauner, Damien Le Moal,
Dave Hansen, David Disseldorp, Edward Cree, Eric Dumazet,
Fenghua Yu, Geert Uytterhoeven, Greg Kroah-Hartman,
Gregory Greenman, Hans Verkuil, Hans de Goede, Hugh Dickins,
Ingo Molnar, Jakub Kicinski, Jaroslav Kysela, Jason Gunthorpe,
Jens Axboe, Jiri Pirko, Jiri Slaby, Kalle Valo, Karsten Graul,
Karsten Keil, Kees Cook, Leon Romanovsky, Mark Rutland,
Martin Habets, Mauro Carvalho Chehab, Michael Ellerman,
Michal Simek, Nicholas Piggin, Oliver Neukum, Paolo Abeni,
Paolo Bonzini, Peter Zijlstra, Ping-Ke Shih, Rich Felker,
Rob Herring, Robin Murphy, Sean Christopherson, Shuai Xue,
Stanislaw Gruszka, Steven Rostedt, Thomas Bogendoerfer,
Thomas Gleixner, Valentin Schneider, Vitaly Kuznetsov,
Wenjia Zhang, Will Deacon, Yoshinori Sato,
GR-QLogic-Storage-Upstream, alsa-devel, ath10k, dmaengine, iommu,
kvm, linux-arm-kernel, linux-arm-msm, linux-block,
linux-bluetooth, linux-hyperv, linux-m68k, linux-media,
linux-mips, linux-net-drivers, linux-pci, linux-rdma, linux-s390,
linux-scsi, linux-serial, linux-sh, linux-sound, linux-usb,
linux-wireless, linuxppc-dev, mpi3mr-linuxdrv.pdl, netdev,
sparclinux, x86
Cc: Yury Norov, Jan Kara, Mirsad Todorovac, Matthew Wilcox,
Rasmus Villemoes, Andy Shevchenko, Maxim Kuvyrkov, Alexey Klimov,
Bart Van Assche, Sergey Shtylyov
Add a basic functionality test for the new API.
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
lib/test_bitmap.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 61 insertions(+)
diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index 65f22c2578b0..277e1ca9fd28 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -221,6 +221,65 @@ static void __init test_zero_clear(void)
expect_eq_pbl("", bmap, 1024);
}
+static void __init test_find_and_bit(void)
+{
+ unsigned long w, w_part, bit, cnt = 0;
+ DECLARE_BITMAP(bmap, EXP1_IN_BITS);
+
+ /*
+ * Test find_and_clear{_next}_bit() and corresponding
+ * iterators
+ */
+ bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+ w = bitmap_weight(bmap, EXP1_IN_BITS);
+
+ for_each_test_and_clear_bit(bit, bmap, EXP1_IN_BITS)
+ cnt++;
+
+ expect_eq_uint(w, cnt);
+ expect_eq_uint(0, bitmap_weight(bmap, EXP1_IN_BITS));
+
+ bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+ w = bitmap_weight(bmap, EXP1_IN_BITS);
+ w_part = bitmap_weight(bmap, EXP1_IN_BITS / 3);
+
+ cnt = 0;
+ bit = EXP1_IN_BITS / 3;
+ for_each_test_and_clear_bit_from(bit, bmap, EXP1_IN_BITS)
+ cnt++;
+
+ expect_eq_uint(bitmap_weight(bmap, EXP1_IN_BITS), bitmap_weight(bmap, EXP1_IN_BITS / 3));
+ expect_eq_uint(w_part, bitmap_weight(bmap, EXP1_IN_BITS));
+ expect_eq_uint(w - w_part, cnt);
+
+ /*
+ * Test find_and_set{_next}_bit() and corresponding
+ * iterators
+ */
+ bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+ w = bitmap_weight(bmap, EXP1_IN_BITS);
+ cnt = 0;
+
+ for_each_test_and_set_bit(bit, bmap, EXP1_IN_BITS)
+ cnt++;
+
+ expect_eq_uint(EXP1_IN_BITS - w, cnt);
+ expect_eq_uint(EXP1_IN_BITS, bitmap_weight(bmap, EXP1_IN_BITS));
+
+ bitmap_copy(bmap, exp1, EXP1_IN_BITS);
+ w = bitmap_weight(bmap, EXP1_IN_BITS);
+ w_part = bitmap_weight(bmap, EXP1_IN_BITS / 3);
+ cnt = 0;
+
+ bit = EXP1_IN_BITS / 3;
+ for_each_test_and_set_bit_from(bit, bmap, EXP1_IN_BITS)
+ cnt++;
+
+ expect_eq_uint(EXP1_IN_BITS - bitmap_weight(bmap, EXP1_IN_BITS),
+ EXP1_IN_BITS / 3 - bitmap_weight(bmap, EXP1_IN_BITS / 3));
+ expect_eq_uint(EXP1_IN_BITS * 2 / 3 - (w - w_part), cnt);
+}
+
static void __init test_find_nth_bit(void)
{
unsigned long b, bit, cnt = 0;
@@ -1273,6 +1332,8 @@ static void __init selftest(void)
test_for_each_clear_bitrange_from();
test_for_each_set_clump8();
test_for_each_set_bit_wrap();
+
+ test_find_and_bit();
}
KSTM_MODULE_LOADERS(test_bitmap);
--
2.40.1
* [PATCH v3 22/35] tty: nozomi: optimize interrupt_handler()
2023-12-12 2:27 [PATCH v3 00/35] bitops: add atomic find_bit() operations Yury Norov
2023-12-12 2:27 ` [PATCH v3 01/35] lib/find: add atomic find_bit() primitives Yury Norov
2023-12-12 2:27 ` [PATCH v3 02/35] lib/find: add test for atomic find_bit() ops Yury Norov
@ 2023-12-12 2:27 ` Yury Norov
2023-12-12 2:27 ` [PATCH v3 29/35] serial: sc16is7xx: optimize sc16is7xx_alloc_line() Yury Norov
2023-12-16 21:48 ` [PATCH v3 00/35] bitops: add atomic find_bit() operations Yury Norov
4 siblings, 0 replies; 6+ messages in thread
From: Yury Norov @ 2023-12-12 2:27 UTC (permalink / raw)
To: linux-kernel, Greg Kroah-Hartman, Jiri Slaby, linux-serial
Cc: Yury Norov, Jan Kara, Mirsad Todorovac, Matthew Wilcox,
Rasmus Villemoes, Andy Shevchenko, Maxim Kuvyrkov, Alexey Klimov,
Bart Van Assche, Sergey Shtylyov
In the exit path of interrupt_handler(), the dc->flip map is traversed
bit by bit to find and clear set bits, calling tty_flip_buffer_push() for
the corresponding ports.
Simplify it by using for_each_test_and_clear_bit(), which skips bits that
are already clear.
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
drivers/tty/nozomi.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/tty/nozomi.c b/drivers/tty/nozomi.c
index 02cd40147b3a..de0503247391 100644
--- a/drivers/tty/nozomi.c
+++ b/drivers/tty/nozomi.c
@@ -1220,9 +1220,8 @@ static irqreturn_t interrupt_handler(int irq, void *dev_id)
exit_handler:
spin_unlock(&dc->spin_mutex);
- for (a = 0; a < NOZOMI_MAX_PORTS; a++)
- if (test_and_clear_bit(a, &dc->flip))
- tty_flip_buffer_push(&dc->port[a].port);
+ for_each_test_and_clear_bit(a, &dc->flip, NOZOMI_MAX_PORTS)
+ tty_flip_buffer_push(&dc->port[a].port);
return IRQ_HANDLED;
none:
--
2.40.1
* [PATCH v3 29/35] serial: sc16is7xx: optimize sc16is7xx_alloc_line()
2023-12-12 2:27 [PATCH v3 00/35] bitops: add atomic find_bit() operations Yury Norov
` (2 preceding siblings ...)
2023-12-12 2:27 ` [PATCH v3 22/35] tty: nozomi: optimize interrupt_handler() Yury Norov
@ 2023-12-12 2:27 ` Yury Norov
2023-12-16 21:48 ` [PATCH v3 00/35] bitops: add atomic find_bit() operations Yury Norov
4 siblings, 0 replies; 6+ messages in thread
From: Yury Norov @ 2023-12-12 2:27 UTC (permalink / raw)
To: linux-kernel, Greg Kroah-Hartman, Jiri Slaby, Hugo Villeneuve,
Lech Perczak, Ilpo Järvinen, Andy Shevchenko,
Uwe Kleine-König, Thomas Gleixner, Hui Wang, Isaac True,
Yury Norov, linux-serial
Cc: Jan Kara, Mirsad Todorovac, Matthew Wilcox, Rasmus Villemoes,
Andy Shevchenko, Maxim Kuvyrkov, Alexey Klimov, Bart Van Assche,
Sergey Shtylyov
Instead of polling every bit in sc16is7xx_lines, use a dedicated
find_and_set_bit(), and make the function a simple one-liner.
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
drivers/tty/serial/sc16is7xx.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
index cf0c6120d30e..a7adb6ad0bbd 100644
--- a/drivers/tty/serial/sc16is7xx.c
+++ b/drivers/tty/serial/sc16is7xx.c
@@ -427,15 +427,9 @@ static void sc16is7xx_port_update(struct uart_port *port, u8 reg,
static int sc16is7xx_alloc_line(void)
{
- int i;
-
BUILD_BUG_ON(SC16IS7XX_MAX_DEVS > BITS_PER_LONG);
- for (i = 0; i < SC16IS7XX_MAX_DEVS; i++)
- if (!test_and_set_bit(i, &sc16is7xx_lines))
- break;
-
- return i;
+ return find_and_set_bit(&sc16is7xx_lines, SC16IS7XX_MAX_DEVS);
}
static void sc16is7xx_power(struct uart_port *port, int on)
--
2.40.1
* Re: [PATCH v3 00/35] bitops: add atomic find_bit() operations
2023-12-12 2:27 [PATCH v3 00/35] bitops: add atomic find_bit() operations Yury Norov
` (3 preceding siblings ...)
2023-12-12 2:27 ` [PATCH v3 29/35] serial: sc16is7xx: optimize sc16is7xx_alloc_line() Yury Norov
@ 2023-12-16 21:48 ` Yury Norov
4 siblings, 0 replies; 6+ messages in thread
From: Yury Norov @ 2023-12-16 21:48 UTC (permalink / raw)
To: linux-kernel, David S. Miller, H. Peter Anvin,
James E.J. Bottomley, K. Y. Srinivasan, Md. Haris Iqbal,
Akinobu Mita, Andrew Morton, Bjorn Andersson, Borislav Petkov,
Chaitanya Kulkarni, Christian Brauner, Damien Le Moal,
Dave Hansen, David Disseldorp, Edward Cree, Eric Dumazet,
Fenghua Yu, Geert Uytterhoeven, Greg Kroah-Hartman,
Gregory Greenman, Hans Verkuil, Hans de Goede, Hugh Dickins,
Ingo Molnar, Jakub Kicinski, Jaroslav Kysela, Jason Gunthorpe,
Jens Axboe, Jiri Pirko, Jiri Slaby, Kalle Valo, Karsten Graul,
Karsten Keil, Kees Cook, Leon Romanovsky, Mark Rutland,
Martin Habets, Mauro Carvalho Chehab, Michael Ellerman,
Michal Simek, Nicholas Piggin, Oliver Neukum, Paolo Abeni,
Paolo Bonzini, Peter Zijlstra, Ping-Ke Shih, Rich Felker,
Rob Herring, Robin Murphy, Sean Christopherson, Shuai Xue,
Stanislaw Gruszka, Steven Rostedt, Thomas Bogendoerfer,
Thomas Gleixner, Valentin Schneider, Vitaly Kuznetsov,
Wenjia Zhang, Will Deacon, Yoshinori Sato,
GR-QLogic-Storage-Upstream, alsa-devel, ath10k, dmaengine, iommu,
kvm, linux-arm-kernel, linux-arm-msm, linux-block,
linux-bluetooth, linux-hyperv, linux-m68k, linux-media,
linux-mips, linux-net-drivers, linux-pci, linux-rdma, linux-s390,
linux-scsi, linux-serial, linux-sh, linux-sound, linux-usb,
linux-wireless, linuxppc-dev, mpi3mr-linuxdrv.pdl, netdev,
sparclinux, x86
Cc: Jan Kara, Mirsad Todorovac, Matthew Wilcox, Rasmus Villemoes,
Andy Shevchenko, Maxim Kuvyrkov, Alexey Klimov, Bart Van Assche,
Sergey Shtylyov
On Mon, Dec 11, 2023 at 06:27:14PM -0800, Yury Norov wrote:
> Add helpers around test_and_{set,clear}_bit() that allow searching for
> clear or set bits and flipping them atomically.
> [...]
> They can be applied separately from each other on a per-subsystem basis,
> or I can pull them into the bitmap tree, as appropriate.
Thank you all for the reviews and comments. Now moving the series to
bitmap-for-next for testing.
Thanks,
Yury