messages from 2021-05-24 20:46:41 to 2021-05-30 07:33:56 UTC
BUG: scheduling while atomic when nvmet_rdma_queue_response fails in posting a request
2021-05-30 7:33 UTC
[bug report][regression] device node still exists after blktests nvme/011 finished
2021-05-29 11:58 UTC (3+ messages)
[PATCH v3 2/2] acpi: Move check for _DSD StorageD3Enable property to acpi
2021-05-28 16:02 UTC
[PATCH v3 1/2] nvme: Look for StorageD3Enable on companion ACPI device instead
2021-05-28 16:02 UTC
[PATCH v3 0/2] Improvements to StorageD3Enable
2021-05-28 16:01 UTC
[RFC PATCH v6 00/27] NVMeTCP Offload ULP and QEDN Device Driver
2021-05-28 13:07 UTC (42+ messages)
` [RFC PATCH v6 01/27] nvme-tcp-offload: Add nvme-tcp-offload - NVMeTCP HW offload ULP
` [RFC PATCH v6 02/27] nvme-fabrics: Move NVMF_ALLOWED_OPTS and NVMF_REQUIRED_OPTS definitions
` [RFC PATCH v6 03/27] nvme-fabrics: Expose nvmf_check_required_opts() globally
` [RFC PATCH v6 04/27] nvme-tcp-offload: Add device scan implementation
` [RFC PATCH v6 05/27] nvme-tcp-offload: Add controller level implementation
` [RFC PATCH v6 06/27] nvme-tcp-offload: Add controller level error recovery implementation
` [RFC PATCH v6 07/27] nvme-tcp-offload: Add queue level implementation
` [RFC PATCH v6 08/27] nvme-tcp-offload: Add IO "
` [RFC PATCH v6 09/27] qed: Add TCP_ULP FW resource layout
` [RFC PATCH v6 10/27] qed: Add NVMeTCP Offload PF Level FW and HW HSI
` [RFC PATCH v6 11/27] qed: Add NVMeTCP Offload Connection "
` [RFC PATCH v6 12/27] qed: Add support of HW filter block
` [RFC PATCH v6 13/27] qed: Add NVMeTCP Offload IO Level FW and HW HSI
` [RFC PATCH v6 14/27] qed: Add NVMeTCP Offload IO Level FW Initializations
` [RFC PATCH v6 15/27] qed: Add IP services APIs support
` [RFC PATCH v6 16/27] qedn: Add qedn - Marvell's NVMeTCP HW offload vendor driver
` [RFC PATCH v6 17/27] qedn: Add qedn probe
` [RFC PATCH v6 18/27] qedn: Add qedn_claim_dev API support
` [RFC PATCH v6 19/27] qedn: Add IRQ and fast-path resources initializations
` [RFC PATCH v6 20/27] qedn: Add connection-level slowpath functionality
` [RFC PATCH v6 21/27] qedn: Add support of configuring HW filter block
` [RFC PATCH v6 22/27] qedn: Add IO level qedn_send_req and fw_cq workqueue
` [RFC PATCH v6 23/27] qedn: Add support of Task and SGL
` [RFC PATCH v6 24/27] qedn: Add support of NVME ICReq & ICResp
` [RFC PATCH v6 25/27] qedn: Add IO level fastpath functionality
` [RFC PATCH v6 26/27] qedn: Add Connection and IO level recovery flows
` [RFC PATCH v6 27/27] qedn: Add support of ASYNC
[PATCH v2] nvme-rdma: fix in-capsule data send for chained sgls
2021-05-28 1:16 UTC
[PATCH] nvme-rdma: fix in-capsule data send for chained sgls
2021-05-28 1:16 UTC (3+ messages)
nvmeof Issues with Zen 3/Ryzen 5000 Initiator
2021-05-27 21:36 UTC (7+ messages)
[RFC PATCH v5 00/27] NVMeTCP Offload ULP and QEDN Device Driver
2021-05-27 20:03 UTC (18+ messages)
` [RFC PATCH v5 01/27] nvme-tcp-offload: Add nvme-tcp-offload - NVMeTCP HW offload ULP
` [RFC PATCH v5 03/27] nvme-tcp-offload: Add device scan implementation
` [RFC PATCH v5 04/27] nvme-tcp-offload: Add controller level implementation
` [RFC PATCH v5 06/27] nvme-tcp-offload: Add queue "
` [RFC PATCH v5 08/27] nvme-tcp-offload: Add Timeout and ASYNC Support
[bug report] nvme sends invalid command capsule over rdma transport for 5KiB write when target supports MSDBD > 1
2021-05-27 17:48 UTC (9+ messages)
[PATCH v2] nvme: Look for StorageD3Enable on companion ACPI device instead
2021-05-27 17:28 UTC (8+ messages)
[PATCH] nvmet-tcp: fix memory leak when having inflight commands on disconnect
2021-05-27 16:08 UTC
[GIT PULL] nvme fixes for Linux 5.13
2021-05-27 13:38 UTC (2+ messages)
[PATCH] nvme-pci: Avoid going into d3cold if device can't use npss
2021-05-27 12:08 UTC (15+ messages)
[PATCH] nvme: Look for StorageD3Enable on the actual acpi device as well as root port
2021-05-27 11:37 UTC (2+ messages)
[PATCH] nvmet: use new ana_log_size instead of the old one
2021-05-27 11:33 UTC (3+ messages)
[LSF/MM/BPF TOPIC] block namespaces
2021-05-27 8:01 UTC
[LSF/MM/BPF TOPIC] Memory folios
2021-05-27 7:41 UTC (3+ messages)
[PATCH] nvmet-tcp: fix memory leak when having inflight commands on disconnect
2021-05-27 7:38 UTC (4+ messages)
[linux-nvme:nvme-5.13] BUILD SUCCESS aaeadd7075dc9e184bc7876e9dd7b3bada771df2
2021-05-27 2:49 UTC
[PATCH V5] drivers/nvme: Add support for ACPI StorageD3Enable property
2021-05-27 2:20 UTC
[PATCH v4] drivers/nvme: Add support for ACPI StorageD3Enable property
2021-05-26 19:25 UTC (3+ messages)
` [PATCH V5] "
[PATCH] nvme-pci: set some AMD PCIe downstream storage device to D3 for s2idle
2021-05-26 17:42 UTC (26+ messages)
[PATCH] fabrics: add fast_io_fail_tmo option
2021-05-26 16:47 UTC (2+ messages)
[PATCH 0/4] nvme-loop: fixes for concurrent reset and delete
2021-05-26 15:23 UTC (5+ messages)
` [PATCH 1/4] nvme/loop: reset queue count to 1 in nvme_loop_destroy_io_queues()
` [PATCH 2/4] nvme/loop: clear NVME_LOOP_Q_LIVE when nvme_loop_configure_admin_queue() fails
` [PATCH 3/4] nvme/loop: check for NVME_LOOP_Q_LIVE in nvme_loop_destroy_admin_queue()
` [PATCH 4/4] nvme/loop: Do not warn for deleted controllers during reset
[PATCH] nvme-tcp: remove incorrect Kconfig dep in BLK_DEV_NVME
2021-05-26 14:19 UTC (4+ messages)
[PATCH] nvmet: fix false keep-alive timeout when a controller is torn down
2021-05-26 14:17 UTC (6+ messages)
[PATCH v2 0/4] nvme: protect against possible request reference after completion
2021-05-26 9:26 UTC (13+ messages)
` [PATCH v2 2/4] nvme-pci: limit maximum queue depth to 4095
` [PATCH v2 3/4] nvme-tcp: don't check blk_mq_tag_to_rq when receiving pdu data
` [PATCH v2 4/4] nvme: code command_id with a genctr for use-after-free validation
[PATCHv3 0/4] block and nvme passthrough error handling
2021-05-26 8:47 UTC (9+ messages)
` [PATCHv3 1/4] block: support polling through blk_execute_rq
` [PATCHv3 2/4] nvme: use blk_execute_rq() for passthrough commands
` [PATCHv3 3/4] block: return errors from blk_execute_rq()
[PATCH nvme-cli] systemd/nvmf-autoconnect.service: load nvme-fabrics before autoconnect
2021-05-26 7:25 UTC
[linux-nvme:nvme-5.14 4/21] ERROR: modpost: "blk_cleanup_queue" [drivers/nvme/host/nvme-tcp.ko] undefined!
2021-05-26 7:08 UTC
[PATCH 26/26] block: unexport blk_alloc_queue
2021-05-26 4:49 UTC (19+ messages)
` [PATCH 12/26] bcache: convert to blk_alloc_disk/blk_cleanup_disk
` [dm-devel] [PATCH 01/26] block: refactor device number setup in __device_add_disk
` "
` [dm-devel] [PATCH 05/26] block: add blk_alloc_disk and blk_cleanup_disk APIs
` [PATCH 06/26] brd: convert to blk_alloc_disk/blk_cleanup_disk
` [PATCH 13/26] dm: "
` [PATCH 14/26] md: "
` [PATCH 18/26] nvme-multipath: "
` simplify gendisk and request_queue allocation for bio based drivers
[linux-nvme:nvme-5.13] BUILD SUCCESS 77094a0082f4d57f45fbcd9e79e4bc354408604f
2021-05-25 20:29 UTC
[linux-nvme:nvme-5.14] BUILD SUCCESS 67dbcdc3689a25249092be0c851848efbcc62f38
2021-05-25 20:29 UTC
[PATCH] nvme: Have NVME_FABRICS select NVME_CORE instead of transport drivers
2021-05-25 20:17 UTC (3+ messages)
[PATCH] nvmet/fc: do not check for invalid target port in nvmet_fc_handle_fcp_rqst()
2021-05-25 12:54 UTC
[PATCH] nvme: Use NN for max_namespaces if MNAN is zero
2021-05-25 7:52 UTC (8+ messages)
[PATCH 0/4] nvme-fabrics: cleanup around nvmf_log_connect_error()
2021-05-25 7:32 UTC (2+ messages)
[PATCHv6 RESEND 1/1] nvme-tcp: Add option to set the physical interface to be used when connecting over TCP sockets
2021-05-25 7:32 UTC (2+ messages)
[PATCHv4] nvme: allow to re-attach namespaces after all paths are down
2021-05-25 7:31 UTC (2+ messages)
[PATCH] nvme-fabrics: decode host pathing error for connect
2021-05-25 7:25 UTC (2+ messages)
[PATCH] nvme/fc: Short-circuit reconnect retries
2021-05-25 7:24 UTC (2+ messages)
[PATCH] nvme: fix potential memory leaks in nvme_cdev_add
2021-05-25 7:24 UTC (3+ messages)
[PATCH 0/2] nvmet-tcp: fix connect error when setting inline data size to 0
2021-05-25 7:24 UTC (3+ messages)
` [PATCH 1/2] nvmet-tcp: fix inline data size comparison in nvmet_tcp_queue_response