intel-wired-lan.lists.osuosl.org archive mirror
From: "Romanowski, Rafal" <rafal.romanowski@intel.com>
To: "Keller, Jacob E" <jacob.e.keller@intel.com>,
	Intel Wired LAN <intel-wired-lan@lists.osuosl.org>
Cc: "Keller, Jacob E" <jacob.e.keller@intel.com>,
	"Nguyen, Anthony L" <anthony.l.nguyen@intel.com>
Subject: Re: [Intel-wired-lan] [PATCH iwl-next v2 2/2] ice: store VF relative MSI-X index in q_vector->vf_reg_idx
Date: Wed, 10 Apr 2024 08:18:35 +0000	[thread overview]
Message-ID: <SJ0PR11MB5865B740E1DEFFDE49AC35FB8F062@SJ0PR11MB5865.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20240322214445.1653263-3-jacob.e.keller@intel.com>

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Jacob Keller
> Sent: Friday, March 22, 2024 10:45 PM
> To: Intel Wired LAN <intel-wired-lan@lists.osuosl.org>
> Cc: Keller, Jacob E <jacob.e.keller@intel.com>; Nguyen, Anthony L
> <anthony.l.nguyen@intel.com>
> Subject: [Intel-wired-lan] [PATCH iwl-next v2 2/2] ice: store VF relative MSI-X
> index in q_vector->vf_reg_idx
> 
> The ice physical function driver needs to configure the association of queues
> and interrupts on behalf of its virtual functions. This is done over virtchnl by
> the VF sending messages during its initialization phase. These messages
> contain a vector_id which the VF wants to associate with a given queue. This
> ID is relative to the VF space, where 0 indicates the control IRQ for non-queue
> interrupts.
> 
> When programming the mapping, the PF driver currently passes this vector_id
> directly to the low level functions for programming. This works for SR-IOV,
> because the hardware uses the VF-based indexing for interrupts.
> 
> This won't work for Scalable IOV, which uses PF-based indexing for
> programming its VSIs. To handle this, the driver needs to be able to look up the
> proper index to use for programming. For typical IRQs, this would be the
> q_vector->reg_idx field.
> 
> The q_vector->reg_idx can't be set to a VF relative value, because it is used
> when the PF needs to control the interrupt, such as when triggering a software
> interrupt on stopping the Tx queue. Thus, introduce a new q_vector-
> >vf_reg_idx which can store the VF relative index for registers which expect
> this.
> 
> Use this in ice_cfg_interrupt to look up the VF index from the q_vector.
> This allows removing the vector ID parameter of ice_cfg_interrupt. Also note
> that this function returns an int which is then cast to the virtchnl status
> enumeration, virtchnl_status_code. Update the return type to indicate that it
> does not return an integer error code. We can't use normal error codes here
> because the return values are passed across the virtchnl interface.
> 
> This will allow the future Scalable IOV VFs to correctly look up the index
> needed for programming the VF queues without breaking SR-IOV.
> 
> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
> ---
> I found a bug during further testing of v1, because the irq.index field is
> used to determine whether the IRQ vector was actually allocated. I then
> investigated whether q_vector->reg_idx was necessary or if it could be replaced.
> It turns out that it is used during the shutdown path for the Tx queues as part
> of the stop queue procedure. Thus, I opted to replace the functionality by
> using a separate vf_reg_idx field that fits in a 2-byte gap in the current
> q_vector structure.
> 
> Changes since v1:
> * introduce new vf_reg_idx instead of (ab)using msi_map index field
> * Keep ice_calc_vf_reg_idx and use it to assign both the PF-relative
>   register index as well as the VF index needed for virtchnl
> * Assign vf_reg_idx to the same as reg_idx for all non-ICE_VSI_VF VSIs.
> 
>  drivers/net/ethernet/intel/ice/ice.h          |  3 ++-
>  drivers/net/ethernet/intel/ice/ice_base.c     |  3 ++-
>  drivers/net/ethernet/intel/ice/ice_sriov.c    |  7 ++++---
>  drivers/net/ethernet/intel/ice/ice_sriov.h    |  5 ++---
>  drivers/net/ethernet/intel/ice/ice_virtchnl.c | 14 +++++++-------
>  5 files changed, 17 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
> index a7e88d797d4c..67a3236ab1fc 100644
> --- a/drivers/net/ethernet/intel/ice/ice.h
> +++ b/drivers/net/ethernet/intel/ice/ice.h
> @@ -459,7 +459,7 @@ struct ice_q_vector {
>  	struct ice_vsi *vsi;
> 
>  	u16 v_idx;			/* index in the vsi->q_vector array. */
> -	u16 reg_idx;
> +	u16 reg_idx;			/* PF relative register index */
>  	u8 num_ring_rx;			/* total number of Rx rings in vector */
>  	u8 num_ring_tx;			/* total number of Tx rings in vector */
>  	u8 wb_on_itr:1;			/* if true, WB on ITR is enabled */
> @@ -481,6 +481,7 @@ struct ice_q_vector {
>  	char name[ICE_INT_NAME_STR_LEN];
> 
>  	u16 total_events;	/* net_dim(): number of interrupts processed */
> +	u16 vf_reg_idx;		/* VF relative register index */
>  	struct msi_map irq;
>  } ____cacheline_internodealigned_in_smp;
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
> index 662fc395edcc..5e1d5a76ee00 100644
> --- a/drivers/net/ethernet/intel/ice/ice_base.c
> +++ b/drivers/net/ethernet/intel/ice/ice_base.c
> @@ -121,7 +121,7 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
>  	q_vector->irq.index = -ENOENT;
> 
>  	if (vsi->type == ICE_VSI_VF) {
> -		q_vector->reg_idx = ice_calc_vf_reg_idx(vsi->vf, q_vector);
> +		ice_calc_vf_reg_idx(vsi->vf, q_vector);
>  		goto out;
>  	} else if (vsi->type == ICE_VSI_CTRL && vsi->vf) {
>  		struct ice_vsi *ctrl_vsi = ice_get_vf_ctrl_vsi(pf, vsi);
> @@ -145,6 +145,7 @@ static int ice_vsi_alloc_q_vector(struct ice_vsi *vsi, u16 v_idx)
> 
>  skip_alloc:
>  	q_vector->reg_idx = q_vector->irq.index;
> +	q_vector->vf_reg_idx = q_vector->irq.index;
> 
>  	/* only set affinity_mask if the CPU is online */
>  	if (cpu_online(v_idx))
> diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
> index 5e9521876617..fb2e96db647e 100644
> --- a/drivers/net/ethernet/intel/ice/ice_sriov.c
> +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
> @@ -360,13 +360,14 @@ static void ice_ena_vf_mappings(struct ice_vf *vf)
>   * @vf: VF to calculate the register index for
>   * @q_vector: a q_vector associated to the VF
>   */
> -int ice_calc_vf_reg_idx(struct ice_vf *vf, struct ice_q_vector *q_vector)
> +void ice_calc_vf_reg_idx(struct ice_vf *vf, struct ice_q_vector *q_vector)
>  {
>  	if (!vf || !q_vector)
> -		return -EINVAL;
> +		return;
> 
>  	/* always add one to account for the OICR being the first MSIX */
> -	return vf->first_vector_idx + q_vector->v_idx + 1;
> +	q_vector->vf_reg_idx = q_vector->v_idx + ICE_NONQ_VECS_VF;
> +	q_vector->reg_idx = vf->first_vector_idx + q_vector->vf_reg_idx;
>  }
> 
>  /**
> diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.h b/drivers/net/ethernet/intel/ice/ice_sriov.h
> index 8488df38b586..4ba8fb53aea1 100644
> --- a/drivers/net/ethernet/intel/ice/ice_sriov.h
> +++ b/drivers/net/ethernet/intel/ice/ice_sriov.h
> @@ -49,7 +49,7 @@ int ice_set_vf_link_state(struct net_device *netdev, int vf_id, int link_state);
> 
>  int ice_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool ena);
> 
> -int ice_calc_vf_reg_idx(struct ice_vf *vf, struct ice_q_vector *q_vector);
> +void ice_calc_vf_reg_idx(struct ice_vf *vf, struct ice_q_vector *q_vector);
> 
>  int
>  ice_get_vf_stats(struct net_device *netdev, int vf_id,
> @@ -130,11 +130,10 @@ ice_set_vf_bw(struct net_device __always_unused *netdev,
>  	return -EOPNOTSUPP;
>  }
> 
> -static inline int
> +static inline void
>  ice_calc_vf_reg_idx(struct ice_vf __always_unused *vf,
>  		    struct ice_q_vector __always_unused *q_vector)
>  {
> -	return 0;
>  }
> 
>  static inline int
> diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> index 1ff9818b4c84..1c6ce0c4ed4e 100644
> --- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
> @@ -1505,13 +1505,12 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)
>   * ice_cfg_interrupt
>   * @vf: pointer to the VF info
>   * @vsi: the VSI being configured
> - * @vector_id: vector ID
>   * @map: vector map for mapping vectors to queues
>   * @q_vector: structure for interrupt vector
>   * configure the IRQ to queue map
>   */
> -static int
> -ice_cfg_interrupt(struct ice_vf *vf, struct ice_vsi *vsi, u16 vector_id,
> +static enum virtchnl_status_code
> +ice_cfg_interrupt(struct ice_vf *vf, struct ice_vsi *vsi,
>  		  struct virtchnl_vector_map *map,
>  		  struct ice_q_vector *q_vector)
>  {
> @@ -1531,7 +1530,8 @@ ice_cfg_interrupt(struct ice_vf *vf, struct ice_vsi *vsi, u16 vector_id,
>  		q_vector->num_ring_rx++;
>  		q_vector->rx.itr_idx = map->rxitr_idx;
>  		vsi->rx_rings[vsi_q_id]->q_vector = q_vector;
> -		ice_cfg_rxq_interrupt(vsi, vsi_q_id, vector_id,
> +		ice_cfg_rxq_interrupt(vsi, vsi_q_id,
> +				      q_vector->vf_reg_idx,
>  				      q_vector->rx.itr_idx);
>  	}
> 
> @@ -1545,7 +1545,8 @@ ice_cfg_interrupt(struct ice_vf *vf, struct ice_vsi *vsi, u16 vector_id,
>  		q_vector->num_ring_tx++;
>  		q_vector->tx.itr_idx = map->txitr_idx;
>  		vsi->tx_rings[vsi_q_id]->q_vector = q_vector;
> -		ice_cfg_txq_interrupt(vsi, vsi_q_id, vector_id,
> +		ice_cfg_txq_interrupt(vsi, vsi_q_id,
> +				      q_vector->vf_reg_idx,
>  				      q_vector->tx.itr_idx);
>  	}
> 
> @@ -1619,8 +1620,7 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
>  		}
> 
>  		/* lookout for the invalid queue index */
> -		v_ret = (enum virtchnl_status_code)
> -			ice_cfg_interrupt(vf, vsi, vector_id, map, q_vector);
> +		v_ret = ice_cfg_interrupt(vf, vsi, map, q_vector);
>  		if (v_ret)
>  			goto error_param;
>  	}
> --
> 2.44.0.53.g0f9d4d28b7e6


Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>




Thread overview: 5+ messages
2024-03-22 21:44 [Intel-wired-lan] [PATCH iwl-next v2 0/2] ice: minor cleanups for VF IRQ logic Jacob Keller
2024-03-22 21:44 ` [Intel-wired-lan] [PATCH iwl-next v2 1/2] ice: set vf->num_msix in ice_initialize_vf_entry() Jacob Keller
2024-04-10  8:14   ` Romanowski, Rafal
2024-03-22 21:44 ` [Intel-wired-lan] [PATCH iwl-next v2 2/2] ice: store VF relative MSI-X index in q_vector->vf_reg_idx Jacob Keller
2024-04-10  8:18   ` Romanowski, Rafal [this message]
