/*
 * Copyright (C) 2008 Felix Fietkau <nbd@openwrt.org>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * Based on minstrel.c:
 *   Copyright (C) 2005-2007 Derek Smithies <derek@indranet.co.nz>
 *   Sponsored by Indranet Technologies Ltd
 *
 * Based on sample.c:
 *   Copyright (c) 2005 John Bicket
 *   All rights reserved.
 *
 *   Redistribution and use in source and binary forms, with or without
 *   modification, are permitted provided that the following conditions
 *   are met:
 *   1. Redistributions of source code must retain the above copyright
 *      notice, this list of conditions and the following disclaimer,
 *      without modification.
 *   2. Redistributions in binary form must reproduce at minimum a disclaimer
 *      similar to the "NO WARRANTY" disclaimer below ("Disclaimer") and any
 *      redistribution must be conditioned upon including a substantially
 *      similar Disclaimer requirement for further binary redistribution.
 *   3. Neither the names of the above-listed copyright holders nor the names
 *      of any contributors may be used to endorse or promote products derived
 *      from this software without specific prior written permission.
 *
 *   Alternatively, this software may be distributed under the terms of the
 *   GNU General Public License ("GPL") version 2 as published by the Free
 *   Software Foundation.
 *
 *   NO WARRANTY
 *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 *   ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 *   LIMITED TO, THE IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTIBILITY
 *   AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
 *   THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY,
 *   OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 *   SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 *   INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
 *   IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 *   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
 *   THE POSSIBILITY OF SUCH DAMAGES.
 */
#include <linux/netdevice.h>
#include <linux/types.h>
#include <linux/skbuff.h>
#include <linux/debugfs.h>
#include <linux/random.h>
#include <linux/ieee80211.h>
#include <linux/slab.h>
#include <net/mac80211.h>
#include "rate.h"
#include "rc80211_minstrel.h"

#define SAMPLE_TBL(_mi, _idx, _col) \
        _mi->sample_table[(_idx * SAMPLE_COLUMNS) + _col]

/* convert mac80211 rate index to local array index */
static inline int
rix_to_ndx(struct minstrel_sta_info *mi, int rix)
{
        int i = rix;
        for (i = rix; i >= 0; i--)
                if (mi->r[i].rix == rix)
                        break;
        return i;
}

/* return current EWMA throughput */
int minstrel_get_tp_avg(struct minstrel_rate *mr, int prob_ewma)
{
        int usecs;

        usecs = mr->perfect_tx_time;
        if (!usecs)
                usecs = 1000000;

        /* reset throughput below 10% success probability */
        if (mr->stats.prob_ewma < MINSTREL_FRAC(10, 100))
                return 0;

        if (prob_ewma > MINSTREL_FRAC(90, 100))
                return MINSTREL_TRUNC(100000 * (MINSTREL_FRAC(90, 100) / usecs));
        else
                return MINSTREL_TRUNC(100000 * (prob_ewma / usecs));
}
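
/*
 * Worked example (an illustrative sketch, not part of the original source;
 * it assumes the fixed-point helpers in rc80211_minstrel.h behave as
 * MINSTREL_FRAC(val, div) == (val << MINSTREL_SCALE) / div and
 * MINSTREL_TRUNC(val) == val >> MINSTREL_SCALE):
 *
 * For a rate with perfect_tx_time = 1000 usecs and a 50% success
 * probability, prob_ewma == MINSTREL_FRAC(50, 100), so the result is
 *
 *      MINSTREL_TRUNC(100000 * (MINSTREL_FRAC(50, 100) / 1000)) == 48
 *
 * close to the ideal 100000 * 0.5 / 1000 = 50 (intermediate integer
 * division loses a little precision). The unit is roughly 10 packets per
 * second; minstrel_get_expected_throughput() below multiplies by 10 to
 * get packets/sec before converting to kbps. Probabilities above 90% are
 * clamped so that nearly-lossless rates compete on airtime rather than
 * on the last few percent of success probability.
 */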

/* find & sort topmost throughput rates */
static inline void
minstrel_sort_best_tp_rates(struct minstrel_sta_info *mi, int i, u8 *tp_list)
{
        int j;
        struct minstrel_rate_stats *tmp_mrs;
        struct minstrel_rate_stats *cur_mrs = &mi->r[i].stats;

        for (j = MAX_THR_RATES; j > 0; --j) {
                tmp_mrs = &mi->r[tp_list[j - 1]].stats;
                if (minstrel_get_tp_avg(&mi->r[i], cur_mrs->prob_ewma) <=
                    minstrel_get_tp_avg(&mi->r[tp_list[j - 1]], tmp_mrs->prob_ewma))
                        break;
        }

        if (j < MAX_THR_RATES - 1)
                memmove(&tp_list[j + 1], &tp_list[j], MAX_THR_RATES - (j + 1));
        if (j < MAX_THR_RATES)
                tp_list[j] = i;
}

static void
minstrel_set_rate(struct minstrel_sta_info *mi, struct ieee80211_sta_rates *ratetbl,
                  int offset, int idx)
{
        struct minstrel_rate *r = &mi->r[idx];

        ratetbl->rate[offset].idx = r->rix;
        ratetbl->rate[offset].count = r->adjusted_retry_count;
        ratetbl->rate[offset].count_cts = r->retry_count_cts;
        ratetbl->rate[offset].count_rts = r->stats.retry_count_rtscts;
}

static void
minstrel_update_rates(struct minstrel_priv *mp, struct minstrel_sta_info *mi)
{
        struct ieee80211_sta_rates *ratetbl;
        int i = 0;

        ratetbl = kzalloc(sizeof(*ratetbl), GFP_ATOMIC);
        if (!ratetbl)
                return;

        /* Start with max_tp_rate */
        minstrel_set_rate(mi, ratetbl, i++, mi->max_tp_rate[0]);

        if (mp->hw->max_rates >= 3) {
                /* At least 3 tx rates supported, use max_tp_rate2 next */
                minstrel_set_rate(mi, ratetbl, i++, mi->max_tp_rate[1]);
        }

        if (mp->hw->max_rates >= 2) {
                /* At least 2 tx rates supported, use max_prob_rate next */
                minstrel_set_rate(mi, ratetbl, i++, mi->max_prob_rate);
        }

        /* Use lowest rate last */
        ratetbl->rate[i].idx = mi->lowest_rix;
        ratetbl->rate[i].count = mp->max_retry;
        ratetbl->rate[i].count_cts = mp->max_retry;
        ratetbl->rate[i].count_rts = mp->max_retry;

        rate_control_set_rates(mp->hw, mi->sta, ratetbl);
}
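
/*
 * Illustration (derived from the code above; stage names informal): on
 * hardware with max_rates >= 3 the resulting multi-rate-retry chain is
 *
 *      rate[0] = max_tp_rate[0]        best throughput
 *      rate[1] = max_tp_rate[1]        second best throughput
 *      rate[2] = max_prob_rate         most robust rate
 *      rate[3] = lowest_rix            last resort, mp->max_retry tries
 *
 * With max_rates == 2 the chain shrinks to [max_tp_rate[0],
 * max_prob_rate, lowest], and with a single hardware rate to
 * [max_tp_rate[0], lowest].
 */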

/*
 * Recalculate statistics and counters of a given rate
 */
void
minstrel_calc_rate_stats(struct minstrel_rate_stats *mrs)
{
        if (unlikely(mrs->attempts > 0)) {
                mrs->sample_skipped = 0;
                mrs->cur_prob = MINSTREL_FRAC(mrs->success, mrs->attempts);
                if (unlikely(!mrs->att_hist)) {
                        mrs->prob_ewma = mrs->cur_prob;
                } else {
                        /* update exponential weighted moving variance */
                        mrs->prob_ewmsd = minstrel_ewmsd(mrs->prob_ewmsd,
                                                         mrs->cur_prob,
                                                         mrs->prob_ewma,
                                                         EWMA_LEVEL);

                        /* update exponential weighted moving average */
                        mrs->prob_ewma = minstrel_ewma(mrs->prob_ewma,
                                                       mrs->cur_prob,
                                                       EWMA_LEVEL);
                }
                mrs->att_hist += mrs->attempts;
                mrs->succ_hist += mrs->success;
        } else {
                mrs->sample_skipped++;
        }

        mrs->last_success = mrs->success;
        mrs->last_attempts = mrs->attempts;
        mrs->success = 0;
        mrs->attempts = 0;
}
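
/*
 * For reference, a sketch of the smoothing done by minstrel_ewma()
 * (defined in rc80211_minstrel.h; the exact weights are an assumption,
 * EWMA_LEVEL 96 out of EWMA_DIV 128, i.e. 75%, in this version):
 *
 *      prob_ewma = (cur_prob * (EWMA_DIV - EWMA_LEVEL) +
 *                   prob_ewma * EWMA_LEVEL) / EWMA_DIV
 *
 * Each stats interval therefore keeps ~75% of the old estimate and mixes
 * in ~25% of the freshly measured success ratio, so one noisy interval
 * moves the estimate only a little, while a sustained change converges
 * within a handful of intervals.
 */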

static void
minstrel_update_stats(struct minstrel_priv *mp, struct minstrel_sta_info *mi)
{
        u8 tmp_tp_rate[MAX_THR_RATES];
        u8 tmp_prob_rate = 0;
        int i, tmp_cur_tp, tmp_prob_tp;

        for (i = 0; i < MAX_THR_RATES; i++)
                tmp_tp_rate[i] = 0;

        for (i = 0; i < mi->n_rates; i++) {
                struct minstrel_rate *mr = &mi->r[i];
                struct minstrel_rate_stats *mrs = &mi->r[i].stats;
                struct minstrel_rate_stats *tmp_mrs = &mi->r[tmp_prob_rate].stats;

                /* Update statistics of success probability per rate */
                minstrel_calc_rate_stats(mrs);

                /* Sample less often below the 10% chance of success.
                 * Sample less often above the 95% chance of success. */
                if (mrs->prob_ewma > MINSTREL_FRAC(95, 100) ||
                    mrs->prob_ewma < MINSTREL_FRAC(10, 100)) {
                        mr->adjusted_retry_count = mrs->retry_count >> 1;
                        if (mr->adjusted_retry_count > 2)
                                mr->adjusted_retry_count = 2;
                        mr->sample_limit = 4;
                } else {
                        mr->sample_limit = -1;
                        mr->adjusted_retry_count = mrs->retry_count;
                }
                if (!mr->adjusted_retry_count)
                        mr->adjusted_retry_count = 2;

                minstrel_sort_best_tp_rates(mi, i, tmp_tp_rate);

                /* To determine the most robust rate (max_prob_rate) used at
                 * the 3rd MRR stage we distinguish between two cases:
                 * (1) if any success probability >= 95%, out of those rates
                 * choose the maximum throughput rate as max_prob_rate
                 * (2) if all success probabilities < 95%, the rate with
                 * highest success probability is chosen as max_prob_rate */
                if (mrs->prob_ewma >= MINSTREL_FRAC(95, 100)) {
                        tmp_cur_tp = minstrel_get_tp_avg(mr, mrs->prob_ewma);
                        tmp_prob_tp = minstrel_get_tp_avg(&mi->r[tmp_prob_rate],
                                                          tmp_mrs->prob_ewma);
                        if (tmp_cur_tp >= tmp_prob_tp)
                                tmp_prob_rate = i;
                } else {
                        if (mrs->prob_ewma >= tmp_mrs->prob_ewma)
                                tmp_prob_rate = i;
                }
        }

        /* Assign the new rate set */
        memcpy(mi->max_tp_rate, tmp_tp_rate, sizeof(mi->max_tp_rate));
        mi->max_prob_rate = tmp_prob_rate;

#ifdef CONFIG_MAC80211_DEBUGFS
        /* use fixed index if set */
        if (mp->fixed_rate_idx != -1) {
                mi->max_tp_rate[0] = mp->fixed_rate_idx;
                mi->max_tp_rate[1] = mp->fixed_rate_idx;
                mi->max_prob_rate = mp->fixed_rate_idx;
        }
#endif

        /* Reset update timer */
        mi->last_stats_update = jiffies;

        minstrel_update_rates(mp, mi);
}

static void
minstrel_tx_status(void *priv, struct ieee80211_supported_band *sband,
                   struct ieee80211_sta *sta, void *priv_sta,
                   struct ieee80211_tx_info *info)
{
        struct minstrel_priv *mp = priv;
        struct minstrel_sta_info *mi = priv_sta;
        struct ieee80211_tx_rate *ar = info->status.rates;
        int i, ndx;
        int success;

        success = !!(info->flags & IEEE80211_TX_STAT_ACK);

        for (i = 0; i < IEEE80211_TX_MAX_RATES; i++) {
                if (ar[i].idx < 0)
                        break;

                ndx = rix_to_ndx(mi, ar[i].idx);
                if (ndx < 0)
                        continue;

                mi->r[ndx].stats.attempts += ar[i].count;

                if ((i != IEEE80211_TX_MAX_RATES - 1) && (ar[i + 1].idx < 0))
                        mi->r[ndx].stats.success += success;
        }

        if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) && (i >= 0))
                mi->sample_packets++;

        if (mi->sample_deferred > 0)
                mi->sample_deferred--;

        if (time_after(jiffies, mi->last_stats_update +
                                (mp->update_interval * HZ) / 1000))
                minstrel_update_stats(mp, mi);
}
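
/*
 * Example of the accounting above (hypothetical status report): a frame
 * sent over the chain [rate A x2 tries, rate B x3 tries] that finally
 * got ACKed is reported as ar[0] = {A, count 2}, ar[1] = {B, count 3},
 * ar[2].idx < 0. The loop adds 2 attempts to A and 3 to B, but credits
 * the success only to B: only the entry followed by the terminating
 * (idx < 0) slot is the rate that actually delivered the frame.
 */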

static inline unsigned int
minstrel_get_retry_count(struct minstrel_rate *mr,
                         struct ieee80211_tx_info *info)
{
        u8 retry = mr->adjusted_retry_count;

        if (info->control.use_rts)
                retry = max_t(u8, 2, min(mr->stats.retry_count_rtscts, retry));
        else if (info->control.use_cts_prot)
                retry = max_t(u8, 2, min(mr->retry_count_cts, retry));
        return retry;
}

static int
minstrel_get_next_sample(struct minstrel_sta_info *mi)
{
        unsigned int sample_ndx;
        sample_ndx = SAMPLE_TBL(mi, mi->sample_row, mi->sample_column);
        mi->sample_row++;
        if ((int) mi->sample_row >= mi->n_rates) {
                mi->sample_row = 0;
                mi->sample_column++;
                if (mi->sample_column >= SAMPLE_COLUMNS)
                        mi->sample_column = 0;
        }
        return sample_ndx;
}
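
/*
 * Traversal sketch: sample_table holds SAMPLE_COLUMNS independently
 * shuffled columns of all rate indexes (see init_sample_table() below),
 * e.g. with n_rates == 4 one column might read { 2, 0, 3, 1 }.
 * sample_row walks down the current column, so every rate is sampled
 * exactly once per column, in random order, before the next column (a
 * fresh shuffle) is started.
 */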

static void
minstrel_get_rate(void *priv, struct ieee80211_sta *sta,
                  void *priv_sta, struct ieee80211_tx_rate_control *txrc)
{
        struct sk_buff *skb = txrc->skb;
        struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
        struct minstrel_sta_info *mi = priv_sta;
        struct minstrel_priv *mp = priv;
        struct ieee80211_tx_rate *rate = &info->control.rates[0];
        struct minstrel_rate *msr, *mr;
        unsigned int ndx;
        bool mrr_capable;
        bool prev_sample;
        int delta;
        int sampling_ratio;

        /* management/no-ack frames do not use rate control */
        if (rate_control_send_low(sta, priv_sta, txrc))
                return;

        /* check multi-rate-retry capabilities & adjust lookaround_rate */
        mrr_capable = mp->has_mrr &&
                      !txrc->rts &&
                      !txrc->bss_conf->use_cts_prot;
        if (mrr_capable)
                sampling_ratio = mp->lookaround_rate_mrr;
        else
                sampling_ratio = mp->lookaround_rate;

        /* increase sum packet counter */
        mi->total_packets++;

#ifdef CONFIG_MAC80211_DEBUGFS
        if (mp->fixed_rate_idx != -1)
                return;
#endif

        delta = (mi->total_packets * sampling_ratio / 100) -
                        (mi->sample_packets + mi->sample_deferred / 2);

        /* delta < 0: no sampling required */
        prev_sample = mi->prev_sample;
        mi->prev_sample = false;
        if (delta < 0 || (!mrr_capable && prev_sample))
                return;

        if (mi->total_packets >= 10000) {
                mi->sample_deferred = 0;
                mi->sample_packets = 0;
                mi->total_packets = 0;
        } else if (delta > mi->n_rates * 2) {
                /* With multi-rate retry, not every planned sample
                 * attempt actually gets used, due to the way the retry
                 * chain is set up - [max_tp,sample,prob,lowest] for
                 * sample_rate < max_tp.
                 *
                 * If there's too much sampling backlog and the link
                 * starts getting worse, minstrel would start bursting
                 * out lots of sampling frames, which would result
                 * in a large throughput loss. */
                mi->sample_packets += (delta - mi->n_rates * 2);
        }

        /* get next random rate sample */
        ndx = minstrel_get_next_sample(mi);
        msr = &mi->r[ndx];
        mr = &mi->r[mi->max_tp_rate[0]];

        /* Decide if direct (1st MRR stage) or indirect (2nd MRR stage)
         * rate sampling method should be used.
         * Respect such rates that are not sampled for 20 iterations.
         */
        if (mrr_capable &&
            msr->perfect_tx_time > mr->perfect_tx_time &&
            msr->stats.sample_skipped < 20) {
                /* Only use IEEE80211_TX_CTL_RATE_CTRL_PROBE to mark
                 * packets that have the sampling rate deferred to the
                 * second MRR stage. Increase the sample counter only
                 * if the deferred sample rate was actually used.
                 * Use the sample_deferred counter to make sure that
                 * the sampling is not done in large bursts */
                info->flags |= IEEE80211_TX_CTL_RATE_CTRL_PROBE;
                rate++;
                mi->sample_deferred++;
        } else {
                if (!msr->sample_limit)
                        return;

                mi->sample_packets++;
                if (msr->sample_limit > 0)
                        msr->sample_limit--;
        }

        /* If we're not using MRR and the sampling rate already
         * has a probability of >95%, we shouldn't be attempting
         * to use it, as this only wastes precious airtime */
        if (!mrr_capable &&
            (mi->r[ndx].stats.prob_ewma > MINSTREL_FRAC(95, 100)))
                return;

        mi->prev_sample = true;

        rate->idx = mi->r[ndx].rix;
        rate->count = minstrel_get_retry_count(&mi->r[ndx], info);
}
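
/*
 * Numeric illustration of the sampling budget above (made-up values):
 * with sampling_ratio = 10 (MRR case) and total_packets = 200 the target
 * is 20 sampled frames. If sample_packets = 12 and sample_deferred = 4,
 * then delta = 20 - (12 + 4/2) = 6 > 0 and this frame becomes a sampling
 * candidate; a negative delta means the budget is spent and the frame
 * goes out on the normal rate chain.
 */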

static void
calc_rate_durations(enum ieee80211_band band,
                    struct minstrel_rate *d,
                    struct ieee80211_rate *rate,
                    struct cfg80211_chan_def *chandef)
{
        int erp = !!(rate->flags & IEEE80211_RATE_ERP_G);
        int shift = ieee80211_chandef_get_shift(chandef);

        d->perfect_tx_time = ieee80211_frame_duration(band, 1200,
                        DIV_ROUND_UP(rate->bitrate, 1 << shift), erp, 1,
                        shift);
        d->ack_time = ieee80211_frame_duration(band, 10,
                        DIV_ROUND_UP(rate->bitrate, 1 << shift), erp, 1,
                        shift);
}

static void
init_sample_table(struct minstrel_sta_info *mi)
{
        unsigned int i, col, new_idx;
        u8 rnd[8];

        mi->sample_column = 0;
        mi->sample_row = 0;
        memset(mi->sample_table, 0xff, SAMPLE_COLUMNS * mi->n_rates);

        for (col = 0; col < SAMPLE_COLUMNS; col++) {
                prandom_bytes(rnd, sizeof(rnd));
                for (i = 0; i < mi->n_rates; i++) {
                        new_idx = (i + rnd[i & 7]) % mi->n_rates;
                        while (SAMPLE_TBL(mi, new_idx, col) != 0xff)
                                new_idx = (new_idx + 1) % mi->n_rates;

                        SAMPLE_TBL(mi, new_idx, col) = i;
                }
        }
}
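
/*
 * Example outcome (illustrative): with n_rates == 4 and
 * SAMPLE_COLUMNS == 10, each column ends up holding a random permutation
 * of { 0, 1, 2, 3 }. The while loop above linearly probes past slots
 * that are already taken (!= 0xff), which is what turns the raw random
 * bytes into a proper permutation instead of allowing duplicates.
 */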

static void
minstrel_rate_init(void *priv, struct ieee80211_supported_band *sband,
                   struct cfg80211_chan_def *chandef,
                   struct ieee80211_sta *sta, void *priv_sta)
{
        struct minstrel_sta_info *mi = priv_sta;
        struct minstrel_priv *mp = priv;
        struct ieee80211_rate *ctl_rate;
        unsigned int i, n = 0;
        unsigned int t_slot = 9; /* FIXME: get real slot time */
        u32 rate_flags;

        mi->sta = sta;
        mi->lowest_rix = rate_lowest_index(sband, sta);
        ctl_rate = &sband->bitrates[mi->lowest_rix];
        mi->sp_ack_dur = ieee80211_frame_duration(sband->band, 10,
                                ctl_rate->bitrate,
                                !!(ctl_rate->flags & IEEE80211_RATE_ERP_G), 1,
                                ieee80211_chandef_get_shift(chandef));

        rate_flags = ieee80211_chandef_rate_flags(&mp->hw->conf.chandef);
        memset(mi->max_tp_rate, 0, sizeof(mi->max_tp_rate));
        mi->max_prob_rate = 0;

        for (i = 0; i < sband->n_bitrates; i++) {
                struct minstrel_rate *mr = &mi->r[n];
                struct minstrel_rate_stats *mrs = &mi->r[n].stats;
                unsigned int tx_time = 0, tx_time_cts = 0, tx_time_rtscts = 0;
                unsigned int tx_time_single;
                unsigned int cw = mp->cw_min;
                int shift;

                if (!rate_supported(sta, sband->band, i))
                        continue;
                if ((rate_flags & sband->bitrates[i].flags) != rate_flags)
                        continue;

                n++;
                memset(mr, 0, sizeof(*mr));
                memset(mrs, 0, sizeof(*mrs));

                mr->rix = i;
                shift = ieee80211_chandef_get_shift(chandef);
                mr->bitrate = DIV_ROUND_UP(sband->bitrates[i].bitrate,
                                           (1 << shift) * 5);
                calc_rate_durations(sband->band, mr, &sband->bitrates[i],
                                    chandef);

                /* calculate maximum number of retransmissions before
                 * fallback (based on maximum segment size) */
                mr->sample_limit = -1;
                mrs->retry_count = 1;
                mr->retry_count_cts = 1;
                mrs->retry_count_rtscts = 1;
                tx_time = mr->perfect_tx_time + mi->sp_ack_dur;
                do {
                        /* add one retransmission */
                        tx_time_single = mr->ack_time + mr->perfect_tx_time;

                        /* contention window */
                        tx_time_single += (t_slot * cw) >> 1;
                        cw = min((cw << 1) | 1, mp->cw_max);

                        tx_time += tx_time_single;
                        tx_time_cts += tx_time_single + mi->sp_ack_dur;
                        tx_time_rtscts += tx_time_single + 2 * mi->sp_ack_dur;
                        if ((tx_time_cts < mp->segment_size) &&
                            (mr->retry_count_cts < mp->max_retry))
                                mr->retry_count_cts++;
                        if ((tx_time_rtscts < mp->segment_size) &&
                            (mrs->retry_count_rtscts < mp->max_retry))
                                mrs->retry_count_rtscts++;
                } while ((tx_time < mp->segment_size) &&
                         (++mr->stats.retry_count < mp->max_retry));
                mr->adjusted_retry_count = mrs->retry_count;
                if (!(sband->bitrates[i].flags & IEEE80211_RATE_ERP_G))
                        mr->retry_count_cts = mrs->retry_count;
        }

        for (i = n; i < sband->n_bitrates; i++) {
                struct minstrel_rate *mr = &mi->r[i];
                mr->rix = -1;
        }

        mi->n_rates = n;
        mi->last_stats_update = jiffies;

        init_sample_table(mi);
        minstrel_update_rates(mp, mi);
}
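
/*
 * Worked example for the retry-count loop above (assumed numbers): each
 * pass adds one retransmission costing
 *
 *      tx_time_single = ack_time + perfect_tx_time + (t_slot * cw) / 2
 *
 * with cw doubling 15 -> 31 -> 63 ... up to cw_max. A 1 Mbps rate whose
 * first attempt already costs ~10000 usecs exceeds the 6000 usec
 * segment_size immediately and keeps retry_count = 1, while a 54 Mbps
 * rate at a few hundred usecs per attempt accumulates several retries
 * before the segment budget is exhausted.
 */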

static void *
minstrel_alloc_sta(void *priv, struct ieee80211_sta *sta, gfp_t gfp)
{
        struct ieee80211_supported_band *sband;
        struct minstrel_sta_info *mi;
        struct minstrel_priv *mp = priv;
        struct ieee80211_hw *hw = mp->hw;
        int max_rates = 0;
        int i;

        mi = kzalloc(sizeof(struct minstrel_sta_info), gfp);
        if (!mi)
                return NULL;

        for (i = 0; i < IEEE80211_NUM_BANDS; i++) {
                sband = hw->wiphy->bands[i];
                if (sband && sband->n_bitrates > max_rates)
                        max_rates = sband->n_bitrates;
        }

        mi->r = kzalloc(sizeof(struct minstrel_rate) * max_rates, gfp);
        if (!mi->r)
                goto error;

        mi->sample_table = kmalloc(SAMPLE_COLUMNS * max_rates, gfp);
        if (!mi->sample_table)
                goto error1;

        mi->last_stats_update = jiffies;
        return mi;

error1:
        kfree(mi->r);
error:
        kfree(mi);
        return NULL;
}

static void
minstrel_free_sta(void *priv, struct ieee80211_sta *sta, void *priv_sta)
{
        struct minstrel_sta_info *mi = priv_sta;

        kfree(mi->sample_table);
        kfree(mi->r);
        kfree(mi);
}

static void
minstrel_init_cck_rates(struct minstrel_priv *mp)
{
        static const int bitrates[4] = { 10, 20, 55, 110 };
        struct ieee80211_supported_band *sband;
        u32 rate_flags = ieee80211_chandef_rate_flags(&mp->hw->conf.chandef);
        int i, j;

        sband = mp->hw->wiphy->bands[IEEE80211_BAND_2GHZ];
        if (!sband)
                return;

        for (i = 0, j = 0; i < sband->n_bitrates; i++) {
                struct ieee80211_rate *rate = &sband->bitrates[i];

                if (rate->flags & IEEE80211_RATE_ERP_G)
                        continue;

                if ((rate_flags & sband->bitrates[i].flags) != rate_flags)
                        continue;

                for (j = 0; j < ARRAY_SIZE(bitrates); j++) {
                        if (rate->bitrate != bitrates[j])
                                continue;

                        mp->cck_rates[j] = i;
                        break;
                }
        }
}

static void *
minstrel_alloc(struct ieee80211_hw *hw, struct dentry *debugfsdir)
{
        struct minstrel_priv *mp;

        mp = kzalloc(sizeof(struct minstrel_priv), GFP_ATOMIC);
        if (!mp)
                return NULL;

        /* contention window settings
         * Just an approximation. Using the per-queue values would complicate
         * the calculations and is probably unnecessary */
        mp->cw_min = 15;
        mp->cw_max = 1023;

        /* number of packets (in %) to use for sampling other rates
         * sample less often for non-mrr packets, because the overhead
         * is much higher than with mrr */
        mp->lookaround_rate = 5;
        mp->lookaround_rate_mrr = 10;

        /* maximum time that the hw is allowed to stay in one MRR segment */
        mp->segment_size = 6000;

        if (hw->max_rate_tries > 0)
                mp->max_retry = hw->max_rate_tries;
        else
                /* safe default, does not necessarily have to match hw properties */
                mp->max_retry = 7;

        if (hw->max_rates >= 4)
                mp->has_mrr = true;

        mp->hw = hw;
        mp->update_interval = 100;

#ifdef CONFIG_MAC80211_DEBUGFS
        mp->fixed_rate_idx = (u32) -1;
        mp->dbg_fixed_rate = debugfs_create_u32("fixed_rate_idx",
                        S_IRUGO | S_IWUGO, debugfsdir, &mp->fixed_rate_idx);
#endif

        minstrel_init_cck_rates(mp);

        return mp;
}

static void
minstrel_free(void *priv)
{
#ifdef CONFIG_MAC80211_DEBUGFS
        debugfs_remove(((struct minstrel_priv *)priv)->dbg_fixed_rate);
#endif
        kfree(priv);
}

static u32 minstrel_get_expected_throughput(void *priv_sta)
{
        struct minstrel_sta_info *mi = priv_sta;
        struct minstrel_rate_stats *tmp_mrs;
        int idx = mi->max_tp_rate[0];
        int tmp_cur_tp;

        /* convert packets per second to kbps (1200 is the average packet
         * size used for computing cur_tp)
         */
        tmp_mrs = &mi->r[idx].stats;
        tmp_cur_tp = minstrel_get_tp_avg(&mi->r[idx], tmp_mrs->prob_ewma) * 10;
        tmp_cur_tp = tmp_cur_tp * 1200 * 8 / 1024;

        return tmp_cur_tp;
}
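
/*
 * Arithmetic behind the conversion above: minstrel_get_tp_avg() returns
 * throughput in units of ~10 packets/sec (see the worked example after
 * that function), so "* 10" yields packets/sec; multiplying by the 1200
 * byte reference frame and 8 bits/byte and dividing by 1024 gives kbps.
 * E.g. tp_avg = 48 -> 480 pkt/s -> 480 * 1200 * 8 / 1024 == 4500 kbps.
 */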

const struct rate_control_ops mac80211_minstrel = {
        .name = "minstrel",
        .tx_status_noskb = minstrel_tx_status,
        .get_rate = minstrel_get_rate,
        .rate_init = minstrel_rate_init,
        .alloc = minstrel_alloc,
        .free = minstrel_free,
        .alloc_sta = minstrel_alloc_sta,
        .free_sta = minstrel_free_sta,
#ifdef CONFIG_MAC80211_DEBUGFS
        .add_sta_debugfs = minstrel_add_sta_debugfs,
        .remove_sta_debugfs = minstrel_remove_sta_debugfs,
#endif
        .get_expected_throughput = minstrel_get_expected_throughput,
};

int __init
rc80211_minstrel_init(void)
{
        return ieee80211_rate_control_register(&mac80211_minstrel);
}

void
rc80211_minstrel_exit(void)
{
        ieee80211_rate_control_unregister(&mac80211_minstrel);
}