tpm: use tpm_msleep() value as max delay

Currently, tpm_msleep() uses delay_msec as the minimum value in
usleep_range(). However, delay_msec is the maximum time we want
to wait. Modify tpm_msleep() to use delay_msec as the maximum
value, not the minimum value.

After this change, performance on a TPM 1.2 with an 8 byte
burstcount for 1000 extends improved from ~9sec to ~8sec.

Fixes: 3b9af007869 ("tpm: replace msleep() with usleep_range() in TPM 1.2/2.0 generic drivers")

Signed-off-by: Nayna Jain <nayna@linux.vnet.ibm.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Nayna Jain 2017-10-17 16:32:32 -04:00 committed by Jarkko Sakkinen
parent cf151a9a44
commit 5ef924d9e2
1 changed file with 2 additions and 2 deletions


@@ -515,8 +515,8 @@ int tpm_pm_resume(struct device *dev);
 static inline void tpm_msleep(unsigned int delay_msec)
 {
-	usleep_range(delay_msec * 1000,
-		     (delay_msec * 1000) + TPM_TIMEOUT_RANGE_US);
+	usleep_range((delay_msec * 1000) - TPM_TIMEOUT_RANGE_US,
+		     delay_msec * 1000);
 };
 struct tpm_chip *tpm_chip_find_get(int chip_num);