avoid dubiously clever code in win32_start_timer

The code initializes an unsigned int to UINT_MAX using "-1", so that
the following always-true comparison looks always-false at first
glance.  Since alarm timer initializations are never nested, it is
simpler to unconditionally store the result of timeGetDevCaps into
data->period.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Author: Paolo Bonzini (2010-03-10 11:38:38 +01:00)
Committer: Anthony Liguori
parent 6d0ee85040
commit 9aea10297f
1 changed file with 2 additions and 4 deletions

vl.c

@@ -626,7 +626,7 @@ static struct qemu_alarm_timer *alarm_timer;
 struct qemu_alarm_win32 {
     MMRESULT timerId;
     unsigned int period;
-} alarm_win32_data = {0, -1};
+} alarm_win32_data = {0, 0};
 
 static int win32_start_timer(struct qemu_alarm_timer *t);
 static void win32_stop_timer(struct qemu_alarm_timer *t);
@@ -1360,9 +1360,7 @@ static int win32_start_timer(struct qemu_alarm_timer *t)
 
     memset(&tc, 0, sizeof(tc));
     timeGetDevCaps(&tc, sizeof(tc));
-
-    if (data->period < tc.wPeriodMin)
-        data->period = tc.wPeriodMin;
+    data->period = tc.wPeriodMin;
     timeBeginPeriod(data->period);
 
     flags = TIME_CALLBACK_FUNCTION;