Commit 889976dbcb ("memcg: reclaim memory from nodes in round-robin
order") adds a NUMA node round-robin for memcg, but the information is
only updated once every 10 seconds.
This patch changes the update trigger from jiffies to memcg's event
count. After this patch, the NUMA scan information will be updated once
1024 pagein/pageout events have been seen under a memcg.
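As an illustration only (the names below are made up, not the actual
memcontrol.c symbols), the trigger amounts to a per-memcg counter:

#define NUMAINFO_EVENTS_TARGET  1024    /* assumed: 1024 pagein/pageout events */

struct memcg_numainfo_sketch {
        unsigned long events;           /* pagein/pageout events seen so far */
        unsigned long numainfo_next;    /* event count at which to rescan nodes */
};

/* Called on every pagein/pageout event instead of on a 10s timer. */
static int memcg_numainfo_due(struct memcg_numainfo_sketch *m)
{
        m->events++;
        if (m->events < m->numainfo_next)
                return 0;
        m->numainfo_next = m->events + NUMAINFO_EVENTS_TARGET;
        return 1;       /* caller rebuilds the round-robin node scan mask */
}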
[akpm@linux-foundation.org: attempt to repair code layout]
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Ying Han <yinghan@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Now, in mem_cgroup_hierarchical_reclaim(), mem_cgroup_local_usage() is
used for checking whether the memcg contains reclaimable pages or not.
If there are no pages in it, the routine skips it.
But mem_cgroup_local_usage() includes unevictable pages and cannot handle
the "noswap" condition correctly, so it does not work on a swapless
system.
This patch adds test_mem_cgroup_reclaimable() and uses it in place of
mem_cgroup_local_usage(). test_mem_cgroup_reclaimable() looks at the LRU
counters and returns the correct answer to the caller. The new function
takes a "noswap" argument and can look at only the file LRU when
necessary.
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix kerneldoc layout]
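A hedged sketch of the idea (the LRU helper below is an illustrative
placeholder, not the real memcontrol.c API):

enum lru_kind_sketch { LRU_FILE_SKETCH, LRU_ANON_SKETCH };

/* Placeholder; the real code reads per-memcg, per-LRU page counts. */
extern unsigned long memcg_lru_pages(void *memcg, enum lru_kind_sketch kind);

/* Nonzero if the memcg has pages that reclaim could actually free. */
static int test_mem_cgroup_reclaimable_sketch(void *memcg, int noswap)
{
        if (memcg_lru_pages(memcg, LRU_FILE_SKETCH))
                return 1;
        /* Anonymous pages are only reclaimable if swapping is allowed. */
        if (!noswap && memcg_lru_pages(memcg, LRU_ANON_SKETCH))
                return 1;
        /* Unevictable pages are deliberately not counted. */
        return 0;
}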
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Ying Han <yinghan@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
__tlb_remove_page() switches to a new batch page, but still checks for
space in the old batch. This check always fails and causes a forced TLB
flush.
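A simplified sketch of the pattern and the fix (structure and names are
illustrative, not the actual mmu_gather code):

struct tlb_batch_sketch {
        unsigned int nr;
        unsigned int max;
        void *pages[64];
};

/* Returns nonzero while the active batch still has room; 0 forces a flush. */
static int tlb_remove_page_sketch(struct tlb_batch_sketch **active,
                                  struct tlb_batch_sketch *fresh, void *page)
{
        struct tlb_batch_sketch *batch = *active;

        if (batch->nr == batch->max) {
                *active = fresh;        /* switch to a new, empty batch */
                batch = *active;        /* the fix: test space in the NEW batch ... */
        }
        batch->pages[batch->nr++] = page;
        return batch->nr < batch->max;  /* ... not in the old, full one */
}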
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
During allocator-intensive workloads, kswapd will be woken frequently
causing free memory to oscillate between the high and min watermark. This
is expected behaviour. Unfortunately, if the highest zone is small, a
problem occurs.
When balance_pgdat() returns, it may be at a lower classzone_idx than it
started because the highest zone was unreclaimable. Before checking
whether it should go to sleep, though, it checks pgdat->classzone_idx,
which will be MAX_NR_ZONES-1 when there is no other activity. It
interprets this as having been woken up while reclaiming, skips
scheduling and reclaims again. As there is no useful reclaim work to do,
it enters a loop of shrinking slab, consuming loads of CPU, until the
highest zone becomes reclaimable for a long period of time.
There are two problems here. 1) If the returned classzone or order is
lower, it will continue reclaiming without scheduling. 2) If the highest
zone was marked unreclaimable but balance_pgdat() returns immediately at
DEF_PRIORITY, the new lower classzone is not communicated back to kswapd()
for sleeping.
This patch does two related things. First, if the end_zone is
unreclaimable, this information is communicated back. Second, if the
classzone or order was reduced due to failing to reclaim, new information
is not read from pgdat and instead an attempt is made to go to sleep. Due
to this, it is also necessary that pgdat->classzone_idx be initialised
each time to pgdat->nr_zones - 1 to avoid re-reads being interpreted as
wakeups.
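Roughly, the decision after balance_pgdat() returns looks like the
following hedged sketch (illustrative names, not the actual vmscan.c
code):

/* What kswapd does with balance_pgdat()'s result before trying to sleep. */
static void kswapd_after_balance_sketch(int *order, int *classzone_idx,
                                        int balanced_order,
                                        int balanced_classzone_idx,
                                        int pgdat_order,
                                        int pgdat_classzone_idx)
{
        if (balanced_order < *order || balanced_classzone_idx < *classzone_idx) {
                /*
                 * balance_pgdat() gave up on the highest zone(s): keep the
                 * reduced targets and attempt to sleep, rather than
                 * re-reading pgdat and mistaking the stale classzone_idx
                 * (initialised to pgdat->nr_zones - 1) for a new wakeup.
                 */
                *order = balanced_order;
                *classzone_idx = balanced_classzone_idx;
        } else {
                /* A genuinely new wakeup: pick up the latest request. */
                *order = pgdat_order;
                *classzone_idx = pgdat_classzone_idx;
        }
}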
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reported-by: Pádraig Brady <P@draigBrady.com>
Tested-by: Pádraig Brady <P@draigBrady.com>
Tested-by: Andrew Lutomirski <luto@mit.edu>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When deciding if kswapd is sleeping prematurely, the classzone is taken
into account, but this differs from what balance_pgdat() and the
allocator are doing. Specifically, the DMA zone will be checked based on
the classzone used when waking kswapd, which could be for a GFP_KERNEL or
GFP_HIGHMEM request. The lowmem reserve limit kicks in, the watermark is
not met, and kswapd thinks it is sleeping prematurely, keeping kswapd
awake in error.
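A hedged model of why the classzone matters (not the actual
zone_watermark_ok() code): the reserve added to the watermark depends on
the classzone the check is made against, so a low zone tested against a
high classzone carries a much larger reserve.

static int zone_watermark_ok_model(unsigned long free_pages,
                                   unsigned long high_wmark,
                                   const unsigned long lowmem_reserve[],
                                   int classzone_idx)
{
        /* A DMA zone checked against a high classzone adds a large reserve. */
        return free_pages >= high_wmark + lowmem_reserve[classzone_idx];
}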
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reported-by: Pádraig Brady <P@draigBrady.com>
Tested-by: Pádraig Brady <P@draigBrady.com>
Tested-by: Andrew Lutomirski <luto@mit.edu>
Acked-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
During allocator-intensive workloads, kswapd will be woken frequently
causing free memory to oscillate between the high and min watermark. This
is expected behaviour.
When kswapd applies pressure to zones during node balancing, it checks
whether the zone is above a high+balance_gap threshold. If it is, it
does not apply pressure, but it unconditionally shrinks slab on a global
basis, which is excessive. In the event kswapd is being kept awake due
to a small unreclaimable highest zone, it skips zone shrinking but still
calls shrink_slab().
Once pressure has been applied, the check for the zone being
unreclaimable is made before the check of whether all_unreclaimable
should be set. Missing the unreclaimable state can cause
has_under_min_watermark_zone to be set due to an unreclaimable zone,
preventing kswapd from backing off in congestion_wait().
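The intended per-zone flow, as a hedged sketch (illustrative names, not
the actual balance_pgdat() code):

static void balance_one_zone_sketch(int zone_unreclaimable,
                                    int zone_above_high_plus_gap,
                                    void (*shrink_the_zone)(void),
                                    void (*shrink_the_slab)(void))
{
        if (zone_unreclaimable)
                return;         /* no zone pressure, and no slab shrink either */

        if (!zone_above_high_plus_gap) {
                shrink_the_zone();
                shrink_the_slab();      /* only shrink slab when the zone was shrunk */
        }
}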
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reported-by: Pádraig Brady <P@draigBrady.com>
Tested-by: Pádraig Brady <P@draigBrady.com>
Tested-by: Andrew Lutomirski <luto@mit.edu>
Acked-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
During allocator-intensive workloads, kswapd will be woken frequently
causing free memory to oscillate between the high and min watermark. This
is expected behaviour. Unfortunately, if the highest zone is small, a
problem occurs.
This seems to happen most with recent Sandybridge laptops, but it is
probably a coincidence as some of these laptops just happen to have a
small Normal zone. The reproduction case is almost always copying large
files: kswapd pegs at 100% CPU until the file is deleted or the cache is
dropped.
The problem is mostly down to sleeping_prematurely() keeping kswapd awake
when the highest zone is small and unreclaimable, compounded by the fact
that we shrink slabs even when not shrinking zones, causing a lot of time
to be spent in shrinkers and a lot of memory to be reclaimed.
Patch 1 corrects sleeping_prematurely to check the zones matching
the classzone_idx instead of all zones.
Patch 2 avoids shrinking slab when we are not shrinking a zone.
Patch 3 notes that sleeping_prematurely is checking lower zones against
a high classzone, which is not what allocators or balance_pgdat()
do, leading to an artificial belief that kswapd should still be
awake.
Patch 4 notes that when balance_pgdat() gives up on a high zone, the
decision is not communicated to sleeping_prematurely()
This problem affects 2.6.38.8 for certain and is expected to affect 2.6.39
and 3.0-rc4 as well. If accepted, they need to go to -stable to be picked
up by distros and this series is against 3.0-rc4. I've cc'd people that
reported similar problems recently to see if they still suffer from the
problem and if this fixes it.
This patch: correct the check for kswapd sleeping in sleeping_prematurely()
During allocator-intensive workloads, kswapd will be woken frequently
causing free memory to oscillate between the high and min watermark. This
is expected behaviour.
A problem occurs if the highest zone is small. balance_pgdat() only
considers unreclaimable zones when priority is DEF_PRIORITY but
sleeping_prematurely considers all zones. It's possible for this sequence
to occur
1. kswapd wakes up and enters balance_pgdat()
2. At DEF_PRIORITY, marks highest zone unreclaimable
3. At DEF_PRIORITY-1, ignores highest zone setting end_zone
4. At DEF_PRIORITY-1, calls shrink_slab freeing memory from
highest zone, clearing all_unreclaimable. Highest zone
is still unbalanced
5. kswapd returns and calls sleeping_prematurely
6. sleeping_prematurely looks at *all* zones, not just the ones
being considered by balance_pgdat. The highest small zone
has all_unreclaimable cleared but the zone is not
balanced. all_zones_ok is false so kswapd stays awake
This patch corrects the behaviour of sleeping_prematurely to check the
zones balance_pgdat() checked.
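The corrected check, as a hedged sketch (illustrative, not the actual
sleeping_prematurely() code): only zones up to the classzone_idx that
balance_pgdat() was balancing are consulted.

static int all_zones_ok_sketch(int nr_zones, int classzone_idx,
                               const int zone_balanced[])
{
        int i;

        for (i = 0; i <= classzone_idx && i < nr_zones; i++) {
                if (!zone_balanced[i])
                        return 0;       /* kswapd stays awake */
        }
        return 1;                       /* safe for kswapd to go to sleep */
}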
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reported-by: Pádraig Brady <P@draigBrady.com>
Tested-by: Pádraig Brady <P@draigBrady.com>
Tested-by: Andrew Lutomirski <luto@mit.edu>
Acked-by: Rik van Riel <riel@redhat.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The LM95241 driver accepts every chip ID equal to or larger than 0xA4 as its
own, and other chips such as LM95245 use chip IDs in the accepted ID range.
This results in false chip detection.
Fix the problem by accepting only the known LM95241 chip ID.
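A hedged sketch of the detection change (the 0xA4 value comes from the
text above; the function is illustrative, not the actual lm95241.c code):

#define LM95241_CHIP_ID         0xA4    /* the only ID this driver should claim */

static int lm95241_id_matches(unsigned char chip_id)
{
        /* Previously "chip_id >= LM95241_CHIP_ID", which also matched LM95245. */
        return chip_id == LM95241_CHIP_ID;
}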
Signed-off-by: Guenter Roeck <guenter.roeck@ericsson.com>
Acked-by: Jean Delvare <khali@linux-fr.org>
Cc: stable@kernel.org # 2.6.30+
Multiple attempts to dynamically reallocate pci resources have
unfortunately led to regressions. Though we continue to fix the
regressions and fine-tune the dynamic-reallocation behavior, we have not
reached an acceptable state yet.
This patch provides an interim solution. It disables dynamic reallocation
by default, but adds the ability to enable it through the pci=realloc
kernel command line parameter.
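For example, to turn reallocation back on, append the parameter to the
kernel command line in the bootloader entry (the image path and root
device below are placeholders):

linux /boot/vmlinuz root=/dev/sda1 ro pci=realloc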
Tested-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
The ramp_delay variable can be set lower than the desired value.
This patch fixes it.
Signed-off-by: Donggeun Kim <dg77.kim@samsung.com>
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Signed-off-by: KyungMin Park <kyungmin.park@samsung.com>
Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Liam Girdwood <lrg@slimlogic.co.uk>
Small cleanups for better readability.
Signed-off-by: Axel Lin <axel.lin@gmail.com>
Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Liam Girdwood <lrg@slimlogic.co.uk>
The platform_data (pdata) may point to an __initdata section, which may
be freed from memory. This patch removes the dependency on pdata in
non-init functions so that platforms can declare their platform data
__initdata.
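A hedged sketch of the pattern (illustrative types, not the actual
driver code): copy what is needed out of pdata during probe instead of
keeping the pointer for later use.

struct chip_pdata_sketch {
        int ramp_delay;
        int buck_gpio;
};

struct chip_state_sketch {
        int ramp_delay;         /* copied, so it survives after __initdata is freed */
        int buck_gpio;
};

static void chip_probe_copy_sketch(struct chip_state_sketch *st,
                                   const struct chip_pdata_sketch *pdata)
{
        st->ramp_delay = pdata->ramp_delay;
        st->buck_gpio = pdata->buck_gpio;
        /* Later callbacks use st->..., never pdata->... */
}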
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Liam Girdwood <lrg@slimlogic.co.uk>
Currently, the ramp_delay variable is used uninitialized in
max8997_set_voltage_ldobuck(), which gets called through
regulator_register() calls.
To fix the problem, the ramp_delay initialization code in
max8997_pmic_probe() is moved before the calls to regulator_register().
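A hedged sketch of the reordering (illustrative types; only the ordering
matters):

struct pmic_sketch {
        int ramp_delay;
};

extern int register_regulators(struct pmic_sketch *pmic);       /* placeholder */

static int pmic_probe_sketch(struct pmic_sketch *pmic, int pdata_ramp_delay)
{
        /* Must happen first ... */
        pmic->ramp_delay = pdata_ramp_delay;
        /* ... because registration can call back into code that reads it. */
        return register_regulators(pmic);
}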
Cc: Liam Girdwood <lrg@ti.com>
Cc: Mark Brown <broonie@opensource.wolfsonmicro.com>
Cc: MyungJoo Ham <myungjoo.ham@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
Acked-by: MyungJoo Ham <myungjoo.ham@samsung.com>
Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Liam Girdwood <lrg@slimlogic.co.uk>
Evergreen+ ASICs have 2-6 CRTCs. Don't access CRTC registers for CRTCs
that don't exist, as such accesses have very high latency and may cause
problems on some ASICs. The previous code missed a few cases and was not
fine-grained enough (it missed the 4-CRTC case, for example).
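A hedged sketch of the guard (illustrative, not the actual evergreen.c
code): gate every per-CRTC register access on the number of CRTCs the
ASIC actually has, rather than assuming the maximum of six.

static void program_crtc_regs_sketch(int num_crtc,
                                     void (*write_crtc_reg)(int crtc))
{
        int i;

        for (i = 0; i < num_crtc; i++)
                write_crtc_reg(i);
        /* Registers for CRTCs >= num_crtc are never touched. */
}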
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=38800
v2: fix typo noticed by Chris Bandy <cbandy@jbandy.com>
Signed-off-by: Alex Deucher <alexdeucher@gmail.com>
Reviewed-by: Michel Dänzer <michel@daenzer.net>
Tested-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Upon review, all paths share the same dependencies for updating the
registers and so we can benefit from sharing the code and checking
early.
This removes the unsightly intel_wait_for_vblank() from the lowlevel
functions and upon further analysis the only path that will require a
wait is if we are performing an instantaneous transition between two
valid FBC configurations. The page-flip path itself will have disabled
FBC registers and will have waited for at least one vblank before
finishing the flip and attempting to re-enable FBC. This wait can be
accomplished simply by delaying the enable until after we are sure that
a vblank will have passed, which we are already doing to make sure that
the display is settled before enabling FBC.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
In order to accommodate the requirements of re-enabling FBC after
page-flipping, but to avoid doing so and incurring the cost of a wait
for vblank in the middle of a page-flip sequence, we defer the actual
enablement by 50ms. If any request to disable FBC arrives within that
interval, the enablement is cancelled and we are saved from blocking on
the wait.
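The deferral pattern, as a hedged sketch using the standard delayed-work
API (the structure and function names are illustrative, not the actual
i915 code):

#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct fbc_sketch {
        struct delayed_work enable_work;
};

static void fbc_enable_work(struct work_struct *work)
{
        /* Actually program the FBC registers here (omitted). */
}

static void fbc_init_sketch(struct fbc_sketch *fbc)
{
        INIT_DELAYED_WORK(&fbc->enable_work, fbc_enable_work);
}

static void fbc_request_enable(struct fbc_sketch *fbc)
{
        /* Defer the real enable by 50ms so the flip path never waits here. */
        schedule_delayed_work(&fbc->enable_work, msecs_to_jiffies(50));
}

static void fbc_request_disable(struct fbc_sketch *fbc)
{
        /* A disable inside the 50ms window simply cancels the pending enable. */
        cancel_delayed_work_sync(&fbc->enable_work);
        /* ... then write the FBC disable registers (omitted). */
}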
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
Page-flipping updates the scanout address, nukes the FBC compressed
image and so forces an FBC update so that the displayed image remains
consistent. However, page-flipping does not update the FBC registers
themselves, which remain pointing to both the old address and the old
CPU fence. Future updates to the new front-buffer (scanout) are then
undetected!
This first approach, which demonstrates the issue and highlights the fix,
simply disables FBC upon page-flip (a recompression will be forced on
every flip, so FBC becomes immaterial) and then re-enables FBC in the
page-flip finish work function, so that the FBC registers are now
pointing to the new framebuffer and front-buffer rendering works once
more.
Ideally, we want to only re-enable FBC after page-flipping is complete,
as otherwise we are just wasting cycles and power (with needless
recompression) whilst the page-flipping application is still running.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=33487
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
Persistent mode is intended for use with front-buffer rendering, such as
X, where it is necessary to detect writes to the scanout either by the
GPU or through the CPU's fence, and recompress the dirty regions on the
fly. (By comparison, with back-buffer rendering the scanout is always
recompressed after a page-flip.)
References: https://bugs.freedesktop.org/show_bug.cgi?id=33487
References: https://bugs.freedesktop.org/show_bug.cgi?id=31742
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
...and this requirement is enforced by intel_update_fbc() so we can
remove the later check from g4x_enable_fbc() and ironlake_enable_fbc().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
The cfb_pitch was only used for 8xx_enable_fbc(); every later routine
was just overwriting the value with itself thanks to a copy'n'paste
error.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
...to ensure that any pending FBC enable tasklet is cancelled.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Keith Packard <keithp@keithp.com>
As the enable/disable routines will gain additional complexity in
future patches, it is necessary that all callers do not bypass the
generic interface by calling into the chipset routines directly. To do
this we make the chipset routines static, so there is no choice.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
* 'omap-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap-2.6:
omap: drop __initdata tags from static struct platform_device declarations
* 'drm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6:
drm/kms: allow drm_mode_group with no objects
drm/radeon/kms: free ib pool on module unloading
drm/radeon/kms: fix typo in evergreen disp int status register
drm/radeon/kms: fix typo in IH_CNTL swap bitfield
The wrong bit was masked when acking langwell gpio interrupts. The
reason for masking the wrong bit was probably that the __ffs() and ffs()
functions return bit indexes differently (0..31 vs 1..32).
This stops langwell-based devices from hanging when a gpio interrupt is
triggered and undoes the breakage which occurred in change set
732063b92b
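A hedged illustration of the off-by-one (userspace C, using the POSIX
ffs(); the kernel's __ffs() is 0-based and requires a nonzero argument):

#include <stdio.h>
#include <strings.h>            /* POSIX ffs(): 1-based, returns 0 if no bit set */

static int zero_based_ffs(int x)        /* behaves like the kernel's __ffs() */
{
        return ffs(x) - 1;
}

int main(void)
{
        int pending = 1 << 5;           /* GPIO 5 raised the interrupt */

        printf("ffs()   -> %d (1-based)\n", ffs(pending));             /* 6 */
        printf("__ffs() -> %d (0-based)\n", zero_based_ffs(pending));  /* 5 */
        /* Acking "1 << ffs(pending)" would clear bit 6 and leave GPIO 5 pending. */
        return 0;
}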
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
If mini2440_init() is in __init, mini2440_parse_features() should also
be in __init. Fixes:
(.text+0x9adc): Section mismatch in reference from the function mini2440_parse_features.clone.0() to the
(unknown reference) .init.data:(unknown)
The function mini2440_parse_features.clone.0() references the (unknown reference) __initdata (unknown).
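The rule the warning enforces, as a hedged sketch (not the actual
mach-mini2440.c code): anything that references __initdata must itself
be __init so that both are discarded together after boot.

#include <linux/init.h>

static char features_str_sketch[12] __initdata = "";

static void __init parse_features_sketch(void)
{
        char *p;

        /* OK: an __init function referencing __initdata. */
        for (p = features_str_sketch; *p; p++)
                ;       /* ... handle each feature character ... */
}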
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Cc: Michel Pollet <buserror@gmail.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Commit bb072c3c (ARM / Samsung: Use struct syscore_ops for "core" power
management) turned s3c2410_dma_resume_chan() from int to void. So, drop
the actual return values, too. Fixes:
arch/arm/plat-s3c24xx/dma.c: In function 's3c2410_dma_resume_chan':
arch/arm/plat-s3c24xx/dma.c:1238:3: warning: 'return' with a value, in function returning void
arch/arm/plat-s3c24xx/dma.c:1250:2: warning: 'return' with a value, in function returning void
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Commit 8970ef47 (S3C24XX: Remove hardware specific registers from DMA
calls) removed the dcon parameter from s3c2410_dma_config() and
calculates it internally instead. So the debug output for the old
parameter can go, too.
Fixes:
arch/arm/plat-s3c24xx/dma.c: In function 's3c2410_dma_config':
arch/arm/plat-s3c24xx/dma.c:1030:2: warning: 'dcon' is used uninitialized in this function
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de>
Cc: Ben Dooks <ben-linux@fluff.org>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
* 'for-30-rc5/all-i2c' of git://git.fluff.org/bjdooks/linux:
i2c-bfin-twi: abort transfer is MEM bit is reset unexpectedly
i2c-s3c2410: Remove useless break code
i2c-s3c2410: Fix typo 'i2s' -> 'i2c'
i2c: tegra: Assign unused slave address
According to the hardware documentation, GDRST is exactly the same as on
Sandybridge. So simply enable the existing code.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
On sinks with a DPCD rev of 1.1 or greater, we can send sink power
management commands to address 0x600 per section 5.1.5 of the
DisplayPort 1.1a spec.
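A hedged sketch of the DPCD write (the register and values come from the
spec section cited above; the aux-write helper is a placeholder):

#define DP_SET_POWER            0x600
#define DP_SET_POWER_D0         0x1     /* normal operation */
#define DP_SET_POWER_D3         0x2     /* power down */

extern int aux_write_byte(unsigned int dpcd_addr, unsigned char val);   /* placeholder */

static void dp_sink_set_power_sketch(unsigned char dpcd_rev, int power_on)
{
        if (dpcd_rev < 0x11)
                return;         /* DPCD 1.0 sinks do not support SET_POWER */
        aux_write_byte(DP_SET_POWER,
                       power_on ? DP_SET_POWER_D0 : DP_SET_POWER_D3);
}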
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
When checking link status during a hot plug event or detecting sink
presence, we need to retry 3 times per the spec (section 9.1 of the 1.1a
DisplayPort spec). Consolidate the retry code into a
native_aux_read_retry function for use by get_link_status and _detect.
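A hedged sketch of the consolidated helper (the underlying aux read is a
placeholder; only the 3-try loop is the point):

extern int aux_read(unsigned int addr, unsigned char *buf, int len);    /* placeholder */

static int native_aux_read_retry_sketch(unsigned int addr,
                                        unsigned char *buf, int len)
{
        int i;

        /* Per section 9.1 of the DP 1.1a spec, try up to 3 times. */
        for (i = 0; i < 3; i++) {
                if (aux_read(addr, buf, len) == len)
                        return 1;       /* success */
        }
        return 0;                       /* the sink never answered */
}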
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
We currently use this when a hot plug event is received, only checking
the link status and re-training if we had previously configured a link.
However if we want to preserve the DP configuration across both hot plug
and DPMS events (which we do for userspace apps that don't respond to
hot plug uevents), we need to unconditionally check the link and try to
bring it up on hot plug.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
If ->detect is called too soon after a hot plug event, the sink may not
be ready yet. So try up to 3 times with 1ms sleeps in between tries to
get the data (spec dictates that receivers must be ready to respond within
1ms and that sources should try 3 times).
See section 9.1 of the 1.1a DisplayPort spec.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
When a hotplug event is received, we need to check the receiver cap bits
in case they've changed (as they might with a hub or chain config).
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
Makes it easier to search for DP related constants.
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Keith Packard <keithp@keithp.com>
Especially after a hotplug or power status change, the sink may not
reply immediately to a link status query. So retry 3 times per the spec
to really make sure nothing is there.
See section 9.1 of the 1.1a DisplayPort spec.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Keith Packard <keithp@keithp.com>
Commit e534c5b831 (USB: fix regression
occurring during device removal) didn't go far enough. It failed to
take into account that when a driver claims multiple interfaces, it may
release them all at the same time. As a result, some interfaces can
get released before they are unregistered, and we deadlock trying to
acquire the bandwidth_mutex that we already own.
This patch (asl478) handles this case by setting the "unregistering"
flag on all the interfaces before removing any of them.
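A hedged sketch of the two-pass approach (illustrative types, not the
actual usb/core code): flag every interface before removing any of them,
so that no interface is released while another is still unflagged.

struct intf_sketch {
        int unregistering;
};

static void remove_intfs_sketch(struct intf_sketch *intf, int num_intfs,
                                void (*remove_one)(struct intf_sketch *))
{
        int i;

        /* Pass 1: mark everything first ... */
        for (i = 0; i < num_intfs; i++)
                intf[i].unregistering = 1;

        /* Pass 2: ... then do the actual removal. */
        for (i = 0; i < num_intfs; i++)
                remove_one(&intf[i]);
}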
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Cc: stable <stable@kernel.org>
Tested-by: Éric Piel <eric.piel@tremplin-utc.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The vt->type field determines how the msp3400 should fill in the
tuner data, not whether the msp3400 is in radio mode or not.
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
* 'for-linus' of git://git.kernel.dk/linux-block:
drbd: we should write meta data updates with FLUSH FUA
drbd: fix limit define, we support 1 PiByte now
drbd: when receive times out on meta socket, also check last receive time on data socket
drbd: account bitmap IO during resync as resync-(related-)-io
drbd: don't cond_resched_lock with IRQs disabled
drbd: add missing spinlock to bitmap receive
drbd: Use the correct max_bio_size when creating resync requests
cfq-iosched: make code consistent
cfq-iosched: fix a rcu warning
Add an FS-Cache helper to bulk uncache pages on an inode. This will
only work for the circumstance where the pages in the cache correspond
1:1 with the pages attached to an inode's page cache.
This is required for CIFS and NFS: when disabling the inode cookie, we
were returning the cookie and setting cifsi->fscache to NULL but failed
to invalidate any previously mapped pages. This resulted in "Bad page
state" errors and manifested in other kinds of errors when running
fsstress. Fix it by uncaching mapped pages when we disable the inode
cookie.
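A hedged sketch of the intended call site (CIFS-style; the helper name
and the cifs inode layout here are assumptions for illustration):

#include <linux/fs.h>
#include <linux/fscache.h>

struct cifs_inode_sketch {
        struct fscache_cookie *fscache;
};

static void disable_inode_cookie_sketch(struct inode *inode,
                                        struct cifs_inode_sketch *cifsi)
{
        if (!cifsi->fscache)
                return;

        /* Drop the 1:1 page <-> cache-page mappings before dropping the cookie. */
        fscache_uncache_all_inode_pages(cifsi->fscache, inode);

        /* Then return the cookie and clear the pointer, as before. */
        fscache_relinquish_cookie(cifsi->fscache, 1);
        cifsi->fscache = NULL;
}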
This patch should fix the following oops and "Bad page state" errors
seen during fsstress testing.
------------[ cut here ]------------
kernel BUG at fs/cachefiles/namei.c:201!
invalid opcode: 0000 [#1] SMP
Pid: 5, comm: kworker/u:0 Not tainted 2.6.38.7-30.fc15.x86_64 #1 Bochs Bochs
RIP: 0010: cachefiles_walk_to_object+0x436/0x745 [cachefiles]
RSP: 0018:ffff88002ce6dd00 EFLAGS: 00010282
RAX: ffff88002ef165f0 RBX: ffff88001811f500 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000100 RDI: 0000000000000282
RBP: ffff88002ce6dda0 R08: 0000000000000100 R09: ffffffff81b3a300
R10: 0000ffff00066c0a R11: 0000000000000003 R12: ffff88002ae54840
R13: ffff88002ae54840 R14: ffff880029c29c00 R15: ffff88001811f4b0
FS: 00007f394dd32720(0000) GS:ffff88002ef00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007fffcb62ddf8 CR3: 000000001825f000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process kworker/u:0 (pid: 5, threadinfo ffff88002ce6c000, task ffff88002ce55cc0)
Stack:
0000000000000246 ffff88002ce55cc0 ffff88002ce6dd58 ffff88001815dc00
ffff8800185246c0 ffff88001811f618 ffff880029c29d18 ffff88001811f380
ffff88002ce6dd50 ffffffff814757e4 ffff88002ce6dda0 ffffffff8106ac56
Call Trace:
cachefiles_lookup_object+0x78/0xd4 [cachefiles]
fscache_lookup_object+0x131/0x16d [fscache]
fscache_object_work_func+0x1bc/0x669 [fscache]
process_one_work+0x186/0x298
worker_thread+0xda/0x15d
kthread+0x84/0x8c
kernel_thread_helper+0x4/0x10
RIP cachefiles_walk_to_object+0x436/0x745 [cachefiles]
---[ end trace 1d481c9af1804caa ]---
I tested the uncaching by the following means:
(1) Create a big file on my NFS server (104857600 bytes).
(2) Read the file into the cache with md5sum on the NFS client. Look in
/proc/fs/fscache/stats:
Pages : mrk=25601 unc=0
(3) Open the file for read/write ("bash 5<>/warthog/bigfile"). Look in proc
again:
Pages : mrk=25601 unc=25601
Reported-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-and-Tested-by: Suresh Jayaraman <sjayaraman@suse.de>
cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>