When translating a user space address, the address must be checked against
the ASCE limit of the process. If the address is larger than the maximum
address that is reachable with the ASCE, an ASCE type exception must be
generated.
The current code simply ignored the higher order bits. This resulted in an
address wrap-around in user space instead of an exception.
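A minimal illustrative sketch of the kind of limit check being added (the
_ASCE_TYPE_* macro names and the per-designation-type limits are assumptions
drawn from the architecture description, not the actual patch):

    /* Reject user addresses above what the ASCE's designation type can
     * map, instead of silently dropping the high-order bits. */
    static unsigned long asce_address_limit(unsigned long asce)
    {
            switch (asce & _ASCE_TYPE_MASK) {       /* assumed macro names */
            case _ASCE_TYPE_SEGMENT:
                    return 1UL << 31;
            case _ASCE_TYPE_REGION3:
                    return 1UL << 42;
            case _ASCE_TYPE_REGION2:
                    return 1UL << 53;
            default:                                /* region-first table */
                    return -1UL;
            }
    }

    /* A translation would raise an ASCE-type exception whenever
     * addr >= asce_address_limit(asce) instead of wrapping around. */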
Cc: stable@vger.kernel.org # v3.9+
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This driver can be built as a module now.
Add MODULE_ALIAS to support module auto-loading.
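For illustration, a hedged sketch of such an alias for a platform driver
(the device name below is a placeholder, not necessarily this driver's):

    /* Lets udev/modprobe auto-load the module when a matching platform
     * device is registered; "example-gpio" is hypothetical. */
    MODULE_ALIAS("platform:example-gpio");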
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Reviewed-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
The prototype for static GPIO lookup functions has been updated to use
an explicit type for GPIO lookup flags. Unfortunately the definition of
of_find_gpio() when CONFIG_OF is not defined has been omitted, which
triggers a warning. This patch fixes this.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
It's not obvious from the label name but "err1" tries to release
"p->irq_domain" which leads to a NULL dereference.
Fixes: 119f5e448d ('gpio: Renesas R-Car GPIO driver V3')
Cc: stable@vger.kernel.org
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
There is a bug in msm_gpio_probe() where we do:
msm_gpio.summary_irq = platform_get_irq(pdev, 0);
if (msm_gpio.summary_irq < 0) {
The problem is that "msm_gpio.summary_irq" is unsigned so the error
handling doesn't work. I've fixed it by making it signed.
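A hedged sketch of why the signedness matters (the commit makes the field
itself signed; this local-variable variant just illustrates the pattern):

    int irq;

    irq = platform_get_irq(pdev, 0);
    if (irq < 0)
            return irq;             /* the error path is now actually taken */
    msm_gpio.summary_irq = irq;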
Fixes: 43f68444bc ('gpio: msm: Add device tree and irqdomain support for gpio-msm-v2')
Cc: stable@vger.kernel.org
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
There is a bug in mvebu_gpio_probe() where we do:
mvchip->irqbase = irq_alloc_descs(-1, 0, ngpios, -1);
if (mvchip->irqbase < 0) {
The problem is that mvchip->irqbase is unsigned so the error handling
doesn't work. I have changed it to be a regular int.
Cc: stable@vger.kernel.org
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
GPIO mapping properties were defined using the GPIOF_* flags, which are
declared in linux/gpio.h. This file is not included when using the
GPIO descriptor interface.
This patch declares the flags that can be used as GPIO mappings
properties in linux/gpio/driver.h, and uses them in gpiolib, so that no
deprecated declarations are used by the GPIO descriptor interface.
This patch also allows GPIO_OPEN_DRAIN and GPIO_OPEN_SOURCE to be
specified as GPIO mapping properties.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
find_chip_by_name() was incorrectly implemented by using
gpio_lookup_list instead of gpiod_chips to iterate through all the
registered GPIO controllers. This patch reimplements it by using
gpiochip_find() with a custom search function, which simplifies the code
on top of fixing the mistake.
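A sketch of the described approach, assuming gpiochip_find() takes a
(chip, data) match callback:

    static int gpiochip_match_name(struct gpio_chip *chip, void *data)
    {
            const char *name = data;

            return !strcmp(chip->label, name);
    }

    static struct gpio_chip *find_chip_by_name(const char *name)
    {
            return gpiochip_find((void *)name, gpiochip_match_name);
    }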
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
gpiolib now exports a new descriptor-based interface which deprecates
the older integer-based one. This patch documents this new interface and
also takes the opportunity to brush up the GPIO documentation a little
bit.
The new descriptor-based interface follows the same consumer/driver
model as many other kernel subsystems (e.g. clock, regulator), so its
documentation has similarly been split into different files.
The content of the former documentation has been reused whenever it
made sense; however, some of it did not apply to the new interface
anymore and has thus been removed. Likewise, new sections such as the
mapping of GPIOs to devices have been written from scratch.
The deprecated legacy documentation is still available, untouched,
under Documentation/gpio/gpio-legacy.txt.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Acked-by: Christian Ruppert <christian.ruppert@abilis.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
For tmp_part->header.name, the comment says "Terminating null required
only for names < 12 chars", so printk needs to limit the output with
"%.12s".
Additional info:
"%12s" only limits the field width, not how much of the original string
is printed: if the name is longer than 12 characters it is still printed
in full, and if it is shorter it is padded with leading spaces.
"%.12s" truly limits how many characters of the original string are
printed (the precision).
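A small illustration of the difference (hypothetical buffer, not the actual
partition header structure):

    char name[12] = "abcdefghijkl";     /* 12 chars, no terminating NUL */

    /* "%12s" would keep reading past the 12 bytes looking for a NUL;
     * "%.12s" stops after at most 12 characters. */
    printk(KERN_INFO "name: %.12s\n", name);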
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
In a recent patch:
commit c13f20ac48
Author: Michael Neuling <mikey@neuling.org>
powerpc/signals: Mark VSX not saved with small contexts
We fixed an issue but an improved solution was later discussed after the patch
was merged.
Firstly, the original patch doesn't handle the 64-bit signals case, which
could also hit this issue (but has never been reported).
Secondly, the original patch isn't clear about what MSR VSX should be set
to. The new approach below always clears the MSR VSX bit (to indicate no
VSX is in the context) and sets it only in the specific case where VSX is
available (i.e. when VSX has been used and the signal context passed has
space to provide the state).
This reverts the original patch and replaces it with the improved solution.
It also adds a 64-bit version.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Cc: stable@vger.kernel.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
When the CONFIG_SPARSEMEM_VMEMMAP option is used in the kernel, makedumpfile fails
to filter the vmcore dump as it fails to do vmemmap translations. So far,
dump filtering on ppc64 never had to deal with vmemmap addresses separately,
as vmemmap regions were mapped in zone normal. But with the inclusion of the
CONFIG_SPARSEMEM_VMEMMAP config option in the kernel, this vmemmap address
translation support becomes necessary for dump filtering. For vmemmap address
translation, a few kernel symbols are needed by the dump filtering tool. This patch
adds those symbols to vmcoreinfo, which a dump filtering tool can use for
filtering the kernel dump. Tested these changes successfully with a makedumpfile
tool that supports vmemmap to physical address translation outside zone normal.
[ Removed unneeded #ifdef as suggested by Michael Ellerman --BenH ]
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Stephen reported a failure in an allyesconfig build.
CONFIG_CPU_LITTLE_ENDIAN=y gets set but his toolchain is not
new enough to support little endian. We really want to
default to a big endian build; Ben suggested using a choice
which defaults to CPU_BIG_ENDIAN.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Currently if I cross build TAGS or cscope from x86 I get this:
% make ARCH=powerpc TAGS
gcc-4.8.real: error: unrecognized command line option ‘-mbig-endian’
GEN TAGS
%
I'm not setting CROSS_COMPILE= as logically I shouldn't need to and I
haven't needed to in the past when building TAGS or cscope. Also, the
above completes correctly, as the error is not fatal to the build.
This was caused by:
commit d72b080171
Author: Ian Munsie <imunsie@au1.ibm.com>
powerpc: Add ability to build little endian kernels
The below fixes this by testing for the -mbig-endian option before
adding it.
I've not done the same thing in the little endian case as if
-mlittle-endian doesn't exist, we probably want to fail quickly as you
probably have an old big endian compiler.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Scott wrote:
<<
The corenet64 patch fixes a regression introduced in 3.13-rc1 (commit
ef1313deaf, "powerpc: Add VMX optimised xor
for RAID5").
The 8xx patch fixes a regression introduced in 3.12 (commit
beb2dc0a7a, "powerpc: Convert some
mftb/mftbu into mfspr").
The other two patches are fixes for minor, long standing bugs.
>>
Fix kernel-doc warning for duplicate definition of 'kmalloc':
Documentation/DocBook/kernel-api.xml:9483: element refentry: validity error : ID API-kmalloc already defined
<refentry id="API-kmalloc">
Also combine the kernel-doc info from the 2 kmalloc definitions into one
block and remove the "see kcalloc" comment since kmalloc now contains the
@flags info.
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull input updates from Dmitry Torokhov:
"A new driver for Surface 2.0/Pixelsense touchscreen and a couple of
driver fixups"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
MAINTAINERS - add keyboard driver to Hyper-V file list
Input: atmel-wm97xx - fix compile error
Input: hp_sdc_rtc - unlock on error in hp_sdc_rtc_read_i8042timer()
Input: cyttsp4 - remove unnecessary work pending test
Input: add sur40 driver for Samsung SUR40 (aka MS Surface 2.0/Pixelsense)
This reverts commit 09fbc47373, which
caused the following build errors:
crypto/asymmetric_keys/x509_public_key.c: In function ‘x509_key_preparse’:
crypto/asymmetric_keys/x509_public_key.c:237:35: error: ‘system_trusted_keyring’ undeclared (first use in this function)
ret = x509_validate_trust(cert, system_trusted_keyring);
^
crypto/asymmetric_keys/x509_public_key.c:237:35: note: each undeclared identifier is reported only once for each function it appears in
reported by Jim Davis. Mimi says:
"I made the classic mistake of requesting this patch to be upstreamed
at the last second, rather than waiting until the next open window.
At this point, the best course would probably be to revert the two
commits and fix them for the next open window"
Reported-by: Jim Davis <jim.epost@gmail.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This reverts commit 217091dd7a, which
caused the following build error:
security/integrity/digsig.c:70:5: error: redefinition of ‘integrity_init_keyring’
security/integrity/integrity.h:149:12: note: previous definition of ‘integrity_init_keyring’ w
security/integrity/integrity.h:149:12: warning: ‘integrity_init_keyring’ defined but not used
reported by Krzysztof Kolasa. Mimi says:
"I made the classic mistake of requesting this patch to be upstreamed
at the last second, rather than waiting until the next open window.
At this point, the best course would probably be to revert the two
commits and fix them for the next open window"
Reported-by: Krzysztof Kolasa <kkolasa@winsoft.pl>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull crypto update from Herbert Xu:
- Made x86 ablk_helper generic for ARM
- Phase out chainiv in favour of eseqiv (affects IPsec)
- Fixed aes-cbc IV corruption on s390
- Added constant-time crypto_memneq which replaces memcmp
- Fixed aes-ctr in omap-aes
- Added OMAP3 ROM RNG support
- Add PRNG support for MSM SoC's
- Add and use Job Ring API in caam
- Misc fixes
[ NOTE! This pull request was sent within the merge window, but Herbert
has some questionable email sending setup that makes him public enemy
#1 as far as gmail is concerned. So most of his emails seem to be
trapped by gmail as spam, resulting in me not seeing them. - Linus ]
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (49 commits)
crypto: s390 - Fix aes-cbc IV corruption
crypto: omap-aes - Fix CTR mode counter length
crypto: omap-sham - Add missing modalias
padata: make the sequence counter an atomic_t
crypto: caam - Modify the interface layers to use JR API's
crypto: caam - Add API's to allocate/free Job Rings
crypto: caam - Add Platform driver for Job Ring
hwrng: msm - Add PRNG support for MSM SoC's
ARM: DT: msm: Add Qualcomm's PRNG driver binding document
crypto: skcipher - Use eseqiv even on UP machines
crypto: talitos - Simplify key parsing
crypto: picoxcell - Simplify and harden key parsing
crypto: ixp4xx - Simplify and harden key parsing
crypto: authencesn - Simplify key parsing
crypto: authenc - Export key parsing helper function
crypto: mv_cesa: remove deprecated IRQF_DISABLED
hwrng: OMAP3 ROM Random Number Generator support
crypto: sha256_ssse3 - also test for BMI2
crypto: mv_cesa - Remove redundant of_match_ptr
crypto: sahara - Remove redundant of_match_ptr
...
ceph_osdc_readpages() returns the number of bytes read. Currently, the
code only allocates full-zero pages into fscache; this patch fixes this.
Signed-off-by: Li Wang <liwang@ubuntukylin.com>
Reviewed-by: Milosz Tanski <milosz@adfin.com>
Reviewed-by: Sage Weil <sage@inktank.com>
We also need to wake up 'safe' waiters if an error occurs or the request
is aborted. Otherwise sync(2)/fsync(2) may hang forever.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Sage Weil <sage@inktank.com>
Aborted requests usually get cleared when the reply is received. If the
MDS crashes, no reply will be received. So we need to clean up aborted
requests when re-sending requests.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Greg Farnum <greg@inktank.com>
Signed-off-by: Sage Weil <sage@inktank.com>
When a cap gets released while composing the cap reconnect message, we
should skip queuing the release message if the cap hasn't been added to
the cap reconnect message.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
It's possible that some caps get released while composing the cap
reconnect message.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
The following two commits implemented mmap support in the regular file
path and merged bin file support into the regular path.
73d9714627 ("sysfs: copy bin mmap support from fs/sysfs/bin.c to fs/sysfs/file.c")
3124eb1679 ("sysfs: merge regular and bin file handling")
After the merge, the following commands trigger a spurious lockdep
warning. "test-mmap-read" simply mmaps the file and dumps the
content.
$ cat /sys/block/sda/trace/act_mask
$ test-mmap-read /sys/devices/pci0000\:00/0000\:00\:03.0/resource0 4096
======================================================
[ INFO: possible circular locking dependency detected ]
3.12.0-work+ #378 Not tainted
-------------------------------------------------------
test-mmap-read/567 is trying to acquire lock:
(&of->mutex){+.+.+.}, at: [<ffffffff8120a8df>] sysfs_bin_mmap+0x4f/0x120
but task is already holding lock:
(&mm->mmap_sem){++++++}, at: [<ffffffff8114b399>] vm_mmap_pgoff+0x49/0xa0
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (&mm->mmap_sem){++++++}:
...
-> #2 (sr_mutex){+.+.+.}:
...
-> #1 (&bdev->bd_mutex){+.+.+.}:
...
-> #0 (&of->mutex){+.+.+.}:
...
other info that might help us debug this:
Chain exists of:
&of->mutex --> sr_mutex --> &mm->mmap_sem
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(&mm->mmap_sem);
lock(sr_mutex);
lock(&mm->mmap_sem);
lock(&of->mutex);
*** DEADLOCK ***
1 lock held by test-mmap-read/567:
#0: (&mm->mmap_sem){++++++}, at: [<ffffffff8114b399>] vm_mmap_pgoff+0x49/0xa0
stack backtrace:
CPU: 3 PID: 567 Comm: test-mmap-read Not tainted 3.12.0-work+ #378
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
ffffffff81ed41a0 ffff880009441bc8 ffffffff81611ad2 ffffffff81eccb80
ffff880009441c08 ffffffff8160f215 ffff880009441c60 ffff880009c75208
0000000000000000 ffff880009c751e0 ffff880009c75208 ffff880009c74ac0
Call Trace:
[<ffffffff81611ad2>] dump_stack+0x4e/0x7a
[<ffffffff8160f215>] print_circular_bug+0x2b0/0x2bf
[<ffffffff8109ca0a>] __lock_acquire+0x1a3a/0x1e60
[<ffffffff8109d6ba>] lock_acquire+0x9a/0x1d0
[<ffffffff81615547>] mutex_lock_nested+0x67/0x3f0
[<ffffffff8120a8df>] sysfs_bin_mmap+0x4f/0x120
[<ffffffff8115d363>] mmap_region+0x3b3/0x5b0
[<ffffffff8115d8ae>] do_mmap_pgoff+0x34e/0x3d0
[<ffffffff8114b3ba>] vm_mmap_pgoff+0x6a/0xa0
[<ffffffff8115be3e>] SyS_mmap_pgoff+0xbe/0x250
[<ffffffff81008282>] SyS_mmap+0x22/0x30
[<ffffffff8161a4d2>] system_call_fastpath+0x16/0x1b
This happens because one file nests sr_mutex, which nests mm->mmap_sem
under it, under of->mutex, while the mmap implementation naturally nests
of->mutex under mm->mmap_sem. The warning is a false positive, as
of->mutex is per open file and the two paths belong to two different
files. This warning didn't trigger before regular and bin file
support were merged because only bin files supported mmap and the
other side of the locking happened only on regular files, which used
equivalent but separate locking.
It'd be best if we give separate locking classes per file but we can't
easily do that. Let's differentiate on ->mmap() for now. Later we'll
add explicit file operations struct and can add per-ops lockdep key
there.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit bcdde7e221 (sysfs: make __sysfs_remove_dir() recursive) changed
the behavior so that directory removals will be done recursively. This
means that the sysfs group might already be removed if its parent directory
has been removed.
The current code outputs warnings similar to the following log snippet when it
detects that there is no group for the given kobject:
WARNING: CPU: 0 PID: 4 at fs/sysfs/group.c:214 sysfs_remove_group+0xc6/0xd0()
sysfs group ffffffff81c6f1e0 not found for kobject 'host7'
Modules linked in:
CPU: 0 PID: 4 Comm: kworker/0:0 Not tainted 3.12.0+ #13
Hardware name: /D33217CK, BIOS GKPPT10H.86A.0042.2013.0422.1439 04/22/2013
Workqueue: kacpi_hotplug acpi_hotplug_work_fn
0000000000000009 ffff8801002459b0 ffffffff817daab1 ffff8801002459f8
ffff8801002459e8 ffffffff810436b8 0000000000000000 ffffffff81c6f1e0
ffff88006d440358 ffff88006d440188 ffff88006e8b4c28 ffff880100245a48
Call Trace:
[<ffffffff817daab1>] dump_stack+0x45/0x56
[<ffffffff810436b8>] warn_slowpath_common+0x78/0xa0
[<ffffffff81043727>] warn_slowpath_fmt+0x47/0x50
[<ffffffff811ad319>] ? sysfs_get_dirent_ns+0x49/0x70
[<ffffffff811ae526>] sysfs_remove_group+0xc6/0xd0
[<ffffffff81432f7e>] dpm_sysfs_remove+0x3e/0x50
[<ffffffff8142a0d0>] device_del+0x40/0x1b0
[<ffffffff8142a24d>] device_unregister+0xd/0x20
[<ffffffff8144131a>] scsi_remove_host+0xba/0x110
[<ffffffff8145f526>] ata_host_detach+0xc6/0x100
[<ffffffff8145f578>] ata_pci_remove_one+0x18/0x20
[<ffffffff812e8f48>] pci_device_remove+0x28/0x60
[<ffffffff8142d854>] __device_release_driver+0x64/0xd0
[<ffffffff8142d8de>] device_release_driver+0x1e/0x30
[<ffffffff8142d257>] bus_remove_device+0xf7/0x140
[<ffffffff8142a1b1>] device_del+0x121/0x1b0
[<ffffffff812e43d4>] pci_stop_bus_device+0x94/0xa0
[<ffffffff812e437b>] pci_stop_bus_device+0x3b/0xa0
[<ffffffff812e437b>] pci_stop_bus_device+0x3b/0xa0
[<ffffffff812e44dd>] pci_stop_and_remove_bus_device+0xd/0x20
[<ffffffff812fc743>] trim_stale_devices+0x73/0xe0
[<ffffffff812fc78b>] trim_stale_devices+0xbb/0xe0
[<ffffffff812fc78b>] trim_stale_devices+0xbb/0xe0
[<ffffffff812fcb6e>] acpiphp_check_bridge+0x7e/0xd0
[<ffffffff812fd90d>] hotplug_event+0xcd/0x160
[<ffffffff812fd9c5>] hotplug_event_work+0x25/0x60
[<ffffffff81316749>] acpi_hotplug_work_fn+0x17/0x22
[<ffffffff8105cf3a>] process_one_work+0x17a/0x430
[<ffffffff8105db29>] worker_thread+0x119/0x390
[<ffffffff8105da10>] ? manage_workers.isra.25+0x2a0/0x2a0
[<ffffffff81063a5d>] kthread+0xcd/0xf0
[<ffffffff81063990>] ? kthread_create_on_node+0x180/0x180
[<ffffffff817eb33c>] ret_from_fork+0x7c/0xb0
[<ffffffff81063990>] ? kthread_create_on_node+0x180/0x180
On this particular machine I see ~16 of these messages during Thunderbolt
hot-unplug.
Fix this in a similar way to what was done for sysfs_remove_one(): check
whether the parent directory has already been removed and bail out early.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Since acpi_bus_get_device() returns plain int and not acpi_status,
ACPI_FAILURE() should not be used for checking its return value. Fix
that.
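A hedged sketch of the resulting check pattern (the handle/adev names are
illustrative):

    struct acpi_device *adev;

    /* acpi_bus_get_device() returns 0 on success and non-zero on failure,
     * so test the int directly rather than wrapping it in ACPI_FAILURE(). */
    if (acpi_bus_get_device(handle, &adev))
            return false;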
tj: Dropped unused local variable @status from odd_can_poweroff().
Reported by kbuild test bot.
Signed-off-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Aaron Lu <aaron.lu@intel.com>
Cc: linux-ide@vger.kernel.org
Cc: kbuild test robot <fengguang.wu@intel.com>
Pull x86 platform driver updates from Matthew Garrett:
"A moderate diffstat, but it's almost entirely just moving the
chromebook driver into its own directory in order to ease ARM support,
adding back rfkill support to the one Dell laptop model where it's
expected to work, updates to the Intel IPC driver for hardware I've
never actually seen and the usual set of small fixes"
[ This actually came in before the merge window closed, and I had just
missed it because it didn't match my git pull email pattern. - Linus ]
* 'for_linus' of git://cavan.codon.org.uk/platform-drivers-x86: (24 commits)
x86, wmi fix modalias_show return values
ipc: Added support for IPC interrupt mode
ipc: Handle error conditions in ipc command
ipc: Enabled ipc support for additional intel platforms
ipc: Added platform data structure
thinkpad_acpi: Fix build error when CONFIG_SND_MAX_CARDS > 32
platform: add chrome platform directory
hp-wmi: detect "2009 BIOS or later" flag by WMI 0x0d for wireless cmd
dell-wmi: Add KEY_MICMUTE to bios_to_linux_keycode
platform:x86: Remove OOM message after input_allocate_device
sony-laptop: fixe typos in sony_laptop_input_keycode_map
sony-laptop: warn on multiple KBD backlight handles
dell-laptop: Only enable rfkill functionality on laptops with a hw killswitch
dell-laptop: Add a force_rfkill module parameter
dell-laptop: Wait less long before updating rfkill after an rfkill keypress
dell-laptop: Do not skip setting blocked bit rfkill_set while hw-blocked
dell-laptop: Sync current block state to BIOS on hw switch change
dell-laptop: Allow changing the sw_state while the radio is blocked by hw
dell-laptop: Don't read-back sw_state on machines with a hardware switch
dell-laptop: Don't set sw_state from the query callback
...
When a work item starts execution, the high bits of the work's data
contain the pool ID; the largest value this field can hold is
WORK_OFFQ_POOL_NONE. The pool ID is set to WORK_OFFQ_POOL_NONE when the
work is being initialized, indicating that no pool is associated, and
get_work_pool() uses it to look up the associated pool. So if
worker_pool_assign_id() assigns an ID greater than or equal to
WORK_OFFQ_POOL_NONE to a pool, the ID leaks out of the available bits,
and it may break the non-reentrance guarantee.
This patch fixes the issue by making worker_pool_assign_id() call
idr_alloc() with the @end parameter set to WORK_OFFQ_POOL_NONE.
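A sketch of the described change, assuming pool IDs are allocated with
idr_alloc() (locking and preloading omitted):

    /* @end is exclusive, so capping it at WORK_OFFQ_POOL_NONE guarantees
     * every allocated pool ID fits in the off-queue work data bits. */
    ret = idr_alloc(&worker_pool_idr, pool, 0, WORK_OFFQ_POOL_NONE, GFP_KERNEL);
    if (ret >= 0)
            pool->id = ret;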
Furthermore, in the current implementation, the BUILD_BUG_ON() in
init_workqueues makes no sense. The number of worker pools needed
cannot be determined at compile time, because the number of backing
pools for UNBOUND workqueues is dynamic based on the assigned custom
attributes. So remove it.
tj: Minor comment and indentation updates.
Signed-off-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
An ordered workqueue implements execution ordering by using single
pool_workqueue with max_active == 1. On a given pool_workqueue, work
items are processed in FIFO order and limiting max_active to 1
enforces the queued work items to be processed one by one.
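For reference, a usage sketch of the ordering guarantee being discussed (the
queue name is arbitrary):

    /* Work items queued here must run strictly one at a time, in FIFO
     * order, regardless of how many CPUs or NUMA nodes are present. */
    struct workqueue_struct *wq = alloc_ordered_workqueue("example_ordered", 0);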
Unfortunately, 4c16bd327c ("workqueue: implement NUMA affinity for
unbound workqueues") accidentally broke this guarantee by applying
NUMA affinity to ordered workqueues too. On NUMA setups, an ordered
workqueue would end up with separate pool_workqueues for different
nodes. Each pool_workqueue still limits max_active to 1 but multiple
work items may be executed concurrently and out of order depending on
which node they are queued to.
Fix it by using dedicated ordered_wq_attrs[] when creating ordered
workqueues. The new attrs match the unbound ones except that no_numa
is always set thus forcing all NUMA nodes to share the default
pool_workqueue.
While at it, add a sanity check in the workqueue creation path which
verifies that an ordered workqueue has only the default
pool_workqueue.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Libin <huawei.libin@huawei.com>
Cc: stable@vger.kernel.org
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Move the setting of PF_NO_SETAFFINITY up before set_cpus_allowed()
in create_worker(). Otherwise userland can change ->cpus_allowed
in between.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
And in flush_hugetlb_page(), don't check whether vma is NULL after
we've already dereferenced it.
This was found by Dan using static analysis as described here:
https://lists.ozlabs.org/pipermail/linuxppc-dev/2013-November/113161.html
We currently get away with this because the callers that pass NULL for
vma seem to be 32-bit-only (e.g. highmem, and CONFIG_DEBUG_PGALLOC in
pgtable_32.c). Hugetlb is currently 64-bit only, so we never saw a NULL
vma here.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
These lines were inoperative for four years, which casts some doubt on
their importance, and it's possible the fixed version will regress, but
at the very least they should then be removed instead.
Signed-off-by: Adam Borowski <kilobyte@angband.pl>
Signed-off-by: Scott Wood <scottwood@freescale.com>
Commit beb2dc0a7a breaks the MPC8xx which
seems to not support using mfspr SPRN_TBRx instead of mftb/mftbu
despite what is written in the reference manual.
This patch reverts to the use of mftb/mftbu when CONFIG_8xx is
selected.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
If CONFIG_ALTIVEC is enabled for CoreNet64, and we also select
CONFIG_E{5,6}500_CPU, this may introduce -mcpu=e500mc64 into $CFLAGS.
But the AltiVec option is not allowed with e500mc64, so compile errors
like the following occur:
CC arch/powerpc/lib/xor_vmx.o
arch/powerpc/lib/xor_vmx.c:1:0: error: AltiVec not supported in this target
make[1]: *** [arch/powerpc/lib/xor_vmx.o] Error 1
make: *** [arch/powerpc/lib] Error 2
So we should rule out e500mc64 in the AltiVec scenario.
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
Add the missing clk_disable_unprepare() before return from cf_init()
in the error handling case.
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
The new IBM Akebono board has a PPC476GTR SoC with an AHCI compliant
SATA controller. This patch adds a compatible property for the new SoC
to the AHCI platform driver.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: stable@vger.kernel.org
Since be44562613 ("cgroup: remove synchronize_rcu() from
cgroup_diput()"), cgroup destruction path makes use of workqueue. css
freeing is performed from a work item from that point on and a later
commit, ea15f8ccdb ("cgroup: split cgroup destruction into two
steps"), moves css offlining to workqueue too.
As cgroup destruction isn't depended upon for memory reclaim, the
destruction work items were put on the system_wq; unfortunately, some
controllers may block in the destruction path for a considerable duration
while holding cgroup_mutex. As a large part of the destruction path is
synchronized through cgroup_mutex, this has the potential, when combined
with a high rate of cgroup removals, to fill up system_wq's max_active
of 256.
Also, it turns out that memcg's css destruction path ends up queueing
and waiting for work items on system_wq through work_on_cpu(). If
such operation happens while system_wq is fully occupied by cgroup
destruction work items, work_on_cpu() can't make forward progress
because system_wq is full and other destruction work items on
system_wq can't make forward progress because the work item waiting
for work_on_cpu() is holding cgroup_mutex, leading to deadlock.
This can be fixed by queueing destruction work items on a separate
workqueue. This patch creates a dedicated workqueue -
cgroup_destroy_wq - for this purpose. As these work items shouldn't
have inter-dependencies and are mostly serialized by cgroup_mutex anyway,
a high concurrency level doesn't buy anything and the workqueue's
@max_active is set to 1 so that destruction work items are executed
one by one on each CPU.
Hugh Dickins: Because cgroup_init() is run before init_workqueues(),
cgroup_destroy_wq can't be allocated from cgroup_init(). Do it from a
separate core_initcall(). In the future, we probably want to reorder
so that workqueue init happens before cgroup_init().
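A minimal sketch of the arrangement described above (simplified; the exact
flags and error handling in the actual patch may differ):

    static struct workqueue_struct *cgroup_destroy_wq;

    static int __init cgroup_wq_init(void)
    {
            /* @max_active = 1: destruction work items run one by one */
            cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
            BUG_ON(!cgroup_destroy_wq);
            return 0;
    }
    core_initcall(cgroup_wq_init);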
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Hugh Dickins <hughd@google.com>
Reported-by: Shawn Bohrer <shawn.bohrer@gmail.com>
Link: http://lkml.kernel.org/r/20131111220626.GA7509@sbohrermbp13-local.rgmadvisors.com
Link: http://lkml.kernel.org/g/alpine.LNX.2.00.1310301606080.2333@eggly.anvils
Cc: stable@vger.kernel.org # v3.9+