Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security

Pull security subsystem updates from James Morris:
 "Highlights:

   - TPM core and driver updates/fixes
   - IPv6 security labeling (CALIPSO)
   - Lots of Apparmor fixes
   - Seccomp: remove 2-phase API, close hole where ptrace can change
     syscall #"

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (156 commits)
  apparmor: fix SECURITY_APPARMOR_HASH_DEFAULT parameter handling
  tpm: Add TPM 2.0 support to the Nuvoton i2c driver (NPCT6xx family)
  tpm: Factor out common startup code
  tpm: use devm_add_action_or_reset
  tpm2_i2c_nuvoton: add irq validity check
  tpm: read burstcount from TPM_STS in one 32-bit transaction
  tpm: fix byte-order for the value read by tpm2_get_tpm_pt
  tpm_tis_core: convert max timeouts from msec to jiffies
  apparmor: fix arg_size computation for when setprocattr is null terminated
  apparmor: fix oops, validate buffer size in apparmor_setprocattr()
  apparmor: do not expose kernel stack
  apparmor: fix module parameters can be changed after policy is locked
  apparmor: fix oops in profile_unpack() when policy_db is not present
  apparmor: don't check for vmalloc_addr if kvzalloc() failed
  apparmor: add missing id bounds check on dfa verification
  apparmor: allow SYS_CAP_RESOURCE to be sufficient to prlimit another task
  apparmor: use list_next_entry instead of list_entry_next
  apparmor: fix refcount race when finding a child profile
  apparmor: fix ref count leak when profile sha1 hash is read
  apparmor: check that xindex is in trans_table bounds
  ...
commit 7a1e8b80fb
Linus Torvalds committed 2016-07-29 17:38:46 -07:00
126 changed files with 7294 additions and 2144 deletions


@ -126,6 +126,7 @@ national,lm80 Serial Interface ACPI-Compatible Microprocessor System Hardware M
national,lm85 Temperature sensor with integrated fan control
national,lm92 ±0.33°C Accurate, 12-Bit + Sign Temperature Sensor and Thermal Window Comparator with Two-Wire Interface
nuvoton,npct501 i2c trusted platform module (TPM)
nuvoton,npct601 i2c trusted platform module (TPM2)
nxp,pca9556 Octal SMBus and I2C registered interface
nxp,pca9557 8-bit I2C-bus and SMBus I/O port with reset
nxp,pcf8563 Real-time clock/calendar


@ -0,0 +1,24 @@
Required properties:
- compatible: should be one of the following
    "st,st33htpm-spi"
    "infineon,slb9670"
    "tcg,tpm_tis-spi"
- spi-max-frequency: Maximum SPI frequency (depends on TPMs).

Optional SoC Specific Properties:
- pinctrl-names: Contains only one value - "default".
- pinctrl-0: Specifies the pin control groups used for this controller.

Example (for ARM-based BeagleBoard xM with TPM_TIS on SPI4):

&mcspi4 {
        status = "okay";

        tpm_tis@0 {
                compatible = "tcg,tpm_tis-spi";
                spi-max-frequency = <10000000>;
        };
};


@ -128,6 +128,7 @@ idt Integrated Device Technologies, Inc.
ifi Ingenieurburo Fur Ic-Technologie (I/F/I)
iom Iomega Corporation
img Imagination Technologies Ltd.
infineon Infineon Technologies
inforce Inforce Computing
ingenic Ingenic Semiconductor
innolux Innolux Corporation
@ -255,6 +256,7 @@ syna Synaptics Inc.
synology Synology, Inc.
SUNW Sun Microsystems, Inc
tbs TBS Technologies
tcg Trusted Computing Group
tcl Toby Churchill Ltd.
technexion TechNexion
technologic Technologic Systems


@ -303,6 +303,7 @@ Code Seq#(hex) Include File Comments
<mailto:buk@buks.ipn.de>
0xA0 all linux/sdp/sdp.h Industrial Device Project
<mailto:kenji@bitgate.com>
0xA1 0 linux/vtpm_proxy.h TPM Emulator Proxy Driver
0xA2 00-0F arch/tile/include/asm/hardwall.h
0xA3 80-8F Port ACL in development:
<mailto:tlewis@mindspring.com>


@ -0,0 +1,71 @@
Virtual TPM Proxy Driver for Linux Containers
Authors: Stefan Berger (IBM)
This document describes the virtual Trusted Platform Module (vTPM)
proxy device driver for Linux containers.
INTRODUCTION
------------
The goal of this work is to provide TPM functionality to each Linux
container. This allows programs to interact with a TPM in a container
the same way they interact with a TPM on the physical system. Each
container gets its own unique, emulated, software TPM.
DESIGN
------
To make an emulated software TPM available to each container, the container
management stack needs to create a device pair consisting of a client TPM
character device /dev/tpmX (with X=0,1,2...) and a 'server side' file
descriptor. The former is moved into the container by creating a character
device with the appropriate major and minor numbers while the file descriptor
is passed to the TPM emulator. Software inside the container can then send
TPM commands using the character device and the emulator will receive the
commands via the file descriptor and use it for sending back responses.
To support this, the virtual TPM proxy driver provides a device /dev/vtpmx
that is used to create device pairs using an ioctl. The ioctl takes as
input a flags field for configuring the device. The flags, for example,
indicate whether TPM 1.2 or TPM 2 functionality is supported by the TPM
emulator. The results of the ioctl are the file descriptor for the
'server side' as well as the major and minor numbers of the character
device that was created. Besides that, the number of the TPM character
device is returned. If, for example, /dev/tpm10 was created, the number
(dev_num) 10 is returned.
The following is the data structure of the VTPM_PROXY_IOC_NEW_DEV ioctl:

struct vtpm_proxy_new_dev {
        __u32 flags;         /* input */
        __u32 tpm_num;       /* output */
        __u32 fd;            /* output */
        __u32 major;         /* output */
        __u32 minor;         /* output */
};
Note that if unsupported flags are passed to the device driver, the ioctl will
fail and errno will be set to EOPNOTSUPP. Similarly, if an unsupported ioctl is
called on the device driver, the ioctl will fail and errno will be set to
ENOTTY.
See /usr/include/linux/vtpm_proxy.h for definitions related to the public interface
of this vTPM device driver.
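A minimal user-space sketch of creating a device pair (error handling is
abbreviated; the ioctl and structure definitions are the ones from
linux/vtpm_proxy.h described above):

#include <stdio.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vtpm_proxy.h>

int main(void)
{
        struct vtpm_proxy_new_dev new_dev;
        int vtpmx_fd;

        vtpmx_fd = open("/dev/vtpmx", O_RDWR);
        if (vtpmx_fd < 0)
                return 1;

        new_dev.flags = 0;      /* no flags: TPM 1.2 functionality, per the description above */

        if (ioctl(vtpmx_fd, VTPM_PROXY_IOC_NEW_DEV, &new_dev) < 0)
                return 1;

        /* new_dev.fd is the 'server side' to hand to the emulator;
         * /dev/tpm<tpm_num> is the device to move into the container.
         */
        printf("tpm_num=%u fd=%u major=%u minor=%u\n",
               new_dev.tpm_num, new_dev.fd, new_dev.major, new_dev.minor);
        return 0;
}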
Once the device has been created, the driver will immediately try to talk
to the TPM. All commands from the driver can be read from the file descriptor
returned by the ioctl. The commands should be responded to immediately.
Depending on the version of TPM the following commands will be sent by the
driver:
- TPM 1.2:
  - the driver will send a TPM_Startup command to the TPM emulator
  - the driver will send commands to read the command durations and
    interface timeouts from the TPM emulator
- TPM 2:
  - the driver will send a TPM2_Startup command to the TPM emulator
The TPM device /dev/tpmX will only appear if all of the relevant commands
were responded to properly.
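A matching sketch of the emulator side, which reads one command from the
'server side' file descriptor and writes back a canned TPM 1.2 success
response (illustrative only; a real emulator parses each command and builds
a proper response, in particular for the duration and timeout reads):

#include <unistd.h>

/* Canned response: tag TPM_TAG_RSP_COMMAND (0x00c4), length 10,
 * return code TPM_SUCCESS (0).
 */
static int answer_one_command(int srv_fd)
{
        unsigned char cmd[4096];
        unsigned char rsp[10] = { 0x00, 0xc4, 0x00, 0x00, 0x00, 0x0a,
                                  0x00, 0x00, 0x00, 0x00 };

        if (read(srv_fd, cmd, sizeof(cmd)) <= 0)    /* one full TPM command */
                return -1;
        if (write(srv_fd, rsp, sizeof(rsp)) != sizeof(rsp))
                return -1;
        return 0;
}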


@ -2837,7 +2837,7 @@ F: include/uapi/linux/can/error.h
F: include/uapi/linux/can/netlink.h
CAPABILITIES
M: Serge Hallyn <serge.hallyn@canonical.com>
M: Serge Hallyn <serge@hallyn.com>
L: linux-security-module@vger.kernel.org
S: Supported
F: include/linux/capability.h
@ -10675,7 +10675,7 @@ SMACK SECURITY MODULE
M: Casey Schaufler <casey@schaufler-ca.com>
L: linux-security-module@vger.kernel.org
W: http://schaufler-ca.com
T: git git://git.gitorious.org/smack-next/kernel.git
T: git git://github.com/cschaufler/smack-next
S: Maintained
F: Documentation/security/Smack.txt
F: security/smack/


@ -932,18 +932,19 @@ asmlinkage int syscall_trace_enter(struct pt_regs *regs, int scno)
{
current_thread_info()->syscall = scno;
/* Do the secure computing check first; failures should be fast. */
#ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
if (secure_computing() == -1)
return -1;
#else
/* XXX: remove this once OABI gets fixed */
secure_computing_strict(scno);
#endif
if (test_thread_flag(TIF_SYSCALL_TRACE))
tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
/* Do seccomp after ptrace; syscall may have changed. */
#ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
if (secure_computing(NULL) == -1)
return -1;
#else
/* XXX: remove this once OABI gets fixed */
secure_computing_strict(current_thread_info()->syscall);
#endif
/* Tracer or seccomp may have changed syscall. */
scno = current_thread_info()->syscall;
if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))


@ -1347,13 +1347,13 @@ static void tracehook_report_syscall(struct pt_regs *regs,
asmlinkage int syscall_trace_enter(struct pt_regs *regs)
{
/* Do the secure computing check first; failures should be fast. */
if (secure_computing() == -1)
return -1;
if (test_thread_flag(TIF_SYSCALL_TRACE))
tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
/* Do the secure computing after ptrace; failures should be fast. */
if (secure_computing(NULL) == -1)
return -1;
if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
trace_sys_enter(regs, regs->syscallno);


@ -888,17 +888,16 @@ long arch_ptrace(struct task_struct *child, long request,
*/
asmlinkage long syscall_trace_enter(struct pt_regs *regs, long syscall)
{
long ret = 0;
user_exit();
current_thread_info()->syscall = syscall;
if (secure_computing() == -1)
return -1;
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
tracehook_report_syscall_entry(regs))
ret = -1;
return -1;
if (secure_computing(NULL) == -1)
return -1;
if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
trace_sys_enter(regs, regs->regs[2]);


@ -311,10 +311,6 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
long do_syscall_trace_enter(struct pt_regs *regs)
{
/* Do the secure computing check first. */
if (secure_computing() == -1)
return -1;
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
tracehook_report_syscall_entry(regs)) {
/*
@ -325,6 +321,11 @@ long do_syscall_trace_enter(struct pt_regs *regs)
regs->gr[20] = -1UL;
goto out;
}
/* Do the secure computing check after ptrace. */
if (secure_computing(NULL) == -1)
return -1;
#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
trace_sys_enter(regs, regs->gr[20]);


@ -1783,12 +1783,12 @@ static int do_seccomp(struct pt_regs *regs)
* have already loaded -ENOSYS into r3, or seccomp has put
* something else in r3 (via SECCOMP_RET_ERRNO/TRACE).
*/
if (__secure_computing())
if (__secure_computing(NULL))
return -1;
/*
* The syscall was allowed by seccomp, restore the register
* state to what ptrace and audit expect.
* state to what audit expects.
* Note that we use orig_gpr3, which means a seccomp tracer can
* modify the first syscall parameter (in orig_gpr3) and also
* allow the syscall to proceed.
@ -1822,22 +1822,25 @@ static inline int do_seccomp(struct pt_regs *regs) { return 0; }
*/
long do_syscall_trace_enter(struct pt_regs *regs)
{
bool abort = false;
user_exit();
/*
* The tracer may decide to abort the syscall, if so tracehook
* will return !0. Note that the tracer may also just change
* regs->gpr[0] to an invalid syscall number, that is handled
* below on the exit path.
*/
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
tracehook_report_syscall_entry(regs))
goto skip;
/* Run seccomp after ptrace; allow it to set gpr[3]. */
if (do_seccomp(regs))
return -1;
if (test_thread_flag(TIF_SYSCALL_TRACE)) {
/*
* The tracer may decide to abort the syscall, if so tracehook
* will return !0. Note that the tracer may also just change
* regs->gpr[0] to an invalid syscall number, that is handled
* below on the exit path.
*/
abort = tracehook_report_syscall_entry(regs) != 0;
}
/* Avoid trace and audit when syscall is invalid. */
if (regs->gpr[0] >= NR_syscalls)
goto skip;
if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
trace_sys_enter(regs, regs->gpr[0]);
@ -1854,17 +1857,16 @@ long do_syscall_trace_enter(struct pt_regs *regs)
regs->gpr[5] & 0xffffffff,
regs->gpr[6] & 0xffffffff);
if (abort || regs->gpr[0] >= NR_syscalls) {
/*
* If we are aborting explicitly, or if the syscall number is
* now invalid, set the return value to -ENOSYS.
*/
regs->gpr[3] = -ENOSYS;
return -1;
}
/* Return the possibly modified but valid syscall number */
return regs->gpr[0];
skip:
/*
* If we are aborting explicitly, or if the syscall number is
* now invalid, set the return value to -ENOSYS.
*/
regs->gpr[3] = -ENOSYS;
return -1;
}
void do_syscall_trace_leave(struct pt_regs *regs)


@ -821,15 +821,6 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
{
long ret = 0;
/* Do the secure computing check first. */
if (secure_computing()) {
/* seccomp failures shouldn't expose any additional code. */
ret = -1;
goto out;
}
/*
* The sysc_tracesys code in entry.S stored the system
* call number to gprs[2].
@ -843,7 +834,13 @@ asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
* the system call and the system call restart handling.
*/
clear_pt_regs_flag(regs, PIF_SYSCALL);
ret = -1;
return -1;
}
/* Do the secure computing check after ptrace. */
if (secure_computing(NULL)) {
/* seccomp failures shouldn't expose any additional code. */
return -1;
}
if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
@ -852,8 +849,8 @@ asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
audit_syscall_entry(regs->gprs[2], regs->orig_gpr2,
regs->gprs[3], regs->gprs[4],
regs->gprs[5]);
out:
return ret ?: regs->gprs[2];
return regs->gprs[2];
}
asmlinkage void do_syscall_trace_exit(struct pt_regs *regs)


@ -255,14 +255,15 @@ int do_syscall_trace_enter(struct pt_regs *regs)
{
u32 work = ACCESS_ONCE(current_thread_info()->flags);
if (secure_computing() == -1)
if ((work & _TIF_SYSCALL_TRACE) &&
tracehook_report_syscall_entry(regs)) {
regs->regs[TREG_SYSCALL_NR] = -1;
return -1;
if (work & _TIF_SYSCALL_TRACE) {
if (tracehook_report_syscall_entry(regs))
regs->regs[TREG_SYSCALL_NR] = -1;
}
if (secure_computing(NULL) == -1)
return -1;
if (work & _TIF_SYSCALL_TRACEPOINT)
trace_sys_enter(regs, regs->regs[TREG_SYSCALL_NR]);


@ -20,12 +20,12 @@ void handle_syscall(struct uml_pt_regs *r)
UPT_SYSCALL_NR(r) = PT_SYSCALL_NR(r->gp);
PT_REGS_SET_SYSCALL_RETURN(regs, -ENOSYS);
/* Do the secure computing check first; failures should be fast. */
if (secure_computing() == -1)
if (syscall_trace_enter(regs))
return;
if (syscall_trace_enter(regs))
goto out;
/* Do the seccomp check after ptrace; failures should be fast. */
if (secure_computing(NULL) == -1)
return;
/* Update the syscall number after orig_ax has potentially been updated
* with ptrace.
@ -37,6 +37,5 @@ void handle_syscall(struct uml_pt_regs *r)
PT_REGS_SET_SYSCALL_RETURN(regs,
EXECUTE_SYSCALL(syscall, regs));
out:
syscall_trace_leave(regs);
}


@ -64,22 +64,16 @@ static void do_audit_syscall_entry(struct pt_regs *regs, u32 arch)
}
/*
* We can return 0 to resume the syscall or anything else to go to phase
* 2. If we resume the syscall, we need to put something appropriate in
* regs->orig_ax.
*
* NB: We don't have full pt_regs here, but regs->orig_ax and regs->ax
* are fully functional.
*
* For phase 2's benefit, our return value is:
* 0: resume the syscall
* 1: go to phase 2; no seccomp phase 2 needed
* anything else: go to phase 2; pass return value to seccomp
* Returns the syscall nr to run (which should match regs->orig_ax) or -1
* to skip the syscall.
*/
unsigned long syscall_trace_enter_phase1(struct pt_regs *regs, u32 arch)
static long syscall_trace_enter(struct pt_regs *regs)
{
u32 arch = in_ia32_syscall() ? AUDIT_ARCH_I386 : AUDIT_ARCH_X86_64;
struct thread_info *ti = pt_regs_to_thread_info(regs);
unsigned long ret = 0;
bool emulated = false;
u32 work;
if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
@ -87,11 +81,19 @@ unsigned long syscall_trace_enter_phase1(struct pt_regs *regs, u32 arch)
work = ACCESS_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY;
if (unlikely(work & _TIF_SYSCALL_EMU))
emulated = true;
if ((emulated || (work & _TIF_SYSCALL_TRACE)) &&
tracehook_report_syscall_entry(regs))
return -1L;
if (emulated)
return -1L;
#ifdef CONFIG_SECCOMP
/*
* Do seccomp first -- it should minimize exposure of other
* code, and keeping seccomp fast is probably more valuable
* than the rest of this.
* Do seccomp after ptrace, to catch any tracer changes.
*/
if (work & _TIF_SECCOMP) {
struct seccomp_data sd;
@ -118,69 +120,12 @@ unsigned long syscall_trace_enter_phase1(struct pt_regs *regs, u32 arch)
sd.args[5] = regs->bp;
}
BUILD_BUG_ON(SECCOMP_PHASE1_OK != 0);
BUILD_BUG_ON(SECCOMP_PHASE1_SKIP != 1);
ret = seccomp_phase1(&sd);
if (ret == SECCOMP_PHASE1_SKIP) {
regs->orig_ax = -1;
ret = 0;
} else if (ret != SECCOMP_PHASE1_OK) {
return ret; /* Go directly to phase 2 */
}
work &= ~_TIF_SECCOMP;
ret = __secure_computing(&sd);
if (ret == -1)
return ret;
}
#endif
/* Do our best to finish without phase 2. */
if (work == 0)
return ret; /* seccomp and/or nohz only (ret == 0 here) */
#ifdef CONFIG_AUDITSYSCALL
if (work == _TIF_SYSCALL_AUDIT) {
/*
* If there is no more work to be done except auditing,
* then audit in phase 1. Phase 2 always audits, so, if
* we audit here, then we can't go on to phase 2.
*/
do_audit_syscall_entry(regs, arch);
return 0;
}
#endif
return 1; /* Something is enabled that we can't handle in phase 1 */
}
/* Returns the syscall nr to run (which should match regs->orig_ax). */
long syscall_trace_enter_phase2(struct pt_regs *regs, u32 arch,
unsigned long phase1_result)
{
struct thread_info *ti = pt_regs_to_thread_info(regs);
long ret = 0;
u32 work = ACCESS_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY;
if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
BUG_ON(regs != task_pt_regs(current));
#ifdef CONFIG_SECCOMP
/*
* Call seccomp_phase2 before running the other hooks so that
* they can see any changes made by a seccomp tracer.
*/
if (phase1_result > 1 && seccomp_phase2(phase1_result)) {
/* seccomp failures shouldn't expose any additional code. */
return -1;
}
#endif
if (unlikely(work & _TIF_SYSCALL_EMU))
ret = -1L;
if ((ret || test_thread_flag(TIF_SYSCALL_TRACE)) &&
tracehook_report_syscall_entry(regs))
ret = -1L;
if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
trace_sys_enter(regs, regs->orig_ax);
@ -189,17 +134,6 @@ long syscall_trace_enter_phase2(struct pt_regs *regs, u32 arch,
return ret ?: regs->orig_ax;
}
long syscall_trace_enter(struct pt_regs *regs)
{
u32 arch = in_ia32_syscall() ? AUDIT_ARCH_I386 : AUDIT_ARCH_X86_64;
unsigned long phase1_result = syscall_trace_enter_phase1(regs, arch);
if (phase1_result == 0)
return regs->orig_ax;
else
return syscall_trace_enter_phase2(regs, arch, phase1_result);
}
#define EXIT_TO_USERMODE_LOOP_FLAGS \
(_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
_TIF_NEED_RESCHED | _TIF_USER_RETURN_NOTIFY)


@ -207,7 +207,7 @@ bool emulate_vsyscall(struct pt_regs *regs, unsigned long address)
*/
regs->orig_ax = syscall_nr;
regs->ax = -ENOSYS;
tmp = secure_computing();
tmp = secure_computing(NULL);
if ((!tmp && regs->orig_ax != syscall_nr) || regs->ip != address) {
warn_bad_vsyscall(KERN_DEBUG, regs,
"seccomp tried to change syscall nr or ip");


@ -83,12 +83,6 @@ extern void send_sigtrap(struct task_struct *tsk, struct pt_regs *regs,
int error_code, int si_code);
extern unsigned long syscall_trace_enter_phase1(struct pt_regs *, u32 arch);
extern long syscall_trace_enter_phase2(struct pt_regs *, u32 arch,
unsigned long phase1_result);
extern long syscall_trace_enter(struct pt_regs *);
static inline unsigned long regs_return_value(struct pt_regs *regs)
{
return regs->ax;


@ -24,9 +24,16 @@ menuconfig TCG_TPM
if TCG_TPM
config TCG_TIS_CORE
tristate
---help---
TCG TIS TPM core driver. It implements the TPM TCG TIS logic and hooks
into the TPM kernel APIs. Physical layers will register against it.
config TCG_TIS
tristate "TPM Interface Specification 1.2 Interface / TPM 2.0 FIFO Interface"
depends on X86
select TCG_TIS_CORE
---help---
If you have a TPM security chip that is compliant with the
TCG TIS 1.2 TPM specification (TPM1.2) or the TCG PTP FIFO
@ -34,6 +41,18 @@ config TCG_TIS
within Linux. To compile this driver as a module, choose M here;
the module will be called tpm_tis.
config TCG_TIS_SPI
tristate "TPM Interface Specification 1.3 Interface / TPM 2.0 FIFO Interface - (SPI)"
depends on SPI
select TCG_TIS_CORE
---help---
If you have a TPM security chip which is connected to a regular,
non-tcg SPI master (i.e. most embedded platforms) that is compliant with the
TCG TIS 1.3 TPM specification (TPM1.2) or the TCG PTP FIFO
specification (TPM2.0) say Yes and it will be accessible from
within Linux. To compile this driver as a module, choose M here;
the module will be called tpm_tis_spi.
config TCG_TIS_I2C_ATMEL
tristate "TPM Interface Specification 1.2 Interface (I2C - Atmel)"
depends on I2C
@ -122,5 +141,16 @@ config TCG_CRB
from within Linux. To compile this driver as a module, choose
M here; the module will be called tpm_crb.
config TCG_VTPM_PROXY
tristate "VTPM Proxy Interface"
depends on TCG_TPM
select ANON_INODES
---help---
This driver proxies for an emulated TPM (vTPM) running in userspace.
A device /dev/vtpmx is provided that creates a device pair
/dev/vtpmX and a server-side file descriptor on which the vTPM
can receive commands.
source "drivers/char/tpm/st33zp24/Kconfig"
endif # TCG_TPM


@ -12,7 +12,9 @@ ifdef CONFIG_TCG_IBMVTPM
tpm-y += tpm_eventlog.o tpm_of.o
endif
endif
obj-$(CONFIG_TCG_TIS_CORE) += tpm_tis_core.o
obj-$(CONFIG_TCG_TIS) += tpm_tis.o
obj-$(CONFIG_TCG_TIS_SPI) += tpm_tis_spi.o
obj-$(CONFIG_TCG_TIS_I2C_ATMEL) += tpm_i2c_atmel.o
obj-$(CONFIG_TCG_TIS_I2C_INFINEON) += tpm_i2c_infineon.o
obj-$(CONFIG_TCG_TIS_I2C_NUVOTON) += tpm_i2c_nuvoton.o
@ -23,3 +25,4 @@ obj-$(CONFIG_TCG_IBMVTPM) += tpm_ibmvtpm.o
obj-$(CONFIG_TCG_TIS_ST33ZP24) += st33zp24/
obj-$(CONFIG_TCG_XEN) += xen-tpmfront.o
obj-$(CONFIG_TCG_CRB) += tpm_crb.o
obj-$(CONFIG_TCG_VTPM_PROXY) += tpm_vtpm_proxy.o


@ -1,6 +1,5 @@
config TCG_TIS_ST33ZP24
tristate "STMicroelectronics TPM Interface Specification 1.2 Interface"
depends on GPIOLIB || COMPILE_TEST
tristate
---help---
STMicroelectronics ST33ZP24 core driver. It implements the core
TPM1.2 logic and hooks into the TPM kernel APIs. Physical layers will
@ -10,9 +9,9 @@ config TCG_TIS_ST33ZP24
tpm_st33zp24.
config TCG_TIS_ST33ZP24_I2C
tristate "TPM 1.2 ST33ZP24 I2C support"
depends on TCG_TIS_ST33ZP24
tristate "STMicroelectronics TPM Interface Specification 1.2 Interface (I2C)"
depends on I2C
select TCG_TIS_ST33ZP24
---help---
This module adds support for the STMicroelectronics TPM security chip
ST33ZP24 with i2c interface.
@ -20,9 +19,9 @@ config TCG_TIS_ST33ZP24_I2C
called tpm_st33zp24_i2c.
config TCG_TIS_ST33ZP24_SPI
tristate "TPM 1.2 ST33ZP24 SPI support"
depends on TCG_TIS_ST33ZP24
tristate "STMicroelectronics TPM Interface Specification 1.2 Interface (SPI)"
depends on SPI
select TCG_TIS_ST33ZP24
---help---
This module adds support for the STMicroelectronics TPM security chip
ST33ZP24 with spi interface.


@ -1,6 +1,6 @@
/*
* STMicroelectronics TPM I2C Linux driver for TPM ST33ZP24
* Copyright (C) 2009 - 2015 STMicroelectronics
* Copyright (C) 2009 - 2016 STMicroelectronics
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -19,11 +19,14 @@
#include <linux/module.h>
#include <linux/i2c.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/of_irq.h>
#include <linux/of_gpio.h>
#include <linux/acpi.h>
#include <linux/tpm.h>
#include <linux/platform_data/st33zp24.h>
#include "../tpm.h"
#include "st33zp24.h"
#define TPM_DUMMY_BYTE 0xAA
@ -108,11 +111,40 @@ static const struct st33zp24_phy_ops i2c_phy_ops = {
.recv = st33zp24_i2c_recv,
};
#ifdef CONFIG_OF
static int st33zp24_i2c_of_request_resources(struct st33zp24_i2c_phy *phy)
static int st33zp24_i2c_acpi_request_resources(struct i2c_client *client)
{
struct tpm_chip *chip = i2c_get_clientdata(client);
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
struct st33zp24_i2c_phy *phy = tpm_dev->phy_id;
struct gpio_desc *gpiod_lpcpd;
struct device *dev = &client->dev;
/* Get LPCPD GPIO from ACPI */
gpiod_lpcpd = devm_gpiod_get_index(dev, "TPM IO LPCPD", 1,
GPIOD_OUT_HIGH);
if (IS_ERR(gpiod_lpcpd)) {
dev_err(&client->dev,
"Failed to retrieve lpcpd-gpios from acpi.\n");
phy->io_lpcpd = -1;
/*
* lpcpd pin is not specified. This is not an issue as
* power management can be also managed by TPM specific
* commands. So leave with a success status code.
*/
return 0;
}
phy->io_lpcpd = desc_to_gpio(gpiod_lpcpd);
return 0;
}
static int st33zp24_i2c_of_request_resources(struct i2c_client *client)
{
struct tpm_chip *chip = i2c_get_clientdata(client);
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
struct st33zp24_i2c_phy *phy = tpm_dev->phy_id;
struct device_node *pp;
struct i2c_client *client = phy->client;
int gpio;
int ret;
@ -146,16 +178,12 @@ static int st33zp24_i2c_of_request_resources(struct st33zp24_i2c_phy *phy)
return 0;
}
#else
static int st33zp24_i2c_of_request_resources(struct st33zp24_i2c_phy *phy)
{
return -ENODEV;
}
#endif
static int st33zp24_i2c_request_resources(struct i2c_client *client,
struct st33zp24_i2c_phy *phy)
static int st33zp24_i2c_request_resources(struct i2c_client *client)
{
struct tpm_chip *chip = i2c_get_clientdata(client);
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
struct st33zp24_i2c_phy *phy = tpm_dev->phy_id;
struct st33zp24_platform_data *pdata;
int ret;
@ -212,13 +240,18 @@ static int st33zp24_i2c_probe(struct i2c_client *client,
return -ENOMEM;
phy->client = client;
pdata = client->dev.platform_data;
if (!pdata && client->dev.of_node) {
ret = st33zp24_i2c_of_request_resources(phy);
ret = st33zp24_i2c_of_request_resources(client);
if (ret)
return ret;
} else if (pdata) {
ret = st33zp24_i2c_request_resources(client, phy);
ret = st33zp24_i2c_request_resources(client);
if (ret)
return ret;
} else if (ACPI_HANDLE(&client->dev)) {
ret = st33zp24_i2c_acpi_request_resources(client);
if (ret)
return ret;
}
@ -245,13 +278,17 @@ static const struct i2c_device_id st33zp24_i2c_id[] = {
};
MODULE_DEVICE_TABLE(i2c, st33zp24_i2c_id);
#ifdef CONFIG_OF
static const struct of_device_id of_st33zp24_i2c_match[] = {
{ .compatible = "st,st33zp24-i2c", },
{}
};
MODULE_DEVICE_TABLE(of, of_st33zp24_i2c_match);
#endif
static const struct acpi_device_id st33zp24_i2c_acpi_match[] = {
{"SMO3324"},
{}
};
MODULE_DEVICE_TABLE(acpi, st33zp24_i2c_acpi_match);
static SIMPLE_DEV_PM_OPS(st33zp24_i2c_ops, st33zp24_pm_suspend,
st33zp24_pm_resume);
@ -261,6 +298,7 @@ static struct i2c_driver st33zp24_i2c_driver = {
.name = TPM_ST33_I2C,
.pm = &st33zp24_i2c_ops,
.of_match_table = of_match_ptr(of_st33zp24_i2c_match),
.acpi_match_table = ACPI_PTR(st33zp24_i2c_acpi_match),
},
.probe = st33zp24_i2c_probe,
.remove = st33zp24_i2c_remove,


@ -1,6 +1,6 @@
/*
* STMicroelectronics TPM SPI Linux driver for TPM ST33ZP24
* Copyright (C) 2009 - 2015 STMicroelectronics
* Copyright (C) 2009 - 2016 STMicroelectronics
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -19,11 +19,14 @@
#include <linux/module.h>
#include <linux/spi/spi.h>
#include <linux/gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/of_irq.h>
#include <linux/of_gpio.h>
#include <linux/acpi.h>
#include <linux/tpm.h>
#include <linux/platform_data/st33zp24.h>
#include "../tpm.h"
#include "st33zp24.h"
#define TPM_DATA_FIFO 0x24
@ -66,7 +69,7 @@
struct st33zp24_spi_phy {
struct spi_device *spi_device;
struct spi_transfer spi_xfer;
u8 tx_buf[ST33ZP24_SPI_BUFFER_SIZE];
u8 rx_buf[ST33ZP24_SPI_BUFFER_SIZE];
@ -110,43 +113,39 @@ static int st33zp24_status_to_errno(u8 code)
static int st33zp24_spi_send(void *phy_id, u8 tpm_register, u8 *tpm_data,
int tpm_size)
{
u8 data = 0;
int total_length = 0, nbr_dummy_bytes = 0, ret = 0;
int total_length = 0, ret = 0;
struct st33zp24_spi_phy *phy = phy_id;
struct spi_device *dev = phy->spi_device;
u8 *tx_buf = (u8 *)phy->spi_xfer.tx_buf;
u8 *rx_buf = phy->spi_xfer.rx_buf;
struct spi_transfer spi_xfer = {
.tx_buf = phy->tx_buf,
.rx_buf = phy->rx_buf,
};
/* Pre-Header */
data = TPM_WRITE_DIRECTION | LOCALITY0;
memcpy(tx_buf + total_length, &data, sizeof(data));
total_length++;
data = tpm_register;
memcpy(tx_buf + total_length, &data, sizeof(data));
total_length++;
phy->tx_buf[total_length++] = TPM_WRITE_DIRECTION | LOCALITY0;
phy->tx_buf[total_length++] = tpm_register;
if (tpm_size > 0 && tpm_register == TPM_DATA_FIFO) {
tx_buf[total_length++] = tpm_size >> 8;
tx_buf[total_length++] = tpm_size;
phy->tx_buf[total_length++] = tpm_size >> 8;
phy->tx_buf[total_length++] = tpm_size;
}
memcpy(&tx_buf[total_length], tpm_data, tpm_size);
memcpy(&phy->tx_buf[total_length], tpm_data, tpm_size);
total_length += tpm_size;
nbr_dummy_bytes = phy->latency;
memset(&tx_buf[total_length], TPM_DUMMY_BYTE, nbr_dummy_bytes);
memset(&phy->tx_buf[total_length], TPM_DUMMY_BYTE, phy->latency);
phy->spi_xfer.len = total_length + nbr_dummy_bytes;
spi_xfer.len = total_length + phy->latency;
ret = spi_sync_transfer(dev, &phy->spi_xfer, 1);
ret = spi_sync_transfer(dev, &spi_xfer, 1);
if (ret == 0)
ret = rx_buf[total_length + nbr_dummy_bytes - 1];
ret = phy->rx_buf[total_length + phy->latency - 1];
return st33zp24_status_to_errno(ret);
} /* st33zp24_spi_send() */
/*
* read8_recv
* st33zp24_spi_read8_recv
* Recv byte from the TIS register according to the ST33ZP24 SPI protocol.
* @param: phy_id, the phy description
* @param: tpm_register, the tpm tis register where the data should be read
@ -154,40 +153,37 @@ static int st33zp24_spi_send(void *phy_id, u8 tpm_register, u8 *tpm_data,
* @param: tpm_size, tpm TPM response size to read.
* @return: should be zero if success else a negative error code.
*/
static int read8_reg(void *phy_id, u8 tpm_register, u8 *tpm_data, int tpm_size)
static int st33zp24_spi_read8_reg(void *phy_id, u8 tpm_register, u8 *tpm_data,
int tpm_size)
{
u8 data = 0;
int total_length = 0, nbr_dummy_bytes, ret;
int total_length = 0, ret;
struct st33zp24_spi_phy *phy = phy_id;
struct spi_device *dev = phy->spi_device;
u8 *tx_buf = (u8 *)phy->spi_xfer.tx_buf;
u8 *rx_buf = phy->spi_xfer.rx_buf;
struct spi_transfer spi_xfer = {
.tx_buf = phy->tx_buf,
.rx_buf = phy->rx_buf,
};
/* Pre-Header */
data = LOCALITY0;
memcpy(tx_buf + total_length, &data, sizeof(data));
total_length++;
data = tpm_register;
memcpy(tx_buf + total_length, &data, sizeof(data));
total_length++;
phy->tx_buf[total_length++] = LOCALITY0;
phy->tx_buf[total_length++] = tpm_register;
nbr_dummy_bytes = phy->latency;
memset(&tx_buf[total_length], TPM_DUMMY_BYTE,
nbr_dummy_bytes + tpm_size);
memset(&phy->tx_buf[total_length], TPM_DUMMY_BYTE,
phy->latency + tpm_size);
phy->spi_xfer.len = total_length + nbr_dummy_bytes + tpm_size;
spi_xfer.len = total_length + phy->latency + tpm_size;
/* header + status byte + size of the data + status byte */
ret = spi_sync_transfer(dev, &phy->spi_xfer, 1);
ret = spi_sync_transfer(dev, &spi_xfer, 1);
if (tpm_size > 0 && ret == 0) {
ret = rx_buf[total_length + nbr_dummy_bytes - 1];
ret = phy->rx_buf[total_length + phy->latency - 1];
memcpy(tpm_data, rx_buf + total_length + nbr_dummy_bytes,
memcpy(tpm_data, phy->rx_buf + total_length + phy->latency,
tpm_size);
}
return ret;
} /* read8_reg() */
} /* st33zp24_spi_read8_reg() */
/*
* st33zp24_spi_recv
@ -203,13 +199,13 @@ static int st33zp24_spi_recv(void *phy_id, u8 tpm_register, u8 *tpm_data,
{
int ret;
ret = read8_reg(phy_id, tpm_register, tpm_data, tpm_size);
ret = st33zp24_spi_read8_reg(phy_id, tpm_register, tpm_data, tpm_size);
if (!st33zp24_status_to_errno(ret))
return tpm_size;
return ret;
} /* st33zp24_spi_recv() */
static int evaluate_latency(void *phy_id)
static int st33zp24_spi_evaluate_latency(void *phy_id)
{
struct st33zp24_spi_phy *phy = phy_id;
int latency = 1, status = 0;
@ -217,9 +213,15 @@ static int evaluate_latency(void *phy_id)
while (!status && latency < MAX_SPI_LATENCY) {
phy->latency = latency;
status = read8_reg(phy_id, TPM_INTF_CAPABILITY, &data, 1);
status = st33zp24_spi_read8_reg(phy_id, TPM_INTF_CAPABILITY,
&data, 1);
latency++;
}
if (status < 0)
return status;
if (latency == MAX_SPI_LATENCY)
return -ENODEV;
return latency - 1;
} /* evaluate_latency() */
@ -228,24 +230,52 @@ static const struct st33zp24_phy_ops spi_phy_ops = {
.recv = st33zp24_spi_recv,
};
#ifdef CONFIG_OF
static int tpm_stm_spi_of_request_resources(struct st33zp24_spi_phy *phy)
static int st33zp24_spi_acpi_request_resources(struct spi_device *spi_dev)
{
struct tpm_chip *chip = spi_get_drvdata(spi_dev);
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
struct st33zp24_spi_phy *phy = tpm_dev->phy_id;
struct gpio_desc *gpiod_lpcpd;
struct device *dev = &spi_dev->dev;
/* Get LPCPD GPIO from ACPI */
gpiod_lpcpd = devm_gpiod_get_index(dev, "TPM IO LPCPD", 1,
GPIOD_OUT_HIGH);
if (IS_ERR(gpiod_lpcpd)) {
dev_err(dev, "Failed to retrieve lpcpd-gpios from acpi.\n");
phy->io_lpcpd = -1;
/*
* lpcpd pin is not specified. This is not an issue as
* power management can be also managed by TPM specific
* commands. So leave with a success status code.
*/
return 0;
}
phy->io_lpcpd = desc_to_gpio(gpiod_lpcpd);
return 0;
}
static int st33zp24_spi_of_request_resources(struct spi_device *spi_dev)
{
struct tpm_chip *chip = spi_get_drvdata(spi_dev);
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
struct st33zp24_spi_phy *phy = tpm_dev->phy_id;
struct device_node *pp;
struct spi_device *dev = phy->spi_device;
int gpio;
int ret;
pp = dev->dev.of_node;
pp = spi_dev->dev.of_node;
if (!pp) {
dev_err(&dev->dev, "No platform data\n");
dev_err(&spi_dev->dev, "No platform data\n");
return -ENODEV;
}
/* Get GPIO from device tree */
gpio = of_get_named_gpio(pp, "lpcpd-gpios", 0);
if (gpio < 0) {
dev_err(&dev->dev,
dev_err(&spi_dev->dev,
"Failed to retrieve lpcpd-gpios from dts.\n");
phy->io_lpcpd = -1;
/*
@ -256,26 +286,22 @@ static int tpm_stm_spi_of_request_resources(struct st33zp24_spi_phy *phy)
return 0;
}
/* GPIO request and configuration */
ret = devm_gpio_request_one(&dev->dev, gpio,
ret = devm_gpio_request_one(&spi_dev->dev, gpio,
GPIOF_OUT_INIT_HIGH, "TPM IO LPCPD");
if (ret) {
dev_err(&dev->dev, "Failed to request lpcpd pin\n");
dev_err(&spi_dev->dev, "Failed to request lpcpd pin\n");
return -ENODEV;
}
phy->io_lpcpd = gpio;
return 0;
}
#else
static int tpm_stm_spi_of_request_resources(struct st33zp24_spi_phy *phy)
{
return -ENODEV;
}
#endif
static int tpm_stm_spi_request_resources(struct spi_device *dev,
struct st33zp24_spi_phy *phy)
static int st33zp24_spi_request_resources(struct spi_device *dev)
{
struct tpm_chip *chip = spi_get_drvdata(dev);
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
struct st33zp24_spi_phy *phy = tpm_dev->phy_id;
struct st33zp24_platform_data *pdata;
int ret;
@ -303,13 +329,12 @@ static int tpm_stm_spi_request_resources(struct spi_device *dev,
}
/*
* tpm_st33_spi_probe initialize the TPM device
* st33zp24_spi_probe initialize the TPM device
* @param: dev, the spi_device description (TPM SPI description).
* @return: 0 in case of success.
* or a negative value describing the error.
*/
static int
tpm_st33_spi_probe(struct spi_device *dev)
static int st33zp24_spi_probe(struct spi_device *dev)
{
int ret;
struct st33zp24_platform_data *pdata;
@ -328,21 +353,23 @@ tpm_st33_spi_probe(struct spi_device *dev)
return -ENOMEM;
phy->spi_device = dev;
pdata = dev->dev.platform_data;
if (!pdata && dev->dev.of_node) {
ret = tpm_stm_spi_of_request_resources(phy);
ret = st33zp24_spi_of_request_resources(dev);
if (ret)
return ret;
} else if (pdata) {
ret = tpm_stm_spi_request_resources(dev, phy);
ret = st33zp24_spi_request_resources(dev);
if (ret)
return ret;
} else if (ACPI_HANDLE(&dev->dev)) {
ret = st33zp24_spi_acpi_request_resources(dev);
if (ret)
return ret;
}
phy->spi_xfer.tx_buf = phy->tx_buf;
phy->spi_xfer.rx_buf = phy->rx_buf;
phy->latency = evaluate_latency(phy);
phy->latency = st33zp24_spi_evaluate_latency(phy);
if (phy->latency <= 0)
return -ENODEV;
@ -351,11 +378,11 @@ tpm_st33_spi_probe(struct spi_device *dev)
}
/*
* tpm_st33_spi_remove remove the TPM device
* st33zp24_spi_remove remove the TPM device
* @param: client, the spi_device description (TPM SPI description).
* @return: 0 in case of success.
*/
static int tpm_st33_spi_remove(struct spi_device *dev)
static int st33zp24_spi_remove(struct spi_device *dev)
{
struct tpm_chip *chip = spi_get_drvdata(dev);
@ -368,29 +395,34 @@ static const struct spi_device_id st33zp24_spi_id[] = {
};
MODULE_DEVICE_TABLE(spi, st33zp24_spi_id);
#ifdef CONFIG_OF
static const struct of_device_id of_st33zp24_spi_match[] = {
{ .compatible = "st,st33zp24-spi", },
{}
};
MODULE_DEVICE_TABLE(of, of_st33zp24_spi_match);
#endif
static const struct acpi_device_id st33zp24_spi_acpi_match[] = {
{"SMO3324"},
{}
};
MODULE_DEVICE_TABLE(acpi, st33zp24_spi_acpi_match);
static SIMPLE_DEV_PM_OPS(st33zp24_spi_ops, st33zp24_pm_suspend,
st33zp24_pm_resume);
static struct spi_driver tpm_st33_spi_driver = {
static struct spi_driver st33zp24_spi_driver = {
.driver = {
.name = TPM_ST33_SPI,
.pm = &st33zp24_spi_ops,
.of_match_table = of_match_ptr(of_st33zp24_spi_match),
.acpi_match_table = ACPI_PTR(st33zp24_spi_acpi_match),
},
.probe = tpm_st33_spi_probe,
.remove = tpm_st33_spi_remove,
.probe = st33zp24_spi_probe,
.remove = st33zp24_spi_remove,
.id_table = st33zp24_spi_id,
};
module_spi_driver(tpm_st33_spi_driver);
module_spi_driver(st33zp24_spi_driver);
MODULE_AUTHOR("TPM support (TPMsupport@list.st.com)");
MODULE_DESCRIPTION("STM TPM 1.2 SPI ST33 Driver");


@ -1,6 +1,6 @@
/*
* STMicroelectronics TPM Linux driver for TPM ST33ZP24
* Copyright (C) 2009 - 2015 STMicroelectronics
* Copyright (C) 2009 - 2016 STMicroelectronics
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -73,14 +73,6 @@ enum tis_defaults {
TIS_LONG_TIMEOUT = 2000,
};
struct st33zp24_dev {
struct tpm_chip *chip;
void *phy_id;
const struct st33zp24_phy_ops *ops;
u32 intrs;
int io_lpcpd;
};
/*
* clear_interruption clear the pending interrupt.
* @param: tpm_dev, the tpm device device.
@ -102,11 +94,9 @@ static u8 clear_interruption(struct st33zp24_dev *tpm_dev)
*/
static void st33zp24_cancel(struct tpm_chip *chip)
{
struct st33zp24_dev *tpm_dev;
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
u8 data;
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
data = TPM_STS_COMMAND_READY;
tpm_dev->ops->send(tpm_dev->phy_id, TPM_STS, &data, 1);
} /* st33zp24_cancel() */
@ -118,11 +108,9 @@ static void st33zp24_cancel(struct tpm_chip *chip)
*/
static u8 st33zp24_status(struct tpm_chip *chip)
{
struct st33zp24_dev *tpm_dev;
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
u8 data;
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
tpm_dev->ops->recv(tpm_dev->phy_id, TPM_STS, &data, 1);
return data;
} /* st33zp24_status() */
@ -134,17 +122,15 @@ static u8 st33zp24_status(struct tpm_chip *chip)
*/
static int check_locality(struct tpm_chip *chip)
{
struct st33zp24_dev *tpm_dev;
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
u8 data;
u8 status;
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
status = tpm_dev->ops->recv(tpm_dev->phy_id, TPM_ACCESS, &data, 1);
if (status && (data &
(TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) ==
(TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID))
return chip->vendor.locality;
return tpm_dev->locality;
return -EACCES;
} /* check_locality() */
@ -156,27 +142,25 @@ static int check_locality(struct tpm_chip *chip)
*/
static int request_locality(struct tpm_chip *chip)
{
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
unsigned long stop;
long ret;
struct st33zp24_dev *tpm_dev;
u8 data;
if (check_locality(chip) == chip->vendor.locality)
return chip->vendor.locality;
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
if (check_locality(chip) == tpm_dev->locality)
return tpm_dev->locality;
data = TPM_ACCESS_REQUEST_USE;
ret = tpm_dev->ops->send(tpm_dev->phy_id, TPM_ACCESS, &data, 1);
if (ret < 0)
return ret;
stop = jiffies + chip->vendor.timeout_a;
stop = jiffies + chip->timeout_a;
/* Request locality is usually effective after the request */
do {
if (check_locality(chip) >= 0)
return chip->vendor.locality;
return tpm_dev->locality;
msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
@ -190,10 +174,9 @@ static int request_locality(struct tpm_chip *chip)
*/
static void release_locality(struct tpm_chip *chip)
{
struct st33zp24_dev *tpm_dev;
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
u8 data;
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
data = TPM_ACCESS_ACTIVE_LOCALITY;
tpm_dev->ops->send(tpm_dev->phy_id, TPM_ACCESS, &data, 1);
@ -206,23 +189,21 @@ static void release_locality(struct tpm_chip *chip)
*/
static int get_burstcount(struct tpm_chip *chip)
{
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
unsigned long stop;
int burstcnt, status;
u8 tpm_reg, temp;
struct st33zp24_dev *tpm_dev;
u8 temp;
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
stop = jiffies + chip->vendor.timeout_d;
stop = jiffies + chip->timeout_d;
do {
tpm_reg = TPM_STS + 1;
status = tpm_dev->ops->recv(tpm_dev->phy_id, tpm_reg, &temp, 1);
status = tpm_dev->ops->recv(tpm_dev->phy_id, TPM_STS + 1,
&temp, 1);
if (status < 0)
return -EBUSY;
tpm_reg = TPM_STS + 2;
burstcnt = temp;
status = tpm_dev->ops->recv(tpm_dev->phy_id, tpm_reg, &temp, 1);
status = tpm_dev->ops->recv(tpm_dev->phy_id, TPM_STS + 2,
&temp, 1);
if (status < 0)
return -EBUSY;
@ -271,15 +252,13 @@ static bool wait_for_tpm_stat_cond(struct tpm_chip *chip, u8 mask,
static int wait_for_stat(struct tpm_chip *chip, u8 mask, unsigned long timeout,
wait_queue_head_t *queue, bool check_cancel)
{
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
unsigned long stop;
int ret = 0;
bool canceled = false;
bool condition;
u32 cur_intrs;
u8 status;
struct st33zp24_dev *tpm_dev;
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
/* check current status */
status = st33zp24_status(chip);
@ -288,10 +267,10 @@ static int wait_for_stat(struct tpm_chip *chip, u8 mask, unsigned long timeout,
stop = jiffies + timeout;
if (chip->vendor.irq) {
if (chip->flags & TPM_CHIP_FLAG_IRQ) {
cur_intrs = tpm_dev->intrs;
clear_interruption(tpm_dev);
enable_irq(chip->vendor.irq);
enable_irq(tpm_dev->irq);
do {
if (ret == -ERESTARTSYS && freezing(current))
@ -314,7 +293,7 @@ static int wait_for_stat(struct tpm_chip *chip, u8 mask, unsigned long timeout,
}
} while (ret == -ERESTARTSYS && freezing(current));
disable_irq_nosync(chip->vendor.irq);
disable_irq_nosync(tpm_dev->irq);
} else {
do {
@ -337,16 +316,14 @@ static int wait_for_stat(struct tpm_chip *chip, u8 mask, unsigned long timeout,
*/
static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
int size = 0, burstcnt, len, ret;
struct st33zp24_dev *tpm_dev;
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
while (size < count &&
wait_for_stat(chip,
TPM_STS_DATA_AVAIL | TPM_STS_VALID,
chip->vendor.timeout_c,
&chip->vendor.read_queue, true) == 0) {
chip->timeout_c,
&tpm_dev->read_queue, true) == 0) {
burstcnt = get_burstcount(chip);
if (burstcnt < 0)
return burstcnt;
@ -370,13 +347,11 @@ static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count)
static irqreturn_t tpm_ioserirq_handler(int irq, void *dev_id)
{
struct tpm_chip *chip = dev_id;
struct st33zp24_dev *tpm_dev;
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
tpm_dev->intrs++;
wake_up_interruptible(&chip->vendor.read_queue);
disable_irq_nosync(chip->vendor.irq);
wake_up_interruptible(&tpm_dev->read_queue);
disable_irq_nosync(tpm_dev->irq);
return IRQ_HANDLED;
} /* tpm_ioserirq_handler() */
@ -393,19 +368,17 @@ static irqreturn_t tpm_ioserirq_handler(int irq, void *dev_id)
static int st33zp24_send(struct tpm_chip *chip, unsigned char *buf,
size_t len)
{
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
u32 status, i, size, ordinal;
int burstcnt = 0;
int ret;
u8 data;
struct st33zp24_dev *tpm_dev;
if (!chip)
return -EBUSY;
if (len < TPM_HEADER_SIZE)
return -EBUSY;
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
ret = request_locality(chip);
if (ret < 0)
return ret;
@ -414,8 +387,8 @@ static int st33zp24_send(struct tpm_chip *chip, unsigned char *buf,
if ((status & TPM_STS_COMMAND_READY) == 0) {
st33zp24_cancel(chip);
if (wait_for_stat
(chip, TPM_STS_COMMAND_READY, chip->vendor.timeout_b,
&chip->vendor.read_queue, false) < 0) {
(chip, TPM_STS_COMMAND_READY, chip->timeout_b,
&tpm_dev->read_queue, false) < 0) {
ret = -ETIME;
goto out_err;
}
@ -456,12 +429,12 @@ static int st33zp24_send(struct tpm_chip *chip, unsigned char *buf,
if (ret < 0)
goto out_err;
if (chip->vendor.irq) {
if (chip->flags & TPM_CHIP_FLAG_IRQ) {
ordinal = be32_to_cpu(*((__be32 *) (buf + 6)));
ret = wait_for_stat(chip, TPM_STS_DATA_AVAIL | TPM_STS_VALID,
tpm_calc_ordinal_duration(chip, ordinal),
&chip->vendor.read_queue, false);
&tpm_dev->read_queue, false);
if (ret < 0)
goto out_err;
}
@ -532,6 +505,7 @@ static bool st33zp24_req_canceled(struct tpm_chip *chip, u8 status)
}
static const struct tpm_class_ops st33zp24_tpm = {
.flags = TPM_OPS_AUTO_STARTUP,
.send = st33zp24_send,
.recv = st33zp24_recv,
.cancel = st33zp24_cancel,
@ -565,20 +539,20 @@ int st33zp24_probe(void *phy_id, const struct st33zp24_phy_ops *ops,
if (!tpm_dev)
return -ENOMEM;
TPM_VPRIV(chip) = tpm_dev;
tpm_dev->phy_id = phy_id;
tpm_dev->ops = ops;
dev_set_drvdata(&chip->dev, tpm_dev);
chip->vendor.timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
chip->vendor.timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT);
chip->vendor.timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
chip->vendor.timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
chip->timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
chip->timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT);
chip->timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
chip->timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
chip->vendor.locality = LOCALITY0;
tpm_dev->locality = LOCALITY0;
if (irq) {
/* INTERRUPT Setup */
init_waitqueue_head(&chip->vendor.read_queue);
init_waitqueue_head(&tpm_dev->read_queue);
tpm_dev->intrs = 0;
if (request_locality(chip) != LOCALITY0) {
@ -611,16 +585,14 @@ int st33zp24_probe(void *phy_id, const struct st33zp24_phy_ops *ops,
if (ret < 0)
goto _tpm_clean_answer;
chip->vendor.irq = irq;
tpm_dev->irq = irq;
chip->flags |= TPM_CHIP_FLAG_IRQ;
disable_irq_nosync(chip->vendor.irq);
disable_irq_nosync(tpm_dev->irq);
tpm_gen_interrupt(chip);
}
tpm_get_timeouts(chip);
tpm_do_selftest(chip);
return tpm_chip_register(chip);
_tpm_clean_answer:
dev_info(&chip->dev, "TPM initialization fail\n");
@ -650,10 +622,9 @@ EXPORT_SYMBOL(st33zp24_remove);
int st33zp24_pm_suspend(struct device *dev)
{
struct tpm_chip *chip = dev_get_drvdata(dev);
struct st33zp24_dev *tpm_dev;
int ret = 0;
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
int ret = 0;
if (gpio_is_valid(tpm_dev->io_lpcpd))
gpio_set_value(tpm_dev->io_lpcpd, 0);
@ -672,16 +643,14 @@ EXPORT_SYMBOL(st33zp24_pm_suspend);
int st33zp24_pm_resume(struct device *dev)
{
struct tpm_chip *chip = dev_get_drvdata(dev);
struct st33zp24_dev *tpm_dev;
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
int ret = 0;
tpm_dev = (struct st33zp24_dev *)TPM_VPRIV(chip);
if (gpio_is_valid(tpm_dev->io_lpcpd)) {
gpio_set_value(tpm_dev->io_lpcpd, 1);
ret = wait_for_stat(chip,
TPM_STS_VALID, chip->vendor.timeout_b,
&chip->vendor.read_queue, false);
TPM_STS_VALID, chip->timeout_b,
&tpm_dev->read_queue, false);
} else {
ret = tpm_pm_resume(dev);
if (!ret)


@ -1,6 +1,6 @@
/*
* STMicroelectronics TPM Linux driver for TPM ST33ZP24
* Copyright (C) 2009 - 2015 STMicroelectronics
* Copyright (C) 2009 - 2016 STMicroelectronics
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@ -21,6 +21,18 @@
#define TPM_WRITE_DIRECTION 0x80
#define TPM_BUFSIZE 2048
struct st33zp24_dev {
struct tpm_chip *chip;
void *phy_id;
const struct st33zp24_phy_ops *ops;
int locality;
int irq;
u32 intrs;
int io_lpcpd;
wait_queue_head_t read_queue;
};
struct st33zp24_phy_ops {
int (*send)(void *phy_id, u8 tpm_register, u8 *tpm_data, int tpm_size);
int (*recv)(void *phy_id, u8 tpm_register, u8 *tpm_data, int tpm_size);


@ -29,33 +29,88 @@
#include "tpm.h"
#include "tpm_eventlog.h"
static DECLARE_BITMAP(dev_mask, TPM_NUM_DEVICES);
static LIST_HEAD(tpm_chip_list);
static DEFINE_SPINLOCK(driver_lock);
DEFINE_IDR(dev_nums_idr);
static DEFINE_MUTEX(idr_lock);
struct class *tpm_class;
dev_t tpm_devt;
/*
* tpm_chip_find_get - return tpm_chip for a given chip number
* @chip_num the device number for the chip
/**
* tpm_try_get_ops() - Get a ref to the tpm_chip
* @chip: Chip to ref
*
* The caller must already have some kind of locking to ensure that chip is
* valid. This function will lock the chip so that the ops member can be
* accessed safely. The locking prevents tpm_chip_unregister from
* completing, so it should not be held for long periods.
*
* Returns -ERRNO if the chip could not be got.
*/
int tpm_try_get_ops(struct tpm_chip *chip)
{
int rc = -EIO;
get_device(&chip->dev);
down_read(&chip->ops_sem);
if (!chip->ops)
goto out_lock;
return 0;
out_lock:
up_read(&chip->ops_sem);
put_device(&chip->dev);
return rc;
}
EXPORT_SYMBOL_GPL(tpm_try_get_ops);
/**
* tpm_put_ops() - Release a ref to the tpm_chip
* @chip: Chip to put
*
* This is the opposite pair to tpm_try_get_ops(). After this returns chip may
* be kfree'd.
*/
void tpm_put_ops(struct tpm_chip *chip)
{
up_read(&chip->ops_sem);
put_device(&chip->dev);
}
EXPORT_SYMBOL_GPL(tpm_put_ops);
/**
* tpm_chip_find_get() - return tpm_chip for a given chip number
* @chip_num: id to find
*
* The return'd chip has been tpm_try_get_ops'd and must be released via
* tpm_put_ops
*/
struct tpm_chip *tpm_chip_find_get(int chip_num)
{
struct tpm_chip *pos, *chip = NULL;
struct tpm_chip *chip, *res = NULL;
int chip_prev;
rcu_read_lock();
list_for_each_entry_rcu(pos, &tpm_chip_list, list) {
if (chip_num != TPM_ANY_NUM && chip_num != pos->dev_num)
continue;
mutex_lock(&idr_lock);
if (try_module_get(pos->pdev->driver->owner)) {
chip = pos;
break;
}
if (chip_num == TPM_ANY_NUM) {
chip_num = 0;
do {
chip_prev = chip_num;
chip = idr_get_next(&dev_nums_idr, &chip_num);
if (chip && !tpm_try_get_ops(chip)) {
res = chip;
break;
}
} while (chip_prev != chip_num);
} else {
chip = idr_find_slowpath(&dev_nums_idr, chip_num);
if (chip && !tpm_try_get_ops(chip))
res = chip;
}
rcu_read_unlock();
return chip;
mutex_unlock(&idr_lock);
return res;
}
/**
@ -68,24 +123,25 @@ static void tpm_dev_release(struct device *dev)
{
struct tpm_chip *chip = container_of(dev, struct tpm_chip, dev);
spin_lock(&driver_lock);
clear_bit(chip->dev_num, dev_mask);
spin_unlock(&driver_lock);
mutex_lock(&idr_lock);
idr_remove(&dev_nums_idr, chip->dev_num);
mutex_unlock(&idr_lock);
kfree(chip);
}
/**
* tpmm_chip_alloc() - allocate a new struct tpm_chip instance
* @dev: device to which the chip is associated
* tpm_chip_alloc() - allocate a new struct tpm_chip instance
* @pdev: device to which the chip is associated
* At this point pdev must be initialized, but does not have to
* be registered
* @ops: struct tpm_class_ops instance
*
* Allocates a new struct tpm_chip instance and assigns a free
* device number for it. Caller does not have to worry about
* freeing the allocated resources. When the devices is removed
* devres calls tpmm_chip_remove() to do the job.
* device number for it. Must be paired with put_device(&chip->dev).
*/
struct tpm_chip *tpmm_chip_alloc(struct device *dev,
const struct tpm_class_ops *ops)
struct tpm_chip *tpm_chip_alloc(struct device *dev,
const struct tpm_class_ops *ops)
{
struct tpm_chip *chip;
int rc;
@ -95,53 +151,75 @@ struct tpm_chip *tpmm_chip_alloc(struct device *dev,
return ERR_PTR(-ENOMEM);
mutex_init(&chip->tpm_mutex);
INIT_LIST_HEAD(&chip->list);
init_rwsem(&chip->ops_sem);
chip->ops = ops;
spin_lock(&driver_lock);
chip->dev_num = find_first_zero_bit(dev_mask, TPM_NUM_DEVICES);
spin_unlock(&driver_lock);
if (chip->dev_num >= TPM_NUM_DEVICES) {
mutex_lock(&idr_lock);
rc = idr_alloc(&dev_nums_idr, NULL, 0, TPM_NUM_DEVICES, GFP_KERNEL);
mutex_unlock(&idr_lock);
if (rc < 0) {
dev_err(dev, "No available tpm device numbers\n");
kfree(chip);
return ERR_PTR(-ENOMEM);
return ERR_PTR(rc);
}
chip->dev_num = rc;
set_bit(chip->dev_num, dev_mask);
scnprintf(chip->devname, sizeof(chip->devname), "tpm%d", chip->dev_num);
chip->pdev = dev;
dev_set_drvdata(dev, chip);
device_initialize(&chip->dev);
chip->dev.class = tpm_class;
chip->dev.release = tpm_dev_release;
chip->dev.parent = chip->pdev;
#ifdef CONFIG_ACPI
chip->dev.parent = dev;
chip->dev.groups = chip->groups;
#endif
if (chip->dev_num == 0)
chip->dev.devt = MKDEV(MISC_MAJOR, TPM_MINOR);
else
chip->dev.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num);
dev_set_name(&chip->dev, "%s", chip->devname);
rc = dev_set_name(&chip->dev, "tpm%d", chip->dev_num);
if (rc)
goto out;
device_initialize(&chip->dev);
if (!dev)
chip->flags |= TPM_CHIP_FLAG_VIRTUAL;
cdev_init(&chip->cdev, &tpm_fops);
chip->cdev.owner = chip->pdev->driver->owner;
chip->cdev.owner = THIS_MODULE;
chip->cdev.kobj.parent = &chip->dev.kobj;
rc = devm_add_action(dev, (void (*)(void *)) put_device, &chip->dev);
if (rc) {
put_device(&chip->dev);
return chip;
out:
put_device(&chip->dev);
return ERR_PTR(rc);
}
EXPORT_SYMBOL_GPL(tpm_chip_alloc);
/**
* tpmm_chip_alloc() - allocate a new struct tpm_chip instance
* @pdev: parent device to which the chip is associated
* @ops: struct tpm_class_ops instance
*
* Same as tpm_chip_alloc except devm is used to do the put_device
*/
struct tpm_chip *tpmm_chip_alloc(struct device *pdev,
const struct tpm_class_ops *ops)
{
struct tpm_chip *chip;
int rc;
chip = tpm_chip_alloc(pdev, ops);
if (IS_ERR(chip))
return chip;
rc = devm_add_action_or_reset(pdev,
(void (*)(void *)) put_device,
&chip->dev);
if (rc)
return ERR_PTR(rc);
}
dev_set_drvdata(pdev, chip);
return chip;
}
@ -155,7 +233,7 @@ static int tpm_add_char_device(struct tpm_chip *chip)
if (rc) {
dev_err(&chip->dev,
"unable to cdev_add() %s, major %d, minor %d, err=%d\n",
chip->devname, MAJOR(chip->dev.devt),
dev_name(&chip->dev), MAJOR(chip->dev.devt),
MINOR(chip->dev.devt), rc);
return rc;
@ -165,13 +243,18 @@ static int tpm_add_char_device(struct tpm_chip *chip)
if (rc) {
dev_err(&chip->dev,
"unable to device_register() %s, major %d, minor %d, err=%d\n",
chip->devname, MAJOR(chip->dev.devt),
dev_name(&chip->dev), MAJOR(chip->dev.devt),
MINOR(chip->dev.devt), rc);
cdev_del(&chip->cdev);
return rc;
}
/* Make the chip available. */
mutex_lock(&idr_lock);
idr_replace(&dev_nums_idr, chip, chip->dev_num);
mutex_unlock(&idr_lock);
return rc;
}
@ -179,20 +262,28 @@ static void tpm_del_char_device(struct tpm_chip *chip)
{
cdev_del(&chip->cdev);
device_del(&chip->dev);
/* Make the chip unavailable. */
mutex_lock(&idr_lock);
idr_replace(&dev_nums_idr, NULL, chip->dev_num);
mutex_unlock(&idr_lock);
/* Make the driver uncallable. */
down_write(&chip->ops_sem);
if (chip->flags & TPM_CHIP_FLAG_TPM2)
tpm2_shutdown(chip, TPM2_SU_CLEAR);
chip->ops = NULL;
up_write(&chip->ops_sem);
}
static int tpm1_chip_register(struct tpm_chip *chip)
{
int rc;
if (chip->flags & TPM_CHIP_FLAG_TPM2)
return 0;
rc = tpm_sysfs_add_device(chip);
if (rc)
return rc;
tpm_sysfs_add_device(chip);
chip->bios_dir = tpm_bios_log_setup(chip->devname);
chip->bios_dir = tpm_bios_log_setup(dev_name(&chip->dev));
return 0;
}
@ -204,10 +295,50 @@ static void tpm1_chip_unregister(struct tpm_chip *chip)
if (chip->bios_dir)
tpm_bios_log_teardown(chip->bios_dir);
tpm_sysfs_del_device(chip);
}
static void tpm_del_legacy_sysfs(struct tpm_chip *chip)
{
struct attribute **i;
if (chip->flags & (TPM_CHIP_FLAG_TPM2 | TPM_CHIP_FLAG_VIRTUAL))
return;
sysfs_remove_link(&chip->dev.parent->kobj, "ppi");
for (i = chip->groups[0]->attrs; *i != NULL; ++i)
sysfs_remove_link(&chip->dev.parent->kobj, (*i)->name);
}
/* For compatibility with legacy sysfs paths we provide symlinks from the
* parent dev directory to selected names within the tpm chip directory. Old
* kernel versions created these files directly under the parent.
*/
static int tpm_add_legacy_sysfs(struct tpm_chip *chip)
{
struct attribute **i;
int rc;
if (chip->flags & (TPM_CHIP_FLAG_TPM2 | TPM_CHIP_FLAG_VIRTUAL))
return 0;
rc = __compat_only_sysfs_link_entry_to_kobj(
&chip->dev.parent->kobj, &chip->dev.kobj, "ppi");
if (rc && rc != -ENOENT)
return rc;
/* All the names from tpm-sysfs */
for (i = chip->groups[0]->attrs; *i != NULL; ++i) {
rc = __compat_only_sysfs_link_entry_to_kobj(
&chip->dev.parent->kobj, &chip->dev.kobj, (*i)->name);
if (rc) {
tpm_del_legacy_sysfs(chip);
return rc;
}
}
return 0;
}
/*
* tpm_chip_register() - create a character device for the TPM chip
* @chip: TPM chip to use.
@ -223,6 +354,15 @@ int tpm_chip_register(struct tpm_chip *chip)
{
int rc;
if (chip->ops->flags & TPM_OPS_AUTO_STARTUP) {
if (chip->flags & TPM_CHIP_FLAG_TPM2)
rc = tpm2_auto_startup(chip);
else
rc = tpm1_auto_startup(chip);
if (rc)
return rc;
}
rc = tpm1_chip_register(chip);
if (rc)
return rc;
@ -230,30 +370,20 @@ int tpm_chip_register(struct tpm_chip *chip)
tpm_add_ppi(chip);
rc = tpm_add_char_device(chip);
if (rc)
goto out_err;
/* Make the chip available. */
spin_lock(&driver_lock);
list_add_tail_rcu(&chip->list, &tpm_chip_list);
spin_unlock(&driver_lock);
if (rc) {
tpm1_chip_unregister(chip);
return rc;
}
chip->flags |= TPM_CHIP_FLAG_REGISTERED;
if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) {
rc = __compat_only_sysfs_link_entry_to_kobj(&chip->pdev->kobj,
&chip->dev.kobj,
"ppi");
if (rc && rc != -ENOENT) {
tpm_chip_unregister(chip);
return rc;
}
rc = tpm_add_legacy_sysfs(chip);
if (rc) {
tpm_chip_unregister(chip);
return rc;
}
return 0;
out_err:
tpm1_chip_unregister(chip);
return rc;
}
EXPORT_SYMBOL_GPL(tpm_chip_register);
@ -264,6 +394,9 @@ EXPORT_SYMBOL_GPL(tpm_chip_register);
* Takes the chip first away from the list of available TPM chips and then
* cleans up all the resources reserved by tpm_chip_register().
*
* Once this function returns, the driver callbacks in 'ops' will not be
* running and will no longer start.
*
* NOTE: This function should be only called before deinitializing chip
* resources.
*/
@ -272,13 +405,7 @@ void tpm_chip_unregister(struct tpm_chip *chip)
if (!(chip->flags & TPM_CHIP_FLAG_REGISTERED))
return;
spin_lock(&driver_lock);
list_del_rcu(&chip->list);
spin_unlock(&driver_lock);
synchronize_rcu();
if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
sysfs_remove_link(&chip->pdev->kobj, "ppi");
tpm_del_legacy_sysfs(chip);
tpm1_chip_unregister(chip);
tpm_del_char_device(chip);
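A sketch of what that guarantee buys a driver on its remove path (illustrative names, not part of the diff):

static int example_tpm_remove(struct platform_device *pdev)
{
        struct tpm_chip *chip = dev_get_drvdata(&pdev->dev);

        /* Once this returns, no ops callback is running or can start, so
         * tearing down the hardware below cannot race with the core. */
        tpm_chip_unregister(chip);

        /* unmap registers, free the IRQ, ... */
        return 0;
}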


@ -61,7 +61,7 @@ static int tpm_open(struct inode *inode, struct file *file)
* by the check of is_open variable, which is protected
* by driver_lock. */
if (test_and_set_bit(0, &chip->is_open)) {
dev_dbg(chip->pdev, "Another process owns this TPM\n");
dev_dbg(&chip->dev, "Another process owns this TPM\n");
return -EBUSY;
}
@ -79,7 +79,6 @@ static int tpm_open(struct inode *inode, struct file *file)
INIT_WORK(&priv->work, timeout_work);
file->private_data = priv;
get_device(chip->pdev);
return 0;
}
@ -137,9 +136,18 @@ static ssize_t tpm_write(struct file *file, const char __user *buf,
return -EFAULT;
}
/* atomic tpm command send and result receive */
/* atomic tpm command send and result receive. We only hold the ops
* lock during this period so that the tpm can be unregistered even if
* the char dev is held open.
*/
if (tpm_try_get_ops(priv->chip)) {
mutex_unlock(&priv->buffer_mutex);
return -EPIPE;
}
out_size = tpm_transmit(priv->chip, priv->data_buffer,
sizeof(priv->data_buffer));
tpm_put_ops(priv->chip);
if (out_size < 0) {
mutex_unlock(&priv->buffer_mutex);
return out_size;
@ -166,7 +174,6 @@ static int tpm_release(struct inode *inode, struct file *file)
file->private_data = NULL;
atomic_set(&priv->data_pending, 0);
clear_bit(0, &priv->chip->is_open);
put_device(priv->chip->pdev);
kfree(priv);
return 0;
}
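For context, a user-space sketch of the write()/read() round trip this character device implements. The device node name and the raw bytes (a TPM 1.2 TPM_GetRandom request for 8 bytes) are illustrative only.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        uint8_t cmd[] = {
                0x00, 0xc1,             /* TPM_TAG_RQU_COMMAND */
                0x00, 0x00, 0x00, 0x0e, /* paramSize = 14 */
                0x00, 0x00, 0x00, 0x46, /* TPM_ORD_GetRandom */
                0x00, 0x00, 0x00, 0x08, /* bytesRequested = 8 */
        };
        uint8_t resp[64];
        ssize_t n;
        int fd;

        fd = open("/dev/tpm0", O_RDWR);
        if (fd < 0)
                return 1;
        if (write(fd, cmd, sizeof(cmd)) != (ssize_t)sizeof(cmd))
                return 1;
        n = read(fd, resp, sizeof(resp));       /* fetch the response */
        printf("received %zd bytes\n", n);
        close(fd);
        return 0;
}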


@ -319,7 +319,7 @@ unsigned long tpm_calc_ordinal_duration(struct tpm_chip *chip,
duration_idx = tpm_ordinal_duration[ordinal];
if (duration_idx != TPM_UNDEFINED)
duration = chip->vendor.duration[duration_idx];
duration = chip->duration[duration_idx];
if (duration <= 0)
return 2 * 60 * HZ;
else
@ -345,7 +345,7 @@ ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
if (count == 0)
return -ENODATA;
if (count > bufsiz) {
dev_err(chip->pdev,
dev_err(&chip->dev,
"invalid count value %x %zx\n", count, bufsiz);
return -E2BIG;
}
@ -354,12 +354,12 @@ ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
rc = chip->ops->send(chip, (u8 *) buf, count);
if (rc < 0) {
dev_err(chip->pdev,
dev_err(&chip->dev,
"tpm_transmit: tpm_send: error %zd\n", rc);
goto out;
}
if (chip->vendor.irq)
if (chip->flags & TPM_CHIP_FLAG_IRQ)
goto out_recv;
if (chip->flags & TPM_CHIP_FLAG_TPM2)
@ -373,7 +373,7 @@ ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
goto out_recv;
if (chip->ops->req_canceled(chip, status)) {
dev_err(chip->pdev, "Operation Canceled\n");
dev_err(&chip->dev, "Operation Canceled\n");
rc = -ECANCELED;
goto out;
}
@ -383,14 +383,14 @@ ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
} while (time_before(jiffies, stop));
chip->ops->cancel(chip);
dev_err(chip->pdev, "Operation Timed out\n");
dev_err(&chip->dev, "Operation Timed out\n");
rc = -ETIME;
goto out;
out_recv:
rc = chip->ops->recv(chip, (u8 *) buf, bufsiz);
if (rc < 0)
dev_err(chip->pdev,
dev_err(&chip->dev,
"tpm_transmit: tpm_recv: error %zd\n", rc);
out:
mutex_unlock(&chip->tpm_mutex);
@ -416,7 +416,7 @@ ssize_t tpm_transmit_cmd(struct tpm_chip *chip, void *cmd,
err = be32_to_cpu(header->return_code);
if (err != 0 && desc)
dev_err(chip->pdev, "A TPM error (%d) occurred %s\n", err,
dev_err(&chip->dev, "A TPM error (%d) occurred %s\n", err,
desc);
return err;
@ -432,12 +432,11 @@ static const struct tpm_input_header tpm_getcap_header = {
.ordinal = TPM_ORD_GET_CAP
};
ssize_t tpm_getcap(struct device *dev, __be32 subcap_id, cap_t *cap,
ssize_t tpm_getcap(struct tpm_chip *chip, __be32 subcap_id, cap_t *cap,
const char *desc)
{
struct tpm_cmd_t tpm_cmd;
int rc;
struct tpm_chip *chip = dev_get_drvdata(dev);
tpm_cmd.header.in = tpm_getcap_header;
if (subcap_id == CAP_VERSION_1_1 || subcap_id == CAP_VERSION_1_2) {
@ -505,15 +504,15 @@ int tpm_get_timeouts(struct tpm_chip *chip)
if (chip->flags & TPM_CHIP_FLAG_TPM2) {
/* Fixed timeouts for TPM2 */
chip->vendor.timeout_a = msecs_to_jiffies(TPM2_TIMEOUT_A);
chip->vendor.timeout_b = msecs_to_jiffies(TPM2_TIMEOUT_B);
chip->vendor.timeout_c = msecs_to_jiffies(TPM2_TIMEOUT_C);
chip->vendor.timeout_d = msecs_to_jiffies(TPM2_TIMEOUT_D);
chip->vendor.duration[TPM_SHORT] =
chip->timeout_a = msecs_to_jiffies(TPM2_TIMEOUT_A);
chip->timeout_b = msecs_to_jiffies(TPM2_TIMEOUT_B);
chip->timeout_c = msecs_to_jiffies(TPM2_TIMEOUT_C);
chip->timeout_d = msecs_to_jiffies(TPM2_TIMEOUT_D);
chip->duration[TPM_SHORT] =
msecs_to_jiffies(TPM2_DURATION_SHORT);
chip->vendor.duration[TPM_MEDIUM] =
chip->duration[TPM_MEDIUM] =
msecs_to_jiffies(TPM2_DURATION_MEDIUM);
chip->vendor.duration[TPM_LONG] =
chip->duration[TPM_LONG] =
msecs_to_jiffies(TPM2_DURATION_LONG);
return 0;
}
@ -527,7 +526,7 @@ int tpm_get_timeouts(struct tpm_chip *chip)
if (rc == TPM_ERR_INVALID_POSTINIT) {
/* The TPM is not started, we are the first to talk to it.
Execute a startup command. */
dev_info(chip->pdev, "Issuing TPM_STARTUP");
dev_info(&chip->dev, "Issuing TPM_STARTUP");
if (tpm_startup(chip, TPM_ST_CLEAR))
return rc;
@ -539,7 +538,7 @@ int tpm_get_timeouts(struct tpm_chip *chip)
NULL);
}
if (rc) {
dev_err(chip->pdev,
dev_err(&chip->dev,
"A TPM error (%zd) occurred attempting to determine the timeouts\n",
rc);
goto duration;
@ -561,10 +560,10 @@ int tpm_get_timeouts(struct tpm_chip *chip)
* of misreporting.
*/
if (chip->ops->update_timeouts != NULL)
chip->vendor.timeout_adjusted =
chip->timeout_adjusted =
chip->ops->update_timeouts(chip, new_timeout);
if (!chip->vendor.timeout_adjusted) {
if (!chip->timeout_adjusted) {
/* Don't overwrite default if value is 0 */
if (new_timeout[0] != 0 && new_timeout[0] < 1000) {
int i;
@ -572,13 +571,13 @@ int tpm_get_timeouts(struct tpm_chip *chip)
/* timeouts in msec rather than usec */
for (i = 0; i != ARRAY_SIZE(new_timeout); i++)
new_timeout[i] *= 1000;
chip->vendor.timeout_adjusted = true;
chip->timeout_adjusted = true;
}
}
/* Report adjusted timeouts */
if (chip->vendor.timeout_adjusted) {
dev_info(chip->pdev,
if (chip->timeout_adjusted) {
dev_info(&chip->dev,
HW_ERR "Adjusting reported timeouts: A %lu->%luus B %lu->%luus C %lu->%luus D %lu->%luus\n",
old_timeout[0], new_timeout[0],
old_timeout[1], new_timeout[1],
@ -586,10 +585,10 @@ int tpm_get_timeouts(struct tpm_chip *chip)
old_timeout[3], new_timeout[3]);
}
chip->vendor.timeout_a = usecs_to_jiffies(new_timeout[0]);
chip->vendor.timeout_b = usecs_to_jiffies(new_timeout[1]);
chip->vendor.timeout_c = usecs_to_jiffies(new_timeout[2]);
chip->vendor.timeout_d = usecs_to_jiffies(new_timeout[3]);
chip->timeout_a = usecs_to_jiffies(new_timeout[0]);
chip->timeout_b = usecs_to_jiffies(new_timeout[1]);
chip->timeout_c = usecs_to_jiffies(new_timeout[2]);
chip->timeout_d = usecs_to_jiffies(new_timeout[3]);
duration:
tpm_cmd.header.in = tpm_getcap_header;
@ -608,11 +607,11 @@ int tpm_get_timeouts(struct tpm_chip *chip)
return -EINVAL;
duration_cap = &tpm_cmd.params.getcap_out.cap.duration;
chip->vendor.duration[TPM_SHORT] =
chip->duration[TPM_SHORT] =
usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_short));
chip->vendor.duration[TPM_MEDIUM] =
chip->duration[TPM_MEDIUM] =
usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_medium));
chip->vendor.duration[TPM_LONG] =
chip->duration[TPM_LONG] =
usecs_to_jiffies(be32_to_cpu(duration_cap->tpm_long));
/* The Broadcom BCM0102 chipset in a Dell Latitude D820 gets the above
@ -620,12 +619,12 @@ int tpm_get_timeouts(struct tpm_chip *chip)
* fix up the resulting too-small TPM_SHORT value to make things work.
* We also scale the TPM_MEDIUM and -_LONG values by 1000.
*/
if (chip->vendor.duration[TPM_SHORT] < (HZ / 100)) {
chip->vendor.duration[TPM_SHORT] = HZ;
chip->vendor.duration[TPM_MEDIUM] *= 1000;
chip->vendor.duration[TPM_LONG] *= 1000;
chip->vendor.duration_adjusted = true;
dev_info(chip->pdev, "Adjusting TPM timeout parameters.");
if (chip->duration[TPM_SHORT] < (HZ / 100)) {
chip->duration[TPM_SHORT] = HZ;
chip->duration[TPM_MEDIUM] *= 1000;
chip->duration[TPM_LONG] *= 1000;
chip->duration_adjusted = true;
dev_info(&chip->dev, "Adjusting TPM timeout parameters.");
}
return 0;
}
@ -700,7 +699,7 @@ int tpm_is_tpm2(u32 chip_num)
rc = (chip->flags & TPM_CHIP_FLAG_TPM2) != 0;
tpm_chip_put(chip);
tpm_put_ops(chip);
return rc;
}
@ -729,7 +728,7 @@ int tpm_pcr_read(u32 chip_num, int pcr_idx, u8 *res_buf)
rc = tpm2_pcr_read(chip, pcr_idx, res_buf);
else
rc = tpm_pcr_read_dev(chip, pcr_idx, res_buf);
tpm_chip_put(chip);
tpm_put_ops(chip);
return rc;
}
EXPORT_SYMBOL_GPL(tpm_pcr_read);
@ -764,7 +763,7 @@ int tpm_pcr_extend(u32 chip_num, int pcr_idx, const u8 *hash)
if (chip->flags & TPM_CHIP_FLAG_TPM2) {
rc = tpm2_pcr_extend(chip, pcr_idx, hash);
tpm_chip_put(chip);
tpm_put_ops(chip);
return rc;
}
@ -774,7 +773,7 @@ int tpm_pcr_extend(u32 chip_num, int pcr_idx, const u8 *hash)
rc = tpm_transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE,
"attempting extend a PCR value");
tpm_chip_put(chip);
tpm_put_ops(chip);
return rc;
}
EXPORT_SYMBOL_GPL(tpm_pcr_extend);
@ -815,7 +814,9 @@ int tpm_do_selftest(struct tpm_chip *chip)
* around 300ms while the self test is ongoing, keep trying
* until the self test duration expires. */
if (rc == -ETIME) {
dev_info(chip->pdev, HW_ERR "TPM command timed out during continue self test");
dev_info(
&chip->dev, HW_ERR
"TPM command timed out during continue self test");
msleep(delay_msec);
continue;
}
@ -825,7 +826,7 @@ int tpm_do_selftest(struct tpm_chip *chip)
rc = be32_to_cpu(cmd.header.out.return_code);
if (rc == TPM_ERR_DISABLED || rc == TPM_ERR_DEACTIVATED) {
dev_info(chip->pdev,
dev_info(&chip->dev,
"TPM is disabled/deactivated (0x%X)\n", rc);
/* TPM is disabled and/or deactivated; driver can
* proceed and TPM does handle commands for
@ -842,6 +843,33 @@ int tpm_do_selftest(struct tpm_chip *chip)
}
EXPORT_SYMBOL_GPL(tpm_do_selftest);
/**
* tpm1_auto_startup - Perform the standard automatic TPM initialization
* sequence
* @chip: TPM chip to use
*
* Returns 0 on success, < 0 in case of fatal error.
*/
int tpm1_auto_startup(struct tpm_chip *chip)
{
int rc;
rc = tpm_get_timeouts(chip);
if (rc)
goto out;
rc = tpm_do_selftest(chip);
if (rc) {
dev_err(&chip->dev, "TPM self test failed\n");
goto out;
}
return rc;
out:
if (rc > 0)
rc = -ENODEV;
return rc;
}
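How a driver opts in to this sequence, for illustration: setting TPM_OPS_AUTO_STARTUP in its tpm_class_ops (as the drivers later in this diff now do) makes tpm_chip_register() run tpm1_auto_startup() or tpm2_auto_startup() on its behalf. The example_* callbacks are assumed to be defined by the driver.

static const struct tpm_class_ops example_tpm_ops = {
        .flags = TPM_OPS_AUTO_STARTUP,
        .status = example_status,
        .recv = example_recv,
        .send = example_send,
        .cancel = example_cancel,
        .req_canceled = example_req_canceled,
};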
int tpm_send(u32 chip_num, void *cmd, size_t buflen)
{
struct tpm_chip *chip;
@ -853,7 +881,7 @@ int tpm_send(u32 chip_num, void *cmd, size_t buflen)
rc = tpm_transmit_cmd(chip, cmd, buflen, "attempting tpm_cmd");
tpm_chip_put(chip);
tpm_put_ops(chip);
return rc;
}
EXPORT_SYMBOL_GPL(tpm_send);
@ -888,7 +916,7 @@ int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask, unsigned long timeout,
stop = jiffies + timeout;
if (chip->vendor.irq) {
if (chip->flags & TPM_CHIP_FLAG_IRQ) {
again:
timeout = stop - jiffies;
if ((long)timeout <= 0)
@ -978,10 +1006,10 @@ int tpm_pm_suspend(struct device *dev)
}
if (rc)
dev_err(chip->pdev,
dev_err(&chip->dev,
"Error (%d) sending savestate before suspend\n", rc);
else if (try > 0)
dev_warn(chip->pdev, "TPM savestate took %dms\n",
dev_warn(&chip->dev, "TPM savestate took %dms\n",
try * TPM_TIMEOUT_RETRY);
return rc;
@ -1035,7 +1063,7 @@ int tpm_get_random(u32 chip_num, u8 *out, size_t max)
if (chip->flags & TPM_CHIP_FLAG_TPM2) {
err = tpm2_get_random(chip, out, max);
tpm_chip_put(chip);
tpm_put_ops(chip);
return err;
}
@ -1057,7 +1085,7 @@ int tpm_get_random(u32 chip_num, u8 *out, size_t max)
num_bytes -= recd;
} while (retries-- && total < max);
tpm_chip_put(chip);
tpm_put_ops(chip);
return total ? total : -EIO;
}
EXPORT_SYMBOL_GPL(tpm_get_random);
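A sketch of an in-kernel consumer of these exported helpers, assuming TPM_ANY_NUM and TPM_DIGEST_SIZE from <linux/tpm.h>; example_use_tpm() is illustrative only.

#include <linux/tpm.h>

static int example_use_tpm(void)
{
        u8 rng[16];
        u8 pcr[TPM_DIGEST_SIZE];
        int rc;

        rc = tpm_get_random(TPM_ANY_NUM, rng, sizeof(rng));
        if (rc < 0)
                return rc;
        /* on success rc is the number of bytes actually returned */

        return tpm_pcr_read(TPM_ANY_NUM, 0, pcr);
}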
@ -1083,7 +1111,7 @@ int tpm_seal_trusted(u32 chip_num, struct trusted_key_payload *payload,
rc = tpm2_seal_trusted(chip, payload, options);
tpm_chip_put(chip);
tpm_put_ops(chip);
return rc;
}
EXPORT_SYMBOL_GPL(tpm_seal_trusted);
@ -1109,7 +1137,8 @@ int tpm_unseal_trusted(u32 chip_num, struct trusted_key_payload *payload,
rc = tpm2_unseal_trusted(chip, payload, options);
tpm_chip_put(chip);
tpm_put_ops(chip);
return rc;
}
EXPORT_SYMBOL_GPL(tpm_unseal_trusted);
@ -1136,6 +1165,7 @@ static int __init tpm_init(void)
static void __exit tpm_exit(void)
{
idr_destroy(&dev_nums_idr);
class_destroy(tpm_class);
unregister_chrdev_region(tpm_devt, TPM_NUM_DEVICES);
}


@ -36,7 +36,7 @@ static ssize_t pubek_show(struct device *dev, struct device_attribute *attr,
int i, rc;
char *str = buf;
struct tpm_chip *chip = dev_get_drvdata(dev);
struct tpm_chip *chip = to_tpm_chip(dev);
tpm_cmd.header.in = tpm_readpubek_header;
err = tpm_transmit_cmd(chip, &tpm_cmd, READ_PUBEK_RESULT_SIZE,
@ -92,9 +92,9 @@ static ssize_t pcrs_show(struct device *dev, struct device_attribute *attr,
ssize_t rc;
int i, j, num_pcrs;
char *str = buf;
struct tpm_chip *chip = dev_get_drvdata(dev);
struct tpm_chip *chip = to_tpm_chip(dev);
rc = tpm_getcap(dev, TPM_CAP_PROP_PCR, &cap,
rc = tpm_getcap(chip, TPM_CAP_PROP_PCR, &cap,
"attempting to determine the number of PCRS");
if (rc)
return 0;
@ -119,8 +119,8 @@ static ssize_t enabled_show(struct device *dev, struct device_attribute *attr,
cap_t cap;
ssize_t rc;
rc = tpm_getcap(dev, TPM_CAP_FLAG_PERM, &cap,
"attempting to determine the permanent enabled state");
rc = tpm_getcap(to_tpm_chip(dev), TPM_CAP_FLAG_PERM, &cap,
"attempting to determine the permanent enabled state");
if (rc)
return 0;
@ -135,8 +135,8 @@ static ssize_t active_show(struct device *dev, struct device_attribute *attr,
cap_t cap;
ssize_t rc;
rc = tpm_getcap(dev, TPM_CAP_FLAG_PERM, &cap,
"attempting to determine the permanent active state");
rc = tpm_getcap(to_tpm_chip(dev), TPM_CAP_FLAG_PERM, &cap,
"attempting to determine the permanent active state");
if (rc)
return 0;
@ -151,8 +151,8 @@ static ssize_t owned_show(struct device *dev, struct device_attribute *attr,
cap_t cap;
ssize_t rc;
rc = tpm_getcap(dev, TPM_CAP_PROP_OWNER, &cap,
"attempting to determine the owner state");
rc = tpm_getcap(to_tpm_chip(dev), TPM_CAP_PROP_OWNER, &cap,
"attempting to determine the owner state");
if (rc)
return 0;
@ -167,8 +167,8 @@ static ssize_t temp_deactivated_show(struct device *dev,
cap_t cap;
ssize_t rc;
rc = tpm_getcap(dev, TPM_CAP_FLAG_VOL, &cap,
"attempting to determine the temporary state");
rc = tpm_getcap(to_tpm_chip(dev), TPM_CAP_FLAG_VOL, &cap,
"attempting to determine the temporary state");
if (rc)
return 0;
@ -180,11 +180,12 @@ static DEVICE_ATTR_RO(temp_deactivated);
static ssize_t caps_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tpm_chip *chip = to_tpm_chip(dev);
cap_t cap;
ssize_t rc;
char *str = buf;
rc = tpm_getcap(dev, TPM_CAP_PROP_MANUFACTURER, &cap,
rc = tpm_getcap(chip, TPM_CAP_PROP_MANUFACTURER, &cap,
"attempting to determine the manufacturer");
if (rc)
return 0;
@ -192,8 +193,8 @@ static ssize_t caps_show(struct device *dev, struct device_attribute *attr,
be32_to_cpu(cap.manufacturer_id));
/* Try to get a TPM version 1.2 TPM_CAP_VERSION_INFO */
rc = tpm_getcap(dev, CAP_VERSION_1_2, &cap,
"attempting to determine the 1.2 version");
rc = tpm_getcap(chip, CAP_VERSION_1_2, &cap,
"attempting to determine the 1.2 version");
if (!rc) {
str += sprintf(str,
"TCG version: %d.%d\nFirmware version: %d.%d\n",
@ -203,7 +204,7 @@ static ssize_t caps_show(struct device *dev, struct device_attribute *attr,
cap.tpm_version_1_2.revMinor);
} else {
/* Otherwise just use TPM_STRUCT_VER */
rc = tpm_getcap(dev, CAP_VERSION_1_1, &cap,
rc = tpm_getcap(chip, CAP_VERSION_1_1, &cap,
"attempting to determine the 1.1 version");
if (rc)
return 0;
@ -222,7 +223,7 @@ static DEVICE_ATTR_RO(caps);
static ssize_t cancel_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct tpm_chip *chip = dev_get_drvdata(dev);
struct tpm_chip *chip = to_tpm_chip(dev);
if (chip == NULL)
return 0;
@ -234,16 +235,16 @@ static DEVICE_ATTR_WO(cancel);
static ssize_t durations_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tpm_chip *chip = dev_get_drvdata(dev);
struct tpm_chip *chip = to_tpm_chip(dev);
if (chip->vendor.duration[TPM_LONG] == 0)
if (chip->duration[TPM_LONG] == 0)
return 0;
return sprintf(buf, "%d %d %d [%s]\n",
jiffies_to_usecs(chip->vendor.duration[TPM_SHORT]),
jiffies_to_usecs(chip->vendor.duration[TPM_MEDIUM]),
jiffies_to_usecs(chip->vendor.duration[TPM_LONG]),
chip->vendor.duration_adjusted
jiffies_to_usecs(chip->duration[TPM_SHORT]),
jiffies_to_usecs(chip->duration[TPM_MEDIUM]),
jiffies_to_usecs(chip->duration[TPM_LONG]),
chip->duration_adjusted
? "adjusted" : "original");
}
static DEVICE_ATTR_RO(durations);
@ -251,14 +252,14 @@ static DEVICE_ATTR_RO(durations);
static ssize_t timeouts_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct tpm_chip *chip = dev_get_drvdata(dev);
struct tpm_chip *chip = to_tpm_chip(dev);
return sprintf(buf, "%d %d %d %d [%s]\n",
jiffies_to_usecs(chip->vendor.timeout_a),
jiffies_to_usecs(chip->vendor.timeout_b),
jiffies_to_usecs(chip->vendor.timeout_c),
jiffies_to_usecs(chip->vendor.timeout_d),
chip->vendor.timeout_adjusted
jiffies_to_usecs(chip->timeout_a),
jiffies_to_usecs(chip->timeout_b),
jiffies_to_usecs(chip->timeout_c),
jiffies_to_usecs(chip->timeout_d),
chip->timeout_adjusted
? "adjusted" : "original");
}
static DEVICE_ATTR_RO(timeouts);
@ -281,19 +282,12 @@ static const struct attribute_group tpm_dev_group = {
.attrs = tpm_dev_attrs,
};
int tpm_sysfs_add_device(struct tpm_chip *chip)
void tpm_sysfs_add_device(struct tpm_chip *chip)
{
int err;
err = sysfs_create_group(&chip->pdev->kobj,
&tpm_dev_group);
if (err)
dev_err(chip->pdev,
"failed to create sysfs attributes, %d\n", err);
return err;
}
void tpm_sysfs_del_device(struct tpm_chip *chip)
{
sysfs_remove_group(&chip->pdev->kobj, &tpm_dev_group);
/* The sysfs routines rely on an implicit tpm_try_get_ops, device_del
* is called before ops is null'd and the sysfs core synchronizes this
* removal so that no callbacks are running or can run again
*/
WARN_ON(chip->groups_cnt != 0);
chip->groups[chip->groups_cnt++] = &tpm_dev_group;
}


@ -19,6 +19,10 @@
* License.
*
*/
#ifndef __TPM_H__
#define __TPM_H__
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/fs.h>
@ -34,7 +38,7 @@
enum tpm_const {
TPM_MINOR = 224, /* officially assigned */
TPM_BUFSIZE = 4096,
TPM_NUM_DEVICES = 256,
TPM_NUM_DEVICES = 65536,
TPM_RETRY = 50, /* 5 seconds */
};
@ -128,33 +132,6 @@ enum tpm2_startup_types {
TPM2_SU_STATE = 0x0001,
};
struct tpm_chip;
struct tpm_vendor_specific {
void __iomem *iobase; /* ioremapped address */
unsigned long base; /* TPM base address */
int irq;
int region_size;
int have_region;
struct list_head list;
int locality;
unsigned long timeout_a, timeout_b, timeout_c, timeout_d; /* jiffies */
bool timeout_adjusted;
unsigned long duration[3]; /* jiffies */
bool duration_adjusted;
void *priv;
wait_queue_head_t read_queue;
wait_queue_head_t int_queue;
u16 manufacturer_id;
};
#define TPM_VPRIV(c) ((c)->vendor.priv)
#define TPM_VID_INTEL 0x8086
#define TPM_VID_WINBOND 0x1050
#define TPM_VID_STM 0x104A
@ -164,44 +141,48 @@ struct tpm_vendor_specific {
enum tpm_chip_flags {
TPM_CHIP_FLAG_REGISTERED = BIT(0),
TPM_CHIP_FLAG_TPM2 = BIT(1),
TPM_CHIP_FLAG_IRQ = BIT(2),
TPM_CHIP_FLAG_VIRTUAL = BIT(3),
};
struct tpm_chip {
struct device *pdev; /* Device stuff */
struct device dev;
struct cdev cdev;
/* A driver callback under ops cannot be run unless ops_sem is held
* (sometimes implicitly, eg for the sysfs code). ops becomes null
* when the driver is unregistered, see tpm_try_get_ops.
*/
struct rw_semaphore ops_sem;
const struct tpm_class_ops *ops;
unsigned int flags;
int dev_num; /* /dev/tpm# */
char devname[7];
unsigned long is_open; /* only one allowed */
int time_expired;
struct mutex tpm_mutex; /* tpm is processing */
struct tpm_vendor_specific vendor;
unsigned long timeout_a; /* jiffies */
unsigned long timeout_b; /* jiffies */
unsigned long timeout_c; /* jiffies */
unsigned long timeout_d; /* jiffies */
bool timeout_adjusted;
unsigned long duration[3]; /* jiffies */
bool duration_adjusted;
struct dentry **bios_dir;
#ifdef CONFIG_ACPI
const struct attribute_group *groups[2];
const struct attribute_group *groups[3];
unsigned int groups_cnt;
#ifdef CONFIG_ACPI
acpi_handle acpi_dev_handle;
char ppi_version[TPM_PPI_VERSION_LEN + 1];
#endif /* CONFIG_ACPI */
struct list_head list;
};
#define to_tpm_chip(d) container_of(d, struct tpm_chip, dev)
static inline void tpm_chip_put(struct tpm_chip *chip)
{
module_put(chip->pdev->driver->owner);
}
static inline int tpm_read_index(int base, int index)
{
outb(index, base);
@ -493,14 +474,17 @@ static inline void tpm_buf_append_u32(struct tpm_buf *buf, const u32 value)
extern struct class *tpm_class;
extern dev_t tpm_devt;
extern const struct file_operations tpm_fops;
extern struct idr dev_nums_idr;
ssize_t tpm_getcap(struct device *, __be32, cap_t *, const char *);
ssize_t tpm_getcap(struct tpm_chip *chip, __be32 subcap_id, cap_t *cap,
const char *desc);
ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
size_t bufsiz);
ssize_t tpm_transmit_cmd(struct tpm_chip *chip, void *cmd, int len,
const char *desc);
extern int tpm_get_timeouts(struct tpm_chip *);
extern void tpm_gen_interrupt(struct tpm_chip *);
int tpm1_auto_startup(struct tpm_chip *chip);
extern int tpm_do_selftest(struct tpm_chip *);
extern unsigned long tpm_calc_ordinal_duration(struct tpm_chip *, u32);
extern int tpm_pm_suspend(struct device *);
@ -509,13 +493,17 @@ extern int wait_for_tpm_stat(struct tpm_chip *, u8, unsigned long,
wait_queue_head_t *, bool);
struct tpm_chip *tpm_chip_find_get(int chip_num);
extern struct tpm_chip *tpmm_chip_alloc(struct device *dev,
__must_check int tpm_try_get_ops(struct tpm_chip *chip);
void tpm_put_ops(struct tpm_chip *chip);
extern struct tpm_chip *tpm_chip_alloc(struct device *dev,
const struct tpm_class_ops *ops);
extern struct tpm_chip *tpmm_chip_alloc(struct device *pdev,
const struct tpm_class_ops *ops);
extern int tpm_chip_register(struct tpm_chip *chip);
extern void tpm_chip_unregister(struct tpm_chip *chip);
int tpm_sysfs_add_device(struct tpm_chip *chip);
void tpm_sysfs_del_device(struct tpm_chip *chip);
void tpm_sysfs_add_device(struct tpm_chip *chip);
int tpm_pcr_read_dev(struct tpm_chip *chip, int pcr_idx, u8 *res_buf);
@ -539,9 +527,9 @@ int tpm2_unseal_trusted(struct tpm_chip *chip,
ssize_t tpm2_get_tpm_pt(struct tpm_chip *chip, u32 property_id,
u32 *value, const char *desc);
extern int tpm2_startup(struct tpm_chip *chip, u16 startup_type);
int tpm2_auto_startup(struct tpm_chip *chip);
extern void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type);
extern unsigned long tpm2_calc_ordinal_duration(struct tpm_chip *, u32);
extern int tpm2_do_selftest(struct tpm_chip *chip);
extern int tpm2_gen_interrupt(struct tpm_chip *chip);
extern int tpm2_probe(struct tpm_chip *chip);
#endif
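The tpm_buf helpers whose signatures appear above are used elsewhere in this series roughly as follows; this sketch mirrors tpm2_flush_context() further down, and the example_flush_handle() name is a placeholder.

static int example_flush_handle(struct tpm_chip *chip, u32 handle)
{
        struct tpm_buf buf;
        int rc;

        rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_FLUSH_CONTEXT);
        if (rc)
                return rc;

        tpm_buf_append_u32(&buf, handle);
        rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, "flushing context");
        tpm_buf_destroy(&buf);
        return rc;
}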


@ -597,7 +597,7 @@ static void tpm2_flush_context(struct tpm_chip *chip, u32 handle)
rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_FLUSH_CONTEXT);
if (rc) {
dev_warn(chip->pdev, "0x%08x was not flushed, out of memory\n",
dev_warn(&chip->dev, "0x%08x was not flushed, out of memory\n",
handle);
return;
}
@ -606,7 +606,7 @@ static void tpm2_flush_context(struct tpm_chip *chip, u32 handle)
rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, "flushing context");
if (rc)
dev_warn(chip->pdev, "0x%08x was not flushed, rc=%d\n", handle,
dev_warn(&chip->dev, "0x%08x was not flushed, rc=%d\n", handle,
rc);
tpm_buf_destroy(&buf);
@ -703,7 +703,7 @@ ssize_t tpm2_get_tpm_pt(struct tpm_chip *chip, u32 property_id, u32 *value,
rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), desc);
if (!rc)
*value = cmd.params.get_tpm_pt_out.value;
*value = be32_to_cpu(cmd.params.get_tpm_pt_out.value);
return rc;
}
@ -728,7 +728,7 @@ static const struct tpm_input_header tpm2_startup_header = {
* returned it denotes a POSIX error code. If a positive number is returned
* it denotes a TPM error.
*/
int tpm2_startup(struct tpm_chip *chip, u16 startup_type)
static int tpm2_startup(struct tpm_chip *chip, u16 startup_type)
{
struct tpm2_cmd cmd;
@ -738,7 +738,6 @@ int tpm2_startup(struct tpm_chip *chip, u16 startup_type)
return tpm_transmit_cmd(chip, &cmd, sizeof(cmd),
"attempting to start the TPM");
}
EXPORT_SYMBOL_GPL(tpm2_startup);
#define TPM2_SHUTDOWN_IN_SIZE \
(sizeof(struct tpm_input_header) + \
@ -770,10 +769,9 @@ void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type)
* except print the error code on a system failure.
*/
if (rc < 0)
dev_warn(chip->pdev, "transmit returned %d while stopping the TPM",
dev_warn(&chip->dev, "transmit returned %d while stopping the TPM",
rc);
}
EXPORT_SYMBOL_GPL(tpm2_shutdown);
/*
* tpm2_calc_ordinal_duration() - maximum duration for a command
@ -793,7 +791,7 @@ unsigned long tpm2_calc_ordinal_duration(struct tpm_chip *chip, u32 ordinal)
index = tpm2_ordinal_duration[ordinal - TPM2_CC_FIRST];
if (index != TPM_UNDEFINED)
duration = chip->vendor.duration[index];
duration = chip->duration[index];
if (duration <= 0)
duration = 2 * 60 * HZ;
@ -837,7 +835,7 @@ static int tpm2_start_selftest(struct tpm_chip *chip, bool full)
* immediately. This is a workaround for that.
*/
if (rc == TPM2_RC_TESTING) {
dev_warn(chip->pdev, "Got RC_TESTING, ignoring\n");
dev_warn(&chip->dev, "Got RC_TESTING, ignoring\n");
rc = 0;
}
@ -855,7 +853,7 @@ static int tpm2_start_selftest(struct tpm_chip *chip, bool full)
* returned it denotes a POSIX error code. If a positive number is returned
* it denotes a TPM error.
*/
int tpm2_do_selftest(struct tpm_chip *chip)
static int tpm2_do_selftest(struct tpm_chip *chip)
{
int rc;
unsigned int loops;
@ -895,7 +893,6 @@ int tpm2_do_selftest(struct tpm_chip *chip)
return rc;
}
EXPORT_SYMBOL_GPL(tpm2_do_selftest);
/**
* tpm2_gen_interrupt() - generate an interrupt
@ -943,3 +940,43 @@ int tpm2_probe(struct tpm_chip *chip)
return 0;
}
EXPORT_SYMBOL_GPL(tpm2_probe);
/**
* tpm2_auto_startup - Perform the standard automatic TPM initialization
* sequence
* @chip: TPM chip to use
*
* Returns 0 on success, < 0 in case of fatal error.
*/
int tpm2_auto_startup(struct tpm_chip *chip)
{
int rc;
rc = tpm_get_timeouts(chip);
if (rc)
goto out;
rc = tpm2_do_selftest(chip);
if (rc != TPM2_RC_INITIALIZE) {
dev_err(&chip->dev, "TPM self test failed\n");
goto out;
}
if (rc == TPM2_RC_INITIALIZE) {
rc = tpm2_startup(chip, TPM2_SU_CLEAR);
if (rc)
goto out;
rc = tpm2_do_selftest(chip);
if (rc) {
dev_err(&chip->dev, "TPM self test failed\n");
goto out;
}
}
return rc;
out:
if (rc > 0)
rc = -ENODEV;
return rc;
}


@ -37,6 +37,7 @@ enum tpm_atmel_read_status {
static int tpm_atml_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct tpm_atmel_priv *priv = dev_get_drvdata(&chip->dev);
u8 status, *hdr = buf;
u32 size;
int i;
@ -47,12 +48,12 @@ static int tpm_atml_recv(struct tpm_chip *chip, u8 *buf, size_t count)
return -EIO;
for (i = 0; i < 6; i++) {
status = ioread8(chip->vendor.iobase + 1);
status = ioread8(priv->iobase + 1);
if ((status & ATML_STATUS_DATA_AVAIL) == 0) {
dev_err(chip->pdev, "error reading header\n");
dev_err(&chip->dev, "error reading header\n");
return -EIO;
}
*buf++ = ioread8(chip->vendor.iobase);
*buf++ = ioread8(priv->iobase);
}
/* size of the data received */
@ -60,12 +61,12 @@ static int tpm_atml_recv(struct tpm_chip *chip, u8 *buf, size_t count)
size = be32_to_cpu(*native_size);
if (count < size) {
dev_err(chip->pdev,
dev_err(&chip->dev,
"Recv size(%d) less than available space\n", size);
for (; i < size; i++) { /* clear the waiting data anyway */
status = ioread8(chip->vendor.iobase + 1);
status = ioread8(priv->iobase + 1);
if ((status & ATML_STATUS_DATA_AVAIL) == 0) {
dev_err(chip->pdev, "error reading data\n");
dev_err(&chip->dev, "error reading data\n");
return -EIO;
}
}
@ -74,19 +75,19 @@ static int tpm_atml_recv(struct tpm_chip *chip, u8 *buf, size_t count)
/* read all the data available */
for (; i < size; i++) {
status = ioread8(chip->vendor.iobase + 1);
status = ioread8(priv->iobase + 1);
if ((status & ATML_STATUS_DATA_AVAIL) == 0) {
dev_err(chip->pdev, "error reading data\n");
dev_err(&chip->dev, "error reading data\n");
return -EIO;
}
*buf++ = ioread8(chip->vendor.iobase);
*buf++ = ioread8(priv->iobase);
}
/* make sure data available is gone */
status = ioread8(chip->vendor.iobase + 1);
status = ioread8(priv->iobase + 1);
if (status & ATML_STATUS_DATA_AVAIL) {
dev_err(chip->pdev, "data available is stuck\n");
dev_err(&chip->dev, "data available is stuck\n");
return -EIO;
}
@ -95,12 +96,13 @@ static int tpm_atml_recv(struct tpm_chip *chip, u8 *buf, size_t count)
static int tpm_atml_send(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct tpm_atmel_priv *priv = dev_get_drvdata(&chip->dev);
int i;
dev_dbg(chip->pdev, "tpm_atml_send:\n");
dev_dbg(&chip->dev, "tpm_atml_send:\n");
for (i = 0; i < count; i++) {
dev_dbg(chip->pdev, "%d 0x%x(%d)\n", i, buf[i], buf[i]);
iowrite8(buf[i], chip->vendor.iobase);
dev_dbg(&chip->dev, "%d 0x%x(%d)\n", i, buf[i], buf[i]);
iowrite8(buf[i], priv->iobase);
}
return count;
@ -108,12 +110,16 @@ static int tpm_atml_send(struct tpm_chip *chip, u8 *buf, size_t count)
static void tpm_atml_cancel(struct tpm_chip *chip)
{
iowrite8(ATML_STATUS_ABORT, chip->vendor.iobase + 1);
struct tpm_atmel_priv *priv = dev_get_drvdata(&chip->dev);
iowrite8(ATML_STATUS_ABORT, priv->iobase + 1);
}
static u8 tpm_atml_status(struct tpm_chip *chip)
{
return ioread8(chip->vendor.iobase + 1);
struct tpm_atmel_priv *priv = dev_get_drvdata(&chip->dev);
return ioread8(priv->iobase + 1);
}
static bool tpm_atml_req_canceled(struct tpm_chip *chip, u8 status)
@ -136,13 +142,13 @@ static struct platform_device *pdev;
static void atml_plat_remove(void)
{
struct tpm_chip *chip = dev_get_drvdata(&pdev->dev);
struct tpm_atmel_priv *priv = dev_get_drvdata(&chip->dev);
if (chip) {
tpm_chip_unregister(chip);
if (chip->vendor.have_region)
atmel_release_region(chip->vendor.base,
chip->vendor.region_size);
atmel_put_base_addr(chip->vendor.iobase);
if (priv->have_region)
atmel_release_region(priv->base, priv->region_size);
atmel_put_base_addr(priv->iobase);
platform_device_unregister(pdev);
}
}
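The conversion pattern in this driver (and in the drivers that follow), reduced to its core with illustrative names: per-driver state that used to sit in chip->vendor now hangs off the chip device via dev_set_drvdata()/dev_get_drvdata().

struct example_priv {
        void __iomem *regs;
};

/* probe side (sketch): priv = devm_kzalloc(parent, sizeof(*priv), GFP_KERNEL);
 * chip = tpmm_chip_alloc(parent, &example_tpm_ops);
 * dev_set_drvdata(&chip->dev, priv); */

static u8 example_status(struct tpm_chip *chip)
{
        struct example_priv *priv = dev_get_drvdata(&chip->dev);

        return ioread8(priv->regs + 1);
}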
@ -163,6 +169,7 @@ static int __init init_atmel(void)
int have_region, region_size;
unsigned long base;
struct tpm_chip *chip;
struct tpm_atmel_priv *priv;
rc = platform_driver_register(&atml_drv);
if (rc)
@ -183,16 +190,24 @@ static int __init init_atmel(void)
goto err_rel_reg;
}
priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
if (!priv) {
rc = -ENOMEM;
goto err_unreg_dev;
}
priv->iobase = iobase;
priv->base = base;
priv->have_region = have_region;
priv->region_size = region_size;
chip = tpmm_chip_alloc(&pdev->dev, &tpm_atmel);
if (IS_ERR(chip)) {
rc = PTR_ERR(chip);
goto err_unreg_dev;
}
chip->vendor.iobase = iobase;
chip->vendor.base = base;
chip->vendor.have_region = have_region;
chip->vendor.region_size = region_size;
dev_set_drvdata(&chip->dev, priv);
rc = tpm_chip_register(chip);
if (rc)


@ -22,12 +22,19 @@
*
*/
struct tpm_atmel_priv {
int region_size;
int have_region;
unsigned long base;
void __iomem *iobase;
};
#ifdef CONFIG_PPC64
#include <asm/prom.h>
#define atmel_getb(chip, offset) readb(chip->vendor->iobase + offset);
#define atmel_putb(val, chip, offset) writeb(val, chip->vendor->iobase + offset)
#define atmel_getb(priv, offset) readb(priv->iobase + offset)
#define atmel_putb(val, priv, offset) writeb(val, priv->iobase + offset)
#define atmel_request_region request_mem_region
#define atmel_release_region release_mem_region
@ -78,8 +85,9 @@ static void __iomem * atmel_get_base_addr(unsigned long *base, int *region_size)
return ioremap(*base, *region_size);
}
#else
#define atmel_getb(chip, offset) inb(chip->vendor->base + offset)
#define atmel_putb(val, chip, offset) outb(val, chip->vendor->base + offset)
#define atmel_getb(chip, offset) inb(atmel_get_priv(chip)->base + offset)
#define atmel_putb(val, chip, offset) \
outb(val, atmel_get_priv(chip)->base + offset)
#define atmel_request_region request_region
#define atmel_release_region release_region
/* Atmel definitions */


@ -77,7 +77,6 @@ enum crb_flags {
struct crb_priv {
unsigned int flags;
struct resource res;
void __iomem *iobase;
struct crb_control_area __iomem *cca;
u8 __iomem *cmd;
@ -88,7 +87,7 @@ static SIMPLE_DEV_PM_OPS(crb_pm, tpm_pm_suspend, tpm_pm_resume);
static u8 crb_status(struct tpm_chip *chip)
{
struct crb_priv *priv = chip->vendor.priv;
struct crb_priv *priv = dev_get_drvdata(&chip->dev);
u8 sts = 0;
if ((ioread32(&priv->cca->start) & CRB_START_INVOKE) !=
@ -100,7 +99,7 @@ static u8 crb_status(struct tpm_chip *chip)
static int crb_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct crb_priv *priv = chip->vendor.priv;
struct crb_priv *priv = dev_get_drvdata(&chip->dev);
unsigned int expected;
/* sanity check */
@ -140,7 +139,7 @@ static int crb_do_acpi_start(struct tpm_chip *chip)
static int crb_send(struct tpm_chip *chip, u8 *buf, size_t len)
{
struct crb_priv *priv = chip->vendor.priv;
struct crb_priv *priv = dev_get_drvdata(&chip->dev);
int rc = 0;
if (len > ioread32(&priv->cca->cmd_size)) {
@ -167,7 +166,7 @@ static int crb_send(struct tpm_chip *chip, u8 *buf, size_t len)
static void crb_cancel(struct tpm_chip *chip)
{
struct crb_priv *priv = chip->vendor.priv;
struct crb_priv *priv = dev_get_drvdata(&chip->dev);
iowrite32(cpu_to_le32(CRB_CANCEL_INVOKE), &priv->cca->cancel);
@ -182,13 +181,14 @@ static void crb_cancel(struct tpm_chip *chip)
static bool crb_req_canceled(struct tpm_chip *chip, u8 status)
{
struct crb_priv *priv = chip->vendor.priv;
struct crb_priv *priv = dev_get_drvdata(&chip->dev);
u32 cancel = ioread32(&priv->cca->cancel);
return (cancel & CRB_CANCEL_INVOKE) == CRB_CANCEL_INVOKE;
}
static const struct tpm_class_ops tpm_crb = {
.flags = TPM_OPS_AUTO_STARTUP,
.status = crb_status,
.recv = crb_recv,
.send = crb_send,
@ -201,42 +201,33 @@ static const struct tpm_class_ops tpm_crb = {
static int crb_init(struct acpi_device *device, struct crb_priv *priv)
{
struct tpm_chip *chip;
int rc;
chip = tpmm_chip_alloc(&device->dev, &tpm_crb);
if (IS_ERR(chip))
return PTR_ERR(chip);
chip->vendor.priv = priv;
dev_set_drvdata(&chip->dev, priv);
chip->acpi_dev_handle = device->handle;
chip->flags = TPM_CHIP_FLAG_TPM2;
rc = tpm_get_timeouts(chip);
if (rc)
return rc;
rc = tpm2_do_selftest(chip);
if (rc)
return rc;
return tpm_chip_register(chip);
}
static int crb_check_resource(struct acpi_resource *ares, void *data)
{
struct crb_priv *priv = data;
struct resource *io_res = data;
struct resource res;
if (acpi_dev_resource_memory(ares, &res)) {
priv->res = res;
priv->res.name = NULL;
*io_res = res;
io_res->name = NULL;
}
return 1;
}
static void __iomem *crb_map_res(struct device *dev, struct crb_priv *priv,
u64 start, u32 size)
struct resource *io_res, u64 start, u32 size)
{
struct resource new_res = {
.start = start,
@ -246,53 +237,74 @@ static void __iomem *crb_map_res(struct device *dev, struct crb_priv *priv,
/* Detect a 64 bit address on a 32 bit system */
if (start != new_res.start)
return ERR_PTR(-EINVAL);
return (void __iomem *) ERR_PTR(-EINVAL);
if (!resource_contains(&priv->res, &new_res))
if (!resource_contains(io_res, &new_res))
return devm_ioremap_resource(dev, &new_res);
return priv->iobase + (new_res.start - priv->res.start);
return priv->iobase + (new_res.start - io_res->start);
}
static int crb_map_io(struct acpi_device *device, struct crb_priv *priv,
struct acpi_table_tpm2 *buf)
{
struct list_head resources;
struct resource io_res;
struct device *dev = &device->dev;
u64 pa;
u64 cmd_pa;
u32 cmd_size;
u64 rsp_pa;
u32 rsp_size;
int ret;
INIT_LIST_HEAD(&resources);
ret = acpi_dev_get_resources(device, &resources, crb_check_resource,
priv);
&io_res);
if (ret < 0)
return ret;
acpi_dev_free_resource_list(&resources);
if (resource_type(&priv->res) != IORESOURCE_MEM) {
if (resource_type(&io_res) != IORESOURCE_MEM) {
dev_err(dev,
FW_BUG "TPM2 ACPI table does not define a memory resource\n");
return -EINVAL;
}
priv->iobase = devm_ioremap_resource(dev, &priv->res);
priv->iobase = devm_ioremap_resource(dev, &io_res);
if (IS_ERR(priv->iobase))
return PTR_ERR(priv->iobase);
priv->cca = crb_map_res(dev, priv, buf->control_address, 0x1000);
priv->cca = crb_map_res(dev, priv, &io_res, buf->control_address,
sizeof(struct crb_control_area));
if (IS_ERR(priv->cca))
return PTR_ERR(priv->cca);
pa = ((u64) ioread32(&priv->cca->cmd_pa_high) << 32) |
(u64) ioread32(&priv->cca->cmd_pa_low);
priv->cmd = crb_map_res(dev, priv, pa, ioread32(&priv->cca->cmd_size));
cmd_pa = ((u64) ioread32(&priv->cca->cmd_pa_high) << 32) |
(u64) ioread32(&priv->cca->cmd_pa_low);
cmd_size = ioread32(&priv->cca->cmd_size);
priv->cmd = crb_map_res(dev, priv, &io_res, cmd_pa, cmd_size);
if (IS_ERR(priv->cmd))
return PTR_ERR(priv->cmd);
memcpy_fromio(&pa, &priv->cca->rsp_pa, 8);
pa = le64_to_cpu(pa);
priv->rsp = crb_map_res(dev, priv, pa, ioread32(&priv->cca->rsp_size));
return PTR_ERR_OR_ZERO(priv->rsp);
memcpy_fromio(&rsp_pa, &priv->cca->rsp_pa, 8);
rsp_pa = le64_to_cpu(rsp_pa);
rsp_size = ioread32(&priv->cca->rsp_size);
if (cmd_pa != rsp_pa) {
priv->rsp = crb_map_res(dev, priv, &io_res, rsp_pa, rsp_size);
return PTR_ERR_OR_ZERO(priv->rsp);
}
/* According to the PTP specification, overlapping command and response
* buffer sizes must be identical.
*/
if (cmd_size != rsp_size) {
dev_err(dev, FW_BUG "overlapping command and response buffer sizes are not identical");
return -EINVAL;
}
priv->rsp = priv->cmd;
return 0;
}
static int crb_acpi_add(struct acpi_device *device)
@ -344,9 +356,6 @@ static int crb_acpi_remove(struct acpi_device *device)
struct device *dev = &device->dev;
struct tpm_chip *chip = dev_get_drvdata(dev);
if (chip->flags & TPM_CHIP_FLAG_TPM2)
tpm2_shutdown(chip, TPM2_SU_CLEAR);
tpm_chip_unregister(chip);
return 0;


@ -403,7 +403,7 @@ static int is_bad(void *p)
return 0;
}
struct dentry **tpm_bios_log_setup(char *name)
struct dentry **tpm_bios_log_setup(const char *name)
{
struct dentry **ret = NULL, *tpm_dir, *bin_file, *ascii_file;


@ -77,10 +77,10 @@ int read_log(struct tpm_bios_log *log);
#if defined(CONFIG_TCG_IBMVTPM) || defined(CONFIG_TCG_IBMVTPM_MODULE) || \
defined(CONFIG_ACPI)
extern struct dentry **tpm_bios_log_setup(char *);
extern struct dentry **tpm_bios_log_setup(const char *);
extern void tpm_bios_log_teardown(struct dentry **);
#else
static inline struct dentry **tpm_bios_log_setup(char *name)
static inline struct dentry **tpm_bios_log_setup(const char *name)
{
return NULL;
}


@ -51,8 +51,8 @@ struct priv_data {
static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len)
{
struct priv_data *priv = chip->vendor.priv;
struct i2c_client *client = to_i2c_client(chip->pdev);
struct priv_data *priv = dev_get_drvdata(&chip->dev);
struct i2c_client *client = to_i2c_client(chip->dev.parent);
s32 status;
priv->len = 0;
@ -62,7 +62,7 @@ static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len)
status = i2c_master_send(client, buf, len);
dev_dbg(chip->pdev,
dev_dbg(&chip->dev,
"%s(buf=%*ph len=%0zx) -> sts=%d\n", __func__,
(int)min_t(size_t, 64, len), buf, len, status);
return status;
@ -70,8 +70,8 @@ static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len)
static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct priv_data *priv = chip->vendor.priv;
struct i2c_client *client = to_i2c_client(chip->pdev);
struct priv_data *priv = dev_get_drvdata(&chip->dev);
struct i2c_client *client = to_i2c_client(chip->dev.parent);
struct tpm_output_header *hdr =
(struct tpm_output_header *)priv->buffer;
u32 expected_len;
@ -88,7 +88,7 @@ static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count)
return -ENOMEM;
if (priv->len >= expected_len) {
dev_dbg(chip->pdev,
dev_dbg(&chip->dev,
"%s early(buf=%*ph count=%0zx) -> ret=%d\n", __func__,
(int)min_t(size_t, 64, expected_len), buf, count,
expected_len);
@ -97,7 +97,7 @@ static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count)
}
rc = i2c_master_recv(client, buf, expected_len);
dev_dbg(chip->pdev,
dev_dbg(&chip->dev,
"%s reread(buf=%*ph count=%0zx) -> ret=%d\n", __func__,
(int)min_t(size_t, 64, expected_len), buf, count,
expected_len);
@ -106,13 +106,13 @@ static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count)
static void i2c_atmel_cancel(struct tpm_chip *chip)
{
dev_err(chip->pdev, "TPM operation cancellation was requested, but is not supported");
dev_err(&chip->dev, "TPM operation cancellation was requested, but is not supported");
}
static u8 i2c_atmel_read_status(struct tpm_chip *chip)
{
struct priv_data *priv = chip->vendor.priv;
struct i2c_client *client = to_i2c_client(chip->pdev);
struct priv_data *priv = dev_get_drvdata(&chip->dev);
struct i2c_client *client = to_i2c_client(chip->dev.parent);
int rc;
/* The TPM fails the I2C read until it is ready, so we do the entire
@ -125,7 +125,7 @@ static u8 i2c_atmel_read_status(struct tpm_chip *chip)
/* Once the TPM has completed the command the command remains readable
* until another command is issued. */
rc = i2c_master_recv(client, priv->buffer, sizeof(priv->buffer));
dev_dbg(chip->pdev,
dev_dbg(&chip->dev,
"%s: sts=%d", __func__, rc);
if (rc <= 0)
return 0;
@ -141,6 +141,7 @@ static bool i2c_atmel_req_canceled(struct tpm_chip *chip, u8 status)
}
static const struct tpm_class_ops i2c_atmel = {
.flags = TPM_OPS_AUTO_STARTUP,
.status = i2c_atmel_read_status,
.recv = i2c_atmel_recv,
.send = i2c_atmel_send,
@ -155,6 +156,7 @@ static int i2c_atmel_probe(struct i2c_client *client,
{
struct tpm_chip *chip;
struct device *dev = &client->dev;
struct priv_data *priv;
if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C))
return -ENODEV;
@ -163,26 +165,21 @@ static int i2c_atmel_probe(struct i2c_client *client,
if (IS_ERR(chip))
return PTR_ERR(chip);
chip->vendor.priv = devm_kzalloc(dev, sizeof(struct priv_data),
GFP_KERNEL);
if (!chip->vendor.priv)
priv = devm_kzalloc(dev, sizeof(struct priv_data), GFP_KERNEL);
if (!priv)
return -ENOMEM;
/* Default timeouts */
chip->vendor.timeout_a = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
chip->vendor.timeout_b = msecs_to_jiffies(TPM_I2C_LONG_TIMEOUT);
chip->vendor.timeout_c = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
chip->vendor.timeout_d = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
chip->vendor.irq = 0;
chip->timeout_a = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
chip->timeout_b = msecs_to_jiffies(TPM_I2C_LONG_TIMEOUT);
chip->timeout_c = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
chip->timeout_d = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
dev_set_drvdata(&chip->dev, priv);
/* There is no known way to probe for this device, and all version
* information seems to be read via TPM commands. Thus we rely on the
* TPM startup process in the common code to detect the device. */
if (tpm_get_timeouts(chip))
return -ENODEV;
if (tpm_do_selftest(chip))
return -ENODEV;
return tpm_chip_register(chip);
}


@ -66,6 +66,7 @@ enum i2c_chip_type {
/* Structure to store I2C TPM specific stuff */
struct tpm_inf_dev {
struct i2c_client *client;
int locality;
u8 buf[TPM_BUFSIZE + sizeof(u8)]; /* max. buffer size + addr */
struct tpm_chip *chip;
enum i2c_chip_type chip_type;
@ -288,7 +289,7 @@ static int check_locality(struct tpm_chip *chip, int loc)
if ((buf & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) ==
(TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) {
chip->vendor.locality = loc;
tpm_dev.locality = loc;
return loc;
}
@ -320,7 +321,7 @@ static int request_locality(struct tpm_chip *chip, int loc)
iic_tpm_write(TPM_ACCESS(loc), &buf, 1);
/* wait for burstcount */
stop = jiffies + chip->vendor.timeout_a;
stop = jiffies + chip->timeout_a;
do {
if (check_locality(chip, loc) >= 0)
return loc;
@ -337,7 +338,7 @@ static u8 tpm_tis_i2c_status(struct tpm_chip *chip)
u8 i = 0;
do {
if (iic_tpm_read(TPM_STS(chip->vendor.locality), &buf, 1) < 0)
if (iic_tpm_read(TPM_STS(tpm_dev.locality), &buf, 1) < 0)
return 0;
i++;
@ -351,7 +352,7 @@ static void tpm_tis_i2c_ready(struct tpm_chip *chip)
{
/* this causes the current command to be aborted */
u8 buf = TPM_STS_COMMAND_READY;
iic_tpm_write_long(TPM_STS(chip->vendor.locality), &buf, 1);
iic_tpm_write_long(TPM_STS(tpm_dev.locality), &buf, 1);
}
static ssize_t get_burstcount(struct tpm_chip *chip)
@ -362,10 +363,10 @@ static ssize_t get_burstcount(struct tpm_chip *chip)
/* wait for burstcount */
/* which timeout value, spec has 2 answers (c & d) */
stop = jiffies + chip->vendor.timeout_d;
stop = jiffies + chip->timeout_d;
do {
/* Note: STS is little endian */
if (iic_tpm_read(TPM_STS(chip->vendor.locality)+1, buf, 3) < 0)
if (iic_tpm_read(TPM_STS(tpm_dev.locality)+1, buf, 3) < 0)
burstcnt = 0;
else
burstcnt = (buf[2] << 16) + (buf[1] << 8) + buf[0];
@ -419,7 +420,7 @@ static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count)
if (burstcnt > (count - size))
burstcnt = count - size;
rc = iic_tpm_read(TPM_DATA_FIFO(chip->vendor.locality),
rc = iic_tpm_read(TPM_DATA_FIFO(tpm_dev.locality),
&(buf[size]), burstcnt);
if (rc == 0)
size += burstcnt;
@ -446,7 +447,7 @@ static int tpm_tis_i2c_recv(struct tpm_chip *chip, u8 *buf, size_t count)
/* read first 10 bytes, including tag, paramsize, and result */
size = recv_data(chip, buf, TPM_HEADER_SIZE);
if (size < TPM_HEADER_SIZE) {
dev_err(chip->pdev, "Unable to read header\n");
dev_err(&chip->dev, "Unable to read header\n");
goto out;
}
@ -459,14 +460,14 @@ static int tpm_tis_i2c_recv(struct tpm_chip *chip, u8 *buf, size_t count)
size += recv_data(chip, &buf[TPM_HEADER_SIZE],
expected - TPM_HEADER_SIZE);
if (size < expected) {
dev_err(chip->pdev, "Unable to read remainder of result\n");
dev_err(&chip->dev, "Unable to read remainder of result\n");
size = -ETIME;
goto out;
}
wait_for_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c, &status);
wait_for_stat(chip, TPM_STS_VALID, chip->timeout_c, &status);
if (status & TPM_STS_DATA_AVAIL) { /* retry? */
dev_err(chip->pdev, "Error left over data\n");
dev_err(&chip->dev, "Error left over data\n");
size = -EIO;
goto out;
}
@ -477,7 +478,7 @@ static int tpm_tis_i2c_recv(struct tpm_chip *chip, u8 *buf, size_t count)
* so we sleep rather than keeping the bus busy
*/
usleep_range(SLEEP_DURATION_RESET_LOW, SLEEP_DURATION_RESET_HI);
release_locality(chip, chip->vendor.locality, 0);
release_locality(chip, tpm_dev.locality, 0);
return size;
}
@ -500,7 +501,7 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len)
tpm_tis_i2c_ready(chip);
if (wait_for_stat
(chip, TPM_STS_COMMAND_READY,
chip->vendor.timeout_b, &status) < 0) {
chip->timeout_b, &status) < 0) {
rc = -ETIME;
goto out_err;
}
@ -516,7 +517,7 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len)
if (burstcnt > (len - 1 - count))
burstcnt = len - 1 - count;
rc = iic_tpm_write(TPM_DATA_FIFO(chip->vendor.locality),
rc = iic_tpm_write(TPM_DATA_FIFO(tpm_dev.locality),
&(buf[count]), burstcnt);
if (rc == 0)
count += burstcnt;
@ -530,7 +531,7 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len)
}
wait_for_stat(chip, TPM_STS_VALID,
chip->vendor.timeout_c, &status);
chip->timeout_c, &status);
if ((status & TPM_STS_DATA_EXPECT) == 0) {
rc = -EIO;
@ -539,15 +540,15 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len)
}
/* write last byte */
iic_tpm_write(TPM_DATA_FIFO(chip->vendor.locality), &(buf[count]), 1);
wait_for_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c, &status);
iic_tpm_write(TPM_DATA_FIFO(tpm_dev.locality), &(buf[count]), 1);
wait_for_stat(chip, TPM_STS_VALID, chip->timeout_c, &status);
if ((status & TPM_STS_DATA_EXPECT) != 0) {
rc = -EIO;
goto out_err;
}
/* go and do it */
iic_tpm_write(TPM_STS(chip->vendor.locality), &sts, 1);
iic_tpm_write(TPM_STS(tpm_dev.locality), &sts, 1);
return len;
out_err:
@ -556,7 +557,7 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len)
* so we sleep rather than keeping the bus busy
*/
usleep_range(SLEEP_DURATION_RESET_LOW, SLEEP_DURATION_RESET_HI);
release_locality(chip, chip->vendor.locality, 0);
release_locality(chip, tpm_dev.locality, 0);
return rc;
}
@ -566,6 +567,7 @@ static bool tpm_tis_i2c_req_canceled(struct tpm_chip *chip, u8 status)
}
static const struct tpm_class_ops tpm_tis_i2c = {
.flags = TPM_OPS_AUTO_STARTUP,
.status = tpm_tis_i2c_status,
.recv = tpm_tis_i2c_recv,
.send = tpm_tis_i2c_send,
@ -585,14 +587,11 @@ static int tpm_tis_i2c_init(struct device *dev)
if (IS_ERR(chip))
return PTR_ERR(chip);
/* Disable interrupts */
chip->vendor.irq = 0;
/* Default timeouts */
chip->vendor.timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
chip->vendor.timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT);
chip->vendor.timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
chip->vendor.timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
chip->timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
chip->timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT);
chip->timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
chip->timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
if (request_locality(chip, 0) != 0) {
dev_err(dev, "could not request locality\n");
@ -619,15 +618,11 @@ static int tpm_tis_i2c_init(struct device *dev)
dev_info(dev, "1.2 TPM (device-id 0x%X)\n", vendor >> 16);
INIT_LIST_HEAD(&chip->vendor.list);
tpm_dev.chip = chip;
tpm_get_timeouts(chip);
tpm_do_selftest(chip);
return tpm_chip_register(chip);
out_release:
release_locality(chip, chip->vendor.locality, 1);
release_locality(chip, tpm_dev.locality, 1);
tpm_dev.client = NULL;
out_err:
return rc;
@ -699,7 +694,7 @@ static int tpm_tis_i2c_remove(struct i2c_client *client)
struct tpm_chip *chip = tpm_dev.chip;
tpm_chip_unregister(chip);
release_locality(chip, chip->vendor.locality, 1);
release_locality(chip, tpm_dev.locality, 1);
tpm_dev.client = NULL;
return 0;


@ -1,5 +1,5 @@
/******************************************************************************
* Nuvoton TPM I2C Device Driver Interface for WPCT301/NPCT501,
/******************************************************************************
* Nuvoton TPM I2C Device Driver Interface for WPCT301/NPCT501/NPCT6XX,
* based on the TCG TPM Interface Spec version 1.2.
* Specifications at www.trustedcomputinggroup.org
*
@ -31,6 +31,7 @@
#include <linux/interrupt.h>
#include <linux/wait.h>
#include <linux/i2c.h>
#include <linux/of_device.h>
#include "tpm.h"
/* I2C interface offsets */
@ -52,10 +53,13 @@
#define TPM_I2C_RETRY_DELAY_SHORT 2 /* msec */
#define TPM_I2C_RETRY_DELAY_LONG 10 /* msec */
#define I2C_DRIVER_NAME "tpm_i2c_nuvoton"
#define OF_IS_TPM2 ((void *)1)
#define I2C_IS_TPM2 1
struct priv_data {
int irq;
unsigned int intrs;
wait_queue_head_t read_queue;
};
static s32 i2c_nuvoton_read_buf(struct i2c_client *client, u8 offset, u8 size,
@ -96,13 +100,13 @@ static s32 i2c_nuvoton_write_buf(struct i2c_client *client, u8 offset, u8 size,
/* read TPM_STS register */
static u8 i2c_nuvoton_read_status(struct tpm_chip *chip)
{
struct i2c_client *client = to_i2c_client(chip->pdev);
struct i2c_client *client = to_i2c_client(chip->dev.parent);
s32 status;
u8 data;
status = i2c_nuvoton_read_buf(client, TPM_STS, 1, &data);
if (status <= 0) {
dev_err(chip->pdev, "%s() error return %d\n", __func__,
dev_err(&chip->dev, "%s() error return %d\n", __func__,
status);
data = TPM_STS_ERR_VAL;
}
@ -127,13 +131,13 @@ static s32 i2c_nuvoton_write_status(struct i2c_client *client, u8 data)
/* write commandReady to TPM_STS register */
static void i2c_nuvoton_ready(struct tpm_chip *chip)
{
struct i2c_client *client = to_i2c_client(chip->pdev);
struct i2c_client *client = to_i2c_client(chip->dev.parent);
s32 status;
/* this causes the current command to be aborted */
status = i2c_nuvoton_write_status(client, TPM_STS_COMMAND_READY);
if (status < 0)
dev_err(chip->pdev,
dev_err(&chip->dev,
"%s() fail to write TPM_STS.commandReady\n", __func__);
}
@ -142,7 +146,7 @@ static void i2c_nuvoton_ready(struct tpm_chip *chip)
static int i2c_nuvoton_get_burstcount(struct i2c_client *client,
struct tpm_chip *chip)
{
unsigned long stop = jiffies + chip->vendor.timeout_d;
unsigned long stop = jiffies + chip->timeout_d;
s32 status;
int burst_count = -1;
u8 data;
@ -163,7 +167,7 @@ static int i2c_nuvoton_get_burstcount(struct i2c_client *client,
}
/*
* WPCT301/NPCT501 SINT# supports only dataAvail
* WPCT301/NPCT501/NPCT6XX SINT# supports only dataAvail
* any call to this function which is not waiting for dataAvail will
* set queue to NULL to avoid waiting for interrupt
*/
@ -176,12 +180,12 @@ static bool i2c_nuvoton_check_status(struct tpm_chip *chip, u8 mask, u8 value)
static int i2c_nuvoton_wait_for_stat(struct tpm_chip *chip, u8 mask, u8 value,
u32 timeout, wait_queue_head_t *queue)
{
if (chip->vendor.irq && queue) {
if ((chip->flags & TPM_CHIP_FLAG_IRQ) && queue) {
s32 rc;
struct priv_data *priv = chip->vendor.priv;
struct priv_data *priv = dev_get_drvdata(&chip->dev);
unsigned int cur_intrs = priv->intrs;
enable_irq(chip->vendor.irq);
enable_irq(priv->irq);
rc = wait_event_interruptible_timeout(*queue,
cur_intrs != priv->intrs,
timeout);
@ -212,7 +216,7 @@ static int i2c_nuvoton_wait_for_stat(struct tpm_chip *chip, u8 mask, u8 value,
return 0;
} while (time_before(jiffies, stop));
}
dev_err(chip->pdev, "%s(%02x, %02x) -> timeout\n", __func__, mask,
dev_err(&chip->dev, "%s(%02x, %02x) -> timeout\n", __func__, mask,
value);
return -ETIMEDOUT;
}
@ -231,16 +235,17 @@ static int i2c_nuvoton_wait_for_data_avail(struct tpm_chip *chip, u32 timeout,
static int i2c_nuvoton_recv_data(struct i2c_client *client,
struct tpm_chip *chip, u8 *buf, size_t count)
{
struct priv_data *priv = dev_get_drvdata(&chip->dev);
s32 rc;
int burst_count, bytes2read, size = 0;
while (size < count &&
i2c_nuvoton_wait_for_data_avail(chip,
chip->vendor.timeout_c,
&chip->vendor.read_queue) == 0) {
chip->timeout_c,
&priv->read_queue) == 0) {
burst_count = i2c_nuvoton_get_burstcount(client, chip);
if (burst_count < 0) {
dev_err(chip->pdev,
dev_err(&chip->dev,
"%s() fail to read burstCount=%d\n", __func__,
burst_count);
return -EIO;
@ -249,12 +254,12 @@ static int i2c_nuvoton_recv_data(struct i2c_client *client,
rc = i2c_nuvoton_read_buf(client, TPM_DATA_FIFO_R,
bytes2read, &buf[size]);
if (rc < 0) {
dev_err(chip->pdev,
dev_err(&chip->dev,
"%s() fail on i2c_nuvoton_read_buf()=%d\n",
__func__, rc);
return -EIO;
}
dev_dbg(chip->pdev, "%s(%d):", __func__, bytes2read);
dev_dbg(&chip->dev, "%s(%d):", __func__, bytes2read);
size += bytes2read;
}
@ -264,7 +269,8 @@ static int i2c_nuvoton_recv_data(struct i2c_client *client,
/* Read TPM command results */
static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct device *dev = chip->pdev;
struct priv_data *priv = dev_get_drvdata(&chip->dev);
struct device *dev = chip->dev.parent;
struct i2c_client *client = to_i2c_client(dev);
s32 rc;
int expected, status, burst_count, retries, size = 0;
@ -285,7 +291,7 @@ static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count)
* tag, paramsize, and result
*/
status = i2c_nuvoton_wait_for_data_avail(
chip, chip->vendor.timeout_c, &chip->vendor.read_queue);
chip, chip->timeout_c, &priv->read_queue);
if (status != 0) {
dev_err(dev, "%s() timeout on dataAvail\n", __func__);
size = -ETIMEDOUT;
@ -325,7 +331,7 @@ static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count)
}
if (i2c_nuvoton_wait_for_stat(
chip, TPM_STS_VALID | TPM_STS_DATA_AVAIL,
TPM_STS_VALID, chip->vendor.timeout_c,
TPM_STS_VALID, chip->timeout_c,
NULL)) {
dev_err(dev, "%s() error left over data\n", __func__);
size = -ETIMEDOUT;
@ -334,7 +340,7 @@ static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count)
break;
}
i2c_nuvoton_ready(chip);
dev_dbg(chip->pdev, "%s() -> %d\n", __func__, size);
dev_dbg(&chip->dev, "%s() -> %d\n", __func__, size);
return size;
}
@ -347,7 +353,8 @@ static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count)
*/
static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
{
struct device *dev = chip->pdev;
struct priv_data *priv = dev_get_drvdata(&chip->dev);
struct device *dev = chip->dev.parent;
struct i2c_client *client = to_i2c_client(dev);
u32 ordinal;
size_t count = 0;
@ -357,7 +364,7 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
i2c_nuvoton_ready(chip);
if (i2c_nuvoton_wait_for_stat(chip, TPM_STS_COMMAND_READY,
TPM_STS_COMMAND_READY,
chip->vendor.timeout_b, NULL)) {
chip->timeout_b, NULL)) {
dev_err(dev, "%s() timeout on commandReady\n",
__func__);
rc = -EIO;
@ -389,7 +396,7 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
TPM_STS_EXPECT,
TPM_STS_VALID |
TPM_STS_EXPECT,
chip->vendor.timeout_c,
chip->timeout_c,
NULL);
if (rc < 0) {
dev_err(dev, "%s() timeout on Expect\n",
@ -414,7 +421,7 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
rc = i2c_nuvoton_wait_for_stat(chip,
TPM_STS_VALID | TPM_STS_EXPECT,
TPM_STS_VALID,
chip->vendor.timeout_c, NULL);
chip->timeout_c, NULL);
if (rc) {
dev_err(dev, "%s() timeout on Expect to clear\n",
__func__);
@ -439,7 +446,7 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
rc = i2c_nuvoton_wait_for_data_avail(chip,
tpm_calc_ordinal_duration(chip,
ordinal),
&chip->vendor.read_queue);
&priv->read_queue);
if (rc) {
dev_err(dev, "%s() timeout command duration\n", __func__);
i2c_nuvoton_ready(chip);
@ -456,6 +463,7 @@ static bool i2c_nuvoton_req_canceled(struct tpm_chip *chip, u8 status)
}
static const struct tpm_class_ops tpm_i2c = {
.flags = TPM_OPS_AUTO_STARTUP,
.status = i2c_nuvoton_read_status,
.recv = i2c_nuvoton_recv,
.send = i2c_nuvoton_send,
@ -473,11 +481,11 @@ static const struct tpm_class_ops tpm_i2c = {
static irqreturn_t i2c_nuvoton_int_handler(int dummy, void *dev_id)
{
struct tpm_chip *chip = dev_id;
struct priv_data *priv = chip->vendor.priv;
struct priv_data *priv = dev_get_drvdata(&chip->dev);
priv->intrs++;
wake_up(&chip->vendor.read_queue);
disable_irq_nosync(chip->vendor.irq);
wake_up(&priv->read_queue);
disable_irq_nosync(priv->irq);
return IRQ_HANDLED;
}
@ -521,6 +529,7 @@ static int i2c_nuvoton_probe(struct i2c_client *client,
int rc;
struct tpm_chip *chip;
struct device *dev = &client->dev;
struct priv_data *priv;
u32 vid = 0;
rc = get_vid(client, &vid);
@ -534,46 +543,56 @@ static int i2c_nuvoton_probe(struct i2c_client *client,
if (IS_ERR(chip))
return PTR_ERR(chip);
chip->vendor.priv = devm_kzalloc(dev, sizeof(struct priv_data),
GFP_KERNEL);
if (!chip->vendor.priv)
priv = devm_kzalloc(dev, sizeof(struct priv_data), GFP_KERNEL);
if (!priv)
return -ENOMEM;
init_waitqueue_head(&chip->vendor.read_queue);
init_waitqueue_head(&chip->vendor.int_queue);
if (dev->of_node) {
const struct of_device_id *of_id;
of_id = of_match_device(dev->driver->of_match_table, dev);
if (of_id && of_id->data == OF_IS_TPM2)
chip->flags |= TPM_CHIP_FLAG_TPM2;
} else
if (id->driver_data == I2C_IS_TPM2)
chip->flags |= TPM_CHIP_FLAG_TPM2;
init_waitqueue_head(&priv->read_queue);
/* Default timeouts */
chip->vendor.timeout_a = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
chip->vendor.timeout_b = msecs_to_jiffies(TPM_I2C_LONG_TIMEOUT);
chip->vendor.timeout_c = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
chip->vendor.timeout_d = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
chip->timeout_a = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
chip->timeout_b = msecs_to_jiffies(TPM_I2C_LONG_TIMEOUT);
chip->timeout_c = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
chip->timeout_d = msecs_to_jiffies(TPM_I2C_SHORT_TIMEOUT);
dev_set_drvdata(&chip->dev, priv);
/*
* I2C intfcaps (interrupt capabilities) in the chip are hard coded to:
* TPM_INTF_INT_LEVEL_LOW | TPM_INTF_DATA_AVAIL_INT
* The IRQ should be set in the i2c_board_info (which is done
* automatically in of_i2c_register_devices, for device tree users */
chip->vendor.irq = client->irq;
if (chip->vendor.irq) {
dev_dbg(dev, "%s() chip-vendor.irq\n", __func__);
rc = devm_request_irq(dev, chip->vendor.irq,
priv->irq = client->irq;
if (client->irq) {
dev_dbg(dev, "%s() priv->irq\n", __func__);
rc = devm_request_irq(dev, client->irq,
i2c_nuvoton_int_handler,
IRQF_TRIGGER_LOW,
chip->devname,
dev_name(&chip->dev),
chip);
if (rc) {
dev_err(dev, "%s() Unable to request irq: %d for use\n",
__func__, chip->vendor.irq);
chip->vendor.irq = 0;
__func__, priv->irq);
priv->irq = 0;
} else {
chip->flags |= TPM_CHIP_FLAG_IRQ;
/* Clear any pending interrupt */
i2c_nuvoton_ready(chip);
/* - wait for TPM_STS==0xA0 (stsValid, commandReady) */
rc = i2c_nuvoton_wait_for_stat(chip,
TPM_STS_COMMAND_READY,
TPM_STS_COMMAND_READY,
chip->vendor.timeout_b,
chip->timeout_b,
NULL);
if (rc == 0) {
/*
@ -601,25 +620,20 @@ static int i2c_nuvoton_probe(struct i2c_client *client,
}
}
if (tpm_get_timeouts(chip))
return -ENODEV;
if (tpm_do_selftest(chip))
return -ENODEV;
return tpm_chip_register(chip);
}
static int i2c_nuvoton_remove(struct i2c_client *client)
{
struct device *dev = &(client->dev);
struct tpm_chip *chip = dev_get_drvdata(dev);
struct tpm_chip *chip = i2c_get_clientdata(client);
tpm_chip_unregister(chip);
return 0;
}
static const struct i2c_device_id i2c_nuvoton_id[] = {
{I2C_DRIVER_NAME, 0},
{"tpm_i2c_nuvoton"},
{"tpm2_i2c_nuvoton", .driver_data = I2C_IS_TPM2},
{}
};
MODULE_DEVICE_TABLE(i2c, i2c_nuvoton_id);
@ -628,6 +642,7 @@ MODULE_DEVICE_TABLE(i2c, i2c_nuvoton_id);
static const struct of_device_id i2c_nuvoton_of_match[] = {
{.compatible = "nuvoton,npct501"},
{.compatible = "winbond,wpct301"},
{.compatible = "nuvoton,npct601", .data = OF_IS_TPM2},
{},
};
MODULE_DEVICE_TABLE(of, i2c_nuvoton_of_match);
@ -640,7 +655,7 @@ static struct i2c_driver i2c_nuvoton_driver = {
.probe = i2c_nuvoton_probe,
.remove = i2c_nuvoton_remove,
.driver = {
.name = I2C_DRIVER_NAME,
.name = "tpm_i2c_nuvoton",
.pm = &i2c_nuvoton_pm_ops,
.of_match_table = of_match_ptr(i2c_nuvoton_of_match),
},

View File

@ -53,21 +53,6 @@ static int ibmvtpm_send_crq(struct vio_dev *vdev, u64 w1, u64 w2)
return plpar_hcall_norets(H_SEND_CRQ, vdev->unit_address, w1, w2);
}
/**
* ibmvtpm_get_data - Retrieve ibm vtpm data
* @dev: device struct
*
* Return value:
* vtpm device struct
*/
static struct ibmvtpm_dev *ibmvtpm_get_data(const struct device *dev)
{
struct tpm_chip *chip = dev_get_drvdata(dev);
if (chip)
return (struct ibmvtpm_dev *)TPM_VPRIV(chip);
return NULL;
}
/**
* tpm_ibmvtpm_recv - Receive data after send
* @chip: tpm chip struct
@ -79,12 +64,10 @@ static struct ibmvtpm_dev *ibmvtpm_get_data(const struct device *dev)
*/
static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct ibmvtpm_dev *ibmvtpm;
struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
u16 len;
int sig;
ibmvtpm = (struct ibmvtpm_dev *)TPM_VPRIV(chip);
if (!ibmvtpm->rtce_buf) {
dev_err(ibmvtpm->dev, "ibmvtpm device is not ready\n");
return 0;
@ -122,13 +105,11 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
*/
static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct ibmvtpm_dev *ibmvtpm;
struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
struct ibmvtpm_crq crq;
__be64 *word = (__be64 *)&crq;
int rc, sig;
ibmvtpm = (struct ibmvtpm_dev *)TPM_VPRIV(chip);
if (!ibmvtpm->rtce_buf) {
dev_err(ibmvtpm->dev, "ibmvtpm device is not ready\n");
return 0;
@ -289,8 +270,8 @@ static int ibmvtpm_crq_send_init(struct ibmvtpm_dev *ibmvtpm)
*/
static int tpm_ibmvtpm_remove(struct vio_dev *vdev)
{
struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(&vdev->dev);
struct tpm_chip *chip = dev_get_drvdata(ibmvtpm->dev);
struct tpm_chip *chip = dev_get_drvdata(&vdev->dev);
struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
int rc = 0;
tpm_chip_unregister(chip);
@ -327,7 +308,8 @@ static int tpm_ibmvtpm_remove(struct vio_dev *vdev)
*/
static unsigned long tpm_ibmvtpm_get_desired_dma(struct vio_dev *vdev)
{
struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(&vdev->dev);
struct tpm_chip *chip = dev_get_drvdata(&vdev->dev);
struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
/* ibmvtpm initializes at probe time, so the data we are
* asking for may not be set yet. Estimate that 4K required
@ -348,7 +330,8 @@ static unsigned long tpm_ibmvtpm_get_desired_dma(struct vio_dev *vdev)
*/
static int tpm_ibmvtpm_suspend(struct device *dev)
{
struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(dev);
struct tpm_chip *chip = dev_get_drvdata(dev);
struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
struct ibmvtpm_crq crq;
u64 *buf = (u64 *) &crq;
int rc = 0;
@ -400,7 +383,8 @@ static int ibmvtpm_reset_crq(struct ibmvtpm_dev *ibmvtpm)
*/
static int tpm_ibmvtpm_resume(struct device *dev)
{
struct ibmvtpm_dev *ibmvtpm = ibmvtpm_get_data(dev);
struct tpm_chip *chip = dev_get_drvdata(dev);
struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
int rc = 0;
do {
@ -643,7 +627,7 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
crq_q->index = 0;
TPM_VPRIV(chip) = (void *)ibmvtpm;
dev_set_drvdata(&chip->dev, ibmvtpm);
spin_lock_init(&ibmvtpm->rtce_lock);

View File

@ -195,9 +195,9 @@ static int wait(struct tpm_chip *chip, int wait_for_bit)
}
if (i == TPM_MAX_TRIES) { /* timeout occurs */
if (wait_for_bit == STAT_XFE)
dev_err(chip->pdev, "Timeout in wait(STAT_XFE)\n");
dev_err(&chip->dev, "Timeout in wait(STAT_XFE)\n");
if (wait_for_bit == STAT_RDA)
dev_err(chip->pdev, "Timeout in wait(STAT_RDA)\n");
dev_err(&chip->dev, "Timeout in wait(STAT_RDA)\n");
return -EIO;
}
return 0;
@ -220,7 +220,7 @@ static void wait_and_send(struct tpm_chip *chip, u8 sendbyte)
static void tpm_wtx(struct tpm_chip *chip)
{
number_of_wtx++;
dev_info(chip->pdev, "Granting WTX (%02d / %02d)\n",
dev_info(&chip->dev, "Granting WTX (%02d / %02d)\n",
number_of_wtx, TPM_MAX_WTX_PACKAGES);
wait_and_send(chip, TPM_VL_VER);
wait_and_send(chip, TPM_CTRL_WTX);
@ -231,7 +231,7 @@ static void tpm_wtx(struct tpm_chip *chip)
static void tpm_wtx_abort(struct tpm_chip *chip)
{
dev_info(chip->pdev, "Aborting WTX\n");
dev_info(&chip->dev, "Aborting WTX\n");
wait_and_send(chip, TPM_VL_VER);
wait_and_send(chip, TPM_CTRL_WTX_ABORT);
wait_and_send(chip, 0x00);
@ -257,7 +257,7 @@ static int tpm_inf_recv(struct tpm_chip *chip, u8 * buf, size_t count)
}
if (buf[0] != TPM_VL_VER) {
dev_err(chip->pdev,
dev_err(&chip->dev,
"Wrong transport protocol implementation!\n");
return -EIO;
}
@ -272,7 +272,7 @@ static int tpm_inf_recv(struct tpm_chip *chip, u8 * buf, size_t count)
}
if ((size == 0x6D00) && (buf[1] == 0x80)) {
dev_err(chip->pdev, "Error handling on vendor layer!\n");
dev_err(&chip->dev, "Error handling on vendor layer!\n");
return -EIO;
}
@ -284,7 +284,7 @@ static int tpm_inf_recv(struct tpm_chip *chip, u8 * buf, size_t count)
}
if (buf[1] == TPM_CTRL_WTX) {
dev_info(chip->pdev, "WTX-package received\n");
dev_info(&chip->dev, "WTX-package received\n");
if (number_of_wtx < TPM_MAX_WTX_PACKAGES) {
tpm_wtx(chip);
goto recv_begin;
@ -295,14 +295,14 @@ static int tpm_inf_recv(struct tpm_chip *chip, u8 * buf, size_t count)
}
if (buf[1] == TPM_CTRL_WTX_ABORT_ACK) {
dev_info(chip->pdev, "WTX-abort acknowledged\n");
dev_info(&chip->dev, "WTX-abort acknowledged\n");
return size;
}
if (buf[1] == TPM_CTRL_ERROR) {
dev_err(chip->pdev, "ERROR-package received:\n");
dev_err(&chip->dev, "ERROR-package received:\n");
if (buf[4] == TPM_INF_NAK)
dev_err(chip->pdev,
dev_err(&chip->dev,
"-> Negative acknowledgement"
" - retransmit command!\n");
return -EIO;
@ -321,7 +321,7 @@ static int tpm_inf_send(struct tpm_chip *chip, u8 * buf, size_t count)
ret = empty_fifo(chip, 1);
if (ret) {
dev_err(chip->pdev, "Timeout while clearing FIFO\n");
dev_err(&chip->dev, "Timeout while clearing FIFO\n");
return -EIO;
}

View File

@ -64,15 +64,21 @@ enum tpm_nsc_cmd_mode {
NSC_COMMAND_EOC = 0x03,
NSC_COMMAND_CANCEL = 0x22
};
struct tpm_nsc_priv {
unsigned long base;
};
/*
* Wait for a certain status to appear
*/
static int wait_for_stat(struct tpm_chip *chip, u8 mask, u8 val, u8 * data)
{
struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev);
unsigned long stop;
/* status immediately available check */
*data = inb(chip->vendor.base + NSC_STATUS);
*data = inb(priv->base + NSC_STATUS);
if ((*data & mask) == val)
return 0;
@ -80,7 +86,7 @@ static int wait_for_stat(struct tpm_chip *chip, u8 mask, u8 val, u8 * data)
stop = jiffies + 10 * HZ;
do {
msleep(TPM_TIMEOUT);
*data = inb(chip->vendor.base + 1);
*data = inb(priv->base + 1);
if ((*data & mask) == val)
return 0;
}
@ -91,13 +97,14 @@ static int wait_for_stat(struct tpm_chip *chip, u8 mask, u8 val, u8 * data)
static int nsc_wait_for_ready(struct tpm_chip *chip)
{
struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev);
int status;
unsigned long stop;
/* status immediately available check */
status = inb(chip->vendor.base + NSC_STATUS);
status = inb(priv->base + NSC_STATUS);
if (status & NSC_STATUS_OBF)
status = inb(chip->vendor.base + NSC_DATA);
status = inb(priv->base + NSC_DATA);
if (status & NSC_STATUS_RDY)
return 0;
@ -105,21 +112,22 @@ static int nsc_wait_for_ready(struct tpm_chip *chip)
stop = jiffies + 100;
do {
msleep(TPM_TIMEOUT);
status = inb(chip->vendor.base + NSC_STATUS);
status = inb(priv->base + NSC_STATUS);
if (status & NSC_STATUS_OBF)
status = inb(chip->vendor.base + NSC_DATA);
status = inb(priv->base + NSC_DATA);
if (status & NSC_STATUS_RDY)
return 0;
}
while (time_before(jiffies, stop));
dev_info(chip->pdev, "wait for ready failed\n");
dev_info(&chip->dev, "wait for ready failed\n");
return -EBUSY;
}
static int tpm_nsc_recv(struct tpm_chip *chip, u8 * buf, size_t count)
{
struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev);
u8 *buffer = buf;
u8 data, *p;
u32 size;
@ -129,12 +137,13 @@ static int tpm_nsc_recv(struct tpm_chip *chip, u8 * buf, size_t count)
return -EIO;
if (wait_for_stat(chip, NSC_STATUS_F0, NSC_STATUS_F0, &data) < 0) {
dev_err(chip->pdev, "F0 timeout\n");
dev_err(&chip->dev, "F0 timeout\n");
return -EIO;
}
if ((data =
inb(chip->vendor.base + NSC_DATA)) != NSC_COMMAND_NORMAL) {
dev_err(chip->pdev, "not in normal mode (0x%x)\n",
data = inb(priv->base + NSC_DATA);
if (data != NSC_COMMAND_NORMAL) {
dev_err(&chip->dev, "not in normal mode (0x%x)\n",
data);
return -EIO;
}
@ -143,22 +152,24 @@ static int tpm_nsc_recv(struct tpm_chip *chip, u8 * buf, size_t count)
for (p = buffer; p < &buffer[count]; p++) {
if (wait_for_stat
(chip, NSC_STATUS_OBF, NSC_STATUS_OBF, &data) < 0) {
dev_err(chip->pdev,
dev_err(&chip->dev,
"OBF timeout (while reading data)\n");
return -EIO;
}
if (data & NSC_STATUS_F0)
break;
*p = inb(chip->vendor.base + NSC_DATA);
*p = inb(priv->base + NSC_DATA);
}
if ((data & NSC_STATUS_F0) == 0 &&
(wait_for_stat(chip, NSC_STATUS_F0, NSC_STATUS_F0, &data) < 0)) {
dev_err(chip->pdev, "F0 not set\n");
dev_err(&chip->dev, "F0 not set\n");
return -EIO;
}
if ((data = inb(chip->vendor.base + NSC_DATA)) != NSC_COMMAND_EOC) {
dev_err(chip->pdev,
data = inb(priv->base + NSC_DATA);
if (data != NSC_COMMAND_EOC) {
dev_err(&chip->dev,
"expected end of command(0x%x)\n", data);
return -EIO;
}
@ -174,6 +185,7 @@ static int tpm_nsc_recv(struct tpm_chip *chip, u8 * buf, size_t count)
static int tpm_nsc_send(struct tpm_chip *chip, u8 * buf, size_t count)
{
struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev);
u8 data;
int i;
@ -183,48 +195,52 @@ static int tpm_nsc_send(struct tpm_chip *chip, u8 * buf, size_t count)
* fix it. Not sure why this is needed, we followed the flow
* chart in the manual to the letter.
*/
outb(NSC_COMMAND_CANCEL, chip->vendor.base + NSC_COMMAND);
outb(NSC_COMMAND_CANCEL, priv->base + NSC_COMMAND);
if (nsc_wait_for_ready(chip) != 0)
return -EIO;
if (wait_for_stat(chip, NSC_STATUS_IBF, 0, &data) < 0) {
dev_err(chip->pdev, "IBF timeout\n");
dev_err(&chip->dev, "IBF timeout\n");
return -EIO;
}
outb(NSC_COMMAND_NORMAL, chip->vendor.base + NSC_COMMAND);
outb(NSC_COMMAND_NORMAL, priv->base + NSC_COMMAND);
if (wait_for_stat(chip, NSC_STATUS_IBR, NSC_STATUS_IBR, &data) < 0) {
dev_err(chip->pdev, "IBR timeout\n");
dev_err(&chip->dev, "IBR timeout\n");
return -EIO;
}
for (i = 0; i < count; i++) {
if (wait_for_stat(chip, NSC_STATUS_IBF, 0, &data) < 0) {
dev_err(chip->pdev,
dev_err(&chip->dev,
"IBF timeout (while writing data)\n");
return -EIO;
}
outb(buf[i], chip->vendor.base + NSC_DATA);
outb(buf[i], priv->base + NSC_DATA);
}
if (wait_for_stat(chip, NSC_STATUS_IBF, 0, &data) < 0) {
dev_err(chip->pdev, "IBF timeout\n");
dev_err(&chip->dev, "IBF timeout\n");
return -EIO;
}
outb(NSC_COMMAND_EOC, chip->vendor.base + NSC_COMMAND);
outb(NSC_COMMAND_EOC, priv->base + NSC_COMMAND);
return count;
}
static void tpm_nsc_cancel(struct tpm_chip *chip)
{
outb(NSC_COMMAND_CANCEL, chip->vendor.base + NSC_COMMAND);
struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev);
outb(NSC_COMMAND_CANCEL, priv->base + NSC_COMMAND);
}
static u8 tpm_nsc_status(struct tpm_chip *chip)
{
return inb(chip->vendor.base + NSC_STATUS);
struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev);
return inb(priv->base + NSC_STATUS);
}
static bool tpm_nsc_req_canceled(struct tpm_chip *chip, u8 status)
@ -247,9 +263,10 @@ static struct platform_device *pdev = NULL;
static void tpm_nsc_remove(struct device *dev)
{
struct tpm_chip *chip = dev_get_drvdata(dev);
struct tpm_nsc_priv *priv = dev_get_drvdata(&chip->dev);
tpm_chip_unregister(chip);
release_region(chip->vendor.base, 2);
release_region(priv->base, 2);
}
static SIMPLE_DEV_PM_OPS(tpm_nsc_pm, tpm_pm_suspend, tpm_pm_resume);
@ -268,6 +285,7 @@ static int __init init_nsc(void)
int nscAddrBase = TPM_ADDR;
struct tpm_chip *chip;
unsigned long base;
struct tpm_nsc_priv *priv;
/* verify that it is a National part (SID) */
if (tpm_read_index(TPM_ADDR, NSC_SID_INDEX) != 0xEF) {
@ -301,6 +319,14 @@ static int __init init_nsc(void)
if ((rc = platform_device_add(pdev)) < 0)
goto err_put_dev;
priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
if (!priv) {
rc = -ENOMEM;
goto err_del_dev;
}
priv->base = base;
if (request_region(base, 2, "tpm_nsc0") == NULL ) {
rc = -EBUSY;
goto err_del_dev;
@ -312,6 +338,8 @@ static int __init init_nsc(void)
goto err_rel_reg;
}
dev_set_drvdata(&chip->dev, priv);
rc = tpm_chip_register(chip);
if (rc)
goto err_rel_reg;
@ -349,8 +377,6 @@ static int __init init_nsc(void)
"NSC TPM revision %d\n",
tpm_read_index(nscAddrBase, 0x27) & 0x1F);
chip->vendor.base = base;
return 0;
err_rel_reg:

View File

@ -29,40 +29,7 @@
#include <linux/acpi.h>
#include <linux/freezer.h>
#include "tpm.h"
enum tis_access {
TPM_ACCESS_VALID = 0x80,
TPM_ACCESS_ACTIVE_LOCALITY = 0x20,
TPM_ACCESS_REQUEST_PENDING = 0x04,
TPM_ACCESS_REQUEST_USE = 0x02,
};
enum tis_status {
TPM_STS_VALID = 0x80,
TPM_STS_COMMAND_READY = 0x40,
TPM_STS_GO = 0x20,
TPM_STS_DATA_AVAIL = 0x10,
TPM_STS_DATA_EXPECT = 0x08,
};
enum tis_int_flags {
TPM_GLOBAL_INT_ENABLE = 0x80000000,
TPM_INTF_BURST_COUNT_STATIC = 0x100,
TPM_INTF_CMD_READY_INT = 0x080,
TPM_INTF_INT_EDGE_FALLING = 0x040,
TPM_INTF_INT_EDGE_RISING = 0x020,
TPM_INTF_INT_LEVEL_LOW = 0x010,
TPM_INTF_INT_LEVEL_HIGH = 0x008,
TPM_INTF_LOCALITY_CHANGE_INT = 0x004,
TPM_INTF_STS_VALID_INT = 0x002,
TPM_INTF_DATA_AVAIL_INT = 0x001,
};
enum tis_defaults {
TIS_MEM_LEN = 0x5000,
TIS_SHORT_TIMEOUT = 750, /* ms */
TIS_LONG_TIMEOUT = 2000, /* 2 sec */
};
#include "tpm_tis_core.h"
struct tpm_info {
struct resource res;
@ -73,30 +40,30 @@ struct tpm_info {
int irq;
};
/* Some timeout values are needed before it is known whether the chip is
* TPM 1.0 or TPM 2.0.
*/
#define TIS_TIMEOUT_A_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_A)
#define TIS_TIMEOUT_B_MAX max(TIS_LONG_TIMEOUT, TPM2_TIMEOUT_B)
#define TIS_TIMEOUT_C_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_C)
#define TIS_TIMEOUT_D_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_D)
#define TPM_ACCESS(l) (0x0000 | ((l) << 12))
#define TPM_INT_ENABLE(l) (0x0008 | ((l) << 12))
#define TPM_INT_VECTOR(l) (0x000C | ((l) << 12))
#define TPM_INT_STATUS(l) (0x0010 | ((l) << 12))
#define TPM_INTF_CAPS(l) (0x0014 | ((l) << 12))
#define TPM_STS(l) (0x0018 | ((l) << 12))
#define TPM_STS3(l) (0x001b | ((l) << 12))
#define TPM_DATA_FIFO(l) (0x0024 | ((l) << 12))
#define TPM_DID_VID(l) (0x0F00 | ((l) << 12))
#define TPM_RID(l) (0x0F04 | ((l) << 12))
struct priv_data {
bool irq_tested;
struct tpm_tis_tcg_phy {
struct tpm_tis_data priv;
void __iomem *iobase;
};
static inline struct tpm_tis_tcg_phy *to_tpm_tis_tcg_phy(struct tpm_tis_data *data)
{
return container_of(data, struct tpm_tis_tcg_phy, priv);
}
static bool interrupts = true;
module_param(interrupts, bool, 0444);
MODULE_PARM_DESC(interrupts, "Enable interrupts");
static bool itpm;
module_param(itpm, bool, 0444);
MODULE_PARM_DESC(itpm, "Force iTPM workarounds (found on some Lenovo laptops)");
static bool force;
#ifdef CONFIG_X86
module_param(force, bool, 0444);
MODULE_PARM_DESC(force, "Force device probe rather than using ACPI entry");
#endif
#if defined(CONFIG_PNP) && defined(CONFIG_ACPI)
static int has_hid(struct acpi_device *dev, const char *hid)
{
@ -120,744 +87,82 @@ static inline int is_itpm(struct acpi_device *dev)
}
#endif
/* Before we attempt to access the TPM we must see that the valid bit is set.
* The specification says that this bit is 0 at reset and remains 0 until the
* 'TPM has gone through its self test and initialization and has established
* correct values in the other bits.' */
static int wait_startup(struct tpm_chip *chip, int l)
static int tpm_tcg_read_bytes(struct tpm_tis_data *data, u32 addr, u16 len,
u8 *result)
{
unsigned long stop = jiffies + chip->vendor.timeout_a;
do {
if (ioread8(chip->vendor.iobase + TPM_ACCESS(l)) &
TPM_ACCESS_VALID)
return 0;
msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
return -1;
}
static int check_locality(struct tpm_chip *chip, int l)
{
if ((ioread8(chip->vendor.iobase + TPM_ACCESS(l)) &
(TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) ==
(TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID))
return chip->vendor.locality = l;
return -1;
}
static void release_locality(struct tpm_chip *chip, int l, int force)
{
if (force || (ioread8(chip->vendor.iobase + TPM_ACCESS(l)) &
(TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID)) ==
(TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID))
iowrite8(TPM_ACCESS_ACTIVE_LOCALITY,
chip->vendor.iobase + TPM_ACCESS(l));
}
static int request_locality(struct tpm_chip *chip, int l)
{
unsigned long stop, timeout;
long rc;
if (check_locality(chip, l) >= 0)
return l;
iowrite8(TPM_ACCESS_REQUEST_USE,
chip->vendor.iobase + TPM_ACCESS(l));
stop = jiffies + chip->vendor.timeout_a;
if (chip->vendor.irq) {
again:
timeout = stop - jiffies;
if ((long)timeout <= 0)
return -1;
rc = wait_event_interruptible_timeout(chip->vendor.int_queue,
(check_locality
(chip, l) >= 0),
timeout);
if (rc > 0)
return l;
if (rc == -ERESTARTSYS && freezing(current)) {
clear_thread_flag(TIF_SIGPENDING);
goto again;
}
} else {
/* wait for burstcount */
do {
if (check_locality(chip, l) >= 0)
return l;
msleep(TPM_TIMEOUT);
}
while (time_before(jiffies, stop));
}
return -1;
}
static u8 tpm_tis_status(struct tpm_chip *chip)
{
return ioread8(chip->vendor.iobase +
TPM_STS(chip->vendor.locality));
}
static void tpm_tis_ready(struct tpm_chip *chip)
{
/* this causes the current command to be aborted */
iowrite8(TPM_STS_COMMAND_READY,
chip->vendor.iobase + TPM_STS(chip->vendor.locality));
}
static int get_burstcount(struct tpm_chip *chip)
{
unsigned long stop;
int burstcnt;
/* wait for burstcount */
/* which timeout value, spec has 2 answers (c & d) */
stop = jiffies + chip->vendor.timeout_d;
do {
burstcnt = ioread8(chip->vendor.iobase +
TPM_STS(chip->vendor.locality) + 1);
burstcnt += ioread8(chip->vendor.iobase +
TPM_STS(chip->vendor.locality) +
2) << 8;
if (burstcnt)
return burstcnt;
msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
return -EBUSY;
}
static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count)
{
int size = 0, burstcnt;
while (size < count &&
wait_for_tpm_stat(chip,
TPM_STS_DATA_AVAIL | TPM_STS_VALID,
chip->vendor.timeout_c,
&chip->vendor.read_queue, true)
== 0) {
burstcnt = get_burstcount(chip);
for (; burstcnt > 0 && size < count; burstcnt--)
buf[size++] = ioread8(chip->vendor.iobase +
TPM_DATA_FIFO(chip->vendor.
locality));
}
return size;
}
static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
int size = 0;
int expected, status;
if (count < TPM_HEADER_SIZE) {
size = -EIO;
goto out;
}
/* read first 10 bytes, including tag, paramsize, and result */
if ((size =
recv_data(chip, buf, TPM_HEADER_SIZE)) < TPM_HEADER_SIZE) {
dev_err(chip->pdev, "Unable to read header\n");
goto out;
}
expected = be32_to_cpu(*(__be32 *) (buf + 2));
if (expected > count) {
size = -EIO;
goto out;
}
if ((size +=
recv_data(chip, &buf[TPM_HEADER_SIZE],
expected - TPM_HEADER_SIZE)) < expected) {
dev_err(chip->pdev, "Unable to read remainder of result\n");
size = -ETIME;
goto out;
}
wait_for_tpm_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c,
&chip->vendor.int_queue, false);
status = tpm_tis_status(chip);
if (status & TPM_STS_DATA_AVAIL) { /* retry? */
dev_err(chip->pdev, "Error left over data\n");
size = -EIO;
goto out;
}
out:
tpm_tis_ready(chip);
release_locality(chip, chip->vendor.locality, 0);
return size;
}
static bool itpm;
module_param(itpm, bool, 0444);
MODULE_PARM_DESC(itpm, "Force iTPM workarounds (found on some Lenovo laptops)");
/*
* If interrupts are used (signaled by an irq set in the vendor structure)
* tpm.c can skip polling for the data to be available as the interrupt is
* waited for here
*/
static int tpm_tis_send_data(struct tpm_chip *chip, u8 *buf, size_t len)
{
int rc, status, burstcnt;
size_t count = 0;
if (request_locality(chip, 0) < 0)
return -EBUSY;
status = tpm_tis_status(chip);
if ((status & TPM_STS_COMMAND_READY) == 0) {
tpm_tis_ready(chip);
if (wait_for_tpm_stat
(chip, TPM_STS_COMMAND_READY, chip->vendor.timeout_b,
&chip->vendor.int_queue, false) < 0) {
rc = -ETIME;
goto out_err;
}
}
while (count < len - 1) {
burstcnt = get_burstcount(chip);
for (; burstcnt > 0 && count < len - 1; burstcnt--) {
iowrite8(buf[count], chip->vendor.iobase +
TPM_DATA_FIFO(chip->vendor.locality));
count++;
}
wait_for_tpm_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c,
&chip->vendor.int_queue, false);
status = tpm_tis_status(chip);
if (!itpm && (status & TPM_STS_DATA_EXPECT) == 0) {
rc = -EIO;
goto out_err;
}
}
/* write last byte */
iowrite8(buf[count],
chip->vendor.iobase + TPM_DATA_FIFO(chip->vendor.locality));
wait_for_tpm_stat(chip, TPM_STS_VALID, chip->vendor.timeout_c,
&chip->vendor.int_queue, false);
status = tpm_tis_status(chip);
if ((status & TPM_STS_DATA_EXPECT) != 0) {
rc = -EIO;
goto out_err;
}
return 0;
out_err:
tpm_tis_ready(chip);
release_locality(chip, chip->vendor.locality, 0);
return rc;
}
static void disable_interrupts(struct tpm_chip *chip)
{
u32 intmask;
intmask =
ioread32(chip->vendor.iobase +
TPM_INT_ENABLE(chip->vendor.locality));
intmask &= ~TPM_GLOBAL_INT_ENABLE;
iowrite32(intmask,
chip->vendor.iobase +
TPM_INT_ENABLE(chip->vendor.locality));
devm_free_irq(chip->pdev, chip->vendor.irq, chip);
chip->vendor.irq = 0;
}
/*
* If interrupts are used (signaled by an irq set in the vendor structure)
* tpm.c can skip polling for the data to be available as the interrupt is
* waited for here
*/
static int tpm_tis_send_main(struct tpm_chip *chip, u8 *buf, size_t len)
{
int rc;
u32 ordinal;
unsigned long dur;
rc = tpm_tis_send_data(chip, buf, len);
if (rc < 0)
return rc;
/* go and do it */
iowrite8(TPM_STS_GO,
chip->vendor.iobase + TPM_STS(chip->vendor.locality));
if (chip->vendor.irq) {
ordinal = be32_to_cpu(*((__be32 *) (buf + 6)));
if (chip->flags & TPM_CHIP_FLAG_TPM2)
dur = tpm2_calc_ordinal_duration(chip, ordinal);
else
dur = tpm_calc_ordinal_duration(chip, ordinal);
if (wait_for_tpm_stat
(chip, TPM_STS_DATA_AVAIL | TPM_STS_VALID, dur,
&chip->vendor.read_queue, false) < 0) {
rc = -ETIME;
goto out_err;
}
}
return len;
out_err:
tpm_tis_ready(chip);
release_locality(chip, chip->vendor.locality, 0);
return rc;
}
static int tpm_tis_send(struct tpm_chip *chip, u8 *buf, size_t len)
{
int rc, irq;
struct priv_data *priv = chip->vendor.priv;
if (!chip->vendor.irq || priv->irq_tested)
return tpm_tis_send_main(chip, buf, len);
/* Verify receipt of the expected IRQ */
irq = chip->vendor.irq;
chip->vendor.irq = 0;
rc = tpm_tis_send_main(chip, buf, len);
chip->vendor.irq = irq;
if (!priv->irq_tested)
msleep(1);
if (!priv->irq_tested)
disable_interrupts(chip);
priv->irq_tested = true;
return rc;
}
struct tis_vendor_timeout_override {
u32 did_vid;
unsigned long timeout_us[4];
};
static const struct tis_vendor_timeout_override vendor_timeout_overrides[] = {
/* Atmel 3204 */
{ 0x32041114, { (TIS_SHORT_TIMEOUT*1000), (TIS_LONG_TIMEOUT*1000),
(TIS_SHORT_TIMEOUT*1000), (TIS_SHORT_TIMEOUT*1000) } },
};
static bool tpm_tis_update_timeouts(struct tpm_chip *chip,
unsigned long *timeout_cap)
{
int i;
u32 did_vid;
did_vid = ioread32(chip->vendor.iobase + TPM_DID_VID(0));
for (i = 0; i != ARRAY_SIZE(vendor_timeout_overrides); i++) {
if (vendor_timeout_overrides[i].did_vid != did_vid)
continue;
memcpy(timeout_cap, vendor_timeout_overrides[i].timeout_us,
sizeof(vendor_timeout_overrides[i].timeout_us));
return true;
}
return false;
}
/*
* Early probing for iTPM with STS_DATA_EXPECT flaw.
* Try sending command without itpm flag set and if that
* fails, repeat with itpm flag set.
*/
static int probe_itpm(struct tpm_chip *chip)
{
int rc = 0;
u8 cmd_getticks[] = {
0x00, 0xc1, 0x00, 0x00, 0x00, 0x0a,
0x00, 0x00, 0x00, 0xf1
};
size_t len = sizeof(cmd_getticks);
bool rem_itpm = itpm;
u16 vendor = ioread16(chip->vendor.iobase + TPM_DID_VID(0));
/* probe only iTPMS */
if (vendor != TPM_VID_INTEL)
return 0;
itpm = false;
rc = tpm_tis_send_data(chip, cmd_getticks, len);
if (rc == 0)
goto out;
tpm_tis_ready(chip);
release_locality(chip, chip->vendor.locality, 0);
itpm = true;
rc = tpm_tis_send_data(chip, cmd_getticks, len);
if (rc == 0) {
dev_info(chip->pdev, "Detected an iTPM.\n");
rc = 1;
} else
rc = -EFAULT;
out:
itpm = rem_itpm;
tpm_tis_ready(chip);
release_locality(chip, chip->vendor.locality, 0);
return rc;
}
static bool tpm_tis_req_canceled(struct tpm_chip *chip, u8 status)
{
switch (chip->vendor.manufacturer_id) {
case TPM_VID_WINBOND:
return ((status == TPM_STS_VALID) ||
(status == (TPM_STS_VALID | TPM_STS_COMMAND_READY)));
case TPM_VID_STM:
return (status == (TPM_STS_VALID | TPM_STS_COMMAND_READY));
default:
return (status == TPM_STS_COMMAND_READY);
}
}
static const struct tpm_class_ops tpm_tis = {
.status = tpm_tis_status,
.recv = tpm_tis_recv,
.send = tpm_tis_send,
.cancel = tpm_tis_ready,
.update_timeouts = tpm_tis_update_timeouts,
.req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
.req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
.req_canceled = tpm_tis_req_canceled,
};
static irqreturn_t tis_int_handler(int dummy, void *dev_id)
{
struct tpm_chip *chip = dev_id;
u32 interrupt;
int i;
interrupt = ioread32(chip->vendor.iobase +
TPM_INT_STATUS(chip->vendor.locality));
if (interrupt == 0)
return IRQ_NONE;
((struct priv_data *)chip->vendor.priv)->irq_tested = true;
if (interrupt & TPM_INTF_DATA_AVAIL_INT)
wake_up_interruptible(&chip->vendor.read_queue);
if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT)
for (i = 0; i < 5; i++)
if (check_locality(chip, i) >= 0)
break;
if (interrupt &
(TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_STS_VALID_INT |
TPM_INTF_CMD_READY_INT))
wake_up_interruptible(&chip->vendor.int_queue);
/* Clear interrupts handled with TPM_EOI */
iowrite32(interrupt,
chip->vendor.iobase +
TPM_INT_STATUS(chip->vendor.locality));
ioread32(chip->vendor.iobase + TPM_INT_STATUS(chip->vendor.locality));
return IRQ_HANDLED;
}
/* Register the IRQ and issue a command that will cause an interrupt. If an
* irq is seen then leave the chip setup for IRQ operation, otherwise reverse
* everything and leave in polling mode. Returns 0 on success.
*/
static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask,
int flags, int irq)
{
struct priv_data *priv = chip->vendor.priv;
u8 original_int_vec;
if (devm_request_irq(chip->pdev, irq, tis_int_handler, flags,
chip->devname, chip) != 0) {
dev_info(chip->pdev, "Unable to request irq: %d for probe\n",
irq);
return -1;
}
chip->vendor.irq = irq;
original_int_vec = ioread8(chip->vendor.iobase +
TPM_INT_VECTOR(chip->vendor.locality));
iowrite8(irq,
chip->vendor.iobase + TPM_INT_VECTOR(chip->vendor.locality));
/* Clear all existing */
iowrite32(ioread32(chip->vendor.iobase +
TPM_INT_STATUS(chip->vendor.locality)),
chip->vendor.iobase + TPM_INT_STATUS(chip->vendor.locality));
/* Turn on */
iowrite32(intmask | TPM_GLOBAL_INT_ENABLE,
chip->vendor.iobase + TPM_INT_ENABLE(chip->vendor.locality));
priv->irq_tested = false;
/* Generate an interrupt by having the core call through to
* tpm_tis_send
*/
if (chip->flags & TPM_CHIP_FLAG_TPM2)
tpm2_gen_interrupt(chip);
else
tpm_gen_interrupt(chip);
/* tpm_tis_send will either confirm the interrupt is working or it
* will call disable_irq which undoes all of the above.
*/
if (!chip->vendor.irq) {
iowrite8(original_int_vec,
chip->vendor.iobase +
TPM_INT_VECTOR(chip->vendor.locality));
return 1;
}
struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
while (len--)
*result++ = ioread8(phy->iobase + addr);
return 0;
}
/* Try to find the IRQ the TPM is using. This is for legacy x86 systems that
* do not have ACPI/etc. We typically expect the interrupt to be declared if
* present.
*/
static void tpm_tis_probe_irq(struct tpm_chip *chip, u32 intmask)
static int tpm_tcg_write_bytes(struct tpm_tis_data *data, u32 addr, u16 len,
u8 *value)
{
u8 original_int_vec;
int i;
struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
original_int_vec = ioread8(chip->vendor.iobase +
TPM_INT_VECTOR(chip->vendor.locality));
if (!original_int_vec) {
if (IS_ENABLED(CONFIG_X86))
for (i = 3; i <= 15; i++)
if (!tpm_tis_probe_irq_single(chip, intmask, 0,
i))
return;
} else if (!tpm_tis_probe_irq_single(chip, intmask, 0,
original_int_vec))
return;
while (len--)
iowrite8(*value++, phy->iobase + addr);
return 0;
}
static bool interrupts = true;
module_param(interrupts, bool, 0444);
MODULE_PARM_DESC(interrupts, "Enable interrupts");
static void tpm_tis_remove(struct tpm_chip *chip)
static int tpm_tcg_read16(struct tpm_tis_data *data, u32 addr, u16 *result)
{
if (chip->flags & TPM_CHIP_FLAG_TPM2)
tpm2_shutdown(chip, TPM2_SU_CLEAR);
struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
iowrite32(~TPM_GLOBAL_INT_ENABLE &
ioread32(chip->vendor.iobase +
TPM_INT_ENABLE(chip->vendor.
locality)),
chip->vendor.iobase +
TPM_INT_ENABLE(chip->vendor.locality));
release_locality(chip, chip->vendor.locality, 1);
*result = ioread16(phy->iobase + addr);
return 0;
}
static int tpm_tcg_read32(struct tpm_tis_data *data, u32 addr, u32 *result)
{
struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
*result = ioread32(phy->iobase + addr);
return 0;
}
static int tpm_tcg_write32(struct tpm_tis_data *data, u32 addr, u32 value)
{
struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
iowrite32(value, phy->iobase + addr);
return 0;
}
static const struct tpm_tis_phy_ops tpm_tcg = {
.read_bytes = tpm_tcg_read_bytes,
.write_bytes = tpm_tcg_write_bytes,
.read16 = tpm_tcg_read16,
.read32 = tpm_tcg_read32,
.write32 = tpm_tcg_write32,
};
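The tpm_tcg table above is the only MMIO-specific piece left in this file; everything else moves behind the new tpm_tis_phy_ops/tpm_tis_data abstraction in tpm_tis_core. As a hedged sketch of how another transport could plug into the same core (every my_* name below is invented for illustration and is not part of this patch):

struct my_tis_phy {
	struct tpm_tis_data priv;	/* must be embedded; the core hands it back */
	struct my_bus_device *bus;	/* hypothetical bus handle */
};

static inline struct my_tis_phy *to_my_tis_phy(struct tpm_tis_data *data)
{
	return container_of(data, struct my_tis_phy, priv);
}

static int my_read_bytes(struct tpm_tis_data *data, u32 addr, u16 len,
			 u8 *result)
{
	/* hypothetical bus transfer: read 'len' bytes of TIS register 'addr' */
	return my_bus_read(to_my_tis_phy(data)->bus, addr, result, len);
}

static int my_write_bytes(struct tpm_tis_data *data, u32 addr, u16 len,
			  u8 *value)
{
	return my_bus_write(to_my_tis_phy(data)->bus, addr, value, len);
}

static int my_read16(struct tpm_tis_data *data, u32 addr, u16 *result)
{
	__le16 result_le;
	int rc;

	/* TIS registers are little-endian, so convert after the raw read */
	rc = my_read_bytes(data, addr, sizeof(result_le), (u8 *)&result_le);
	if (!rc)
		*result = le16_to_cpu(result_le);
	return rc;
}

static int my_read32(struct tpm_tis_data *data, u32 addr, u32 *result)
{
	__le32 result_le;
	int rc;

	rc = my_read_bytes(data, addr, sizeof(result_le), (u8 *)&result_le);
	if (!rc)
		*result = le32_to_cpu(result_le);
	return rc;
}

static int my_write32(struct tpm_tis_data *data, u32 addr, u32 value)
{
	__le32 value_le = cpu_to_le32(value);

	return my_write_bytes(data, addr, sizeof(value_le), (u8 *)&value_le);
}

static const struct tpm_tis_phy_ops my_phy_ops = {
	.read_bytes = my_read_bytes,
	.write_bytes = my_write_bytes,
	.read16 = my_read16,
	.read32 = my_read32,
	.write32 = my_write32,
};

/* a probe routine would then hand everything to the shared core, e.g.: */
/* return tpm_tis_core_init(dev, &phy->priv, irq, &my_phy_ops, NULL); */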
static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info,
acpi_handle acpi_dev_handle)
{
u32 vendor, intfcaps, intmask;
int rc, probe;
struct tpm_chip *chip;
struct priv_data *priv;
struct tpm_tis_tcg_phy *phy;
int irq = -1;
priv = devm_kzalloc(dev, sizeof(struct priv_data), GFP_KERNEL);
if (priv == NULL)
phy = devm_kzalloc(dev, sizeof(struct tpm_tis_tcg_phy), GFP_KERNEL);
if (phy == NULL)
return -ENOMEM;
chip = tpmm_chip_alloc(dev, &tpm_tis);
if (IS_ERR(chip))
return PTR_ERR(chip);
phy->iobase = devm_ioremap_resource(dev, &tpm_info->res);
if (IS_ERR(phy->iobase))
return PTR_ERR(phy->iobase);
chip->vendor.priv = priv;
#ifdef CONFIG_ACPI
chip->acpi_dev_handle = acpi_dev_handle;
#endif
chip->vendor.iobase = devm_ioremap_resource(dev, &tpm_info->res);
if (IS_ERR(chip->vendor.iobase))
return PTR_ERR(chip->vendor.iobase);
/* Maximum timeouts */
chip->vendor.timeout_a = TIS_TIMEOUT_A_MAX;
chip->vendor.timeout_b = TIS_TIMEOUT_B_MAX;
chip->vendor.timeout_c = TIS_TIMEOUT_C_MAX;
chip->vendor.timeout_d = TIS_TIMEOUT_D_MAX;
if (wait_startup(chip, 0) != 0) {
rc = -ENODEV;
goto out_err;
}
/* Take control of the TPM's interrupt hardware and shut it off */
intmask = ioread32(chip->vendor.iobase +
TPM_INT_ENABLE(chip->vendor.locality));
intmask |= TPM_INTF_CMD_READY_INT | TPM_INTF_LOCALITY_CHANGE_INT |
TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT;
intmask &= ~TPM_GLOBAL_INT_ENABLE;
iowrite32(intmask,
chip->vendor.iobase + TPM_INT_ENABLE(chip->vendor.locality));
if (request_locality(chip, 0) != 0) {
rc = -ENODEV;
goto out_err;
}
rc = tpm2_probe(chip);
if (rc)
goto out_err;
vendor = ioread32(chip->vendor.iobase + TPM_DID_VID(0));
chip->vendor.manufacturer_id = vendor;
dev_info(dev, "%s TPM (device-id 0x%X, rev-id %d)\n",
(chip->flags & TPM_CHIP_FLAG_TPM2) ? "2.0" : "1.2",
vendor >> 16, ioread8(chip->vendor.iobase + TPM_RID(0)));
if (!itpm) {
probe = probe_itpm(chip);
if (probe < 0) {
rc = -ENODEV;
goto out_err;
}
itpm = !!probe;
}
if (interrupts)
irq = tpm_info->irq;
if (itpm)
dev_info(dev, "Intel iTPM workaround enabled\n");
phy->priv.flags |= TPM_TIS_ITPM_POSSIBLE;
/* Figure out the capabilities */
intfcaps =
ioread32(chip->vendor.iobase +
TPM_INTF_CAPS(chip->vendor.locality));
dev_dbg(dev, "TPM interface capabilities (0x%x):\n",
intfcaps);
if (intfcaps & TPM_INTF_BURST_COUNT_STATIC)
dev_dbg(dev, "\tBurst Count Static\n");
if (intfcaps & TPM_INTF_CMD_READY_INT)
dev_dbg(dev, "\tCommand Ready Int Support\n");
if (intfcaps & TPM_INTF_INT_EDGE_FALLING)
dev_dbg(dev, "\tInterrupt Edge Falling\n");
if (intfcaps & TPM_INTF_INT_EDGE_RISING)
dev_dbg(dev, "\tInterrupt Edge Rising\n");
if (intfcaps & TPM_INTF_INT_LEVEL_LOW)
dev_dbg(dev, "\tInterrupt Level Low\n");
if (intfcaps & TPM_INTF_INT_LEVEL_HIGH)
dev_dbg(dev, "\tInterrupt Level High\n");
if (intfcaps & TPM_INTF_LOCALITY_CHANGE_INT)
dev_dbg(dev, "\tLocality Change Int Support\n");
if (intfcaps & TPM_INTF_STS_VALID_INT)
dev_dbg(dev, "\tSts Valid Int Support\n");
if (intfcaps & TPM_INTF_DATA_AVAIL_INT)
dev_dbg(dev, "\tData Avail Int Support\n");
/* Very early on issue a command to the TPM in polling mode to make
* sure it works. May as well use that command to set the proper
* timeouts for the driver.
*/
if (tpm_get_timeouts(chip)) {
dev_err(dev, "Could not get TPM timeouts and durations\n");
rc = -ENODEV;
goto out_err;
}
/* INTERRUPT Setup */
init_waitqueue_head(&chip->vendor.read_queue);
init_waitqueue_head(&chip->vendor.int_queue);
if (interrupts && tpm_info->irq != -1) {
if (tpm_info->irq) {
tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED,
tpm_info->irq);
if (!chip->vendor.irq)
dev_err(chip->pdev, FW_BUG
"TPM interrupt not working, polling instead\n");
} else
tpm_tis_probe_irq(chip, intmask);
}
if (chip->flags & TPM_CHIP_FLAG_TPM2) {
rc = tpm2_do_selftest(chip);
if (rc == TPM2_RC_INITIALIZE) {
dev_warn(dev, "Firmware has not started TPM\n");
rc = tpm2_startup(chip, TPM2_SU_CLEAR);
if (!rc)
rc = tpm2_do_selftest(chip);
}
if (rc) {
dev_err(dev, "TPM self test failed\n");
if (rc > 0)
rc = -ENODEV;
goto out_err;
}
} else {
if (tpm_do_selftest(chip)) {
dev_err(dev, "TPM self test failed\n");
rc = -ENODEV;
goto out_err;
}
}
return tpm_chip_register(chip);
out_err:
tpm_tis_remove(chip);
return rc;
return tpm_tis_core_init(dev, &phy->priv, irq, &tpm_tcg,
acpi_dev_handle);
}
#ifdef CONFIG_PM_SLEEP
static void tpm_tis_reenable_interrupts(struct tpm_chip *chip)
{
u32 intmask;
/* reenable interrupts that device may have lost or
BIOS/firmware may have disabled */
iowrite8(chip->vendor.irq, chip->vendor.iobase +
TPM_INT_VECTOR(chip->vendor.locality));
intmask =
ioread32(chip->vendor.iobase +
TPM_INT_ENABLE(chip->vendor.locality));
intmask |= TPM_INTF_CMD_READY_INT
| TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_DATA_AVAIL_INT
| TPM_INTF_STS_VALID_INT | TPM_GLOBAL_INT_ENABLE;
iowrite32(intmask,
chip->vendor.iobase + TPM_INT_ENABLE(chip->vendor.locality));
}
static int tpm_tis_resume(struct device *dev)
{
struct tpm_chip *chip = dev_get_drvdata(dev);
int ret;
if (chip->vendor.irq)
tpm_tis_reenable_interrupts(chip);
ret = tpm_pm_resume(dev);
if (ret)
return ret;
/* TPM 1.2 requires self-test on resume. This function actually returns
* an error code but for unknown reason it isn't handled.
*/
if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
tpm_do_selftest(chip);
return 0;
}
#endif
static SIMPLE_DEV_PM_OPS(tpm_tis_pm, tpm_pm_suspend, tpm_tis_resume);
static int tpm_tis_pnp_init(struct pnp_dev *pnp_dev,
@ -1058,12 +363,6 @@ static struct platform_driver tis_drv = {
},
};
static bool force;
#ifdef CONFIG_X86
module_param(force, bool, 0444);
MODULE_PARM_DESC(force, "Force device probe rather than using ACPI entry");
#endif
static int tpm_tis_force_device(void)
{
struct platform_device *pdev;

View File

@ -0,0 +1,835 @@
/*
* Copyright (C) 2005, 2006 IBM Corporation
* Copyright (C) 2014, 2015 Intel Corporation
*
* Authors:
* Leendert van Doorn <leendert@watson.ibm.com>
* Kylene Hall <kjhall@us.ibm.com>
*
* Maintained by: <tpmdd-devel@lists.sourceforge.net>
*
* Device driver for TCG/TCPA TPM (trusted platform module).
* Specifications at www.trustedcomputinggroup.org
*
* This device driver implements the TPM interface as defined in
* the TCG TPM Interface Spec version 1.2, revision 1.0.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation, version 2 of the
* License.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/pnp.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/wait.h>
#include <linux/acpi.h>
#include <linux/freezer.h>
#include "tpm.h"
#include "tpm_tis_core.h"
/* Before we attempt to access the TPM we must see that the valid bit is set.
* The specification says that this bit is 0 at reset and remains 0 until the
* 'TPM has gone through its self test and initialization and has established
* correct values in the other bits.'
*/
static int wait_startup(struct tpm_chip *chip, int l)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
unsigned long stop = jiffies + chip->timeout_a;
do {
int rc;
u8 access;
rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access);
if (rc < 0)
return rc;
if (access & TPM_ACCESS_VALID)
return 0;
msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
return -1;
}
static int check_locality(struct tpm_chip *chip, int l)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
int rc;
u8 access;
rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access);
if (rc < 0)
return rc;
if ((access & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) ==
(TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID))
return priv->locality = l;
return -1;
}
static void release_locality(struct tpm_chip *chip, int l, int force)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
int rc;
u8 access;
rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access);
if (rc < 0)
return;
if (force || (access &
(TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID)) ==
(TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID))
tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY);
}
static int request_locality(struct tpm_chip *chip, int l)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
unsigned long stop, timeout;
long rc;
if (check_locality(chip, l) >= 0)
return l;
rc = tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_REQUEST_USE);
if (rc < 0)
return rc;
stop = jiffies + chip->timeout_a;
if (chip->flags & TPM_CHIP_FLAG_IRQ) {
again:
timeout = stop - jiffies;
if ((long)timeout <= 0)
return -1;
rc = wait_event_interruptible_timeout(priv->int_queue,
(check_locality
(chip, l) >= 0),
timeout);
if (rc > 0)
return l;
if (rc == -ERESTARTSYS && freezing(current)) {
clear_thread_flag(TIF_SIGPENDING);
goto again;
}
} else {
/* wait for burstcount */
do {
if (check_locality(chip, l) >= 0)
return l;
msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
}
return -1;
}
static u8 tpm_tis_status(struct tpm_chip *chip)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
int rc;
u8 status;
rc = tpm_tis_read8(priv, TPM_STS(priv->locality), &status);
if (rc < 0)
return 0;
return status;
}
static void tpm_tis_ready(struct tpm_chip *chip)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
/* this causes the current command to be aborted */
tpm_tis_write8(priv, TPM_STS(priv->locality), TPM_STS_COMMAND_READY);
}
static int get_burstcount(struct tpm_chip *chip)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
unsigned long stop;
int burstcnt, rc;
u32 value;
/* wait for burstcount */
/* which timeout value, spec has 2 answers (c & d) */
stop = jiffies + chip->timeout_d;
do {
rc = tpm_tis_read32(priv, TPM_STS(priv->locality), &value);
if (rc < 0)
return rc;
burstcnt = (value >> 8) & 0xFFFF;
if (burstcnt)
return burstcnt;
msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
return -EBUSY;
}
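Unlike the old code, which fetched the two burstCount bytes with separate 8-bit reads, the loop above reads TPM_STS in a single 32-bit transaction: the status flags sit in the low byte and burstCount occupies bits 8-23, hence the shift-and-mask. A small worked example (the register value is invented for illustration):

u32 value = 0x00012390;			/* example TPM_STS: stsValid | dataAvail in byte 0 */
int burstcnt = (value >> 8) & 0xFFFF;	/* 0x0123 == 291 bytes may be transferred */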
static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
int size = 0, burstcnt, rc;
while (size < count &&
wait_for_tpm_stat(chip,
TPM_STS_DATA_AVAIL | TPM_STS_VALID,
chip->timeout_c,
&priv->read_queue, true) == 0) {
burstcnt = get_burstcount(chip);
if (burstcnt < 0)
	return burstcnt;
burstcnt = min_t(int, burstcnt, count - size);
rc = tpm_tis_read_bytes(priv, TPM_DATA_FIFO(priv->locality),
burstcnt, buf + size);
if (rc < 0)
return rc;
size += burstcnt;
}
return size;
}
static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
int size = 0;
int expected, status;
if (count < TPM_HEADER_SIZE) {
size = -EIO;
goto out;
}
size = recv_data(chip, buf, TPM_HEADER_SIZE);
/* read first 10 bytes, including tag, paramsize, and result */
if (size < TPM_HEADER_SIZE) {
dev_err(&chip->dev, "Unable to read header\n");
goto out;
}
expected = be32_to_cpu(*(__be32 *) (buf + 2));
if (expected > count) {
size = -EIO;
goto out;
}
size += recv_data(chip, &buf[TPM_HEADER_SIZE],
expected - TPM_HEADER_SIZE);
if (size < expected) {
dev_err(&chip->dev, "Unable to read remainder of result\n");
size = -ETIME;
goto out;
}
wait_for_tpm_stat(chip, TPM_STS_VALID, chip->timeout_c,
&priv->int_queue, false);
status = tpm_tis_status(chip);
if (status & TPM_STS_DATA_AVAIL) { /* retry? */
dev_err(&chip->dev, "Error left over data\n");
size = -EIO;
goto out;
}
out:
tpm_tis_ready(chip);
release_locality(chip, priv->locality, 0);
return size;
}
/*
* If interrupts are used (signaled by TPM_CHIP_FLAG_IRQ in chip->flags)
* tpm.c can skip polling for the data to be available as the interrupt is
* waited for here
*/
static int tpm_tis_send_data(struct tpm_chip *chip, u8 *buf, size_t len)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
int rc, status, burstcnt;
size_t count = 0;
bool itpm = priv->flags & TPM_TIS_ITPM_POSSIBLE;
if (request_locality(chip, 0) < 0)
return -EBUSY;
status = tpm_tis_status(chip);
if ((status & TPM_STS_COMMAND_READY) == 0) {
tpm_tis_ready(chip);
if (wait_for_tpm_stat
(chip, TPM_STS_COMMAND_READY, chip->timeout_b,
&priv->int_queue, false) < 0) {
rc = -ETIME;
goto out_err;
}
}
while (count < len - 1) {
burstcnt = get_burstcount(chip);
if (burstcnt < 0) {
	rc = burstcnt;
	goto out_err;
}
burstcnt = min_t(int, burstcnt, len - count - 1);
rc = tpm_tis_write_bytes(priv, TPM_DATA_FIFO(priv->locality),
burstcnt, buf + count);
if (rc < 0)
goto out_err;
count += burstcnt;
wait_for_tpm_stat(chip, TPM_STS_VALID, chip->timeout_c,
&priv->int_queue, false);
status = tpm_tis_status(chip);
if (!itpm && (status & TPM_STS_DATA_EXPECT) == 0) {
rc = -EIO;
goto out_err;
}
}
/* write last byte */
rc = tpm_tis_write8(priv, TPM_DATA_FIFO(priv->locality), buf[count]);
if (rc < 0)
goto out_err;
wait_for_tpm_stat(chip, TPM_STS_VALID, chip->timeout_c,
&priv->int_queue, false);
status = tpm_tis_status(chip);
if (!itpm && (status & TPM_STS_DATA_EXPECT) != 0) {
rc = -EIO;
goto out_err;
}
return 0;
out_err:
tpm_tis_ready(chip);
release_locality(chip, priv->locality, 0);
return rc;
}
static void disable_interrupts(struct tpm_chip *chip)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
u32 intmask;
int rc;
rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
if (rc < 0)
intmask = 0;
intmask &= ~TPM_GLOBAL_INT_ENABLE;
rc = tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
devm_free_irq(chip->dev.parent, priv->irq, chip);
priv->irq = 0;
chip->flags &= ~TPM_CHIP_FLAG_IRQ;
}
/*
* If interrupts are used (signaled by TPM_CHIP_FLAG_IRQ in chip->flags)
* tpm.c can skip polling for the data to be available as the interrupt is
* waited for here
*/
static int tpm_tis_send_main(struct tpm_chip *chip, u8 *buf, size_t len)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
int rc;
u32 ordinal;
unsigned long dur;
rc = tpm_tis_send_data(chip, buf, len);
if (rc < 0)
return rc;
/* go and do it */
rc = tpm_tis_write8(priv, TPM_STS(priv->locality), TPM_STS_GO);
if (rc < 0)
goto out_err;
if (chip->flags & TPM_CHIP_FLAG_IRQ) {
ordinal = be32_to_cpu(*((__be32 *) (buf + 6)));
if (chip->flags & TPM_CHIP_FLAG_TPM2)
dur = tpm2_calc_ordinal_duration(chip, ordinal);
else
dur = tpm_calc_ordinal_duration(chip, ordinal);
if (wait_for_tpm_stat
(chip, TPM_STS_DATA_AVAIL | TPM_STS_VALID, dur,
&priv->read_queue, false) < 0) {
rc = -ETIME;
goto out_err;
}
}
return len;
out_err:
tpm_tis_ready(chip);
release_locality(chip, priv->locality, 0);
return rc;
}
static int tpm_tis_send(struct tpm_chip *chip, u8 *buf, size_t len)
{
int rc, irq;
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
if (!(chip->flags & TPM_CHIP_FLAG_IRQ) || priv->irq_tested)
return tpm_tis_send_main(chip, buf, len);
/* Verify receipt of the expected IRQ */
irq = priv->irq;
priv->irq = 0;
chip->flags &= ~TPM_CHIP_FLAG_IRQ;
rc = tpm_tis_send_main(chip, buf, len);
priv->irq = irq;
chip->flags |= TPM_CHIP_FLAG_IRQ;
if (!priv->irq_tested)
msleep(1);
if (!priv->irq_tested)
disable_interrupts(chip);
priv->irq_tested = true;
return rc;
}
struct tis_vendor_timeout_override {
u32 did_vid;
unsigned long timeout_us[4];
};
static const struct tis_vendor_timeout_override vendor_timeout_overrides[] = {
/* Atmel 3204 */
{ 0x32041114, { (TIS_SHORT_TIMEOUT*1000), (TIS_LONG_TIMEOUT*1000),
(TIS_SHORT_TIMEOUT*1000), (TIS_SHORT_TIMEOUT*1000) } },
};
static bool tpm_tis_update_timeouts(struct tpm_chip *chip,
unsigned long *timeout_cap)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
int i, rc;
u32 did_vid;
rc = tpm_tis_read32(priv, TPM_DID_VID(0), &did_vid);
if (rc < 0)
return rc;
for (i = 0; i != ARRAY_SIZE(vendor_timeout_overrides); i++) {
if (vendor_timeout_overrides[i].did_vid != did_vid)
continue;
memcpy(timeout_cap, vendor_timeout_overrides[i].timeout_us,
sizeof(vendor_timeout_overrides[i].timeout_us));
return true;
}
return false;
}
/*
* Early probing for iTPM with STS_DATA_EXPECT flaw.
* Try sending command without itpm flag set and if that
* fails, repeat with itpm flag set.
*/
static int probe_itpm(struct tpm_chip *chip)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
int rc = 0;
u8 cmd_getticks[] = {
0x00, 0xc1, 0x00, 0x00, 0x00, 0x0a,
0x00, 0x00, 0x00, 0xf1
};
size_t len = sizeof(cmd_getticks);
bool itpm;
u16 vendor;
rc = tpm_tis_read16(priv, TPM_DID_VID(0), &vendor);
if (rc < 0)
return rc;
/* probe only iTPMS */
if (vendor != TPM_VID_INTEL)
return 0;
itpm = false;
rc = tpm_tis_send_data(chip, cmd_getticks, len);
if (rc == 0)
goto out;
tpm_tis_ready(chip);
release_locality(chip, priv->locality, 0);
itpm = true;
rc = tpm_tis_send_data(chip, cmd_getticks, len);
if (rc == 0) {
dev_info(&chip->dev, "Detected an iTPM.\n");
rc = 1;
} else
rc = -EFAULT;
out:
tpm_tis_ready(chip);
release_locality(chip, priv->locality, 0);
return rc;
}
static bool tpm_tis_req_canceled(struct tpm_chip *chip, u8 status)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
switch (priv->manufacturer_id) {
case TPM_VID_WINBOND:
return ((status == TPM_STS_VALID) ||
(status == (TPM_STS_VALID | TPM_STS_COMMAND_READY)));
case TPM_VID_STM:
return (status == (TPM_STS_VALID | TPM_STS_COMMAND_READY));
default:
return (status == TPM_STS_COMMAND_READY);
}
}
static irqreturn_t tis_int_handler(int dummy, void *dev_id)
{
struct tpm_chip *chip = dev_id;
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
u32 interrupt;
int i, rc;
rc = tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &interrupt);
if (rc < 0)
return IRQ_NONE;
if (interrupt == 0)
return IRQ_NONE;
priv->irq_tested = true;
if (interrupt & TPM_INTF_DATA_AVAIL_INT)
wake_up_interruptible(&priv->read_queue);
if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT)
for (i = 0; i < 5; i++)
if (check_locality(chip, i) >= 0)
break;
if (interrupt &
(TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_STS_VALID_INT |
TPM_INTF_CMD_READY_INT))
wake_up_interruptible(&priv->int_queue);
/* Clear interrupts handled with TPM_EOI */
rc = tpm_tis_write32(priv, TPM_INT_STATUS(priv->locality), interrupt);
if (rc < 0)
return IRQ_NONE;
tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &interrupt);
return IRQ_HANDLED;
}
/* Register the IRQ and issue a command that will cause an interrupt. If an
* irq is seen then leave the chip setup for IRQ operation, otherwise reverse
* everything and leave in polling mode. Returns 0 on success.
*/
static int tpm_tis_probe_irq_single(struct tpm_chip *chip, u32 intmask,
int flags, int irq)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
u8 original_int_vec;
int rc;
u32 int_status;
if (devm_request_irq(chip->dev.parent, irq, tis_int_handler, flags,
dev_name(&chip->dev), chip) != 0) {
dev_info(&chip->dev, "Unable to request irq: %d for probe\n",
irq);
return -1;
}
priv->irq = irq;
rc = tpm_tis_read8(priv, TPM_INT_VECTOR(priv->locality),
&original_int_vec);
if (rc < 0)
return rc;
rc = tpm_tis_write8(priv, TPM_INT_VECTOR(priv->locality), irq);
if (rc < 0)
return rc;
rc = tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &int_status);
if (rc < 0)
return rc;
/* Clear all existing */
rc = tpm_tis_write32(priv, TPM_INT_STATUS(priv->locality), int_status);
if (rc < 0)
return rc;
/* Turn on */
rc = tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality),
intmask | TPM_GLOBAL_INT_ENABLE);
if (rc < 0)
return rc;
priv->irq_tested = false;
/* Generate an interrupt by having the core call through to
* tpm_tis_send
*/
if (chip->flags & TPM_CHIP_FLAG_TPM2)
tpm2_gen_interrupt(chip);
else
tpm_gen_interrupt(chip);
/* tpm_tis_send will either confirm the interrupt is working or it
* will call disable_irq which undoes all of the above.
*/
if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) {
rc = tpm_tis_write8(priv, TPM_INT_VECTOR(priv->locality),
original_int_vec);
if (rc < 0)
return rc;
return 1;
}
return 0;
}
/* Try to find the IRQ the TPM is using. This is for legacy x86 systems that
* do not have ACPI/etc. We typically expect the interrupt to be declared if
* present.
*/
static void tpm_tis_probe_irq(struct tpm_chip *chip, u32 intmask)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
u8 original_int_vec;
int i, rc;
rc = tpm_tis_read8(priv, TPM_INT_VECTOR(priv->locality),
&original_int_vec);
if (rc < 0)
return;
if (!original_int_vec) {
if (IS_ENABLED(CONFIG_X86))
for (i = 3; i <= 15; i++)
if (!tpm_tis_probe_irq_single(chip, intmask, 0,
i))
return;
} else if (!tpm_tis_probe_irq_single(chip, intmask, 0,
original_int_vec))
return;
}
void tpm_tis_remove(struct tpm_chip *chip)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
u32 reg = TPM_INT_ENABLE(priv->locality);
u32 interrupt;
int rc;
rc = tpm_tis_read32(priv, reg, &interrupt);
if (rc < 0)
interrupt = 0;
tpm_tis_write32(priv, reg, ~TPM_GLOBAL_INT_ENABLE & interrupt);
release_locality(chip, priv->locality, 1);
}
EXPORT_SYMBOL_GPL(tpm_tis_remove);
static const struct tpm_class_ops tpm_tis = {
.flags = TPM_OPS_AUTO_STARTUP,
.status = tpm_tis_status,
.recv = tpm_tis_recv,
.send = tpm_tis_send,
.cancel = tpm_tis_ready,
.update_timeouts = tpm_tis_update_timeouts,
.req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
.req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
.req_canceled = tpm_tis_req_canceled,
};
int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
const struct tpm_tis_phy_ops *phy_ops,
acpi_handle acpi_dev_handle)
{
u32 vendor, intfcaps, intmask;
u8 rid;
int rc, probe;
struct tpm_chip *chip;
chip = tpmm_chip_alloc(dev, &tpm_tis);
if (IS_ERR(chip))
return PTR_ERR(chip);
#ifdef CONFIG_ACPI
chip->acpi_dev_handle = acpi_dev_handle;
#endif
/* Maximum timeouts */
chip->timeout_a = msecs_to_jiffies(TIS_TIMEOUT_A_MAX);
chip->timeout_b = msecs_to_jiffies(TIS_TIMEOUT_B_MAX);
chip->timeout_c = msecs_to_jiffies(TIS_TIMEOUT_C_MAX);
chip->timeout_d = msecs_to_jiffies(TIS_TIMEOUT_D_MAX);
priv->phy_ops = phy_ops;
dev_set_drvdata(&chip->dev, priv);
if (wait_startup(chip, 0) != 0) {
rc = -ENODEV;
goto out_err;
}
/* Take control of the TPM's interrupt hardware and shut it off */
rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
if (rc < 0)
goto out_err;
intmask |= TPM_INTF_CMD_READY_INT | TPM_INTF_LOCALITY_CHANGE_INT |
TPM_INTF_DATA_AVAIL_INT | TPM_INTF_STS_VALID_INT;
intmask &= ~TPM_GLOBAL_INT_ENABLE;
tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
if (request_locality(chip, 0) != 0) {
rc = -ENODEV;
goto out_err;
}
rc = tpm2_probe(chip);
if (rc)
goto out_err;
rc = tpm_tis_read32(priv, TPM_DID_VID(0), &vendor);
if (rc < 0)
goto out_err;
priv->manufacturer_id = vendor;
rc = tpm_tis_read8(priv, TPM_RID(0), &rid);
if (rc < 0)
goto out_err;
dev_info(dev, "%s TPM (device-id 0x%X, rev-id %d)\n",
(chip->flags & TPM_CHIP_FLAG_TPM2) ? "2.0" : "1.2",
vendor >> 16, rid);
if (!(priv->flags & TPM_TIS_ITPM_POSSIBLE)) {
probe = probe_itpm(chip);
if (probe < 0) {
rc = -ENODEV;
goto out_err;
}
if (!!probe)
priv->flags |= TPM_TIS_ITPM_POSSIBLE;
}
/* Figure out the capabilities */
rc = tpm_tis_read32(priv, TPM_INTF_CAPS(priv->locality), &intfcaps);
if (rc < 0)
goto out_err;
dev_dbg(dev, "TPM interface capabilities (0x%x):\n",
intfcaps);
if (intfcaps & TPM_INTF_BURST_COUNT_STATIC)
dev_dbg(dev, "\tBurst Count Static\n");
if (intfcaps & TPM_INTF_CMD_READY_INT)
dev_dbg(dev, "\tCommand Ready Int Support\n");
if (intfcaps & TPM_INTF_INT_EDGE_FALLING)
dev_dbg(dev, "\tInterrupt Edge Falling\n");
if (intfcaps & TPM_INTF_INT_EDGE_RISING)
dev_dbg(dev, "\tInterrupt Edge Rising\n");
if (intfcaps & TPM_INTF_INT_LEVEL_LOW)
dev_dbg(dev, "\tInterrupt Level Low\n");
if (intfcaps & TPM_INTF_INT_LEVEL_HIGH)
dev_dbg(dev, "\tInterrupt Level High\n");
if (intfcaps & TPM_INTF_LOCALITY_CHANGE_INT)
dev_dbg(dev, "\tLocality Change Int Support\n");
if (intfcaps & TPM_INTF_STS_VALID_INT)
dev_dbg(dev, "\tSts Valid Int Support\n");
if (intfcaps & TPM_INTF_DATA_AVAIL_INT)
dev_dbg(dev, "\tData Avail Int Support\n");
/* Very early on issue a command to the TPM in polling mode to make
* sure it works. May as well use that command to set the proper
* timeouts for the driver.
*/
if (tpm_get_timeouts(chip)) {
dev_err(dev, "Could not get TPM timeouts and durations\n");
rc = -ENODEV;
goto out_err;
}
/* INTERRUPT Setup */
init_waitqueue_head(&priv->read_queue);
init_waitqueue_head(&priv->int_queue);
if (irq != -1) {
if (irq) {
tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED,
irq);
if (!(chip->flags & TPM_CHIP_FLAG_IRQ))
dev_err(&chip->dev, FW_BUG
"TPM interrupt not working, polling instead\n");
} else {
tpm_tis_probe_irq(chip, intmask);
}
}
return tpm_chip_register(chip);
out_err:
tpm_tis_remove(chip);
return rc;
}
EXPORT_SYMBOL_GPL(tpm_tis_core_init);
#ifdef CONFIG_PM_SLEEP
static void tpm_tis_reenable_interrupts(struct tpm_chip *chip)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
u32 intmask;
int rc;
/* re-enable interrupts that the device may have lost or that the
* BIOS/firmware may have disabled
*/
rc = tpm_tis_write8(priv, TPM_INT_VECTOR(priv->locality), priv->irq);
if (rc < 0)
return;
rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
if (rc < 0)
return;
intmask |= TPM_INTF_CMD_READY_INT
| TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_DATA_AVAIL_INT
| TPM_INTF_STS_VALID_INT | TPM_GLOBAL_INT_ENABLE;
tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
}
int tpm_tis_resume(struct device *dev)
{
struct tpm_chip *chip = dev_get_drvdata(dev);
int ret;
if (chip->flags & TPM_CHIP_FLAG_IRQ)
tpm_tis_reenable_interrupts(chip);
ret = tpm_pm_resume(dev);
if (ret)
return ret;
/* TPM 1.2 requires self-test on resume. This function actually returns
* an error code, but for an unknown reason it isn't handled.
*/
if (!(chip->flags & TPM_CHIP_FLAG_TPM2))
tpm_do_selftest(chip);
return 0;
}
EXPORT_SYMBOL_GPL(tpm_tis_resume);
#endif
MODULE_AUTHOR("Leendert van Doorn (leendert@watson.ibm.com)");
MODULE_DESCRIPTION("TPM Driver");
MODULE_VERSION("2.0");
MODULE_LICENSE("GPL");

View File

@ -0,0 +1,156 @@
/*
* Copyright (C) 2005, 2006 IBM Corporation
* Copyright (C) 2014, 2015 Intel Corporation
*
* Authors:
* Leendert van Doorn <leendert@watson.ibm.com>
* Kylene Hall <kjhall@us.ibm.com>
*
* Maintained by: <tpmdd-devel@lists.sourceforge.net>
*
* Device driver for TCG/TCPA TPM (trusted platform module).
* Specifications at www.trustedcomputinggroup.org
*
* This device driver implements the TPM interface as defined in
* the TCG TPM Interface Spec version 1.2, revision 1.0.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation, version 2 of the
* License.
*/
#ifndef __TPM_TIS_CORE_H__
#define __TPM_TIS_CORE_H__
#include "tpm.h"
enum tis_access {
TPM_ACCESS_VALID = 0x80,
TPM_ACCESS_ACTIVE_LOCALITY = 0x20,
TPM_ACCESS_REQUEST_PENDING = 0x04,
TPM_ACCESS_REQUEST_USE = 0x02,
};
enum tis_status {
TPM_STS_VALID = 0x80,
TPM_STS_COMMAND_READY = 0x40,
TPM_STS_GO = 0x20,
TPM_STS_DATA_AVAIL = 0x10,
TPM_STS_DATA_EXPECT = 0x08,
};
enum tis_int_flags {
TPM_GLOBAL_INT_ENABLE = 0x80000000,
TPM_INTF_BURST_COUNT_STATIC = 0x100,
TPM_INTF_CMD_READY_INT = 0x080,
TPM_INTF_INT_EDGE_FALLING = 0x040,
TPM_INTF_INT_EDGE_RISING = 0x020,
TPM_INTF_INT_LEVEL_LOW = 0x010,
TPM_INTF_INT_LEVEL_HIGH = 0x008,
TPM_INTF_LOCALITY_CHANGE_INT = 0x004,
TPM_INTF_STS_VALID_INT = 0x002,
TPM_INTF_DATA_AVAIL_INT = 0x001,
};
enum tis_defaults {
TIS_MEM_LEN = 0x5000,
TIS_SHORT_TIMEOUT = 750, /* ms */
TIS_LONG_TIMEOUT = 2000, /* 2 sec */
};
/* Some timeout values are needed before it is known whether the chip is
* TPM 1.0 or TPM 2.0.
*/
#define TIS_TIMEOUT_A_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_A)
#define TIS_TIMEOUT_B_MAX max(TIS_LONG_TIMEOUT, TPM2_TIMEOUT_B)
#define TIS_TIMEOUT_C_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_C)
#define TIS_TIMEOUT_D_MAX max(TIS_SHORT_TIMEOUT, TPM2_TIMEOUT_D)
#define TPM_ACCESS(l) (0x0000 | ((l) << 12))
#define TPM_INT_ENABLE(l) (0x0008 | ((l) << 12))
#define TPM_INT_VECTOR(l) (0x000C | ((l) << 12))
#define TPM_INT_STATUS(l) (0x0010 | ((l) << 12))
#define TPM_INTF_CAPS(l) (0x0014 | ((l) << 12))
#define TPM_STS(l) (0x0018 | ((l) << 12))
#define TPM_STS3(l) (0x001b | ((l) << 12))
#define TPM_DATA_FIFO(l) (0x0024 | ((l) << 12))
#define TPM_DID_VID(l) (0x0F00 | ((l) << 12))
#define TPM_RID(l) (0x0F04 | ((l) << 12))
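/* Each locality occupies its own 4 KiB page, so the locality index lands in
 * bits 12-15 of the register offset: TPM_INT_ENABLE(0) is 0x0008 while
 * TPM_INT_ENABLE(1) is 0x1008. The phy_ops backend translates these offsets
 * into concrete bus accesses.
 */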
enum tpm_tis_flags {
TPM_TIS_ITPM_POSSIBLE = BIT(0),
};
struct tpm_tis_data {
u16 manufacturer_id;
int locality;
int irq;
bool irq_tested;
unsigned int flags;
wait_queue_head_t int_queue;
wait_queue_head_t read_queue;
const struct tpm_tis_phy_ops *phy_ops;
};
struct tpm_tis_phy_ops {
int (*read_bytes)(struct tpm_tis_data *data, u32 addr, u16 len,
u8 *result);
int (*write_bytes)(struct tpm_tis_data *data, u32 addr, u16 len,
u8 *value);
int (*read16)(struct tpm_tis_data *data, u32 addr, u16 *result);
int (*read32)(struct tpm_tis_data *data, u32 addr, u32 *result);
int (*write32)(struct tpm_tis_data *data, u32 addr, u32 src);
};
static inline int tpm_tis_read_bytes(struct tpm_tis_data *data, u32 addr,
u16 len, u8 *result)
{
return data->phy_ops->read_bytes(data, addr, len, result);
}
static inline int tpm_tis_read8(struct tpm_tis_data *data, u32 addr, u8 *result)
{
return data->phy_ops->read_bytes(data, addr, 1, result);
}
static inline int tpm_tis_read16(struct tpm_tis_data *data, u32 addr,
u16 *result)
{
return data->phy_ops->read16(data, addr, result);
}
static inline int tpm_tis_read32(struct tpm_tis_data *data, u32 addr,
u32 *result)
{
return data->phy_ops->read32(data, addr, result);
}
static inline int tpm_tis_write_bytes(struct tpm_tis_data *data, u32 addr,
u16 len, u8 *value)
{
return data->phy_ops->write_bytes(data, addr, len, value);
}
static inline int tpm_tis_write8(struct tpm_tis_data *data, u32 addr, u8 value)
{
return data->phy_ops->write_bytes(data, addr, 1, &value);
}
static inline int tpm_tis_write32(struct tpm_tis_data *data, u32 addr,
u32 value)
{
return data->phy_ops->write32(data, addr, value);
}
void tpm_tis_remove(struct tpm_chip *chip);
int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
const struct tpm_tis_phy_ops *phy_ops,
acpi_handle acpi_dev_handle);
#ifdef CONFIG_PM_SLEEP
int tpm_tis_resume(struct device *dev);
#endif
#endif

View File

@ -0,0 +1,272 @@
/*
* Copyright (C) 2015 Infineon Technologies AG
* Copyright (C) 2016 STMicroelectronics SAS
*
* Authors:
* Peter Huewe <peter.huewe@infineon.com>
* Christophe Ricard <christophe-h.ricard@st.com>
*
* Maintained by: <tpmdd-devel@lists.sourceforge.net>
*
* Device driver for TCG/TCPA TPM (trusted platform module).
* Specifications at www.trustedcomputinggroup.org
*
* This device driver implements the TPM interface as defined in
* the TCG TPM Interface Spec version 1.3, revision 27 via _raw/native
* SPI access_.
*
* It is based on the original tpm_tis device driver from Leendert van
* Doorn, Kylene Hall and Jarkko Sakkinen.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation, version 2 of the
* License.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/wait.h>
#include <linux/acpi.h>
#include <linux/freezer.h>
#include <linux/module.h>
#include <linux/spi/spi.h>
#include <linux/gpio.h>
#include <linux/of_irq.h>
#include <linux/of_gpio.h>
#include <linux/tpm.h>
#include "tpm.h"
#include "tpm_tis_core.h"
#define MAX_SPI_FRAMESIZE 64
struct tpm_tis_spi_phy {
struct tpm_tis_data priv;
struct spi_device *spi_device;
u8 tx_buf[MAX_SPI_FRAMESIZE + 4];
u8 rx_buf[MAX_SPI_FRAMESIZE + 4];
};
static inline struct tpm_tis_spi_phy *to_tpm_tis_spi_phy(struct tpm_tis_data *data)
{
return container_of(data, struct tpm_tis_spi_phy, priv);
}
static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr,
u16 len, u8 *result)
{
struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
int ret, i;
struct spi_message m;
struct spi_transfer spi_xfer = {
.tx_buf = phy->tx_buf,
.rx_buf = phy->rx_buf,
.len = 4,
};
if (len > MAX_SPI_FRAMESIZE)
return -ENOMEM;
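/* Build the 4-byte TIS-over-SPI header: bit 7 of the first byte selects a
 * read, its low bits encode the transfer size minus one, and the remaining
 * three bytes carry the 24-bit address (0xd4xxxx) of the TIS register.
 */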
phy->tx_buf[0] = 0x80 | (len - 1);
phy->tx_buf[1] = 0xd4;
phy->tx_buf[2] = (addr >> 8) & 0xFF;
phy->tx_buf[3] = addr & 0xFF;
spi_xfer.cs_change = 1;
spi_message_init(&m);
spi_message_add_tail(&spi_xfer, &m);
spi_bus_lock(phy->spi_device->master);
ret = spi_sync_locked(phy->spi_device, &m);
if (ret < 0)
goto exit;
memset(phy->tx_buf, 0, len);
/* According to TCG PTP specification, if there is no TPM present at
* all, then the design has a weak pull-up on MISO. If a TPM is not
* present, a pull-up on MISO means that the SB controller sees a 1,
* and will latch in 0xFF on the read.
*/
for (i = 0; (phy->rx_buf[0] & 0x01) == 0 && i < TPM_RETRY; i++) {
spi_xfer.len = 1;
spi_message_init(&m);
spi_message_add_tail(&spi_xfer, &m);
ret = spi_sync_locked(phy->spi_device, &m);
if (ret < 0)
goto exit;
}
spi_xfer.cs_change = 0;
spi_xfer.len = len;
spi_xfer.rx_buf = result;
spi_message_init(&m);
spi_message_add_tail(&spi_xfer, &m);
ret = spi_sync_locked(phy->spi_device, &m);
exit:
spi_bus_unlock(phy->spi_device->master);
return ret;
}
static int tpm_tis_spi_write_bytes(struct tpm_tis_data *data, u32 addr,
u16 len, u8 *value)
{
struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
int ret, i;
struct spi_message m;
struct spi_transfer spi_xfer = {
.tx_buf = phy->tx_buf,
.rx_buf = phy->rx_buf,
.len = 4,
};
if (len > MAX_SPI_FRAMESIZE)
return -ENOMEM;
phy->tx_buf[0] = len - 1;
phy->tx_buf[1] = 0xd4;
phy->tx_buf[2] = (addr >> 8) & 0xFF;
phy->tx_buf[3] = addr & 0xFF;
spi_xfer.cs_change = 1;
spi_message_init(&m);
spi_message_add_tail(&spi_xfer, &m);
spi_bus_lock(phy->spi_device->master);
ret = spi_sync_locked(phy->spi_device, &m);
if (ret < 0)
goto exit;
memset(phy->tx_buf, 0, len);
/* According to TCG PTP specification, if there is no TPM present at
* all, then the design has a weak pull-up on MISO. If a TPM is not
* present, a pull-up on MISO means that the SB controller sees a 1,
* and will latch in 0xFF on the read.
*/
for (i = 0; (phy->rx_buf[0] & 0x01) == 0 && i < TPM_RETRY; i++) {
spi_xfer.len = 1;
spi_message_init(&m);
spi_message_add_tail(&spi_xfer, &m);
ret = spi_sync_locked(phy->spi_device, &m);
if (ret < 0)
goto exit;
}
spi_xfer.len = len;
spi_xfer.cs_change = 0;
spi_xfer.tx_buf = value;
spi_message_init(&m);
spi_message_add_tail(&spi_xfer, &m);
ret = spi_sync_locked(phy->spi_device, &m);
exit:
spi_bus_unlock(phy->spi_device->master);
return ret;
}
static int tpm_tis_spi_read16(struct tpm_tis_data *data, u32 addr, u16 *result)
{
int rc;
rc = data->phy_ops->read_bytes(data, addr, sizeof(u16), (u8 *)result);
if (!rc)
*result = le16_to_cpu(*result);
return rc;
}
static int tpm_tis_spi_read32(struct tpm_tis_data *data, u32 addr, u32 *result)
{
int rc;
rc = data->phy_ops->read_bytes(data, addr, sizeof(u32), (u8 *)result);
if (!rc)
*result = le32_to_cpu(*result);
return rc;
}
static int tpm_tis_spi_write32(struct tpm_tis_data *data, u32 addr, u32 value)
{
value = cpu_to_le32(value);
return data->phy_ops->write_bytes(data, addr, sizeof(u32),
(u8 *)&value);
}
static const struct tpm_tis_phy_ops tpm_spi_phy_ops = {
.read_bytes = tpm_tis_spi_read_bytes,
.write_bytes = tpm_tis_spi_write_bytes,
.read16 = tpm_tis_spi_read16,
.read32 = tpm_tis_spi_read32,
.write32 = tpm_tis_spi_write32,
};
static int tpm_tis_spi_probe(struct spi_device *dev)
{
struct tpm_tis_spi_phy *phy;
phy = devm_kzalloc(&dev->dev, sizeof(struct tpm_tis_spi_phy),
GFP_KERNEL);
if (!phy)
return -ENOMEM;
phy->spi_device = dev;
return tpm_tis_core_init(&dev->dev, &phy->priv, -1, &tpm_spi_phy_ops,
NULL);
}
static SIMPLE_DEV_PM_OPS(tpm_tis_pm, tpm_pm_suspend, tpm_tis_resume);
static int tpm_tis_spi_remove(struct spi_device *dev)
{
struct tpm_chip *chip = spi_get_drvdata(dev);
tpm_chip_unregister(chip);
tpm_tis_remove(chip);
return 0;
}
static const struct spi_device_id tpm_tis_spi_id[] = {
{"tpm_tis_spi", 0},
{}
};
MODULE_DEVICE_TABLE(spi, tpm_tis_spi_id);
static const struct of_device_id of_tis_spi_match[] = {
{ .compatible = "st,st33htpm-spi", },
{ .compatible = "infineon,slb9670", },
{ .compatible = "tcg,tpm_tis-spi", },
{}
};
MODULE_DEVICE_TABLE(of, of_tis_spi_match);
static const struct acpi_device_id acpi_tis_spi_match[] = {
{"SMO0768", 0},
{}
};
MODULE_DEVICE_TABLE(acpi, acpi_tis_spi_match);
static struct spi_driver tpm_tis_spi_driver = {
.driver = {
.owner = THIS_MODULE,
.name = "tpm_tis_spi",
.pm = &tpm_tis_pm,
.of_match_table = of_match_ptr(of_tis_spi_match),
.acpi_match_table = ACPI_PTR(acpi_tis_spi_match),
},
.probe = tpm_tis_spi_probe,
.remove = tpm_tis_spi_remove,
.id_table = tpm_tis_spi_id,
};
module_spi_driver(tpm_tis_spi_driver);
MODULE_DESCRIPTION("TPM Driver for native SPI access");
MODULE_LICENSE("GPL");

View File

@ -0,0 +1,637 @@
/*
* Copyright (C) 2015, 2016 IBM Corporation
*
* Author: Stefan Berger <stefanb@us.ibm.com>
*
* Maintained by: <tpmdd-devel@lists.sourceforge.net>
*
* Device driver for vTPM (vTPM proxy driver)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation, version 2 of the
* License.
*
*/
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/uaccess.h>
#include <linux/wait.h>
#include <linux/miscdevice.h>
#include <linux/vtpm_proxy.h>
#include <linux/file.h>
#include <linux/anon_inodes.h>
#include <linux/poll.h>
#include <linux/compat.h>
#include "tpm.h"
#define VTPM_PROXY_REQ_COMPLETE_FLAG BIT(0)
struct proxy_dev {
struct tpm_chip *chip;
u32 flags; /* public API flags */
wait_queue_head_t wq;
struct mutex buf_lock; /* protect buffer and flags */
long state; /* internal state */
#define STATE_OPENED_FLAG BIT(0)
#define STATE_WAIT_RESPONSE_FLAG BIT(1) /* waiting for emulator response */
size_t req_len; /* length of queued TPM request */
size_t resp_len; /* length of queued TPM response */
u8 buffer[TPM_BUFSIZE]; /* request/response buffer */
struct work_struct work; /* task that retrieves TPM timeouts */
};
/* all supported flags */
#define VTPM_PROXY_FLAGS_ALL (VTPM_PROXY_FLAG_TPM2)
static struct workqueue_struct *workqueue;
static void vtpm_proxy_delete_device(struct proxy_dev *proxy_dev);
/*
* Functions related to 'server side'
*/
/**
* vtpm_proxy_fops_read - Read TPM commands on 'server side'
*
* Return value:
* Number of bytes read or negative error code
*/
static ssize_t vtpm_proxy_fops_read(struct file *filp, char __user *buf,
size_t count, loff_t *off)
{
struct proxy_dev *proxy_dev = filp->private_data;
size_t len;
int sig, rc;
sig = wait_event_interruptible(proxy_dev->wq,
proxy_dev->req_len != 0 ||
!(proxy_dev->state & STATE_OPENED_FLAG));
if (sig)
return -EINTR;
mutex_lock(&proxy_dev->buf_lock);
if (!(proxy_dev->state & STATE_OPENED_FLAG)) {
mutex_unlock(&proxy_dev->buf_lock);
return -EPIPE;
}
len = proxy_dev->req_len;
if (count < len) {
mutex_unlock(&proxy_dev->buf_lock);
pr_debug("Invalid size in recv: count=%zd, req_len=%zd\n",
count, len);
return -EIO;
}
rc = copy_to_user(buf, proxy_dev->buffer, len);
memset(proxy_dev->buffer, 0, len);
proxy_dev->req_len = 0;
if (!rc)
proxy_dev->state |= STATE_WAIT_RESPONSE_FLAG;
mutex_unlock(&proxy_dev->buf_lock);
if (rc)
return -EFAULT;
return len;
}
/**
* vtpm_proxy_fops_write - Write TPM responses on 'server side'
*
* Return value:
* Number of bytes written or a negative error value
*/
static ssize_t vtpm_proxy_fops_write(struct file *filp, const char __user *buf,
size_t count, loff_t *off)
{
struct proxy_dev *proxy_dev = filp->private_data;
mutex_lock(&proxy_dev->buf_lock);
if (!(proxy_dev->state & STATE_OPENED_FLAG)) {
mutex_unlock(&proxy_dev->buf_lock);
return -EPIPE;
}
if (count > sizeof(proxy_dev->buffer) ||
!(proxy_dev->state & STATE_WAIT_RESPONSE_FLAG)) {
mutex_unlock(&proxy_dev->buf_lock);
return -EIO;
}
proxy_dev->state &= ~STATE_WAIT_RESPONSE_FLAG;
proxy_dev->req_len = 0;
if (copy_from_user(proxy_dev->buffer, buf, count)) {
mutex_unlock(&proxy_dev->buf_lock);
return -EFAULT;
}
proxy_dev->resp_len = count;
mutex_unlock(&proxy_dev->buf_lock);
wake_up_interruptible(&proxy_dev->wq);
return count;
}
/*
* vtpm_proxy_fops_poll: Poll status on 'server side'
*
* Return value:
* Poll flags
*/
static unsigned int vtpm_proxy_fops_poll(struct file *filp, poll_table *wait)
{
struct proxy_dev *proxy_dev = filp->private_data;
unsigned ret;
poll_wait(filp, &proxy_dev->wq, wait);
ret = POLLOUT;
mutex_lock(&proxy_dev->buf_lock);
if (proxy_dev->req_len)
ret |= POLLIN | POLLRDNORM;
if (!(proxy_dev->state & STATE_OPENED_FLAG))
ret |= POLLHUP;
mutex_unlock(&proxy_dev->buf_lock);
return ret;
}
/*
* vtpm_proxy_fops_open - Open vTPM device on 'server side'
*
* Called when setting up the anonymous file descriptor
*/
static void vtpm_proxy_fops_open(struct file *filp)
{
struct proxy_dev *proxy_dev = filp->private_data;
proxy_dev->state |= STATE_OPENED_FLAG;
}
/**
* vtpm_proxy_fops_undo_open - counterpart to vtpm_proxy_fops_open
*
* Call to undo vtpm_proxy_fops_open
*/
static void vtpm_proxy_fops_undo_open(struct proxy_dev *proxy_dev)
{
mutex_lock(&proxy_dev->buf_lock);
proxy_dev->state &= ~STATE_OPENED_FLAG;
mutex_unlock(&proxy_dev->buf_lock);
/* no more TPM responses -- wake up anyone waiting for them */
wake_up_interruptible(&proxy_dev->wq);
}
/*
* vtpm_proxy_fops_release: Close 'server side'
*
* Return value:
* Always returns 0.
*/
static int vtpm_proxy_fops_release(struct inode *inode, struct file *filp)
{
struct proxy_dev *proxy_dev = filp->private_data;
filp->private_data = NULL;
vtpm_proxy_delete_device(proxy_dev);
return 0;
}
static const struct file_operations vtpm_proxy_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.read = vtpm_proxy_fops_read,
.write = vtpm_proxy_fops_write,
.poll = vtpm_proxy_fops_poll,
.release = vtpm_proxy_fops_release,
};
/*
* Functions invoked by the core TPM driver to send TPM commands to
* 'server side' and receive responses from there.
*/
/*
* Called when core TPM driver reads TPM responses from 'server side'
*
* Return value:
* Number of TPM response bytes read, negative error value otherwise
*/
static int vtpm_proxy_tpm_op_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev);
size_t len;
/* process gone ? */
mutex_lock(&proxy_dev->buf_lock);
if (!(proxy_dev->state & STATE_OPENED_FLAG)) {
mutex_unlock(&proxy_dev->buf_lock);
return -EPIPE;
}
len = proxy_dev->resp_len;
if (count < len) {
dev_err(&chip->dev,
"Invalid size in recv: count=%zd, resp_len=%zd\n",
count, len);
len = -EIO;
goto out;
}
memcpy(buf, proxy_dev->buffer, len);
proxy_dev->resp_len = 0;
out:
mutex_unlock(&proxy_dev->buf_lock);
return len;
}
/*
* Called when core TPM driver forwards TPM requests to 'server side'.
*
* Return value:
* 0 in case of success, negative error value otherwise.
*/
static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev);
int rc = 0;
if (count > sizeof(proxy_dev->buffer)) {
dev_err(&chip->dev,
"Invalid size in send: count=%zd, buffer size=%zd\n",
count, sizeof(proxy_dev->buffer));
return -EIO;
}
mutex_lock(&proxy_dev->buf_lock);
if (!(proxy_dev->state & STATE_OPENED_FLAG)) {
mutex_unlock(&proxy_dev->buf_lock);
return -EPIPE;
}
proxy_dev->resp_len = 0;
proxy_dev->req_len = count;
memcpy(proxy_dev->buffer, buf, count);
proxy_dev->state &= ~STATE_WAIT_RESPONSE_FLAG;
mutex_unlock(&proxy_dev->buf_lock);
wake_up_interruptible(&proxy_dev->wq);
return rc;
}
static void vtpm_proxy_tpm_op_cancel(struct tpm_chip *chip)
{
/* not supported */
}
static u8 vtpm_proxy_tpm_op_status(struct tpm_chip *chip)
{
struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev);
if (proxy_dev->resp_len)
return VTPM_PROXY_REQ_COMPLETE_FLAG;
return 0;
}
static bool vtpm_proxy_tpm_req_canceled(struct tpm_chip *chip, u8 status)
{
struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev);
bool ret;
mutex_lock(&proxy_dev->buf_lock);
ret = !(proxy_dev->state & STATE_OPENED_FLAG);
mutex_unlock(&proxy_dev->buf_lock);
return ret;
}
static const struct tpm_class_ops vtpm_proxy_tpm_ops = {
.flags = TPM_OPS_AUTO_STARTUP,
.recv = vtpm_proxy_tpm_op_recv,
.send = vtpm_proxy_tpm_op_send,
.cancel = vtpm_proxy_tpm_op_cancel,
.status = vtpm_proxy_tpm_op_status,
.req_complete_mask = VTPM_PROXY_REQ_COMPLETE_FLAG,
.req_complete_val = VTPM_PROXY_REQ_COMPLETE_FLAG,
.req_canceled = vtpm_proxy_tpm_req_canceled,
};
/*
* Code related to the startup of the TPM 2 and startup of TPM 1.2 +
* retrieval of timeouts and durations.
*/
static void vtpm_proxy_work(struct work_struct *work)
{
struct proxy_dev *proxy_dev = container_of(work, struct proxy_dev,
work);
int rc;
rc = tpm_chip_register(proxy_dev->chip);
if (rc)
goto err;
return;
err:
vtpm_proxy_fops_undo_open(proxy_dev);
}
/*
* vtpm_proxy_work_stop: make sure the work has finished
*
* This function is useful when user space closed the fd
* while the driver is still determining timeouts.
*/
static void vtpm_proxy_work_stop(struct proxy_dev *proxy_dev)
{
vtpm_proxy_fops_undo_open(proxy_dev);
flush_work(&proxy_dev->work);
}
/*
* vtpm_proxy_work_start: Schedule the work for TPM 1.2 & 2 initialization
*/
static inline void vtpm_proxy_work_start(struct proxy_dev *proxy_dev)
{
queue_work(workqueue, &proxy_dev->work);
}
/*
* Code related to creation and deletion of device pairs
*/
static struct proxy_dev *vtpm_proxy_create_proxy_dev(void)
{
struct proxy_dev *proxy_dev;
struct tpm_chip *chip;
int err;
proxy_dev = kzalloc(sizeof(*proxy_dev), GFP_KERNEL);
if (proxy_dev == NULL)
return ERR_PTR(-ENOMEM);
init_waitqueue_head(&proxy_dev->wq);
mutex_init(&proxy_dev->buf_lock);
INIT_WORK(&proxy_dev->work, vtpm_proxy_work);
chip = tpm_chip_alloc(NULL, &vtpm_proxy_tpm_ops);
if (IS_ERR(chip)) {
err = PTR_ERR(chip);
goto err_proxy_dev_free;
}
dev_set_drvdata(&chip->dev, proxy_dev);
proxy_dev->chip = chip;
return proxy_dev;
err_proxy_dev_free:
kfree(proxy_dev);
return ERR_PTR(err);
}
/*
* Undo what has been done in vtpm_proxy_create_proxy_dev
*/
static inline void vtpm_proxy_delete_proxy_dev(struct proxy_dev *proxy_dev)
{
put_device(&proxy_dev->chip->dev); /* frees chip */
kfree(proxy_dev);
}
/*
* Create a /dev/tpm%d and 'server side' file descriptor pair
*
* Return value:
* Returns file pointer on success, an error value otherwise
*/
static struct file *vtpm_proxy_create_device(
struct vtpm_proxy_new_dev *vtpm_new_dev)
{
struct proxy_dev *proxy_dev;
int rc, fd;
struct file *file;
if (vtpm_new_dev->flags & ~VTPM_PROXY_FLAGS_ALL)
return ERR_PTR(-EOPNOTSUPP);
proxy_dev = vtpm_proxy_create_proxy_dev();
if (IS_ERR(proxy_dev))
return ERR_CAST(proxy_dev);
proxy_dev->flags = vtpm_new_dev->flags;
/* setup an anonymous file for the server-side */
fd = get_unused_fd_flags(O_RDWR);
if (fd < 0) {
rc = fd;
goto err_delete_proxy_dev;
}
file = anon_inode_getfile("[vtpms]", &vtpm_proxy_fops, proxy_dev,
O_RDWR);
if (IS_ERR(file)) {
rc = PTR_ERR(file);
goto err_put_unused_fd;
}
/* from now on we can unwind with put_unused_fd() + fput() */
/* simulate an open() on the server side */
vtpm_proxy_fops_open(file);
if (proxy_dev->flags & VTPM_PROXY_FLAG_TPM2)
proxy_dev->chip->flags |= TPM_CHIP_FLAG_TPM2;
vtpm_proxy_work_start(proxy_dev);
vtpm_new_dev->fd = fd;
vtpm_new_dev->major = MAJOR(proxy_dev->chip->dev.devt);
vtpm_new_dev->minor = MINOR(proxy_dev->chip->dev.devt);
vtpm_new_dev->tpm_num = proxy_dev->chip->dev_num;
return file;
err_put_unused_fd:
put_unused_fd(fd);
err_delete_proxy_dev:
vtpm_proxy_delete_proxy_dev(proxy_dev);
return ERR_PTR(rc);
}
/*
* Counterpart to vtpm_proxy_create_device.
*/
static void vtpm_proxy_delete_device(struct proxy_dev *proxy_dev)
{
vtpm_proxy_work_stop(proxy_dev);
/*
* A client may hold the 'ops' lock, so let it know that the server
* side shuts down before we try to grab the 'ops' lock when
* unregistering the chip.
*/
vtpm_proxy_fops_undo_open(proxy_dev);
tpm_chip_unregister(proxy_dev->chip);
vtpm_proxy_delete_proxy_dev(proxy_dev);
}
/*
* Code related to the control device /dev/vtpmx
*/
/*
* vtpmx_fops_ioctl: ioctl on /dev/vtpmx
*
* Return value:
* Returns 0 on success, a negative error code otherwise.
*/
static long vtpmx_fops_ioctl(struct file *f, unsigned int ioctl,
unsigned long arg)
{
void __user *argp = (void __user *)arg;
struct vtpm_proxy_new_dev __user *vtpm_new_dev_p;
struct vtpm_proxy_new_dev vtpm_new_dev;
struct file *file;
switch (ioctl) {
case VTPM_PROXY_IOC_NEW_DEV:
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
vtpm_new_dev_p = argp;
if (copy_from_user(&vtpm_new_dev, vtpm_new_dev_p,
sizeof(vtpm_new_dev)))
return -EFAULT;
file = vtpm_proxy_create_device(&vtpm_new_dev);
if (IS_ERR(file))
return PTR_ERR(file);
if (copy_to_user(vtpm_new_dev_p, &vtpm_new_dev,
sizeof(vtpm_new_dev))) {
put_unused_fd(vtpm_new_dev.fd);
fput(file);
return -EFAULT;
}
fd_install(vtpm_new_dev.fd, file);
return 0;
default:
return -ENOIOCTLCMD;
}
}
#ifdef CONFIG_COMPAT
static long vtpmx_fops_compat_ioctl(struct file *f, unsigned int ioctl,
unsigned long arg)
{
return vtpmx_fops_ioctl(f, ioctl, (unsigned long)compat_ptr(arg));
}
#endif
static const struct file_operations vtpmx_fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = vtpmx_fops_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = vtpmx_fops_compat_ioctl,
#endif
.llseek = noop_llseek,
};
static struct miscdevice vtpmx_miscdev = {
.minor = MISC_DYNAMIC_MINOR,
.name = "vtpmx",
.fops = &vtpmx_fops,
};
static int vtpmx_init(void)
{
return misc_register(&vtpmx_miscdev);
}
static void vtpmx_cleanup(void)
{
misc_deregister(&vtpmx_miscdev);
}
static int __init vtpm_module_init(void)
{
int rc;
rc = vtpmx_init();
if (rc) {
pr_err("couldn't create vtpmx device\n");
return rc;
}
workqueue = create_workqueue("tpm-vtpm");
if (!workqueue) {
pr_err("couldn't create workqueue\n");
rc = -ENOMEM;
goto err_vtpmx_cleanup;
}
return 0;
err_vtpmx_cleanup:
vtpmx_cleanup();
return rc;
}
static void __exit vtpm_module_exit(void)
{
destroy_workqueue(workqueue);
vtpmx_cleanup();
}
module_init(vtpm_module_init);
module_exit(vtpm_module_exit);
MODULE_AUTHOR("Stefan Berger (stefanb@us.ibm.com)");
MODULE_DESCRIPTION("vTPM Driver");
MODULE_VERSION("0.1");
MODULE_LICENSE("GPL");

View File

@ -28,6 +28,8 @@ struct tpm_private {
unsigned int evtchn;
int ring_ref;
domid_t backend_id;
int irq;
wait_queue_head_t read_queue;
};
enum status_bits {
@ -39,7 +41,7 @@ enum status_bits {
static u8 vtpm_status(struct tpm_chip *chip)
{
struct tpm_private *priv = TPM_VPRIV(chip);
struct tpm_private *priv = dev_get_drvdata(&chip->dev);
switch (priv->shr->state) {
case VTPM_STATE_IDLE:
return VTPM_STATUS_IDLE | VTPM_STATUS_CANCELED;
@ -60,7 +62,7 @@ static bool vtpm_req_canceled(struct tpm_chip *chip, u8 status)
static void vtpm_cancel(struct tpm_chip *chip)
{
struct tpm_private *priv = TPM_VPRIV(chip);
struct tpm_private *priv = dev_get_drvdata(&chip->dev);
priv->shr->state = VTPM_STATE_CANCEL;
wmb();
notify_remote_via_evtchn(priv->evtchn);
@ -73,7 +75,7 @@ static unsigned int shr_data_offset(struct vtpm_shared_page *shr)
static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct tpm_private *priv = TPM_VPRIV(chip);
struct tpm_private *priv = dev_get_drvdata(&chip->dev);
struct vtpm_shared_page *shr = priv->shr;
unsigned int offset = shr_data_offset(shr);
@ -87,8 +89,8 @@ static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
return -EINVAL;
/* Wait for completion of any existing command or cancellation */
if (wait_for_tpm_stat(chip, VTPM_STATUS_IDLE, chip->vendor.timeout_c,
&chip->vendor.read_queue, true) < 0) {
if (wait_for_tpm_stat(chip, VTPM_STATUS_IDLE, chip->timeout_c,
&priv->read_queue, true) < 0) {
vtpm_cancel(chip);
return -ETIME;
}
@ -104,7 +106,7 @@ static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
duration = tpm_calc_ordinal_duration(chip, ordinal);
if (wait_for_tpm_stat(chip, VTPM_STATUS_IDLE, duration,
&chip->vendor.read_queue, true) < 0) {
&priv->read_queue, true) < 0) {
/* got a signal or timeout, try to cancel */
vtpm_cancel(chip);
return -ETIME;
@ -115,7 +117,7 @@ static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
struct tpm_private *priv = TPM_VPRIV(chip);
struct tpm_private *priv = dev_get_drvdata(&chip->dev);
struct vtpm_shared_page *shr = priv->shr;
unsigned int offset = shr_data_offset(shr);
size_t length = shr->length;
@ -124,8 +126,8 @@ static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
return -ECANCELED;
/* In theory the wait at the end of _send makes this one unnecessary */
if (wait_for_tpm_stat(chip, VTPM_STATUS_RESULT, chip->vendor.timeout_c,
&chip->vendor.read_queue, true) < 0) {
if (wait_for_tpm_stat(chip, VTPM_STATUS_RESULT, chip->timeout_c,
&priv->read_queue, true) < 0) {
vtpm_cancel(chip);
return -ETIME;
}
@ -161,7 +163,7 @@ static irqreturn_t tpmif_interrupt(int dummy, void *dev_id)
switch (priv->shr->state) {
case VTPM_STATE_IDLE:
case VTPM_STATE_FINISH:
wake_up_interruptible(&priv->chip->vendor.read_queue);
wake_up_interruptible(&priv->read_queue);
break;
case VTPM_STATE_SUBMIT:
case VTPM_STATE_CANCEL:
@ -179,10 +181,10 @@ static int setup_chip(struct device *dev, struct tpm_private *priv)
if (IS_ERR(chip))
return PTR_ERR(chip);
init_waitqueue_head(&chip->vendor.read_queue);
init_waitqueue_head(&priv->read_queue);
priv->chip = chip;
TPM_VPRIV(chip) = priv;
dev_set_drvdata(&chip->dev, priv);
return 0;
}
@ -217,7 +219,7 @@ static int setup_ring(struct xenbus_device *dev, struct tpm_private *priv)
xenbus_dev_fatal(dev, rv, "allocating TPM irq");
return rv;
}
priv->chip->vendor.irq = rv;
priv->irq = rv;
again:
rv = xenbus_transaction_start(&xbt);
@ -277,8 +279,8 @@ static void ring_free(struct tpm_private *priv)
else
free_page((unsigned long)priv->shr);
if (priv->chip && priv->chip->vendor.irq)
unbind_from_irqhandler(priv->chip->vendor.irq, priv);
if (priv->irq)
unbind_from_irqhandler(priv->irq, priv);
kfree(priv);
}
@ -318,10 +320,10 @@ static int tpmfront_probe(struct xenbus_device *dev,
static int tpmfront_remove(struct xenbus_device *dev)
{
struct tpm_chip *chip = dev_get_drvdata(&dev->dev);
struct tpm_private *priv = TPM_VPRIV(chip);
struct tpm_private *priv = dev_get_drvdata(&chip->dev);
tpm_chip_unregister(chip);
ring_free(priv);
TPM_VPRIV(chip) = NULL;
dev_set_drvdata(&chip->dev, NULL);
return 0;
}

View File

@ -51,7 +51,7 @@ struct krb5_principal {
struct krb5_tagged_data {
/* for tag value, see /usr/include/krb5/krb5.h
* - KRB5_AUTHDATA_* for auth data
* -
* -
*/
s32 tag;
u32 data_len;

View File

@ -206,6 +206,7 @@ extern bool has_ns_capability_noaudit(struct task_struct *t,
struct user_namespace *ns, int cap);
extern bool capable(int cap);
extern bool ns_capable(struct user_namespace *ns, int cap);
extern bool ns_capable_noaudit(struct user_namespace *ns, int cap);
#else
static inline bool has_capability(struct task_struct *t, int cap)
{
@ -233,6 +234,10 @@ static inline bool ns_capable(struct user_namespace *ns, int cap)
{
return true;
}
static inline bool ns_capable_noaudit(struct user_namespace *ns, int cap)
{
return true;
}
#endif /* CONFIG_MULTIUSER */
extern bool capable_wrt_inode_uidgid(const struct inode *inode, int cap);
extern bool file_ns_capable(const struct file *file, struct user_namespace *ns, int cap);

View File

@ -1,6 +1,6 @@
/*
* STMicroelectronics TPM Linux driver for TPM 1.2 ST33ZP24
* Copyright (C) 2009 - 2015 STMicroelectronics
* Copyright (C) 2009 - 2016 STMicroelectronics
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by

View File

@ -28,19 +28,13 @@ struct seccomp {
};
#ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
extern int __secure_computing(void);
static inline int secure_computing(void)
extern int __secure_computing(const struct seccomp_data *sd);
static inline int secure_computing(const struct seccomp_data *sd)
{
if (unlikely(test_thread_flag(TIF_SECCOMP)))
return __secure_computing();
return __secure_computing(sd);
return 0;
}
#define SECCOMP_PHASE1_OK 0
#define SECCOMP_PHASE1_SKIP 1
extern u32 seccomp_phase1(struct seccomp_data *sd);
int seccomp_phase2(u32 phase1_result);
#else
extern void secure_computing_strict(int this_syscall);
#endif
@ -61,7 +55,7 @@ struct seccomp { };
struct seccomp_filter { };
#ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
static inline int secure_computing(void) { return 0; }
static inline int secure_computing(struct seccomp_data *sd) { return 0; }
#else
static inline void secure_computing_strict(int this_syscall) { return; }
#endif

View File

@ -33,7 +33,12 @@ struct tpm_chip;
struct trusted_key_payload;
struct trusted_key_options;
enum TPM_OPS_FLAGS {
TPM_OPS_AUTO_STARTUP = BIT(0),
};
struct tpm_class_ops {
unsigned int flags;
const u8 req_complete_mask;
const u8 req_complete_val;
bool (*req_canceled)(struct tpm_chip *chip, u8 status);

91
include/net/calipso.h Normal file
View File

@ -0,0 +1,91 @@
/*
* CALIPSO - Common Architecture Label IPv6 Security Option
*
* This is an implementation of the CALIPSO protocol as specified in
* RFC 5570.
*
* Authors: Paul Moore <paul@paul-moore.com>
* Huw Davies <huw@codeweavers.com>
*
*/
/*
* (c) Copyright Hewlett-Packard Development Company, L.P., 2006
* (c) Copyright Huw Davies <huw@codeweavers.com>, 2015
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
* the GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _CALIPSO_H
#define _CALIPSO_H
#include <linux/types.h>
#include <linux/rcupdate.h>
#include <linux/list.h>
#include <linux/net.h>
#include <linux/skbuff.h>
#include <net/netlabel.h>
#include <net/request_sock.h>
#include <linux/atomic.h>
#include <asm/unaligned.h>
/* known doi values */
#define CALIPSO_DOI_UNKNOWN 0x00000000
/* doi mapping types */
#define CALIPSO_MAP_UNKNOWN 0
#define CALIPSO_MAP_PASS 2
/*
* CALIPSO DOI definitions
*/
/* DOI definition struct */
struct calipso_doi {
u32 doi;
u32 type;
atomic_t refcount;
struct list_head list;
struct rcu_head rcu;
};
/*
* Sysctl Variables
*/
extern int calipso_cache_enabled;
extern int calipso_cache_bucketsize;
#ifdef CONFIG_NETLABEL
int __init calipso_init(void);
void calipso_exit(void);
bool calipso_validate(const struct sk_buff *skb, const unsigned char *option);
#else
static inline int __init calipso_init(void)
{
return 0;
}
static inline void calipso_exit(void)
{
}
static inline bool calipso_validate(const struct sk_buff *skb,
const unsigned char *option)
{
return true;
}
#endif /* CONFIG_NETLABEL */
#endif /* _CALIPSO_H */

View File

@ -97,7 +97,12 @@ struct inet_request_sock {
u32 ir_mark;
union {
struct ip_options_rcu *opt;
struct sk_buff *pktopts;
#if IS_ENABLED(CONFIG_IPV6)
struct {
struct ipv6_txoptions *ipv6_opt;
struct sk_buff *pktopts;
};
#endif
};
};

View File

@ -313,11 +313,19 @@ struct ipv6_txoptions *ipv6_renew_options(struct sock *sk,
int newtype,
struct ipv6_opt_hdr __user *newopt,
int newoptlen);
struct ipv6_txoptions *
ipv6_renew_options_kern(struct sock *sk,
struct ipv6_txoptions *opt,
int newtype,
struct ipv6_opt_hdr *newopt,
int newoptlen);
struct ipv6_txoptions *ipv6_fixup_options(struct ipv6_txoptions *opt_space,
struct ipv6_txoptions *opt);
bool ipv6_opt_accepted(const struct sock *sk, const struct sk_buff *skb,
const struct inet6_skb_parm *opt);
struct ipv6_txoptions *ipv6_update_options(struct sock *sk,
struct ipv6_txoptions *opt);
static inline bool ipv6_accept_ra(struct inet6_dev *idev)
{
@ -943,7 +951,7 @@ enum {
int ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset, int target,
unsigned short *fragoff, int *fragflg);
int ipv6_find_tlv(struct sk_buff *skb, int offset, int type);
int ipv6_find_tlv(const struct sk_buff *skb, int offset, int type);
struct in6_addr *fl6_update_dst(struct flowi6 *fl6,
const struct ipv6_txoptions *opt,

View File

@ -40,6 +40,7 @@
#include <linux/atomic.h>
struct cipso_v4_doi;
struct calipso_doi;
/*
* NetLabel - A management interface for maintaining network packet label
@ -94,6 +95,8 @@ struct cipso_v4_doi;
#define NETLBL_NLTYPE_UNLABELED_NAME "NLBL_UNLBL"
#define NETLBL_NLTYPE_ADDRSELECT 6
#define NETLBL_NLTYPE_ADDRSELECT_NAME "NLBL_ADRSEL"
#define NETLBL_NLTYPE_CALIPSO 7
#define NETLBL_NLTYPE_CALIPSO_NAME "NLBL_CALIPSO"
/*
* NetLabel - Kernel API for accessing the network packet label mappings.
@ -216,6 +219,63 @@ struct netlbl_lsm_secattr {
} attr;
};
/**
* struct netlbl_calipso_ops - NetLabel CALIPSO operations
* @doi_add: add a CALIPSO DOI
* @doi_free: free a CALIPSO DOI
* @doi_remove: remove a CALIPSO DOI
* @doi_getdef: returns a reference to a DOI
* @doi_putdef: releases a reference of a DOI
* @doi_walk: enumerate the DOI list
* @sock_getattr: retrieve the socket's attr
* @sock_setattr: set the socket's attr
* @sock_delattr: remove the socket's attr
* @req_setattr: set the req socket's attr
* @req_delattr: remove the req socket's attr
* @opt_getattr: retrieve attr from memory block
* @skbuff_optptr: find option in packet
* @skbuff_setattr: set the skbuff's attr
* @skbuff_delattr: remove the skbuff's attr
* @cache_invalidate: invalidate cache
* @cache_add: add cache entry
*
* Description:
* This structure is filled out by the CALIPSO engine and passed
* to the NetLabel core via a call to netlbl_calipso_ops_register().
* It enables the CALIPSO engine (and hence IPv6) to be compiled
* as a module.
*/
struct netlbl_calipso_ops {
int (*doi_add)(struct calipso_doi *doi_def,
struct netlbl_audit *audit_info);
void (*doi_free)(struct calipso_doi *doi_def);
int (*doi_remove)(u32 doi, struct netlbl_audit *audit_info);
struct calipso_doi *(*doi_getdef)(u32 doi);
void (*doi_putdef)(struct calipso_doi *doi_def);
int (*doi_walk)(u32 *skip_cnt,
int (*callback)(struct calipso_doi *doi_def, void *arg),
void *cb_arg);
int (*sock_getattr)(struct sock *sk,
struct netlbl_lsm_secattr *secattr);
int (*sock_setattr)(struct sock *sk,
const struct calipso_doi *doi_def,
const struct netlbl_lsm_secattr *secattr);
void (*sock_delattr)(struct sock *sk);
int (*req_setattr)(struct request_sock *req,
const struct calipso_doi *doi_def,
const struct netlbl_lsm_secattr *secattr);
void (*req_delattr)(struct request_sock *req);
int (*opt_getattr)(const unsigned char *calipso,
struct netlbl_lsm_secattr *secattr);
unsigned char *(*skbuff_optptr)(const struct sk_buff *skb);
int (*skbuff_setattr)(struct sk_buff *skb,
const struct calipso_doi *doi_def,
const struct netlbl_lsm_secattr *secattr);
int (*skbuff_delattr)(struct sk_buff *skb);
void (*cache_invalidate)(void);
int (*cache_add)(const unsigned char *calipso_ptr,
const struct netlbl_lsm_secattr *secattr);
};
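/* A registration sketch (handler names below are illustrative placeholders):
 *
 *	static const struct netlbl_calipso_ops my_calipso_ops = {
 *		.doi_add	= my_doi_add,
 *		.doi_free	= my_doi_free,
 *		.skbuff_optptr	= my_skbuff_optptr,
 *		...
 *	};
 *	netlbl_calipso_ops_register(&my_calipso_ops);
 */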
/*
* LSM security attribute operations (inline)
*/
@ -385,6 +445,14 @@ int netlbl_cfg_cipsov4_map_add(u32 doi,
const struct in_addr *addr,
const struct in_addr *mask,
struct netlbl_audit *audit_info);
int netlbl_cfg_calipso_add(struct calipso_doi *doi_def,
struct netlbl_audit *audit_info);
void netlbl_cfg_calipso_del(u32 doi, struct netlbl_audit *audit_info);
int netlbl_cfg_calipso_map_add(u32 doi,
const char *domain,
const struct in6_addr *addr,
const struct in6_addr *mask,
struct netlbl_audit *audit_info);
/*
* LSM security attribute operations
*/
@ -405,6 +473,12 @@ int netlbl_catmap_setlong(struct netlbl_lsm_catmap **catmap,
unsigned long bitmap,
gfp_t flags);
/* Bitmap functions
*/
int netlbl_bitmap_walk(const unsigned char *bitmap, u32 bitmap_len,
u32 offset, u8 state);
void netlbl_bitmap_setbit(unsigned char *bitmap, u32 bit, u8 state);
/*
* LSM protocol operations (NetLabel LSM/kernel API)
*/
@ -427,13 +501,13 @@ int netlbl_skbuff_setattr(struct sk_buff *skb,
int netlbl_skbuff_getattr(const struct sk_buff *skb,
u16 family,
struct netlbl_lsm_secattr *secattr);
void netlbl_skbuff_err(struct sk_buff *skb, int error, int gateway);
void netlbl_skbuff_err(struct sk_buff *skb, u16 family, int error, int gateway);
/*
* LSM label mapping cache operations
*/
void netlbl_cache_invalidate(void);
int netlbl_cache_add(const struct sk_buff *skb,
int netlbl_cache_add(const struct sk_buff *skb, u16 family,
const struct netlbl_lsm_secattr *secattr);
/*
@ -495,6 +569,24 @@ static inline int netlbl_cfg_cipsov4_map_add(u32 doi,
{
return -ENOSYS;
}
static inline int netlbl_cfg_calipso_add(struct calipso_doi *doi_def,
struct netlbl_audit *audit_info)
{
return -ENOSYS;
}
static inline void netlbl_cfg_calipso_del(u32 doi,
struct netlbl_audit *audit_info)
{
return;
}
static inline int netlbl_cfg_calipso_map_add(u32 doi,
const char *domain,
const struct in6_addr *addr,
const struct in6_addr *mask,
struct netlbl_audit *audit_info)
{
return -ENOSYS;
}
static inline int netlbl_catmap_walk(struct netlbl_lsm_catmap *catmap,
u32 offset)
{
@ -586,7 +678,7 @@ static inline void netlbl_cache_invalidate(void)
{
return;
}
static inline int netlbl_cache_add(const struct sk_buff *skb,
static inline int netlbl_cache_add(const struct sk_buff *skb, u16 family,
const struct netlbl_lsm_secattr *secattr)
{
return 0;
@ -598,4 +690,7 @@ static inline struct audit_buffer *netlbl_audit_start(int type,
}
#endif /* CONFIG_NETLABEL */
const struct netlbl_calipso_ops *
netlbl_calipso_ops_register(const struct netlbl_calipso_ops *ops);
#endif /* _NETLABEL_H */

View File

@ -455,6 +455,7 @@ header-y += virtio_scsi.h
header-y += virtio_types.h
header-y += vm_sockets.h
header-y += vt.h
header-y += vtpm_proxy.h
header-y += wait.h
header-y += wanrouter.h
header-y += watchdog.h

View File

@ -130,6 +130,8 @@
#define AUDIT_MAC_IPSEC_EVENT 1415 /* Audit an IPSec event */
#define AUDIT_MAC_UNLBL_STCADD 1416 /* NetLabel: add a static label */
#define AUDIT_MAC_UNLBL_STCDEL 1417 /* NetLabel: del a static label */
#define AUDIT_MAC_CALIPSO_ADD 1418 /* NetLabel: add CALIPSO DOI entry */
#define AUDIT_MAC_CALIPSO_DEL 1419 /* NetLabel: del CALIPSO DOI entry */
#define AUDIT_FIRST_KERN_ANOM_MSG 1700
#define AUDIT_LAST_KERN_ANOM_MSG 1799

View File

@ -143,6 +143,7 @@ struct in6_flowlabel_req {
#define IPV6_TLV_PAD1 0
#define IPV6_TLV_PADN 1
#define IPV6_TLV_ROUTERALERT 5
#define IPV6_TLV_CALIPSO 7 /* RFC 5570 */
#define IPV6_TLV_JUMBO 194
#define IPV6_TLV_HAO 201 /* home address option */

View File

@ -0,0 +1,36 @@
/*
* Definitions for the VTPM proxy driver
* Copyright (c) 2015, 2016, IBM Corporation
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*/
#ifndef _UAPI_LINUX_VTPM_PROXY_H
#define _UAPI_LINUX_VTPM_PROXY_H
#include <linux/types.h>
#include <linux/ioctl.h>
/* ioctls */
struct vtpm_proxy_new_dev {
__u32 flags; /* input */
__u32 tpm_num; /* output */
__u32 fd; /* output */
__u32 major; /* output */
__u32 minor; /* output */
};
/* above flags */
#define VTPM_PROXY_FLAG_TPM2 1 /* emulator is TPM 2 */
#define VTPM_PROXY_IOC_NEW_DEV _IOWR(0xa1, 0x00, struct vtpm_proxy_new_dev)
#endif /* _UAPI_LINUX_VTPM_PROXY_H */
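As a rough sketch of how this interface is meant to be driven from user space (not part of the patch; the 4096-byte buffer, the canned TPM 1.2 failure response and the missing error handling are simplifications for illustration), an emulator opens /dev/vtpmx, creates a device pair with VTPM_PROXY_IOC_NEW_DEV, then reads commands from and writes responses to the returned file descriptor:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vtpm_proxy.h>

int main(void)
{
	struct vtpm_proxy_new_dev new_dev = { .flags = 0 };	/* TPM 1.2 front end */
	unsigned char buf[4096];
	ssize_t n;
	int ctrl_fd;

	ctrl_fd = open("/dev/vtpmx", O_RDWR);
	if (ctrl_fd < 0 || ioctl(ctrl_fd, VTPM_PROXY_IOC_NEW_DEV, &new_dev) < 0)
		return 1;

	/* The kernel has now created /dev/tpm<tpm_num>; new_dev.fd is the
	 * 'server side' end of the pair. */
	printf("serving /dev/tpm%u\n", new_dev.tpm_num);

	for (;;) {
		/* Canned TPM 1.2 error response (TPM_TAG_RSP_COMMAND, length
		 * 10, TPM_FAIL); a real emulator would execute the command
		 * and answer properly, otherwise chip registration fails. */
		static const unsigned char rsp[] = {
			0x00, 0xc4, 0x00, 0x00, 0x00, 0x0a,
			0x00, 0x00, 0x00, 0x09,
		};

		/* Blocks until the core driver forwards a TPM command. */
		n = read(new_dev.fd, buf, sizeof(buf));
		if (n <= 0)
			break;

		write(new_dev.fd, rsp, sizeof(rsp));
	}
	return 0;
}
The same flow serves a TPM 2 emulator by setting VTPM_PROXY_FLAG_TPM2 in flags.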

View File

@ -361,6 +361,24 @@ bool has_capability_noaudit(struct task_struct *t, int cap)
return has_ns_capability_noaudit(t, &init_user_ns, cap);
}
static bool ns_capable_common(struct user_namespace *ns, int cap, bool audit)
{
int capable;
if (unlikely(!cap_valid(cap))) {
pr_crit("capable() called with invalid cap=%u\n", cap);
BUG();
}
capable = audit ? security_capable(current_cred(), ns, cap) :
security_capable_noaudit(current_cred(), ns, cap);
if (capable == 0) {
current->flags |= PF_SUPERPRIV;
return true;
}
return false;
}
/**
* ns_capable - Determine if the current task has a superior capability in effect
* @ns: The user namespace we want the capability in
@ -374,19 +392,27 @@ bool has_capability_noaudit(struct task_struct *t, int cap)
*/
bool ns_capable(struct user_namespace *ns, int cap)
{
if (unlikely(!cap_valid(cap))) {
pr_crit("capable() called with invalid cap=%u\n", cap);
BUG();
}
if (security_capable(current_cred(), ns, cap) == 0) {
current->flags |= PF_SUPERPRIV;
return true;
}
return false;
return ns_capable_common(ns, cap, true);
}
EXPORT_SYMBOL(ns_capable);
/**
* ns_capable_noaudit - Determine if the current task has a superior capability
* (unaudited) in effect
* @ns: The user namespace we want the capability in
* @cap: The capability to be tested for
*
* Return true if the current task has the given superior capability currently
* available for use, false if not.
*
* This sets PF_SUPERPRIV on the task if the capability is available on the
* assumption that it's about to be used.
*/
bool ns_capable_noaudit(struct user_namespace *ns, int cap)
{
return ns_capable_common(ns, cap, false);
}
EXPORT_SYMBOL(ns_capable_noaudit);
/**
* capable - Determine if the current task has a superior capability in effect

View File

@ -173,7 +173,7 @@ static int seccomp_check_filter(struct sock_filter *filter, unsigned int flen)
*
* Returns valid seccomp BPF response codes.
*/
static u32 seccomp_run_filters(struct seccomp_data *sd)
static u32 seccomp_run_filters(const struct seccomp_data *sd)
{
struct seccomp_data sd_local;
u32 ret = SECCOMP_RET_ALLOW;
@ -554,20 +554,10 @@ void secure_computing_strict(int this_syscall)
BUG();
}
#else
int __secure_computing(void)
{
u32 phase1_result = seccomp_phase1(NULL);
if (likely(phase1_result == SECCOMP_PHASE1_OK))
return 0;
else if (likely(phase1_result == SECCOMP_PHASE1_SKIP))
return -1;
else
return seccomp_phase2(phase1_result);
}
#ifdef CONFIG_SECCOMP_FILTER
static u32 __seccomp_phase1_filter(int this_syscall, struct seccomp_data *sd)
static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
const bool recheck_after_trace)
{
u32 filter_ret, action;
int data;
@ -599,10 +589,46 @@ static u32 __seccomp_phase1_filter(int this_syscall, struct seccomp_data *sd)
goto skip;
case SECCOMP_RET_TRACE:
return filter_ret; /* Save the rest for phase 2. */
/* We've been put in this state by the ptracer already. */
if (recheck_after_trace)
return 0;
/* ENOSYS these calls if there is no tracer attached. */
if (!ptrace_event_enabled(current, PTRACE_EVENT_SECCOMP)) {
syscall_set_return_value(current,
task_pt_regs(current),
-ENOSYS, 0);
goto skip;
}
/* Allow the BPF to provide the event message */
ptrace_event(PTRACE_EVENT_SECCOMP, data);
/*
* The delivery of a fatal signal during event
* notification may silently skip tracer notification.
* Terminating the task now avoids executing a system
* call that may not be intended.
*/
if (fatal_signal_pending(current))
do_exit(SIGSYS);
/* Check if the tracer forced the syscall to be skipped. */
this_syscall = syscall_get_nr(current, task_pt_regs(current));
if (this_syscall < 0)
goto skip;
/*
* Recheck the syscall, since it may have changed. This
* intentionally uses a NULL struct seccomp_data to force
* a reload of all registers. This does not goto skip since
* a skip would have already been reported.
*/
if (__seccomp_filter(this_syscall, NULL, true))
return -1;
return 0;
case SECCOMP_RET_ALLOW:
return SECCOMP_PHASE1_OK;
return 0;
case SECCOMP_RET_KILL:
default:
@ -614,96 +640,38 @@ static u32 __seccomp_phase1_filter(int this_syscall, struct seccomp_data *sd)
skip:
audit_seccomp(this_syscall, 0, action);
return SECCOMP_PHASE1_SKIP;
return -1;
}
#else
static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
const bool recheck_after_trace)
{
BUG();
}
#endif
/**
* seccomp_phase1() - run fast path seccomp checks on the current syscall
* @arg sd: The seccomp_data or NULL
*
* This only reads pt_regs via the syscall_xyz helpers. The only change
* it will make to pt_regs is via syscall_set_return_value, and it will
* only do that if it returns SECCOMP_PHASE1_SKIP.
*
* If sd is provided, it will not read pt_regs at all.
*
* It may also call do_exit or force a signal; these actions must be
* safe.
*
* If it returns SECCOMP_PHASE1_OK, the syscall passes checks and should
* be processed normally.
*
* If it returns SECCOMP_PHASE1_SKIP, then the syscall should not be
* invoked. In this case, seccomp_phase1 will have set the return value
* using syscall_set_return_value.
*
* If it returns anything else, then the return value should be passed
* to seccomp_phase2 from a context in which ptrace hooks are safe.
*/
u32 seccomp_phase1(struct seccomp_data *sd)
int __secure_computing(const struct seccomp_data *sd)
{
int mode = current->seccomp.mode;
int this_syscall = sd ? sd->nr :
syscall_get_nr(current, task_pt_regs(current));
int this_syscall;
if (config_enabled(CONFIG_CHECKPOINT_RESTORE) &&
unlikely(current->ptrace & PT_SUSPEND_SECCOMP))
return SECCOMP_PHASE1_OK;
return 0;
this_syscall = sd ? sd->nr :
syscall_get_nr(current, task_pt_regs(current));
switch (mode) {
case SECCOMP_MODE_STRICT:
__secure_computing_strict(this_syscall); /* may call do_exit */
return SECCOMP_PHASE1_OK;
#ifdef CONFIG_SECCOMP_FILTER
return 0;
case SECCOMP_MODE_FILTER:
return __seccomp_phase1_filter(this_syscall, sd);
#endif
return __seccomp_filter(this_syscall, sd, false);
default:
BUG();
}
}
/**
* seccomp_phase2() - finish slow path seccomp work for the current syscall
* @phase1_result: The return value from seccomp_phase1()
*
* This must be called from a context in which ptrace hooks can be used.
*
* Returns 0 if the syscall should be processed or -1 to skip the syscall.
*/
int seccomp_phase2(u32 phase1_result)
{
struct pt_regs *regs = task_pt_regs(current);
u32 action = phase1_result & SECCOMP_RET_ACTION;
int data = phase1_result & SECCOMP_RET_DATA;
BUG_ON(action != SECCOMP_RET_TRACE);
audit_seccomp(syscall_get_nr(current, regs), 0, action);
/* Skip these calls if there is no tracer. */
if (!ptrace_event_enabled(current, PTRACE_EVENT_SECCOMP)) {
syscall_set_return_value(current, regs,
-ENOSYS, 0);
return -1;
}
/* Allow the BPF to provide the event message */
ptrace_event(PTRACE_EVENT_SECCOMP, data);
/*
* The delivery of a fatal signal during event
* notification may silently skip tracer notification.
* Terminating the task now avoids executing a system
* call that may not be intended.
*/
if (fatal_signal_pending(current))
do_exit(SIGSYS);
if (syscall_get_nr(current, regs) < 0)
return -1; /* Explicit request to skip. */
return 0;
}
#endif /* CONFIG_HAVE_ARCH_SECCOMP_FILTER */
long prctl_get_seccomp(void)

View File

@ -216,14 +216,17 @@ static int dccp_v6_send_response(const struct sock *sk, struct request_sock *req
skb = dccp_make_response(sk, dst, req);
if (skb != NULL) {
struct dccp_hdr *dh = dccp_hdr(skb);
struct ipv6_txoptions *opt;
dh->dccph_checksum = dccp_v6_csum_finish(skb,
&ireq->ir_v6_loc_addr,
&ireq->ir_v6_rmt_addr);
fl6.daddr = ireq->ir_v6_rmt_addr;
rcu_read_lock();
err = ip6_xmit(sk, skb, &fl6, rcu_dereference(np->opt),
np->tclass);
opt = ireq->ipv6_opt;
if (!opt)
opt = rcu_dereference(np->opt);
err = ip6_xmit(sk, skb, &fl6, opt, np->tclass);
rcu_read_unlock();
err = net_xmit_eval(err);
}
@ -236,6 +239,7 @@ static int dccp_v6_send_response(const struct sock *sk, struct request_sock *req
static void dccp_v6_reqsk_destructor(struct request_sock *req)
{
dccp_feat_list_purge(&dccp_rsk(req)->dreq_featneg);
kfree(inet_rsk(req)->ipv6_opt);
kfree_skb(inet_rsk(req)->pktopts);
}
@ -494,7 +498,9 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk,
* Yes, keeping a reference count would be much more clever, but we do
* one more thing here: reattach optmem to newsk.
*/
opt = rcu_dereference(np->opt);
opt = ireq->ipv6_opt;
if (!opt)
opt = rcu_dereference(np->opt);
if (opt) {
opt = ipv6_dup_options(newsk, opt);
RCU_INIT_POINTER(newnp->opt, opt);

View File

@ -134,76 +134,6 @@ int cipso_v4_rbm_strictvalid = 1;
* Helper Functions
*/
/**
* cipso_v4_bitmap_walk - Walk a bitmap looking for a bit
* @bitmap: the bitmap
* @bitmap_len: length in bits
* @offset: starting offset
* @state: if non-zero, look for a set (1) bit else look for a cleared (0) bit
*
* Description:
* Starting at @offset, walk the bitmap from left to right until either the
* desired bit is found or we reach the end. Return the bit offset, -1 if
* not found, or -2 if error.
*/
static int cipso_v4_bitmap_walk(const unsigned char *bitmap,
u32 bitmap_len,
u32 offset,
u8 state)
{
u32 bit_spot;
u32 byte_offset;
unsigned char bitmask;
unsigned char byte;
/* gcc always rounds to zero when doing integer division */
byte_offset = offset / 8;
byte = bitmap[byte_offset];
bit_spot = offset;
bitmask = 0x80 >> (offset % 8);
while (bit_spot < bitmap_len) {
if ((state && (byte & bitmask) == bitmask) ||
(state == 0 && (byte & bitmask) == 0))
return bit_spot;
bit_spot++;
bitmask >>= 1;
if (bitmask == 0) {
byte = bitmap[++byte_offset];
bitmask = 0x80;
}
}
return -1;
}
/**
* cipso_v4_bitmap_setbit - Sets a single bit in a bitmap
* @bitmap: the bitmap
* @bit: the bit
* @state: if non-zero, set the bit (1) else clear the bit (0)
*
* Description:
* Set a single bit in the bitmask. Returns zero on success, negative values
* on error.
*/
static void cipso_v4_bitmap_setbit(unsigned char *bitmap,
u32 bit,
u8 state)
{
u32 byte_spot;
u8 bitmask;
/* gcc always rounds to zero when doing integer division */
byte_spot = bit / 8;
bitmask = 0x80 >> (bit % 8);
if (state)
bitmap[byte_spot] |= bitmask;
else
bitmap[byte_spot] &= ~bitmask;
}
/**
* cipso_v4_cache_entry_free - Frees a cache entry
* @entry: the entry to free
@ -840,10 +770,10 @@ static int cipso_v4_map_cat_rbm_valid(const struct cipso_v4_doi *doi_def,
cipso_cat_size = doi_def->map.std->cat.cipso_size;
cipso_array = doi_def->map.std->cat.cipso;
for (;;) {
cat = cipso_v4_bitmap_walk(bitmap,
bitmap_len_bits,
cat + 1,
1);
cat = netlbl_bitmap_walk(bitmap,
bitmap_len_bits,
cat + 1,
1);
if (cat < 0)
break;
if (cat >= cipso_cat_size ||
@ -909,7 +839,7 @@ static int cipso_v4_map_cat_rbm_hton(const struct cipso_v4_doi *doi_def,
}
if (net_spot >= net_clen_bits)
return -ENOSPC;
cipso_v4_bitmap_setbit(net_cat, net_spot, 1);
netlbl_bitmap_setbit(net_cat, net_spot, 1);
if (net_spot > net_spot_max)
net_spot_max = net_spot;
@ -951,10 +881,10 @@ static int cipso_v4_map_cat_rbm_ntoh(const struct cipso_v4_doi *doi_def,
}
for (;;) {
net_spot = cipso_v4_bitmap_walk(net_cat,
net_clen_bits,
net_spot + 1,
1);
net_spot = netlbl_bitmap_walk(net_cat,
net_clen_bits,
net_spot + 1,
1);
if (net_spot < 0) {
if (net_spot == -2)
return -EFAULT;

View File

@ -6147,6 +6147,9 @@ struct request_sock *inet_reqsk_alloc(const struct request_sock_ops *ops,
kmemcheck_annotate_bitfield(ireq, flags);
ireq->opt = NULL;
#if IS_ENABLED(CONFIG_IPV6)
ireq->pktopts = NULL;
#endif
atomic64_set(&ireq->ir_cookie, 0);
ireq->ireq_state = TCP_NEW_SYN_RECV;
write_pnet(&ireq->ireq_net, sock_net(sk_listener));

View File

@ -22,6 +22,7 @@ ipv6-$(CONFIG_NETFILTER) += netfilter.o
ipv6-$(CONFIG_IPV6_MULTIPLE_TABLES) += fib6_rules.o
ipv6-$(CONFIG_PROC_FS) += proc.o
ipv6-$(CONFIG_SYN_COOKIES) += syncookies.o
ipv6-$(CONFIG_NETLABEL) += calipso.o
ipv6-objs += $(ipv6-y)

View File

@ -60,6 +60,7 @@
#ifdef CONFIG_IPV6_TUNNEL
#include <net/ip6_tunnel.h>
#endif
#include <net/calipso.h>
#include <asm/uaccess.h>
#include <linux/mroute6.h>
@ -983,6 +984,10 @@ static int __init inet6_init(void)
if (err)
goto pingv6_fail;
err = calipso_init();
if (err)
goto calipso_fail;
#ifdef CONFIG_SYSCTL
err = ipv6_sysctl_register();
if (err)
@ -993,8 +998,10 @@ static int __init inet6_init(void)
#ifdef CONFIG_SYSCTL
sysctl_fail:
pingv6_exit();
calipso_exit();
#endif
calipso_fail:
pingv6_exit();
pingv6_fail:
ipv6_packet_cleanup();
ipv6_packet_fail:

net/ipv6/calipso.c (new file, 1473 lines added; diff suppressed because it is too large)
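Editorial aside: the suppressed net/ipv6/calipso.c implements the CALIPSO hop-by-hop option defined in RFC 5570 and hooks it into NetLabel through the netlbl_calipso_ops wrappers later in this series. For orientation, a sketch of the option's wire layout follows; the struct and field names are illustrative only, since the kernel code parses the option as raw bytes.

/*
 * Editorial sketch, not part of the patch: RFC 5570 CALIPSO option
 * layout.  The handlers in this series read these fields as raw bytes,
 * e.g. nh[optoff + 1] for the length and nh[optoff + 6] for cmpt_len
 * in the hop-by-hop handler below.
 */
struct calipso_opt_sketch {
	__u8 type;		/* IPV6_TLV_CALIPSO (0x07 per RFC 5570) */
	__u8 length;		/* bytes of data that follow, always >= 8 */
	__u8 doi[4];		/* CALIPSO Domain of Interpretation */
	__u8 cmpt_len;		/* compartment bitmap length, in 32-bit words */
	__u8 sens_level;	/* sensitivity level */
	__u8 checksum[2];	/* 16-bit CRC, computed with this field zeroed */
	__u8 cmpt_bitmap[];	/* cmpt_len * 4 bytes of category bits */
} __attribute__((packed));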

View File

@ -43,6 +43,7 @@
#include <net/ndisc.h>
#include <net/ip6_route.h>
#include <net/addrconf.h>
#include <net/calipso.h>
#if IS_ENABLED(CONFIG_IPV6_MIP6)
#include <net/xfrm.h>
#endif
@ -603,6 +604,28 @@ static bool ipv6_hop_jumbo(struct sk_buff *skb, int optoff)
return false;
}
/* CALIPSO RFC 5570 */
static bool ipv6_hop_calipso(struct sk_buff *skb, int optoff)
{
const unsigned char *nh = skb_network_header(skb);
if (nh[optoff + 1] < 8)
goto drop;
if (nh[optoff + 6] * 4 + 8 > nh[optoff + 1])
goto drop;
if (!calipso_validate(skb, nh + optoff))
goto drop;
return true;
drop:
kfree_skb(skb);
return false;
}
static const struct tlvtype_proc tlvprochopopt_lst[] = {
{
.type = IPV6_TLV_ROUTERALERT,
@ -612,6 +635,10 @@ static const struct tlvtype_proc tlvprochopopt_lst[] = {
.type = IPV6_TLV_JUMBO,
.func = ipv6_hop_jumbo,
},
{
.type = IPV6_TLV_CALIPSO,
.func = ipv6_hop_calipso,
},
{ -1, }
};
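The two length checks in ipv6_hop_calipso() above follow directly from that layout: the option data must carry at least the 8 fixed bytes, and the compartment bitmap it advertises (nh[optoff + 6] 32-bit words) must fit inside that length. A small worked example, as an editorial aside:

/*
 * Editorial example, not part of the patch:
 *   nh[optoff + 1] = 12, nh[optoff + 6] = 1  ->  4 * 1 + 8 = 12 <= 12, accepted
 *   nh[optoff + 1] = 10, nh[optoff + 6] = 1  ->  4 * 1 + 8 = 12 >  10, dropped
 *   nh[optoff + 1] =  6, nh[optoff + 6] = 0  ->  length < 8, dropped
 */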
@ -758,6 +785,27 @@ static int ipv6_renew_option(void *ohdr,
return 0;
}
/**
* ipv6_renew_options - replace a specific ext hdr with a new one.
*
* @sk: sock from which to allocate memory
* @opt: original options
* @newtype: option type to replace in @opt
* @newopt: new option of type @newtype to replace (user-mem)
* @newoptlen: length of @newopt
*
* Returns a new set of options which is a copy of @opt with the
* option type @newtype replaced with @newopt.
*
* @opt may be NULL, in which case a new set of options is returned
* containing just @newopt.
*
* @newopt may be NULL, in which case the specified option type is
* not copied into the new set of options.
*
* The new set of options is allocated from the socket option memory
* buffer of @sk.
*/
struct ipv6_txoptions *
ipv6_renew_options(struct sock *sk, struct ipv6_txoptions *opt,
int newtype,
@ -830,6 +878,34 @@ ipv6_renew_options(struct sock *sk, struct ipv6_txoptions *opt,
return ERR_PTR(err);
}
/**
* ipv6_renew_options_kern - replace a specific ext hdr with a new one.
*
* @sk: sock from which to allocate memory
* @opt: original options
* @newtype: option type to replace in @opt
* @newopt: new option of type @newtype to replace (kernel-mem)
* @newoptlen: length of @newopt
*
* See ipv6_renew_options(). The difference is that @newopt is
* kernel memory, rather than user memory.
*/
struct ipv6_txoptions *
ipv6_renew_options_kern(struct sock *sk, struct ipv6_txoptions *opt,
int newtype, struct ipv6_opt_hdr *newopt,
int newoptlen)
{
struct ipv6_txoptions *ret_val;
const mm_segment_t old_fs = get_fs();
set_fs(KERNEL_DS);
ret_val = ipv6_renew_options(sk, opt, newtype,
(struct ipv6_opt_hdr __user *)newopt,
newoptlen);
set_fs(old_fs);
return ret_val;
}
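ipv6_renew_options_kern() exists so that in-kernel callers, such as the CALIPSO code added by this series, can splice a hop-by-hop header into a socket's tx options without a userspace pointer. A minimal sketch of such a caller follows; it is an editorial illustration rather than the code from the suppressed calipso.c, and the function name is hypothetical.

/*
 * Editorial sketch, not part of the patch: replace a socket's
 * hop-by-hop options with a kernel-built header "hop".  Assumes the
 * caller holds the socket lock; option memory accounting is elided.
 */
static int example_replace_hopopts(struct sock *sk, struct ipv6_opt_hdr *hop)
{
	struct ipv6_txoptions *old = txopt_get(inet6_sk(sk));
	struct ipv6_txoptions *txopts;

	txopts = ipv6_renew_options_kern(sk, old, IPV6_HOPOPTS, hop,
					 hop ? ipv6_optlen(hop) : 0);
	txopt_put(old);
	if (IS_ERR(txopts))
		return PTR_ERR(txopts);

	/* ipv6_update_options() is made non-static later in this series */
	old = ipv6_update_options(sk, txopts);
	if (old)
		txopt_put(old);
	return 0;
}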
struct ipv6_txoptions *ipv6_fixup_options(struct ipv6_txoptions *opt_space,
struct ipv6_txoptions *opt)
{

View File

@ -112,7 +112,7 @@ int ipv6_skip_exthdr(const struct sk_buff *skb, int start, u8 *nexthdrp,
}
EXPORT_SYMBOL(ipv6_skip_exthdr);
int ipv6_find_tlv(struct sk_buff *skb, int offset, int type)
int ipv6_find_tlv(const struct sk_buff *skb, int offset, int type)
{
const unsigned char *nh = skb_network_header(skb);
int packet_len = skb_tail_pointer(skb) - skb_network_header(skb);

View File

@ -98,7 +98,6 @@ int ip6_ra_control(struct sock *sk, int sel)
return 0;
}
static
struct ipv6_txoptions *ipv6_update_options(struct sock *sk,
struct ipv6_txoptions *opt)
{

View File

@ -15,6 +15,9 @@
#include <net/ipv6.h>
#include <net/addrconf.h>
#include <net/inet_frag.h>
#ifdef CONFIG_NETLABEL
#include <net/calipso.h>
#endif
static int one = 1;
static int auto_flowlabels_min;
@ -106,6 +109,22 @@ static struct ctl_table ipv6_rotable[] = {
.proc_handler = proc_dointvec_minmax,
.extra1 = &one
},
#ifdef CONFIG_NETLABEL
{
.procname = "calipso_cache_enable",
.data = &calipso_cache_enabled,
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec,
},
{
.procname = "calipso_cache_bucket_size",
.data = &calipso_cache_bucketsize,
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec,
},
#endif /* CONFIG_NETLABEL */
{ }
};

View File

@ -443,6 +443,7 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst,
{
struct inet_request_sock *ireq = inet_rsk(req);
struct ipv6_pinfo *np = inet6_sk(sk);
struct ipv6_txoptions *opt;
struct flowi6 *fl6 = &fl->u.ip6;
struct sk_buff *skb;
int err = -ENOMEM;
@ -463,8 +464,10 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst,
fl6->flowlabel = ip6_flowlabel(ipv6_hdr(ireq->pktopts));
rcu_read_lock();
err = ip6_xmit(sk, skb, fl6, rcu_dereference(np->opt),
np->tclass);
opt = ireq->ipv6_opt;
if (!opt)
opt = rcu_dereference(np->opt);
err = ip6_xmit(sk, skb, fl6, opt, np->tclass);
rcu_read_unlock();
err = net_xmit_eval(err);
}
@ -476,6 +479,7 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst,
static void tcp_v6_reqsk_destructor(struct request_sock *req)
{
kfree(inet_rsk(req)->ipv6_opt);
kfree_skb(inet_rsk(req)->pktopts);
}
@ -1112,7 +1116,9 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
but we do one more thing here: reattach optmem
to newsk.
*/
opt = rcu_dereference(np->opt);
opt = ireq->ipv6_opt;
if (!opt)
opt = rcu_dereference(np->opt);
if (opt) {
opt = ipv6_dup_options(newsk, opt);
RCU_INIT_POINTER(newnp->opt, opt);

View File

@ -22,6 +22,7 @@
#include <linux/skbuff.h>
#include <linux/init.h>
#include <linux/poll.h>
#include <linux/security.h>
#include <net/sock.h>
#include <asm/ebcdic.h>
#include <asm/cpcmd.h>
@ -530,8 +531,10 @@ static void iucv_sock_close(struct sock *sk)
static void iucv_sock_init(struct sock *sk, struct sock *parent)
{
if (parent)
if (parent) {
sk->sk_type = parent->sk_type;
security_sk_clone(parent, sk);
}
}
static struct sock *iucv_sock_alloc(struct socket *sock, int proto, gfp_t prio, int kern)

View File

@ -5,6 +5,7 @@
config NETLABEL
bool "NetLabel subsystem support"
depends on SECURITY
select CRC_CCITT if IPV6
default n
---help---
NetLabel provides support for explicit network packet labeling

View File

@ -12,4 +12,4 @@ obj-y += netlabel_mgmt.o
# protocol modules
obj-y += netlabel_unlabeled.o
obj-y += netlabel_cipso_v4.o
obj-$(subst m,y,$(CONFIG_IPV6)) += netlabel_calipso.o

View File

@ -0,0 +1,740 @@
/*
* NetLabel CALIPSO/IPv6 Support
*
* This file defines the CALIPSO/IPv6 functions for the NetLabel system. The
* NetLabel system manages static and dynamic label mappings for network
* protocols such as CIPSO and CALIPSO.
*
* Authors: Paul Moore <paul@paul-moore.com>
* Huw Davies <huw@codeweavers.com>
*
*/
/* (c) Copyright Hewlett-Packard Development Company, L.P., 2006
* (c) Copyright Huw Davies <huw@codeweavers.com>, 2015
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
* the GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include <linux/types.h>
#include <linux/socket.h>
#include <linux/string.h>
#include <linux/skbuff.h>
#include <linux/audit.h>
#include <linux/slab.h>
#include <net/sock.h>
#include <net/netlink.h>
#include <net/genetlink.h>
#include <net/netlabel.h>
#include <net/calipso.h>
#include <linux/atomic.h>
#include "netlabel_user.h"
#include "netlabel_calipso.h"
#include "netlabel_mgmt.h"
#include "netlabel_domainhash.h"
/* Argument struct for calipso_doi_walk() */
struct netlbl_calipso_doiwalk_arg {
struct netlink_callback *nl_cb;
struct sk_buff *skb;
u32 seq;
};
/* Argument struct for netlbl_domhsh_walk() */
struct netlbl_domhsh_walk_arg {
struct netlbl_audit *audit_info;
u32 doi;
};
/* NetLabel Generic NETLINK CALIPSO family */
static struct genl_family netlbl_calipso_gnl_family = {
.id = GENL_ID_GENERATE,
.hdrsize = 0,
.name = NETLBL_NLTYPE_CALIPSO_NAME,
.version = NETLBL_PROTO_VERSION,
.maxattr = NLBL_CALIPSO_A_MAX,
};
/* NetLabel Netlink attribute policy */
static const struct nla_policy calipso_genl_policy[NLBL_CALIPSO_A_MAX + 1] = {
[NLBL_CALIPSO_A_DOI] = { .type = NLA_U32 },
[NLBL_CALIPSO_A_MTYPE] = { .type = NLA_U32 },
};
/* NetLabel Command Handlers
*/
/**
* netlbl_calipso_add_pass - Adds a CALIPSO pass DOI definition
* @info: the Generic NETLINK info block
* @audit_info: NetLabel audit information
*
* Description:
* Create a new CALIPSO_MAP_PASS DOI definition based on the given ADD message
* and add it to the CALIPSO engine. Return zero on success and non-zero on
* error.
*
*/
static int netlbl_calipso_add_pass(struct genl_info *info,
struct netlbl_audit *audit_info)
{
int ret_val;
struct calipso_doi *doi_def = NULL;
doi_def = kmalloc(sizeof(*doi_def), GFP_KERNEL);
if (!doi_def)
return -ENOMEM;
doi_def->type = CALIPSO_MAP_PASS;
doi_def->doi = nla_get_u32(info->attrs[NLBL_CALIPSO_A_DOI]);
ret_val = calipso_doi_add(doi_def, audit_info);
if (ret_val != 0)
calipso_doi_free(doi_def);
return ret_val;
}
/**
* netlbl_calipso_add - Handle an ADD message
* @skb: the NETLINK buffer
* @info: the Generic NETLINK info block
*
* Description:
* Create a new DOI definition based on the given ADD message and add it to the
* CALIPSO engine. Returns zero on success, negative values on failure.
*
*/
static int netlbl_calipso_add(struct sk_buff *skb, struct genl_info *info)
{
int ret_val = -EINVAL;
struct netlbl_audit audit_info;
if (!info->attrs[NLBL_CALIPSO_A_DOI] ||
!info->attrs[NLBL_CALIPSO_A_MTYPE])
return -EINVAL;
netlbl_netlink_auditinfo(skb, &audit_info);
switch (nla_get_u32(info->attrs[NLBL_CALIPSO_A_MTYPE])) {
case CALIPSO_MAP_PASS:
ret_val = netlbl_calipso_add_pass(info, &audit_info);
break;
}
if (ret_val == 0)
atomic_inc(&netlabel_mgmt_protocount);
return ret_val;
}
/**
* netlbl_calipso_list - Handle a LIST message
* @skb: the NETLINK buffer
* @info: the Generic NETLINK info block
*
* Description:
* Process a user generated LIST message and respond accordingly.
* Returns zero on success and negative values on error.
*
*/
static int netlbl_calipso_list(struct sk_buff *skb, struct genl_info *info)
{
int ret_val;
struct sk_buff *ans_skb = NULL;
void *data;
u32 doi;
struct calipso_doi *doi_def;
if (!info->attrs[NLBL_CALIPSO_A_DOI]) {
ret_val = -EINVAL;
goto list_failure;
}
doi = nla_get_u32(info->attrs[NLBL_CALIPSO_A_DOI]);
doi_def = calipso_doi_getdef(doi);
if (!doi_def) {
ret_val = -EINVAL;
goto list_failure;
}
ans_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
if (!ans_skb) {
ret_val = -ENOMEM;
goto list_failure_put;
}
data = genlmsg_put_reply(ans_skb, info, &netlbl_calipso_gnl_family,
0, NLBL_CALIPSO_C_LIST);
if (!data) {
ret_val = -ENOMEM;
goto list_failure_put;
}
ret_val = nla_put_u32(ans_skb, NLBL_CALIPSO_A_MTYPE, doi_def->type);
if (ret_val != 0)
goto list_failure_put;
calipso_doi_putdef(doi_def);
genlmsg_end(ans_skb, data);
return genlmsg_reply(ans_skb, info);
list_failure_put:
calipso_doi_putdef(doi_def);
list_failure:
kfree_skb(ans_skb);
return ret_val;
}
/**
* netlbl_calipso_listall_cb - calipso_doi_walk() callback for LISTALL
* @doi_def: the CALIPSO DOI definition
* @arg: the netlbl_calipso_doiwalk_arg structure
*
* Description:
* This function is designed to be used as a callback to the
* calipso_doi_walk() function for use in generating a response for a LISTALL
* message. Returns the size of the message on success, negative values on
* failure.
*
*/
static int netlbl_calipso_listall_cb(struct calipso_doi *doi_def, void *arg)
{
int ret_val = -ENOMEM;
struct netlbl_calipso_doiwalk_arg *cb_arg = arg;
void *data;
data = genlmsg_put(cb_arg->skb, NETLINK_CB(cb_arg->nl_cb->skb).portid,
cb_arg->seq, &netlbl_calipso_gnl_family,
NLM_F_MULTI, NLBL_CALIPSO_C_LISTALL);
if (!data)
goto listall_cb_failure;
ret_val = nla_put_u32(cb_arg->skb, NLBL_CALIPSO_A_DOI, doi_def->doi);
if (ret_val != 0)
goto listall_cb_failure;
ret_val = nla_put_u32(cb_arg->skb,
NLBL_CALIPSO_A_MTYPE,
doi_def->type);
if (ret_val != 0)
goto listall_cb_failure;
genlmsg_end(cb_arg->skb, data);
return 0;
listall_cb_failure:
genlmsg_cancel(cb_arg->skb, data);
return ret_val;
}
/**
* netlbl_calipso_listall - Handle a LISTALL message
* @skb: the NETLINK buffer
* @cb: the NETLINK callback
*
* Description:
* Process a user generated LISTALL message and respond accordingly. Returns
* zero on success and negative values on error.
*
*/
static int netlbl_calipso_listall(struct sk_buff *skb,
struct netlink_callback *cb)
{
struct netlbl_calipso_doiwalk_arg cb_arg;
u32 doi_skip = cb->args[0];
cb_arg.nl_cb = cb;
cb_arg.skb = skb;
cb_arg.seq = cb->nlh->nlmsg_seq;
calipso_doi_walk(&doi_skip, netlbl_calipso_listall_cb, &cb_arg);
cb->args[0] = doi_skip;
return skb->len;
}
/**
* netlbl_calipso_remove_cb - netlbl_calipso_remove() callback for REMOVE
* @entry: LSM domain mapping entry
* @arg: the netlbl_domhsh_walk_arg structure
*
* Description:
* This function is intended for use by netlbl_calipso_remove() as the callback
* for the netlbl_domhsh_walk() function; it removes LSM domain map entries
* which are associated with the CALIPSO DOI specified in @arg. Returns zero on
* success, negative values on failure.
*
*/
static int netlbl_calipso_remove_cb(struct netlbl_dom_map *entry, void *arg)
{
struct netlbl_domhsh_walk_arg *cb_arg = arg;
if (entry->def.type == NETLBL_NLTYPE_CALIPSO &&
entry->def.calipso->doi == cb_arg->doi)
return netlbl_domhsh_remove_entry(entry, cb_arg->audit_info);
return 0;
}
/**
* netlbl_calipso_remove - Handle a REMOVE message
* @skb: the NETLINK buffer
* @info: the Generic NETLINK info block
*
* Description:
* Process a user generated REMOVE message and respond accordingly. Returns
* zero on success, negative values on failure.
*
*/
static int netlbl_calipso_remove(struct sk_buff *skb, struct genl_info *info)
{
int ret_val = -EINVAL;
struct netlbl_domhsh_walk_arg cb_arg;
struct netlbl_audit audit_info;
u32 skip_bkt = 0;
u32 skip_chain = 0;
if (!info->attrs[NLBL_CALIPSO_A_DOI])
return -EINVAL;
netlbl_netlink_auditinfo(skb, &audit_info);
cb_arg.doi = nla_get_u32(info->attrs[NLBL_CALIPSO_A_DOI]);
cb_arg.audit_info = &audit_info;
ret_val = netlbl_domhsh_walk(&skip_bkt, &skip_chain,
netlbl_calipso_remove_cb, &cb_arg);
if (ret_val == 0 || ret_val == -ENOENT) {
ret_val = calipso_doi_remove(cb_arg.doi, &audit_info);
if (ret_val == 0)
atomic_dec(&netlabel_mgmt_protocount);
}
return ret_val;
}
/* NetLabel Generic NETLINK Command Definitions
*/
static const struct genl_ops netlbl_calipso_ops[] = {
{
.cmd = NLBL_CALIPSO_C_ADD,
.flags = GENL_ADMIN_PERM,
.policy = calipso_genl_policy,
.doit = netlbl_calipso_add,
.dumpit = NULL,
},
{
.cmd = NLBL_CALIPSO_C_REMOVE,
.flags = GENL_ADMIN_PERM,
.policy = calipso_genl_policy,
.doit = netlbl_calipso_remove,
.dumpit = NULL,
},
{
.cmd = NLBL_CALIPSO_C_LIST,
.flags = 0,
.policy = calipso_genl_policy,
.doit = netlbl_calipso_list,
.dumpit = NULL,
},
{
.cmd = NLBL_CALIPSO_C_LISTALL,
.flags = 0,
.policy = calipso_genl_policy,
.doit = NULL,
.dumpit = netlbl_calipso_listall,
},
};
/* NetLabel Generic NETLINK Protocol Functions
*/
/**
* netlbl_calipso_genl_init - Register the CALIPSO NetLabel component
*
* Description:
* Register the CALIPSO packet NetLabel component with the Generic NETLINK
* mechanism. Returns zero on success, negative values on failure.
*
*/
int __init netlbl_calipso_genl_init(void)
{
return genl_register_family_with_ops(&netlbl_calipso_gnl_family,
netlbl_calipso_ops);
}
static const struct netlbl_calipso_ops *calipso_ops;
/**
* netlbl_calipso_ops_register - Register the CALIPSO operations
*
* Description:
* Register the CALIPSO packet engine operations.
*
*/
const struct netlbl_calipso_ops *
netlbl_calipso_ops_register(const struct netlbl_calipso_ops *ops)
{
return xchg(&calipso_ops, ops);
}
EXPORT_SYMBOL(netlbl_calipso_ops_register);
static const struct netlbl_calipso_ops *netlbl_calipso_ops_get(void)
{
return ACCESS_ONCE(calipso_ops);
}
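This xchg()/ACCESS_ONCE() pair is the indirection that lets the NetLabel core reach the CALIPSO implementation even when IPv6 is built as a module (note the subst m,y trick in the Makefile hunk above, which builds the NetLabel side in whenever IPv6 is enabled): at init time the IPv6 side fills a struct netlbl_calipso_ops with its handlers and registers it here, and every wrapper below simply forwards through that table. A hedged sketch of the registering side, with hypothetical handler names; the real table lives in the suppressed net/ipv6/calipso.c.

/*
 * Editorial sketch, not part of the patch.  Handler names are
 * hypothetical and only a few of the ops are shown.
 */
static const struct netlbl_calipso_ops example_calipso_ops = {
	.doi_add	= example_doi_add,
	.doi_remove	= example_doi_remove,
	.sock_setattr	= example_sock_setattr,
	.req_setattr	= example_req_setattr,
	.skbuff_optptr	= example_skbuff_optptr,
	.cache_add	= example_cache_add,
	/* ... remaining handlers wired up the same way ... */
};

static int __init example_calipso_init(void)
{
	netlbl_calipso_ops_register(&example_calipso_ops);
	return 0;
}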
/**
* calipso_doi_add - Add a new DOI to the CALIPSO protocol engine
* @doi_def: the DOI structure
* @audit_info: NetLabel audit information
*
* Description:
* The caller defines a new DOI for use by the CALIPSO engine and calls this
* function to add it to the list of acceptable domains. The caller must
* ensure that the mapping table specified in @doi_def->map meets all of the
* requirements of the mapping type (see calipso.h for details). Returns
* zero on success and non-zero on failure.
*
*/
int calipso_doi_add(struct calipso_doi *doi_def,
struct netlbl_audit *audit_info)
{
int ret_val = -ENOMSG;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->doi_add(doi_def, audit_info);
return ret_val;
}
/**
* calipso_doi_free - Frees a DOI definition
* @doi_def: the DOI definition
*
* Description:
* This function frees all of the memory associated with a DOI definition.
*
*/
void calipso_doi_free(struct calipso_doi *doi_def)
{
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ops->doi_free(doi_def);
}
/**
* calipso_doi_remove - Remove an existing DOI from the CALIPSO protocol engine
* @doi: the DOI value
* @audit_info: NetLabel audit information
*
* Description:
* Removes a DOI definition from the CALIPSO engine. The NetLabel routines will
* be called to release their own LSM domain mappings as well as our own
* domain list. Returns zero on success and negative values on failure.
*
*/
int calipso_doi_remove(u32 doi, struct netlbl_audit *audit_info)
{
int ret_val = -ENOMSG;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->doi_remove(doi, audit_info);
return ret_val;
}
/**
* calipso_doi_getdef - Returns a reference to a valid DOI definition
* @doi: the DOI value
*
* Description:
* Searches for a valid DOI definition and if one is found it is returned to
* the caller. Otherwise NULL is returned. The caller must ensure that
* calipso_doi_putdef() is called when the caller is done.
*
*/
struct calipso_doi *calipso_doi_getdef(u32 doi)
{
struct calipso_doi *ret_val = NULL;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->doi_getdef(doi);
return ret_val;
}
/**
* calipso_doi_putdef - Releases a reference for the given DOI definition
* @doi_def: the DOI definition
*
* Description:
* Releases a DOI definition reference obtained from calipso_doi_getdef().
*
*/
void calipso_doi_putdef(struct calipso_doi *doi_def)
{
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ops->doi_putdef(doi_def);
}
/**
* calipso_doi_walk - Iterate through the DOI definitions
* @skip_cnt: skip past this number of DOI definitions, updated
* @callback: callback for each DOI definition
* @cb_arg: argument for the callback function
*
* Description:
* Iterate over the DOI definition list, skipping the first @skip_cnt entries.
* For each entry call @callback; if @callback returns a negative value, stop
* walking through the list and return. Updates the value in @skip_cnt upon
* return. Returns zero on success, negative values on failure.
*
*/
int calipso_doi_walk(u32 *skip_cnt,
int (*callback)(struct calipso_doi *doi_def, void *arg),
void *cb_arg)
{
int ret_val = -ENOMSG;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->doi_walk(skip_cnt, callback, cb_arg);
return ret_val;
}
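The walk interface uses the same resumable-callback pattern as the rest of NetLabel: @skip_cnt lets a netlink dump resume where the previous pass stopped, which is exactly how netlbl_calipso_listall() above uses cb->args[0]. A minimal editorial example of a callback that simply counts the configured DOIs:

/* Editorial example, not part of the patch: count the configured DOIs. */
static int example_count_doi_cb(struct calipso_doi *doi_def, void *arg)
{
	(*(u32 *)arg)++;
	return 0;	/* keep walking; a negative value would stop the walk */
}

static u32 example_count_dois(void)
{
	u32 skip = 0, count = 0;

	calipso_doi_walk(&skip, example_count_doi_cb, &count);
	return count;
}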
/**
* calipso_sock_getattr - Get the security attributes from a sock
* @sk: the sock
* @secattr: the security attributes
*
* Description:
* Query @sk to see if there is a CALIPSO option attached to the sock and if
* there is return the CALIPSO security attributes in @secattr. This function
* requires that @sk be locked, or privately held, but it does not do any
* locking itself. Returns zero on success and negative values on failure.
*
*/
int calipso_sock_getattr(struct sock *sk, struct netlbl_lsm_secattr *secattr)
{
int ret_val = -ENOMSG;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->sock_getattr(sk, secattr);
return ret_val;
}
/**
* calipso_sock_setattr - Add a CALIPSO option to a socket
* @sk: the socket
* @doi_def: the CALIPSO DOI to use
* @secattr: the specific security attributes of the socket
*
* Description:
* Set the CALIPSO option on the given socket using the DOI definition and
* security attributes passed to the function. This function requires
* exclusive access to @sk, which means it either needs to be in the
* process of being created or locked. Returns zero on success and negative
* values on failure.
*
*/
int calipso_sock_setattr(struct sock *sk,
const struct calipso_doi *doi_def,
const struct netlbl_lsm_secattr *secattr)
{
int ret_val = -ENOMSG;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->sock_setattr(sk, doi_def, secattr);
return ret_val;
}
/**
* calipso_sock_delattr - Delete the CALIPSO option from a socket
* @sk: the socket
*
* Description:
* Removes the CALIPSO option from a socket, if present.
*
*/
void calipso_sock_delattr(struct sock *sk)
{
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ops->sock_delattr(sk);
}
/**
* calipso_req_setattr - Add a CALIPSO option to a connection request socket
* @req: the connection request socket
* @doi_def: the CALIPSO DOI to use
* @secattr: the specific security attributes of the socket
*
* Description:
* Set the CALIPSO option on the given socket using the DOI definition and
* security attributes passed to the function. Returns zero on success and
* negative values on failure.
*
*/
int calipso_req_setattr(struct request_sock *req,
const struct calipso_doi *doi_def,
const struct netlbl_lsm_secattr *secattr)
{
int ret_val = -ENOMSG;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->req_setattr(req, doi_def, secattr);
return ret_val;
}
/**
* calipso_req_delattr - Delete the CALIPSO option from a request socket
* @req: the request socket
*
* Description:
* Removes the CALIPSO option from a request socket, if present.
*
*/
void calipso_req_delattr(struct request_sock *req)
{
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ops->req_delattr(req);
}
/**
* calipso_optptr - Find the CALIPSO option in the packet
* @skb: the packet
*
* Description:
* Parse the packet's IP header looking for a CALIPSO option. Returns a pointer
* to the start of the CALIPSO option on success, NULL if one is not found.
*
*/
unsigned char *calipso_optptr(const struct sk_buff *skb)
{
unsigned char *ret_val = NULL;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->skbuff_optptr(skb);
return ret_val;
}
/**
* calipso_getattr - Get the security attributes from a memory block.
* @calipso: the CALIPSO option
* @secattr: the security attributes
*
* Description:
* Inspect @calipso and return the security attributes in @secattr.
* Returns zero on success and negative values on failure.
*
*/
int calipso_getattr(const unsigned char *calipso,
struct netlbl_lsm_secattr *secattr)
{
int ret_val = -ENOMSG;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->opt_getattr(calipso, secattr);
return ret_val;
}
/**
* calipso_skbuff_setattr - Set the CALIPSO option on a packet
* @skb: the packet
* @doi_def: the CALIPSO DOI to use
* @secattr: the security attributes
*
* Description:
* Set the CALIPSO option on the given packet based on the security attributes.
* Returns zero on success and negative values on failure.
*
*/
int calipso_skbuff_setattr(struct sk_buff *skb,
const struct calipso_doi *doi_def,
const struct netlbl_lsm_secattr *secattr)
{
int ret_val = -ENOMSG;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->skbuff_setattr(skb, doi_def, secattr);
return ret_val;
}
/**
* calipso_skbuff_delattr - Delete any CALIPSO options from a packet
* @skb: the packet
*
* Description:
* Removes any and all CALIPSO options from the given packet. Returns zero on
* success, negative values on failure.
*
*/
int calipso_skbuff_delattr(struct sk_buff *skb)
{
int ret_val = -ENOMSG;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->skbuff_delattr(skb);
return ret_val;
}
/**
* calipso_cache_invalidate - Invalidates the current CALIPSO cache
*
* Description:
* Invalidates and frees any entries in the CALIPSO cache.
*
*/
void calipso_cache_invalidate(void)
{
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ops->cache_invalidate();
}
/**
* calipso_cache_add - Add an entry to the CALIPSO cache
* @calipso_ptr: the CALIPSO option
* @secattr: the packet's security attributes
*
* Description:
* Add a new entry into the CALIPSO label mapping cache.
* Returns zero on success, negative values on failure.
*
*/
int calipso_cache_add(const unsigned char *calipso_ptr,
const struct netlbl_lsm_secattr *secattr)
{
int ret_val = -ENOMSG;
const struct netlbl_calipso_ops *ops = netlbl_calipso_ops_get();
if (ops)
ret_val = ops->cache_add(calipso_ptr, secattr);
return ret_val;
}

View File

@ -0,0 +1,151 @@
/*
* NetLabel CALIPSO Support
*
* This file defines the CALIPSO functions for the NetLabel system. The
* NetLabel system manages static and dynamic label mappings for network
* protocols such as CIPSO and CALIPSO.
*
* Authors: Paul Moore <paul@paul-moore.com>
* Huw Davies <huw@codeweavers.com>
*
*/
/* (c) Copyright Hewlett-Packard Development Company, L.P., 2006
* (c) Copyright Huw Davies <huw@codeweavers.com>, 2015
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
* the GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef _NETLABEL_CALIPSO
#define _NETLABEL_CALIPSO
#include <net/netlabel.h>
#include <net/calipso.h>
/* The following NetLabel payloads are supported by the CALIPSO subsystem.
*
* o ADD:
* Sent by an application to add a new DOI mapping table.
*
* Required attributes:
*
* NLBL_CALIPSO_A_DOI
* NLBL_CALIPSO_A_MTYPE
*
* If using CALIPSO_MAP_PASS no additional attributes are required.
*
* o REMOVE:
* Sent by an application to remove a specific DOI mapping table from the
* CALIPSO system.
*
* Required attributes:
*
* NLBL_CALIPSO_A_DOI
*
* o LIST:
* Sent by an application to list the details of a DOI definition. On
* success the kernel should send a response using the following format.
*
* Required attributes:
*
* NLBL_CALIPSO_A_DOI
*
* The valid response message format depends on the type of the DOI mapping;
* the defined formats are shown below.
*
* Required attributes:
*
* NLBL_CALIPSO_A_MTYPE
*
* If using CALIPSO_MAP_PASS no additional attributes are required.
*
* o LISTALL:
* This message is sent by an application to list the valid DOIs on the
* system. When sent by an application there is no payload and the
* NLM_F_DUMP flag should be set. The kernel should respond with a series of
* the following messages.
*
* Required attributes:
*
* NLBL_CALIPSO_A_DOI
* NLBL_CALIPSO_A_MTYPE
*
*/
/* NetLabel CALIPSO commands */
enum {
NLBL_CALIPSO_C_UNSPEC,
NLBL_CALIPSO_C_ADD,
NLBL_CALIPSO_C_REMOVE,
NLBL_CALIPSO_C_LIST,
NLBL_CALIPSO_C_LISTALL,
__NLBL_CALIPSO_C_MAX,
};
/* NetLabel CALIPSO attributes */
enum {
NLBL_CALIPSO_A_UNSPEC,
NLBL_CALIPSO_A_DOI,
/* (NLA_U32)
* the DOI value */
NLBL_CALIPSO_A_MTYPE,
/* (NLA_U32)
* the mapping table type (defined in the calipso.h header as
* CALIPSO_MAP_*) */
__NLBL_CALIPSO_A_MAX,
};
#define NLBL_CALIPSO_A_MAX (__NLBL_CALIPSO_A_MAX - 1)
/* NetLabel protocol functions */
#if IS_ENABLED(CONFIG_IPV6)
int netlbl_calipso_genl_init(void);
#else
static inline int netlbl_calipso_genl_init(void)
{
return 0;
}
#endif
int calipso_doi_add(struct calipso_doi *doi_def,
struct netlbl_audit *audit_info);
void calipso_doi_free(struct calipso_doi *doi_def);
int calipso_doi_remove(u32 doi, struct netlbl_audit *audit_info);
struct calipso_doi *calipso_doi_getdef(u32 doi);
void calipso_doi_putdef(struct calipso_doi *doi_def);
int calipso_doi_walk(u32 *skip_cnt,
int (*callback)(struct calipso_doi *doi_def, void *arg),
void *cb_arg);
int calipso_sock_getattr(struct sock *sk, struct netlbl_lsm_secattr *secattr);
int calipso_sock_setattr(struct sock *sk,
const struct calipso_doi *doi_def,
const struct netlbl_lsm_secattr *secattr);
void calipso_sock_delattr(struct sock *sk);
int calipso_req_setattr(struct request_sock *req,
const struct calipso_doi *doi_def,
const struct netlbl_lsm_secattr *secattr);
void calipso_req_delattr(struct request_sock *req);
unsigned char *calipso_optptr(const struct sk_buff *skb);
int calipso_getattr(const unsigned char *calipso,
struct netlbl_lsm_secattr *secattr);
int calipso_skbuff_setattr(struct sk_buff *skb,
const struct calipso_doi *doi_def,
const struct netlbl_lsm_secattr *secattr);
int calipso_skbuff_delattr(struct sk_buff *skb);
void calipso_cache_invalidate(void);
int calipso_cache_add(const unsigned char *calipso_ptr,
const struct netlbl_lsm_secattr *secattr);
#endif

View File

@ -37,10 +37,12 @@
#include <linux/slab.h>
#include <net/netlabel.h>
#include <net/cipso_ipv4.h>
#include <net/calipso.h>
#include <asm/bug.h>
#include "netlabel_mgmt.h"
#include "netlabel_addrlist.h"
#include "netlabel_calipso.h"
#include "netlabel_domainhash.h"
#include "netlabel_user.h"
@ -55,8 +57,9 @@ struct netlbl_domhsh_tbl {
static DEFINE_SPINLOCK(netlbl_domhsh_lock);
#define netlbl_domhsh_rcu_deref(p) \
rcu_dereference_check(p, lockdep_is_held(&netlbl_domhsh_lock))
static struct netlbl_domhsh_tbl *netlbl_domhsh;
static struct netlbl_dom_map *netlbl_domhsh_def;
static struct netlbl_domhsh_tbl __rcu *netlbl_domhsh;
static struct netlbl_dom_map __rcu *netlbl_domhsh_def_ipv4;
static struct netlbl_dom_map __rcu *netlbl_domhsh_def_ipv6;
/*
* Domain Hash Table Helper Functions
@ -126,18 +129,26 @@ static u32 netlbl_domhsh_hash(const char *key)
return val & (netlbl_domhsh_rcu_deref(netlbl_domhsh)->size - 1);
}
static bool netlbl_family_match(u16 f1, u16 f2)
{
return (f1 == f2) || (f1 == AF_UNSPEC) || (f2 == AF_UNSPEC);
}
/**
* netlbl_domhsh_search - Search for a domain entry
* @domain: the domain
* @family: the address family
*
* Description:
* Searches the domain hash table and returns a pointer to the hash table
* entry if found, otherwise NULL is returned. The caller is responsible for
* entry if found, otherwise NULL is returned. @family may be %AF_UNSPEC
* which matches any address family entries. The caller is responsible for
* ensuring that the hash table is protected with either a RCU read lock or the
* hash table lock.
*
*/
static struct netlbl_dom_map *netlbl_domhsh_search(const char *domain)
static struct netlbl_dom_map *netlbl_domhsh_search(const char *domain,
u16 family)
{
u32 bkt;
struct list_head *bkt_list;
@ -147,7 +158,9 @@ static struct netlbl_dom_map *netlbl_domhsh_search(const char *domain)
bkt = netlbl_domhsh_hash(domain);
bkt_list = &netlbl_domhsh_rcu_deref(netlbl_domhsh)->tbl[bkt];
list_for_each_entry_rcu(iter, bkt_list, list)
if (iter->valid && strcmp(iter->domain, domain) == 0)
if (iter->valid &&
netlbl_family_match(iter->family, family) &&
strcmp(iter->domain, domain) == 0)
return iter;
}
@ -157,28 +170,37 @@ static struct netlbl_dom_map *netlbl_domhsh_search(const char *domain)
/**
* netlbl_domhsh_search_def - Search for a domain entry
* @domain: the domain
* @def: return default if no match is found
* @family: the address family
*
* Description:
* Searches the domain hash table and returns a pointer to the hash table
* entry if an exact match is found, if an exact match is not present in the
* hash table then the default entry is returned if valid otherwise NULL is
* returned. The caller is responsible ensuring that the hash table is
* returned. @family may be %AF_UNSPEC which matches any address family
* entries. The caller is responsible for ensuring that the hash table is
* protected with either a RCU read lock or the hash table lock.
*
*/
static struct netlbl_dom_map *netlbl_domhsh_search_def(const char *domain)
static struct netlbl_dom_map *netlbl_domhsh_search_def(const char *domain,
u16 family)
{
struct netlbl_dom_map *entry;
entry = netlbl_domhsh_search(domain);
if (entry == NULL) {
entry = netlbl_domhsh_rcu_deref(netlbl_domhsh_def);
if (entry != NULL && !entry->valid)
entry = NULL;
entry = netlbl_domhsh_search(domain, family);
if (entry != NULL)
return entry;
if (family == AF_INET || family == AF_UNSPEC) {
entry = netlbl_domhsh_rcu_deref(netlbl_domhsh_def_ipv4);
if (entry != NULL && entry->valid)
return entry;
}
if (family == AF_INET6 || family == AF_UNSPEC) {
entry = netlbl_domhsh_rcu_deref(netlbl_domhsh_def_ipv6);
if (entry != NULL && entry->valid)
return entry;
}
return entry;
return NULL;
}
/**
@ -203,6 +225,7 @@ static void netlbl_domhsh_audit_add(struct netlbl_dom_map *entry,
{
struct audit_buffer *audit_buf;
struct cipso_v4_doi *cipsov4 = NULL;
struct calipso_doi *calipso = NULL;
u32 type;
audit_buf = netlbl_audit_start_common(AUDIT_MAC_MAP_ADD, audit_info);
@ -221,12 +244,14 @@ static void netlbl_domhsh_audit_add(struct netlbl_dom_map *entry,
struct netlbl_domaddr6_map *map6;
map6 = netlbl_domhsh_addr6_entry(addr6);
type = map6->def.type;
calipso = map6->def.calipso;
netlbl_af6list_audit_addr(audit_buf, 0, NULL,
&addr6->addr, &addr6->mask);
#endif /* IPv6 */
} else {
type = entry->def.type;
cipsov4 = entry->def.cipso;
calipso = entry->def.calipso;
}
switch (type) {
case NETLBL_NLTYPE_UNLABELED:
@ -238,6 +263,12 @@ static void netlbl_domhsh_audit_add(struct netlbl_dom_map *entry,
" nlbl_protocol=cipsov4 cipso_doi=%u",
cipsov4->doi);
break;
case NETLBL_NLTYPE_CALIPSO:
BUG_ON(calipso == NULL);
audit_log_format(audit_buf,
" nlbl_protocol=calipso calipso_doi=%u",
calipso->doi);
break;
}
audit_log_format(audit_buf, " res=%u", result == 0 ? 1 : 0);
audit_log_end(audit_buf);
@ -264,13 +295,25 @@ static int netlbl_domhsh_validate(const struct netlbl_dom_map *entry)
if (entry == NULL)
return -EINVAL;
if (entry->family != AF_INET && entry->family != AF_INET6 &&
(entry->family != AF_UNSPEC ||
entry->def.type != NETLBL_NLTYPE_UNLABELED))
return -EINVAL;
switch (entry->def.type) {
case NETLBL_NLTYPE_UNLABELED:
if (entry->def.cipso != NULL || entry->def.addrsel != NULL)
if (entry->def.cipso != NULL || entry->def.calipso != NULL ||
entry->def.addrsel != NULL)
return -EINVAL;
break;
case NETLBL_NLTYPE_CIPSOV4:
if (entry->def.cipso == NULL)
if (entry->family != AF_INET ||
entry->def.cipso == NULL)
return -EINVAL;
break;
case NETLBL_NLTYPE_CALIPSO:
if (entry->family != AF_INET6 ||
entry->def.calipso == NULL)
return -EINVAL;
break;
case NETLBL_NLTYPE_ADDRSELECT:
@ -294,6 +337,12 @@ static int netlbl_domhsh_validate(const struct netlbl_dom_map *entry)
map6 = netlbl_domhsh_addr6_entry(iter6);
switch (map6->def.type) {
case NETLBL_NLTYPE_UNLABELED:
if (map6->def.calipso != NULL)
return -EINVAL;
break;
case NETLBL_NLTYPE_CALIPSO:
if (map6->def.calipso == NULL)
return -EINVAL;
break;
default:
return -EINVAL;
@ -358,15 +407,18 @@ int __init netlbl_domhsh_init(u32 size)
*
* Description:
* Adds a new entry to the domain hash table and handles any updates to the
* lower level protocol handler (i.e. CIPSO). Returns zero on success,
* negative on failure.
* lower level protocol handler (i.e. CIPSO). @entry->family may be set to
* %AF_UNSPEC which will add an entry that matches all address families. This
* is only useful for the unlabelled type and will only succeed if there is no
* existing entry for any address family with the same domain. Returns zero
* on success, negative on failure.
*
*/
int netlbl_domhsh_add(struct netlbl_dom_map *entry,
struct netlbl_audit *audit_info)
{
int ret_val = 0;
struct netlbl_dom_map *entry_old;
struct netlbl_dom_map *entry_old, *entry_b;
struct netlbl_af4list *iter4;
struct netlbl_af4list *tmp4;
#if IS_ENABLED(CONFIG_IPV6)
@ -385,9 +437,10 @@ int netlbl_domhsh_add(struct netlbl_dom_map *entry,
rcu_read_lock();
spin_lock(&netlbl_domhsh_lock);
if (entry->domain != NULL)
entry_old = netlbl_domhsh_search(entry->domain);
entry_old = netlbl_domhsh_search(entry->domain, entry->family);
else
entry_old = netlbl_domhsh_search_def(entry->domain);
entry_old = netlbl_domhsh_search_def(entry->domain,
entry->family);
if (entry_old == NULL) {
entry->valid = 1;
@ -397,7 +450,41 @@ int netlbl_domhsh_add(struct netlbl_dom_map *entry,
&rcu_dereference(netlbl_domhsh)->tbl[bkt]);
} else {
INIT_LIST_HEAD(&entry->list);
rcu_assign_pointer(netlbl_domhsh_def, entry);
switch (entry->family) {
case AF_INET:
rcu_assign_pointer(netlbl_domhsh_def_ipv4,
entry);
break;
case AF_INET6:
rcu_assign_pointer(netlbl_domhsh_def_ipv6,
entry);
break;
case AF_UNSPEC:
if (entry->def.type !=
NETLBL_NLTYPE_UNLABELED) {
ret_val = -EINVAL;
goto add_return;
}
entry_b = kzalloc(sizeof(*entry_b), GFP_ATOMIC);
if (entry_b == NULL) {
ret_val = -ENOMEM;
goto add_return;
}
entry_b->family = AF_INET6;
entry_b->def.type = NETLBL_NLTYPE_UNLABELED;
entry_b->valid = 1;
entry->family = AF_INET;
rcu_assign_pointer(netlbl_domhsh_def_ipv4,
entry);
rcu_assign_pointer(netlbl_domhsh_def_ipv6,
entry_b);
break;
default:
/* Already checked in
* netlbl_domhsh_validate(). */
ret_val = -EINVAL;
goto add_return;
}
}
if (entry->def.type == NETLBL_NLTYPE_ADDRSELECT) {
@ -513,10 +600,12 @@ int netlbl_domhsh_remove_entry(struct netlbl_dom_map *entry,
spin_lock(&netlbl_domhsh_lock);
if (entry->valid) {
entry->valid = 0;
if (entry != rcu_dereference(netlbl_domhsh_def))
list_del_rcu(&entry->list);
if (entry == rcu_dereference(netlbl_domhsh_def_ipv4))
RCU_INIT_POINTER(netlbl_domhsh_def_ipv4, NULL);
else if (entry == rcu_dereference(netlbl_domhsh_def_ipv6))
RCU_INIT_POINTER(netlbl_domhsh_def_ipv6, NULL);
else
RCU_INIT_POINTER(netlbl_domhsh_def, NULL);
list_del_rcu(&entry->list);
} else
ret_val = -ENOENT;
spin_unlock(&netlbl_domhsh_lock);
@ -533,6 +622,10 @@ int netlbl_domhsh_remove_entry(struct netlbl_dom_map *entry,
if (ret_val == 0) {
struct netlbl_af4list *iter4;
struct netlbl_domaddr4_map *map4;
#if IS_ENABLED(CONFIG_IPV6)
struct netlbl_af6list *iter6;
struct netlbl_domaddr6_map *map6;
#endif /* IPv6 */
switch (entry->def.type) {
case NETLBL_NLTYPE_ADDRSELECT:
@ -541,12 +634,22 @@ int netlbl_domhsh_remove_entry(struct netlbl_dom_map *entry,
map4 = netlbl_domhsh_addr4_entry(iter4);
cipso_v4_doi_putdef(map4->def.cipso);
}
/* no need to check the IPv6 list since we currently
* support only unlabeled protocols for IPv6 */
#if IS_ENABLED(CONFIG_IPV6)
netlbl_af6list_foreach_rcu(iter6,
&entry->def.addrsel->list6) {
map6 = netlbl_domhsh_addr6_entry(iter6);
calipso_doi_putdef(map6->def.calipso);
}
#endif /* IPv6 */
break;
case NETLBL_NLTYPE_CIPSOV4:
cipso_v4_doi_putdef(entry->def.cipso);
break;
#if IS_ENABLED(CONFIG_IPV6)
case NETLBL_NLTYPE_CALIPSO:
calipso_doi_putdef(entry->def.calipso);
break;
#endif /* IPv6 */
}
call_rcu(&entry->rcu, netlbl_domhsh_free_entry);
}
@ -583,9 +686,9 @@ int netlbl_domhsh_remove_af4(const char *domain,
rcu_read_lock();
if (domain)
entry_map = netlbl_domhsh_search(domain);
entry_map = netlbl_domhsh_search(domain, AF_INET);
else
entry_map = netlbl_domhsh_search_def(domain);
entry_map = netlbl_domhsh_search_def(domain, AF_INET);
if (entry_map == NULL ||
entry_map->def.type != NETLBL_NLTYPE_ADDRSELECT)
goto remove_af4_failure;
@ -622,28 +725,114 @@ int netlbl_domhsh_remove_af4(const char *domain,
return -ENOENT;
}
#if IS_ENABLED(CONFIG_IPV6)
/**
* netlbl_domhsh_remove_af6 - Removes an address selector entry
* @domain: the domain
* @addr: IPv6 address
* @mask: IPv6 address mask
* @audit_info: NetLabel audit information
*
* Description:
* Removes an individual address selector from a domain mapping and potentially
* the entire mapping if it is empty. Returns zero on success, negative values
* on failure.
*
*/
int netlbl_domhsh_remove_af6(const char *domain,
const struct in6_addr *addr,
const struct in6_addr *mask,
struct netlbl_audit *audit_info)
{
struct netlbl_dom_map *entry_map;
struct netlbl_af6list *entry_addr;
struct netlbl_af4list *iter4;
struct netlbl_af6list *iter6;
struct netlbl_domaddr6_map *entry;
rcu_read_lock();
if (domain)
entry_map = netlbl_domhsh_search(domain, AF_INET6);
else
entry_map = netlbl_domhsh_search_def(domain, AF_INET6);
if (entry_map == NULL ||
entry_map->def.type != NETLBL_NLTYPE_ADDRSELECT)
goto remove_af6_failure;
spin_lock(&netlbl_domhsh_lock);
entry_addr = netlbl_af6list_remove(addr, mask,
&entry_map->def.addrsel->list6);
spin_unlock(&netlbl_domhsh_lock);
if (entry_addr == NULL)
goto remove_af6_failure;
netlbl_af4list_foreach_rcu(iter4, &entry_map->def.addrsel->list4)
goto remove_af6_single_addr;
netlbl_af6list_foreach_rcu(iter6, &entry_map->def.addrsel->list6)
goto remove_af6_single_addr;
/* the domain mapping is empty so remove it from the mapping table */
netlbl_domhsh_remove_entry(entry_map, audit_info);
remove_af6_single_addr:
rcu_read_unlock();
/* yick, we can't use call_rcu here because we don't have a rcu head
* pointer but hopefully this should be a rare case so the pause
* shouldn't be a problem */
synchronize_rcu();
entry = netlbl_domhsh_addr6_entry(entry_addr);
calipso_doi_putdef(entry->def.calipso);
kfree(entry);
return 0;
remove_af6_failure:
rcu_read_unlock();
return -ENOENT;
}
#endif /* IPv6 */
/**
* netlbl_domhsh_remove - Removes an entry from the domain hash table
* @domain: the domain to remove
* @family: address family
* @audit_info: NetLabel audit information
*
* Description:
* Removes an entry from the domain hash table and handles any updates to the
* lower level protocol handler (i.e. CIPSO). Returns zero on success,
* negative on failure.
* lower level protocol handler (i.e. CIPSO). @family may be %AF_UNSPEC which
* removes all address family entries. Returns zero on success, negative on
* failure.
*
*/
int netlbl_domhsh_remove(const char *domain, struct netlbl_audit *audit_info)
int netlbl_domhsh_remove(const char *domain, u16 family,
struct netlbl_audit *audit_info)
{
int ret_val;
int ret_val = -EINVAL;
struct netlbl_dom_map *entry;
rcu_read_lock();
if (domain)
entry = netlbl_domhsh_search(domain);
else
entry = netlbl_domhsh_search_def(domain);
ret_val = netlbl_domhsh_remove_entry(entry, audit_info);
if (family == AF_INET || family == AF_UNSPEC) {
if (domain)
entry = netlbl_domhsh_search(domain, AF_INET);
else
entry = netlbl_domhsh_search_def(domain, AF_INET);
ret_val = netlbl_domhsh_remove_entry(entry, audit_info);
if (ret_val && ret_val != -ENOENT)
goto done;
}
if (family == AF_INET6 || family == AF_UNSPEC) {
int ret_val2;
if (domain)
entry = netlbl_domhsh_search(domain, AF_INET6);
else
entry = netlbl_domhsh_search_def(domain, AF_INET6);
ret_val2 = netlbl_domhsh_remove_entry(entry, audit_info);
if (ret_val2 != -ENOENT)
ret_val = ret_val2;
}
done:
rcu_read_unlock();
return ret_val;
@ -651,32 +840,38 @@ int netlbl_domhsh_remove(const char *domain, struct netlbl_audit *audit_info)
/**
* netlbl_domhsh_remove_default - Removes the default entry from the table
* @family: address family
* @audit_info: NetLabel audit information
*
* Description:
* Removes/resets the default entry for the domain hash table and handles any
* updates to the lower level protocol handler (i.e. CIPSO). Returns zero on
* success, non-zero on failure.
* Removes/resets the default entry corresponding to @family from the domain
* hash table and handles any updates to the lower level protocol handler
* (i.e. CIPSO). @family may be %AF_UNSPEC which removes all address family
* entries. Returns zero on success, negative on failure.
*
*/
int netlbl_domhsh_remove_default(struct netlbl_audit *audit_info)
int netlbl_domhsh_remove_default(u16 family, struct netlbl_audit *audit_info)
{
return netlbl_domhsh_remove(NULL, audit_info);
return netlbl_domhsh_remove(NULL, family, audit_info);
}
/**
* netlbl_domhsh_getentry - Get an entry from the domain hash table
* @domain: the domain name to search for
* @family: address family
*
* Description:
* Look through the domain hash table searching for an entry to match @domain,
* return a pointer to a copy of the entry or NULL. The caller is responsible
* for ensuring that rcu_read_[un]lock() is called.
* with address family @family, return a pointer to a copy of the entry or
* NULL. The caller is responsible for ensuring that rcu_read_[un]lock() is
* called.
*
*/
struct netlbl_dom_map *netlbl_domhsh_getentry(const char *domain)
struct netlbl_dom_map *netlbl_domhsh_getentry(const char *domain, u16 family)
{
return netlbl_domhsh_search_def(domain);
if (family == AF_UNSPEC)
return NULL;
return netlbl_domhsh_search_def(domain, family);
}
/**
@ -696,7 +891,7 @@ struct netlbl_dommap_def *netlbl_domhsh_getentry_af4(const char *domain,
struct netlbl_dom_map *dom_iter;
struct netlbl_af4list *addr_iter;
dom_iter = netlbl_domhsh_search_def(domain);
dom_iter = netlbl_domhsh_search_def(domain, AF_INET);
if (dom_iter == NULL)
return NULL;
@ -726,7 +921,7 @@ struct netlbl_dommap_def *netlbl_domhsh_getentry_af6(const char *domain,
struct netlbl_dom_map *dom_iter;
struct netlbl_af6list *addr_iter;
dom_iter = netlbl_domhsh_search_def(domain);
dom_iter = netlbl_domhsh_search_def(domain, AF_INET6);
if (dom_iter == NULL)
return NULL;

View File

@ -51,6 +51,7 @@ struct netlbl_dommap_def {
union {
struct netlbl_domaddr_map *addrsel;
struct cipso_v4_doi *cipso;
struct calipso_doi *calipso;
};
};
#define netlbl_domhsh_addr4_entry(iter) \
@ -70,6 +71,7 @@ struct netlbl_domaddr6_map {
struct netlbl_dom_map {
char *domain;
u16 family;
struct netlbl_dommap_def def;
u32 valid;
@ -91,14 +93,23 @@ int netlbl_domhsh_remove_af4(const char *domain,
const struct in_addr *addr,
const struct in_addr *mask,
struct netlbl_audit *audit_info);
int netlbl_domhsh_remove(const char *domain, struct netlbl_audit *audit_info);
int netlbl_domhsh_remove_default(struct netlbl_audit *audit_info);
struct netlbl_dom_map *netlbl_domhsh_getentry(const char *domain);
int netlbl_domhsh_remove_af6(const char *domain,
const struct in6_addr *addr,
const struct in6_addr *mask,
struct netlbl_audit *audit_info);
int netlbl_domhsh_remove(const char *domain, u16 family,
struct netlbl_audit *audit_info);
int netlbl_domhsh_remove_default(u16 family, struct netlbl_audit *audit_info);
struct netlbl_dom_map *netlbl_domhsh_getentry(const char *domain, u16 family);
struct netlbl_dommap_def *netlbl_domhsh_getentry_af4(const char *domain,
__be32 addr);
#if IS_ENABLED(CONFIG_IPV6)
struct netlbl_dommap_def *netlbl_domhsh_getentry_af6(const char *domain,
const struct in6_addr *addr);
int netlbl_domhsh_remove_af6(const char *domain,
const struct in6_addr *addr,
const struct in6_addr *mask,
struct netlbl_audit *audit_info);
#endif /* IPv6 */
int netlbl_domhsh_walk(u32 *skip_bkt,

View File

@ -37,12 +37,14 @@
#include <net/ipv6.h>
#include <net/netlabel.h>
#include <net/cipso_ipv4.h>
#include <net/calipso.h>
#include <asm/bug.h>
#include <linux/atomic.h>
#include "netlabel_domainhash.h"
#include "netlabel_unlabeled.h"
#include "netlabel_cipso_v4.h"
#include "netlabel_calipso.h"
#include "netlabel_user.h"
#include "netlabel_mgmt.h"
#include "netlabel_addrlist.h"
@ -72,12 +74,17 @@ int netlbl_cfg_map_del(const char *domain,
struct netlbl_audit *audit_info)
{
if (addr == NULL && mask == NULL) {
return netlbl_domhsh_remove(domain, audit_info);
return netlbl_domhsh_remove(domain, family, audit_info);
} else if (addr != NULL && mask != NULL) {
switch (family) {
case AF_INET:
return netlbl_domhsh_remove_af4(domain, addr, mask,
audit_info);
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
return netlbl_domhsh_remove_af6(domain, addr, mask,
audit_info);
#endif /* IPv6 */
default:
return -EPFNOSUPPORT;
}
@ -119,6 +126,7 @@ int netlbl_cfg_unlbl_map_add(const char *domain,
if (entry->domain == NULL)
goto cfg_unlbl_map_add_failure;
}
entry->family = family;
if (addr == NULL && mask == NULL)
entry->def.type = NETLBL_NLTYPE_UNLABELED;
@ -345,6 +353,7 @@ int netlbl_cfg_cipsov4_map_add(u32 doi,
entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
if (entry == NULL)
goto out_entry;
entry->family = AF_INET;
if (domain != NULL) {
entry->domain = kstrdup(domain, GFP_ATOMIC);
if (entry->domain == NULL)
@ -399,6 +408,139 @@ int netlbl_cfg_cipsov4_map_add(u32 doi,
return ret_val;
}
/**
* netlbl_cfg_calipso_add - Add a new CALIPSO DOI definition
* @doi_def: CALIPSO DOI definition
* @audit_info: NetLabel audit information
*
* Description:
* Add a new CALIPSO DOI definition as defined by @doi_def. Returns zero on
* success and negative values on failure.
*
*/
int netlbl_cfg_calipso_add(struct calipso_doi *doi_def,
struct netlbl_audit *audit_info)
{
#if IS_ENABLED(CONFIG_IPV6)
return calipso_doi_add(doi_def, audit_info);
#else /* IPv6 */
return -ENOSYS;
#endif /* IPv6 */
}
/**
* netlbl_cfg_calipso_del - Remove an existing CALIPSO DOI definition
* @doi: CALIPSO DOI
* @audit_info: NetLabel audit information
*
* Description:
* Remove an existing CALIPSO DOI definition matching @doi.
*
*/
void netlbl_cfg_calipso_del(u32 doi, struct netlbl_audit *audit_info)
{
#if IS_ENABLED(CONFIG_IPV6)
calipso_doi_remove(doi, audit_info);
#endif /* IPv6 */
}
/**
* netlbl_cfg_calipso_map_add - Add a new CALIPSO DOI mapping
* @doi: the CALIPSO DOI
* @domain: the domain mapping to add
* @addr: IP address
* @mask: IP address mask
* @audit_info: NetLabel audit information
*
* Description:
* Add a new NetLabel/LSM domain mapping for the given CALIPSO DOI to the
* NetLabel subsystem. A @domain value of NULL adds a new default domain
* mapping. Returns zero on success, negative values on failure.
*
*/
int netlbl_cfg_calipso_map_add(u32 doi,
const char *domain,
const struct in6_addr *addr,
const struct in6_addr *mask,
struct netlbl_audit *audit_info)
{
#if IS_ENABLED(CONFIG_IPV6)
int ret_val = -ENOMEM;
struct calipso_doi *doi_def;
struct netlbl_dom_map *entry;
struct netlbl_domaddr_map *addrmap = NULL;
struct netlbl_domaddr6_map *addrinfo = NULL;
doi_def = calipso_doi_getdef(doi);
if (doi_def == NULL)
return -ENOENT;
entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
if (entry == NULL)
goto out_entry;
entry->family = AF_INET6;
if (domain != NULL) {
entry->domain = kstrdup(domain, GFP_ATOMIC);
if (entry->domain == NULL)
goto out_domain;
}
if (addr == NULL && mask == NULL) {
entry->def.calipso = doi_def;
entry->def.type = NETLBL_NLTYPE_CALIPSO;
} else if (addr != NULL && mask != NULL) {
addrmap = kzalloc(sizeof(*addrmap), GFP_ATOMIC);
if (addrmap == NULL)
goto out_addrmap;
INIT_LIST_HEAD(&addrmap->list4);
INIT_LIST_HEAD(&addrmap->list6);
addrinfo = kzalloc(sizeof(*addrinfo), GFP_ATOMIC);
if (addrinfo == NULL)
goto out_addrinfo;
addrinfo->def.calipso = doi_def;
addrinfo->def.type = NETLBL_NLTYPE_CALIPSO;
addrinfo->list.addr = *addr;
addrinfo->list.addr.s6_addr32[0] &= mask->s6_addr32[0];
addrinfo->list.addr.s6_addr32[1] &= mask->s6_addr32[1];
addrinfo->list.addr.s6_addr32[2] &= mask->s6_addr32[2];
addrinfo->list.addr.s6_addr32[3] &= mask->s6_addr32[3];
addrinfo->list.mask = *mask;
addrinfo->list.valid = 1;
ret_val = netlbl_af6list_add(&addrinfo->list, &addrmap->list6);
if (ret_val != 0)
goto cfg_calipso_map_add_failure;
entry->def.addrsel = addrmap;
entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
} else {
ret_val = -EINVAL;
goto out_addrmap;
}
ret_val = netlbl_domhsh_add(entry, audit_info);
if (ret_val != 0)
goto cfg_calipso_map_add_failure;
return 0;
cfg_calipso_map_add_failure:
kfree(addrinfo);
out_addrinfo:
kfree(addrmap);
out_addrmap:
kfree(entry->domain);
out_domain:
kfree(entry);
out_entry:
calipso_doi_putdef(doi_def);
return ret_val;
#else /* IPv6 */
return -ENOSYS;
#endif /* IPv6 */
}
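Together with netlbl_cfg_calipso_add() and netlbl_cfg_calipso_del() above, this gives callers inside the NetLabel core the same two-step configuration path that already exists for CIPSOv4: define a DOI, then map a domain to it. A hedged sketch modelled on netlbl_calipso_add_pass(); the DOI value and the domain string are illustrative.

/*
 * Editorial sketch, not part of the patch: configure a pass-through
 * CALIPSO DOI and map a domain to it.
 */
static int example_cfg_calipso(struct netlbl_audit *audit_info)
{
	const u32 doi = 16;	/* illustrative DOI value */
	struct calipso_doi *doi_def;
	int ret;

	doi_def = kzalloc(sizeof(*doi_def), GFP_KERNEL);
	if (!doi_def)
		return -ENOMEM;
	doi_def->type = CALIPSO_MAP_PASS;
	doi_def->doi = doi;

	ret = netlbl_cfg_calipso_add(doi_def, audit_info);
	if (ret) {
		calipso_doi_free(doi_def);
		return ret;
	}

	/* a NULL addr/mask pair maps the whole domain rather than a prefix */
	ret = netlbl_cfg_calipso_map_add(doi, "example_lsm_domain",
					 NULL, NULL, audit_info);
	if (ret)
		netlbl_cfg_calipso_del(doi, audit_info);
	return ret;
}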
/*
* Security Attribute Functions
*/
@ -519,6 +661,7 @@ int netlbl_catmap_walk(struct netlbl_lsm_catmap *catmap, u32 offset)
return -ENOENT;
}
EXPORT_SYMBOL(netlbl_catmap_walk);
/**
* netlbl_catmap_walkrng - Find the end of a string of set bits
@ -609,20 +752,19 @@ int netlbl_catmap_getlong(struct netlbl_lsm_catmap *catmap,
off = catmap->startbit;
*offset = off;
}
iter = _netlbl_catmap_getnode(&catmap, off, _CM_F_NONE, 0);
iter = _netlbl_catmap_getnode(&catmap, off, _CM_F_WALK, 0);
if (iter == NULL) {
*offset = (u32)-1;
return 0;
}
if (off < iter->startbit) {
off = iter->startbit;
*offset = off;
*offset = iter->startbit;
off = 0;
} else
off -= iter->startbit;
idx = off / NETLBL_CATMAP_MAPSIZE;
*bitmap = iter->bitmap[idx] >> (off % NETLBL_CATMAP_SIZE);
*bitmap = iter->bitmap[idx] >> (off % NETLBL_CATMAP_MAPSIZE);
return 0;
}
@ -655,6 +797,7 @@ int netlbl_catmap_setbit(struct netlbl_lsm_catmap **catmap,
return 0;
}
EXPORT_SYMBOL(netlbl_catmap_setbit);
/**
* netlbl_catmap_setrng - Set a range of bits in a LSM secattr catmap
@ -727,6 +870,76 @@ int netlbl_catmap_setlong(struct netlbl_lsm_catmap **catmap,
return 0;
}
/* Bitmap functions
*/
/**
* netlbl_bitmap_walk - Walk a bitmap looking for a bit
* @bitmap: the bitmap
* @bitmap_len: length in bits
* @offset: starting offset
* @state: if non-zero, look for a set (1) bit else look for a cleared (0) bit
*
* Description:
* Starting at @offset, walk the bitmap from left to right until either the
* desired bit is found or we reach the end. Return the bit offset, -1 if
* not found, or -2 if error.
*/
int netlbl_bitmap_walk(const unsigned char *bitmap, u32 bitmap_len,
u32 offset, u8 state)
{
u32 bit_spot;
u32 byte_offset;
unsigned char bitmask;
unsigned char byte;
byte_offset = offset / 8;
byte = bitmap[byte_offset];
bit_spot = offset;
bitmask = 0x80 >> (offset % 8);
while (bit_spot < bitmap_len) {
if ((state && (byte & bitmask) == bitmask) ||
(state == 0 && (byte & bitmask) == 0))
return bit_spot;
bit_spot++;
bitmask >>= 1;
if (bitmask == 0) {
byte = bitmap[++byte_offset];
bitmask = 0x80;
}
}
return -1;
}
EXPORT_SYMBOL(netlbl_bitmap_walk);
/**
* netlbl_bitmap_setbit - Sets a single bit in a bitmap
* @bitmap: the bitmap
* @bit: the bit
* @state: if non-zero, set the bit (1) else clear the bit (0)
*
* Description:
* Set a single bit in the bitmask. Returns zero on success, negative values
* on error.
*/
void netlbl_bitmap_setbit(unsigned char *bitmap, u32 bit, u8 state)
{
u32 byte_spot;
u8 bitmask;
/* gcc always rounds to zero when doing integer division */
byte_spot = bit / 8;
bitmask = 0x80 >> (bit % 8);
if (state)
bitmap[byte_spot] |= bitmask;
else
bitmap[byte_spot] &= ~bitmask;
}
EXPORT_SYMBOL(netlbl_bitmap_setbit);
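These two helpers are the former cipso_v4_bitmap_walk() and cipso_v4_bitmap_setbit(), removed from cipso_ipv4.c earlier in this series and hoisted into the NetLabel core so that the CIPSOv4 and CALIPSO engines can share them when translating category bitmaps. A small editorial example that copies every set bit from one bitmap into another, following the same pattern as the cipso_v4_map_cat_rbm_*() hunks above:

/* Editorial example, not part of the patch: copy all set bits across. */
static int example_copy_set_bits(const unsigned char *src, u32 src_len_bits,
				 unsigned char *dst, u32 dst_len_bits)
{
	int spot = -1;

	for (;;) {
		spot = netlbl_bitmap_walk(src, src_len_bits, spot + 1, 1);
		if (spot < 0)
			break;
		if ((u32)spot >= dst_len_bits)
			return -ENOSPC;
		netlbl_bitmap_setbit(dst, spot, 1);
	}
	return spot == -2 ? -EFAULT : 0;
}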
/*
* LSM Functions
*/
@ -774,7 +987,7 @@ int netlbl_sock_setattr(struct sock *sk,
struct netlbl_dom_map *dom_entry;
rcu_read_lock();
dom_entry = netlbl_domhsh_getentry(secattr->domain);
dom_entry = netlbl_domhsh_getentry(secattr->domain, family);
if (dom_entry == NULL) {
ret_val = -ENOENT;
goto socket_setattr_return;
@ -799,9 +1012,21 @@ int netlbl_sock_setattr(struct sock *sk,
break;
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
/* since we don't support any IPv6 labeling protocols right
* now we can optimize everything away until we do */
ret_val = 0;
switch (dom_entry->def.type) {
case NETLBL_NLTYPE_ADDRSELECT:
ret_val = -EDESTADDRREQ;
break;
case NETLBL_NLTYPE_CALIPSO:
ret_val = calipso_sock_setattr(sk,
dom_entry->def.calipso,
secattr);
break;
case NETLBL_NLTYPE_UNLABELED:
ret_val = 0;
break;
default:
ret_val = -ENOENT;
}
break;
#endif /* IPv6 */
default:
@ -824,7 +1049,16 @@ int netlbl_sock_setattr(struct sock *sk,
*/
void netlbl_sock_delattr(struct sock *sk)
{
cipso_v4_sock_delattr(sk);
switch (sk->sk_family) {
case AF_INET:
cipso_v4_sock_delattr(sk);
break;
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
calipso_sock_delattr(sk);
break;
#endif /* IPv6 */
}
}
/**
@ -850,7 +1084,7 @@ int netlbl_sock_getattr(struct sock *sk,
break;
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
ret_val = -ENOMSG;
ret_val = calipso_sock_getattr(sk, secattr);
break;
#endif /* IPv6 */
default:
@ -878,6 +1112,9 @@ int netlbl_conn_setattr(struct sock *sk,
{
int ret_val;
struct sockaddr_in *addr4;
#if IS_ENABLED(CONFIG_IPV6)
struct sockaddr_in6 *addr6;
#endif
struct netlbl_dommap_def *entry;
rcu_read_lock();
@ -898,7 +1135,7 @@ int netlbl_conn_setattr(struct sock *sk,
case NETLBL_NLTYPE_UNLABELED:
/* just delete the protocols we support for right now
* but we could remove other protocols if needed */
cipso_v4_sock_delattr(sk);
netlbl_sock_delattr(sk);
ret_val = 0;
break;
default:
@ -907,9 +1144,27 @@ int netlbl_conn_setattr(struct sock *sk,
break;
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
/* since we don't support any IPv6 labeling protocols right
* now we can optimize everything away until we do */
ret_val = 0;
addr6 = (struct sockaddr_in6 *)addr;
entry = netlbl_domhsh_getentry_af6(secattr->domain,
&addr6->sin6_addr);
if (entry == NULL) {
ret_val = -ENOENT;
goto conn_setattr_return;
}
switch (entry->type) {
case NETLBL_NLTYPE_CALIPSO:
ret_val = calipso_sock_setattr(sk,
entry->calipso, secattr);
break;
case NETLBL_NLTYPE_UNLABELED:
/* just delete the protocols we support for right now
* but we could remove other protocols if needed */
netlbl_sock_delattr(sk);
ret_val = 0;
break;
default:
ret_val = -ENOENT;
}
break;
#endif /* IPv6 */
default:
@ -936,12 +1191,13 @@ int netlbl_req_setattr(struct request_sock *req,
{
int ret_val;
struct netlbl_dommap_def *entry;
struct inet_request_sock *ireq = inet_rsk(req);
rcu_read_lock();
switch (req->rsk_ops->family) {
case AF_INET:
entry = netlbl_domhsh_getentry_af4(secattr->domain,
inet_rsk(req)->ir_rmt_addr);
ireq->ir_rmt_addr);
if (entry == NULL) {
ret_val = -ENOENT;
goto req_setattr_return;
@ -952,9 +1208,7 @@ int netlbl_req_setattr(struct request_sock *req,
entry->cipso, secattr);
break;
case NETLBL_NLTYPE_UNLABELED:
/* just delete the protocols we support for right now
* but we could remove other protocols if needed */
cipso_v4_req_delattr(req);
netlbl_req_delattr(req);
ret_val = 0;
break;
default:
@ -963,9 +1217,24 @@ int netlbl_req_setattr(struct request_sock *req,
break;
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
/* since we don't support any IPv6 labeling protocols right
* now we can optimize everything away until we do */
ret_val = 0;
entry = netlbl_domhsh_getentry_af6(secattr->domain,
&ireq->ir_v6_rmt_addr);
if (entry == NULL) {
ret_val = -ENOENT;
goto req_setattr_return;
}
switch (entry->type) {
case NETLBL_NLTYPE_CALIPSO:
ret_val = calipso_req_setattr(req,
entry->calipso, secattr);
break;
case NETLBL_NLTYPE_UNLABELED:
netlbl_req_delattr(req);
ret_val = 0;
break;
default:
ret_val = -ENOENT;
}
break;
#endif /* IPv6 */
default:
@ -987,7 +1256,16 @@ int netlbl_req_setattr(struct request_sock *req,
*/
void netlbl_req_delattr(struct request_sock *req)
{
cipso_v4_req_delattr(req);
switch (req->rsk_ops->family) {
case AF_INET:
cipso_v4_req_delattr(req);
break;
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
calipso_req_delattr(req);
break;
#endif /* IPv6 */
}
}
/**
@ -1007,13 +1285,17 @@ int netlbl_skbuff_setattr(struct sk_buff *skb,
{
int ret_val;
struct iphdr *hdr4;
#if IS_ENABLED(CONFIG_IPV6)
struct ipv6hdr *hdr6;
#endif
struct netlbl_dommap_def *entry;
rcu_read_lock();
switch (family) {
case AF_INET:
hdr4 = ip_hdr(skb);
entry = netlbl_domhsh_getentry_af4(secattr->domain,hdr4->daddr);
entry = netlbl_domhsh_getentry_af4(secattr->domain,
hdr4->daddr);
if (entry == NULL) {
ret_val = -ENOENT;
goto skbuff_setattr_return;
@ -1034,9 +1316,26 @@ int netlbl_skbuff_setattr(struct sk_buff *skb,
break;
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
/* since we don't support any IPv6 labeling protocols right
* now we can optimize everything away until we do */
ret_val = 0;
hdr6 = ipv6_hdr(skb);
entry = netlbl_domhsh_getentry_af6(secattr->domain,
&hdr6->daddr);
if (entry == NULL) {
ret_val = -ENOENT;
goto skbuff_setattr_return;
}
switch (entry->type) {
case NETLBL_NLTYPE_CALIPSO:
ret_val = calipso_skbuff_setattr(skb, entry->calipso,
secattr);
break;
case NETLBL_NLTYPE_UNLABELED:
/* just delete the protocols we support for right now
* but we could remove other protocols if needed */
ret_val = calipso_skbuff_delattr(skb);
break;
default:
ret_val = -ENOENT;
}
break;
#endif /* IPv6 */
default:
@ -1075,6 +1374,9 @@ int netlbl_skbuff_getattr(const struct sk_buff *skb,
break;
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
ptr = calipso_optptr(skb);
if (ptr && calipso_getattr(ptr, secattr) == 0)
return 0;
break;
#endif /* IPv6 */
}
@ -1085,6 +1387,7 @@ int netlbl_skbuff_getattr(const struct sk_buff *skb,
/**
* netlbl_skbuff_err - Handle a LSM error on a sk_buff
* @skb: the packet
* @family: the family
* @error: the error code
* @gateway: true if host is acting as a gateway, false otherwise
*
@ -1094,10 +1397,14 @@ int netlbl_skbuff_getattr(const struct sk_buff *skb,
* according to the packet's labeling protocol.
*
*/
void netlbl_skbuff_err(struct sk_buff *skb, int error, int gateway)
void netlbl_skbuff_err(struct sk_buff *skb, u16 family, int error, int gateway)
{
if (cipso_v4_optptr(skb))
cipso_v4_error(skb, error, gateway);
switch (family) {
case AF_INET:
if (cipso_v4_optptr(skb))
cipso_v4_error(skb, error, gateway);
break;
}
}
/**
@ -1112,11 +1419,15 @@ void netlbl_skbuff_err(struct sk_buff *skb, int error, int gateway)
void netlbl_cache_invalidate(void)
{
cipso_v4_cache_invalidate();
#if IS_ENABLED(CONFIG_IPV6)
calipso_cache_invalidate();
#endif /* IPv6 */
}
/**
* netlbl_cache_add - Add an entry to a NetLabel protocol cache
* @skb: the packet
* @family: the family
* @secattr: the packet's security attributes
*
* Description:
@ -1125,7 +1436,7 @@ void netlbl_cache_invalidate(void)
* values on error.
*
*/
int netlbl_cache_add(const struct sk_buff *skb,
int netlbl_cache_add(const struct sk_buff *skb, u16 family,
const struct netlbl_lsm_secattr *secattr)
{
unsigned char *ptr;
@ -1133,10 +1444,20 @@ int netlbl_cache_add(const struct sk_buff *skb,
if ((secattr->flags & NETLBL_SECATTR_CACHE) == 0)
return -ENOMSG;
ptr = cipso_v4_optptr(skb);
if (ptr)
return cipso_v4_cache_add(ptr, secattr);
switch (family) {
case AF_INET:
ptr = cipso_v4_optptr(skb);
if (ptr)
return cipso_v4_cache_add(ptr, secattr);
break;
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
ptr = calipso_optptr(skb);
if (ptr)
return calipso_cache_add(ptr, secattr);
break;
#endif /* IPv6 */
}
return -ENOMSG;
}
@ -1161,6 +1482,7 @@ struct audit_buffer *netlbl_audit_start(int type,
{
return netlbl_audit_start_common(type, audit_info);
}
EXPORT_SYMBOL(netlbl_audit_start);
/*
* Setup Functions

View File

@ -41,8 +41,10 @@
#include <net/ipv6.h>
#include <net/netlabel.h>
#include <net/cipso_ipv4.h>
#include <net/calipso.h>
#include <linux/atomic.h>
#include "netlabel_calipso.h"
#include "netlabel_domainhash.h"
#include "netlabel_user.h"
#include "netlabel_mgmt.h"
@ -72,6 +74,8 @@ static const struct nla_policy netlbl_mgmt_genl_policy[NLBL_MGMT_A_MAX + 1] = {
[NLBL_MGMT_A_PROTOCOL] = { .type = NLA_U32 },
[NLBL_MGMT_A_VERSION] = { .type = NLA_U32 },
[NLBL_MGMT_A_CV4DOI] = { .type = NLA_U32 },
[NLBL_MGMT_A_FAMILY] = { .type = NLA_U16 },
[NLBL_MGMT_A_CLPDOI] = { .type = NLA_U32 },
};
/*
@ -95,6 +99,9 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
int ret_val = -EINVAL;
struct netlbl_domaddr_map *addrmap = NULL;
struct cipso_v4_doi *cipsov4 = NULL;
#if IS_ENABLED(CONFIG_IPV6)
struct calipso_doi *calipso = NULL;
#endif
u32 tmp_val;
struct netlbl_dom_map *entry = kzalloc(sizeof(*entry), GFP_KERNEL);
@ -119,6 +126,11 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
switch (entry->def.type) {
case NETLBL_NLTYPE_UNLABELED:
if (info->attrs[NLBL_MGMT_A_FAMILY])
entry->family =
nla_get_u16(info->attrs[NLBL_MGMT_A_FAMILY]);
else
entry->family = AF_UNSPEC;
break;
case NETLBL_NLTYPE_CIPSOV4:
if (!info->attrs[NLBL_MGMT_A_CV4DOI])
@ -128,12 +140,30 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
cipsov4 = cipso_v4_doi_getdef(tmp_val);
if (cipsov4 == NULL)
goto add_free_domain;
entry->family = AF_INET;
entry->def.cipso = cipsov4;
break;
#if IS_ENABLED(CONFIG_IPV6)
case NETLBL_NLTYPE_CALIPSO:
if (!info->attrs[NLBL_MGMT_A_CLPDOI])
goto add_free_domain;
tmp_val = nla_get_u32(info->attrs[NLBL_MGMT_A_CLPDOI]);
calipso = calipso_doi_getdef(tmp_val);
if (calipso == NULL)
goto add_free_domain;
entry->family = AF_INET6;
entry->def.calipso = calipso;
break;
#endif /* IPv6 */
default:
goto add_free_domain;
}
if ((entry->family == AF_INET && info->attrs[NLBL_MGMT_A_IPV6ADDR]) ||
(entry->family == AF_INET6 && info->attrs[NLBL_MGMT_A_IPV4ADDR]))
goto add_doi_put_def;
if (info->attrs[NLBL_MGMT_A_IPV4ADDR]) {
struct in_addr *addr;
struct in_addr *mask;
@ -178,6 +208,7 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
goto add_free_addrmap;
}
entry->family = AF_INET;
entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
entry->def.addrsel = addrmap;
#if IS_ENABLED(CONFIG_IPV6)
@ -220,6 +251,8 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
map->list.mask = *mask;
map->list.valid = 1;
map->def.type = entry->def.type;
if (calipso)
map->def.calipso = calipso;
ret_val = netlbl_af6list_add(&map->list, &addrmap->list6);
if (ret_val != 0) {
@ -227,6 +260,7 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
goto add_free_addrmap;
}
entry->family = AF_INET6;
entry->def.type = NETLBL_NLTYPE_ADDRSELECT;
entry->def.addrsel = addrmap;
#endif /* IPv6 */
@ -242,6 +276,9 @@ static int netlbl_mgmt_add_common(struct genl_info *info,
kfree(addrmap);
add_doi_put_def:
cipso_v4_doi_putdef(cipsov4);
#if IS_ENABLED(CONFIG_IPV6)
calipso_doi_putdef(calipso);
#endif
add_free_domain:
kfree(entry->domain);
add_free_entry:
@ -278,6 +315,10 @@ static int netlbl_mgmt_listentry(struct sk_buff *skb,
return ret_val;
}
ret_val = nla_put_u16(skb, NLBL_MGMT_A_FAMILY, entry->family);
if (ret_val != 0)
return ret_val;
switch (entry->def.type) {
case NETLBL_NLTYPE_ADDRSELECT:
nla_a = nla_nest_start(skb, NLBL_MGMT_A_SELECTORLIST);
@ -340,6 +381,15 @@ static int netlbl_mgmt_listentry(struct sk_buff *skb,
if (ret_val != 0)
return ret_val;
switch (map6->def.type) {
case NETLBL_NLTYPE_CALIPSO:
ret_val = nla_put_u32(skb, NLBL_MGMT_A_CLPDOI,
map6->def.calipso->doi);
if (ret_val != 0)
return ret_val;
break;
}
nla_nest_end(skb, nla_b);
}
#endif /* IPv6 */
@ -347,15 +397,25 @@ static int netlbl_mgmt_listentry(struct sk_buff *skb,
nla_nest_end(skb, nla_a);
break;
case NETLBL_NLTYPE_UNLABELED:
ret_val = nla_put_u32(skb,NLBL_MGMT_A_PROTOCOL,entry->def.type);
ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL,
entry->def.type);
break;
case NETLBL_NLTYPE_CIPSOV4:
ret_val = nla_put_u32(skb,NLBL_MGMT_A_PROTOCOL,entry->def.type);
ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL,
entry->def.type);
if (ret_val != 0)
return ret_val;
ret_val = nla_put_u32(skb, NLBL_MGMT_A_CV4DOI,
entry->def.cipso->doi);
break;
case NETLBL_NLTYPE_CALIPSO:
ret_val = nla_put_u32(skb, NLBL_MGMT_A_PROTOCOL,
entry->def.type);
if (ret_val != 0)
return ret_val;
ret_val = nla_put_u32(skb, NLBL_MGMT_A_CLPDOI,
entry->def.calipso->doi);
break;
}
return ret_val;
@ -418,7 +478,7 @@ static int netlbl_mgmt_remove(struct sk_buff *skb, struct genl_info *info)
netlbl_netlink_auditinfo(skb, &audit_info);
domain = nla_data(info->attrs[NLBL_MGMT_A_DOMAIN]);
return netlbl_domhsh_remove(domain, &audit_info);
return netlbl_domhsh_remove(domain, AF_UNSPEC, &audit_info);
}
/**
@ -536,7 +596,7 @@ static int netlbl_mgmt_removedef(struct sk_buff *skb, struct genl_info *info)
netlbl_netlink_auditinfo(skb, &audit_info);
return netlbl_domhsh_remove_default(&audit_info);
return netlbl_domhsh_remove_default(AF_UNSPEC, &audit_info);
}
/**
@ -556,6 +616,12 @@ static int netlbl_mgmt_listdef(struct sk_buff *skb, struct genl_info *info)
struct sk_buff *ans_skb = NULL;
void *data;
struct netlbl_dom_map *entry;
u16 family;
if (info->attrs[NLBL_MGMT_A_FAMILY])
family = nla_get_u16(info->attrs[NLBL_MGMT_A_FAMILY]);
else
family = AF_INET;
ans_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
if (ans_skb == NULL)
@ -566,7 +632,7 @@ static int netlbl_mgmt_listdef(struct sk_buff *skb, struct genl_info *info)
goto listdef_failure;
rcu_read_lock();
entry = netlbl_domhsh_getentry(NULL);
entry = netlbl_domhsh_getentry(NULL, family);
if (entry == NULL) {
ret_val = -ENOENT;
goto listdef_failure_lock;
@ -651,6 +717,15 @@ static int netlbl_mgmt_protocols(struct sk_buff *skb,
goto protocols_return;
protos_sent++;
}
#if IS_ENABLED(CONFIG_IPV6)
if (protos_sent == 2) {
if (netlbl_mgmt_protocols_cb(skb,
cb,
NETLBL_NLTYPE_CALIPSO) < 0)
goto protocols_return;
protos_sent++;
}
#endif
protocols_return:
cb->args[0] = protos_sent;

View File

@ -58,7 +58,10 @@
*
* NLBL_MGMT_A_CV4DOI
*
* If using NETLBL_NLTYPE_UNLABELED no other attributes are required.
* If using NETLBL_NLTYPE_UNLABELED no other attributes are required;
* however, the following attribute may optionally be sent:
*
* NLBL_MGMT_A_FAMILY
*
* o REMOVE:
* Sent by an application to remove a domain mapping from the NetLabel
@ -77,6 +80,7 @@
* Required attributes:
*
* NLBL_MGMT_A_DOMAIN
* NLBL_MGMT_A_FAMILY
*
* If the IP address selectors are not used the following attribute is
* required:
@ -108,7 +112,10 @@
*
* NLBL_MGMT_A_CV4DOI
*
* If using NETLBL_NLTYPE_UNLABELED no other attributes are required.
* If using NETLBL_NLTYPE_UNLABELED no other attributes are required;
* however, the following attribute may optionally be sent:
*
* NLBL_MGMT_A_FAMILY
*
* o REMOVEDEF:
* Sent by an application to remove the default domain mapping from the
@ -117,13 +124,17 @@
* o LISTDEF:
* This message can be sent either from an application or by the kernel in
* response to an application generated LISTDEF message. When sent by an
* application there is no payload. On success the kernel should send a
* response using the following format.
* application there may be an optional payload.
*
* If the IP address selectors are not used the following attribute is
* NLBL_MGMT_A_FAMILY
*
* On success the kernel should send a response using the following format:
*
* If the IP address selectors are not used the following attributes are
* required:
*
* NLBL_MGMT_A_PROTOCOL
* NLBL_MGMT_A_FAMILY
*
* If the IP address selectors are used then the following attribute is
* required:
@ -209,6 +220,12 @@ enum {
/* (NLA_NESTED)
* the selector list, there must be at least one
* NLBL_MGMT_A_ADDRSELECTOR attribute */
NLBL_MGMT_A_FAMILY,
/* (NLA_U16)
* The address family */
NLBL_MGMT_A_CLPDOI,
/* (NLA_U32)
* the CALIPSO DOI value */
__NLBL_MGMT_A_MAX,
};
#define NLBL_MGMT_A_MAX (__NLBL_MGMT_A_MAX - 1)
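To make the LISTDEF description above more concrete, here is a hedged userspace sketch that sends a LISTDEF request carrying the new optional NLBL_MGMT_A_FAMILY attribute over generic netlink using libnl-3. The command, attribute, and version constants are local copies assumed to mirror this header and the kernel's NetLabel registration (they are not exported through a uapi header), and the generic netlink family name "NLBL_MGMT" is likewise assumed; response parsing is omitted. Treat it as an illustration, not a drop-in tool.

/* Hedged illustration only: the constants below are assumed to mirror
 * the kernel's netlabel_mgmt.h and must be kept in sync by hand.
 * Build (assumed): cc listdef.c $(pkg-config --cflags --libs libnl-genl-3.0)
 */
#include <netlink/netlink.h>
#include <netlink/attr.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>
#include <sys/socket.h>			/* AF_INET6 */
#include <stdio.h>

#define NLBL_MGMT_C_LISTDEF	6	/* assumed LISTDEF command value */
#define NLBL_MGMT_A_FAMILY	11	/* assumed value of the new attribute */
#define NETLBL_PROTO_VERSION	3	/* assumed NetLabel genl version */

int main(void)
{
	struct nl_sock *sk = nl_socket_alloc();
	struct nl_msg *msg;
	int family, rc;

	if (!sk || genl_connect(sk) < 0)
		return 1;

	/* resolve the NetLabel management generic netlink family */
	family = genl_ctrl_resolve(sk, "NLBL_MGMT");
	if (family < 0)
		return 1;

	msg = nlmsg_alloc();
	if (!msg)
		return 1;

	/* LISTDEF request; the payload is optional, here we ask for the
	 * IPv6 default mapping by sending NLBL_MGMT_A_FAMILY = AF_INET6 */
	if (!genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, family, 0, 0,
			 NLBL_MGMT_C_LISTDEF, NETLBL_PROTO_VERSION))
		return 1;
	nla_put_u16(msg, NLBL_MGMT_A_FAMILY, AF_INET6);

	rc = nl_send_auto(sk, msg);
	printf("LISTDEF request sent, rc=%d\n", rc);
	/* reply handling omitted for brevity */

	nlmsg_free(msg);
	nl_socket_free(sk);
	return rc < 0;
}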

View File

@ -116,8 +116,8 @@ struct netlbl_unlhsh_walk_arg {
static DEFINE_SPINLOCK(netlbl_unlhsh_lock);
#define netlbl_unlhsh_rcu_deref(p) \
rcu_dereference_check(p, lockdep_is_held(&netlbl_unlhsh_lock))
static struct netlbl_unlhsh_tbl *netlbl_unlhsh;
static struct netlbl_unlhsh_iface *netlbl_unlhsh_def;
static struct netlbl_unlhsh_tbl __rcu *netlbl_unlhsh;
static struct netlbl_unlhsh_iface __rcu *netlbl_unlhsh_def;
/* Accept unlabeled packets flag */
static u8 netlabel_unlabel_acceptflg;
@ -1537,6 +1537,7 @@ int __init netlbl_unlabel_defconf(void)
entry = kzalloc(sizeof(*entry), GFP_KERNEL);
if (entry == NULL)
return -ENOMEM;
entry->family = AF_UNSPEC;
entry->def.type = NETLBL_NLTYPE_UNLABELED;
ret_val = netlbl_domhsh_add_default(entry, &audit_info);
if (ret_val != 0)

View File

@ -44,6 +44,7 @@
#include "netlabel_mgmt.h"
#include "netlabel_unlabeled.h"
#include "netlabel_cipso_v4.h"
#include "netlabel_calipso.h"
#include "netlabel_user.h"
/*
@ -71,6 +72,10 @@ int __init netlbl_netlink_init(void)
if (ret_val != 0)
return ret_val;
ret_val = netlbl_calipso_genl_init();
if (ret_val != 0)
return ret_val;
return netlbl_unlabel_genl_init();
}

View File

@ -46,7 +46,7 @@ static int net_ctl_permissions(struct ctl_table_header *head,
kgid_t root_gid = make_kgid(net->user_ns, 0);
/* Allow network administrator to have same access as root. */
if (ns_capable(net->user_ns, CAP_NET_ADMIN) ||
if (ns_capable_noaudit(net->user_ns, CAP_NET_ADMIN) ||
uid_eq(root_uid, current_euid())) {
int mode = (table->mode >> 6) & 7;
return (mode << 6) | (mode << 3) | mode;

View File

@ -92,4 +92,11 @@ config SAMPLE_CONNECTOR
with it.
See also Documentation/connector/connector.txt
config SAMPLE_SECCOMP
tristate "Build seccomp sample code -- loadable modules only"
depends on SECCOMP_FILTER && m
help
Build samples of seccomp filters using various methods of
BPF filter construction.
endif # SAMPLES
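As background for what these samples demonstrate, the sketch below hand-installs a minimal classic-BPF seccomp filter. It is an invented illustration rather than one of the programs in samples/seccomp/, and it omits the architecture check that a production filter must perform before trusting the syscall number.

/* Minimal seccomp-BPF illustration: fail getpid() with EPERM, allow
 * everything else.  Sketch only -- real filters must also validate
 * seccomp_data->arch before acting on the syscall number.
 */
#include <errno.h>
#include <stdio.h>
#include <stddef.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void)
{
	struct sock_filter filter[] = {
		/* load the syscall number */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
			 offsetof(struct seccomp_data, nr)),
		/* if it is getpid, return EPERM, otherwise allow */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_getpid, 0, 1),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EPERM),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
	};
	struct sock_fprog prog = {
		.len = (unsigned short)(sizeof(filter) / sizeof(filter[0])),
		.filter = filter,
	};

	/* required so an unprivileged task may install a filter */
	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
		return 1;
	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog))
		return 1;

	printf("getpid() now returns %ld (errno %d)\n",
	       (long)syscall(SYS_getpid), errno);
	return 0;
}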

View File

@ -1,7 +1,7 @@
# kbuild trick to avoid linker error. Can be omitted if a module is built.
obj- := dummy.o
hostprogs-$(CONFIG_SECCOMP_FILTER) := bpf-fancy dropper bpf-direct
hostprogs-$(CONFIG_SAMPLE_SECCOMP) := bpf-fancy dropper bpf-direct
HOSTCFLAGS_bpf-fancy.o += -I$(objtree)/usr/include
HOSTCFLAGS_bpf-fancy.o += -idirafter $(objtree)/include

View File

@ -1,6 +1,6 @@
/* Sign a module file using the given key.
*
* Copyright © 2014-2015 Red Hat, Inc. All Rights Reserved.
* Copyright © 2014-2016 Red Hat, Inc. All Rights Reserved.
* Copyright © 2015 Intel Corporation.
* Copyright © 2016 Hewlett Packard Enterprise Development LP
*
@ -167,19 +167,37 @@ static EVP_PKEY *read_private_key(const char *private_key_name)
static X509 *read_x509(const char *x509_name)
{
unsigned char buf[2];
X509 *x509;
BIO *b;
int n;
b = BIO_new_file(x509_name, "rb");
ERR(!b, "%s", x509_name);
x509 = d2i_X509_bio(b, NULL); /* Binary encoded X.509 */
if (!x509) {
ERR(BIO_reset(b) != 1, "%s", x509_name);
x509 = PEM_read_bio_X509(b, NULL, NULL,
NULL); /* PEM encoded X.509 */
if (x509)
drain_openssl_errors();
/* Look at the first two bytes of the file to determine the encoding */
n = BIO_read(b, buf, 2);
if (n != 2) {
if (BIO_should_retry(b)) {
fprintf(stderr, "%s: Read wanted retry\n", x509_name);
exit(1);
}
if (n >= 0) {
fprintf(stderr, "%s: Short read\n", x509_name);
exit(1);
}
ERR(1, "%s", x509_name);
}
ERR(BIO_reset(b) != 0, "%s", x509_name);
if (buf[0] == 0x30 && buf[1] >= 0x81 && buf[1] <= 0x84)
/* Assume raw DER encoded X.509 */
x509 = d2i_X509_bio(b, NULL);
else
/* Assume PEM encoded X.509 */
x509 = PEM_read_bio_X509(b, NULL, NULL, NULL);
BIO_free(b);
ERR(!x509, "%s", x509_name);

View File

@ -31,13 +31,26 @@ config SECURITY_APPARMOR_BOOTPARAM_VALUE
If you are unsure how to answer this question, answer 1.
config SECURITY_APPARMOR_HASH
bool "SHA1 hash of loaded profiles"
bool "Enable introspection of sha1 hashes for loaded profiles"
depends on SECURITY_APPARMOR
select CRYPTO
select CRYPTO_SHA1
default y
help
This option selects whether sha1 hashing is done against loaded
profiles and exported for inspection to user space via the apparmor
filesystem.
This option selects whether introspection of loaded policy
is available to userspace via the apparmor filesystem.
config SECURITY_APPARMOR_HASH_DEFAULT
bool "Enable policy hash introspection by default"
depends on SECURITY_APPARMOR_HASH
default y
help
This option selects whether sha1 hashing of loaded policy
is enabled by default. The generation of sha1 hashes for
loaded policy provides system administrators with a quick
way to verify that the policy in the kernel matches what is
expected; however, it can slow down policy load on some
devices. In those cases policy hashing can be disabled by
default and enabled only when needed.

View File

@ -331,6 +331,7 @@ static int aa_fs_seq_hash_show(struct seq_file *seq, void *v)
seq_printf(seq, "%.2x", profile->hash[i]);
seq_puts(seq, "\n");
}
aa_put_profile(profile);
return 0;
}
@ -379,6 +380,8 @@ void __aa_fs_profile_migrate_dents(struct aa_profile *old,
for (i = 0; i < AAFS_PROF_SIZEOF; i++) {
new->dents[i] = old->dents[i];
if (new->dents[i])
new->dents[i]->d_inode->i_mtime = CURRENT_TIME;
old->dents[i] = NULL;
}
}
@ -550,8 +553,6 @@ int __aa_fs_namespace_mkdir(struct aa_namespace *ns, struct dentry *parent,
}
#define list_entry_next(pos, member) \
list_entry(pos->member.next, typeof(*pos), member)
#define list_entry_is_head(pos, head, member) (&pos->member == (head))
/**
@ -582,7 +583,7 @@ static struct aa_namespace *__next_namespace(struct aa_namespace *root,
parent = ns->parent;
while (ns != root) {
mutex_unlock(&ns->lock);
next = list_entry_next(ns, base.list);
next = list_next_entry(ns, base.list);
if (!list_entry_is_head(next, &parent->sub_ns, base.list)) {
mutex_lock(&next->lock);
return next;
@ -636,7 +637,7 @@ static struct aa_profile *__next_profile(struct aa_profile *p)
parent = rcu_dereference_protected(p->parent,
mutex_is_locked(&p->ns->lock));
while (parent) {
p = list_entry_next(p, base.list);
p = list_next_entry(p, base.list);
if (!list_entry_is_head(p, &parent->base.profiles, base.list))
return p;
p = parent;
@ -645,7 +646,7 @@ static struct aa_profile *__next_profile(struct aa_profile *p)
}
/* is next another profile in the namespace */
p = list_entry_next(p, base.list);
p = list_next_entry(p, base.list);
if (!list_entry_is_head(p, &ns->base.profiles, base.list))
return p;

View File

@ -200,7 +200,8 @@ int aa_audit(int type, struct aa_profile *profile, gfp_t gfp,
if (sa->aad->type == AUDIT_APPARMOR_KILL)
(void)send_sig_info(SIGKILL, NULL,
sa->u.tsk ? sa->u.tsk : current);
sa->type == LSM_AUDIT_DATA_TASK && sa->u.tsk ?
sa->u.tsk : current);
if (sa->aad->type == AUDIT_APPARMOR_ALLOWED)
return complain_error(sa->aad->error);

View File

@ -39,6 +39,9 @@ int aa_calc_profile_hash(struct aa_profile *profile, u32 version, void *start,
int error = -ENOMEM;
u32 le32_version = cpu_to_le32(version);
if (!aa_g_hash_policy)
return 0;
if (!apparmor_tfm)
return 0;

View File

@ -346,7 +346,7 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
file_inode(bprm->file)->i_uid,
file_inode(bprm->file)->i_mode
};
const char *name = NULL, *target = NULL, *info = NULL;
const char *name = NULL, *info = NULL;
int error = 0;
if (bprm->cred_prepared)
@ -399,6 +399,7 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
if (cxt->onexec) {
struct file_perms cp;
info = "change_profile onexec";
new_profile = aa_get_newest_profile(cxt->onexec);
if (!(perms.allow & AA_MAY_ONEXEC))
goto audit;
@ -413,7 +414,6 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
if (!(cp.allow & AA_MAY_ONEXEC))
goto audit;
new_profile = aa_get_newest_profile(cxt->onexec);
goto apply;
}
@ -433,7 +433,7 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
new_profile = aa_get_newest_profile(ns->unconfined);
info = "ux fallback";
} else {
error = -ENOENT;
error = -EACCES;
info = "profile not found";
/* remove MAY_EXEC to audit as failure */
perms.allow &= ~MAY_EXEC;
@ -445,10 +445,8 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
if (!new_profile) {
error = -ENOMEM;
info = "could not create null profile";
} else {
} else
error = -EACCES;
target = new_profile->base.hname;
}
perms.xindex |= AA_X_UNSAFE;
} else
/* fail exec */
@ -459,7 +457,6 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
* fail the exec.
*/
if (bprm->unsafe & LSM_UNSAFE_NO_NEW_PRIVS) {
aa_put_profile(new_profile);
error = -EPERM;
goto cleanup;
}
@ -474,10 +471,8 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
if (bprm->unsafe & (LSM_UNSAFE_PTRACE | LSM_UNSAFE_PTRACE_CAP)) {
error = may_change_ptraced_domain(new_profile);
if (error) {
aa_put_profile(new_profile);
if (error)
goto audit;
}
}
/* Determine if secure exec is needed.
@ -498,7 +493,6 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
bprm->unsafe |= AA_SECURE_X_NEEDED;
}
apply:
target = new_profile->base.hname;
/* when transitioning profiles clear unsafe personality bits */
bprm->per_clear |= PER_CLEAR_ON_SETID;
@ -506,15 +500,19 @@ int apparmor_bprm_set_creds(struct linux_binprm *bprm)
aa_put_profile(cxt->profile);
/* transfer new profile reference will be released when cxt is freed */
cxt->profile = new_profile;
new_profile = NULL;
/* clear out all temporary/transitional state from the context */
aa_clear_task_cxt_trans(cxt);
audit:
error = aa_audit_file(profile, &perms, GFP_KERNEL, OP_EXEC, MAY_EXEC,
name, target, cond.uid, info, error);
name,
new_profile ? new_profile->base.hname : NULL,
cond.uid, info, error);
cleanup:
aa_put_profile(new_profile);
aa_put_profile(profile);
kfree(buffer);

View File

@ -110,7 +110,8 @@ int aa_audit_file(struct aa_profile *profile, struct file_perms *perms,
int type = AUDIT_APPARMOR_AUTO;
struct common_audit_data sa;
struct apparmor_audit_data aad = {0,};
sa.type = LSM_AUDIT_DATA_NONE;
sa.type = LSM_AUDIT_DATA_TASK;
sa.u.tsk = NULL;
sa.aad = &aad;
aad.op = op,
aad.fs.request = request;

View File

@ -37,6 +37,7 @@
extern enum audit_mode aa_g_audit;
extern bool aa_g_audit_header;
extern bool aa_g_debug;
extern bool aa_g_hash_policy;
extern bool aa_g_lock_policy;
extern bool aa_g_logsyscall;
extern bool aa_g_paranoid_load;

View File

@ -62,6 +62,7 @@ struct table_set_header {
#define YYTD_ID_ACCEPT2 6
#define YYTD_ID_NXT 7
#define YYTD_ID_TSIZE 8
#define YYTD_ID_MAX 8
#define YYTD_DATA8 1
#define YYTD_DATA16 2

View File

@ -403,6 +403,8 @@ static inline int AUDIT_MODE(struct aa_profile *profile)
return profile->audit;
}
bool policy_view_capable(void);
bool policy_admin_capable(void);
bool aa_may_manage_policy(int op);
#endif /* __AA_POLICY_H */

View File

@ -529,7 +529,7 @@ static int apparmor_setprocattr(struct task_struct *task, char *name,
if (!*args)
goto out;
arg_size = size - (args - (char *) value);
arg_size = size - (args - (largs ? largs : (char *) value));
if (strcmp(name, "current") == 0) {
if (strcmp(command, "changehat") == 0) {
error = aa_setprocattr_changehat(args, arg_size,
@ -671,6 +671,12 @@ enum profile_mode aa_g_profile_mode = APPARMOR_ENFORCE;
module_param_call(mode, param_set_mode, param_get_mode,
&aa_g_profile_mode, S_IRUSR | S_IWUSR);
#ifdef CONFIG_SECURITY_APPARMOR_HASH
/* whether policy verification hashing is enabled */
bool aa_g_hash_policy = IS_ENABLED(CONFIG_SECURITY_APPARMOR_HASH_DEFAULT);
module_param_named(hash_policy, aa_g_hash_policy, aabool, S_IRUSR | S_IWUSR);
#endif
/* Debug mode */
bool aa_g_debug;
module_param_named(debug, aa_g_debug, aabool, S_IRUSR | S_IWUSR);
@ -728,51 +734,49 @@ __setup("apparmor=", apparmor_enabled_setup);
/* set global flag turning off the ability to load policy */
static int param_set_aalockpolicy(const char *val, const struct kernel_param *kp)
{
if (!capable(CAP_MAC_ADMIN))
if (!policy_admin_capable())
return -EPERM;
if (aa_g_lock_policy)
return -EACCES;
return param_set_bool(val, kp);
}
static int param_get_aalockpolicy(char *buffer, const struct kernel_param *kp)
{
if (!capable(CAP_MAC_ADMIN))
if (!policy_view_capable())
return -EPERM;
return param_get_bool(buffer, kp);
}
static int param_set_aabool(const char *val, const struct kernel_param *kp)
{
if (!capable(CAP_MAC_ADMIN))
if (!policy_admin_capable())
return -EPERM;
return param_set_bool(val, kp);
}
static int param_get_aabool(char *buffer, const struct kernel_param *kp)
{
if (!capable(CAP_MAC_ADMIN))
if (!policy_view_capable())
return -EPERM;
return param_get_bool(buffer, kp);
}
static int param_set_aauint(const char *val, const struct kernel_param *kp)
{
if (!capable(CAP_MAC_ADMIN))
if (!policy_admin_capable())
return -EPERM;
return param_set_uint(val, kp);
}
static int param_get_aauint(char *buffer, const struct kernel_param *kp)
{
if (!capable(CAP_MAC_ADMIN))
if (!policy_view_capable())
return -EPERM;
return param_get_uint(buffer, kp);
}
static int param_get_audit(char *buffer, struct kernel_param *kp)
{
if (!capable(CAP_MAC_ADMIN))
if (!policy_view_capable())
return -EPERM;
if (!apparmor_enabled)
@ -784,7 +788,7 @@ static int param_get_audit(char *buffer, struct kernel_param *kp)
static int param_set_audit(const char *val, struct kernel_param *kp)
{
int i;
if (!capable(CAP_MAC_ADMIN))
if (!policy_admin_capable())
return -EPERM;
if (!apparmor_enabled)
@ -805,7 +809,7 @@ static int param_set_audit(const char *val, struct kernel_param *kp)
static int param_get_mode(char *buffer, struct kernel_param *kp)
{
if (!capable(CAP_MAC_ADMIN))
if (!policy_admin_capable())
return -EPERM;
if (!apparmor_enabled)
@ -817,7 +821,7 @@ static int param_get_mode(char *buffer, struct kernel_param *kp)
static int param_set_mode(const char *val, struct kernel_param *kp)
{
int i;
if (!capable(CAP_MAC_ADMIN))
if (!policy_admin_capable())
return -EPERM;
if (!apparmor_enabled)

View File

@ -47,6 +47,8 @@ static struct table_header *unpack_table(char *blob, size_t bsize)
* it every time we use td_id as an index
*/
th.td_id = be16_to_cpu(*(u16 *) (blob)) - 1;
if (th.td_id > YYTD_ID_MAX)
goto out;
th.td_flags = be16_to_cpu(*(u16 *) (blob + 2));
th.td_lolen = be32_to_cpu(*(u32 *) (blob + 8));
blob += sizeof(struct table_header);
@ -61,7 +63,9 @@ static struct table_header *unpack_table(char *blob, size_t bsize)
table = kvzalloc(tsize);
if (table) {
*table = th;
table->td_id = th.td_id;
table->td_flags = th.td_flags;
table->td_lolen = th.td_lolen;
if (th.td_flags == YYTD_DATA8)
UNPACK_ARRAY(table->td_data, blob, th.td_lolen,
u8, byte_to_byte);
@ -73,14 +77,14 @@ static struct table_header *unpack_table(char *blob, size_t bsize)
u32, be32_to_cpu);
else
goto fail;
/* if table was vmalloced make sure the page tables are synced
* before it is used, as it goes live to all cpus.
*/
if (is_vmalloc_addr(table))
vm_unmap_aliases();
}
out:
/* if table was vmalloced make sure the page tables are synced
* before it is used, as it goes live to all cpus.
*/
if (is_vmalloc_addr(table))
vm_unmap_aliases();
return table;
fail:
kvfree(table);

Some files were not shown because too many files have changed in this diff.