Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'stable/for-linus-3.12-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip

Pull Xen updates from Konrad Rzeszutek Wilk:
"A couple of features and a ton of bug-fixes. There is also some
maintership changes. Jeremy is enjoying the full-time work at the
startup and as much as he would love to help - he can't find the time.
I have a bunch of other things that I promised to work on - paravirt
diet, get SWIOTLB working everywhere, etc, but haven't been able to
find the time.

As such, both David Vrabel and Boris Ostrovsky have graciously
volunteered to help with the maintainership role. They will keep the
lid on regressions, bug-fixes, etc. I will be in the background to
help, but eventually there will be less of me doing the Xen GIT pulls
and more of them. Stefano is still doing the ARM/ARM64 work and will
continue doing so.

Features:
- Xen Trusted Platform Module (TPM) frontend driver, with the
backend in MiniOS.
- Scalability improvements in the event channel code.
- Two extra Xen co-maintainers (David, Boris) and one going away
(Jeremy).

Bug-fixes:
- Make the 1:1 mapping work during early bootup on selected regions.
- Add a scratch page to the balloon driver to deal with unexpected
code still holding on to stale pages.
- Allow NMIs on PV guests (64-bit only).
- Remove an unnecessary TLB flush in the M2P code.
- Fix duplicate callbacks in the Xen grant table code.
- Fixes in the PRIVCMD_MMAPBATCH ioctls to allow retries.
- Fix events being lost due to rescheduling on different VCPUs.
- More documentation"

* tag 'stable/for-linus-3.12-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: (23 commits)
hvc_xen: Remove unnecessary __GFP_ZERO from kzalloc
drivers/xen-tpmfront: Fix compile issue with missing option.
xen/balloon: don't set P2M entry for auto translated guest
xen/evtchn: double free on error
Xen: Fix retry calls into PRIVCMD_MMAPBATCH*.
xen/pvhvm: Initialize xen panic handler for PVHVM guests
xen/m2p: use GNTTABOP_unmap_and_replace to reinstate the original mapping
xen: fix ARM build after 6efa20e4
MAINTAINERS: Remove Jeremy from the Xen subsystem.
xen/events: document behaviour when scanning the start word for events
x86/xen: during early setup, only 1:1 map the ISA region
x86/xen: disable premption when enabling local irqs
swiotlb-xen: replace dma_length with sg_dma_len() macro
swiotlb: replace dma_length with sg_dma_len() macro
xen/balloon: set a mapping for ballooned out pages
xen/evtchn: improve scalability by using per-user locks
xen/p2m: avoid unneccesary TLB flush in m2p_remove_override()
MAINTAINERS: Add in two extra co-maintainers of the Xen tree.
MAINTAINERS: Update the Xen subsystem's with proper mailing list.
xen: replace strict_strtoul() with kstrtoul()
...

+1057 -186
+1
CREDITS
···
 D: Improved mmap and munmap handling
 D: General mm minor tidyups
 D: autofs v4 maintainer
+D: Xen subsystem
 S: 987 Alabama St
 S: San Francisco
 S: CA, 94110
+113
Documentation/tpm/xen-tpmfront.txt
Virtual TPM interface for Xen

Authors: Matthew Fioravante (JHUAPL), Daniel De Graaf (NSA)

This document describes the virtual Trusted Platform Module (vTPM) subsystem for
Xen. The reader is assumed to have familiarity with building and installing Xen,
Linux, and a basic understanding of the TPM and vTPM concepts.

INTRODUCTION

The goal of this work is to provide TPM functionality to a virtual guest
operating system (in Xen terms, a DomU). This allows programs to interact with
a TPM in a virtual system the same way they interact with a TPM on the physical
system. Each guest gets its own unique, emulated, software TPM. However, each
of the vTPM's secrets (Keys, NVRAM, etc) are managed by a vTPM Manager domain,
which seals the secrets to the Physical TPM. If the process of creating each of
these domains (manager, vTPM, and guest) is trusted, the vTPM subsystem extends
the chain of trust rooted in the hardware TPM to virtual machines in Xen. Each
major component of vTPM is implemented as a separate domain, providing secure
separation guaranteed by the hypervisor. The vTPM domains are implemented in
mini-os to reduce memory and processor overhead.

This mini-os vTPM subsystem was built on top of the previous vTPM work done by
IBM and Intel corporation.


DESIGN OVERVIEW
---------------

The architecture of vTPM is described below:

+------------------+
|    Linux DomU    | ...
|       |  ^       |
|       v  |       |
|   xen-tpmfront   |
+------------------+
        |  ^
        v  |
+------------------+
| mini-os/tpmback  |
|       |  ^       |
|       v  |       |
|  vtpm-stubdom    | ...
|       |  ^       |
|       v  |       |
| mini-os/tpmfront |
+------------------+
        |  ^
        v  |
+------------------+
| mini-os/tpmback  |
|       |  ^       |
|       v  |       |
| vtpmmgr-stubdom  |
|       |  ^       |
|       v  |       |
| mini-os/tpm_tis  |
+------------------+
        |  ^
        v  |
+------------------+
|   Hardware TPM   |
+------------------+

 * Linux DomU: The Linux based guest that wants to use a vTPM. There may be
               more than one of these.

 * xen-tpmfront.ko: Linux kernel virtual TPM frontend driver. This driver
                    provides vTPM access to a Linux-based DomU.

 * mini-os/tpmback: Mini-os TPM backend driver. The Linux frontend driver
                    connects to this backend driver to facilitate communications
                    between the Linux DomU and its vTPM. This driver is also
                    used by vtpmmgr-stubdom to communicate with vtpm-stubdom.

 * vtpm-stubdom: A mini-os stub domain that implements a vTPM. There is a
                 one to one mapping between running vtpm-stubdom instances and
                 logical vtpms on the system. The vTPM Platform Configuration
                 Registers (PCRs) are normally all initialized to zero.

 * mini-os/tpmfront: Mini-os TPM frontend driver. The vTPM mini-os domain
                     vtpm-stubdom uses this driver to communicate with
                     vtpmmgr-stubdom. This driver is also used in mini-os
                     domains such as pv-grub that talk to the vTPM domain.

 * vtpmmgr-stubdom: A mini-os domain that implements the vTPM manager. There is
                    only one vTPM manager and it should be running during the
                    entire lifetime of the machine. This domain regulates
                    access to the physical TPM on the system and secures the
                    persistent state of each vTPM.

 * mini-os/tpm_tis: Mini-os TPM version 1.2 TPM Interface Specification (TIS)
                    driver. This driver is used by vtpmmgr-stubdom to talk
                    directly to the hardware TPM. Communication is facilitated
                    by mapping hardware memory pages into vtpmmgr-stubdom.

 * Hardware TPM: The physical TPM that is soldered onto the motherboard.


INTEGRATION WITH XEN
--------------------

Support for the vTPM driver was added in Xen using the libxl toolstack in Xen
4.3. See the Xen documentation (docs/misc/vtpm.txt) for details on setting up
the vTPM and vTPM Manager stub domains. Once the stub domains are running, a
vTPM device is set up in the same manner as a disk or network device in the
domain's configuration file.

In order to use features such as IMA that require a TPM to be loaded prior to
the initrd, the xen-tpmfront driver must be compiled in to the kernel. If not
using such features, the driver can be compiled as a module and will be loaded
as usual.
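As an illustration of the last point above, a guest configuration stanza might look roughly like the following. This is a sketch only: the domain name, disk path, and vTPM backend name are placeholder values, not anything mandated by the driver; see docs/misc/vtpm.txt and the xl.cfg documentation in the Xen tree for the authoritative syntax.

```
# Hypothetical xl guest config: the vtpm line attaches a vTPM device,
# naming the vtpm-stubdom instance that backs it (here "domu-vtpm").
name   = "domu-with-vtpm"
kernel = "/boot/vmlinuz"
memory = 512
disk   = ["file:/images/domu.img,xvda,w"]
vtpm   = ["backend=domu-vtpm"]
```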
+8 -8
MAINTAINERS
···
 XEN HYPERVISOR INTERFACE
 M:	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-M:	Jeremy Fitzhardinge <jeremy@goop.org>
-L:	xen-devel@lists.xensource.com (moderated for non-subscribers)
-L:	virtualization@lists.linux-foundation.org
+M:	Boris Ostrovsky <boris.ostrovsky@oracle.com>
+M:	David Vrabel <david.vrabel@citrix.com>
+L:	xen-devel@lists.xenproject.org (moderated for non-subscribers)
 S:	Supported
 F:	arch/x86/xen/
 F:	drivers/*/xen-*front.c
···
 XEN HYPERVISOR ARM
 M:	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-L:	xen-devel@lists.xensource.com (moderated for non-subscribers)
+L:	xen-devel@lists.xenproject.org (moderated for non-subscribers)
 S:	Supported
 F:	arch/arm/xen/
 F:	arch/arm/include/asm/xen/

 XEN HYPERVISOR ARM64
 M:	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-L:	xen-devel@lists.xensource.com (moderated for non-subscribers)
+L:	xen-devel@lists.xenproject.org (moderated for non-subscribers)
 S:	Supported
 F:	arch/arm64/xen/
 F:	arch/arm64/include/asm/xen/

 XEN NETWORK BACKEND DRIVER
 M:	Ian Campbell <ian.campbell@citrix.com>
-L:	xen-devel@lists.xensource.com (moderated for non-subscribers)
+L:	xen-devel@lists.xenproject.org (moderated for non-subscribers)
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/xen-netback/*

 XEN PCI SUBSYSTEM
 M:	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-L:	xen-devel@lists.xensource.com (moderated for non-subscribers)
+L:	xen-devel@lists.xenproject.org (moderated for non-subscribers)
 S:	Supported
 F:	arch/x86/pci/*xen*
 F:	drivers/pci/*xen*

 XEN SWIOTLB SUBSYSTEM
 M:	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-L:	xen-devel@lists.xensource.com (moderated for non-subscribers)
+L:	xen-devel@lists.xenproject.org (moderated for non-subscribers)
 S:	Supported
 F:	arch/x86/xen/*swiotlb*
 F:	drivers/xen/*swiotlb*
+1
arch/x86/include/asm/xen/events.h
···
 	XEN_CALL_FUNCTION_SINGLE_VECTOR,
 	XEN_SPIN_UNLOCK_VECTOR,
 	XEN_IRQ_WORK_VECTOR,
+	XEN_NMI_VECTOR,
 
 	XEN_NR_IPIS,
 };
+10 -5
arch/x86/xen/enlighten.c
···
 	if (!xen_initial_domain())
 		cpuid_leaf1_edx_mask &=
-			~((1 << X86_FEATURE_APIC) |  /* disable local APIC */
-			  (1 << X86_FEATURE_ACPI));  /* disable ACPI */
+			~((1 << X86_FEATURE_ACPI));  /* disable ACPI */
 
 	cpuid_leaf1_ecx_mask &= ~(1 << (X86_FEATURE_X2APIC % 32));
···
 		addr = (unsigned long)xen_int3;
 	else if (addr == (unsigned long)stack_segment)
 		addr = (unsigned long)xen_stack_segment;
-	else if (addr == (unsigned long)double_fault ||
-		 addr == (unsigned long)nmi) {
+	else if (addr == (unsigned long)double_fault) {
 		/* Don't need to handle these */
 		return 0;
 #ifdef CONFIG_X86_MCE
···
 		 */
 		;
 #endif
-	} else {
+	} else if (addr == (unsigned long)nmi)
+		/*
+		 * Use the native version as well.
+		 */
+		;
+	else {
 		/* Some other trap using IST? */
 		if (WARN_ON(val->ist != 0))
 			return 0;
···
 	init_hvm_pv_info();
 
 	xen_hvm_init_shared_info();
+
+	xen_panic_handler_init();
 
 	if (xen_feature(XENFEAT_hvm_callback_vector))
 		xen_have_vector_callback = 1;
+12 -13
arch/x86/xen/irq.c
···
 	/* convert from IF type flag */
 	flags = !(flags & X86_EFLAGS_IF);
 
-	/* There's a one instruction preempt window here.  We need to
-	   make sure we're don't switch CPUs between getting the vcpu
-	   pointer and updating the mask. */
+	/* See xen_irq_enable() for why preemption must be disabled. */
 	preempt_disable();
 	vcpu = this_cpu_read(xen_vcpu);
 	vcpu->evtchn_upcall_mask = flags;
-	preempt_enable_no_resched();
-
-	/* Doesn't matter if we get preempted here, because any
-	   pending event will get dealt with anyway. */
 
 	if (flags == 0) {
-		preempt_check_resched();
 		barrier(); /* unmask then check (avoid races) */
 		if (unlikely(vcpu->evtchn_upcall_pending))
 			xen_force_evtchn_callback();
-	}
+		preempt_enable();
+	} else
+		preempt_enable_no_resched();
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_restore_fl);
···
 {
 	struct vcpu_info *vcpu;
 
-	/* We don't need to worry about being preempted here, since
-	   either a) interrupts are disabled, so no preemption, or b)
-	   the caller is confused and is trying to re-enable interrupts
-	   on an indeterminate processor. */
+	/*
+	 * We may be preempted as soon as vcpu->evtchn_upcall_mask is
+	 * cleared, so disable preemption to ensure we check for
+	 * events on the VCPU we are still running on.
+	 */
+	preempt_disable();
 
 	vcpu = this_cpu_read(xen_vcpu);
 	vcpu->evtchn_upcall_mask = 0;
···
 	barrier(); /* unmask then check (avoid races) */
 	if (unlikely(vcpu->evtchn_upcall_pending))
 		xen_force_evtchn_callback();
+
+	preempt_enable();
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_irq_enable);
+15 -7
arch/x86/xen/p2m.c
···
 #include <asm/xen/page.h>
 #include <asm/xen/hypercall.h>
 #include <asm/xen/hypervisor.h>
+#include <xen/balloon.h>
 #include <xen/grant_table.h>
 
 #include "multicalls.h"
···
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
-			struct gnttab_unmap_grant_ref *unmap_op;
+			struct gnttab_unmap_and_replace *unmap_op;
+			struct page *scratch_page = get_balloon_scratch_page();
+			unsigned long scratch_page_address = (unsigned long)
+				__va(page_to_pfn(scratch_page) << PAGE_SHIFT);
 
 			/*
 			 * It might be that we queued all the m2p grant table
···
 			}
 
 			mcs = xen_mc_entry(
-					sizeof(struct gnttab_unmap_grant_ref));
+					sizeof(struct gnttab_unmap_and_replace));
 			unmap_op = mcs.args;
 			unmap_op->host_addr = kmap_op->host_addr;
+			unmap_op->new_addr = scratch_page_address;
 			unmap_op->handle = kmap_op->handle;
-			unmap_op->dev_bus_addr = 0;
 
 			MULTI_grant_table_op(mcs.mc,
-					GNTTABOP_unmap_grant_ref, unmap_op, 1);
+					GNTTABOP_unmap_and_replace, unmap_op, 1);
 
 			xen_mc_issue(PARAVIRT_LAZY_MMU);
 
-			set_pte_at(&init_mm, address, ptep,
-					pfn_pte(pfn, PAGE_KERNEL));
-			__flush_tlb_single(address);
+			mcs = __xen_mc_entry(0);
+			MULTI_update_va_mapping(mcs.mc, scratch_page_address,
+					pfn_pte(page_to_pfn(get_balloon_scratch_page()),
+					PAGE_KERNEL_RO), 0);
+			xen_mc_issue(PARAVIRT_LAZY_MMU);
+
 			kmap_op->host_addr = 0;
+			put_balloon_scratch_page();
 		}
 	}
+22 -7
arch/x86/xen/setup.c
···
 /* These are code, but not functions.  Defined in entry.S */
 extern const char xen_hypervisor_callback[];
 extern const char xen_failsafe_callback[];
+#ifdef CONFIG_X86_64
+extern const char nmi[];
+#endif
 extern void xen_sysenter_target(void);
 extern void xen_syscall_target(void);
 extern void xen_syscall32_target(void);
···
 	unsigned long pfn;
 
 	/*
-	 * If the PFNs are currently mapped, the VA mapping also needs
-	 * to be updated to be 1:1.
+	 * If the PFNs are currently mapped, clear the mappings
+	 * (except for the ISA region which must be 1:1 mapped) to
+	 * release the refcounts (in Xen) on the original frames.
 	 */
-	for (pfn = start_pfn; pfn <= max_pfn_mapped && pfn < end_pfn; pfn++)
+	for (pfn = start_pfn; pfn <= max_pfn_mapped && pfn < end_pfn; pfn++) {
+		pte_t pte = __pte_ma(0);
+
+		if (pfn < PFN_UP(ISA_END_ADDRESS))
+			pte = mfn_pte(pfn, PAGE_KERNEL_IO);
+
 		(void)HYPERVISOR_update_va_mapping(
-			(unsigned long)__va(pfn << PAGE_SHIFT),
-			mfn_pte(pfn, PAGE_KERNEL_IO), 0);
+			(unsigned long)__va(pfn << PAGE_SHIFT), pte, 0);
+	}
 
 	if (start_pfn < nr_pages)
 		*released += xen_release_chunk(
···
 	}
 #endif /* CONFIG_X86_64 */
 }
-
+void __cpuinit xen_enable_nmi(void)
+{
+#ifdef CONFIG_X86_64
+	if (register_callback(CALLBACKTYPE_nmi, nmi))
+		BUG();
+#endif
+}
 void __init xen_arch_setup(void)
 {
 	xen_panic_handler_init();
···
 	xen_enable_sysenter();
 	xen_enable_syscall();
-
+	xen_enable_nmi();
 #ifdef CONFIG_ACPI
 	if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
 		printk(KERN_INFO "ACPI in unprivileged domain disabled\n");
+6
arch/x86/xen/smp.c
···
 	case IRQ_WORK_VECTOR:
 		xen_vector = XEN_IRQ_WORK_VECTOR;
 		break;
+#ifdef CONFIG_X86_64
+	case NMI_VECTOR:
+	case APIC_DM_NMI:  /* Some use that instead of NMI_VECTOR */
+		xen_vector = XEN_NMI_VECTOR;
+		break;
+#endif
 	default:
 		xen_vector = -1;
 		printk(KERN_ERR "xen: vector 0x%x is not implemented\n",
+12
drivers/char/tpm/Kconfig
···
 	  To compile this driver as a module, choose M here; the module will be
 	  called tpm_stm_st33_i2c.
 
+config TCG_XEN
+	tristate "XEN TPM Interface"
+	depends on TCG_TPM && XEN
+	select XEN_XENBUS_FRONTEND
+	---help---
+	  If you want to make TPM support available to a Xen user domain,
+	  say Yes and it will be accessible from within Linux. See
+	  the manpages for xl, xl.conf, and docs/misc/vtpm.txt in
+	  the Xen source repository for more details.
+	  To compile this driver as a module, choose M here; the module
+	  will be called xen-tpmfront.
+
 endif # TCG_TPM
+1
drivers/char/tpm/Makefile
···
 obj-$(CONFIG_TCG_INFINEON) += tpm_infineon.o
 obj-$(CONFIG_TCG_IBMVTPM) += tpm_ibmvtpm.o
 obj-$(CONFIG_TCG_ST33_I2C) += tpm_i2c_stm_st33.o
+obj-$(CONFIG_TCG_XEN) += xen-tpmfront.o
+473
drivers/char/tpm/xen-tpmfront.c
/*
 * Implementation of the Xen vTPM device frontend
 *
 * Author:  Daniel De Graaf <dgdegra@tycho.nsa.gov>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2,
 * as published by the Free Software Foundation.
 */
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <xen/events.h>
#include <xen/interface/io/tpmif.h>
#include <xen/grant_table.h>
#include <xen/xenbus.h>
#include <xen/page.h>
#include "tpm.h"

struct tpm_private {
	struct tpm_chip *chip;
	struct xenbus_device *dev;

	struct vtpm_shared_page *shr;

	unsigned int evtchn;
	int ring_ref;
	domid_t backend_id;
};

enum status_bits {
	VTPM_STATUS_RUNNING  = 0x1,
	VTPM_STATUS_IDLE     = 0x2,
	VTPM_STATUS_RESULT   = 0x4,
	VTPM_STATUS_CANCELED = 0x8,
};

static u8 vtpm_status(struct tpm_chip *chip)
{
	struct tpm_private *priv = TPM_VPRIV(chip);
	switch (priv->shr->state) {
	case VTPM_STATE_IDLE:
		return VTPM_STATUS_IDLE | VTPM_STATUS_CANCELED;
	case VTPM_STATE_FINISH:
		return VTPM_STATUS_IDLE | VTPM_STATUS_RESULT;
	case VTPM_STATE_SUBMIT:
	case VTPM_STATE_CANCEL: /* cancel requested, not yet canceled */
		return VTPM_STATUS_RUNNING;
	default:
		return 0;
	}
}

static bool vtpm_req_canceled(struct tpm_chip *chip, u8 status)
{
	return status & VTPM_STATUS_CANCELED;
}

static void vtpm_cancel(struct tpm_chip *chip)
{
	struct tpm_private *priv = TPM_VPRIV(chip);
	priv->shr->state = VTPM_STATE_CANCEL;
	wmb();
	notify_remote_via_evtchn(priv->evtchn);
}

static unsigned int shr_data_offset(struct vtpm_shared_page *shr)
{
	return sizeof(*shr) + sizeof(u32) * shr->nr_extra_pages;
}

static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
{
	struct tpm_private *priv = TPM_VPRIV(chip);
	struct vtpm_shared_page *shr = priv->shr;
	unsigned int offset = shr_data_offset(shr);

	u32 ordinal;
	unsigned long duration;

	if (offset > PAGE_SIZE)
		return -EINVAL;

	if (offset + count > PAGE_SIZE)
		return -EINVAL;

	/* Wait for completion of any existing command or cancellation */
	if (wait_for_tpm_stat(chip, VTPM_STATUS_IDLE, chip->vendor.timeout_c,
			&chip->vendor.read_queue, true) < 0) {
		vtpm_cancel(chip);
		return -ETIME;
	}

	memcpy(offset + (u8 *)shr, buf, count);
	shr->length = count;
	barrier();
	shr->state = VTPM_STATE_SUBMIT;
	wmb();
	notify_remote_via_evtchn(priv->evtchn);

	ordinal = be32_to_cpu(((struct tpm_input_header*)buf)->ordinal);
	duration = tpm_calc_ordinal_duration(chip, ordinal);

	if (wait_for_tpm_stat(chip, VTPM_STATUS_IDLE, duration,
			&chip->vendor.read_queue, true) < 0) {
		/* got a signal or timeout, try to cancel */
		vtpm_cancel(chip);
		return -ETIME;
	}

	return count;
}

static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
	struct tpm_private *priv = TPM_VPRIV(chip);
	struct vtpm_shared_page *shr = priv->shr;
	unsigned int offset = shr_data_offset(shr);
	size_t length = shr->length;

	if (shr->state == VTPM_STATE_IDLE)
		return -ECANCELED;

	/* In theory the wait at the end of _send makes this one unnecessary */
	if (wait_for_tpm_stat(chip, VTPM_STATUS_RESULT, chip->vendor.timeout_c,
			&chip->vendor.read_queue, true) < 0) {
		vtpm_cancel(chip);
		return -ETIME;
	}

	if (offset > PAGE_SIZE)
		return -EIO;

	if (offset + length > PAGE_SIZE)
		length = PAGE_SIZE - offset;

	if (length > count)
		length = count;

	memcpy(buf, offset + (u8 *)shr, length);

	return length;
}

ssize_t tpm_show_locality(struct device *dev, struct device_attribute *attr,
			  char *buf)
{
	struct tpm_chip *chip = dev_get_drvdata(dev);
	struct tpm_private *priv = TPM_VPRIV(chip);
	u8 locality = priv->shr->locality;

	return sprintf(buf, "%d\n", locality);
}

ssize_t tpm_store_locality(struct device *dev, struct device_attribute *attr,
			const char *buf, size_t len)
{
	struct tpm_chip *chip = dev_get_drvdata(dev);
	struct tpm_private *priv = TPM_VPRIV(chip);
	u8 val;

	int rv = kstrtou8(buf, 0, &val);
	if (rv)
		return rv;

	priv->shr->locality = val;

	return len;
}

static const struct file_operations vtpm_ops = {
	.owner = THIS_MODULE,
	.llseek = no_llseek,
	.open = tpm_open,
	.read = tpm_read,
	.write = tpm_write,
	.release = tpm_release,
};

static DEVICE_ATTR(pubek, S_IRUGO, tpm_show_pubek, NULL);
static DEVICE_ATTR(pcrs, S_IRUGO, tpm_show_pcrs, NULL);
static DEVICE_ATTR(enabled, S_IRUGO, tpm_show_enabled, NULL);
static DEVICE_ATTR(active, S_IRUGO, tpm_show_active, NULL);
static DEVICE_ATTR(owned, S_IRUGO, tpm_show_owned, NULL);
static DEVICE_ATTR(temp_deactivated, S_IRUGO, tpm_show_temp_deactivated,
		NULL);
static DEVICE_ATTR(caps, S_IRUGO, tpm_show_caps, NULL);
static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel);
static DEVICE_ATTR(durations, S_IRUGO, tpm_show_durations, NULL);
static DEVICE_ATTR(timeouts, S_IRUGO, tpm_show_timeouts, NULL);
static DEVICE_ATTR(locality, S_IRUGO | S_IWUSR, tpm_show_locality,
		tpm_store_locality);

static struct attribute *vtpm_attrs[] = {
	&dev_attr_pubek.attr,
	&dev_attr_pcrs.attr,
	&dev_attr_enabled.attr,
	&dev_attr_active.attr,
	&dev_attr_owned.attr,
	&dev_attr_temp_deactivated.attr,
	&dev_attr_caps.attr,
	&dev_attr_cancel.attr,
	&dev_attr_durations.attr,
	&dev_attr_timeouts.attr,
	&dev_attr_locality.attr,
	NULL,
};

static struct attribute_group vtpm_attr_grp = {
	.attrs = vtpm_attrs,
};

#define TPM_LONG_TIMEOUT (10 * 60 * HZ)

static const struct tpm_vendor_specific tpm_vtpm = {
	.status = vtpm_status,
	.recv = vtpm_recv,
	.send = vtpm_send,
	.cancel = vtpm_cancel,
	.req_complete_mask = VTPM_STATUS_IDLE | VTPM_STATUS_RESULT,
	.req_complete_val = VTPM_STATUS_IDLE | VTPM_STATUS_RESULT,
	.req_canceled = vtpm_req_canceled,
	.attr_group = &vtpm_attr_grp,
	.miscdev = {
		.fops = &vtpm_ops,
	},
	.duration = {
		TPM_LONG_TIMEOUT,
		TPM_LONG_TIMEOUT,
		TPM_LONG_TIMEOUT,
	},
};

static irqreturn_t tpmif_interrupt(int dummy, void *dev_id)
{
	struct tpm_private *priv = dev_id;

	switch (priv->shr->state) {
	case VTPM_STATE_IDLE:
	case VTPM_STATE_FINISH:
		wake_up_interruptible(&priv->chip->vendor.read_queue);
		break;
	case VTPM_STATE_SUBMIT:
	case VTPM_STATE_CANCEL:
	default:
		break;
	}
	return IRQ_HANDLED;
}

static int setup_chip(struct device *dev, struct tpm_private *priv)
{
	struct tpm_chip *chip;

	chip = tpm_register_hardware(dev, &tpm_vtpm);
	if (!chip)
		return -ENODEV;

	init_waitqueue_head(&chip->vendor.read_queue);

	priv->chip = chip;
	TPM_VPRIV(chip) = priv;

	return 0;
}

/* caller must clean up in case of errors */
static int setup_ring(struct xenbus_device *dev, struct tpm_private *priv)
{
	struct xenbus_transaction xbt;
	const char *message = NULL;
	int rv;

	priv->shr = (void *)__get_free_page(GFP_KERNEL|__GFP_ZERO);
	if (!priv->shr) {
		xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
		return -ENOMEM;
	}

	rv = xenbus_grant_ring(dev, virt_to_mfn(priv->shr));
	if (rv < 0)
		return rv;

	priv->ring_ref = rv;

	rv = xenbus_alloc_evtchn(dev, &priv->evtchn);
	if (rv)
		return rv;

	rv = bind_evtchn_to_irqhandler(priv->evtchn, tpmif_interrupt, 0,
				       "tpmif", priv);
	if (rv <= 0) {
		xenbus_dev_fatal(dev, rv, "allocating TPM irq");
		return rv;
	}
	priv->chip->vendor.irq = rv;

again:
	rv = xenbus_transaction_start(&xbt);
	if (rv) {
		xenbus_dev_fatal(dev, rv, "starting transaction");
		return rv;
	}

	rv = xenbus_printf(xbt, dev->nodename,
			"ring-ref", "%u", priv->ring_ref);
	if (rv) {
		message = "writing ring-ref";
		goto abort_transaction;
	}

	rv = xenbus_printf(xbt, dev->nodename, "event-channel", "%u",
			priv->evtchn);
	if (rv) {
		message = "writing event-channel";
		goto abort_transaction;
	}

	rv = xenbus_printf(xbt, dev->nodename, "feature-protocol-v2", "1");
	if (rv) {
		message = "writing feature-protocol-v2";
		goto abort_transaction;
	}

	rv = xenbus_transaction_end(xbt, 0);
	if (rv == -EAGAIN)
		goto again;
	if (rv) {
		xenbus_dev_fatal(dev, rv, "completing transaction");
		return rv;
	}

	xenbus_switch_state(dev, XenbusStateInitialised);

	return 0;

abort_transaction:
	xenbus_transaction_end(xbt, 1);
	if (message)
		xenbus_dev_error(dev, rv, "%s", message);

	return rv;
}

static void ring_free(struct tpm_private *priv)
{
	if (!priv)
		return;

	if (priv->ring_ref)
		gnttab_end_foreign_access(priv->ring_ref, 0,
				(unsigned long)priv->shr);
	else
		free_page((unsigned long)priv->shr);

	if (priv->chip && priv->chip->vendor.irq)
		unbind_from_irqhandler(priv->chip->vendor.irq, priv);

	kfree(priv);
}

static int tpmfront_probe(struct xenbus_device *dev,
		const struct xenbus_device_id *id)
{
	struct tpm_private *priv;
	int rv;

	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
	if (!priv) {
		xenbus_dev_fatal(dev, -ENOMEM, "allocating priv structure");
		return -ENOMEM;
	}

	rv = setup_chip(&dev->dev, priv);
	if (rv) {
		kfree(priv);
		return rv;
	}

	rv = setup_ring(dev, priv);
	if (rv) {
		tpm_remove_hardware(&dev->dev);
		ring_free(priv);
		return rv;
	}

	tpm_get_timeouts(priv->chip);

	dev_set_drvdata(&dev->dev, priv->chip);

	return rv;
}

static int tpmfront_remove(struct xenbus_device *dev)
{
	struct tpm_chip *chip = dev_get_drvdata(&dev->dev);
	struct tpm_private *priv = TPM_VPRIV(chip);
	tpm_remove_hardware(&dev->dev);
	ring_free(priv);
	TPM_VPRIV(chip) = NULL;
	return 0;
}

static int tpmfront_resume(struct xenbus_device *dev)
{
	/* A suspend/resume/migrate will interrupt a vTPM anyway */
	tpmfront_remove(dev);
	return tpmfront_probe(dev, NULL);
}

static void backend_changed(struct xenbus_device *dev,
		enum xenbus_state backend_state)
{
	int val;

	switch (backend_state) {
	case XenbusStateInitialised:
	case XenbusStateConnected:
		if (dev->state == XenbusStateConnected)
			break;

		if (xenbus_scanf(XBT_NIL, dev->otherend,
				"feature-protocol-v2", "%d", &val) < 0)
			val = 0;
		if (!val) {
			xenbus_dev_fatal(dev, -EINVAL,
					"vTPM protocol 2 required");
			return;
		}
		xenbus_switch_state(dev, XenbusStateConnected);
		break;

	case XenbusStateClosing:
	case XenbusStateClosed:
		device_unregister(&dev->dev);
		xenbus_frontend_closed(dev);
		break;
	default:
		break;
	}
}

static const struct xenbus_device_id tpmfront_ids[] = {
	{ "vtpm" },
	{ "" }
};
MODULE_ALIAS("xen:vtpm");

static DEFINE_XENBUS_DRIVER(tpmfront, ,
		.probe = tpmfront_probe,
		.remove = tpmfront_remove,
		.resume = tpmfront_resume,
		.otherend_changed = backend_changed,
	);

static int __init xen_tpmfront_init(void)
{
	if (!xen_domain())
		return -ENODEV;

	return xenbus_register_frontend(&tpmfront_driver);
}
module_init(xen_tpmfront_init);

static void __exit xen_tpmfront_exit(void)
{
	xenbus_unregister_driver(&tpmfront_driver);
}
module_exit(xen_tpmfront_exit);

MODULE_AUTHOR("Daniel De Graaf <dgdegra@tycho.nsa.gov>");
MODULE_DESCRIPTION("Xen vTPM Driver");
MODULE_LICENSE("GPL");
+71 -3
drivers/xen/balloon.c
··· 38 38 39 39 #define pr_fmt(fmt) "xen:" KBUILD_MODNAME ": " fmt 40 40 41 + #include <linux/cpu.h> 41 42 #include <linux/kernel.h> 42 43 #include <linux/sched.h> 43 44 #include <linux/errno.h> ··· 53 52 #include <linux/notifier.h> 54 53 #include <linux/memory.h> 55 54 #include <linux/memory_hotplug.h> 55 + #include <linux/percpu-defs.h> 56 56 57 57 #include <asm/page.h> 58 58 #include <asm/pgalloc.h> ··· 92 90 93 91 /* We increase/decrease in batches which fit in a page */ 94 92 static xen_pfn_t frame_list[PAGE_SIZE / sizeof(unsigned long)]; 93 + static DEFINE_PER_CPU(struct page *, balloon_scratch_page); 94 + 95 95 96 96 /* List of ballooned pages, threaded through the mem_map array. */ 97 97 static LIST_HEAD(ballooned_pages); ··· 416 412 if (xen_pv_domain() && !PageHighMem(page)) { 417 413 ret = HYPERVISOR_update_va_mapping( 418 414 (unsigned long)__va(pfn << PAGE_SHIFT), 419 - __pte_ma(0), 0); 415 + pfn_pte(page_to_pfn(__get_cpu_var(balloon_scratch_page)), 416 + PAGE_KERNEL_RO), 0); 420 417 BUG_ON(ret); 421 418 } 422 419 #endif ··· 430 425 /* No more mappings: invalidate P2M and add to balloon. 
*/ 431 426 for (i = 0; i < nr_pages; i++) { 432 427 pfn = mfn_to_pfn(frame_list[i]); 433 - __set_phys_to_machine(pfn, INVALID_P2M_ENTRY); 428 + if (!xen_feature(XENFEAT_auto_translated_physmap)) { 429 + unsigned long p; 430 + struct page *pg; 431 + pg = __get_cpu_var(balloon_scratch_page); 432 + p = page_to_pfn(pg); 433 + __set_phys_to_machine(pfn, pfn_to_mfn(p)); 434 + } 434 435 balloon_append(pfn_to_page(pfn)); 435 436 } 436 437 ··· 489 478 schedule_delayed_work(&balloon_worker, balloon_stats.schedule_delay * HZ); 490 479 491 480 mutex_unlock(&balloon_mutex); 481 + } 482 + 483 + struct page *get_balloon_scratch_page(void) 484 + { 485 + struct page *ret = get_cpu_var(balloon_scratch_page); 486 + BUG_ON(ret == NULL); 487 + return ret; 488 + } 489 + 490 + void put_balloon_scratch_page(void) 491 + { 492 + put_cpu_var(balloon_scratch_page); 492 493 } 493 494 494 495 /* Resets the Xen limit, sets new target, and kicks off processing. */ ··· 596 573 } 597 574 } 598 575 576 + static int __cpuinit balloon_cpu_notify(struct notifier_block *self, 577 + unsigned long action, void *hcpu) 578 + { 579 + int cpu = (long)hcpu; 580 + switch (action) { 581 + case CPU_UP_PREPARE: 582 + if (per_cpu(balloon_scratch_page, cpu) != NULL) 583 + break; 584 + per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL); 585 + if (per_cpu(balloon_scratch_page, cpu) == NULL) { 586 + pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu); 587 + return NOTIFY_BAD; 588 + } 589 + break; 590 + default: 591 + break; 592 + } 593 + return NOTIFY_OK; 594 + } 595 + 596 + static struct notifier_block balloon_cpu_notifier __cpuinitdata = { 597 + .notifier_call = balloon_cpu_notify, 598 + }; 599 + 599 600 static int __init balloon_init(void) 600 601 { 601 - int i; 602 + int i, cpu; 602 603 603 604 if (!xen_domain()) 604 605 return -ENODEV; 606 + 607 + for_each_online_cpu(cpu) 608 + { 609 + per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL); 610 + if (per_cpu(balloon_scratch_page, 
cpu) == NULL) { 611 + pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu); 612 + return -ENOMEM; 613 + } 614 + } 615 + register_cpu_notifier(&balloon_cpu_notifier); 605 616 606 617 pr_info("Initialising balloon driver\n"); 607 618 ··· 672 615 } 673 616 674 617 subsys_initcall(balloon_init); 618 + 619 + static int __init balloon_clear(void) 620 + { 621 + int cpu; 622 + 623 + for_each_possible_cpu(cpu) 624 + per_cpu(balloon_scratch_page, cpu) = NULL; 625 + 626 + return 0; 627 + } 628 + early_initcall(balloon_clear); 675 629 676 630 MODULE_LICENSE("GPL");
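The balloon.c changes above introduce a per-CPU scratch page: ballooned-out PFNs are remapped to it instead of being left invalid, the pages are allocated for all online CPUs in `balloon_init()`, and a `CPU_UP_PREPARE` notifier allocates one for each CPU that comes online later. A minimal user-space sketch of that allocation pattern follows; the names (`scratch_init`, `scratch_cpu_up`, `get_scratch`) are hypothetical, standing in for the kernel's `DEFINE_PER_CPU`, `alloc_page()`, and notifier machinery.

```c
#include <assert.h>
#include <stdlib.h>

#define NR_CPUS 4

/* Hypothetical user-space analogue of the kernel's per-CPU
 * balloon_scratch_page: one scratch buffer per CPU. */
static void *scratch_page[NR_CPUS];

/* Mirrors balloon_init(): allocate for every currently-online CPU. */
static int scratch_init(int nr_online)
{
	for (int cpu = 0; cpu < nr_online; cpu++) {
		scratch_page[cpu] = calloc(1, 4096);
		if (!scratch_page[cpu])
			return -1;	/* -ENOMEM in the kernel version */
	}
	return 0;
}

/* Mirrors the CPU_UP_PREPARE notifier case: allocate only if absent,
 * so a CPU that goes down and comes back up keeps its page. */
static int scratch_cpu_up(int cpu)
{
	if (scratch_page[cpu])
		return 0;
	scratch_page[cpu] = calloc(1, 4096);
	return scratch_page[cpu] ? 0 : -1;
}

/* Mirrors get_balloon_scratch_page(): the page must already exist. */
static void *get_scratch(int cpu)
{
	assert(scratch_page[cpu] != NULL);	/* BUG_ON() in the kernel */
	return scratch_page[cpu];
}
```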
+24 -6
drivers/xen/events.c
··· 56 56 #include <xen/interface/hvm/params.h> 57 57 #include <xen/interface/physdev.h> 58 58 #include <xen/interface/sched.h> 59 + #include <xen/interface/vcpu.h> 59 60 #include <asm/hw_irq.h> 60 61 61 62 /* ··· 1213 1212 1214 1213 void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector) 1215 1214 { 1216 - int irq = per_cpu(ipi_to_irq, cpu)[vector]; 1215 + int irq; 1216 + 1217 + #ifdef CONFIG_X86 1218 + if (unlikely(vector == XEN_NMI_VECTOR)) { 1219 + int rc = HYPERVISOR_vcpu_op(VCPUOP_send_nmi, cpu, NULL); 1220 + if (rc < 0) 1221 + printk(KERN_WARNING "Sending nmi to CPU%d failed (rc:%d)\n", cpu, rc); 1222 + return; 1223 + } 1224 + #endif 1225 + irq = per_cpu(ipi_to_irq, cpu)[vector]; 1217 1226 BUG_ON(irq < 0); 1218 1227 notify_remote_via_irq(irq); 1219 1228 } ··· 1390 1379 1391 1380 pending_bits = active_evtchns(cpu, s, word_idx); 1392 1381 bit_idx = 0; /* usually scan entire word from start */ 1382 + /* 1383 + * We scan the starting word in two parts. 1384 + * 1385 + * 1st time: start in the middle, scanning the 1386 + * upper bits. 1387 + * 1388 + * 2nd time: scan the whole word (not just the 1389 + * parts skipped in the first pass) -- if an 1390 + * event in the previously scanned bits is 1391 + * pending again it would just be scanned on 1392 + * the next loop anyway. 1393 + */ 1393 1394 if (word_idx == start_word_idx) { 1394 - /* We scan the starting word in two parts */ 1395 1395 if (i == 0) 1396 - /* 1st time: start in the middle */ 1397 1396 bit_idx = start_bit_idx; 1398 - else 1399 - /* 2nd time: mask bits done already */ 1400 - bit_idx &= (1UL << start_bit_idx) - 1; 1401 1397 } 1402 1398 1403 1399 do {
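The expanded comment in the events.c hunk describes scanning the starting word in two passes for fairness: first from the middle (the saved start bit upward), then the whole word. A simplified stand-alone model of that scan order is sketched below; `scan_word` is a hypothetical name, and unlike the kernel loop it works on a snapshot rather than live pending words.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the two-part fairness scan: pass 0 covers bits at or
 * above start_bit, pass 1 restarts from bit 0 and covers the rest.
 * Bits already consumed in pass 0 are cleared so pass 1 skips them.
 * Returns the number of set bits found; their order goes in out[]. */
static int scan_word(uint32_t pending, int start_bit, int *out)
{
	int n = 0;

	for (int pass = 0; pass < 2; pass++) {
		for (int bit = (pass == 0) ? start_bit : 0; bit < 32; bit++) {
			if (pending & (1u << bit)) {
				out[n++] = bit;
				pending &= ~(1u << bit);
			}
		}
	}
	return n;
}
```

With bits 1 and 6 pending and a start bit of 4, bit 6 is serviced before bit 1, even though bit 1 is lower: that is the fairness property the comment is documenting.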
+111 -80
drivers/xen/evtchn.c
··· 57 57 58 58 struct per_user_data { 59 59 struct mutex bind_mutex; /* serialize bind/unbind operations */ 60 + struct rb_root evtchns; 60 61 61 62 /* Notification ring, accessed via /dev/xen/evtchn. */ 62 63 #define EVTCHN_RING_SIZE (PAGE_SIZE / sizeof(evtchn_port_t)) ··· 65 64 evtchn_port_t *ring; 66 65 unsigned int ring_cons, ring_prod, ring_overflow; 67 66 struct mutex ring_cons_mutex; /* protect against concurrent readers */ 67 + spinlock_t ring_prod_lock; /* product against concurrent interrupts */ 68 68 69 69 /* Processes wait on this queue when ring is empty. */ 70 70 wait_queue_head_t evtchn_wait; ··· 73 71 const char *name; 74 72 }; 75 73 76 - /* 77 - * Who's bound to each port? This is logically an array of struct 78 - * per_user_data *, but we encode the current enabled-state in bit 0. 79 - */ 80 - static unsigned long *port_user; 81 - static DEFINE_SPINLOCK(port_user_lock); /* protects port_user[] and ring_prod */ 74 + struct user_evtchn { 75 + struct rb_node node; 76 + struct per_user_data *user; 77 + unsigned port; 78 + bool enabled; 79 + }; 82 80 83 - static inline struct per_user_data *get_port_user(unsigned port) 81 + static int add_evtchn(struct per_user_data *u, struct user_evtchn *evtchn) 84 82 { 85 - return (struct per_user_data *)(port_user[port] & ~1); 83 + struct rb_node **new = &(u->evtchns.rb_node), *parent = NULL; 84 + 85 + while (*new) { 86 + struct user_evtchn *this; 87 + 88 + this = container_of(*new, struct user_evtchn, node); 89 + 90 + parent = *new; 91 + if (this->port < evtchn->port) 92 + new = &((*new)->rb_left); 93 + else if (this->port > evtchn->port) 94 + new = &((*new)->rb_right); 95 + else 96 + return -EEXIST; 97 + } 98 + 99 + /* Add new node and rebalance tree. 
*/ 100 + rb_link_node(&evtchn->node, parent, new); 101 + rb_insert_color(&evtchn->node, &u->evtchns); 102 + 103 + return 0; 86 104 } 87 105 88 - static inline void set_port_user(unsigned port, struct per_user_data *u) 106 + static void del_evtchn(struct per_user_data *u, struct user_evtchn *evtchn) 89 107 { 90 - port_user[port] = (unsigned long)u; 108 + rb_erase(&evtchn->node, &u->evtchns); 109 + kfree(evtchn); 91 110 } 92 111 93 - static inline bool get_port_enabled(unsigned port) 112 + static struct user_evtchn *find_evtchn(struct per_user_data *u, unsigned port) 94 113 { 95 - return port_user[port] & 1; 96 - } 114 + struct rb_node *node = u->evtchns.rb_node; 97 115 98 - static inline void set_port_enabled(unsigned port, bool enabled) 99 - { 100 - if (enabled) 101 - port_user[port] |= 1; 102 - else 103 - port_user[port] &= ~1; 116 + while (node) { 117 + struct user_evtchn *evtchn; 118 + 119 + evtchn = container_of(node, struct user_evtchn, node); 120 + 121 + if (evtchn->port < port) 122 + node = node->rb_left; 123 + else if (evtchn->port > port) 124 + node = node->rb_right; 125 + else 126 + return evtchn; 127 + } 128 + return NULL; 104 129 } 105 130 106 131 static irqreturn_t evtchn_interrupt(int irq, void *data) 107 132 { 108 - unsigned int port = (unsigned long)data; 109 - struct per_user_data *u; 133 + struct user_evtchn *evtchn = data; 134 + struct per_user_data *u = evtchn->user; 110 135 111 - spin_lock(&port_user_lock); 112 - 113 - u = get_port_user(port); 114 - 115 - WARN(!get_port_enabled(port), 136 + WARN(!evtchn->enabled, 116 137 "Interrupt for port %d, but apparently not enabled; per-user %p\n", 117 - port, u); 138 + evtchn->port, u); 118 139 119 140 disable_irq_nosync(irq); 120 - set_port_enabled(port, false); 141 + evtchn->enabled = false; 142 + 143 + spin_lock(&u->ring_prod_lock); 121 144 122 145 if ((u->ring_prod - u->ring_cons) < EVTCHN_RING_SIZE) { 123 - u->ring[EVTCHN_RING_MASK(u->ring_prod)] = port; 146 + u->ring[EVTCHN_RING_MASK(u->ring_prod)] 
= evtchn->port; 124 147 wmb(); /* Ensure ring contents visible */ 125 148 if (u->ring_cons == u->ring_prod++) { 126 149 wake_up_interruptible(&u->evtchn_wait); ··· 155 128 } else 156 129 u->ring_overflow = 1; 157 130 158 - spin_unlock(&port_user_lock); 131 + spin_unlock(&u->ring_prod_lock); 159 132 160 133 return IRQ_HANDLED; 161 134 } ··· 256 229 if (copy_from_user(kbuf, buf, count) != 0) 257 230 goto out; 258 231 259 - spin_lock_irq(&port_user_lock); 232 + mutex_lock(&u->bind_mutex); 260 233 261 234 for (i = 0; i < (count/sizeof(evtchn_port_t)); i++) { 262 235 unsigned port = kbuf[i]; 236 + struct user_evtchn *evtchn; 263 237 264 - if (port < NR_EVENT_CHANNELS && 265 - get_port_user(port) == u && 266 - !get_port_enabled(port)) { 267 - set_port_enabled(port, true); 238 + evtchn = find_evtchn(u, port); 239 + if (evtchn && !evtchn->enabled) { 240 + evtchn->enabled = true; 268 241 enable_irq(irq_from_evtchn(port)); 269 242 } 270 243 } 271 244 272 - spin_unlock_irq(&port_user_lock); 245 + mutex_unlock(&u->bind_mutex); 273 246 274 247 rc = count; 275 248 ··· 280 253 281 254 static int evtchn_bind_to_user(struct per_user_data *u, int port) 282 255 { 256 + struct user_evtchn *evtchn; 257 + struct evtchn_close close; 283 258 int rc = 0; 284 259 285 260 /* ··· 292 263 * interrupt handler yet, and our caller has already 293 264 * serialized bind operations.) 
294 265 */ 295 - BUG_ON(get_port_user(port) != NULL); 296 - set_port_user(port, u); 297 - set_port_enabled(port, true); /* start enabled */ 266 + 267 + evtchn = kzalloc(sizeof(*evtchn), GFP_KERNEL); 268 + if (!evtchn) 269 + return -ENOMEM; 270 + 271 + evtchn->user = u; 272 + evtchn->port = port; 273 + evtchn->enabled = true; /* start enabled */ 274 + 275 + rc = add_evtchn(u, evtchn); 276 + if (rc < 0) 277 + goto err; 298 278 299 279 rc = bind_evtchn_to_irqhandler(port, evtchn_interrupt, IRQF_DISABLED, 300 - u->name, (void *)(unsigned long)port); 301 - if (rc >= 0) 302 - rc = evtchn_make_refcounted(port); 303 - else { 304 - /* bind failed, should close the port now */ 305 - struct evtchn_close close; 306 - close.port = port; 307 - if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0) 308 - BUG(); 309 - set_port_user(port, NULL); 310 - } 280 + u->name, evtchn); 281 + if (rc < 0) 282 + goto err; 311 283 284 + rc = evtchn_make_refcounted(port); 285 + return rc; 286 + 287 + err: 288 + /* bind failed, should close the port now */ 289 + close.port = port; 290 + if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0) 291 + BUG(); 292 + del_evtchn(u, evtchn); 312 293 return rc; 313 294 } 314 295 315 - static void evtchn_unbind_from_user(struct per_user_data *u, int port) 296 + static void evtchn_unbind_from_user(struct per_user_data *u, 297 + struct user_evtchn *evtchn) 316 298 { 317 - int irq = irq_from_evtchn(port); 299 + int irq = irq_from_evtchn(evtchn->port); 318 300 319 301 BUG_ON(irq < 0); 320 302 321 - unbind_from_irqhandler(irq, (void *)(unsigned long)port); 303 + unbind_from_irqhandler(irq, evtchn); 322 304 323 - set_port_user(port, NULL); 305 + del_evtchn(u, evtchn); 324 306 } 325 307 326 308 static long evtchn_ioctl(struct file *file, ··· 410 370 411 371 case IOCTL_EVTCHN_UNBIND: { 412 372 struct ioctl_evtchn_unbind unbind; 373 + struct user_evtchn *evtchn; 413 374 414 375 rc = -EFAULT; 415 376 if (copy_from_user(&unbind, uarg, sizeof(unbind))) ··· 
421 380 break; 422 381 423 382 rc = -ENOTCONN; 424 - if (get_port_user(unbind.port) != u) 383 + evtchn = find_evtchn(u, unbind.port); 384 + if (!evtchn) 425 385 break; 426 386 427 387 disable_irq(irq_from_evtchn(unbind.port)); 428 - 429 - evtchn_unbind_from_user(u, unbind.port); 430 - 388 + evtchn_unbind_from_user(u, evtchn); 431 389 rc = 0; 432 390 break; 433 391 } 434 392 435 393 case IOCTL_EVTCHN_NOTIFY: { 436 394 struct ioctl_evtchn_notify notify; 395 + struct user_evtchn *evtchn; 437 396 438 397 rc = -EFAULT; 439 398 if (copy_from_user(&notify, uarg, sizeof(notify))) 440 399 break; 441 400 442 - if (notify.port >= NR_EVENT_CHANNELS) { 443 - rc = -EINVAL; 444 - } else if (get_port_user(notify.port) != u) { 445 - rc = -ENOTCONN; 446 - } else { 401 + rc = -ENOTCONN; 402 + evtchn = find_evtchn(u, notify.port); 403 + if (evtchn) { 447 404 notify_remote_via_evtchn(notify.port); 448 405 rc = 0; 449 406 } ··· 451 412 case IOCTL_EVTCHN_RESET: { 452 413 /* Initialise the ring to empty. Clear errors. 
*/ 453 414 mutex_lock(&u->ring_cons_mutex); 454 - spin_lock_irq(&port_user_lock); 415 + spin_lock_irq(&u->ring_prod_lock); 455 416 u->ring_cons = u->ring_prod = u->ring_overflow = 0; 456 - spin_unlock_irq(&port_user_lock); 417 + spin_unlock_irq(&u->ring_prod_lock); 457 418 mutex_unlock(&u->ring_cons_mutex); 458 419 rc = 0; 459 420 break; ··· 512 473 513 474 mutex_init(&u->bind_mutex); 514 475 mutex_init(&u->ring_cons_mutex); 476 + spin_lock_init(&u->ring_prod_lock); 515 477 516 478 filp->private_data = u; 517 479 ··· 521 481 522 482 static int evtchn_release(struct inode *inode, struct file *filp) 523 483 { 524 - int i; 525 484 struct per_user_data *u = filp->private_data; 485 + struct rb_node *node; 526 486 527 - for (i = 0; i < NR_EVENT_CHANNELS; i++) { 528 - if (get_port_user(i) != u) 529 - continue; 487 + while ((node = u->evtchns.rb_node)) { 488 + struct user_evtchn *evtchn; 530 489 531 - disable_irq(irq_from_evtchn(i)); 532 - evtchn_unbind_from_user(get_port_user(i), i); 490 + evtchn = rb_entry(node, struct user_evtchn, node); 491 + disable_irq(irq_from_evtchn(evtchn->port)); 492 + evtchn_unbind_from_user(u, evtchn); 533 493 } 534 494 535 495 free_page((unsigned long)u->ring); ··· 563 523 if (!xen_domain()) 564 524 return -ENODEV; 565 525 566 - port_user = kcalloc(NR_EVENT_CHANNELS, sizeof(*port_user), GFP_KERNEL); 567 - if (port_user == NULL) 568 - return -ENOMEM; 569 - 570 - spin_lock_init(&port_user_lock); 571 - 572 526 /* Create '/dev/xen/evtchn'. */ 573 527 err = misc_register(&evtchn_miscdev); 574 528 if (err != 0) { ··· 577 543 578 544 static void __exit evtchn_cleanup(void) 579 545 { 580 - kfree(port_user); 581 - port_user = NULL; 582 - 583 546 misc_deregister(&evtchn_miscdev); 584 547 } 585 548
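The evtchn.c rework replaces the global `port_user[]` array (which was sized `NR_EVENT_CHANNELS` and encoded the enabled bit in bit 0 of the pointer) with a per-user tree of `struct user_evtchn` keyed by port. A user-space sketch of the same lookup structure follows, using a plain unbalanced binary search tree for brevity where the kernel uses `rb_link_node()`/`rb_insert_color()` on a `struct rb_root`.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct user_evtchn: one node
 * per bound port, owned by a single per_user_data instance. */
struct user_evtchn {
	struct user_evtchn *left, *right;
	unsigned port;
	int enabled;
};

/* Mirrors add_evtchn(): walk to the insertion point, refusing
 * duplicates (the kernel returns -EEXIST there). */
static int add_evtchn(struct user_evtchn **root, struct user_evtchn *e)
{
	while (*root) {
		if ((*root)->port == e->port)
			return -1;
		root = (e->port < (*root)->port) ? &(*root)->left
						 : &(*root)->right;
	}
	e->left = e->right = NULL;
	*root = e;
	return 0;
}

/* Mirrors find_evtchn(): standard keyed descent, NULL if unbound. */
static struct user_evtchn *find_evtchn(struct user_evtchn *node,
				       unsigned port)
{
	while (node && node->port != port)
		node = (port < node->port) ? node->left : node->right;
	return node;
}
```

The payoff visible in the diff: `evtchn_interrupt()` gets its `struct user_evtchn *` directly as the IRQ cookie, so the hot path no longer takes a global spinlock to translate port to user.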
+2 -9
drivers/xen/gntdev.c
··· 272 272 * with find_grant_ptes. 273 273 */ 274 274 for (i = 0; i < map->count; i++) { 275 - unsigned level; 276 275 unsigned long address = (unsigned long) 277 276 pfn_to_kaddr(page_to_pfn(map->pages[i])); 278 - pte_t *ptep; 279 - u64 pte_maddr = 0; 280 277 BUG_ON(PageHighMem(map->pages[i])); 281 278 282 - ptep = lookup_address(address, &level); 283 - pte_maddr = arbitrary_virt_to_machine(ptep).maddr; 284 - gnttab_set_map_op(&map->kmap_ops[i], pte_maddr, 285 - map->flags | 286 - GNTMAP_host_map | 287 - GNTMAP_contains_pte, 279 + gnttab_set_map_op(&map->kmap_ops[i], address, 280 + map->flags | GNTMAP_host_map, 288 281 map->grants[i].ref, 289 282 map->grants[i].domid); 290 283 }
+11 -2
drivers/xen/grant-table.c
··· 730 730 void (*fn)(void *), void *arg, u16 count) 731 731 { 732 732 unsigned long flags; 733 + struct gnttab_free_callback *cb; 734 + 733 735 spin_lock_irqsave(&gnttab_list_lock, flags); 734 - if (callback->next) 735 - goto out; 736 + 737 + /* Check if the callback is already on the list */ 738 + cb = gnttab_free_callback_list; 739 + while (cb) { 740 + if (cb == callback) 741 + goto out; 742 + cb = cb->next; 743 + } 744 + 736 745 callback->fn = fn; 737 746 callback->arg = arg; 738 747 callback->count = count;
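The grant-table hunk fixes duplicate callback registration: the old `if (callback->next) goto out;` check misses a callback already queued at the tail of the list, whose `next` is NULL, so re-adding it could corrupt the list. The fix walks the whole list first. A minimal sketch of that membership check (the `cb`/`request_callback` names are hypothetical; the kernel version also takes `gnttab_list_lock` and inserts sorted by `count`):

```c
#include <assert.h>
#include <stddef.h>

struct cb {
	struct cb *next;
};

static struct cb *cb_list;

/* Mirrors the fixed gnttab_request_free_callback(): scan the list and
 * bail out if this callback is already queued, even at the tail. */
static void request_callback(struct cb *callback)
{
	for (struct cb *c = cb_list; c; c = c->next)
		if (c == callback)
			return;		/* already queued: nothing to do */

	callback->next = cb_list;
	cb_list = callback;
}
```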
+63 -20
drivers/xen/privcmd.c
··· 43 43 44 44 #define PRIV_VMA_LOCKED ((void *)1) 45 45 46 - #ifndef HAVE_ARCH_PRIVCMD_MMAP 47 - static int privcmd_enforce_singleshot_mapping(struct vm_area_struct *vma); 48 - #endif 46 + static int privcmd_vma_range_is_mapped( 47 + struct vm_area_struct *vma, 48 + unsigned long addr, 49 + unsigned long nr_pages); 49 50 50 51 static long privcmd_ioctl_hypercall(void __user *udata) 51 52 { ··· 226 225 vma = find_vma(mm, msg->va); 227 226 rc = -EINVAL; 228 227 229 - if (!vma || (msg->va != vma->vm_start) || 230 - !privcmd_enforce_singleshot_mapping(vma)) 228 + if (!vma || (msg->va != vma->vm_start) || vma->vm_private_data) 231 229 goto out_up; 230 + vma->vm_private_data = PRIV_VMA_LOCKED; 232 231 } 233 232 234 233 state.va = vma->vm_start; ··· 359 358 kfree(pages); 360 359 return -ENOMEM; 361 360 } 362 - BUG_ON(vma->vm_private_data != PRIV_VMA_LOCKED); 361 + BUG_ON(vma->vm_private_data != NULL); 363 362 vma->vm_private_data = pages; 364 363 365 364 return 0; ··· 422 421 423 422 vma = find_vma(mm, m.addr); 424 423 if (!vma || 425 - vma->vm_ops != &privcmd_vm_ops || 426 - (m.addr != vma->vm_start) || 427 - ((m.addr + (nr_pages << PAGE_SHIFT)) != vma->vm_end) || 428 - !privcmd_enforce_singleshot_mapping(vma)) { 429 - up_write(&mm->mmap_sem); 424 + vma->vm_ops != &privcmd_vm_ops) { 430 425 ret = -EINVAL; 431 - goto out; 426 + goto out_unlock; 432 427 } 433 - if (xen_feature(XENFEAT_auto_translated_physmap)) { 434 - ret = alloc_empty_pages(vma, m.num); 435 - if (ret < 0) { 436 - up_write(&mm->mmap_sem); 437 - goto out; 428 + 429 + /* 430 + * Caller must either: 431 + * 432 + * Map the whole VMA range, which will also allocate all the 433 + * pages required for the auto_translated_physmap case. 434 + * 435 + * Or 436 + * 437 + * Map unmapped holes left from a previous map attempt (e.g., 438 + * because those foreign frames were previously paged out). 
439 + */ 440 + if (vma->vm_private_data == NULL) { 441 + if (m.addr != vma->vm_start || 442 + m.addr + (nr_pages << PAGE_SHIFT) != vma->vm_end) { 443 + ret = -EINVAL; 444 + goto out_unlock; 445 + } 446 + if (xen_feature(XENFEAT_auto_translated_physmap)) { 447 + ret = alloc_empty_pages(vma, m.num); 448 + if (ret < 0) 449 + goto out_unlock; 450 + } else 451 + vma->vm_private_data = PRIV_VMA_LOCKED; 452 + } else { 453 + if (m.addr < vma->vm_start || 454 + m.addr + (nr_pages << PAGE_SHIFT) > vma->vm_end) { 455 + ret = -EINVAL; 456 + goto out_unlock; 457 + } 458 + if (privcmd_vma_range_is_mapped(vma, m.addr, nr_pages)) { 459 + ret = -EINVAL; 460 + goto out_unlock; 438 461 } 439 462 } 440 463 ··· 491 466 492 467 out: 493 468 free_page_list(&pagelist); 494 - 495 469 return ret; 470 + 471 + out_unlock: 472 + up_write(&mm->mmap_sem); 473 + goto out; 496 474 } 497 475 498 476 static long privcmd_ioctl(struct file *file, ··· 568 540 return 0; 569 541 } 570 542 571 - static int privcmd_enforce_singleshot_mapping(struct vm_area_struct *vma) 543 + /* 544 + * For MMAPBATCH*. This allows asserting the singleshot mapping 545 + * on a per pfn/pte basis. Mapping calls that fail with ENOENT 546 + * can be then retried until success. 547 + */ 548 + static int is_mapped_fn(pte_t *pte, struct page *pmd_page, 549 + unsigned long addr, void *data) 572 550 { 573 - return !cmpxchg(&vma->vm_private_data, NULL, PRIV_VMA_LOCKED); 551 + return pte_none(*pte) ? 0 : -EBUSY; 552 + } 553 + 554 + static int privcmd_vma_range_is_mapped( 555 + struct vm_area_struct *vma, 556 + unsigned long addr, 557 + unsigned long nr_pages) 558 + { 559 + return apply_to_page_range(vma->vm_mm, addr, nr_pages << PAGE_SHIFT, 560 + is_mapped_fn, NULL) != 0; 574 561 } 575 562 576 563 const struct file_operations xen_privcmd_fops = {
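The privcmd change drops the one-shot `cmpxchg` guard in favour of a per-PTE check, so a retried `PRIVCMD_MMAPBATCH*` may fill holes left when foreign frames were paged out, but may not remap pages that are already present. The kernel implements this with `apply_to_page_range()` and an `is_mapped_fn` that returns `-EBUSY` for any non-`pte_none()` entry. A user-space sketch of the same predicate, with an array of integers playing the role of the PTEs (all names hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Analogue of privcmd_vma_range_is_mapped(): report whether any
 * "pte" in [start, start + nr) is already present, where 0 plays
 * the role of pte_none(). */
static int range_is_mapped(const uint64_t *ptes, int start, int nr)
{
	for (int i = start; i < start + nr; i++)
		if (ptes[i] != 0)
			return 1;	/* -EBUSY case in is_mapped_fn() */
	return 0;
}
```

A retry that targets only the still-empty slots passes the check; one that overlaps an existing mapping fails with `-EINVAL`, exactly as in the hunk above.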
+4 -4
drivers/xen/swiotlb-xen.c
··· 506 506 to do proper error handling. */ 507 507 xen_swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir, 508 508 attrs); 509 - sgl[0].dma_length = 0; 509 + sg_dma_len(sgl) = 0; 510 510 return DMA_ERROR_CODE; 511 511 } 512 512 sg->dma_address = xen_phys_to_bus(map); 513 513 } else 514 514 sg->dma_address = dev_addr; 515 - sg->dma_length = sg->length; 515 + sg_dma_len(sg) = sg->length; 516 516 } 517 517 return nelems; 518 518 } ··· 533 533 BUG_ON(dir == DMA_NONE); 534 534 535 535 for_each_sg(sgl, sg, nelems, i) 536 - xen_unmap_single(hwdev, sg->dma_address, sg->dma_length, dir); 536 + xen_unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir); 537 537 538 538 } 539 539 EXPORT_SYMBOL_GPL(xen_swiotlb_unmap_sg_attrs); ··· 555 555 556 556 for_each_sg(sgl, sg, nelems, i) 557 557 xen_swiotlb_sync_single(hwdev, sg->dma_address, 558 - sg->dma_length, dir, target); 558 + sg_dma_len(sg), dir, target); 559 559 } 560 560 561 561 void
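The swiotlb hunks (here and in lib/swiotlb.c below) switch from open-coding `sg->dma_length` to the `sg_dma_len()` accessor. The reason: `struct scatterlist` only carries a separate `dma_length` field on architectures that set `CONFIG_NEED_SG_DMA_LENGTH`; elsewhere the accessor falls back to the plain `length` field, so touching `dma_length` directly breaks the build there. A compilable sketch of that abstraction, with `NEED_SG_DMA_LENGTH` standing in for the Kconfig symbol:

```c
#include <assert.h>

/* Reduced model of the kernel's struct scatterlist: dma_length only
 * exists when the architecture asks for it. */
struct scatterlist {
	unsigned int length;
#ifdef NEED_SG_DMA_LENGTH
	unsigned int dma_length;
#endif
};

/* sg_dma_len() resolves to whichever field exists; note it is an
 * lvalue, so assignments like sg_dma_len(sg) = 0 work in both
 * configurations. */
#ifdef NEED_SG_DMA_LENGTH
#define sg_dma_len(sg)	((sg)->dma_length)
#else
#define sg_dma_len(sg)	((sg)->length)
#endif
```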
+36 -18
drivers/xen/xen-selfballoon.c
··· 265 265 if (!capable(CAP_SYS_ADMIN)) 266 266 return -EPERM; 267 267 268 - err = strict_strtoul(buf, 10, &tmp); 269 - if (err || ((tmp != 0) && (tmp != 1))) 268 + err = kstrtoul(buf, 10, &tmp); 269 + if (err) 270 + return err; 271 + if ((tmp != 0) && (tmp != 1)) 270 272 return -EINVAL; 271 273 272 274 xen_selfballooning_enabled = !!tmp; ··· 294 292 295 293 if (!capable(CAP_SYS_ADMIN)) 296 294 return -EPERM; 297 - err = strict_strtoul(buf, 10, &val); 298 - if (err || val == 0) 295 + err = kstrtoul(buf, 10, &val); 296 + if (err) 297 + return err; 298 + if (val == 0) 299 299 return -EINVAL; 300 300 selfballoon_interval = val; 301 301 return count; ··· 318 314 319 315 if (!capable(CAP_SYS_ADMIN)) 320 316 return -EPERM; 321 - err = strict_strtoul(buf, 10, &val); 322 - if (err || val == 0) 317 + err = kstrtoul(buf, 10, &val); 318 + if (err) 319 + return err; 320 + if (val == 0) 323 321 return -EINVAL; 324 322 selfballoon_downhysteresis = val; 325 323 return count; ··· 343 337 344 338 if (!capable(CAP_SYS_ADMIN)) 345 339 return -EPERM; 346 - err = strict_strtoul(buf, 10, &val); 347 - if (err || val == 0) 340 + err = kstrtoul(buf, 10, &val); 341 + if (err) 342 + return err; 343 + if (val == 0) 348 344 return -EINVAL; 349 345 selfballoon_uphysteresis = val; 350 346 return count; ··· 368 360 369 361 if (!capable(CAP_SYS_ADMIN)) 370 362 return -EPERM; 371 - err = strict_strtoul(buf, 10, &val); 372 - if (err || val == 0) 363 + err = kstrtoul(buf, 10, &val); 364 + if (err) 365 + return err; 366 + if (val == 0) 373 367 return -EINVAL; 374 368 selfballoon_min_usable_mb = val; 375 369 return count; ··· 394 384 395 385 if (!capable(CAP_SYS_ADMIN)) 396 386 return -EPERM; 397 - err = strict_strtoul(buf, 10, &val); 398 - if (err || val == 0) 387 + err = kstrtoul(buf, 10, &val); 388 + if (err) 389 + return err; 390 + if (val == 0) 399 391 return -EINVAL; 400 392 selfballoon_reserved_mb = val; 401 393 return count; ··· 422 410 423 411 if (!capable(CAP_SYS_ADMIN)) 424 412 return 
-EPERM; 425 - err = strict_strtoul(buf, 10, &tmp); 426 - if (err || ((tmp != 0) && (tmp != 1))) 413 + err = kstrtoul(buf, 10, &tmp); 414 + if (err) 415 + return err; 416 + if ((tmp != 0) && (tmp != 1)) 427 417 return -EINVAL; 428 418 frontswap_selfshrinking = !!tmp; 429 419 if (!was_enabled && !xen_selfballooning_enabled && ··· 451 437 452 438 if (!capable(CAP_SYS_ADMIN)) 453 439 return -EPERM; 454 - err = strict_strtoul(buf, 10, &val); 455 - if (err || val == 0) 440 + err = kstrtoul(buf, 10, &val); 441 + if (err) 442 + return err; 443 + if (val == 0) 456 444 return -EINVAL; 457 445 frontswap_inertia = val; 458 446 frontswap_inertia_counter = val; ··· 476 460 477 461 if (!capable(CAP_SYS_ADMIN)) 478 462 return -EPERM; 479 - err = strict_strtoul(buf, 10, &val); 480 - if (err || val == 0) 463 + err = kstrtoul(buf, 10, &val); 464 + if (err) 465 + return err; 466 + if (val == 0) 481 467 return -EINVAL; 482 468 frontswap_hysteresis = val; 483 469 return count;
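Every sysfs store handler in xen-selfballoon.c above moves from the deprecated `strict_strtoul()` to `kstrtoul()`, and notably stops folding conversion errors and range errors into a single `-EINVAL`: the conversion error is propagated as-is, and only then is the value checked. A user-space analogue of that pattern built on `strtoul()` (the `parse_ul` name is hypothetical; `kstrtoul()` itself also rejects overflow with `-ERANGE`):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* kstrtoul()-style strict parse: the whole string must be a valid
 * base-10 number. Conversion failures are reported as negative errno
 * values, separately from any later value-range policy. */
static int parse_ul(const char *s, unsigned long *out)
{
	char *end;
	unsigned long v;

	errno = 0;
	v = strtoul(s, &end, 10);
	if (errno)
		return -errno;		/* e.g. -ERANGE on overflow */
	if (end == s || *end != '\0')
		return -EINVAL;		/* empty or trailing junk */

	*out = v;
	return 0;
}
```

The caller then does `if (err) return err;` and applies its own `val == 0` policy afterwards, exactly as the rewritten handlers do.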
+3
include/xen/balloon.h
··· 29 29 bool highmem); 30 30 void free_xenballooned_pages(int nr_pages, struct page **pages); 31 31 32 + struct page *get_balloon_scratch_page(void); 33 + void put_balloon_scratch_page(void); 34 + 32 35 struct device; 33 36 #ifdef CONFIG_XEN_SELFBALLOONING 34 37 extern int register_xen_selfballooning(struct device *dev);
+52
include/xen/interface/io/tpmif.h
··· 1 + /****************************************************************************** 2 + * tpmif.h 3 + * 4 + * TPM I/O interface for Xen guest OSes, v2 5 + * 6 + * This file is in the public domain. 7 + * 8 + */ 9 + 10 + #ifndef __XEN_PUBLIC_IO_TPMIF_H__ 11 + #define __XEN_PUBLIC_IO_TPMIF_H__ 12 + 13 + /* 14 + * Xenbus state machine 15 + * 16 + * Device open: 17 + * 1. Both ends start in XenbusStateInitialising 18 + * 2. Backend transitions to InitWait (frontend does not wait on this step) 19 + * 3. Frontend populates ring-ref, event-channel, feature-protocol-v2 20 + * 4. Frontend transitions to Initialised 21 + * 5. Backend maps grant and event channel, verifies feature-protocol-v2 22 + * 6. Backend transitions to Connected 23 + * 7. Frontend verifies feature-protocol-v2, transitions to Connected 24 + * 25 + * Device close: 26 + * 1. State is changed to XenbusStateClosing 27 + * 2. Frontend transitions to Closed 28 + * 3. Backend unmaps grant and event, changes state to InitWait 29 + */ 30 + 31 + enum vtpm_shared_page_state { 32 + VTPM_STATE_IDLE, /* no contents / vTPM idle / cancel complete */ 33 + VTPM_STATE_SUBMIT, /* request ready / vTPM working */ 34 + VTPM_STATE_FINISH, /* response ready / vTPM idle */ 35 + VTPM_STATE_CANCEL, /* cancel requested / vTPM working */ 36 + }; 37 + /* The backend should only change state to IDLE or FINISH, while the 38 + * frontend should only change to SUBMIT or CANCEL. */ 39 + 40 + 41 + struct vtpm_shared_page { 42 + uint32_t length; /* request/response length in bytes */ 43 + 44 + uint8_t state; /* enum vtpm_shared_page_state */ 45 + uint8_t locality; /* for the current request */ 46 + uint8_t pad; 47 + 48 + uint8_t nr_extra_pages; /* extra pages for long packets; may be zero */ 49 + uint32_t extra_pages[0]; /* grant IDs; length in nr_extra_pages */ 50 + }; 51 + 52 + #endif
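The new tpmif.h spells out an ownership rule for the shared-page state field: the backend may only move it to IDLE or FINISH, the frontend only to SUBMIT or CANCEL. A tiny illustrative helper encoding that rule (purely an illustration, not part of the header; the enum mirrors `vtpm_shared_page_state`):

```c
#include <assert.h>
#include <stdbool.h>

enum vtpm_state { VTPM_IDLE, VTPM_SUBMIT, VTPM_FINISH, VTPM_CANCEL };

/* Frontend-side transitions: submit a request, or cancel one. */
static bool frontend_may_set(enum vtpm_state s)
{
	return s == VTPM_SUBMIT || s == VTPM_CANCEL;
}

/* Backend-side transitions: finish a request, or acknowledge cancel. */
static bool backend_may_set(enum vtpm_state s)
{
	return s == VTPM_IDLE || s == VTPM_FINISH;
}
```

Splitting writers this way means each side only ever observes states the other side is allowed to produce, which is what makes the single shared byte safe without extra locking.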
+2
include/xen/interface/vcpu.h
··· 170 170 }; 171 171 DEFINE_GUEST_HANDLE_STRUCT(vcpu_register_vcpu_info); 172 172 173 + /* Send an NMI to the specified VCPU. @extra_arg == NULL. */ 174 + #define VCPUOP_send_nmi 11 173 175 #endif /* __XEN_PUBLIC_VCPU_H__ */
+4 -4
lib/swiotlb.c
··· 870 870 swiotlb_full(hwdev, sg->length, dir, 0); 871 871 swiotlb_unmap_sg_attrs(hwdev, sgl, i, dir, 872 872 attrs); 873 - sgl[0].dma_length = 0; 873 + sg_dma_len(sgl) = 0; 874 874 return 0; 875 875 } 876 876 sg->dma_address = phys_to_dma(hwdev, map); 877 877 } else 878 878 sg->dma_address = dev_addr; 879 - sg->dma_length = sg->length; 879 + sg_dma_len(sg) = sg->length; 880 880 } 881 881 return nelems; 882 882 } ··· 904 904 BUG_ON(dir == DMA_NONE); 905 905 906 906 for_each_sg(sgl, sg, nelems, i) 907 - unmap_single(hwdev, sg->dma_address, sg->dma_length, dir); 907 + unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir); 908 908 909 909 } 910 910 EXPORT_SYMBOL(swiotlb_unmap_sg_attrs); ··· 934 934 935 935 for_each_sg(sgl, sg, nelems, i) 936 936 swiotlb_sync_single(hwdev, sg->dma_address, 937 - sg->dma_length, dir, target); 937 + sg_dma_len(sg), dir, target); 938 938 } 939 939 940 940 void