Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'iommu-updates-v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu

Pull iommu updates from Joerg Roedel:
"Core code:
- map/unmap_pages() cleanup
- SVA and IOPF refactoring
- Clean up and document return codes from device/domain attachment

AMD driver:
- Rework and extend parsing code for ivrs_ioapic, ivrs_hpet and
ivrs_acpihid command line options
- Some smaller cleanups

Intel driver:
- Blocking domain support
- Cleanups

S390 driver:
- Fixes and improvements for attach and aperture handling

PAMU driver:
- Resource leak fix and cleanup

Rockchip driver:
- Page table permission bit fix

Mediatek driver:
- Improve safety from invalid dts input
- Smaller fixes and improvements

Exynos driver:
- Fix driver initialization sequence

Sun50i driver:
- Remove IOMMU_DOMAIN_IDENTITY as it has not been working forever
- Various other fixes"

* tag 'iommu-updates-v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (74 commits)
iommu/mediatek: Fix forever loop in error handling
iommu/mediatek: Fix crash on isr after kexec()
iommu/sun50i: Remove IOMMU_DOMAIN_IDENTITY
iommu/amd: Fix typo in macro parameter name
iommu/mediatek: Remove unused "mapping" member from mtk_iommu_data
iommu/mediatek: Improve safety for mediatek,smi property in larb nodes
iommu/mediatek: Validate number of phandles associated with "mediatek,larbs"
iommu/mediatek: Add error path for loop of mm_dts_parse
iommu/mediatek: Use component_match_add
iommu/mediatek: Add platform_device_put for recovering the device refcnt
iommu/fsl_pamu: Fix resource leak in fsl_pamu_probe()
iommu/vt-d: Use real field for indication of first level
iommu/vt-d: Remove unnecessary domain_context_mapped()
iommu/vt-d: Rename domain_add_dev_info()
iommu/vt-d: Rename iommu_disable_dev_iotlb()
iommu/vt-d: Add blocking domain support
iommu/vt-d: Add device_block_translation() helper
iommu/vt-d: Allocate pasid table in device probe path
iommu/amd: Check return value of mmu_notifier_register()
iommu/amd: Fix pci device refcount leak in ppr_notifier()
...

+1178 -663
+22 -5
Documentation/admin-guide/kernel-parameters.txt
···
 			Provide an override to the IOAPIC-ID<->DEVICE-ID
 			mapping provided in the IVRS ACPI table.
 			By default, PCI segment is 0, and can be omitted.
-			For example:
+
+			For example, to map IOAPIC-ID decimal 10 to
+			PCI segment 0x1 and PCI device 00:14.0,
+			write the parameter as:
+				ivrs_ioapic=10@0001:00:14.0
+
+			Deprecated formats:
 			* To map IOAPIC-ID decimal 10 to PCI device 00:14.0
 			  write the parameter as:
 				ivrs_ioapic[10]=00:14.0
···
 			Provide an override to the HPET-ID<->DEVICE-ID
 			mapping provided in the IVRS ACPI table.
 			By default, PCI segment is 0, and can be omitted.
-			For example:
+
+			For example, to map HPET-ID decimal 10 to
+			PCI segment 0x1 and PCI device 00:14.0,
+			write the parameter as:
+				ivrs_hpet=10@0001:00:14.0
+
+			Deprecated formats:
 			* To map HPET-ID decimal 0 to PCI device 00:14.0
 			  write the parameter as:
 				ivrs_hpet[0]=00:14.0
···
 	ivrs_acpihid	[HW,X86-64]
 			Provide an override to the ACPI-HID:UID<->DEVICE-ID
 			mapping provided in the IVRS ACPI table.
+			By default, PCI segment is 0, and can be omitted.

 			For example, to map UART-HID:UID AMD0020:0 to
 			PCI segment 0x1 and PCI device ID 00:14.5,
 			write the parameter as:
-				ivrs_acpihid[0001:00:14.5]=AMD0020:0
+				ivrs_acpihid=AMD0020:0@0001:00:14.5

-			By default, PCI segment is 0, and can be omitted.
-			For example, PCI device 00:14.5 write the parameter as:
+			Deprecated formats:
+			* To map UART-HID:UID AMD0020:0 to PCI segment is 0,
+			  PCI device ID 00:14.5, write the parameter as:
 				ivrs_acpihid[00:14.5]=AMD0020:0
+			* To map UART-HID:UID AMD0020:0 to PCI segment 0x1 and
+			  PCI device ID 00:14.5, write the parameter as:
+				ivrs_acpihid[0001:00:14.5]=AMD0020:0

 	js=		[HW,JOY] Analog joystick
 			See Documentation/input/joydev/joystick.rst.
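The new `id@[seg:]bus:dev.fn` syntax documented above can be exercised with an ordinary sscanf round-trip. The sketch below is illustrative userspace code, not kernel code: `parse_ivrs_ioapic_arg` and its return convention are invented for the example, while the format strings and the `IVRS_GET_SBDF_ID` packing mirror the parser the AMD driver gains in drivers/iommu/amd/init.c (the option string starts at the `=`, since the `ivrs_ioapic` prefix is consumed by the early-param machinery).

```c
#include <stdio.h>

/* Packs segment/bus/device/function the way the driver's
 * IVRS_GET_SBDF_ID macro does. */
#define IVRS_GET_SBDF_ID(seg, bus, dev, fn) \
	(((seg & 0xffff) << 16) | ((bus & 0xff) << 8) | \
	 ((dev & 0x1f) << 3) | (fn & 0x7))

/*
 * Parse the new ivrs_ioapic=<id>@[<seg>:]<bus>:<dev>.<fn> format.
 * Returns 0 on success (filling in *id and the packed *devid),
 * -1 on a malformed option.
 */
static int parse_ivrs_ioapic_arg(const char *str, int *id, unsigned int *devid)
{
	unsigned int seg = 0, bus, dev, fn;

	/* Try the segment-less form first, then the form with a PCI segment */
	if (sscanf(str, "=%d@%x:%x.%x", id, &bus, &dev, &fn) == 4 ||
	    sscanf(str, "=%d@%x:%x:%x.%x", id, &seg, &bus, &dev, &fn) == 5) {
		*devid = IVRS_GET_SBDF_ID(seg, bus, dev, fn);
		return 0;
	}
	return -1;
}
```

For `ivrs_ioapic=10@0001:00:14.0` this yields id 10 and device ID 0x100a0 (segment 1, bus 0, device 0x14, function 0).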
+171 -11
Documentation/devicetree/bindings/iommu/arm,smmu.yaml
···
           - enum:
               - qcom,msm8996-smmu-v2
               - qcom,msm8998-smmu-v2
+              - qcom,sdm630-smmu-v2
           - const: qcom,smmu-v2

-      - description: Qcom SoCs implementing "arm,mmu-500"
+      - description: Qcom SoCs implementing "qcom,smmu-500" and "arm,mmu-500"
         items:
+          - enum:
+              - qcom,qcm2290-smmu-500
+              - qcom,qdu1000-smmu-500
+              - qcom,sc7180-smmu-500
+              - qcom,sc7280-smmu-500
+              - qcom,sc8180x-smmu-500
+              - qcom,sc8280xp-smmu-500
+              - qcom,sdm670-smmu-500
+              - qcom,sdm845-smmu-500
+              - qcom,sm6115-smmu-500
+              - qcom,sm6350-smmu-500
+              - qcom,sm6375-smmu-500
+              - qcom,sm8150-smmu-500
+              - qcom,sm8250-smmu-500
+              - qcom,sm8350-smmu-500
+              - qcom,sm8450-smmu-500
+          - const: qcom,smmu-500
+          - const: arm,mmu-500
+
+      - description: Qcom SoCs implementing "arm,mmu-500" (non-qcom implementation)
+        deprecated: true
+        items:
+          - enum:
+              - qcom,sdx55-smmu-500
+              - qcom,sdx65-smmu-500
+          - const: arm,mmu-500
+
+      - description: Qcom SoCs implementing "arm,mmu-500" (legacy binding)
+        deprecated: true
+        items:
+          # Do not add additional SoC to this list. Instead use two previous lists.
           - enum:
               - qcom,qcm2290-smmu-500
               - qcom,sc7180-smmu-500
···
               - qcom,sc8180x-smmu-500
               - qcom,sc8280xp-smmu-500
               - qcom,sdm845-smmu-500
-              - qcom,sdx55-smmu-500
-              - qcom,sdx65-smmu-500
+              - qcom,sm6115-smmu-500
               - qcom,sm6350-smmu-500
               - qcom,sm6375-smmu-500
               - qcom,sm8150-smmu-500
···
               - qcom,sm8350-smmu-500
               - qcom,sm8450-smmu-500
           - const: arm,mmu-500
+
+      - description: Qcom Adreno GPUs implementing "arm,smmu-500"
+        items:
+          - enum:
+              - qcom,sc7280-smmu-500
+              - qcom,sm8250-smmu-500
+          - const: qcom,adreno-smmu
+          - const: arm,mmu-500
       - description: Qcom Adreno GPUs implementing "arm,smmu-v2"
         items:
           - enum:
+              - qcom,msm8996-smmu-v2
               - qcom,sc7180-smmu-v2
+              - qcom,sdm630-smmu-v2
               - qcom,sdm845-smmu-v2
+              - qcom,sm6350-smmu-v2
           - const: qcom,adreno-smmu
+          - const: qcom,smmu-v2
+      - description: Qcom Adreno GPUs on Google Cheza platform
+        items:
+          - const: qcom,sdm845-smmu-v2
           - const: qcom,smmu-v2
       - description: Marvell SoCs implementing "arm,mmu-500"
         items:
···
     present in such cases.

   clock-names:
-    items:
-      - const: bus
-      - const: iface
+    minItems: 1
+    maxItems: 7

   clocks:
-    items:
-      - description: bus clock required for downstream bus access and for the
-          smmu ptw
-      - description: interface clock required to access smmu's registers
-          through the TCU's programming interface.
+    minItems: 1
+    maxItems: 7

   power-domains:
     maxItems: 1
···
         properties:
           reg:
             maxItems: 1
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - qcom,msm8998-smmu-v2
+              - qcom,sdm630-smmu-v2
+    then:
+      anyOf:
+        - properties:
+            clock-names:
+              items:
+                - const: bus
+            clocks:
+              items:
+                - description: bus clock required for downstream bus access and for
+                    the smmu ptw
+        - properties:
+            clock-names:
+              items:
+                - const: iface
+                - const: mem
+                - const: mem_iface
+            clocks:
+              items:
+                - description: interface clock required to access smmu's registers
+                    through the TCU's programming interface.
+                - description: bus clock required for memory access
+                - description: bus clock required for GPU memory access
+        - properties:
+            clock-names:
+              items:
+                - const: iface-mm
+                - const: iface-smmu
+                - const: bus-mm
+                - const: bus-smmu
+            clocks:
+              items:
+                - description: interface clock required to access mnoc's registers
+                    through the TCU's programming interface.
+                - description: interface clock required to access smmu's registers
+                    through the TCU's programming interface.
+                - description: bus clock required for downstream bus access
+                - description: bus clock required for the smmu ptw
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - qcom,msm8996-smmu-v2
+              - qcom,sc7180-smmu-v2
+              - qcom,sdm845-smmu-v2
+    then:
+      properties:
+        clock-names:
+          items:
+            - const: bus
+            - const: iface
+
+        clocks:
+          items:
+            - description: bus clock required for downstream bus access and for
+                the smmu ptw
+            - description: interface clock required to access smmu's registers
+                through the TCU's programming interface.
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: qcom,sc7280-smmu-500
+    then:
+      properties:
+        clock-names:
+          items:
+            - const: gcc_gpu_memnoc_gfx_clk
+            - const: gcc_gpu_snoc_dvm_gfx_clk
+            - const: gpu_cc_ahb_clk
+            - const: gpu_cc_hlos1_vote_gpu_smmu_clk
+            - const: gpu_cc_cx_gmu_clk
+            - const: gpu_cc_hub_cx_int_clk
+            - const: gpu_cc_hub_aon_clk
+
+        clocks:
+          items:
+            - description: GPU memnoc_gfx clock
+            - description: GPU snoc_dvm_gfx clock
+            - description: GPU ahb clock
+            - description: GPU hlos1_vote_GPU smmu clock
+            - description: GPU cx_gmu clock
+            - description: GPU hub_cx_int clock
+            - description: GPU hub_aon clock
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - qcom,sm6350-smmu-v2
+              - qcom,sm8150-smmu-500
+              - qcom,sm8250-smmu-500
+    then:
+      properties:
+        clock-names:
+          items:
+            - const: ahb
+            - const: bus
+            - const: iface
+
+        clocks:
+          items:
+            - description: bus clock required for AHB bus access
+            - description: bus clock required for downstream bus access and for
+                the smmu ptw
+            - description: interface clock required to access smmu's registers
+                through the TCU's programming interface.

 examples:
   - |+
+2
Documentation/devicetree/bindings/iommu/mediatek,iommu.yaml
···
           - mediatek,mt8195-iommu-vdo        # generation two
           - mediatek,mt8195-iommu-vpp        # generation two
           - mediatek,mt8195-iommu-infra      # generation two
+          - mediatek,mt8365-m4u              # generation two

       - description: mt7623 generation one
         items:
···
       dt-binding/memory/mt8186-memory-port.h for mt8186,
       dt-binding/memory/mt8192-larb-port.h for mt8192.
       dt-binding/memory/mt8195-memory-port.h for mt8195.
+      dt-binding/memory/mediatek,mt8365-larb-port.h for mt8365.

   power-domains:
     maxItems: 1
+3 -2
arch/s390/include/asm/pci.h
···
 struct zpci_dev {
 	struct zpci_bus *zbus;
 	struct list_head entry;		/* list of all zpci_devices, needed for hotplug, etc. */
+	struct list_head iommu_list;
 	struct kref kref;
+	struct rcu_head rcu;
 	struct hotplug_slot hotplug_slot;

 	enum zpci_state state;
···
 	/* DMA stuff */
 	unsigned long	*dma_table;
-	spinlock_t	dma_table_lock;
 	int		tlb_refresh;

 	spinlock_t	iommu_bitmap_lock;
···
 bool zpci_is_device_configured(struct zpci_dev *zdev);

 int zpci_hot_reset_device(struct zpci_dev *zdev);
-int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64);
+int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64, u8 *);
 int zpci_unregister_ioat(struct zpci_dev *, u8);
 void zpci_remove_reserved_devices(void);
 void zpci_update_fh(struct zpci_dev *zdev, u32 fh);
+4 -2
arch/s390/kvm/pci.c
···
 static int kvm_s390_pci_register_kvm(void *opaque, struct kvm *kvm)
 {
 	struct zpci_dev *zdev = opaque;
+	u8 status;
 	int rc;

 	if (!zdev)
···
 	/* Re-register the IOMMU that was already created */
 	rc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
-				virt_to_phys(zdev->dma_table));
+				virt_to_phys(zdev->dma_table), &status);
 	if (rc)
 		goto clear_gisa;
···
 {
 	struct zpci_dev *zdev = opaque;
 	struct kvm *kvm;
+	u8 status;

 	if (!zdev)
 		return;
···
 	/* Re-register the IOMMU that was already created */
 	zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
-			   virt_to_phys(zdev->dma_table));
+			   virt_to_phys(zdev->dma_table), &status);

 out:
 	spin_lock(&kvm->arch.kzdev_list_lock);
+7 -6
arch/s390/pci/pci.c
···
 /* Modify PCI: Register I/O address translation parameters */
 int zpci_register_ioat(struct zpci_dev *zdev, u8 dmaas,
-		       u64 base, u64 limit, u64 iota)
+		       u64 base, u64 limit, u64 iota, u8 *status)
 {
 	u64 req = ZPCI_CREATE_REQ(zdev->fh, dmaas, ZPCI_MOD_FC_REG_IOAT);
 	struct zpci_fib fib = {0};
-	u8 cc, status;
+	u8 cc;

 	WARN_ON_ONCE(iota & 0x3fff);
 	fib.pba = base;
 	fib.pal = limit;
 	fib.iota = iota | ZPCI_IOTA_RTTO_FLAG;
 	fib.gd = zdev->gisa;
-	cc = zpci_mod_fc(req, &fib, &status);
+	cc = zpci_mod_fc(req, &fib, status);
 	if (cc)
-		zpci_dbg(3, "reg ioat fid:%x, cc:%d, status:%d\n", zdev->fid, cc, status);
+		zpci_dbg(3, "reg ioat fid:%x, cc:%d, status:%d\n", zdev->fid, cc, *status);
 	return cc;
 }
 EXPORT_SYMBOL_GPL(zpci_register_ioat);
···
  */
 int zpci_hot_reset_device(struct zpci_dev *zdev)
 {
+	u8 status;
 	int rc;

 	zpci_dbg(3, "rst fid:%x, fh:%x\n", zdev->fid, zdev->fh);
···
 	if (zdev->dma_table)
 		rc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
-					virt_to_phys(zdev->dma_table));
+					virt_to_phys(zdev->dma_table), &status);
 	else
 		rc = zpci_dma_init_device(zdev);
 	if (rc) {
···
 		break;
 	}
 	zpci_dbg(3, "rem fid:%x\n", zdev->fid);
-	kfree(zdev);
+	kfree_rcu(zdev, rcu);
 }

 int zpci_report_error(struct pci_dev *pdev,
+47 -30
arch/s390/pci/pci_dma.c
···
 	kmem_cache_free(dma_page_table_cache, table);
 }

-static unsigned long *dma_get_seg_table_origin(unsigned long *entry)
+static unsigned long *dma_get_seg_table_origin(unsigned long *rtep)
 {
+	unsigned long old_rte, rte;
 	unsigned long *sto;

-	if (reg_entry_isvalid(*entry))
-		sto = get_rt_sto(*entry);
-	else {
+	rte = READ_ONCE(*rtep);
+	if (reg_entry_isvalid(rte)) {
+		sto = get_rt_sto(rte);
+	} else {
 		sto = dma_alloc_cpu_table();
 		if (!sto)
 			return NULL;

-		set_rt_sto(entry, virt_to_phys(sto));
-		validate_rt_entry(entry);
-		entry_clr_protected(entry);
+		set_rt_sto(&rte, virt_to_phys(sto));
+		validate_rt_entry(&rte);
+		entry_clr_protected(&rte);
+
+		old_rte = cmpxchg(rtep, ZPCI_TABLE_INVALID, rte);
+		if (old_rte != ZPCI_TABLE_INVALID) {
+			/* Somone else was faster, use theirs */
+			dma_free_cpu_table(sto);
+			sto = get_rt_sto(old_rte);
+		}
 	}
 	return sto;
 }

-static unsigned long *dma_get_page_table_origin(unsigned long *entry)
+static unsigned long *dma_get_page_table_origin(unsigned long *step)
 {
+	unsigned long old_ste, ste;
 	unsigned long *pto;

-	if (reg_entry_isvalid(*entry))
-		pto = get_st_pto(*entry);
-	else {
+	ste = READ_ONCE(*step);
+	if (reg_entry_isvalid(ste)) {
+		pto = get_st_pto(ste);
+	} else {
 		pto = dma_alloc_page_table();
 		if (!pto)
 			return NULL;
-		set_st_pto(entry, virt_to_phys(pto));
-		validate_st_entry(entry);
-		entry_clr_protected(entry);
+		set_st_pto(&ste, virt_to_phys(pto));
+		validate_st_entry(&ste);
+		entry_clr_protected(&ste);
+
+		old_ste = cmpxchg(step, ZPCI_TABLE_INVALID, ste);
+		if (old_ste != ZPCI_TABLE_INVALID) {
+			/* Somone else was faster, use theirs */
+			dma_free_page_table(pto);
+			pto = get_st_pto(old_ste);
+		}
 	}
 	return pto;
 }
···
 	return &pto[px];
 }

-void dma_update_cpu_trans(unsigned long *entry, phys_addr_t page_addr, int flags)
+void dma_update_cpu_trans(unsigned long *ptep, phys_addr_t page_addr, int flags)
 {
+	unsigned long pte;
+
+	pte = READ_ONCE(*ptep);
 	if (flags & ZPCI_PTE_INVALID) {
-		invalidate_pt_entry(entry);
+		invalidate_pt_entry(&pte);
 	} else {
-		set_pt_pfaa(entry, page_addr);
-		validate_pt_entry(entry);
+		set_pt_pfaa(&pte, page_addr);
+		validate_pt_entry(&pte);
 	}

 	if (flags & ZPCI_TABLE_PROTECTED)
-		entry_set_protected(entry);
+		entry_set_protected(&pte);
 	else
-		entry_clr_protected(entry);
+		entry_clr_protected(&pte);
+
+	xchg(ptep, pte);
 }

 static int __dma_update_trans(struct zpci_dev *zdev, phys_addr_t pa,
···
 {
 	unsigned int nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	phys_addr_t page_addr = (pa & PAGE_MASK);
-	unsigned long irq_flags;
 	unsigned long *entry;
 	int i, rc = 0;

 	if (!nr_pages)
 		return -EINVAL;

-	spin_lock_irqsave(&zdev->dma_table_lock, irq_flags);
-	if (!zdev->dma_table) {
-		rc = -EINVAL;
-		goto out_unlock;
-	}
+	if (!zdev->dma_table)
+		return -EINVAL;

 	for (i = 0; i < nr_pages; i++) {
 		entry = dma_walk_cpu_trans(zdev->dma_table, dma_addr);
···
 			dma_update_cpu_trans(entry, page_addr, flags);
 		}
 	}
-out_unlock:
-	spin_unlock_irqrestore(&zdev->dma_table_lock, irq_flags);
 	return rc;
 }
···
 int zpci_dma_init_device(struct zpci_dev *zdev)
 {
+	u8 status;
 	int rc;

 	/*
···
 	WARN_ON(zdev->s390_domain);

 	spin_lock_init(&zdev->iommu_bitmap_lock);
-	spin_lock_init(&zdev->dma_table_lock);

 	zdev->dma_table = dma_alloc_cpu_table();
 	if (!zdev->dma_table) {
···
 	}
 	if (zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
-			       virt_to_phys(zdev->dma_table))) {
+			       virt_to_phys(zdev->dma_table), &status)) {
 		rc = -EIO;
 		goto free_bitmap;
 	}
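The lock-free update pattern this hunk introduces can be sketched in userspace with C11 atomics. This is an illustrative model, not the kernel code: `get_or_install_table`, `TABLE_INVALID`, and the calloc'd array stand in for `dma_alloc_cpu_table()` and `ZPCI_TABLE_INVALID`; only the race-handling shape is the point — build the new entry privately, publish it with a single compare-and-swap, and on a lost race free your copy and use the winner's, as `dma_get_seg_table_origin()` now does.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

#define TABLE_INVALID ((uintptr_t)0)	/* stands in for ZPCI_TABLE_INVALID */

/*
 * Lock-free lazy init of a shared table pointer: read the entry once;
 * if it is still invalid, allocate a table and try to install it with
 * one compare-and-swap. If another thread got there first, free ours
 * and return theirs.
 */
static uintptr_t *get_or_install_table(_Atomic uintptr_t *entry)
{
	uintptr_t expected, fresh, cur = atomic_load(entry);

	if (cur != TABLE_INVALID)
		return (uintptr_t *)cur;	/* entry already valid */

	fresh = (uintptr_t)calloc(256, sizeof(uintptr_t));
	if (!fresh)
		return NULL;

	expected = TABLE_INVALID;
	if (!atomic_compare_exchange_strong(entry, &expected, fresh)) {
		/* someone else was faster, use theirs */
		free((void *)fresh);
		fresh = expected;
	}
	return (uintptr_t *)fresh;
}
```

Note that on failure `atomic_compare_exchange_strong` writes the current value into `expected`, which is exactly what `cmpxchg()` returning the old value gives the kernel code.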
+61 -25
drivers/iommu/amd/init.c
···
 #define LOOP_TIMEOUT	2000000

-#define IVRS_GET_SBDF_ID(seg, bus, dev, fd) (((seg & 0xffff) << 16) | ((bus & 0xff) << 8) \
+#define IVRS_GET_SBDF_ID(seg, bus, dev, fn) (((seg & 0xffff) << 16) | ((bus & 0xff) << 8) \
 						 | ((dev & 0x1f) << 3) | (fn & 0x7))

 /*
···
 static int __init parse_ivrs_ioapic(char *str)
 {
 	u32 seg = 0, bus, dev, fn;
-	int ret, id, i;
+	int id, i;
 	u32 devid;

-	ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn);
-	if (ret != 4) {
-		ret = sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn);
-		if (ret != 5) {
-			pr_err("Invalid command line: ivrs_ioapic%s\n", str);
-			return 1;
-		}
+	if (sscanf(str, "=%d@%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+	    sscanf(str, "=%d@%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5)
+		goto found;
+
+	if (sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+	    sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) {
+		pr_warn("ivrs_ioapic%s option format deprecated; use ivrs_ioapic=%d@%04x:%02x:%02x.%d instead\n",
+			str, id, seg, bus, dev, fn);
+		goto found;
 	}

+	pr_err("Invalid command line: ivrs_ioapic%s\n", str);
+	return 1;
+
+found:
 	if (early_ioapic_map_size == EARLY_MAP_SIZE) {
 		pr_err("Early IOAPIC map overflow - ignoring ivrs_ioapic%s\n",
 			str);
···
 static int __init parse_ivrs_hpet(char *str)
 {
 	u32 seg = 0, bus, dev, fn;
-	int ret, id, i;
+	int id, i;
 	u32 devid;

-	ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn);
-	if (ret != 4) {
-		ret = sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn);
-		if (ret != 5) {
-			pr_err("Invalid command line: ivrs_hpet%s\n", str);
-			return 1;
-		}
+	if (sscanf(str, "=%d@%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+	    sscanf(str, "=%d@%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5)
+		goto found;
+
+	if (sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+	    sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) {
+		pr_warn("ivrs_hpet%s option format deprecated; use ivrs_hpet=%d@%04x:%02x:%02x.%d instead\n",
+			str, id, seg, bus, dev, fn);
+		goto found;
 	}

+	pr_err("Invalid command line: ivrs_hpet%s\n", str);
+	return 1;
+
+found:
 	if (early_hpet_map_size == EARLY_MAP_SIZE) {
 		pr_err("Early HPET map overflow - ignoring ivrs_hpet%s\n",
 			str);
···
 static int __init parse_ivrs_acpihid(char *str)
 {
 	u32 seg = 0, bus, dev, fn;
-	char *hid, *uid, *p;
+	char *hid, *uid, *p, *addr;
 	char acpiid[ACPIHID_UID_LEN + ACPIHID_HID_LEN] = {0};
-	int ret, i;
+	int i;

-	ret = sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid);
-	if (ret != 4) {
-		ret = sscanf(str, "[%x:%x:%x.%x]=%s", &seg, &bus, &dev, &fn, acpiid);
-		if (ret != 5) {
-			pr_err("Invalid command line: ivrs_acpihid(%s)\n", str);
-			return 1;
+	addr = strchr(str, '@');
+	if (!addr) {
+		if (sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid) == 4 ||
+		    sscanf(str, "[%x:%x:%x.%x]=%s", &seg, &bus, &dev, &fn, acpiid) == 5) {
+			pr_warn("ivrs_acpihid%s option format deprecated; use ivrs_acpihid=%s@%04x:%02x:%02x.%d instead\n",
+				str, acpiid, seg, bus, dev, fn);
+			goto found;
 		}
+		goto not_found;
 	}

+	/* We have the '@', make it the terminator to get just the acpiid */
+	*addr++ = 0;
+
+	if (sscanf(str, "=%s", acpiid) != 1)
+		goto not_found;
+
+	if (sscanf(addr, "%x:%x.%x", &bus, &dev, &fn) == 3 ||
+	    sscanf(addr, "%x:%x:%x.%x", &seg, &bus, &dev, &fn) == 4)
+		goto found;
+
+not_found:
+	pr_err("Invalid command line: ivrs_acpihid%s\n", str);
+	return 1;
+
+found:
 	p = acpiid;
 	hid = strsep(&p, ":");
 	uid = p;
···
 		pr_err("Invalid command line: hid or uid\n");
 		return 1;
 	}
+
+	/*
+	 * Ignore leading zeroes after ':', so e.g., AMDI0095:00
+	 * will match AMDI0095:0 in the second strcmp in acpi_dev_hid_uid_match
+	 */
+	while (*uid == '0' && *(uid + 1))
+		uid++;

 	i = early_acpihid_map_size++;
 	memcpy(early_acpihid_map[i].hid, hid, strlen(hid));
+1 -2
drivers/iommu/amd/iommu.c
···

 static void iommu_poll_ga_log(struct amd_iommu *iommu)
 {
-	u32 head, tail, cnt = 0;
+	u32 head, tail;

 	if (iommu->ga_log == NULL)
 		return;
···
 		u64 log_entry;

 		raw = (u64 *)(iommu->ga_log + head);
-		cnt++;

 		/* Avoid memcpy function-call overhead */
 		log_entry = *raw;
+4 -1
drivers/iommu/amd/iommu_v2.c
···
 	put_device_state(dev_state);

 out:
+	pci_dev_put(pdev);
 	return ret;
 }
···
 	if (pasid_state->mm == NULL)
 		goto out_free;

-	mmu_notifier_register(&pasid_state->mn, mm);
+	ret = mmu_notifier_register(&pasid_state->mn, mm);
+	if (ret)
+		goto out_free;

 	ret = set_pasid_state(dev_state, pasid_state, pasid);
 	if (ret)
+3
drivers/iommu/arm/arm-smmu/arm-smmu-impl.c
···
 		reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR);
 		reg &= ~ARM_MMU500_ACTLR_CPRE;
 		arm_smmu_cb_write(smmu, i, ARM_SMMU_CB_ACTLR, reg);
+		reg = arm_smmu_cb_read(smmu, i, ARM_SMMU_CB_ACTLR);
+		if (reg & ARM_MMU500_ACTLR_CPRE)
+			dev_warn_once(smmu->dev, "Failed to disable prefetcher [errata #841119 and #826419], check ACR.CACHE_LOCK\n");
 	}

 	return 0;
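This hunk adds a read-back check because on MMU-500 the ACTLR write can be silently ignored when the ACR CACHE_LOCK bit is set, leaving the errata workaround inactive. The shape of that write-then-verify pattern can be modeled in plain C; everything below (the `fake_cb` register model, `cb_read`/`cb_write`, the lock flag) is simulated for illustration, and only the clear-bit, read-back, warn-on-mismatch flow mirrors the driver.

```c
#include <stdint.h>
#include <stdio.h>

#define ACTLR_CPRE (1u << 1)	/* stand-in for ARM_MMU500_ACTLR_CPRE */

struct fake_cb {
	uint32_t actlr;
	int locked;		/* simulates ACR.CACHE_LOCK being set */
};

static uint32_t cb_read(const struct fake_cb *cb)
{
	return cb->actlr;
}

static void cb_write(struct fake_cb *cb, uint32_t val)
{
	if (!cb->locked)	/* a locked register silently drops writes */
		cb->actlr = val;
}

/* Clear CPRE, then read the register back to verify the write stuck.
 * Returns 0 on success, -1 when the bit is still set (the case the
 * driver now warns about). */
static int disable_prefetch_checked(struct fake_cb *cb)
{
	uint32_t reg = cb_read(cb);

	cb_write(cb, reg & ~ACTLR_CPRE);
	reg = cb_read(cb);
	if (reg & ACTLR_CPRE) {
		fprintf(stderr, "Failed to disable prefetcher, check CACHE_LOCK\n");
		return -1;
	}
	return 0;
}
```

The design point is simply that a write to a lockable hardware register is not proof of effect; only a read-back is.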
-91
drivers/iommu/arm/arm-smmu/arm-smmu-qcom-debug.c
···
 #include "arm-smmu.h"
 #include "arm-smmu-qcom.h"

-enum qcom_smmu_impl_reg_offset {
-	QCOM_SMMU_TBU_PWR_STATUS,
-	QCOM_SMMU_STATS_SYNC_INV_TBU_ACK,
-	QCOM_SMMU_MMU2QSS_AND_SAFE_WAIT_CNTR,
-};
-
-struct qcom_smmu_config {
-	const u32 *reg_offset;
-};
-
 void qcom_smmu_tlb_sync_debug(struct arm_smmu_device *smmu)
 {
 	int ret;
···
 			"TBU: power_status %#x sync_inv_ack %#x sync_inv_progress %#x\n",
 			tbu_pwr_status, sync_inv_ack, sync_inv_progress);
 	}
-}
-
-/* Implementation Defined Register Space 0 register offsets */
-static const u32 qcom_smmu_impl0_reg_offset[] = {
-	[QCOM_SMMU_TBU_PWR_STATUS]		= 0x2204,
-	[QCOM_SMMU_STATS_SYNC_INV_TBU_ACK]	= 0x25dc,
-	[QCOM_SMMU_MMU2QSS_AND_SAFE_WAIT_CNTR]	= 0x2670,
-};
-
-static const struct qcom_smmu_config qcm2290_smmu_cfg = {
-	.reg_offset = qcom_smmu_impl0_reg_offset,
-};
-
-static const struct qcom_smmu_config sc7180_smmu_cfg = {
-	.reg_offset = qcom_smmu_impl0_reg_offset,
-};
-
-static const struct qcom_smmu_config sc7280_smmu_cfg = {
-	.reg_offset = qcom_smmu_impl0_reg_offset,
-};
-
-static const struct qcom_smmu_config sc8180x_smmu_cfg = {
-	.reg_offset = qcom_smmu_impl0_reg_offset,
-};
-
-static const struct qcom_smmu_config sc8280xp_smmu_cfg = {
-	.reg_offset = qcom_smmu_impl0_reg_offset,
-};
-
-static const struct qcom_smmu_config sm6125_smmu_cfg = {
-	.reg_offset = qcom_smmu_impl0_reg_offset,
-};
-
-static const struct qcom_smmu_config sm6350_smmu_cfg = {
-	.reg_offset = qcom_smmu_impl0_reg_offset,
-};
-
-static const struct qcom_smmu_config sm8150_smmu_cfg = {
-	.reg_offset = qcom_smmu_impl0_reg_offset,
-};
-
-static const struct qcom_smmu_config sm8250_smmu_cfg = {
-	.reg_offset = qcom_smmu_impl0_reg_offset,
-};
-
-static const struct qcom_smmu_config sm8350_smmu_cfg = {
-	.reg_offset = qcom_smmu_impl0_reg_offset,
-};
-
-static const struct qcom_smmu_config sm8450_smmu_cfg = {
-	.reg_offset = qcom_smmu_impl0_reg_offset,
-};
-
-static const struct of_device_id __maybe_unused qcom_smmu_impl_debug_match[] = {
-	{ .compatible = "qcom,msm8998-smmu-v2" },
-	{ .compatible = "qcom,qcm2290-smmu-500", .data = &qcm2290_smmu_cfg },
-	{ .compatible = "qcom,sc7180-smmu-500", .data = &sc7180_smmu_cfg },
-	{ .compatible = "qcom,sc7280-smmu-500", .data = &sc7280_smmu_cfg},
-	{ .compatible = "qcom,sc8180x-smmu-500", .data = &sc8180x_smmu_cfg },
-	{ .compatible = "qcom,sc8280xp-smmu-500", .data = &sc8280xp_smmu_cfg },
-	{ .compatible = "qcom,sdm630-smmu-v2" },
-	{ .compatible = "qcom,sdm845-smmu-500" },
-	{ .compatible = "qcom,sm6125-smmu-500", .data = &sm6125_smmu_cfg},
-	{ .compatible = "qcom,sm6350-smmu-500", .data = &sm6350_smmu_cfg},
-	{ .compatible = "qcom,sm8150-smmu-500", .data = &sm8150_smmu_cfg },
-	{ .compatible = "qcom,sm8250-smmu-500", .data = &sm8250_smmu_cfg },
-	{ .compatible = "qcom,sm8350-smmu-500", .data = &sm8350_smmu_cfg },
-	{ .compatible = "qcom,sm8450-smmu-500", .data = &sm8450_smmu_cfg },
-	{ }
-};
-
-const void *qcom_smmu_impl_data(struct arm_smmu_device *smmu)
-{
-	const struct of_device_id *match;
-	const struct device_node *np = smmu->dev->of_node;
-
-	match = of_match_node(qcom_smmu_impl_debug_match, np);
-	if (!match)
-		return NULL;
-
-	return match->data;
 }
+116 -45
drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
···
 {
 	int ret;

+	arm_mmu500_reset(smmu);
+
 	/*
 	 * To address performance degradation in non-real time clients,
 	 * such as USB and UFS, turn off wait-for-safe on sdm845 based boards,
···
 	return ret;
 }

-static int qcom_smmu500_reset(struct arm_smmu_device *smmu)
-{
-	const struct device_node *np = smmu->dev->of_node;
-
-	arm_mmu500_reset(smmu);
-
-	if (of_device_is_compatible(np, "qcom,sdm845-smmu-500"))
-		return qcom_sdm845_smmu500_reset(smmu);
-
-	return 0;
-}
-
-static const struct arm_smmu_impl qcom_smmu_impl = {
+static const struct arm_smmu_impl qcom_smmu_v2_impl = {
 	.init_context = qcom_smmu_init_context,
 	.cfg_probe = qcom_smmu_cfg_probe,
 	.def_domain_type = qcom_smmu_def_domain_type,
-	.reset = qcom_smmu500_reset,
 	.write_s2cr = qcom_smmu_write_s2cr,
 	.tlb_sync = qcom_smmu_tlb_sync,
 };

-static const struct arm_smmu_impl qcom_adreno_smmu_impl = {
+static const struct arm_smmu_impl qcom_smmu_500_impl = {
+	.init_context = qcom_smmu_init_context,
+	.cfg_probe = qcom_smmu_cfg_probe,
+	.def_domain_type = qcom_smmu_def_domain_type,
+	.reset = arm_mmu500_reset,
+	.write_s2cr = qcom_smmu_write_s2cr,
+	.tlb_sync = qcom_smmu_tlb_sync,
+};
+
+static const struct arm_smmu_impl sdm845_smmu_500_impl = {
+	.init_context = qcom_smmu_init_context,
+	.cfg_probe = qcom_smmu_cfg_probe,
+	.def_domain_type = qcom_smmu_def_domain_type,
+	.reset = qcom_sdm845_smmu500_reset,
+	.write_s2cr = qcom_smmu_write_s2cr,
+	.tlb_sync = qcom_smmu_tlb_sync,
+};
+
+static const struct arm_smmu_impl qcom_adreno_smmu_v2_impl = {
 	.init_context = qcom_adreno_smmu_init_context,
 	.def_domain_type = qcom_smmu_def_domain_type,
-	.reset = qcom_smmu500_reset,
+	.alloc_context_bank = qcom_adreno_smmu_alloc_context_bank,
+	.write_sctlr = qcom_adreno_smmu_write_sctlr,
+	.tlb_sync = qcom_smmu_tlb_sync,
+};
+
+static const struct arm_smmu_impl qcom_adreno_smmu_500_impl = {
+	.init_context = qcom_adreno_smmu_init_context,
+	.def_domain_type = qcom_smmu_def_domain_type,
+	.reset = arm_mmu500_reset,
 	.alloc_context_bank = qcom_adreno_smmu_alloc_context_bank,
 	.write_sctlr = qcom_adreno_smmu_write_sctlr,
 	.tlb_sync = qcom_smmu_tlb_sync,
 };

 static struct arm_smmu_device *qcom_smmu_create(struct arm_smmu_device *smmu,
-		const struct arm_smmu_impl *impl)
+		const struct qcom_smmu_match_data *data)
 {
+	const struct device_node *np = smmu->dev->of_node;
+	const struct arm_smmu_impl *impl;
 	struct qcom_smmu *qsmmu;
+
+	if (!data)
+		return ERR_PTR(-EINVAL);
+
+	if (np && of_device_is_compatible(np, "qcom,adreno-smmu"))
+		impl = data->adreno_impl;
+	else
+		impl = data->impl;
+
+	if (!impl)
+		return smmu;

 	/* Check to make sure qcom_scm has finished probing */
 	if (!qcom_scm_is_available())
···
 		return ERR_PTR(-ENOMEM);

 	qsmmu->smmu.impl = impl;
-	qsmmu->cfg = qcom_smmu_impl_data(smmu);
+	qsmmu->cfg = data->cfg;

 	return &qsmmu->smmu;
 }

+/* Implementation Defined Register Space 0 register offsets */
+static const u32 qcom_smmu_impl0_reg_offset[] = {
+	[QCOM_SMMU_TBU_PWR_STATUS]		= 0x2204,
+	[QCOM_SMMU_STATS_SYNC_INV_TBU_ACK]	= 0x25dc,
+	[QCOM_SMMU_MMU2QSS_AND_SAFE_WAIT_CNTR]	= 0x2670,
+};
+
+static const struct qcom_smmu_config qcom_smmu_impl0_cfg = {
+	.reg_offset = qcom_smmu_impl0_reg_offset,
+};
+
+/*
+ * It is not yet possible to use MDP SMMU with the bypass quirk on the msm8996,
+ * there are not enough context banks.
+ */
+static const struct qcom_smmu_match_data msm8996_smmu_data = {
+	.impl = NULL,
+	.adreno_impl = &qcom_adreno_smmu_v2_impl,
+};
+
+static const struct qcom_smmu_match_data qcom_smmu_v2_data = {
+	.impl = &qcom_smmu_v2_impl,
+	.adreno_impl = &qcom_adreno_smmu_v2_impl,
+};
+
+static const struct qcom_smmu_match_data sdm845_smmu_500_data = {
+	.impl = &sdm845_smmu_500_impl,
+	/*
+	 * No need for adreno impl here. On sdm845 the Adreno SMMU is handled
+	 * by the separate sdm845-smmu-v2 device.
+	 */
+	/* Also no debug configuration. */
+};
+
+static const struct qcom_smmu_match_data qcom_smmu_500_impl0_data = {
+	.impl = &qcom_smmu_500_impl,
+	.adreno_impl = &qcom_adreno_smmu_500_impl,
+	.cfg = &qcom_smmu_impl0_cfg,
+};
+
+/*
+ * Do not add any more qcom,SOC-smmu-500 entries to this list, unless they need
+ * special handling and can not be covered by the qcom,smmu-500 entry.
+ */
 static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = {
-	{ .compatible = "qcom,msm8998-smmu-v2" },
-	{ .compatible = "qcom,qcm2290-smmu-500" },
-	{ .compatible = "qcom,sc7180-smmu-500" },
-	{ .compatible = "qcom,sc7280-smmu-500" },
-	{ .compatible = "qcom,sc8180x-smmu-500" },
-	{ .compatible = "qcom,sc8280xp-smmu-500" },
-	{ .compatible = "qcom,sdm630-smmu-v2" },
-	{ .compatible = "qcom,sdm845-smmu-500" },
-	{ .compatible = "qcom,sm6125-smmu-500" },
-	{ .compatible = "qcom,sm6350-smmu-500" },
-	{ .compatible = "qcom,sm6375-smmu-500" },
-	{ .compatible = "qcom,sm8150-smmu-500" },
-	{ .compatible = "qcom,sm8250-smmu-500" },
-	{ .compatible = "qcom,sm8350-smmu-500" },
-	{ .compatible = "qcom,sm8450-smmu-500" },
+	{ .compatible = "qcom,msm8996-smmu-v2", .data = &msm8996_smmu_data },
+	{ .compatible = "qcom,msm8998-smmu-v2", .data = &qcom_smmu_v2_data },
+	{ .compatible = "qcom,qcm2290-smmu-500", .data = &qcom_smmu_500_impl0_data },
+	{ .compatible = "qcom,qdu1000-smmu-500", .data = &qcom_smmu_500_impl0_data },
+	{ .compatible = "qcom,sc7180-smmu-500", .data = &qcom_smmu_500_impl0_data },
+	{ .compatible = "qcom,sc7280-smmu-500", .data = &qcom_smmu_500_impl0_data },
+	{ .compatible = "qcom,sc8180x-smmu-500", .data = &qcom_smmu_500_impl0_data },
+	{ .compatible = "qcom,sc8280xp-smmu-500", .data = &qcom_smmu_500_impl0_data },
+	{ .compatible = "qcom,sdm630-smmu-v2", .data = &qcom_smmu_v2_data },
+	{ .compatible = "qcom,sdm845-smmu-v2", .data = &qcom_smmu_v2_data },
+	{ .compatible = "qcom,sdm845-smmu-500", .data = &sdm845_smmu_500_data },
+	{ .compatible = "qcom,sm6115-smmu-500", .data = &qcom_smmu_500_impl0_data},
+	{ .compatible = "qcom,sm6125-smmu-500", .data = &qcom_smmu_500_impl0_data },
+	{ .compatible = "qcom,sm6350-smmu-v2", .data = &qcom_smmu_v2_data },
+	{ .compatible = "qcom,sm6350-smmu-500", .data =
&qcom_smmu_500_impl0_data }, 486 + { .compatible = "qcom,sm6375-smmu-500", .data = &qcom_smmu_500_impl0_data }, 487 + { .compatible = "qcom,sm8150-smmu-500", .data = &qcom_smmu_500_impl0_data }, 488 + { .compatible = "qcom,sm8250-smmu-500", .data = &qcom_smmu_500_impl0_data }, 489 + { .compatible = "qcom,sm8350-smmu-500", .data = &qcom_smmu_500_impl0_data }, 490 + { .compatible = "qcom,sm8450-smmu-500", .data = &qcom_smmu_500_impl0_data }, 491 + { .compatible = "qcom,smmu-500", .data = &qcom_smmu_500_impl0_data }, 470 492 { } 471 493 }; 472 494 ··· 531 453 struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu) 532 454 { 533 455 const struct device_node *np = smmu->dev->of_node; 456 + const struct of_device_id *match; 534 457 535 458 #ifdef CONFIG_ACPI 536 459 if (np == NULL) { 537 460 /* Match platform for ACPI boot */ 538 461 if (acpi_match_platform_list(qcom_acpi_platlist) >= 0) 539 - return qcom_smmu_create(smmu, &qcom_smmu_impl); 462 + return qcom_smmu_create(smmu, &qcom_smmu_500_impl0_data); 540 463 } 541 464 #endif 542 465 543 - /* 544 - * Do not change this order of implementation, i.e., first adreno 545 - * smmu impl and then apss smmu since we can have both implementing 546 - * arm,mmu-500 in which case we will miss setting adreno smmu specific 547 - * features if the order is changed. 548 - */ 549 - if (of_device_is_compatible(np, "qcom,adreno-smmu")) 550 - return qcom_smmu_create(smmu, &qcom_adreno_smmu_impl); 551 - 552 - if (of_match_node(qcom_smmu_impl_of_match, np)) 553 - return qcom_smmu_create(smmu, &qcom_smmu_impl); 466 + match = of_match_node(qcom_smmu_impl_of_match, np); 467 + if (match) 468 + return qcom_smmu_create(smmu, match->data); 554 469 555 470 return smmu; 556 471 }
+16 -5
drivers/iommu/arm/arm-smmu/arm-smmu-qcom.h
···
 	u32 stall_enabled;
 };
 
+enum qcom_smmu_impl_reg_offset {
+	QCOM_SMMU_TBU_PWR_STATUS,
+	QCOM_SMMU_STATS_SYNC_INV_TBU_ACK,
+	QCOM_SMMU_MMU2QSS_AND_SAFE_WAIT_CNTR,
+};
+
+struct qcom_smmu_config {
+	const u32 *reg_offset;
+};
+
+struct qcom_smmu_match_data {
+	const struct qcom_smmu_config *cfg;
+	const struct arm_smmu_impl *impl;
+	const struct arm_smmu_impl *adreno_impl;
+};
+
 #ifdef CONFIG_ARM_SMMU_QCOM_DEBUG
 void qcom_smmu_tlb_sync_debug(struct arm_smmu_device *smmu);
-const void *qcom_smmu_impl_data(struct arm_smmu_device *smmu);
 #else
 static inline void qcom_smmu_tlb_sync_debug(struct arm_smmu_device *smmu) { }
-static inline const void *qcom_smmu_impl_data(struct arm_smmu_device *smmu)
-{
-	return NULL;
-}
 #endif
 
 #endif /* _ARM_SMMU_QCOM_H */
+8 -6
drivers/iommu/arm/arm-smmu/qcom_iommu.c
···
 }
 
 static int qcom_iommu_map(struct iommu_domain *domain, unsigned long iova,
-			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+			  phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			  int prot, gfp_t gfp, size_t *mapped)
 {
 	int ret;
 	unsigned long flags;
···
 		return -ENODEV;
 
 	spin_lock_irqsave(&qcom_domain->pgtbl_lock, flags);
-	ret = ops->map(ops, iova, paddr, size, prot, GFP_ATOMIC);
+	ret = ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot, GFP_ATOMIC, mapped);
 	spin_unlock_irqrestore(&qcom_domain->pgtbl_lock, flags);
 	return ret;
 }
 
 static size_t qcom_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
-			       size_t size, struct iommu_iotlb_gather *gather)
+			       size_t pgsize, size_t pgcount,
+			       struct iommu_iotlb_gather *gather)
 {
 	size_t ret;
 	unsigned long flags;
···
 	 */
 	pm_runtime_get_sync(qcom_domain->iommu->dev);
 	spin_lock_irqsave(&qcom_domain->pgtbl_lock, flags);
-	ret = ops->unmap(ops, iova, size, gather);
+	ret = ops->unmap_pages(ops, iova, pgsize, pgcount, gather);
 	spin_unlock_irqrestore(&qcom_domain->pgtbl_lock, flags);
 	pm_runtime_put_sync(qcom_domain->iommu->dev);
 
···
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev	= qcom_iommu_attach_dev,
 		.detach_dev	= qcom_iommu_detach_dev,
-		.map		= qcom_iommu_map,
-		.unmap		= qcom_iommu_unmap,
+		.map_pages	= qcom_iommu_map,
+		.unmap_pages	= qcom_iommu_unmap,
 		.flush_iotlb_all = qcom_iommu_flush_iotlb_all,
 		.iotlb_sync	= qcom_iommu_iotlb_sync,
 		.iova_to_phys	= qcom_iommu_iova_to_phys,
+12 -14
drivers/iommu/exynos-iommu.c
···
 	if (ret)
 		return ret;
 
-	ret = iommu_device_register(&data->iommu, &exynos_iommu_ops, dev);
-	if (ret)
-		goto err_iommu_register;
-
 	platform_set_drvdata(pdev, data);
 
 	if (PG_ENT_SHIFT < 0) {
···
 
 	pm_runtime_enable(dev);
 
+	ret = iommu_device_register(&data->iommu, &exynos_iommu_ops, dev);
+	if (ret)
+		goto err_dma_set_mask;
+
 	return 0;
 
 err_dma_set_mask:
-	iommu_device_unregister(&data->iommu);
-err_iommu_register:
 	iommu_device_sysfs_remove(&data->iommu);
 	return ret;
 }
···
 		return -ENOMEM;
 	}
 
-	ret = platform_driver_register(&exynos_sysmmu_driver);
-	if (ret) {
-		pr_err("%s: Failed to register driver\n", __func__);
-		goto err_reg_driver;
-	}
-
 	zero_lv2_table = kmem_cache_zalloc(lv2table_kmem_cache, GFP_KERNEL);
 	if (zero_lv2_table == NULL) {
 		pr_err("%s: Failed to allocate zero level2 page table\n",
···
 		goto err_zero_lv2;
 	}
 
+	ret = platform_driver_register(&exynos_sysmmu_driver);
+	if (ret) {
+		pr_err("%s: Failed to register driver\n", __func__);
+		goto err_reg_driver;
+	}
+
 	return 0;
-err_zero_lv2:
-	platform_driver_unregister(&exynos_sysmmu_driver);
 err_reg_driver:
+	platform_driver_unregister(&exynos_sysmmu_driver);
+err_zero_lv2:
 	kmem_cache_destroy(lv2table_kmem_cache);
 	return ret;
 }
+3 -3
drivers/iommu/fsl_pamu.c
···
 	of_get_address(dev->of_node, 0, &size, NULL);
 
 	irq = irq_of_parse_and_map(dev->of_node, 0);
-	if (irq == NO_IRQ) {
+	if (!irq) {
 		dev_warn(dev, "no interrupts listed in PAMU node\n");
 		goto error;
 	}
···
 		ret = create_csd(ppaact_phys, mem_size, csd_port_id);
 		if (ret) {
 			dev_err(dev, "could not create coherence subdomain\n");
-			return ret;
+			goto error;
 		}
 	}
···
 	return 0;
 
 error:
-	if (irq != NO_IRQ)
+	if (irq)
 		free_irq(irq, data);
 
 	kfree_sensitive(data);
+85 -84
drivers/iommu/intel/iommu.c
···
 #define for_each_rmrr_units(rmrr) \
 	list_for_each_entry(rmrr, &dmar_rmrr_units, list)
 
-static void dmar_remove_one_dev_info(struct device *dev);
+static void device_block_translation(struct device *dev);
+static void intel_iommu_domain_free(struct iommu_domain *domain);
 
 int dmar_disabled = !IS_ENABLED(CONFIG_INTEL_IOMMU_DEFAULT_ON);
 int intel_iommu_sm = IS_ENABLED(CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON);
···
 static inline int domain_type_is_si(struct dmar_domain *domain)
 {
 	return domain->domain.type == IOMMU_DOMAIN_IDENTITY;
-}
-
-static inline bool domain_use_first_level(struct dmar_domain *domain)
-{
-	return domain->flags & DOMAIN_FLAG_USE_FIRST_LEVEL;
 }
 
 static inline int domain_pfn_supported(struct dmar_domain *domain,
···
 	rcu_read_lock();
 	for_each_active_iommu(iommu, drhd) {
 		if (iommu != skip) {
-			if (domain && domain_use_first_level(domain)) {
+			if (domain && domain->use_first_level) {
 				if (!cap_fl1gp_support(iommu->cap))
 					mask = 0x1;
 			} else {
···
 	 * paging and 57-bits with 5-level paging). Hence, skip bit
 	 * [N-1].
 	 */
-	if (domain_use_first_level(domain))
+	if (domain->use_first_level)
 		domain->domain.geometry.aperture_end = __DOMAIN_MAX_ADDR(domain->gaw - 1);
 	else
 		domain->domain.geometry.aperture_end = __DOMAIN_MAX_ADDR(domain->gaw);
···
 	clflush_cache_range(addr, size);
 }
 
-static int device_context_mapped(struct intel_iommu *iommu, u8 bus, u8 devfn)
-{
-	struct context_entry *context;
-	int ret = 0;
-
-	spin_lock(&iommu->lock);
-	context = iommu_context_addr(iommu, bus, devfn, 0);
-	if (context)
-		ret = context_present(context);
-	spin_unlock(&iommu->lock);
-	return ret;
-}
-
 static void free_context_table(struct intel_iommu *iommu)
 {
 	struct context_entry *context;
···
 
 		domain_flush_cache(domain, tmp_page, VTD_PAGE_SIZE);
 		pteval = ((uint64_t)virt_to_dma_pfn(tmp_page) << VTD_PAGE_SHIFT) | DMA_PTE_READ | DMA_PTE_WRITE;
-		if (domain_use_first_level(domain))
+		if (domain->use_first_level)
 			pteval |= DMA_FL_PTE_XD | DMA_FL_PTE_US | DMA_FL_PTE_ACCESS;
 
 		if (cmpxchg64(&pte->val, 0ULL, pteval))
···
 {
 	struct pci_dev *pdev;
 
-	if (!info || !dev_is_pci(info->dev))
+	if (!dev_is_pci(info->dev))
 		return;
 
 	pdev = to_pci_dev(info->dev);
···
 	}
 }
 
-static void iommu_disable_dev_iotlb(struct device_domain_info *info)
+static void iommu_disable_pci_caps(struct device_domain_info *info)
 {
 	struct pci_dev *pdev;
 
···
 	if (ih)
 		ih = 1 << 6;
 
-	if (domain_use_first_level(domain)) {
+	if (domain->use_first_level) {
 		qi_flush_piotlb(iommu, did, PASID_RID2PASID, addr, pages, ih);
 	} else {
 		unsigned long bitmask = aligned_pages - 1;
···
 	 * It's a non-present to present mapping. Only flush if caching mode
 	 * and second level.
 	 */
-	if (cap_caching_mode(iommu->cap) && !domain_use_first_level(domain))
+	if (cap_caching_mode(iommu->cap) && !domain->use_first_level)
 		iommu_flush_iotlb_psi(iommu, domain, pfn, pages, 0, 1);
 	else
 		iommu_flush_write_buffer(iommu);
···
 	struct intel_iommu *iommu = info->iommu;
 	u16 did = domain_id_iommu(dmar_domain, iommu);
 
-	if (domain_use_first_level(dmar_domain))
+	if (dmar_domain->use_first_level)
 		qi_flush_piotlb(iommu, did, PASID_RID2PASID, 0, -1, 0);
 	else
 		iommu->flush.flush_iotlb(iommu, did, 0, 0,
···
 
 	domain->nid = NUMA_NO_NODE;
 	if (first_level_by_default(type))
-		domain->flags |= DOMAIN_FLAG_USE_FIRST_LEVEL;
+		domain->use_first_level = true;
 	domain->has_iotlb_device = false;
 	INIT_LIST_HEAD(&domain->devices);
 	spin_lock_init(&domain->lock);
···
 	} else {
 		iommu_flush_write_buffer(iommu);
 	}
-	iommu_enable_pci_caps(info);
 
 	ret = 0;
 
···
 
 	return pci_for_each_dma_alias(to_pci_dev(dev),
 				      &domain_context_mapping_cb, &data);
-}
-
-static int domain_context_mapped_cb(struct pci_dev *pdev,
-				    u16 alias, void *opaque)
-{
-	struct intel_iommu *iommu = opaque;
-
-	return !device_context_mapped(iommu, PCI_BUS_NUM(alias), alias & 0xff);
-}
-
-static int domain_context_mapped(struct device *dev)
-{
-	struct intel_iommu *iommu;
-	u8 bus, devfn;
-
-	iommu = device_to_iommu(dev, &bus, &devfn);
-	if (!iommu)
-		return -ENODEV;
-
-	if (!dev_is_pci(dev))
-		return device_context_mapped(iommu, bus, devfn);
-
-	return !pci_for_each_dma_alias(to_pci_dev(dev),
-				       domain_context_mapped_cb, iommu);
 }
 
 /* Returns a number of VTD pages, but aligned to MM page size */
···
 
 	attr = prot & (DMA_PTE_READ | DMA_PTE_WRITE | DMA_PTE_SNP);
 	attr |= DMA_FL_PTE_PRESENT;
-	if (domain_use_first_level(domain)) {
+	if (domain->use_first_level) {
 		attr |= DMA_FL_PTE_XD | DMA_FL_PTE_US | DMA_FL_PTE_ACCESS;
 		if (prot & DMA_PTE_WRITE)
 			attr |= DMA_FL_PTE_DIRTY;
···
 	return 0;
 }
 
-static int domain_add_dev_info(struct dmar_domain *domain, struct device *dev)
+static int dmar_domain_attach_device(struct dmar_domain *domain,
+				     struct device *dev)
 {
 	struct device_domain_info *info = dev_iommu_priv_get(dev);
 	struct intel_iommu *iommu;
···
 
 	/* PASID table is mandatory for a PCI device in scalable mode. */
 	if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev)) {
-		ret = intel_pasid_alloc_table(dev);
-		if (ret) {
-			dev_err(dev, "PASID table allocation failed\n");
-			dmar_remove_one_dev_info(dev);
-			return ret;
-		}
-
 		/* Setup the PASID entry for requests without PASID: */
 		if (hw_pass_through && domain_type_is_si(domain))
 			ret = intel_pasid_setup_pass_through(iommu, domain,
 					dev, PASID_RID2PASID);
-		else if (domain_use_first_level(domain))
+		else if (domain->use_first_level)
 			ret = domain_setup_first_level(iommu, domain, dev,
 					PASID_RID2PASID);
 		else
···
 					dev, PASID_RID2PASID);
 		if (ret) {
 			dev_err(dev, "Setup RID2PASID failed\n");
-			dmar_remove_one_dev_info(dev);
+			device_block_translation(dev);
 			return ret;
 		}
 	}
···
 	ret = domain_context_mapping(domain, dev);
 	if (ret) {
 		dev_err(dev, "Domain context map failed\n");
-		dmar_remove_one_dev_info(dev);
+		device_block_translation(dev);
 		return ret;
 	}
+
+	iommu_enable_pci_caps(info);
 
 	return 0;
 }
···
 		intel_pasid_tear_down_entry(iommu, info->dev,
 					    PASID_RID2PASID, false);
 
-		iommu_disable_dev_iotlb(info);
+		iommu_disable_pci_caps(info);
 		domain_context_clear(info);
-		intel_pasid_free_table(info->dev);
 	}
 
 	spin_lock_irqsave(&domain->lock, flags);
···
 	spin_unlock_irqrestore(&domain->lock, flags);
 
 	domain_detach_iommu(domain, iommu);
+	info->domain = NULL;
+}
+
+/*
+ * Clear the page table pointer in context or pasid table entries so that
+ * all DMA requests without PASID from the device are blocked. If the page
+ * table has been set, clean up the data structures.
+ */
+static void device_block_translation(struct device *dev)
+{
+	struct device_domain_info *info = dev_iommu_priv_get(dev);
+	struct intel_iommu *iommu = info->iommu;
+	unsigned long flags;
+
+	iommu_disable_pci_caps(info);
+	if (!dev_is_real_dma_subdevice(dev)) {
+		if (sm_supported(iommu))
+			intel_pasid_tear_down_entry(iommu, dev,
+						    PASID_RID2PASID, false);
+		else
+			domain_context_clear(info);
+	}
+
+	if (!info->domain)
+		return;
+
+	spin_lock_irqsave(&info->domain->lock, flags);
+	list_del(&info->link);
+	spin_unlock_irqrestore(&info->domain->lock, flags);
+
+	domain_detach_iommu(info->domain, iommu);
 	info->domain = NULL;
 }
···
 	return 0;
 }
 
+static int blocking_domain_attach_dev(struct iommu_domain *domain,
+				      struct device *dev)
+{
+	device_block_translation(dev);
+	return 0;
+}
+
+static struct iommu_domain blocking_domain = {
+	.ops = &(const struct iommu_domain_ops) {
+		.attach_dev	= blocking_domain_attach_dev,
+		.free		= intel_iommu_domain_free
+	}
+};
+
 static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
 {
 	struct dmar_domain *dmar_domain;
 	struct iommu_domain *domain;
 
 	switch (type) {
+	case IOMMU_DOMAIN_BLOCKED:
+		return &blocking_domain;
 	case IOMMU_DOMAIN_DMA:
 	case IOMMU_DOMAIN_DMA_FQ:
 	case IOMMU_DOMAIN_UNMANAGED:
···
 
 static void intel_iommu_domain_free(struct iommu_domain *domain)
 {
-	if (domain != &si_domain->domain)
+	if (domain != &si_domain->domain && domain != &blocking_domain)
 		domain_exit(to_dmar_domain(domain));
 }
···
 static int intel_iommu_attach_device(struct iommu_domain *domain,
 				     struct device *dev)
 {
+	struct device_domain_info *info = dev_iommu_priv_get(dev);
 	int ret;
 
 	if (domain->type == IOMMU_DOMAIN_UNMANAGED &&
···
 		return -EPERM;
 	}
 
-	/* normally dev is not mapped */
-	if (unlikely(domain_context_mapped(dev))) {
-		struct device_domain_info *info = dev_iommu_priv_get(dev);
-
-		if (info->domain)
-			dmar_remove_one_dev_info(dev);
-	}
+	if (info->domain)
+		device_block_translation(dev);
 
 	ret = prepare_domain_attach_device(domain, dev);
 	if (ret)
 		return ret;
 
-	return domain_add_dev_info(to_dmar_domain(domain), dev);
-}
-
-static void intel_iommu_detach_device(struct iommu_domain *domain,
-				      struct device *dev)
-{
-	dmar_remove_one_dev_info(dev);
+	return dmar_domain_attach_device(to_dmar_domain(domain), dev);
 }
 
 static int intel_iommu_map(struct iommu_domain *domain,
···
 	 * Second level page table supports per-PTE snoop control. The
 	 * iommu_map() interface will handle this by setting SNP bit.
 	 */
-	if (!domain_use_first_level(domain)) {
+	if (!domain->use_first_level) {
 		domain->set_pte_snp = true;
 		return;
 	}
···
 	struct device_domain_info *info;
 	struct intel_iommu *iommu;
 	u8 bus, devfn;
+	int ret;
 
 	iommu = device_to_iommu(dev, &bus, &devfn);
 	if (!iommu || !iommu->iommu.ops)
···
 
 	dev_iommu_priv_set(dev, info);
 
+	if (sm_supported(iommu) && !dev_is_real_dma_subdevice(dev)) {
+		ret = intel_pasid_alloc_table(dev);
+		if (ret) {
+			dev_err(dev, "PASID table allocation failed\n");
+			dev_iommu_priv_set(dev, NULL);
+			kfree(info);
+			return ERR_PTR(ret);
+		}
+	}
+
 	return &iommu->iommu;
 }
···
 	struct device_domain_info *info = dev_iommu_priv_get(dev);
 
 	dmar_remove_one_dev_info(dev);
+	intel_pasid_free_table(dev);
 	dev_iommu_priv_set(dev, NULL);
 	kfree(info);
 	set_dma_ops(dev, NULL);
···
 #endif
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev		= intel_iommu_attach_device,
-		.detach_dev		= intel_iommu_detach_device,
 		.map_pages		= intel_iommu_map_pages,
 		.unmap_pages		= intel_iommu_unmap_pages,
 		.iotlb_sync_map		= intel_iommu_iotlb_sync_map,
+5 -10
drivers/iommu/intel/iommu.h
···
 	u64 hi;
 };
 
-/*
- * When VT-d works in the scalable mode, it allows DMA translation to
- * happen through either first level or second level page table. This
- * bit marks that the DMA translation for the domain goes through the
- * first level page table, otherwise, it goes through the second level.
- */
-#define DOMAIN_FLAG_USE_FIRST_LEVEL		BIT(1)
-
 struct iommu_domain_info {
 	struct intel_iommu *iommu;
 	unsigned int refcnt;		/* Refcount of devices per iommu */
···
 	u8 iommu_coherency: 1;		/* indicate coherency of iommu access */
 	u8 force_snooping : 1;		/* Create IOPTEs with snoop control */
 	u8 set_pte_snp:1;
+	u8 use_first_level:1;		/* DMA translation for the domain goes
+					 * through the first level page table,
+					 * otherwise, goes through the second
+					 * level.
+					 */
 
 	spinlock_t lock;		/* Protect device tracking lists */
 	struct list_head devices;	/* all devices' list */
···
 
 	/* adjusted guest address width, 0 is level 2 30-bit */
 	int		agaw;
-
-	int		flags;		/* flags to find out type of domain */
 	int		iommu_superpage;/* Level of superpages supported:
 					   0 == 4KiB (no superpages), 1 == 2MiB,
 					   2 == 1GiB, 3 == 512GiB, 4 == 1TiB */
+15 -26
drivers/iommu/io-pgtable-arm-v7s.c
···
 
 		iova += pgsize;
 		paddr += pgsize;
-		if (mapped)
-			*mapped += pgsize;
+		*mapped += pgsize;
 	}
 	/*
 	 * Synchronise all PTE updates for the new mapping before there's
···
 	wmb();
 
 	return ret;
-}
-
-static int arm_v7s_map(struct io_pgtable_ops *ops, unsigned long iova,
-		       phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
-{
-	return arm_v7s_map_pages(ops, iova, paddr, size, 1, prot, gfp, NULL);
 }
 
 static void arm_v7s_free_pgtable(struct io_pgtable *iop)
···
 	return unmapped;
 }
 
-static size_t arm_v7s_unmap(struct io_pgtable_ops *ops, unsigned long iova,
-			    size_t size, struct iommu_iotlb_gather *gather)
-{
-	return arm_v7s_unmap_pages(ops, iova, size, 1, gather);
-}
-
 static phys_addr_t arm_v7s_iova_to_phys(struct io_pgtable_ops *ops,
 					unsigned long iova)
 {
···
 		goto out_free_data;
 
 	data->iop.ops = (struct io_pgtable_ops) {
-		.map		= arm_v7s_map,
 		.map_pages	= arm_v7s_map_pages,
-		.unmap		= arm_v7s_unmap,
 		.unmap_pages	= arm_v7s_unmap_pages,
 		.iova_to_phys	= arm_v7s_iova_to_phys,
 	};
···
 	};
 	unsigned int iova, size, iova_start;
 	unsigned int i, loopnr = 0;
+	size_t mapped;
 
 	selftest_running = true;
···
 	iova = 0;
 	for_each_set_bit(i, &cfg.pgsize_bitmap, BITS_PER_LONG) {
 		size = 1UL << i;
-		if (ops->map(ops, iova, iova, size, IOMMU_READ |
-						    IOMMU_WRITE |
-						    IOMMU_NOEXEC |
-						    IOMMU_CACHE, GFP_KERNEL))
+		if (ops->map_pages(ops, iova, iova, size, 1,
+				   IOMMU_READ | IOMMU_WRITE |
+				   IOMMU_NOEXEC | IOMMU_CACHE,
+				   GFP_KERNEL, &mapped))
 			return __FAIL(ops);
 
 		/* Overlapping mappings */
-		if (!ops->map(ops, iova, iova + size, size,
-			      IOMMU_READ | IOMMU_NOEXEC, GFP_KERNEL))
+		if (!ops->map_pages(ops, iova, iova + size, size, 1,
+				    IOMMU_READ | IOMMU_NOEXEC, GFP_KERNEL,
+				    &mapped))
 			return __FAIL(ops);
 
 		if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
···
 	size = 1UL << __ffs(cfg.pgsize_bitmap);
 	while (i < loopnr) {
 		iova_start = i * SZ_16M;
-		if (ops->unmap(ops, iova_start + size, size, NULL) != size)
+		if (ops->unmap_pages(ops, iova_start + size, size, 1, NULL) != size)
 			return __FAIL(ops);
 
 		/* Remap of partial unmap */
-		if (ops->map(ops, iova_start + size, size, size, IOMMU_READ, GFP_KERNEL))
+		if (ops->map_pages(ops, iova_start + size, size, size, 1,
+				   IOMMU_READ, GFP_KERNEL, &mapped))
 			return __FAIL(ops);
 
 		if (ops->iova_to_phys(ops, iova_start + size + 42)
···
 	for_each_set_bit(i, &cfg.pgsize_bitmap, BITS_PER_LONG) {
 		size = 1UL << i;
 
-		if (ops->unmap(ops, iova, size, NULL) != size)
+		if (ops->unmap_pages(ops, iova, size, 1, NULL) != size)
 			return __FAIL(ops);
 
 		if (ops->iova_to_phys(ops, iova + 42))
 			return __FAIL(ops);
 
 		/* Remap full block */
-		if (ops->map(ops, iova, iova, size, IOMMU_WRITE, GFP_KERNEL))
+		if (ops->map_pages(ops, iova, iova, size, 1, IOMMU_WRITE,
+				   GFP_KERNEL, &mapped))
 			return __FAIL(ops);
 
 		if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
+15 -27
drivers/iommu/io-pgtable-arm.c
···
 	max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start;
 	num_entries = min_t(int, pgcount, max_entries);
 	ret = arm_lpae_init_pte(data, iova, paddr, prot, lvl, num_entries, ptep);
-	if (!ret && mapped)
+	if (!ret)
 		*mapped += num_entries * size;
 
 	return ret;
···
 	wmb();
 
 	return ret;
-}
-
-static int arm_lpae_map(struct io_pgtable_ops *ops, unsigned long iova,
-			phys_addr_t paddr, size_t size, int iommu_prot, gfp_t gfp)
-{
-	return arm_lpae_map_pages(ops, iova, paddr, size, 1, iommu_prot, gfp,
-				  NULL);
 }
 
 static void __arm_lpae_free_pgtable(struct arm_lpae_io_pgtable *data, int lvl,
···
 				data->start_level, ptep);
 }
 
-static size_t arm_lpae_unmap(struct io_pgtable_ops *ops, unsigned long iova,
-			     size_t size, struct iommu_iotlb_gather *gather)
-{
-	return arm_lpae_unmap_pages(ops, iova, size, 1, gather);
-}
-
 static phys_addr_t arm_lpae_iova_to_phys(struct io_pgtable_ops *ops,
 					 unsigned long iova)
 {
···
 	data->pgd_bits = va_bits - (data->bits_per_level * (levels - 1));
 
 	data->iop.ops = (struct io_pgtable_ops) {
-		.map		= arm_lpae_map,
 		.map_pages	= arm_lpae_map_pages,
-		.unmap		= arm_lpae_unmap,
 		.unmap_pages	= arm_lpae_unmap_pages,
 		.iova_to_phys	= arm_lpae_iova_to_phys,
 	};
···
 
 	int i, j;
 	unsigned long iova;
-	size_t size;
+	size_t size, mapped;
 	struct io_pgtable_ops *ops;
 
 	selftest_running = true;
···
 		for_each_set_bit(j, &cfg->pgsize_bitmap, BITS_PER_LONG) {
 			size = 1UL << j;
 
-			if (ops->map(ops, iova, iova, size, IOMMU_READ |
-							    IOMMU_WRITE |
-							    IOMMU_NOEXEC |
-							    IOMMU_CACHE, GFP_KERNEL))
+			if (ops->map_pages(ops, iova, iova, size, 1,
+					   IOMMU_READ | IOMMU_WRITE |
+					   IOMMU_NOEXEC | IOMMU_CACHE,
+					   GFP_KERNEL, &mapped))
 				return __FAIL(ops, i);
 
 			/* Overlapping mappings */
-			if (!ops->map(ops, iova, iova + size, size,
-				      IOMMU_READ | IOMMU_NOEXEC, GFP_KERNEL))
+			if (!ops->map_pages(ops, iova, iova + size, size, 1,
+					    IOMMU_READ | IOMMU_NOEXEC,
+					    GFP_KERNEL, &mapped))
 				return __FAIL(ops, i);
 
 			if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
···
 
 		/* Partial unmap */
 		size = 1UL << __ffs(cfg->pgsize_bitmap);
-		if (ops->unmap(ops, SZ_1G + size, size, NULL) != size)
+		if (ops->unmap_pages(ops, SZ_1G + size, size, 1, NULL) != size)
 			return __FAIL(ops, i);
 
 		/* Remap of partial unmap */
-		if (ops->map(ops, SZ_1G + size, size, size, IOMMU_READ, GFP_KERNEL))
+		if (ops->map_pages(ops, SZ_1G + size, size, size, 1,
+				   IOMMU_READ, GFP_KERNEL, &mapped))
 			return __FAIL(ops, i);
 
 		if (ops->iova_to_phys(ops, SZ_1G + size + 42) != (size + 42))
···
 		for_each_set_bit(j, &cfg->pgsize_bitmap, BITS_PER_LONG) {
 			size = 1UL << j;
 
-			if (ops->unmap(ops, iova, size, NULL) != size)
+			if (ops->unmap_pages(ops, iova, size, 1, NULL) != size)
 				return __FAIL(ops, i);
 
 			if (ops->iova_to_phys(ops, iova + 42))
 				return __FAIL(ops, i);
 
 			/* Remap full block */
-			if (ops->map(ops, iova, iova, size, IOMMU_WRITE, GFP_KERNEL))
+			if (ops->map_pages(ops, iova, iova, size, 1,
+					   IOMMU_WRITE, GFP_KERNEL, &mapped))
 				return __FAIL(ops, i);
 
 			if (ops->iova_to_phys(ops, iova + 42) != (iova + 42))
+22 -6
drivers/iommu/iommu.c
···
 	const struct iommu_ops *ops = dev->bus->iommu_ops;
 	struct iommu_device *iommu_dev;
 	struct iommu_group *group;
+	static DEFINE_MUTEX(iommu_probe_device_lock);
 	int ret;
 
 	if (!ops)
 		return -ENODEV;
-
-	if (!dev_iommu_get(dev))
-		return -ENOMEM;
+	/*
+	 * Serialise to avoid races between IOMMU drivers registering in
+	 * parallel and/or the "replay" calls from ACPI/OF code via client
+	 * driver probe. Once the latter have been cleaned up we should
+	 * probably be able to use device_lock() here to minimise the scope,
+	 * but for now enforcing a simple global ordering is fine.
+	 */
+	mutex_lock(&iommu_probe_device_lock);
+	if (!dev_iommu_get(dev)) {
+		ret = -ENOMEM;
+		goto err_unlock;
+	}
 
 	if (!try_module_get(ops->owner)) {
 		ret = -EINVAL;
···
 		ret = PTR_ERR(group);
 		goto out_release;
 	}
-	iommu_group_put(group);
 
+	mutex_lock(&group->mutex);
 	if (group_list && !group->default_domain && list_empty(&group->entry))
 		list_add_tail(&group->entry, group_list);
+	mutex_unlock(&group->mutex);
+	iommu_group_put(group);
 
+	mutex_unlock(&iommu_probe_device_lock);
 	iommu_device_link(iommu_dev, dev);
 
 	return 0;
···
 
 err_free:
 	dev_iommu_free(dev);
+
+err_unlock:
+	mutex_unlock(&iommu_probe_device_lock);
 
 	return ret;
 }
···
 		return ret;
 
 	list_for_each_entry_safe(group, next, &group_list, entry) {
+		mutex_lock(&group->mutex);
+
 		/* Remove item from the list */
 		list_del_init(&group->entry);
-
-		mutex_lock(&group->mutex);
 
 		/* Try to allocate default domain */
 		probe_alloc_default_domain(bus, group);
+9 -9
drivers/iommu/ipmmu-vmsa.c
···
 }

 static int ipmmu_map(struct iommu_domain *io_domain, unsigned long iova,
-		     phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+		     phys_addr_t paddr, size_t pgsize, size_t pgcount,
+		     int prot, gfp_t gfp, size_t *mapped)
 {
 	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);

-	if (!domain)
-		return -ENODEV;
-
-	return domain->iop->map(domain->iop, iova, paddr, size, prot, gfp);
+	return domain->iop->map_pages(domain->iop, iova, paddr, pgsize, pgcount,
+				      prot, gfp, mapped);
 }

 static size_t ipmmu_unmap(struct iommu_domain *io_domain, unsigned long iova,
-			  size_t size, struct iommu_iotlb_gather *gather)
+			  size_t pgsize, size_t pgcount,
+			  struct iommu_iotlb_gather *gather)
 {
 	struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);

-	return domain->iop->unmap(domain->iop, iova, size, gather);
+	return domain->iop->unmap_pages(domain->iop, iova, pgsize, pgcount, gather);
 }

 static void ipmmu_flush_iotlb_all(struct iommu_domain *io_domain)
···
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev	= ipmmu_attach_device,
 		.detach_dev	= ipmmu_detach_device,
-		.map		= ipmmu_map,
-		.unmap		= ipmmu_unmap,
+		.map_pages	= ipmmu_map,
+		.unmap_pages	= ipmmu_unmap,
 		.flush_iotlb_all = ipmmu_flush_iotlb_all,
 		.iotlb_sync	= ipmmu_iotlb_sync,
 		.iova_to_phys	= ipmmu_iova_to_phys,
+11 -7
drivers/iommu/msm_iommu.c
···
 }

 static int msm_iommu_map(struct iommu_domain *domain, unsigned long iova,
-			 phys_addr_t pa, size_t len, int prot, gfp_t gfp)
+			 phys_addr_t pa, size_t pgsize, size_t pgcount,
+			 int prot, gfp_t gfp, size_t *mapped)
 {
 	struct msm_priv *priv = to_msm_priv(domain);
 	unsigned long flags;
 	int ret;

 	spin_lock_irqsave(&priv->pgtlock, flags);
-	ret = priv->iop->map(priv->iop, iova, pa, len, prot, GFP_ATOMIC);
+	ret = priv->iop->map_pages(priv->iop, iova, pa, pgsize, pgcount, prot,
+				   GFP_ATOMIC, mapped);
 	spin_unlock_irqrestore(&priv->pgtlock, flags);

 	return ret;
···
 }

 static size_t msm_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
-			      size_t len, struct iommu_iotlb_gather *gather)
+			      size_t pgsize, size_t pgcount,
+			      struct iommu_iotlb_gather *gather)
 {
 	struct msm_priv *priv = to_msm_priv(domain);
 	unsigned long flags;
+	size_t ret;

 	spin_lock_irqsave(&priv->pgtlock, flags);
-	len = priv->iop->unmap(priv->iop, iova, len, gather);
+	ret = priv->iop->unmap_pages(priv->iop, iova, pgsize, pgcount, gather);
 	spin_unlock_irqrestore(&priv->pgtlock, flags);

-	return len;
+	return ret;
 }

 static phys_addr_t msm_iommu_iova_to_phys(struct iommu_domain *domain,
···
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev = msm_iommu_attach_dev,
 		.detach_dev = msm_iommu_detach_dev,
-		.map = msm_iommu_map,
-		.unmap = msm_iommu_unmap,
+		.map_pages = msm_iommu_map,
+		.unmap_pages = msm_iommu_unmap,
 		/*
 		 * Nothing is needed here, the barrier to guarantee
 		 * completion of the tlb sync operation is implicitly
+110 -39
drivers/iommu/mtk_iommu.c
···
 #define F_MMU_INT_ID_SUB_COMM_ID(a)	(((a) >> 7) & 0x3)
 #define F_MMU_INT_ID_COMM_ID_EXT(a)	(((a) >> 10) & 0x7)
 #define F_MMU_INT_ID_SUB_COMM_ID_EXT(a)	(((a) >> 7) & 0x7)
+/* Macro for 5 bits length port ID field (default) */
 #define F_MMU_INT_ID_LARB_ID(a)		(((a) >> 7) & 0x7)
 #define F_MMU_INT_ID_PORT_ID(a)		(((a) >> 2) & 0x1f)
+/* Macro for 6 bits length port ID field */
+#define F_MMU_INT_ID_LARB_ID_WID_6(a)	(((a) >> 8) & 0x7)
+#define F_MMU_INT_ID_PORT_ID_WID_6(a)	(((a) >> 2) & 0x3f)

 #define MTK_PROTECT_PA_ALIGN		256
 #define MTK_IOMMU_BANK_SZ		0x1000
···
 #define IFA_IOMMU_PCIE_SUPPORT		BIT(16)
 #define PGTABLE_PA_35_EN		BIT(17)
 #define TF_PORT_TO_ADDR_MT8173		BIT(18)
+#define INT_ID_PORT_WIDTH_6		BIT(19)

 #define MTK_IOMMU_HAS_FLAG_MASK(pdata, _x, mask)	\
 	((((pdata)->flags) & (mask)) == (_x))
···
 	M4U_MT8186,
 	M4U_MT8192,
 	M4U_MT8195,
+	M4U_MT8365,
 };

 struct mtk_iommu_iova_region {
···
 	struct device		*smicomm_dev;

 	struct mtk_iommu_bank_data	*bank;
-
-	struct dma_iommu_mapping	*mapping; /* For mtk_iommu_v1.c */
 	struct regmap		*pericfg;
-
 	struct mutex		mutex; /* Protect m4u_group/m4u_dom above */

 	/*
···
 		fault_pa |= (u64)pa34_32 << 32;

 	if (MTK_IOMMU_IS_TYPE(plat_data, MTK_IOMMU_TYPE_MM)) {
-		fault_port = F_MMU_INT_ID_PORT_ID(regval);
 		if (MTK_IOMMU_HAS_FLAG(plat_data, HAS_SUB_COMM_2BITS)) {
 			fault_larb = F_MMU_INT_ID_COMM_ID(regval);
 			sub_comm = F_MMU_INT_ID_SUB_COMM_ID(regval);
+			fault_port = F_MMU_INT_ID_PORT_ID(regval);
 		} else if (MTK_IOMMU_HAS_FLAG(plat_data, HAS_SUB_COMM_3BITS)) {
 			fault_larb = F_MMU_INT_ID_COMM_ID_EXT(regval);
 			sub_comm = F_MMU_INT_ID_SUB_COMM_ID_EXT(regval);
+			fault_port = F_MMU_INT_ID_PORT_ID(regval);
+		} else if (MTK_IOMMU_HAS_FLAG(plat_data, INT_ID_PORT_WIDTH_6)) {
+			fault_port = F_MMU_INT_ID_PORT_ID_WID_6(regval);
+			fault_larb = F_MMU_INT_ID_LARB_ID_WID_6(regval);
 		} else {
+			fault_port = F_MMU_INT_ID_PORT_ID(regval);
 			fault_larb = F_MMU_INT_ID_LARB_ID(regval);
 		}
 		fault_larb = data->plat_data->larbid_remap[fault_larb][sub_comm];
 	}

-	if (report_iommu_fault(&dom->domain, bank->parent_dev, fault_iova,
+	if (!dom || report_iommu_fault(&dom->domain, bank->parent_dev, fault_iova,
 			       write ? IOMMU_FAULT_WRITE : IOMMU_FAULT_READ)) {
 		dev_err_ratelimited(
 			bank->parent_dev,
···
 }

 static int mtk_iommu_map(struct iommu_domain *domain, unsigned long iova,
-			 phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+			 phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			 int prot, gfp_t gfp, size_t *mapped)
 {
 	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
···
 		paddr |= BIT_ULL(32);

 	/* Synchronize with the tlb_lock */
-	return dom->iop->map(dom->iop, iova, paddr, size, prot, gfp);
+	return dom->iop->map_pages(dom->iop, iova, paddr, pgsize, pgcount, prot, gfp, mapped);
 }

 static size_t mtk_iommu_unmap(struct iommu_domain *domain,
-			      unsigned long iova, size_t size,
+			      unsigned long iova, size_t pgsize, size_t pgcount,
 			      struct iommu_iotlb_gather *gather)
 {
 	struct mtk_iommu_domain *dom = to_mtk_domain(domain);

-	iommu_iotlb_gather_add_range(gather, iova, size);
-	return dom->iop->unmap(dom->iop, iova, size, gather);
+	iommu_iotlb_gather_add_range(gather, iova, pgsize * pgcount);
+	return dom->iop->unmap_pages(dom->iop, iova, pgsize, pgcount, gather);
 }

 static void mtk_iommu_flush_iotlb_all(struct iommu_domain *domain)
···
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev	= mtk_iommu_attach_device,
 		.detach_dev	= mtk_iommu_detach_device,
-		.map		= mtk_iommu_map,
-		.unmap		= mtk_iommu_unmap,
+		.map_pages	= mtk_iommu_map,
+		.unmap_pages	= mtk_iommu_unmap,
 		.flush_iotlb_all = mtk_iommu_flush_iotlb_all,
 		.iotlb_sync	= mtk_iommu_iotlb_sync,
 		.iotlb_sync_map	= mtk_iommu_sync_map,
···
 static int mtk_iommu_mm_dts_parse(struct device *dev, struct component_match **match,
 				  struct mtk_iommu_data *data)
 {
-	struct device_node *larbnode, *smicomm_node, *smi_subcomm_node;
-	struct platform_device *plarbdev;
+	struct device_node *larbnode, *frst_avail_smicomm_node = NULL;
+	struct platform_device *plarbdev, *pcommdev;
 	struct device_link *link;
 	int i, larb_nr, ret;

 	larb_nr = of_count_phandle_with_args(dev->of_node, "mediatek,larbs", NULL);
 	if (larb_nr < 0)
 		return larb_nr;
+	if (larb_nr == 0 || larb_nr > MTK_LARB_NR_MAX)
+		return -EINVAL;

 	for (i = 0; i < larb_nr; i++) {
+		struct device_node *smicomm_node, *smi_subcomm_node;
 		u32 id;

 		larbnode = of_parse_phandle(dev->of_node, "mediatek,larbs", i);
-		if (!larbnode)
-			return -EINVAL;
+		if (!larbnode) {
+			ret = -EINVAL;
+			goto err_larbdev_put;
+		}

 		if (!of_device_is_available(larbnode)) {
 			of_node_put(larbnode);
···
 		ret = of_property_read_u32(larbnode, "mediatek,larb-id", &id);
 		if (ret)/* The id is consecutive if there is no this property */
 			id = i;
+		if (id >= MTK_LARB_NR_MAX) {
+			of_node_put(larbnode);
+			ret = -EINVAL;
+			goto err_larbdev_put;
+		}

 		plarbdev = of_find_device_by_node(larbnode);
+		of_node_put(larbnode);
 		if (!plarbdev) {
-			of_node_put(larbnode);
-			return -ENODEV;
+			ret = -ENODEV;
+			goto err_larbdev_put;
 		}
-		if (!plarbdev->dev.driver) {
-			of_node_put(larbnode);
-			return -EPROBE_DEFER;
+		if (data->larb_imu[id].dev) {
+			platform_device_put(plarbdev);
+			ret = -EEXIST;
+			goto err_larbdev_put;
 		}
 		data->larb_imu[id].dev = &plarbdev->dev;

-		component_match_add_release(dev, match, component_release_of,
-					    component_compare_of, larbnode);
+		if (!plarbdev->dev.driver) {
+			ret = -EPROBE_DEFER;
+			goto err_larbdev_put;
+		}
+
+		/* Get smi-(sub)-common dev from the last larb. */
+		smi_subcomm_node = of_parse_phandle(larbnode, "mediatek,smi", 0);
+		if (!smi_subcomm_node) {
+			ret = -EINVAL;
+			goto err_larbdev_put;
+		}
+
+		/*
+		 * It may have two level smi-common. the node is smi-sub-common if it
+		 * has a new mediatek,smi property. otherwise it is smi-commmon.
+		 */
+		smicomm_node = of_parse_phandle(smi_subcomm_node, "mediatek,smi", 0);
+		if (smicomm_node)
+			of_node_put(smi_subcomm_node);
+		else
+			smicomm_node = smi_subcomm_node;
+
+		/*
+		 * All the larbs that connect to one IOMMU must connect with the same
+		 * smi-common.
+		 */
+		if (!frst_avail_smicomm_node) {
+			frst_avail_smicomm_node = smicomm_node;
+		} else if (frst_avail_smicomm_node != smicomm_node) {
+			dev_err(dev, "mediatek,smi property is not right @larb%d.", id);
+			of_node_put(smicomm_node);
+			ret = -EINVAL;
+			goto err_larbdev_put;
+		} else {
+			of_node_put(smicomm_node);
+		}
+
+		component_match_add(dev, match, component_compare_dev, &plarbdev->dev);
+		platform_device_put(plarbdev);
 	}

-	/* Get smi-(sub)-common dev from the last larb. */
-	smi_subcomm_node = of_parse_phandle(larbnode, "mediatek,smi", 0);
-	if (!smi_subcomm_node)
+	if (!frst_avail_smicomm_node)
 		return -EINVAL;

-	/*
-	 * It may have two level smi-common. the node is smi-sub-common if it
-	 * has a new mediatek,smi property. otherwise it is smi-commmon.
-	 */
-	smicomm_node = of_parse_phandle(smi_subcomm_node, "mediatek,smi", 0);
-	if (smicomm_node)
-		of_node_put(smi_subcomm_node);
-	else
-		smicomm_node = smi_subcomm_node;
-
-	plarbdev = of_find_device_by_node(smicomm_node);
-	of_node_put(smicomm_node);
-	data->smicomm_dev = &plarbdev->dev;
+	pcommdev = of_find_device_by_node(frst_avail_smicomm_node);
+	of_node_put(frst_avail_smicomm_node);
+	if (!pcommdev)
+		return -ENODEV;
+	data->smicomm_dev = &pcommdev->dev;

 	link = device_link_add(data->smicomm_dev, dev,
 			       DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME);
+	platform_device_put(pcommdev);
 	if (!link) {
 		dev_err(dev, "Unable to link %s.\n", dev_name(data->smicomm_dev));
 		return -EINVAL;
 	}
 	return 0;
+
+err_larbdev_put:
+	for (i = MTK_LARB_NR_MAX - 1; i >= 0; i--) {
+		if (!data->larb_imu[i].dev)
+			continue;
+		put_device(data->larb_imu[i].dev);
+	}
+	return ret;
 }

 static int mtk_iommu_probe(struct platform_device *pdev)
···
 	banks_num = data->plat_data->banks_num;
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res)
+		return -EINVAL;
 	if (resource_size(res) < banks_num * MTK_IOMMU_BANK_SZ) {
 		dev_err(dev, "banknr %d. res %pR is not enough.\n", banks_num, res);
 		return -EINVAL;
···
 	{4, MTK_INVALID_LARBID, MTK_INVALID_LARBID, MTK_INVALID_LARBID, 6}},
 };

+static const struct mtk_iommu_plat_data mt8365_data = {
+	.m4u_plat	= M4U_MT8365,
+	.flags		= RESET_AXI | INT_ID_PORT_WIDTH_6,
+	.inv_sel_reg	= REG_MMU_INV_SEL_GEN1,
+	.banks_num	= 1,
+	.banks_enable	= {true},
+	.iova_region	= single_domain,
+	.iova_region_nr	= ARRAY_SIZE(single_domain),
+	.larbid_remap	= {{0}, {1}, {2}, {3}, {4}, {5}}, /* Linear mapping. */
+};
+
 static const struct of_device_id mtk_iommu_of_ids[] = {
 	{ .compatible = "mediatek,mt2712-m4u", .data = &mt2712_data},
 	{ .compatible = "mediatek,mt6779-m4u", .data = &mt6779_data},
···
 	{ .compatible = "mediatek,mt8195-iommu-infra", .data = &mt8195_data_infra},
 	{ .compatible = "mediatek,mt8195-iommu-vdo",   .data = &mt8195_data_vdo},
 	{ .compatible = "mediatek,mt8195-iommu-vpp",   .data = &mt8195_data_vpp},
+	{ .compatible = "mediatek,mt8365-m4u", .data = &mt8365_data},
 	{}
 };
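The `err_larbdev_put` path added to `mtk_iommu_mm_dts_parse()` above drops every device reference taken so far, scanning the whole table backwards because larb IDs need not be consecutive. A small sketch of that cleanup idiom, with illustrative `fake_dev`/`slots` types standing in for the driver's `larb_imu` array (the kernel version calls `put_device()` and does not clear the slots):

```c
#include <stddef.h>

#define NR_SLOTS 8

struct fake_dev { int refcount; };

static struct fake_dev *slots[NR_SLOTS];

static void put_dev(struct fake_dev *d) { d->refcount--; }

/*
 * Error path mirroring err_larbdev_put: release every reference
 * recorded so far, skipping empty slots, then propagate the error.
 * Clearing the slot afterwards is a convenience of this sketch.
 */
static int fail_and_release(int err)
{
	for (int i = NR_SLOTS - 1; i >= 0; i--) {
		if (!slots[i])
			continue;
		put_dev(slots[i]);
		slots[i] = NULL;
	}
	return err;
}
```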
+14 -16
drivers/iommu/mtk_iommu_v1.c
···
 }

 static int mtk_iommu_v1_map(struct iommu_domain *domain, unsigned long iova,
-			    phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+			    phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			    int prot, gfp_t gfp, size_t *mapped)
 {
 	struct mtk_iommu_v1_domain *dom = to_mtk_domain(domain);
-	unsigned int page_num = size >> MT2701_IOMMU_PAGE_SHIFT;
 	unsigned long flags;
 	unsigned int i;
 	u32 *pgt_base_iova = dom->pgt_va + (iova >> MT2701_IOMMU_PAGE_SHIFT);
 	u32 pabase = (u32)paddr;
-	int map_size = 0;

 	spin_lock_irqsave(&dom->pgtlock, flags);
-	for (i = 0; i < page_num; i++) {
-		if (pgt_base_iova[i]) {
-			memset(pgt_base_iova, 0, i * sizeof(u32));
+	for (i = 0; i < pgcount; i++) {
+		if (pgt_base_iova[i])
 			break;
-		}
 		pgt_base_iova[i] = pabase | F_DESC_VALID | F_DESC_NONSEC;
 		pabase += MT2701_IOMMU_PAGE_SIZE;
-		map_size += MT2701_IOMMU_PAGE_SIZE;
 	}

 	spin_unlock_irqrestore(&dom->pgtlock, flags);

-	mtk_iommu_v1_tlb_flush_range(dom->data, iova, size);
+	*mapped = i * MT2701_IOMMU_PAGE_SIZE;
+	mtk_iommu_v1_tlb_flush_range(dom->data, iova, *mapped);

-	return map_size == size ? 0 : -EEXIST;
+	return i == pgcount ? 0 : -EEXIST;
 }

 static size_t mtk_iommu_v1_unmap(struct iommu_domain *domain, unsigned long iova,
-				 size_t size, struct iommu_iotlb_gather *gather)
+				 size_t pgsize, size_t pgcount,
+				 struct iommu_iotlb_gather *gather)
 {
 	struct mtk_iommu_v1_domain *dom = to_mtk_domain(domain);
 	unsigned long flags;
 	u32 *pgt_base_iova = dom->pgt_va + (iova >> MT2701_IOMMU_PAGE_SHIFT);
-	unsigned int page_num = size >> MT2701_IOMMU_PAGE_SHIFT;
+	size_t size = pgcount * MT2701_IOMMU_PAGE_SIZE;

 	spin_lock_irqsave(&dom->pgtlock, flags);
-	memset(pgt_base_iova, 0, page_num * sizeof(u32));
+	memset(pgt_base_iova, 0, pgcount * sizeof(u32));
 	spin_unlock_irqrestore(&dom->pgtlock, flags);

 	mtk_iommu_v1_tlb_flush_range(dom->data, iova, size);
···
 	.release_device	= mtk_iommu_v1_release_device,
 	.def_domain_type = mtk_iommu_v1_def_domain_type,
 	.device_group	= generic_device_group,
-	.pgsize_bitmap	= ~0UL << MT2701_IOMMU_PAGE_SHIFT,
+	.pgsize_bitmap	= MT2701_IOMMU_PAGE_SIZE,
 	.owner          = THIS_MODULE,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev	= mtk_iommu_v1_attach_device,
 		.detach_dev	= mtk_iommu_v1_detach_device,
 		.map_pages	= mtk_iommu_v1_map,
 		.unmap_pages	= mtk_iommu_v1_unmap,
 		.iova_to_phys	= mtk_iommu_v1_iova_to_phys,
 		.free		= mtk_iommu_v1_domain_free,
 	}
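The reworked `mtk_iommu_v1_map()` above no longer rolls back already-written entries when it hits an occupied slot; it stops, reports the bytes actually mapped via `*mapped`, and returns -EEXIST for a partial map. A standalone sketch of that loop, with a plain array standing in for the driver's page table and hypothetical names:

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SZ		4096u
#define DESC_VALID	0x1u

/*
 * Fill page-table entries until either pgcount pages are mapped or an
 * already-valid entry is hit. On a partial map, keep the entries
 * written so far, report them via *mapped, and return -EEXIST (-17).
 */
static int map_pages(uint32_t *pgt, uint32_t pabase,
		     size_t pgcount, size_t *mapped)
{
	size_t i;

	for (i = 0; i < pgcount; i++) {
		if (pgt[i])	/* overlap: stop, keep earlier entries */
			break;
		pgt[i] = pabase | DESC_VALID;
		pabase += PAGE_SZ;
	}

	*mapped = i * PAGE_SZ;
	return i == pgcount ? 0 : -17 /* -EEXIST */;
}
```

Reporting the partial byte count matters because the core `iommu_map()` path uses `*mapped` to know how much to unmap on failure.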
+4 -6
drivers/iommu/rockchip-iommu.c
···
  * 11:9 - Page address bit 34:32
  *  8:4 - Page address bit 39:35
  *  3   - Security
- *  2   - Readable
- *  1   - Writable
+ *  2   - Writable
+ *  1   - Readable
  *  0   - 1 if Page @ Page address is valid
  */
-#define RK_PTE_PAGE_READABLE_V2      BIT(2)
-#define RK_PTE_PAGE_WRITABLE_V2      BIT(1)

 static u32 rk_mk_pte_v2(phys_addr_t page, int prot)
 {
 	u32 flags = 0;

-	flags |= (prot & IOMMU_READ) ? RK_PTE_PAGE_READABLE_V2 : 0;
-	flags |= (prot & IOMMU_WRITE) ? RK_PTE_PAGE_WRITABLE_V2 : 0;
+	flags |= (prot & IOMMU_READ) ? RK_PTE_PAGE_READABLE : 0;
+	flags |= (prot & IOMMU_WRITE) ? RK_PTE_PAGE_WRITABLE : 0;

 	return rk_mk_dte_v2(page) | flags;
 }
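The Rockchip fix above corrects swapped permission bits: in the v2 PTE format, as in v1, writable is bit 2 and readable is bit 1, so the buggy `_V2` macros are dropped in favour of the shared ones. A minimal sketch of just the flag encoding (the real `rk_mk_pte_v2()` also shuffles address bits, which is omitted here):

```c
#include <stdint.h>

/* IOMMU prot flags as defined by the core IOMMU API. */
#define IOMMU_READ	(1 << 0)
#define IOMMU_WRITE	(1 << 1)

/* Corrected PTE layout: bit 2 = writable, bit 1 = readable. */
#define RK_PTE_PAGE_WRITABLE	(1u << 2)
#define RK_PTE_PAGE_READABLE	(1u << 1)

/* Encode IOMMU prot flags into Rockchip PTE permission bits. */
static uint32_t mk_pte_flags_v2(int prot)
{
	uint32_t flags = 0;

	if (prot & IOMMU_READ)
		flags |= RK_PTE_PAGE_READABLE;
	if (prot & IOMMU_WRITE)
		flags |= RK_PTE_PAGE_WRITABLE;
	return flags;
}
```

With the old swapped macros, a read-only mapping would have come out hardware-writable and not readable, which is exactly the class of bug this hunk removes.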
+221 -162
drivers/iommu/s390-iommu.c
··· 10 10 #include <linux/iommu.h> 11 11 #include <linux/iommu-helper.h> 12 12 #include <linux/sizes.h> 13 + #include <linux/rculist.h> 14 + #include <linux/rcupdate.h> 13 15 #include <asm/pci_dma.h> 14 - 15 - /* 16 - * Physically contiguous memory regions can be mapped with 4 KiB alignment, 17 - * we allow all page sizes that are an order of 4KiB (no special large page 18 - * support so far). 19 - */ 20 - #define S390_IOMMU_PGSIZES (~0xFFFUL) 21 16 22 17 static const struct iommu_ops s390_iommu_ops; 23 18 ··· 20 25 struct iommu_domain domain; 21 26 struct list_head devices; 22 27 unsigned long *dma_table; 23 - spinlock_t dma_table_lock; 24 28 spinlock_t list_lock; 25 - }; 26 - 27 - struct s390_domain_device { 28 - struct list_head list; 29 - struct zpci_dev *zdev; 29 + struct rcu_head rcu; 30 30 }; 31 31 32 32 static struct s390_domain *to_s390_domain(struct iommu_domain *dom) ··· 57 67 kfree(s390_domain); 58 68 return NULL; 59 69 } 70 + s390_domain->domain.geometry.force_aperture = true; 71 + s390_domain->domain.geometry.aperture_start = 0; 72 + s390_domain->domain.geometry.aperture_end = ZPCI_TABLE_SIZE_RT - 1; 60 73 61 - spin_lock_init(&s390_domain->dma_table_lock); 62 74 spin_lock_init(&s390_domain->list_lock); 63 - INIT_LIST_HEAD(&s390_domain->devices); 75 + INIT_LIST_HEAD_RCU(&s390_domain->devices); 64 76 65 77 return &s390_domain->domain; 78 + } 79 + 80 + static void s390_iommu_rcu_free_domain(struct rcu_head *head) 81 + { 82 + struct s390_domain *s390_domain = container_of(head, struct s390_domain, rcu); 83 + 84 + dma_cleanup_tables(s390_domain->dma_table); 85 + kfree(s390_domain); 66 86 } 67 87 68 88 static void s390_domain_free(struct iommu_domain *domain) 69 89 { 70 90 struct s390_domain *s390_domain = to_s390_domain(domain); 71 91 72 - dma_cleanup_tables(s390_domain->dma_table); 73 - kfree(s390_domain); 92 + rcu_read_lock(); 93 + WARN_ON(!list_empty(&s390_domain->devices)); 94 + rcu_read_unlock(); 95 + 96 + call_rcu(&s390_domain->rcu, 
s390_iommu_rcu_free_domain); 97 + } 98 + 99 + static void __s390_iommu_detach_device(struct zpci_dev *zdev) 100 + { 101 + struct s390_domain *s390_domain = zdev->s390_domain; 102 + unsigned long flags; 103 + 104 + if (!s390_domain) 105 + return; 106 + 107 + spin_lock_irqsave(&s390_domain->list_lock, flags); 108 + list_del_rcu(&zdev->iommu_list); 109 + spin_unlock_irqrestore(&s390_domain->list_lock, flags); 110 + 111 + zpci_unregister_ioat(zdev, 0); 112 + zdev->s390_domain = NULL; 113 + zdev->dma_table = NULL; 74 114 } 75 115 76 116 static int s390_iommu_attach_device(struct iommu_domain *domain, ··· 108 88 { 109 89 struct s390_domain *s390_domain = to_s390_domain(domain); 110 90 struct zpci_dev *zdev = to_zpci_dev(dev); 111 - struct s390_domain_device *domain_device; 112 91 unsigned long flags; 113 - int cc, rc; 92 + u8 status; 93 + int cc; 114 94 115 95 if (!zdev) 116 96 return -ENODEV; 117 97 118 - domain_device = kzalloc(sizeof(*domain_device), GFP_KERNEL); 119 - if (!domain_device) 120 - return -ENOMEM; 121 - 122 - if (zdev->dma_table && !zdev->s390_domain) { 123 - cc = zpci_dma_exit_device(zdev); 124 - if (cc) { 125 - rc = -EIO; 126 - goto out_free; 127 - } 128 - } 98 + if (WARN_ON(domain->geometry.aperture_start > zdev->end_dma || 99 + domain->geometry.aperture_end < zdev->start_dma)) 100 + return -EINVAL; 129 101 130 102 if (zdev->s390_domain) 131 - zpci_unregister_ioat(zdev, 0); 103 + __s390_iommu_detach_device(zdev); 104 + else if (zdev->dma_table) 105 + zpci_dma_exit_device(zdev); 106 + 107 + cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, 108 + virt_to_phys(s390_domain->dma_table), &status); 109 + /* 110 + * If the device is undergoing error recovery the reset code 111 + * will re-establish the new domain. 
112 + */ 113 + if (cc && status != ZPCI_PCI_ST_FUNC_NOT_AVAIL) 114 + return -EIO; 115 + zdev->dma_table = s390_domain->dma_table; 132 116 133 117 zdev->dma_table = s390_domain->dma_table; 134 - cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, 135 - virt_to_phys(zdev->dma_table)); 136 - if (cc) { 137 - rc = -EIO; 138 - goto out_restore; 139 - } 118 + zdev->s390_domain = s390_domain; 140 119 141 120 spin_lock_irqsave(&s390_domain->list_lock, flags); 142 - /* First device defines the DMA range limits */ 143 - if (list_empty(&s390_domain->devices)) { 144 - domain->geometry.aperture_start = zdev->start_dma; 145 - domain->geometry.aperture_end = zdev->end_dma; 146 - domain->geometry.force_aperture = true; 147 - /* Allow only devices with identical DMA range limits */ 148 - } else if (domain->geometry.aperture_start != zdev->start_dma || 149 - domain->geometry.aperture_end != zdev->end_dma) { 150 - rc = -EINVAL; 151 - spin_unlock_irqrestore(&s390_domain->list_lock, flags); 152 - goto out_restore; 153 - } 154 - domain_device->zdev = zdev; 155 - zdev->s390_domain = s390_domain; 156 - list_add(&domain_device->list, &s390_domain->devices); 121 + list_add_rcu(&zdev->iommu_list, &s390_domain->devices); 157 122 spin_unlock_irqrestore(&s390_domain->list_lock, flags); 158 123 159 124 return 0; 160 - 161 - out_restore: 162 - if (!zdev->s390_domain) { 163 - zpci_dma_init_device(zdev); 164 - } else { 165 - zdev->dma_table = zdev->s390_domain->dma_table; 166 - zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma, 167 - virt_to_phys(zdev->dma_table)); 168 - } 169 - out_free: 170 - kfree(domain_device); 171 - 172 - return rc; 173 125 } 174 126 175 127 static void s390_iommu_detach_device(struct iommu_domain *domain, 176 128 struct device *dev) 177 129 { 178 - struct s390_domain *s390_domain = to_s390_domain(domain); 179 130 struct zpci_dev *zdev = to_zpci_dev(dev); 180 - struct s390_domain_device *domain_device, *tmp; 181 - unsigned long flags; 182 - int found = 0; 
183 131 184 - if (!zdev) 185 - return; 132 + WARN_ON(zdev->s390_domain != to_s390_domain(domain)); 186 133 187 - spin_lock_irqsave(&s390_domain->list_lock, flags); 188 - list_for_each_entry_safe(domain_device, tmp, &s390_domain->devices, 189 - list) { 190 - if (domain_device->zdev == zdev) { 191 - list_del(&domain_device->list); 192 - kfree(domain_device); 193 - found = 1; 194 - break; 195 - } 134 + __s390_iommu_detach_device(zdev); 135 + zpci_dma_init_device(zdev); 136 + } 137 + 138 + static void s390_iommu_get_resv_regions(struct device *dev, 139 + struct list_head *list) 140 + { 141 + struct zpci_dev *zdev = to_zpci_dev(dev); 142 + struct iommu_resv_region *region; 143 + 144 + if (zdev->start_dma) { 145 + region = iommu_alloc_resv_region(0, zdev->start_dma, 0, 146 + IOMMU_RESV_RESERVED, GFP_KERNEL); 147 + if (!region) 148 + return; 149 + list_add_tail(&region->list, list); 196 150 } 197 - spin_unlock_irqrestore(&s390_domain->list_lock, flags); 198 151 199 - if (found && (zdev->s390_domain == s390_domain)) { 200 - zdev->s390_domain = NULL; 201 - zpci_unregister_ioat(zdev, 0); 202 - zpci_dma_init_device(zdev); 152 + if (zdev->end_dma < ZPCI_TABLE_SIZE_RT - 1) { 153 + region = iommu_alloc_resv_region(zdev->end_dma + 1, 154 + ZPCI_TABLE_SIZE_RT - zdev->end_dma - 1, 155 + 0, IOMMU_RESV_RESERVED, GFP_KERNEL); 156 + if (!region) 157 + return; 158 + list_add_tail(&region->list, list); 203 159 } 204 160 } 205 161 ··· 188 192 189 193 zdev = to_zpci_dev(dev); 190 194 195 + if (zdev->start_dma > zdev->end_dma || 196 + zdev->start_dma > ZPCI_TABLE_SIZE_RT - 1) 197 + return ERR_PTR(-EINVAL); 198 + 199 + if (zdev->end_dma > ZPCI_TABLE_SIZE_RT - 1) 200 + zdev->end_dma = ZPCI_TABLE_SIZE_RT - 1; 201 + 191 202 return &zdev->iommu_dev; 192 203 } 193 204 194 205 static void s390_iommu_release_device(struct device *dev) 195 206 { 196 207 struct zpci_dev *zdev = to_zpci_dev(dev); 197 - struct iommu_domain *domain; 198 208 199 209 /* 200 - * This is a workaround for a scenario where 
the IOMMU API common code 201 - * "forgets" to call the detach_dev callback: After binding a device 202 - * to vfio-pci and completing the VFIO_SET_IOMMU ioctl (which triggers 203 - * the attach_dev), removing the device via 204 - * "echo 1 > /sys/bus/pci/devices/.../remove" won't trigger detach_dev, 205 - * only release_device will be called via the BUS_NOTIFY_REMOVED_DEVICE 206 - * notifier. 207 - * 208 - * So let's call detach_dev from here if it hasn't been called before. 210 + * release_device is expected to detach any domain currently attached 211 + * to the device, but keep it attached to other devices in the group. 209 212 */ 210 - if (zdev && zdev->s390_domain) { 211 - domain = iommu_get_domain_for_dev(dev); 212 - if (domain) 213 - s390_iommu_detach_device(domain, dev); 214 - } 213 + if (zdev) 214 + __s390_iommu_detach_device(zdev); 215 215 } 216 216 217 - static int s390_iommu_update_trans(struct s390_domain *s390_domain, 218 - phys_addr_t pa, dma_addr_t dma_addr, 219 - size_t size, int flags) 217 + static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain) 220 218 { 221 - struct s390_domain_device *domain_device; 219 + struct s390_domain *s390_domain = to_s390_domain(domain); 220 + struct zpci_dev *zdev; 221 + 222 + rcu_read_lock(); 223 + list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) { 224 + zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma, 225 + zdev->end_dma - zdev->start_dma + 1); 226 + } 227 + rcu_read_unlock(); 228 + } 229 + 230 + static void s390_iommu_iotlb_sync(struct iommu_domain *domain, 231 + struct iommu_iotlb_gather *gather) 232 + { 233 + struct s390_domain *s390_domain = to_s390_domain(domain); 234 + size_t size = gather->end - gather->start + 1; 235 + struct zpci_dev *zdev; 236 + 237 + /* If gather was never added to there is nothing to flush */ 238 + if (!gather->end) 239 + return; 240 + 241 + rcu_read_lock(); 242 + list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) { 243 + 
zpci_refresh_trans((u64)zdev->fh << 32, gather->start, 244 + size); 245 + } 246 + rcu_read_unlock(); 247 + } 248 + 249 + static void s390_iommu_iotlb_sync_map(struct iommu_domain *domain, 250 + unsigned long iova, size_t size) 251 + { 252 + struct s390_domain *s390_domain = to_s390_domain(domain); 253 + struct zpci_dev *zdev; 254 + 255 + rcu_read_lock(); 256 + list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) { 257 + if (!zdev->tlb_refresh) 258 + continue; 259 + zpci_refresh_trans((u64)zdev->fh << 32, 260 + iova, size); 261 + } 262 + rcu_read_unlock(); 263 + } 264 + 265 + static int s390_iommu_validate_trans(struct s390_domain *s390_domain, 266 + phys_addr_t pa, dma_addr_t dma_addr, 267 + unsigned long nr_pages, int flags) 268 + { 222 269 phys_addr_t page_addr = pa & PAGE_MASK; 223 - dma_addr_t start_dma_addr = dma_addr; 224 - unsigned long irq_flags, nr_pages, i; 225 270 unsigned long *entry; 226 - int rc = 0; 271 + unsigned long i; 272 + int rc; 227 273 228 - if (dma_addr < s390_domain->domain.geometry.aperture_start || 229 - dma_addr + size > s390_domain->domain.geometry.aperture_end) 230 - return -EINVAL; 231 - 232 - nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT; 233 - if (!nr_pages) 234 - return 0; 235 - 236 - spin_lock_irqsave(&s390_domain->dma_table_lock, irq_flags); 237 274 for (i = 0; i < nr_pages; i++) { 238 275 entry = dma_walk_cpu_trans(s390_domain->dma_table, dma_addr); 239 - if (!entry) { 276 + if (unlikely(!entry)) { 240 277 rc = -ENOMEM; 241 278 goto undo_cpu_trans; 242 279 } ··· 278 249 dma_addr += PAGE_SIZE; 279 250 } 280 251 281 - spin_lock(&s390_domain->list_lock); 282 - list_for_each_entry(domain_device, &s390_domain->devices, list) { 283 - rc = zpci_refresh_trans((u64) domain_device->zdev->fh << 32, 284 - start_dma_addr, nr_pages * PAGE_SIZE); 285 - if (rc) 286 - break; 287 - } 288 - spin_unlock(&s390_domain->list_lock); 252 + return 0; 289 253 290 254 undo_cpu_trans: 291 - if (rc && ((flags & ZPCI_PTE_VALID_MASK) == 
ZPCI_PTE_VALID)) {
-		flags = ZPCI_PTE_INVALID;
-		while (i-- > 0) {
-			page_addr -= PAGE_SIZE;
-			dma_addr -= PAGE_SIZE;
-			entry = dma_walk_cpu_trans(s390_domain->dma_table,
-						   dma_addr);
-			if (!entry)
-				break;
-			dma_update_cpu_trans(entry, page_addr, flags);
-		}
+		while (i-- > 0) {
+			dma_addr -= PAGE_SIZE;
+			entry = dma_walk_cpu_trans(s390_domain->dma_table,
+						   dma_addr);
+			if (!entry)
+				break;
+			dma_update_cpu_trans(entry, 0, ZPCI_PTE_INVALID);
+		}
 	}
-	spin_unlock_irqrestore(&s390_domain->dma_table_lock, irq_flags);
 
 	return rc;
 }
 
-static int s390_iommu_map(struct iommu_domain *domain, unsigned long iova,
-			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+static int s390_iommu_invalidate_trans(struct s390_domain *s390_domain,
+				       dma_addr_t dma_addr, unsigned long nr_pages)
+{
+	unsigned long *entry;
+	unsigned long i;
+	int rc = 0;
+
+	for (i = 0; i < nr_pages; i++) {
+		entry = dma_walk_cpu_trans(s390_domain->dma_table, dma_addr);
+		if (unlikely(!entry)) {
+			rc = -EINVAL;
+			break;
+		}
+		dma_update_cpu_trans(entry, 0, ZPCI_PTE_INVALID);
+		dma_addr += PAGE_SIZE;
+	}
+
+	return rc;
+}
+
+static int s390_iommu_map_pages(struct iommu_domain *domain,
+				unsigned long iova, phys_addr_t paddr,
+				size_t pgsize, size_t pgcount,
+				int prot, gfp_t gfp, size_t *mapped)
 {
 	struct s390_domain *s390_domain = to_s390_domain(domain);
+	size_t size = pgcount << __ffs(pgsize);
 	int flags = ZPCI_PTE_VALID, rc = 0;
+
+	if (pgsize != SZ_4K)
+		return -EINVAL;
+
+	if (iova < s390_domain->domain.geometry.aperture_start ||
+	    (iova + size - 1) > s390_domain->domain.geometry.aperture_end)
+		return -EINVAL;
+
+	if (!IS_ALIGNED(iova | paddr, pgsize))
+		return -EINVAL;
 
 	if (!(prot & IOMMU_READ))
 		return -EINVAL;
···
 	if (!(prot & IOMMU_WRITE))
 		flags |= ZPCI_TABLE_PROTECTED;
 
-	rc = s390_iommu_update_trans(s390_domain, paddr, iova,
-				     size, flags);
+	rc = s390_iommu_validate_trans(s390_domain, paddr, iova,
+				       pgcount, flags);
+	if (!rc)
+		*mapped = size;
 
 	return rc;
 }
···
 			       dma_addr_t iova)
 {
 	struct s390_domain *s390_domain = to_s390_domain(domain);
-	unsigned long *sto, *pto, *rto, flags;
+	unsigned long *rto, *sto, *pto;
+	unsigned long ste, pte, rte;
 	unsigned int rtx, sx, px;
 	phys_addr_t phys = 0;
···
 	px = calc_px(iova);
 	rto = s390_domain->dma_table;
 
-	spin_lock_irqsave(&s390_domain->dma_table_lock, flags);
-	if (rto && reg_entry_isvalid(rto[rtx])) {
-		sto = get_rt_sto(rto[rtx]);
-		if (sto && reg_entry_isvalid(sto[sx])) {
-			pto = get_st_pto(sto[sx]);
-			if (pto && pt_entry_isvalid(pto[px]))
-				phys = pto[px] & ZPCI_PTE_ADDR_MASK;
+	rte = READ_ONCE(rto[rtx]);
+	if (reg_entry_isvalid(rte)) {
+		sto = get_rt_sto(rte);
+		ste = READ_ONCE(sto[sx]);
+		if (reg_entry_isvalid(ste)) {
+			pto = get_st_pto(ste);
+			pte = READ_ONCE(pto[px]);
+			if (pt_entry_isvalid(pte))
+				phys = pte & ZPCI_PTE_ADDR_MASK;
 		}
 	}
-	spin_unlock_irqrestore(&s390_domain->dma_table_lock, flags);
 
 	return phys;
 }
 
-static size_t s390_iommu_unmap(struct iommu_domain *domain,
-			       unsigned long iova, size_t size,
-			       struct iommu_iotlb_gather *gather)
+static size_t s390_iommu_unmap_pages(struct iommu_domain *domain,
+				     unsigned long iova,
+				     size_t pgsize, size_t pgcount,
+				     struct iommu_iotlb_gather *gather)
 {
 	struct s390_domain *s390_domain = to_s390_domain(domain);
-	int flags = ZPCI_PTE_INVALID;
-	phys_addr_t paddr;
+	size_t size = pgcount << __ffs(pgsize);
 	int rc;
 
-	paddr = s390_iommu_iova_to_phys(domain, iova);
-	if (!paddr)
+	if (WARN_ON(iova < s390_domain->domain.geometry.aperture_start ||
+	    (iova + size - 1) > s390_domain->domain.geometry.aperture_end))
 		return 0;
 
-	rc = s390_iommu_update_trans(s390_domain, paddr, iova,
-				     size, flags);
+	rc = s390_iommu_invalidate_trans(s390_domain, iova, pgcount);
 	if (rc)
 		return 0;
+
+	iommu_iotlb_gather_add_range(gather, iova, size);
 
 	return size;
 }
···
 	.probe_device = s390_iommu_probe_device,
 	.release_device = s390_iommu_release_device,
 	.device_group = generic_device_group,
-	.pgsize_bitmap = S390_IOMMU_PGSIZES,
+	.pgsize_bitmap = SZ_4K,
+	.get_resv_regions = s390_iommu_get_resv_regions,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev = s390_iommu_attach_device,
 		.detach_dev = s390_iommu_detach_device,
-		.map = s390_iommu_map,
-		.unmap = s390_iommu_unmap,
+		.map_pages = s390_iommu_map_pages,
+		.unmap_pages = s390_iommu_unmap_pages,
+		.flush_iotlb_all = s390_iommu_flush_iotlb_all,
+		.iotlb_sync = s390_iommu_iotlb_sync,
+		.iotlb_sync_map = s390_iommu_iotlb_sync_map,
 		.iova_to_phys = s390_iommu_iova_to_phys,
 		.free = s390_domain_free,
 	}
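The map_pages() conversions in this series all compute the request size the same way — `size = pgcount << __ffs(pgsize)` — and bounds-check the whole range against the domain aperture before touching the page table. A minimal userspace sketch of that arithmetic (kernel helpers `__ffs`, `SZ_4K`, and the geometry check are re-implemented here for illustration; names are not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

#define SZ_4K 4096UL

/* __ffs(x): index of the least significant set bit (kernel semantics,
 * x must be non-zero). */
static unsigned long my_ffs(unsigned long x)
{
	unsigned long i = 0;

	while (!(x & 1UL)) {
		x >>= 1;
		i++;
	}
	return i;
}

/* Total byte size of a map_pages() request: pgcount pages of pgsize each. */
static size_t map_pages_size(size_t pgsize, size_t pgcount)
{
	return pgcount << my_ffs(pgsize);
}

/* Aperture check in the style of s390_iommu_map_pages(): the whole range
 * [iova, iova + size - 1] must fall inside the domain geometry. */
static int in_aperture(unsigned long iova, size_t size,
		       unsigned long start, unsigned long end)
{
	return iova >= start && (iova + size - 1) <= end;
}
```

Since `pgsize` is always a power of two taken from `pgsize_bitmap`, the shift by `__ffs(pgsize)` is equivalent to multiplying by the page size without a division or multiply in the driver hot path.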
+14 -11
drivers/iommu/sprd-iommu.c
···
 }
 
 static int sprd_iommu_map(struct iommu_domain *domain, unsigned long iova,
-			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+			  phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			  int prot, gfp_t gfp, size_t *mapped)
 {
 	struct sprd_iommu_domain *dom = to_sprd_domain(domain);
-	unsigned int page_num = size >> SPRD_IOMMU_PAGE_SHIFT;
+	size_t size = pgcount * SPRD_IOMMU_PAGE_SIZE;
 	unsigned long flags;
 	unsigned int i;
 	u32 *pgt_base_iova;
···
 	pgt_base_iova = dom->pgt_va + ((iova - start) >> SPRD_IOMMU_PAGE_SHIFT);
 
 	spin_lock_irqsave(&dom->pgtlock, flags);
-	for (i = 0; i < page_num; i++) {
+	for (i = 0; i < pgcount; i++) {
 		pgt_base_iova[i] = pabase >> SPRD_IOMMU_PAGE_SHIFT;
 		pabase += SPRD_IOMMU_PAGE_SIZE;
 	}
 	spin_unlock_irqrestore(&dom->pgtlock, flags);
 
+	*mapped = size;
 	return 0;
 }
 
 static size_t sprd_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
-			       size_t size, struct iommu_iotlb_gather *iotlb_gather)
+			       size_t pgsize, size_t pgcount,
+			       struct iommu_iotlb_gather *iotlb_gather)
 {
 	struct sprd_iommu_domain *dom = to_sprd_domain(domain);
 	unsigned long flags;
 	u32 *pgt_base_iova;
-	unsigned int page_num = size >> SPRD_IOMMU_PAGE_SHIFT;
+	size_t size = pgcount * SPRD_IOMMU_PAGE_SIZE;
 	unsigned long start = domain->geometry.aperture_start;
 	unsigned long end = domain->geometry.aperture_end;
 
 	if (iova < start || (iova + size) > (end + 1))
-		return -EINVAL;
+		return 0;
 
 	pgt_base_iova = dom->pgt_va + ((iova - start) >> SPRD_IOMMU_PAGE_SHIFT);
 
 	spin_lock_irqsave(&dom->pgtlock, flags);
-	memset(pgt_base_iova, 0, page_num * sizeof(u32));
+	memset(pgt_base_iova, 0, pgcount * sizeof(u32));
 	spin_unlock_irqrestore(&dom->pgtlock, flags);
 
-	return 0;
+	return size;
 }
 
 static void sprd_iommu_sync_map(struct iommu_domain *domain,
···
 	.probe_device = sprd_iommu_probe_device,
 	.device_group = sprd_iommu_device_group,
 	.of_xlate = sprd_iommu_of_xlate,
-	.pgsize_bitmap = ~0UL << SPRD_IOMMU_PAGE_SHIFT,
+	.pgsize_bitmap = SPRD_IOMMU_PAGE_SIZE,
 	.owner = THIS_MODULE,
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev = sprd_iommu_attach_device,
 		.detach_dev = sprd_iommu_detach_device,
-		.map = sprd_iommu_map,
-		.unmap = sprd_iommu_unmap,
+		.map_pages = sprd_iommu_map,
+		.unmap_pages = sprd_iommu_unmap,
 		.iotlb_sync_map = sprd_iommu_sync_map,
 		.iotlb_sync = sprd_iommu_sync,
 		.iova_to_phys = sprd_iommu_iova_to_phys,
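The sprd hunk also shows the return-value contract of the new hooks: map_pages() stores the byte count through `*mapped` and returns 0 on success, while unmap_pages() returns the number of bytes actually unmapped, with 0 — not `-EINVAL` — signalling failure (a size_t cannot carry a negative errno, which is what the `return -EINVAL` → `return 0` change above fixes). A toy model of sprd's flat one-u32-per-page table, with made-up names, just to exercise that contract:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define TOY_PAGE_SHIFT 12
#define TOY_PAGE_SIZE  (1UL << TOY_PAGE_SHIFT)
#define TOY_NR_PAGES   16

/* Flat page table: one PFN entry per 4K page, as in sprd-iommu. */
static uint32_t toy_pgt[TOY_NR_PAGES];

static int toy_map_pages(unsigned long iova, uint64_t paddr,
			 size_t pgcount, size_t *mapped)
{
	size_t i, idx = iova >> TOY_PAGE_SHIFT;

	if (idx + pgcount > TOY_NR_PAGES)
		return -1;			/* outside the aperture */

	for (i = 0; i < pgcount; i++)
		toy_pgt[idx + i] = (uint32_t)((paddr >> TOY_PAGE_SHIFT) + i);

	*mapped = pgcount * TOY_PAGE_SIZE;	/* report bytes mapped */
	return 0;
}

static size_t toy_unmap_pages(unsigned long iova, size_t pgcount)
{
	size_t idx = iova >> TOY_PAGE_SHIFT;

	if (idx + pgcount > TOY_NR_PAGES)
		return 0;	/* error is "0 bytes unmapped", never -EINVAL */

	memset(&toy_pgt[idx], 0, pgcount * sizeof(uint32_t));
	return pgcount * TOY_PAGE_SIZE;
}
```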
+83 -6
drivers/iommu/sun50i-iommu.c
···
 #include <linux/types.h>
 
 #define IOMMU_RESET_REG			0x010
+#define IOMMU_RESET_RELEASE_ALL		0xffffffff
 #define IOMMU_ENABLE_REG		0x020
 #define IOMMU_ENABLE_ENABLE		BIT(0)
···
 
 #define NUM_PT_ENTRIES			256
 #define PT_SIZE				(NUM_PT_ENTRIES * PT_ENTRY_SIZE)
+
+#define SPAGE_SIZE			4096
 
 struct sun50i_iommu {
 	struct iommu_device iommu;
···
 	enum sun50i_iommu_aci aci;
 	u32 flags = 0;
 
-	if (prot & (IOMMU_READ | IOMMU_WRITE))
+	if ((prot & (IOMMU_READ | IOMMU_WRITE)) == (IOMMU_READ | IOMMU_WRITE))
 		aci = SUN50I_IOMMU_ACI_RD_WR;
 	else if (prot & IOMMU_READ)
 		aci = SUN50I_IOMMU_ACI_RD;
···
 	size_t size = count * PT_ENTRY_SIZE;
 
 	dma_sync_single_for_device(iommu->dev, dma, size, DMA_TO_DEVICE);
+}
+
+static void sun50i_iommu_zap_iova(struct sun50i_iommu *iommu,
+				  unsigned long iova)
+{
+	u32 reg;
+	int ret;
+
+	iommu_write(iommu, IOMMU_TLB_IVLD_ADDR_REG, iova);
+	iommu_write(iommu, IOMMU_TLB_IVLD_ADDR_MASK_REG, GENMASK(31, 12));
+	iommu_write(iommu, IOMMU_TLB_IVLD_ENABLE_REG,
+		    IOMMU_TLB_IVLD_ENABLE_ENABLE);
+
+	ret = readl_poll_timeout_atomic(iommu->base + IOMMU_TLB_IVLD_ENABLE_REG,
+					reg, !reg, 1, 2000);
+	if (ret)
+		dev_warn(iommu->dev, "TLB invalidation timed out!\n");
+}
+
+static void sun50i_iommu_zap_ptw_cache(struct sun50i_iommu *iommu,
+				       unsigned long iova)
+{
+	u32 reg;
+	int ret;
+
+	iommu_write(iommu, IOMMU_PC_IVLD_ADDR_REG, iova);
+	iommu_write(iommu, IOMMU_PC_IVLD_ENABLE_REG,
+		    IOMMU_PC_IVLD_ENABLE_ENABLE);
+
+	ret = readl_poll_timeout_atomic(iommu->base + IOMMU_PC_IVLD_ENABLE_REG,
+					reg, !reg, 1, 2000);
+	if (ret)
+		dev_warn(iommu->dev, "PTW cache invalidation timed out!\n");
+}
+
+static void sun50i_iommu_zap_range(struct sun50i_iommu *iommu,
+				   unsigned long iova, size_t size)
+{
+	assert_spin_locked(&iommu->iommu_lock);
+
+	iommu_write(iommu, IOMMU_AUTO_GATING_REG, 0);
+
+	sun50i_iommu_zap_iova(iommu, iova);
+	sun50i_iommu_zap_iova(iommu, iova + SPAGE_SIZE);
+	if (size > SPAGE_SIZE) {
+		sun50i_iommu_zap_iova(iommu, iova + size);
+		sun50i_iommu_zap_iova(iommu, iova + size + SPAGE_SIZE);
+	}
+	sun50i_iommu_zap_ptw_cache(iommu, iova);
+	sun50i_iommu_zap_ptw_cache(iommu, iova + SZ_1M);
+	if (size > SZ_1M) {
+		sun50i_iommu_zap_ptw_cache(iommu, iova + size);
+		sun50i_iommu_zap_ptw_cache(iommu, iova + size + SZ_1M);
+	}
+
+	iommu_write(iommu, IOMMU_AUTO_GATING_REG, IOMMU_AUTO_GATING_ENABLE);
 }
 
 static int sun50i_iommu_flush_all_tlb(struct sun50i_iommu *iommu)
···
 
 	spin_lock_irqsave(&iommu->iommu_lock, flags);
 	sun50i_iommu_flush_all_tlb(iommu);
+	spin_unlock_irqrestore(&iommu->iommu_lock, flags);
+}
+
+static void sun50i_iommu_iotlb_sync_map(struct iommu_domain *domain,
+					unsigned long iova, size_t size)
+{
+	struct sun50i_iommu_domain *sun50i_domain = to_sun50i_domain(domain);
+	struct sun50i_iommu *iommu = sun50i_domain->iommu;
+	unsigned long flags;
+
+	spin_lock_irqsave(&iommu->iommu_lock, flags);
+	sun50i_iommu_zap_range(iommu, iova, size);
 	spin_unlock_irqrestore(&iommu->iommu_lock, flags);
 }
 
···
 		sun50i_iommu_free_page_table(iommu, drop_pt);
 	}
 
-	sun50i_table_flush(sun50i_domain, page_table, PT_SIZE);
+	sun50i_table_flush(sun50i_domain, page_table, NUM_PT_ENTRIES);
 	sun50i_table_flush(sun50i_domain, dte_addr, 1);
 
 	return page_table;
···
 	struct sun50i_iommu_domain *sun50i_domain;
 
 	if (type != IOMMU_DOMAIN_DMA &&
-	    type != IOMMU_DOMAIN_IDENTITY &&
 	    type != IOMMU_DOMAIN_UNMANAGED)
 		return NULL;
···
 	.attach_dev = sun50i_iommu_attach_device,
 	.detach_dev = sun50i_iommu_detach_device,
 	.flush_iotlb_all = sun50i_iommu_flush_iotlb_all,
+	.iotlb_sync_map = sun50i_iommu_iotlb_sync_map,
 	.iotlb_sync = sun50i_iommu_iotlb_sync,
 	.iova_to_phys = sun50i_iommu_iova_to_phys,
 	.map = sun50i_iommu_map,
···
 		report_iommu_fault(iommu->domain, iommu->dev, iova, prot);
 	else
 		dev_err(iommu->dev, "Page fault while iommu not attached to any domain?\n");
+
+	sun50i_iommu_zap_range(iommu, iova, SPAGE_SIZE);
 }
 
 static phys_addr_t sun50i_iommu_handle_pt_irq(struct sun50i_iommu *iommu,
···
 
 static irqreturn_t sun50i_iommu_irq(int irq, void *dev_id)
 {
+	u32 status, l1_status, l2_status, resets;
 	struct sun50i_iommu *iommu = dev_id;
-	u32 status;
 
 	spin_lock(&iommu->iommu_lock);
 
···
 		spin_unlock(&iommu->iommu_lock);
 		return IRQ_NONE;
 	}
+
+	l1_status = iommu_read(iommu, IOMMU_L1PG_INT_REG);
+	l2_status = iommu_read(iommu, IOMMU_L2PG_INT_REG);
 
 	if (status & IOMMU_INT_INVALID_L2PG)
 		sun50i_iommu_handle_pt_irq(iommu,
···
 
 	iommu_write(iommu, IOMMU_INT_CLR_REG, status);
 
-	iommu_write(iommu, IOMMU_RESET_REG, ~status);
-	iommu_write(iommu, IOMMU_RESET_REG, status);
+	resets = (status | l1_status | l2_status) & IOMMU_INT_MASTER_MASK;
+	iommu_write(iommu, IOMMU_RESET_REG, ~resets);
+	iommu_write(iommu, IOMMU_RESET_REG, IOMMU_RESET_RELEASE_ALL);
 
 	spin_unlock(&iommu->iommu_lock);
 
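The new sun50i_iommu_zap_range() pokes a fixed pattern of addresses into the TLB-invalidate register rather than walking every page: the start of the range, the following small page, and — for multi-page ranges — the end of the range plus one page (the PTW cache gets the same pattern at 1M granularity). A pure-function sketch of just the address pattern, with illustrative names, makes the register traffic easy to see:

```c
#include <assert.h>
#include <stddef.h>

/* Mirrors the driver's 4K small-page constant. */
#define SPAGE_SIZE 4096UL

/* Compute the iova values sun50i_iommu_zap_range() would hand to the
 * TLB-invalidate register for a given range; returns how many. */
static size_t zap_iova_targets(unsigned long iova, size_t size,
			       unsigned long out[4])
{
	size_t n = 0;

	out[n++] = iova;			/* start of range */
	out[n++] = iova + SPAGE_SIZE;		/* page after the start */
	if (size > SPAGE_SIZE) {
		out[n++] = iova + size;			/* end of range */
		out[n++] = iova + size + SPAGE_SIZE;	/* page after end */
	}
	return n;
}
```

For a single-page range only two invalidations are issued, which is what the page-fault path relies on when it calls `sun50i_iommu_zap_range(iommu, iova, SPAGE_SIZE)`.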
+90
include/dt-bindings/memory/mediatek,mt8365-larb-port.h
···
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
+/*
+ * Copyright (c) 2022 MediaTek Inc.
+ * Author: Yong Wu <yong.wu@mediatek.com>
+ */
+#ifndef _DT_BINDINGS_MEMORY_MT8365_LARB_PORT_H_
+#define _DT_BINDINGS_MEMORY_MT8365_LARB_PORT_H_
+
+#include <dt-bindings/memory/mtk-memory-port.h>
+
+#define M4U_LARB0_ID			0
+#define M4U_LARB1_ID			1
+#define M4U_LARB2_ID			2
+#define M4U_LARB3_ID			3
+
+/* larb0 */
+#define M4U_PORT_DISP_OVL0		MTK_M4U_ID(M4U_LARB0_ID, 0)
+#define M4U_PORT_DISP_OVL0_2L		MTK_M4U_ID(M4U_LARB0_ID, 1)
+#define M4U_PORT_DISP_RDMA0		MTK_M4U_ID(M4U_LARB0_ID, 2)
+#define M4U_PORT_DISP_WDMA0		MTK_M4U_ID(M4U_LARB0_ID, 3)
+#define M4U_PORT_DISP_RDMA1		MTK_M4U_ID(M4U_LARB0_ID, 4)
+#define M4U_PORT_MDP_RDMA0		MTK_M4U_ID(M4U_LARB0_ID, 5)
+#define M4U_PORT_MDP_WROT1		MTK_M4U_ID(M4U_LARB0_ID, 6)
+#define M4U_PORT_MDP_WROT0		MTK_M4U_ID(M4U_LARB0_ID, 7)
+#define M4U_PORT_MDP_RDMA1		MTK_M4U_ID(M4U_LARB0_ID, 8)
+#define M4U_PORT_DISP_FAKE0		MTK_M4U_ID(M4U_LARB0_ID, 9)
+#define M4U_PORT_APU_READ		MTK_M4U_ID(M4U_LARB0_ID, 10)
+#define M4U_PORT_APU_WRITE		MTK_M4U_ID(M4U_LARB0_ID, 11)
+
+/* larb1 */
+#define M4U_PORT_VENC_RCPU		MTK_M4U_ID(M4U_LARB1_ID, 0)
+#define M4U_PORT_VENC_REC		MTK_M4U_ID(M4U_LARB1_ID, 1)
+#define M4U_PORT_VENC_BSDMA		MTK_M4U_ID(M4U_LARB1_ID, 2)
+#define M4U_PORT_VENC_SV_COMV		MTK_M4U_ID(M4U_LARB1_ID, 3)
+#define M4U_PORT_VENC_RD_COMV		MTK_M4U_ID(M4U_LARB1_ID, 4)
+#define M4U_PORT_VENC_NBM_RDMA		MTK_M4U_ID(M4U_LARB1_ID, 5)
+#define M4U_PORT_VENC_NBM_RDMA_LITE	MTK_M4U_ID(M4U_LARB1_ID, 6)
+#define M4U_PORT_JPGENC_Y_RDMA		MTK_M4U_ID(M4U_LARB1_ID, 7)
+#define M4U_PORT_JPGENC_C_RDMA		MTK_M4U_ID(M4U_LARB1_ID, 8)
+#define M4U_PORT_JPGENC_Q_TABLE		MTK_M4U_ID(M4U_LARB1_ID, 9)
+#define M4U_PORT_JPGENC_BSDMA		MTK_M4U_ID(M4U_LARB1_ID, 10)
+#define M4U_PORT_JPGDEC_WDMA		MTK_M4U_ID(M4U_LARB1_ID, 11)
+#define M4U_PORT_JPGDEC_BSDMA		MTK_M4U_ID(M4U_LARB1_ID, 12)
+#define M4U_PORT_VENC_NBM_WDMA		MTK_M4U_ID(M4U_LARB1_ID, 13)
+#define M4U_PORT_VENC_NBM_WDMA_LITE	MTK_M4U_ID(M4U_LARB1_ID, 14)
+#define M4U_PORT_VENC_CUR_LUMA		MTK_M4U_ID(M4U_LARB1_ID, 15)
+#define M4U_PORT_VENC_CUR_CHROMA	MTK_M4U_ID(M4U_LARB1_ID, 16)
+#define M4U_PORT_VENC_REF_LUMA		MTK_M4U_ID(M4U_LARB1_ID, 17)
+#define M4U_PORT_VENC_REF_CHROMA	MTK_M4U_ID(M4U_LARB1_ID, 18)
+
+/* larb2 */
+#define M4U_PORT_CAM_IMGO		MTK_M4U_ID(M4U_LARB2_ID, 0)
+#define M4U_PORT_CAM_RRZO		MTK_M4U_ID(M4U_LARB2_ID, 1)
+#define M4U_PORT_CAM_AAO		MTK_M4U_ID(M4U_LARB2_ID, 2)
+#define M4U_PORT_CAM_LCS		MTK_M4U_ID(M4U_LARB2_ID, 3)
+#define M4U_PORT_CAM_ESFKO		MTK_M4U_ID(M4U_LARB2_ID, 4)
+#define M4U_PORT_CAM_CAM_SV0		MTK_M4U_ID(M4U_LARB2_ID, 5)
+#define M4U_PORT_CAM_CAM_SV1		MTK_M4U_ID(M4U_LARB2_ID, 6)
+#define M4U_PORT_CAM_LSCI		MTK_M4U_ID(M4U_LARB2_ID, 7)
+#define M4U_PORT_CAM_LSCI_D		MTK_M4U_ID(M4U_LARB2_ID, 8)
+#define M4U_PORT_CAM_AFO		MTK_M4U_ID(M4U_LARB2_ID, 9)
+#define M4U_PORT_CAM_SPARE		MTK_M4U_ID(M4U_LARB2_ID, 10)
+#define M4U_PORT_CAM_BPCI		MTK_M4U_ID(M4U_LARB2_ID, 11)
+#define M4U_PORT_CAM_BPCI_D		MTK_M4U_ID(M4U_LARB2_ID, 12)
+#define M4U_PORT_CAM_UFDI		MTK_M4U_ID(M4U_LARB2_ID, 13)
+#define M4U_PORT_CAM_IMGI		MTK_M4U_ID(M4U_LARB2_ID, 14)
+#define M4U_PORT_CAM_IMG2O		MTK_M4U_ID(M4U_LARB2_ID, 15)
+#define M4U_PORT_CAM_IMG3O		MTK_M4U_ID(M4U_LARB2_ID, 16)
+#define M4U_PORT_CAM_WPE0_I		MTK_M4U_ID(M4U_LARB2_ID, 17)
+#define M4U_PORT_CAM_WPE1_I		MTK_M4U_ID(M4U_LARB2_ID, 18)
+#define M4U_PORT_CAM_WPE_O		MTK_M4U_ID(M4U_LARB2_ID, 19)
+#define M4U_PORT_CAM_FD0_I		MTK_M4U_ID(M4U_LARB2_ID, 20)
+#define M4U_PORT_CAM_FD1_I		MTK_M4U_ID(M4U_LARB2_ID, 21)
+#define M4U_PORT_CAM_FD0_O		MTK_M4U_ID(M4U_LARB2_ID, 22)
+#define M4U_PORT_CAM_FD1_O		MTK_M4U_ID(M4U_LARB2_ID, 23)
+
+/* larb3 */
+#define M4U_PORT_HW_VDEC_MC_EXT		MTK_M4U_ID(M4U_LARB3_ID, 0)
+#define M4U_PORT_HW_VDEC_UFO_EXT	MTK_M4U_ID(M4U_LARB3_ID, 1)
+#define M4U_PORT_HW_VDEC_PP_EXT		MTK_M4U_ID(M4U_LARB3_ID, 2)
+#define M4U_PORT_HW_VDEC_PRED_RD_EXT	MTK_M4U_ID(M4U_LARB3_ID, 3)
+#define M4U_PORT_HW_VDEC_PRED_WR_EXT	MTK_M4U_ID(M4U_LARB3_ID, 4)
+#define M4U_PORT_HW_VDEC_PPWRAP_EXT	MTK_M4U_ID(M4U_LARB3_ID, 5)
+#define M4U_PORT_HW_VDEC_TILE_EXT	MTK_M4U_ID(M4U_LARB3_ID, 6)
+#define M4U_PORT_HW_VDEC_VLD_EXT	MTK_M4U_ID(M4U_LARB3_ID, 7)
+#define M4U_PORT_HW_VDEC_VLD2_EXT	MTK_M4U_ID(M4U_LARB3_ID, 8)
+#define M4U_PORT_HW_VDEC_AVC_MV_EXT	MTK_M4U_ID(M4U_LARB3_ID, 9)
+#define M4U_PORT_HW_VDEC_RG_CTRL_DMA_EXT MTK_M4U_ID(M4U_LARB3_ID, 10)
+
+#endif
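Every port ID in the new header is built with MTK_M4U_ID() from dt-bindings/memory/mtk-memory-port.h, which packs a (larb, port) pair into a single integer — the larb index in the bits above a 5-bit port field. The exact bit layout below is an assumption of this sketch (taken from the mtk-memory-port.h convention), with hypothetical SKETCH_* names to keep it distinct from the real macro:

```c
#include <assert.h>

/* Assumed MTK_M4U_ID layout: larb index above a 5-bit port field. */
#define SKETCH_M4U_ID(larb, port)	(((larb) << 5) | (port))

/* Recover the halves of a packed ID. */
static unsigned int sketch_larb(unsigned int id) { return id >> 5; }
static unsigned int sketch_port(unsigned int id) { return id & 0x1f; }
```

With this layout, consumers (the mediatek iommu driver, dts files) can carry one cell per master and split it back into larb and port on demand.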
-6
include/linux/io-pgtable.h
···
 /**
  * struct io_pgtable_ops - Page table manipulation API for IOMMU drivers.
  *
- * @map:          Map a physically contiguous memory region.
  * @map_pages:    Map a physically contiguous range of pages of the same size.
- * @unmap:        Unmap a physically contiguous memory region.
  * @unmap_pages:  Unmap a range of virtually contiguous pages of the same size.
  * @iova_to_phys: Translate iova to physical address.
  *
···
  * the same names.
  */
 struct io_pgtable_ops {
-	int (*map)(struct io_pgtable_ops *ops, unsigned long iova,
-		   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
 	int (*map_pages)(struct io_pgtable_ops *ops, unsigned long iova,
 			 phys_addr_t paddr, size_t pgsize, size_t pgcount,
 			 int prot, gfp_t gfp, size_t *mapped);
-	size_t (*unmap)(struct io_pgtable_ops *ops, unsigned long iova,
-			size_t size, struct iommu_iotlb_gather *gather);
 	size_t (*unmap_pages)(struct io_pgtable_ops *ops, unsigned long iova,
 			      size_t pgsize, size_t pgcount,
 			      struct iommu_iotlb_gather *gather);
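With the single-region ->map/->unmap hooks gone, callers always go through the (pgsize, pgcount) interface, and it is the core layer's job to split an arbitrary region into runs of equal-sized pages drawn from the driver's pgsize_bitmap (which is why several drivers in this series can shrink their bitmap to a single size). A userspace sketch of that selection step — the largest page size that both the address alignment and the remaining length permit — under illustrative names:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Pick the largest page size from pgsize_bitmap usable for the next
 * map_pages() call: it must not exceed the remaining size, and both
 * iova and paddr must be aligned to it. Sketch of the core-layer
 * logic, not the kernel's actual implementation. */
static size_t pick_pgsize(unsigned long iova, uint64_t paddr,
			  size_t size, unsigned long pgsize_bitmap)
{
	unsigned long addr_merge = iova | (unsigned long)paddr;
	size_t best = 0;
	unsigned long bm;

	for (bm = pgsize_bitmap; bm; bm &= bm - 1) {
		size_t pgsize = bm & -bm;	/* lowest set bit */

		if (pgsize > size)
			continue;		/* longer than what is left */
		if (addr_merge & (pgsize - 1))
			continue;		/* iova or paddr misaligned */
		if (pgsize > best)
			best = pgsize;
	}
	return best;
}
```

When pgsize_bitmap is a single bit (e.g. SZ_4K, as the s390 and sprd drivers now advertise), this degenerates to "always 4K, pgcount = size / 4K", and the whole region arrives in one map_pages() call.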