···
 		When counting down the counter start from preset value
 		and fire event when reach 0.

-What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
-KernelVersion:	4.12
-Contact:	benjamin.gaignard@st.com
-Description:
-		Reading returns the list possible quadrature modes.
-
-What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
-KernelVersion:	4.12
-Contact:	benjamin.gaignard@st.com
-Description:
-		Configure the device counter quadrature modes:
-
-		channel_A:
-			Encoder A input servers as the count input and B as
-			the UP/DOWN direction control input.
-
-		channel_B:
-			Encoder B input serves as the count input and A as
-			the UP/DOWN direction control input.
-
-		quadrature:
-			Encoder A and B inputs are mixed to get direction
-			and count with a scale of 0.25.
-
 What:		/sys/bus/iio/devices/iio:deviceX/in_count_enable_mode_available
 KernelVersion:	4.12
 Contact:	benjamin.gaignard@st.com
···
   resets:
     maxItems: 1

+  wifi-2.4ghz-coexistence:
+    type: boolean
+    description: >
+      Should the pixel frequencies in the WiFi frequencies range be
+      avoided?
+
 required:
   - compatible
   - reg
···

 - reg : The I2C address of the device.

+Optional properties:
+
+- realtek,power-up-delay-ms
+  Set a delay time for flush work to be completed,
+  this value is adjustable depending on platform.

 Example:

 rt1015: codec@28 {
 	compatible = "realtek,rt1015";
 	reg = <0x28>;
+	realtek,power-up-delay-ms = <50>;
 };
Documentation/driver-api/media/drivers/vidtv.rst | +104 -16
···
 	Because the generator is implemented in a separate file, it can be
 	reused elsewhere in the media subsystem.

-	Currently vidtv supports working with 3 PSI tables: PAT, PMT and
-	SDT.
+	Currently vidtv supports working with 5 PSI tables: PAT, PMT,
+	SDT, NIT and EIT.

 	The specification for PAT and PMT can be found in *ISO 13818-1:
-	Systems*, while the specification for the SDT can be found in *ETSI
+	Systems*, while the specification for the SDT, NIT, EIT can be found in *ETSI
 	EN 300 468: Specification for Service Information (SI) in DVB
 	systems*.
···
 #. Their services will be concatenated to populate the SDT.

 #. Their programs will be concatenated to populate the PAT
+
+#. Their events will be concatenated to populate the EIT

 #. For each program in the PAT, a PMT section will be created
···
 The first step to check whether the demod loaded successfully is to run::

 	$ dvb-fe-tool
+	Device Dummy demod for DVB-T/T2/C/S/S2 (/dev/dvb/adapter0/frontend0) capabilities:
+	    CAN_FEC_1_2
+	    CAN_FEC_2_3
+	    CAN_FEC_3_4
+	    CAN_FEC_4_5
+	    CAN_FEC_5_6
+	    CAN_FEC_6_7
+	    CAN_FEC_7_8
+	    CAN_FEC_8_9
+	    CAN_FEC_AUTO
+	    CAN_GUARD_INTERVAL_AUTO
+	    CAN_HIERARCHY_AUTO
+	    CAN_INVERSION_AUTO
+	    CAN_QAM_16
+	    CAN_QAM_32
+	    CAN_QAM_64
+	    CAN_QAM_128
+	    CAN_QAM_256
+	    CAN_QAM_AUTO
+	    CAN_QPSK
+	    CAN_TRANSMISSION_MODE_AUTO
+	DVB API Version 5.11, Current v5 delivery system: DVBC/ANNEX_A
+	Supported delivery systems:
+	    DVBT
+	    DVBT2
+	    [DVBC/ANNEX_A]
+	    DVBS
+	    DVBS2
+	Frequency range for the current standard:
+	From:       51.0 MHz
+	To:         2.15 GHz
+	Step:       62.5 kHz
+	Tolerance:  29.5 MHz
+	Symbol rate ranges for the current standard:
+	From:       1.00 MBauds
+	To:         45.0 MBauds

 This should return what is currently set up at the demod struct, i.e.::
···
 here's an example::

 	[Channel]
-	FREQUENCY = 330000000
+	FREQUENCY = 474000000
 	MODULATION = QAM/AUTO
 	SYMBOL_RATE = 6940000
 	INNER_FEC = AUTO
···
 Assuming this channel is named 'channel.conf', you can then run::

 	$ dvbv5-scan channel.conf
+	dvbv5-scan ~/vidtv.conf
+	ERROR    command BANDWIDTH_HZ (5) not found during retrieve
+	Cannot calc frequency shift. Either bandwidth/symbol-rate is unavailable (yet).
+	Scanning frequency #1 330000000
+	    (0x00) Signal= -68.00dBm
+	Scanning frequency #2 474000000
+	Lock   (0x1f) Signal= -34.45dBm C/N= 33.74dB UCB= 0
+	Service Beethoven, provider LinuxTV.org: digital television

 For more information on dvb-scan, check its documentation online here:
 `dvb-scan Documentation <https://www.linuxtv.org/wiki/index.php/Dvbscan>`_.
···
 dvbv5-zap is a command line tool that can be used to record MPEG-TS to disk. The
 typical use is to tune into a channel and put it into record mode. The example
-below - which is taken from the documentation - illustrates that::
+below - which is taken from the documentation - illustrates that\ [1]_::

-	$ dvbv5-zap -c dvb_channel.conf "trilhas sonoras" -r
-	using demux '/dev/dvb/adapter0/demux0'
+	$ dvbv5-zap -c dvb_channel.conf "beethoven" -o music.ts -P -t 10
+	using demux 'dvb0.demux0'
 	reading channels from file 'dvb_channel.conf'
-	service has pid type 05:  204
-	tuning to 573000000 Hz
-	audio pid 104
-	dvb_set_pesfilter 104
-	Lock   (0x1f) Quality= Good Signal= 100.00% C/N= -13.80dB UCB= 70 postBER= 3.14x10^-3 PER= 0
-	DVR interface '/dev/dvb/adapter0/dvr0' can now be opened
+	tuning to 474000000 Hz
+	pass all PID's to TS
+	dvb_set_pesfilter 8192
+	dvb_dev_set_bufsize: buffer set to 6160384
+	Lock   (0x1f) Quality= Good Signal= -34.66dBm C/N= 33.41dB UCB= 0 postBER= 0 preBER= 1.05x10^-3 PER= 0
+	Lock   (0x1f) Quality= Good Signal= -34.57dBm C/N= 33.46dB UCB= 0 postBER= 0 preBER= 1.05x10^-3 PER= 0
+	Record to file 'music.ts' started
+	received 24587768 bytes (2401 Kbytes/sec)
+	Lock   (0x1f) Quality= Good Signal= -34.42dBm C/N= 33.89dB UCB= 0 postBER= 0 preBER= 2.44x10^-3 PER= 0

-The channel can be watched by playing the contents of the DVR interface, with
-some player that recognizes the MPEG-TS format, such as *mplayer* or *vlc*.
+.. [1] In this example, it records 10 seconds with all program ID's stored
+   at the music.ts file.
+
+
+The channel can be watched by playing the contents of the stream with some
+player that recognizes the MPEG-TS format, such as ``mplayer`` or ``vlc``.

 By playing the contents of the stream one can visually inspect the workings of
-vidtv, e.g.::
+vidtv, e.g., to play a recorded TS file with::
+
+	$ mplayer music.ts
+
+or, alternatively, running this command on one terminal::
+
+	$ dvbv5-zap -c dvb_channel.conf "beethoven" -P -r &
+
+And, on a second terminal, playing the contents from DVR interface with::

 	$ mplayer /dev/dvb/adapter0/dvr0
···
 - Updating the error statistics accordingly (e.g. BER, etc).

 - Simulating some noise in the encoded data.
+
+Functions and structs used within vidtv
+---------------------------------------
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_bridge.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_channel.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_demod.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_encoder.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_mux.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_pes.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_psi.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_s302m.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_ts.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_tuner.h
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_common.c
+
+.. kernel-doc:: drivers/media/test-drivers/vidtv/vidtv_tuner.c
Documentation/networking/netdev-FAQ.rst | +26
···
 minimum, your changes should survive an ``allyesconfig`` and an
 ``allmodconfig`` build without new warnings or failures.

+Q: How do I post corresponding changes to user space components?
+----------------------------------------------------------------
+A: User space code exercising kernel features should be posted
+alongside kernel patches. This gives reviewers a chance to see
+how any new interface is used and how well it works.
+
+When user space tools reside in the kernel repo itself all changes
+should generally come as one series. If series becomes too large
+or the user space project is not reviewed on netdev include a link
+to a public repo where user space patches can be seen.
+
+In case user space tooling lives in a separate repository but is
+reviewed on netdev (e.g. patches to `iproute2` tools) kernel and
+user space patches should form separate series (threads) when posted
+to the mailing list, e.g.::
+
+  [PATCH net-next 0/3] net: some feature cover letter
+   └─ [PATCH net-next 1/3] net: some feature prep
+   └─ [PATCH net-next 2/3] net: some feature do it
+   └─ [PATCH net-next 3/3] selftest: net: some feature
+
+  [PATCH iproute2-next] ip: add support for some feature
+
+Posting as one thread is discouraged because it confuses patchwork
+(as of patchwork 2.2.2).
+
 Q: Any other tips to help ensure my net/net-next patch gets OK'd?
 -----------------------------------------------------------------
 A: Attention to detail. Re-read your own work as if you were the
MAINTAINERS | +11 -8
···

 ARM/LPC32XX SOC SUPPORT
 M:	Vladimir Zapolskiy <vz@mleia.com>
-M:	Sylvain Lemieux <slemieux.tyco@gmail.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 T:	git git://github.com/vzapolskiy/linux-lpc32xx.git
···
 M:	Arend van Spriel <arend.vanspriel@broadcom.com>
 M:	Franky Lin <franky.lin@broadcom.com>
 M:	Hante Meuleman <hante.meuleman@broadcom.com>
-M:	Chi-Hsien Lin <chi-hsien.lin@cypress.com>
-M:	Wright Feng <wright.feng@cypress.com>
+M:	Chi-hsien Lin <chi-hsien.lin@infineon.com>
+M:	Wright Feng <wright.feng@infineon.com>
+M:	Chung-hsien Hsu <chung-hsien.hsu@infineon.com>
 L:	linux-wireless@vger.kernel.org
 L:	brcm80211-dev-list.pdl@broadcom.com
-L:	brcm80211-dev-list@cypress.com
+L:	SHA-cyfmac-dev-list@infineon.com
 S:	Supported
 F:	drivers/net/wireless/broadcom/brcm80211/
···

 IOMMU DRIVERS
 M:	Joerg Roedel <joro@8bytes.org>
+M:	Will Deacon <will@kernel.org>
 L:	iommu@lists.linux-foundation.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
···
 F:	arch/s390/include/asm/gmap.h
 F:	arch/s390/include/asm/kvm*
 F:	arch/s390/include/uapi/asm/kvm*
+F:	arch/s390/kernel/uv.c
 F:	arch/s390/kvm/
 F:	arch/s390/mm/gmap.c
 F:	tools/testing/selftests/kvm/*/s390x/
···
 M:	Ilias Apalodimas <ilias.apalodimas@linaro.org>
 L:	netdev@vger.kernel.org
 S:	Supported
+F:	Documentation/networking/page_pool.rst
 F:	include/net/page_pool.h
+F:	include/trace/events/page_pool.h
 F:	net/core/page_pool.c

 PANASONIC LAPTOP ACPI EXTRAS DRIVER
···
 F:	drivers/net/wireless/realtek/rtlwifi/

 REALTEK WIRELESS DRIVER (rtw88)
-M:	Yan-Hsuan Chuang <yhchuang@realtek.com>
+M:	Yan-Hsuan Chuang <tony0620emma@gmail.com>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
 F:	drivers/net/wireless/realtek/rtw88/
···
 F:	include/linux/slimbus.h

 SFC NETWORK DRIVER
-M:	Solarflare linux maintainers <linux-net-drivers@solarflare.com>
-M:	Edward Cree <ecree@solarflare.com>
-M:	Martin Habets <mhabets@solarflare.com>
+M:	Edward Cree <ecree.xilinx@gmail.com>
+M:	Martin Habets <habetsm.xilinx@gmail.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/sfc/
···

 #ifdef CONFIG_ARC_DW2_UNWIND

-static void seed_unwind_frame_info(struct task_struct *tsk,
-				   struct pt_regs *regs,
-				   struct unwind_frame_info *frame_info)
+static int
+seed_unwind_frame_info(struct task_struct *tsk, struct pt_regs *regs,
+		       struct unwind_frame_info *frame_info)
 {
-	/*
-	 * synchronous unwinding (e.g. dump_stack)
-	 *  - uses current values of SP and friends
-	 */
-	if (tsk == NULL && regs == NULL) {
+	if (regs) {
+		/*
+		 * Asynchronous unwinding of intr/exception
+		 *  - Just uses the pt_regs passed
+		 */
+		frame_info->task = tsk;
+
+		frame_info->regs.r27 = regs->fp;
+		frame_info->regs.r28 = regs->sp;
+		frame_info->regs.r31 = regs->blink;
+		frame_info->regs.r63 = regs->ret;
+		frame_info->call_frame = 0;
+	} else if (tsk == NULL || tsk == current) {
+		/*
+		 * synchronous unwinding (e.g. dump_stack)
+		 *  - uses current values of SP and friends
+		 */
 		unsigned long fp, sp, blink, ret;
 		frame_info->task = current;
···
 		frame_info->regs.r31 = blink;
 		frame_info->regs.r63 = ret;
 		frame_info->call_frame = 0;
-	} else if (regs == NULL) {
+	} else {
 		/*
-		 * Asynchronous unwinding of sleeping task
-		 *  - Gets SP etc from task's pt_regs (saved bottom of kernel
-		 *    mode stack of task)
+		 * Asynchronous unwinding of a likely sleeping task
+		 *  - first ensure it is actually sleeping
+		 *  - if so, it will be in __switch_to, kernel mode SP of task
+		 *    is safe-kept and BLINK at a well known location in there
 		 */
+
+		if (tsk->state == TASK_RUNNING)
+			return -1;

 		frame_info->task = tsk;
···
 		frame_info->regs.r28 += 60;
 		frame_info->call_frame = 0;

-	} else {
-		/*
-		 * Asynchronous unwinding of intr/exception
-		 *  - Just uses the pt_regs passed
-		 */
-		frame_info->task = tsk;
-
-		frame_info->regs.r27 = regs->fp;
-		frame_info->regs.r28 = regs->sp;
-		frame_info->regs.r31 = regs->blink;
-		frame_info->regs.r63 = regs->ret;
-		frame_info->call_frame = 0;
 	}
+	return 0;
 }

 #endif
···
 	unsigned int address;
 	struct unwind_frame_info frame_info;

-	seed_unwind_frame_info(tsk, regs, &frame_info);
+	if (seed_unwind_frame_info(tsk, regs, &frame_info))
+		return 0;

 	while (1) {
 		address = UNW_PC(&frame_info);
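The rework above changes seed_unwind_frame_info() from void to an int error return, so that a caller can refuse to walk the stack of a task that is still running. A minimal user-space sketch of that calling convention (hypothetical types and names, not the kernel API):

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel types involved. */
struct task { int running; long saved_sp; };
struct frame { long sp; };

/* Mirror of the pattern: return 0 on success, -1 when the task
 * cannot be unwound safely (analogous to tsk->state == TASK_RUNNING). */
static int seed_frame(const struct task *t, struct frame *f)
{
	if (t->running)
		return -1;	/* its registers are in flux: refuse */
	f->sp = t->saved_sp;	/* safe: SP was saved at the last switch */
	return 0;
}

/* Caller bails out early instead of walking a bogus frame,
 * mirroring the "return 0;" added to the unwind loop's caller. */
static long unwind_top(const struct task *t)
{
	struct frame f = { 0 };

	if (seed_frame(t, &f))
		return 0;
	return f.sp;
}
```

The point of the error return is that the check happens once, at seeding time, rather than being rediscovered mid-walk.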
arch/arc/mm/tlb.c | +12 -12
···
 * -Changes related to MMU v2 (Rel 4.8)
 *
 * Vineetg: Aug 29th 2008
- *  -In TLB Flush operations (Metal Fix MMU) there is a explict command to
+ *  -In TLB Flush operations (Metal Fix MMU) there is a explicit command to
 *    flush Micro-TLBS. If TLB Index Reg is invalid prior to TLBIVUTLB cmd,
 *    it fails. Thus need to load it with ANY valid value before invoking
 *    TLBIVUTLB cmd
 *
 * Vineetg: Aug 21th 2008:
 *  -Reduced the duration of IRQ lockouts in TLB Flush routines
- *  -Multiple copies of TLB erase code seperated into a "single" function
+ *  -Multiple copies of TLB erase code separated into a "single" function
 *  -In TLB Flush routines, interrupt disabling moved UP to retrieve ASID
 *   in interrupt-safe region.
 *
···
 *
 * Although J-TLB is 2 way set assoc, ARC700 caches J-TLB into uTLBS which has
 * much higher associativity. u-D-TLB is 8 ways, u-I-TLB is 4 ways.
- * Given this, the thrasing problem should never happen because once the 3
+ * Given this, the thrashing problem should never happen because once the 3
 * J-TLB entries are created (even though 3rd will knock out one of the prev
 * two), the u-D-TLB and u-I-TLB will have what is required to accomplish memcpy
 *
···
 	 * There was however an obscure hardware bug, where uTLB flush would
 	 * fail when a prior probe for J-TLB (both totally unrelated) would
 	 * return lkup err - because the entry didn't exist in MMU.
-	 * The Workround was to set Index reg with some valid value, prior to
+	 * The Workaround was to set Index reg with some valid value, prior to
 	 * flush. This was fixed in MMU v3
 	 */
 	unsigned int idx;
···
 }

 /*
- * Flush the entrie MM for userland. The fastest way is to move to Next ASID
+ * Flush the entire MM for userland. The fastest way is to move to Next ASID
 */
 noinline void local_flush_tlb_mm(struct mm_struct *mm)
 {
···
 * Difference between this and Kernel Range Flush is
 *  -Here the fastest way (if range is too large) is to move to next ASID
 *   without doing any explicit Shootdown
- *  -In case of kernel Flush, entry has to be shot down explictly
+ *  -In case of kernel Flush, entry has to be shot down explicitly
 */
 void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 			   unsigned long end)
···
 * Super Page size is configurable in hardware (4K to 16M), but fixed once
 * RTL builds.
 *
- * The exact THP size a Linx configuration will support is a function of:
+ * The exact THP size a Linux configuration will support is a function of:
 *  - MMU page size (typical 8K, RTL fixed)
 *  - software page walker address split between PGD:PTE:PFN (typical
 *    11:8:13, but can be changed with 1 line)
···

 #endif

-/* Read the Cache Build Confuration Registers, Decode them and save into
+/* Read the Cache Build Configuration Registers, Decode them and save into
 * the cpuinfo structure for later use.
 * No Validation is done here, simply read/convert the BCRs
 */
···
 	pr_info("%s", arc_mmu_mumbojumbo(0, str, sizeof(str)));

 	/*
-	 * Can't be done in processor.h due to header include depenedencies
+	 * Can't be done in processor.h due to header include dependencies
 	 */
 	BUILD_BUG_ON(!IS_ALIGNED((CONFIG_ARC_KVADDR_SIZE << 20), PMD_SIZE));

 	/*
 	 * stack top size sanity check,
-	 * Can't be done in processor.h due to header include depenedencies
+	 * Can't be done in processor.h due to header include dependencies
 	 */
 	BUILD_BUG_ON(!IS_ALIGNED(STACK_TOP, PMD_SIZE));
···
 *   the duplicate one.
 * -Knob to be verbose abt it.(TODO: hook them up to debugfs)
 */
-volatile int dup_pd_silent; /* Be slient abt it or complain (default) */
+volatile int dup_pd_silent; /* Be silent abt it or complain (default) */

 void do_tlb_overlap_fault(unsigned long cause, unsigned long address,
 			  struct pt_regs *regs)
···

 /***********************************************************************
 * Diagnostic Routines
- *  -Called from Low Level TLB Hanlders if things don;t look good
+ *  -Called from Low Level TLB Handlers if things don;t look good
 **********************************************************************/

 #ifdef CONFIG_ARC_DBG_TLB_PARANOIA
arch/arm/boot/compressed/head.S | +3
···
 		@ issued from HYP mode take us to the correct handler code. We
 		@ will disable the MMU before jumping to the kernel proper.
 		@
+ ARM(		bic	r1, r1, #(1 << 30)	) @ clear HSCTLR.TE
+ THUMB(		orr	r1, r1, #(1 << 30)	) @ set HSCTLR.TE
+		mcr	p15, 4, r1, c1, c0, 0
 		adr	r0, __hyp_reentry_vectors
 		mcr	p15, 4, r0, c12, c0, 0	@ set HYP vector base (HVBAR)
 		isb
···
 #define PTE_HWTABLE_OFF		(PTE_HWTABLE_PTRS * sizeof(pte_t))
 #define PTE_HWTABLE_SIZE	(PTRS_PER_PTE * sizeof(u32))

+#define MAX_POSSIBLE_PHYSMEM_BITS	32
+
 /*
 * PMD_SHIFT determines the size of the area a second-level page table can map
 * PGDIR_SHIFT determines what a third-level page table entry can map
arch/arm/include/asm/pgtable-3level.h | +2
···
 #define PTE_HWTABLE_OFF		(0)
 #define PTE_HWTABLE_SIZE	(PTRS_PER_PTE * sizeof(u64))

+#define MAX_POSSIBLE_PHYSMEM_BITS	40
+
 /*
 * PGDIR_SHIFT determines the size a top-level page table entry can map.
 */
arch/arm/mach-omap2/Kconfig | +2 -1
···
 	depends on ARCH_MULTI_V6
 	select ARCH_OMAP2PLUS
 	select CPU_V6
-	select PM_GENERIC_DOMAINS if PM
 	select SOC_HAS_OMAP2_SDRC

 config ARCH_OMAP3
···
 	select OMAP_DM_TIMER
 	select OMAP_GPMC
 	select PINCTRL
+	select PM_GENERIC_DOMAINS if PM
+	select PM_GENERIC_DOMAINS_OF if PM
 	select RESET_CONTROLLER
 	select SOC_BUS
 	select TI_SYSC
arch/arm/mach-omap2/cpuidle44xx.c | +5 -3
···
 		if (mpuss_can_lose_context) {
 			error = cpu_cluster_pm_enter();
 			if (error) {
-				omap_set_pwrdm_state(mpu_pd, PWRDM_POWER_ON);
-				goto cpu_cluster_pm_out;
+				index = 0;
+				cx = state_ptr + index;
+				pwrdm_set_logic_retst(mpu_pd, cx->mpu_logic_state);
+				omap_set_pwrdm_state(mpu_pd, cx->mpu_state);
+				mpuss_can_lose_context = 0;
 			}
 		}
 	}
···
 	omap4_enter_lowpower(dev->cpu, cx->cpu_state);
 	cpu_done[dev->cpu] = true;

-cpu_cluster_pm_out:
 	/* Wakeup CPU1 only if it is not offlined */
 	if (dev->cpu == 0 && cpumask_test_cpu(1, cpu_online_mask)) {
···
 #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
 #define pte_valid_not_user(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
-#define pte_valid_young(pte) \
-	((pte_val(pte) & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF))
 #define pte_valid_user(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))
···
 * Could the pte be present in the TLB? We must check mm_tlb_flush_pending
 * so that we don't erroneously return false for pages that have been
 * remapped as PROT_NONE but are yet to be flushed from the TLB.
+ * Note that we can't make any assumptions based on the state of the access
+ * flag, since ptep_clear_flush_young() elides a DSB when invalidating the
+ * TLB.
 */
 #define pte_accessible(mm, pte)	\
-	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid_young(pte))
+	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))

 /*
 * p??_access_permitted() is true for valid user mappings (subject to the
···
 	return pmd;
 }

-static inline pte_t pte_wrprotect(pte_t pte)
-{
-	pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
-	pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
-	return pte;
-}
-
 static inline pte_t pte_mkwrite(pte_t pte)
 {
 	pte = set_pte_bit(pte, __pgprot(PTE_WRITE));
···
 	if (pte_write(pte))
 		pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));

+	return pte;
+}
+
+static inline pte_t pte_wrprotect(pte_t pte)
+{
+	/*
+	 * If hardware-dirty (PTE_WRITE/DBM bit set and PTE_RDONLY
+	 * clear), set the PTE_DIRTY bit.
+	 */
+	if (pte_hw_dirty(pte))
+		pte = pte_mkdirty(pte);
+
+	pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
+	pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
 	return pte;
 }
···
 	pte = READ_ONCE(*ptep);
 	do {
 		old_pte = pte;
-		/*
-		 * If hardware-dirty (PTE_WRITE/DBM bit set and PTE_RDONLY
-		 * clear), set the PTE_DIRTY bit.
-		 */
-		if (pte_hw_dirty(pte))
-			pte = pte_mkdirty(pte);
 		pte = pte_wrprotect(pte);
 		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
 					       pte_val(old_pte), pte_val(pte));
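The hunk above moves the hardware-dirty check into pte_wrprotect() itself, so the dirty state is latched into the software bit before the entry is made read-only, wherever wrprotect is called from. A simplified user-space model of that bit manipulation (the bit positions here are illustrative, not the real arm64 PTE layout):

```c
#include <assert.h>

/* Illustrative bit layout only; not the arm64 descriptor format. */
#define PTE_WRITE  (1u << 0)   /* DBM-style bit: hardware may mark dirty */
#define PTE_RDONLY (1u << 1)
#define PTE_DIRTY  (1u << 2)   /* software dirty bit */

/* Hardware-dirty means: writable (DBM set) and not read-only. */
static int pte_hw_dirty(unsigned int pte)
{
	return (pte & PTE_WRITE) && !(pte & PTE_RDONLY);
}

/* Mirrors the fix: latch the hardware dirty state into the software
 * bit *before* clearing write permission, so write-protecting an
 * entry cannot silently discard a dirty indication. */
static unsigned int pte_wrprotect(unsigned int pte)
{
	if (pte_hw_dirty(pte))
		pte |= PTE_DIRTY;

	pte &= ~PTE_WRITE;
	pte |= PTE_RDONLY;
	return pte;
}
```

With the old ordering (clear PTE_WRITE first, check dirtiness later), a hardware-dirty entry would come out read-only and apparently clean, which is exactly the lost-dirty-bit hazard the patch closes.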
···

 SECTIONS {
 	HYP_SECTION(.text)
+	/*
+	 * .hyp..data..percpu needs to be page aligned to maintain the same
+	 * alignment for when linking into vmlinux.
+	 */
+	. = ALIGN(PAGE_SIZE);
 	HYP_SECTION_NAME(.data..percpu) : {
 		PERCPU_INPUT(L1_CACHE_BYTES)
 	}
arch/arm64/kvm/vgic/vgic-mmio-v3.c | +20 -2
···
 	return extract_bytes(value, addr & 7, len);
 }

+static unsigned long vgic_uaccess_read_v3r_typer(struct kvm_vcpu *vcpu,
+						 gpa_t addr, unsigned int len)
+{
+	unsigned long mpidr = kvm_vcpu_get_mpidr_aff(vcpu);
+	int target_vcpu_id = vcpu->vcpu_id;
+	u64 value;
+
+	value = (u64)(mpidr & GENMASK(23, 0)) << 32;
+	value |= ((target_vcpu_id & 0xffff) << 8);
+
+	if (vgic_has_its(vcpu->kvm))
+		value |= GICR_TYPER_PLPIS;
+
+	/* reporting of the Last bit is not supported for userspace */
+	return extract_bytes(value, addr & 7, len);
+}
+
 static unsigned long vgic_mmio_read_v3r_iidr(struct kvm_vcpu *vcpu,
 					     gpa_t addr, unsigned int len)
 {
···
 	REGISTER_DESC_WITH_LENGTH(GICR_IIDR,
 		vgic_mmio_read_v3r_iidr, vgic_mmio_write_wi, 4,
 		VGIC_ACCESS_32bit),
-	REGISTER_DESC_WITH_LENGTH(GICR_TYPER,
-		vgic_mmio_read_v3r_typer, vgic_mmio_write_wi, 8,
+	REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_TYPER,
+		vgic_mmio_read_v3r_typer, vgic_mmio_write_wi,
+		vgic_uaccess_read_v3r_typer, vgic_mmio_uaccess_write_wi, 8,
 		VGIC_ACCESS_64bit | VGIC_ACCESS_32bit),
 	REGISTER_DESC_WITH_LENGTH(GICR_WAKER,
 		vgic_mmio_read_raz, vgic_mmio_write_wi, 4,
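The new uaccess handler above assembles the userspace-visible GICR_TYPER value from the vCPU's MPIDR affinity bits and its id. A stand-alone sketch of the same bit packing (the flag bit position is a placeholder for illustration, not the architected GICR_TYPER_PLPIS encoding):

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder for the PLPIS capability flag; position is illustrative. */
#define TYPER_PLPIS (1ULL << 0)

/* Pack affinity bits [23:0] of the MPIDR into value[55:32] and the
 * 16-bit vCPU id into value[23:8], mirroring the shape of the
 * computation in vgic_uaccess_read_v3r_typer(). */
static uint64_t build_typer(uint64_t mpidr, int vcpu_id, int has_its)
{
	uint64_t value = (mpidr & 0xffffffULL) << 32;

	value |= ((uint64_t)(vcpu_id & 0xffff)) << 8;
	if (has_its)
		value |= TYPER_PLPIS;
	return value;
}
```

The separate userspace accessor exists so that save/restore through the KVM device API sees a stable value (it never reports the Last bit), while guest MMIO reads keep their own semantics.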
···
 cpu-as-$(CONFIG_40x)		+= -Wa,-m405
 cpu-as-$(CONFIG_44x)		+= -Wa,-m440
 cpu-as-$(CONFIG_ALTIVEC)	+= $(call as-option,-Wa$(comma)-maltivec)
-cpu-as-$(CONFIG_E200)		+= -Wa,-me200
 cpu-as-$(CONFIG_E500)		+= -Wa,-me500

 # When using '-many -mpower4' gas will first try and find a matching power4
···
 #define __HAVE_ARCH_RESERVED_KERNEL_PAGES
 #endif

+#ifdef CONFIG_MEMORY_HOTPLUG
+extern int create_section_mapping(unsigned long start, unsigned long end,
+				  int nid, pgprot_t prot);
+#endif
+
 #endif /* __KERNEL__ */
 #endif /* _ASM_MMZONE_H_ */
···
 #endif /* CONFIG_SPARSEMEM */

 #ifdef CONFIG_MEMORY_HOTPLUG
-extern int create_section_mapping(unsigned long start, unsigned long end,
-				  int nid, pgprot_t prot);
 extern int remove_section_mapping(unsigned long start, unsigned long end);
+extern int memory_add_physaddr_to_nid(u64 start);
+#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid

 #ifdef CONFIG_NUMA
 extern int hot_add_scn_to_nid(unsigned long scn_addr);
···
 }
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_MEMORY_HOTPLUG */
-
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_SPARSEMEM_H */
arch/powerpc/kernel/exceptions-64s.S | +7 -6
···
 * Vectors for the FWNMI option. Share common code.
 */
 TRAMP_REAL_BEGIN(system_reset_fwnmi)
-	/* XXX: fwnmi guest could run a nested/PR guest, so why no test?  */
-	__IKVM_REAL(system_reset)=0
 	GEN_INT_ENTRY system_reset, virt=0

 #endif /* CONFIG_PPC_PSERIES */
···
 * If none is found, do a Linux page fault. Linux page faults can happen in
 * kernel mode due to user copy operations of course.
 *
+ * KVM: The KVM HDSI handler may perform a load with MSR[DR]=1 in guest
+ * MMU context, which may cause a DSI in the host, which must go to the
+ * KVM handler. MSR[IR] is not enabled, so the real-mode handler will
+ * always be used regardless of AIL setting.
+ *
 * - Radix MMU
 *   The hardware loads from the Linux page table directly, so a fault goes
 *   immediately to Linux page fault.
···
 	IVEC=0x300
 	IDAR=1
 	IDSISR=1
-#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 	IKVM_SKIP=1
 	IKVM_REAL=1
-#endif
 INT_DEFINE_END(data_access)

 EXC_REAL_BEGIN(data_access, 0x300, 0x80)
···
 * ppc64_bolted_size (first segment). The kernel handler must avoid stomping
 * on user-handler data structures.
 *
+ * KVM: Same as 0x300, DSLB must test for KVM guest.
+ *
 * A dedicated save area EXSLB is used (XXX: but it actually need not be
 * these days, we could use EXGEN).
 */
···
 	IAREA=PACA_EXSLB
 	IRECONCILE=0
 	IDAR=1
-#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 	IKVM_SKIP=1
 	IKVM_REAL=1
-#endif
 INT_DEFINE_END(data_access_slb)

 EXC_REAL_BEGIN(data_access_slb, 0x380, 0x80)
···
 		.paddr = paddr
 	};

-	if (uv_call(0, (u64)&uvcb))
+	if (uv_call(0, (u64)&uvcb)) {
+		/*
+		 * Older firmware uses 107/d as an indication of a non secure
+		 * page. Let us emulate the newer variant (no-op).
+		 */
+		if (uvcb.header.rc == 0x107 && uvcb.header.rrc == 0xd)
+			return 0;
 		return -EINVAL;
+	}
 	return 0;
 }
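The hunk above treats one specific return-code pair from a failed ultravisor call as a successful no-op, instead of propagating -EINVAL. A compact model of that error-handling shape (hypothetical helper, not the s390 API):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for the UV call control-block header. */
struct uv_header { unsigned short rc, rrc; };

/* Mirrors the fix: a failing call that reports rc=0x107/rrc=0xd
 * ("page is already non secure" on older firmware) is treated as a
 * successful no-op; every other failure stays -EINVAL. */
static int destroy_page(int call_failed, struct uv_header h)
{
	if (call_failed) {
		if (h.rc == 0x107 && h.rrc == 0xd)
			return 0;
		return -EINVAL;
	}
	return 0;
}
```

Whitelisting the one known-benign code pair keeps the strict error path for everything else, which is why the check sits inside the failure branch rather than before the call.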
···
 		return -EIO;
 	}
 	kvm->arch.gmap->guest_handle = uvcb.guest_handle;
-	atomic_set(&kvm->mm->context.is_protected, 1);
 	return 0;
 }
···
 	*rrc = uvcb.header.rrc;
 	KVM_UV_EVENT(kvm, 3, "PROTVIRT VM SET PARMS: rc %x rrc %x",
 		     *rc, *rrc);
+	if (!cc)
+		atomic_set(&kvm->mm->context.is_protected, 1);
 	return cc ? -EINVAL : 0;
 }
arch/s390/mm/gmap.c | +2
···
 #include <linux/sched/mm.h>
 void s390_reset_acc(struct mm_struct *mm)
 {
+	if (!mm_is_protected(mm))
+		return;
 	/*
 	 * we might be called during
 	 * reset: we walk the pages and clear
···
 	return find_matching_signature(mc, csig, cpf);
 }

-/*
- * Given CPU signature and a microcode patch, this function finds if the
- * microcode patch has matching family and model with the CPU.
- *
- * %true - if there's a match
- * %false - otherwise
- */
-static bool microcode_matches(struct microcode_header_intel *mc_header,
-			      unsigned long sig)
-{
-	unsigned long total_size = get_totalsize(mc_header);
-	unsigned long data_size = get_datasize(mc_header);
-	struct extended_sigtable *ext_header;
-	unsigned int fam_ucode, model_ucode;
-	struct extended_signature *ext_sig;
-	unsigned int fam, model;
-	int ext_sigcount, i;
-
-	fam   = x86_family(sig);
-	model = x86_model(sig);
-
-	fam_ucode   = x86_family(mc_header->sig);
-	model_ucode = x86_model(mc_header->sig);
-
-	if (fam == fam_ucode && model == model_ucode)
-		return true;
-
-	/* Look for ext. headers: */
-	if (total_size <= data_size + MC_HEADER_SIZE)
-		return false;
-
-	ext_header = (void *) mc_header + data_size + MC_HEADER_SIZE;
-	ext_sig    = (void *)ext_header + EXT_HEADER_SIZE;
-	ext_sigcount = ext_header->count;
-
-	for (i = 0; i < ext_sigcount; i++) {
-		fam_ucode   = x86_family(ext_sig->sig);
-		model_ucode = x86_model(ext_sig->sig);
-
-		if (fam == fam_ucode && model == model_ucode)
-			return true;
-
-		ext_sig++;
-	}
-	return false;
-}
-
 static struct ucode_patch *memdup_patch(void *data, unsigned int size)
 {
 	struct ucode_patch *p;
···
 	return p;
 }

-static void save_microcode_patch(void *data, unsigned int size)
+static void save_microcode_patch(struct ucode_cpu_info *uci, void *data, unsigned int size)
 {
 	struct microcode_header_intel *mc_hdr, *mc_saved_hdr;
 	struct ucode_patch *iter, *tmp, *p = NULL;
···
 	}

 	if (!p)
+		return;
+
+	if (!find_matching_signature(p->data, uci->cpu_sig.sig, uci->cpu_sig.pf))
 		return;

 	/*
···

 		size -= mc_size;

-		if (!microcode_matches(mc_header, uci->cpu_sig.sig)) {
+		if (!find_matching_signature(data, uci->cpu_sig.sig,
+					     uci->cpu_sig.pf)) {
 			data += mc_size;
 			continue;
 		}

 		if (save) {
-			save_microcode_patch(data, mc_size);
+			save_microcode_patch(uci, data, mc_size);
 			goto next;
 		}
···
 * Save this microcode patch. It will be loaded early when a CPU is
 * hot-added or resumes.
 */
-static void save_mc_for_early(u8 *mc, unsigned int size)
+static void save_mc_for_early(struct ucode_cpu_info *uci, u8 *mc, unsigned int size)
 {
 	/* Synchronization during CPU hotplug. */
 	static DEFINE_MUTEX(x86_cpu_microcode_mutex);

 	mutex_lock(&x86_cpu_microcode_mutex);

-	save_microcode_patch(mc, size);
+	save_microcode_patch(uci, mc, size);
 	show_saved_mc();

 	mutex_unlock(&x86_cpu_microcode_mutex);
···
 	 * permanent memory. So it will be loaded early when a CPU is hot added
 	 * or resumes.
 	 */
-	save_mc_for_early(new_mc, new_mc_size);
+	save_mc_for_early(uci, new_mc, new_mc_size);

 	pr_debug("CPU%d found a matching microcode update with version 0x%x (current=0x%x)\n",
 		 cpu, new_rev, uci->cpu_sig.rev);
+19-4
arch/x86/kernel/dumpstack.c
···7878 if (!user_mode(regs))7979 return copy_from_kernel_nofault(buf, (u8 *)src, nbytes);80808181+ /* The user space code from other tasks cannot be accessed. */8282+ if (regs != task_pt_regs(current))8383+ return -EPERM;8184 /*8285 * Make sure userspace isn't trying to trick us into dumping kernel8386 * memory by pointing the userspace instruction pointer at it.···8885 if (__chk_range_not_ok(src, nbytes, TASK_SIZE_MAX))8986 return -EINVAL;90878888+ /*8989+ * Even if named copy_from_user_nmi() this can be invoked from9090+ * other contexts and will not try to resolve a pagefault, which is9191+ * the correct thing to do here as this code can be called from any9292+ * context.9393+ */9194 return copy_from_user_nmi(buf, (void __user *)src, nbytes);9295}9396···124115 u8 opcodes[OPCODE_BUFSIZE];125116 unsigned long prologue = regs->ip - PROLOGUE_SIZE;126117127127- if (copy_code(regs, opcodes, prologue, sizeof(opcodes))) {128128- printk("%sCode: Unable to access opcode bytes at RIP 0x%lx.\n",129129- loglvl, prologue);130130- } else {118118+ switch (copy_code(regs, opcodes, prologue, sizeof(opcodes))) {119119+ case 0:131120 printk("%sCode: %" __stringify(PROLOGUE_SIZE) "ph <%02x> %"132121 __stringify(EPILOGUE_SIZE) "ph\n", loglvl, opcodes,133122 opcodes[PROLOGUE_SIZE], opcodes + PROLOGUE_SIZE + 1);123123+ break;124124+ case -EPERM:125125+ /* No access to the user space stack of other tasks. Ignore. */126126+ break;127127+ default:128128+ printk("%sCode: Unable to access opcode bytes at RIP 0x%lx.\n",129129+ loglvl, prologue);130130+ break;134131 }135132}136133
+1-7
arch/x86/kernel/tboot.c
···514514 if (!tboot_enabled())515515 return 0;516516517517- if (intel_iommu_tboot_noforce)518518- return 1;519519-520520- if (no_iommu || swiotlb || dmar_disabled)517517+ if (no_iommu || dmar_disabled)521518 pr_warn("Forcing Intel-IOMMU to enabled\n");522519523520 dmar_disabled = 0;524524-#ifdef CONFIG_SWIOTLB525525- swiotlb = 0;526526-#endif527521 no_iommu = 0;528522529523 return 1;
+34-51
arch/x86/kvm/irq.c
···4040 * check if there is pending interrupt from4141 * non-APIC source without intack.4242 */4343-static int kvm_cpu_has_extint(struct kvm_vcpu *v)4444-{4545- u8 accept = kvm_apic_accept_pic_intr(v);4646-4747- if (accept) {4848- if (irqchip_split(v->kvm))4949- return pending_userspace_extint(v);5050- else5151- return v->kvm->arch.vpic->output;5252- } else5353- return 0;5454-}5555-5656-/*5757- * check if there is injectable interrupt:5858- * when virtual interrupt delivery enabled,5959- * interrupt from apic will handled by hardware,6060- * we don't need to check it here.6161- */6262-int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v)4343+int kvm_cpu_has_extint(struct kvm_vcpu *v)6344{6445 /*6565- * FIXME: interrupt.injected represents an interrupt that it's4646+ * FIXME: interrupt.injected represents an interrupt whose6647 * side-effects have already been applied (e.g. bit from IRR6748 * already moved to ISR). Therefore, it is incorrect to rely6849 * on interrupt.injected to know if there is a pending···5675 if (!lapic_in_kernel(v))5776 return v->arch.interrupt.injected;58777878+ if (!kvm_apic_accept_pic_intr(v))7979+ return 0;8080+8181+ if (irqchip_split(v->kvm))8282+ return pending_userspace_extint(v);8383+ else8484+ return v->kvm->arch.vpic->output;8585+}8686+8787+/*8888+ * check if there is injectable interrupt:8989+ * when virtual interrupt delivery enabled,9090+ * interrupt from apic will handled by hardware,9191+ * we don't need to check it here.9292+ */9393+int kvm_cpu_has_injectable_intr(struct kvm_vcpu *v)9494+{5995 if (kvm_cpu_has_extint(v))6096 return 1;6197···8991 */9092int kvm_cpu_has_interrupt(struct kvm_vcpu *v)9193{9292- /*9393- * FIXME: interrupt.injected represents an interrupt that it's9494- * side-effects have already been applied (e.g. bit from IRR9595- * already moved to ISR). 
Therefore, it is incorrect to rely9696- * on interrupt.injected to know if there is a pending9797- * interrupt in the user-mode LAPIC.9898- * This leads to nVMX/nSVM not be able to distinguish9999- * if it should exit from L2 to L1 on EXTERNAL_INTERRUPT on100100- * pending interrupt or should re-inject an injected101101- * interrupt.102102- */103103- if (!lapic_in_kernel(v))104104- return v->arch.interrupt.injected;105105-10694 if (kvm_cpu_has_extint(v))10795 return 1;10896···102118 */103119static int kvm_cpu_get_extint(struct kvm_vcpu *v)104120{105105- if (kvm_cpu_has_extint(v)) {106106- if (irqchip_split(v->kvm)) {107107- int vector = v->arch.pending_external_vector;108108-109109- v->arch.pending_external_vector = -1;110110- return vector;111111- } else112112- return kvm_pic_read_irq(v->kvm); /* PIC */113113- } else121121+ if (!kvm_cpu_has_extint(v)) {122122+ WARN_ON(!lapic_in_kernel(v));114123 return -1;124124+ }125125+126126+ if (!lapic_in_kernel(v))127127+ return v->arch.interrupt.nr;128128+129129+ if (irqchip_split(v->kvm)) {130130+ int vector = v->arch.pending_external_vector;131131+132132+ v->arch.pending_external_vector = -1;133133+ return vector;134134+ } else135135+ return kvm_pic_read_irq(v->kvm); /* PIC */115136}116137117138/*···124135 */125136int kvm_cpu_get_interrupt(struct kvm_vcpu *v)126137{127127- int vector;128128-129129- if (!lapic_in_kernel(v))130130- return v->arch.interrupt.nr;131131-132132- vector = kvm_cpu_get_extint(v);133133-138138+ int vector = kvm_cpu_get_extint(v);134139 if (vector != -1)135140 return vector; /* PIC */136141
···4051405140524052static int kvm_cpu_accept_dm_intr(struct kvm_vcpu *vcpu)40534053{40544054+ /*40554055+ * We can accept userspace's request for interrupt injection40564056+ * as long as we have a place to store the interrupt number.40574057+ * The actual injection will happen when the CPU is able to40584058+ * deliver the interrupt.40594059+ */40604060+ if (kvm_cpu_has_extint(vcpu))40614061+ return false;40624062+40634063+ /* Acknowledging ExtINT does not happen if LINT0 is masked. */40544064 return (!lapic_in_kernel(vcpu) ||40554065 kvm_apic_accept_pic_intr(vcpu));40564066}4057406740584058-/*40594059- * if userspace requested an interrupt window, check that the40604060- * interrupt window is open.40614061- *40624062- * No need to exit to userspace if we already have an interrupt queued.40634063- */40644068static int kvm_vcpu_ready_for_interrupt_injection(struct kvm_vcpu *vcpu)40654069{40664070 return kvm_arch_interrupt_allowed(vcpu) &&40674067- !kvm_cpu_has_interrupt(vcpu) &&40684068- !kvm_event_needs_reinjection(vcpu) &&40694071 kvm_cpu_accept_dm_intr(vcpu);40704072}40714073
···93939494void xen_uninit_lock_cpu(int cpu)9595{9696+ int irq;9797+9698 if (!xen_pvspin)9799 return;981009999- unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);101101+ /*102102+ * When booting the kernel with 'mitigations=auto,nosmt', the secondary103103+ * CPUs are not activated, and lock_kicker_irq is not initialized.104104+ */105105+ irq = per_cpu(lock_kicker_irq, cpu);106106+ if (irq == -1)107107+ return;108108+109109+ unbind_from_irqhandler(irq, NULL);100110 per_cpu(lock_kicker_irq, cpu) = -1;101111 kfree(per_cpu(irq_name, cpu));102112 per_cpu(irq_name, cpu) = NULL;
···225225 /* release the tag's ownership to the req cloned from */226226 spin_lock_irqsave(&fq->mq_flush_lock, flags);227227228228- WRITE_ONCE(flush_rq->state, MQ_RQ_IDLE);229228 if (!refcount_dec_and_test(&flush_rq->ref)) {230229 fq->rq_status = error;231230 spin_unlock_irqrestore(&fq->mq_flush_lock, flags);232231 return;233232 }234233234234+ /*235235+ * Flush request has to be marked as IDLE when it is really ended236236+ * because its .end_io() is called from timeout code path too for237237+ * avoiding use-after-free.238238+ */239239+ WRITE_ONCE(flush_rq->state, MQ_RQ_IDLE);235240 if (fq->rq_status != BLK_STS_OK)236241 error = fq->rq_status;237242
+7
block/keyslot-manager.c
···103103 spin_lock_init(&ksm->idle_slots_lock);104104105105 slot_hashtable_size = roundup_pow_of_two(num_slots);106106+ /*107107+ * hash_ptr() assumes bits != 0, so ensure the hash table has at least 2108108+ * buckets. This only makes a difference when there is only 1 keyslot.109109+ */110110+ if (slot_hashtable_size < 2)111111+ slot_hashtable_size = 2;112112+106113 ksm->log_slot_ht_size = ilog2(slot_hashtable_size);107114 ksm->slot_hashtable = kvmalloc_array(slot_hashtable_size,108115 sizeof(ksm->slot_hashtable[0]),
···4444 * iort_set_fwnode() - Create iort_fwnode and use it to register4545 * iommu data in the iort_fwnode_list4646 *4747- * @node: IORT table node associated with the IOMMU4747+ * @iort_node: IORT table node associated with the IOMMU4848 * @fwnode: fwnode associated with the IORT node4949 *5050 * Returns: 0 on success···673673/**674674 * iort_get_device_domain() - Find MSI domain related to a device675675 * @dev: The device.676676- * @req_id: Requester ID for the device.676676+ * @id: Requester ID for the device.677677+ * @bus_token: irq domain bus token.677678 *678679 * Returns: the MSI domain for this device, NULL otherwise679680 */···11371136 *11381137 * @dev: device to configure11391138 * @dma_addr: device DMA address result pointer11401140- * @size: DMA range size result pointer11391139+ * @dma_size: DMA range size result pointer11411140 */11421141void iort_dma_setup(struct device *dev, u64 *dma_addr, u64 *dma_size)11431142{···15271526/**15281527 * iort_add_platform_device() - Allocate a platform device for IORT node15291528 * @node: Pointer to device ACPI IORT node15291529+ * @ops: Pointer to IORT device config struct15301530 *15311531 * Returns: 0 on success, <0 failure15321532 */
···236236 if (!handle || !handle->perf_ops)237237 return -ENODEV;238238239239+#ifdef CONFIG_COMMON_CLK239240 /* dummy clock provider as needed by OPP if clocks property is used */240241 if (of_find_property(dev->of_node, "#clock-cells", NULL))241242 devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, NULL);243243+#endif242244243245 ret = cpufreq_register_driver(&scmi_cpufreq_driver);244246 if (ret) {245245- dev_err(&sdev->dev, "%s: registering cpufreq failed, err: %d\n",247247+ dev_err(dev, "%s: registering cpufreq failed, err: %d\n",246248 __func__, ret);247249 }248250
-1
drivers/dax/Kconfig
···5050 Say M if unsure.51515252config DEV_DAX_HMEM_DEVICES5353- depends on NUMA_KEEP_MEMINFO # for phys_to_target_node()5453 depends on DEV_DAX_HMEM && DAX=y5554 def_bool y5655
+9-8
drivers/dma/dmaengine.c
···10391039static int __dma_async_device_channel_register(struct dma_device *device,10401040 struct dma_chan *chan)10411041{10421042- int rc = 0;10421042+ int rc;1043104310441044 chan->local = alloc_percpu(typeof(*chan->local));10451045 if (!chan->local)10461046- goto err_out;10461046+ return -ENOMEM;10471047 chan->dev = kzalloc(sizeof(*chan->dev), GFP_KERNEL);10481048 if (!chan->dev) {10491049- free_percpu(chan->local);10501050- chan->local = NULL;10511051- goto err_out;10491049+ rc = -ENOMEM;10501050+ goto err_free_local;10521051 }1053105210541053 /*···10601061 if (chan->chan_id < 0) {10611062 pr_err("%s: unable to alloc ida for chan: %d\n",10621063 __func__, chan->chan_id);10631063- goto err_out;10641064+ rc = chan->chan_id;10651065+ goto err_free_dev;10641066 }1065106710661068 chan->dev->device.class = &dma_devclass;···10821082 mutex_lock(&device->chan_mutex);10831083 ida_free(&device->chan_ida, chan->chan_id);10841084 mutex_unlock(&device->chan_mutex);10851085- err_out:10861086- free_percpu(chan->local);10851085+ err_free_dev:10871086 kfree(chan->dev);10871087+ err_free_local:10881088+ free_percpu(chan->local);10881089 return rc;10891090}10901091
···103103 u32 priority;104104 enum idxd_wq_state state;105105 unsigned long flags;106106- union wqcfg wqcfg;106106+ union wqcfg *wqcfg;107107 u32 vec_ptr; /* interrupt steering */108108 struct dsa_hw_desc **hw_descs;109109 int num_descs;···183183 int max_wq_size;184184 int token_limit;185185 int nr_tokens; /* non-reserved tokens */186186+ unsigned int wqcfg_size;186187187188 union sw_err_reg sw_err;188189 wait_queue_head_t cmd_waitq;
+5
drivers/dma/idxd/init.c
···178178 wq->idxd_cdev.minor = -1;179179 wq->max_xfer_bytes = idxd->max_xfer_bytes;180180 wq->max_batch_size = idxd->max_batch_size;181181+ wq->wqcfg = devm_kzalloc(dev, idxd->wqcfg_size, GFP_KERNEL);182182+ if (!wq->wqcfg)183183+ return -ENOMEM;181184 }182185183186 for (i = 0; i < idxd->max_engines; i++) {···254251 dev_dbg(dev, "total workqueue size: %u\n", idxd->max_wq_size);255252 idxd->max_wqs = idxd->hw.wq_cap.num_wqs;256253 dev_dbg(dev, "max workqueues: %u\n", idxd->max_wqs);254254+ idxd->wqcfg_size = 1 << (idxd->hw.wq_cap.wqcfg_size + IDXD_WQCFG_MIN);255255+ dev_dbg(dev, "wqcfg size: %u\n", idxd->wqcfg_size);257256258257 /* reading operation capabilities */259258 for (i = 0; i < 4; i++) {
+23-2
drivers/dma/idxd/registers.h
···8899#define IDXD_MMIO_BAR 01010#define IDXD_WQ_BAR 21111-#define IDXD_PORTAL_SIZE 0x40001111+#define IDXD_PORTAL_SIZE PAGE_SIZE12121313/* MMIO Device BAR0 Registers */1414#define IDXD_VER_OFFSET 0x00···4343 struct {4444 u64 total_wq_size:16;4545 u64 num_wqs:8;4646- u64 rsvd:24;4646+ u64 wqcfg_size:4;4747+ u64 rsvd:20;4748 u64 shared_mode:1;4849 u64 dedicated_mode:1;4950 u64 rsvd2:1;···5655 u64 bits;5756} __packed;5857#define IDXD_WQCAP_OFFSET 0x205858+#define IDXD_WQCFG_MIN 559596060union group_cap_reg {6161 struct {···335333 };336334 u32 bits[8];337335} __packed;336336+337337+/*338338+ * This macro calculates the offset into the WQCFG register339339+ * idxd - struct idxd *340340+ * n - wq id341341+ * ofs - the index of the 32b dword for the config register342342+ *343343+ * The WQCFG register block is divided into groups per each wq. The n index344344+ * allows us to move to the register group that's for that particular wq.345345+ * Each register is 32bits. The ofs gives us the number of register to access.346346+ */347347+#define WQCFG_OFFSET(_idxd_dev, n, ofs) \348348+({\349349+ typeof(_idxd_dev) __idxd_dev = (_idxd_dev); \350350+ (__idxd_dev)->wqcfg_offset + (n) * (__idxd_dev)->wqcfg_size + sizeof(u32) * (ofs); \351351+})352352+353353+#define WQCFG_STRIDES(_idxd_dev) ((_idxd_dev)->wqcfg_size / sizeof(u32))354354+338355#endif
+1-1
drivers/dma/idxd/submit.c
···7474 if (idxd->state != IDXD_DEV_ENABLED)7575 return -EIO;76767777- portal = wq->dportal + idxd_get_wq_portal_offset(IDXD_PORTAL_UNLIMITED);7777+ portal = wq->dportal;7878 /*7979 * The wmb() flushes writes to coherent DMA data before possibly8080 * triggering a DMA read. The wmb() is necessary even on UP because
-10
drivers/dma/ioat/dca.c
···4040#define DCA2_TAG_MAP_BYTE3 0x824141#define DCA2_TAG_MAP_BYTE4 0x8242424343-/* verify if tag map matches expected values */4444-static inline int dca2_tag_map_valid(u8 *tag_map)4545-{4646- return ((tag_map[0] == DCA2_TAG_MAP_BYTE0) &&4747- (tag_map[1] == DCA2_TAG_MAP_BYTE1) &&4848- (tag_map[2] == DCA2_TAG_MAP_BYTE2) &&4949- (tag_map[3] == DCA2_TAG_MAP_BYTE3) &&5050- (tag_map[4] == DCA2_TAG_MAP_BYTE4));5151-}5252-5343/*5444 * "Legacy" DCA systems do not implement the DCA register set in the5545 * I/OAT device. Software needs direct support for their tag mappings.
+1-1
drivers/dma/pl330.c
···27992799 * If burst size is smaller than bus width then make sure we only28002800 * transfer one at a time to avoid a burst stradling an MFIFO entry.28012801 */28022802- if (desc->rqcfg.brst_size * 8 < pl330->pcfg.data_bus_width)28022802+ if (burst * 8 < pl330->pcfg.data_bus_width)28032803 desc->rqcfg.brst_len = 1;2804280428052805 desc->bytes_requested = len;
···15221522 }15231523}1524152415251525+/* Currently used by omap2 & 3 to block deeper SoC idle states */15261526+static bool omap_dma_busy(struct omap_dmadev *od)15271527+{15281528+ struct omap_chan *c;15291529+ int lch = -1;15301530+15311531+ while (1) {15321532+ lch = find_next_bit(od->lch_bitmap, od->lch_count, lch + 1);15331533+ if (lch >= od->lch_count)15341534+ break;15351535+ c = od->lch_map[lch];15361536+ if (!c)15371537+ continue;15381538+ if (omap_dma_chan_read(c, CCR) & CCR_ENABLE)15391539+ return true;15401540+ }15411541+15421542+ return false;15431543+}15441544+15251545/* Currently only used for omap2. For omap1, also a check for lcd_dma is needed */15261546static int omap_dma_busy_notifier(struct notifier_block *nb,15271547 unsigned long cmd, void *v)15281548{15291549 struct omap_dmadev *od;15301530- struct omap_chan *c;15311531- int lch = -1;1532155015331551 od = container_of(nb, struct omap_dmadev, nb);1534155215351553 switch (cmd) {15361554 case CPU_CLUSTER_PM_ENTER:15371537- while (1) {15381538- lch = find_next_bit(od->lch_bitmap, od->lch_count,15391539- lch + 1);15401540- if (lch >= od->lch_count)15411541- break;15421542- c = od->lch_map[lch];15431543- if (!c)15441544- continue;15451545- if (omap_dma_chan_read(c, CCR) & CCR_ENABLE)15461546- return NOTIFY_BAD;15471547- }15551555+ if (omap_dma_busy(od))15561556+ return NOTIFY_BAD;15481557 break;15491558 case CPU_CLUSTER_PM_ENTER_FAILED:15501559 case CPU_CLUSTER_PM_EXIT:···1604159516051596 switch (cmd) {16061597 case CPU_CLUSTER_PM_ENTER:15981598+ if (omap_dma_busy(od))15991599+ return NOTIFY_BAD;16071600 omap_dma_context_save(od);16081601 break;16091602 case CPU_CLUSTER_PM_ENTER_FAILED:
···6767 unsigned harvest_config;6868 /* store image width to adjust nb memory state */6969 unsigned decode_image_width;7070+ uint32_t keyselect;7071};71727273int amdgpu_uvd_sw_init(struct amdgpu_device *adev);
···3030#include "i915_trace.h"3131#include "intel_breadcrumbs.h"3232#include "intel_context.h"3333+#include "intel_engine_pm.h"3334#include "intel_gt_pm.h"3435#include "intel_gt_requests.h"35363636-static void irq_enable(struct intel_engine_cs *engine)3737+static bool irq_enable(struct intel_engine_cs *engine)3738{3839 if (!engine->irq_enable)3939- return;4040+ return false;40414142 /* Caller disables interrupts */4243 spin_lock(&engine->gt->irq_lock);4344 engine->irq_enable(engine);4445 spin_unlock(&engine->gt->irq_lock);4646+4747+ return true;4548}46494750static void irq_disable(struct intel_engine_cs *engine)···60576158static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)6259{6363- lockdep_assert_held(&b->irq_lock);6464-6565- if (!b->irq_engine || b->irq_armed)6666- return;6767-6868- if (!intel_gt_pm_get_if_awake(b->irq_engine->gt))6060+ /*6161+ * Since we are waiting on a request, the GPU should be busy6262+ * and should have its own rpm reference.6363+ */6464+ if (GEM_WARN_ON(!intel_gt_pm_get_if_awake(b->irq_engine->gt)))6965 return;70667167 /*···7573 */7674 WRITE_ONCE(b->irq_armed, true);77757878- /*7979- * Since we are waiting on a request, the GPU should be busy8080- * and should have its own rpm reference. This is tracked8181- * by i915->gt.awake, we can forgo holding our own wakref8282- * for the interrupt as before i915->gt.awake is released (when8383- * the driver is idle) we disarm the breadcrumbs.8484- */7676+ /* Requests may have completed before we could enable the interrupt. 
*/7777+ if (!b->irq_enabled++ && irq_enable(b->irq_engine))7878+ irq_work_queue(&b->irq_work);7979+}85808686- if (!b->irq_enabled++)8787- irq_enable(b->irq_engine);8181+static void intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)8282+{8383+ if (!b->irq_engine)8484+ return;8585+8686+ spin_lock(&b->irq_lock);8787+ if (!b->irq_armed)8888+ __intel_breadcrumbs_arm_irq(b);8989+ spin_unlock(&b->irq_lock);8890}89919092static void __intel_breadcrumbs_disarm_irq(struct intel_breadcrumbs *b)9193{9292- lockdep_assert_held(&b->irq_lock);9393-9494- if (!b->irq_engine || !b->irq_armed)9595- return;9696-9794 GEM_BUG_ON(!b->irq_enabled);9895 if (!--b->irq_enabled)9996 irq_disable(b->irq_engine);···106105{107106 intel_context_get(ce);108107 list_add_tail(&ce->signal_link, &b->signalers);109109- if (list_is_first(&ce->signal_link, &b->signalers))110110- __intel_breadcrumbs_arm_irq(b);111108}112109113110static void remove_signaling_context(struct intel_breadcrumbs *b,···173174 intel_engine_add_retire(b->irq_engine, tl);174175}175176176176-static bool __signal_request(struct i915_request *rq, struct list_head *signals)177177+static bool __signal_request(struct i915_request *rq)177178{178178- clear_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags);179179-180179 if (!__dma_fence_signal(&rq->fence)) {181180 i915_request_put(rq);182181 return false;183182 }184183185185- list_add_tail(&rq->signal_link, signals);186184 return true;185185+}186186+187187+static struct llist_node *188188+slist_add(struct llist_node *node, struct llist_node *head)189189+{190190+ node->next = head;191191+ return node;187192}188193189194static void signal_irq_work(struct irq_work *work)190195{191196 struct intel_breadcrumbs *b = container_of(work, typeof(*b), irq_work);192197 const ktime_t timestamp = ktime_get();198198+ struct llist_node *signal, *sn;193199 struct intel_context *ce, *cn;194200 struct list_head *pos, *next;195195- LIST_HEAD(signal);201201+202202+ signal = NULL;203203+ if 
(unlikely(!llist_empty(&b->signaled_requests)))204204+ signal = llist_del_all(&b->signaled_requests);196205197206 spin_lock(&b->irq_lock);198207199199- if (list_empty(&b->signalers))208208+ /*209209+ * Keep the irq armed until the interrupt after all listeners are gone.210210+ *211211+ * Enabling/disabling the interrupt is rather costly, roughly a couple212212+ * of hundred microseconds. If we are proactive and enable/disable213213+ * the interrupt around every request that wants a breadcrumb, we214214+ * quickly drown in the extra orders of magnitude of latency imposed215215+ * on request submission.216216+ *217217+ * So we try to be lazy, and keep the interrupts enabled until no218218+ * more listeners appear within a breadcrumb interrupt interval (that219219+ * is until a request completes that no one cares about). The220220+ * observation is that listeners come in batches, and will often221221+ * listen to a bunch of requests in succession. Though note on icl+,222222+ * interrupts are always enabled due to concerns with rc6 being223223+ * dysfunctional with per-engine interrupt masking.224224+ *225225+ * We also try to avoid raising too many interrupts, as they may226226+ * be generated by userspace batches and it is unfortunately rather227227+ * too easy to drown the CPU under a flood of GPU interrupts. 
Thus228228+ * whenever no one appears to be listening, we turn off the interrupts.229229+ * Fewer interrupts should conserve power -- at the very least, fewer230230+ * interrupt draw less ire from other users of the system and tools231231+ * like powertop.232232+ */233233+ if (!signal && b->irq_armed && list_empty(&b->signalers))200234 __intel_breadcrumbs_disarm_irq(b);201201-202202- list_splice_init(&b->signaled_requests, &signal);203235204236 list_for_each_entry_safe(ce, cn, &b->signalers, signal_link) {205237 GEM_BUG_ON(list_empty(&ce->signals));···248218 * spinlock as the callback chain may end up adding249219 * more signalers to the same context or engine.250220 */251251- __signal_request(rq, &signal);221221+ clear_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags);222222+ if (__signal_request(rq))223223+ /* We own signal_node now, xfer to local list */224224+ signal = slist_add(&rq->signal_node, signal);252225 }253226254227 /*···271238272239 spin_unlock(&b->irq_lock);273240274274- list_for_each_safe(pos, next, &signal) {241241+ llist_for_each_safe(signal, sn, signal) {275242 struct i915_request *rq =276276- list_entry(pos, typeof(*rq), signal_link);243243+ llist_entry(signal, typeof(*rq), signal_node);277244 struct list_head cb_list;278245279246 spin_lock(&rq->lock);···284251285252 i915_request_put(rq);286253 }254254+255255+ if (!READ_ONCE(b->irq_armed) && !list_empty(&b->signalers))256256+ intel_breadcrumbs_arm_irq(b);287257}288258289259struct intel_breadcrumbs *···300264301265 spin_lock_init(&b->irq_lock);302266 INIT_LIST_HEAD(&b->signalers);303303- INIT_LIST_HEAD(&b->signaled_requests);267267+ init_llist_head(&b->signaled_requests);304268305269 init_irq_work(&b->irq_work, signal_irq_work);306270···328292329293void intel_breadcrumbs_park(struct intel_breadcrumbs *b)330294{331331- unsigned long flags;332332-333333- if (!READ_ONCE(b->irq_armed))334334- return;335335-336336- spin_lock_irqsave(&b->irq_lock, flags);337337- __intel_breadcrumbs_disarm_irq(b);338338- 
spin_unlock_irqrestore(&b->irq_lock, flags);339339-340340- if (!list_empty(&b->signalers))341341- irq_work_queue(&b->irq_work);295295+ /* Kick the work once more to drain the signalers */296296+ irq_work_sync(&b->irq_work);297297+ while (unlikely(READ_ONCE(b->irq_armed))) {298298+ local_irq_disable();299299+ signal_irq_work(&b->irq_work);300300+ local_irq_enable();301301+ cond_resched();302302+ }303303+ GEM_BUG_ON(!list_empty(&b->signalers));342304}343305344306void intel_breadcrumbs_free(struct intel_breadcrumbs *b)345307{308308+ irq_work_sync(&b->irq_work);309309+ GEM_BUG_ON(!list_empty(&b->signalers));310310+ GEM_BUG_ON(b->irq_armed);346311 kfree(b);347312}348313···364327 * its signal completion.365328 */366329 if (__request_completed(rq)) {367367- if (__signal_request(rq, &b->signaled_requests))330330+ if (__signal_request(rq) &&331331+ llist_add(&rq->signal_node, &b->signaled_requests))368332 irq_work_queue(&b->irq_work);369333 return;370334 }···400362 GEM_BUG_ON(!check_signal_order(ce, rq));401363 set_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags);402364403403- /* Check after attaching to irq, interrupt may have already fired. */404404- if (__request_completed(rq))405405- irq_work_queue(&b->irq_work);365365+ /*366366+ * Defer enabling the interrupt to after HW submission and recheck367367+ * the request as it may have completed and raised the interrupt as368368+ * we were attaching it into the lists.369369+ */370370+ irq_work_queue(&b->irq_work);406371}407372408373bool i915_request_enable_breadcrumb(struct i915_request *rq)
+1-1
drivers/gpu/drm/i915/gt/intel_breadcrumbs_types.h
···3535 struct intel_engine_cs *irq_engine;36363737 struct list_head signalers;3838- struct list_head signaled_requests;3838+ struct llist_head signaled_requests;39394040 struct irq_work irq_work; /* for use from inside irq_lock */4141
+54-7
drivers/gpu/drm/i915/gt/intel_lrc.c
···182182struct virtual_engine {183183 struct intel_engine_cs base;184184 struct intel_context context;185185+ struct rcu_work rcu;185186186187 /*187188 * We allow only a single request through the virtual engine at a time···54265425 return &ve->base.execlists.default_priolist.requests[0];54275426}5428542754295429-static void virtual_context_destroy(struct kref *kref)54285428+static void rcu_virtual_context_destroy(struct work_struct *wrk)54305429{54315430 struct virtual_engine *ve =54325432- container_of(kref, typeof(*ve), context.ref);54315431+ container_of(wrk, typeof(*ve), rcu.work);54335432 unsigned int n;5434543354355435- GEM_BUG_ON(!list_empty(virtual_queue(ve)));54365436- GEM_BUG_ON(ve->request);54375434 GEM_BUG_ON(ve->context.inflight);5438543554365436+ /* Preempt-to-busy may leave a stale request behind. */54375437+ if (unlikely(ve->request)) {54385438+ struct i915_request *old;54395439+54405440+ spin_lock_irq(&ve->base.active.lock);54415441+54425442+ old = fetch_and_zero(&ve->request);54435443+ if (old) {54445444+ GEM_BUG_ON(!i915_request_completed(old));54455445+ __i915_request_submit(old);54465446+ i915_request_put(old);54475447+ }54485448+54495449+ spin_unlock_irq(&ve->base.active.lock);54505450+ }54515451+54525452+ /*54535453+ * Flush the tasklet in case it is still running on another core.54545454+ *54555455+ * This needs to be done before we remove ourselves from the siblings'54565456+ * rbtrees as in the case it is running in parallel, it may reinsert54575457+ * the rb_node into a sibling.54585458+ */54595459+ tasklet_kill(&ve->base.execlists.tasklet);54605460+54615461+ /* Decouple ourselves from the siblings, no more access allowed. 
*/54395462 for (n = 0; n < ve->num_siblings; n++) {54405463 struct intel_engine_cs *sibling = ve->siblings[n];54415464 struct rb_node *node = &ve->nodes[sibling->id].rb;54425442- unsigned long flags;5443546554445466 if (RB_EMPTY_NODE(node))54455467 continue;5446546854475447- spin_lock_irqsave(&sibling->active.lock, flags);54695469+ spin_lock_irq(&sibling->active.lock);5448547054495471 /* Detachment is lazily performed in the execlists tasklet */54505472 if (!RB_EMPTY_NODE(node))54515473 rb_erase_cached(node, &sibling->execlists.virtual);5452547454535453- spin_unlock_irqrestore(&sibling->active.lock, flags);54755475+ spin_unlock_irq(&sibling->active.lock);54545476 }54555477 GEM_BUG_ON(__tasklet_is_scheduled(&ve->base.execlists.tasklet));54785478+ GEM_BUG_ON(!list_empty(virtual_queue(ve)));5456547954575480 if (ve->context.state)54585481 __execlists_context_fini(&ve->context);54595482 intel_context_fini(&ve->context);5460548354845484+ intel_breadcrumbs_free(ve->base.breadcrumbs);54615485 intel_engine_free_request_pool(&ve->base);5462548654635487 kfree(ve->bonds);54645488 kfree(ve);54895489+}54905490+54915491+static void virtual_context_destroy(struct kref *kref)54925492+{54935493+ struct virtual_engine *ve =54945494+ container_of(kref, typeof(*ve), context.ref);54955495+54965496+ GEM_BUG_ON(!list_empty(&ve->context.signals));54975497+54985498+ /*54995499+ * When destroying the virtual engine, we have to be aware that55005500+ * it may still be in use from an hardirq/softirq context causing55015501+ * the resubmission of a completed request (background completion55025502+ * due to preempt-to-busy). Before we can free the engine, we need55035503+ * to flush the submission code and tasklets that are still potentially55045504+ * accessing the engine. 
Flushing the tasklets requires process context,55055505+ * and since we can guard the resubmit onto the engine with an RCU read55065506+ * lock, we can delegate the free of the engine to an RCU worker.55075507+ */55085508+ INIT_RCU_WORK(&ve->rcu, rcu_virtual_context_destroy);55095509+ queue_rcu_work(system_wq, &ve->rcu);54655510}5466551154675512static void virtual_engine_initial_hint(struct virtual_engine *ve)
+3-2
drivers/gpu/drm/i915/gt/intel_mocs.c
···243243 * only, __init_mocs_table() take care to program unused index with244244 * this entry.245245 */246246- MOCS_ENTRY(1, LE_3_WB | LE_TC_1_LLC | LE_LRUM(3),247247- L3_3_WB),246246+ MOCS_ENTRY(I915_MOCS_PTE,247247+ LE_0_PAGETABLE | LE_TC_0_PAGETABLE,248248+ L3_1_UC),248249 GEN11_MOCS_ENTRIES,249250250251 /* Implicitly enable L1 - HDC:L1 + L3 + LLC */
···255255#define F_CMD_ACCESS (1 << 3)256256/* This reg has been accessed by a VM */257257#define F_ACCESSED (1 << 4)258258-/* This reg has been accessed through GPU commands */258258+/* This reg could be accessed by unaligned address */259259#define F_UNALIGN (1 << 6)260260/* This reg is in GVT's mmio save-restor list and in hardware261261 * logical context image
+3-1
drivers/gpu/drm/i915/gvt/kvmgt.c
···829829 /* Take a module reference as mdev core doesn't take830830 * a reference for vendor driver.831831 */832832- if (!try_module_get(THIS_MODULE))832832+ if (!try_module_get(THIS_MODULE)) {833833+ ret = -ENODEV;833834 goto undo_group;835835+ }834836835837 ret = kvmgt_guest_init(mdev);836838 if (ret)
+2-1
drivers/gpu/drm/i915/gvt/vgpu.c
···439439440440 if (IS_BROADWELL(dev_priv))441441 ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_B);442442- else442442+ /* FixMe: Re-enable APL/BXT once vfio_edid enabled */443443+ else if (!IS_BROXTON(dev_priv))443444 ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D);444445 if (ret)445446 goto out_clean_sched_policy;
···176176 struct intel_context *context;177177 struct intel_ring *ring;178178 struct intel_timeline __rcu *timeline;179179- struct list_head signal_link;179179+180180+ union {181181+ struct list_head signal_link;182182+ struct llist_node signal_node;183183+ };180184181185 /*182186 * The rcu epoch of when this request was allocated. Used to judiciously
-13
drivers/gpu/drm/i915/intel_pm.c
···7118711871197119static void tgl_init_clock_gating(struct drm_i915_private *dev_priv)71207120{71217121- u32 vd_pg_enable = 0;71227122- unsigned int i;71237123-71247121 /* Wa_1409120013:tgl */71257122 I915_WRITE(ILK_DPFC_CHICKEN,71267123 ILK_DPFC_CHICKEN_COMP_DUMMY_PIXEL);71277127-71287128- /* This is not a WA. Enable VD HCP & MFX_ENC powergate */71297129- for (i = 0; i < I915_MAX_VCS; i++) {71307130- if (HAS_ENGINE(&dev_priv->gt, _VCS(i)))71317131- vd_pg_enable |= VDN_HCP_POWERGATE_ENABLE(i) |71327132- VDN_MFX_POWERGATE_ENABLE(i);71337133- }71347134-71357135- I915_WRITE(POWERGATE_ENABLE,71367136- I915_READ(POWERGATE_ENABLE) | vd_pg_enable);7137712471387125 /* Wa_1409825376:tgl (pre-prod)*/71397126 if (IS_TGL_DISP_REVID(dev_priv, TGL_REVID_A0, TGL_REVID_B1))
···814814	 *815815	 * XXX(hch): this has no business in a driver and needs to move816816	 * to the device tree.817817+	 *818818+	 * If we have two subsequent calls to dma_direct_set_offset,819819+	 * the second one returns -EINVAL. Unfortunately, this happens when we have two820820+	 * backends in the system, and will result in the driver821821+	 * reporting an error while it has been setup properly before.822822+	 * Ignore EINVAL, but it should really be removed eventually.817823	 */818824	ret = dma_direct_set_offset(drm->dev, PHYS_OFFSET, 0, SZ_4G);819819-	if (ret)825825+	if (ret && ret != -EINVAL)820826		return ret;821827	}822828
+1
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
···208208	phy_node = of_parse_phandle(dev->of_node, "phys", 0);209209	if (!phy_node) {210210		dev_err(dev, "Can't find PHY phandle\n");211211+		ret = -EINVAL;211212		goto err_disable_clk_tmds;212213	}213214
+4
drivers/gpu/drm/vc4/vc4_drv.h
···219219220220 struct drm_modeset_lock ctm_state_lock;221221 struct drm_private_obj ctm_manager;222222+ struct drm_private_obj hvs_channels;222223 struct drm_private_obj load_tracker;223224224225 /* List of vc4_debugfs_info_entry for adding to debugfs once···532531 unsigned int top;533532 unsigned int bottom;534533 } margins;534534+535535+ /* Transitional state below, only valid during atomic commits */536536+ bool update_muxing;535537};536538537539#define VC4_HVS_CHANNEL_DISABLED ((unsigned int)-1)
+48
drivers/gpu/drm/vc4/vc4_hdmi.c
···760760{761761}762762763763+#define WIFI_2_4GHz_CH1_MIN_FREQ 2400000000ULL764764+#define WIFI_2_4GHz_CH1_MAX_FREQ 2422000000ULL765765+766766+static int vc4_hdmi_encoder_atomic_check(struct drm_encoder *encoder,767767+ struct drm_crtc_state *crtc_state,768768+ struct drm_connector_state *conn_state)769769+{770770+ struct drm_display_mode *mode = &crtc_state->adjusted_mode;771771+ struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);772772+ unsigned long long pixel_rate = mode->clock * 1000;773773+ unsigned long long tmds_rate;774774+775775+ if (vc4_hdmi->variant->unsupported_odd_h_timings &&776776+ ((mode->hdisplay % 2) || (mode->hsync_start % 2) ||777777+ (mode->hsync_end % 2) || (mode->htotal % 2)))778778+ return -EINVAL;779779+780780+ /*781781+ * The 1440p@60 pixel rate is in the same range than the first782782+ * WiFi channel (between 2.4GHz and 2.422GHz with 22MHz783783+ * bandwidth). Slightly lower the frequency to bring it out of784784+ * the WiFi range.785785+ */786786+ tmds_rate = pixel_rate * 10;787787+ if (vc4_hdmi->disable_wifi_frequencies &&788788+ (tmds_rate >= WIFI_2_4GHz_CH1_MIN_FREQ &&789789+ tmds_rate <= WIFI_2_4GHz_CH1_MAX_FREQ)) {790790+ mode->clock = 238560;791791+ pixel_rate = mode->clock * 1000;792792+ }793793+794794+ if (pixel_rate > vc4_hdmi->variant->max_pixel_clock)795795+ return -EINVAL;796796+797797+ return 0;798798+}799799+763800static enum drm_mode_status764801vc4_hdmi_encoder_mode_valid(struct drm_encoder *encoder,765802 const struct drm_display_mode *mode)766803{767804 struct vc4_hdmi *vc4_hdmi = encoder_to_vc4_hdmi(encoder);805805+806806+ if (vc4_hdmi->variant->unsupported_odd_h_timings &&807807+ ((mode->hdisplay % 2) || (mode->hsync_start % 2) ||808808+ (mode->hsync_end % 2) || (mode->htotal % 2)))809809+ return MODE_H_ILLEGAL;768810769811 if ((mode->clock * 1000) > vc4_hdmi->variant->max_pixel_clock)770812 return MODE_CLOCK_HIGH;···815773}816774817775static const struct drm_encoder_helper_funcs vc4_hdmi_encoder_helper_funcs 
= {776776+ .atomic_check = vc4_hdmi_encoder_atomic_check,818777 .mode_valid = vc4_hdmi_encoder_mode_valid,819778 .disable = vc4_hdmi_encoder_disable,820779 .enable = vc4_hdmi_encoder_enable,···17371694 vc4_hdmi->hpd_active_low = hpd_gpio_flags & OF_GPIO_ACTIVE_LOW;17381695 }1739169616971697+ vc4_hdmi->disable_wifi_frequencies =16981698+ of_property_read_bool(dev->of_node, "wifi-2.4ghz-coexistence");16991699+17401700 pm_runtime_enable(dev);1741170117421702 drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS);···18631817 PHY_LANE_2,18641818 PHY_LANE_CK,18651819 },18201820+ .unsupported_odd_h_timings = true,1866182118671822 .init_resources = vc5_hdmi_init_resources,18681823 .csc_setup = vc5_hdmi_csc_setup,···18891842 PHY_LANE_CK,18901843 PHY_LANE_2,18911844 },18451845+ .unsupported_odd_h_timings = true,1892184618931847 .init_resources = vc5_hdmi_init_resources,18941848 .csc_setup = vc5_hdmi_csc_setup,
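The arithmetic in `vc4_hdmi_encoder_atomic_check()` above is easy to verify in isolation: the TMDS rate is ten times the pixel rate, and 1440p@60 (a 241.5 MHz pixel clock) puts its 2.415 GHz TMDS rate squarely inside WiFi channel 1. A userspace model under that assumption; `adjust_mode_clock` is an invented name (the driver mutates `mode->clock` in place):

```c
#include <assert.h>

#define WIFI_2_4GHz_CH1_MIN_FREQ 2400000000ULL
#define WIFI_2_4GHz_CH1_MAX_FREQ 2422000000ULL

/* Model of the evasion logic: clock is in kHz, as in struct
 * drm_display_mode. TMDS carries 10 bits per pixel clock cycle,
 * hence the factor of 10. */
static unsigned int adjust_mode_clock(unsigned int clock_khz)
{
	unsigned long long tmds_rate =
		(unsigned long long)clock_khz * 1000 * 10;

	if (tmds_rate >= WIFI_2_4GHz_CH1_MIN_FREQ &&
	    tmds_rate <= WIFI_2_4GHz_CH1_MAX_FREQ)
		return 238560; /* 2.3856 GHz TMDS: just below the WiFi band */

	return clock_khz;
}
```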
+11
drivers/gpu/drm/vc4/vc4_hdmi.h
···6262	 */6363	enum vc4_hdmi_phy_channel phy_lane_mapping[4];64646565+	/* The BCM2711 cannot deal with odd horizontal pixel timings */6666+	bool unsupported_odd_h_timings;6767+6568	/* Callback to get the resources (memory region, interrupts,6669	 * clocks, etc) for that variant.6670	 */···141138142139	int hpd_gpio;143140	bool hpd_active_low;141141+142142+	/*143143+	 * On some systems (like the RPi4), some modes are in the same144144+	 * frequency range as the WiFi channels (1440p@60Hz for145145+	 * example). Should we take evasive actions because that system146146+	 * has a wifi adapter?147147+	 */148148+	bool disable_wifi_frequencies;144149145150	struct cec_adapter *cec_adap;146151	struct cec_msg cec_rx_msg;
+182-64
drivers/gpu/drm/vc4/vc4_kms.c
···2424#include "vc4_drv.h"2525#include "vc4_regs.h"26262727+#define HVS_NUM_CHANNELS 32828+2729struct vc4_ctm_state {2830 struct drm_private_state base;2931 struct drm_color_ctm *ctm;···3533static struct vc4_ctm_state *to_vc4_ctm_state(struct drm_private_state *priv)3634{3735 return container_of(priv, struct vc4_ctm_state, base);3636+}3737+3838+struct vc4_hvs_state {3939+ struct drm_private_state base;4040+ unsigned int unassigned_channels;4141+};4242+4343+static struct vc4_hvs_state *4444+to_vc4_hvs_state(struct drm_private_state *priv)4545+{4646+ return container_of(priv, struct vc4_hvs_state, base);3847}39484049struct vc4_load_tracker_state {···126113 drm_atomic_private_obj_init(&vc4->base, &vc4->ctm_manager, &ctm_state->base,127114 &vc4_ctm_state_funcs);128115129129- return drmm_add_action(&vc4->base, vc4_ctm_obj_fini, NULL);116116+ return drmm_add_action_or_reset(&vc4->base, vc4_ctm_obj_fini, NULL);130117}131118132119/* Converts a DRM S31.32 value to the HW S0.9 format. */···182169 VC4_SET_FIELD(ctm_state->fifo, SCALER_OLEDOFFS_DISPFIFO));183170}184171172172+static struct vc4_hvs_state *173173+vc4_hvs_get_global_state(struct drm_atomic_state *state)174174+{175175+ struct vc4_dev *vc4 = to_vc4_dev(state->dev);176176+ struct drm_private_state *priv_state;177177+178178+ priv_state = drm_atomic_get_private_obj_state(state, &vc4->hvs_channels);179179+ if (IS_ERR(priv_state))180180+ return ERR_CAST(priv_state);181181+182182+ return to_vc4_hvs_state(priv_state);183183+}184184+185185static void vc4_hvs_pv_muxing_commit(struct vc4_dev *vc4,186186 struct drm_atomic_state *state)187187{···239213{240214 struct drm_crtc_state *crtc_state;241215 struct drm_crtc *crtc;242242- unsigned char dsp2_mux = 0;243243- unsigned char dsp3_mux = 3;244244- unsigned char dsp4_mux = 3;245245- unsigned char dsp5_mux = 3;216216+ unsigned char mux;246217 unsigned int i;247218 u32 reg;248219···247224 struct vc4_crtc_state *vc4_state = to_vc4_crtc_state(crtc_state);248225 struct vc4_crtc 
*vc4_crtc = to_vc4_crtc(crtc);249226250250- if (!crtc_state->active)227227+ if (!vc4_state->update_muxing)251228 continue;252229253230 switch (vc4_crtc->data->hvs_output) {254231 case 2:255255- dsp2_mux = (vc4_state->assigned_channel == 2) ? 0 : 1;232232+ mux = (vc4_state->assigned_channel == 2) ? 0 : 1;233233+ reg = HVS_READ(SCALER_DISPECTRL);234234+ HVS_WRITE(SCALER_DISPECTRL,235235+ (reg & ~SCALER_DISPECTRL_DSP2_MUX_MASK) |236236+ VC4_SET_FIELD(mux, SCALER_DISPECTRL_DSP2_MUX));256237 break;257238258239 case 3:259259- dsp3_mux = vc4_state->assigned_channel;240240+ if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED)241241+ mux = 3;242242+ else243243+ mux = vc4_state->assigned_channel;244244+245245+ reg = HVS_READ(SCALER_DISPCTRL);246246+ HVS_WRITE(SCALER_DISPCTRL,247247+ (reg & ~SCALER_DISPCTRL_DSP3_MUX_MASK) |248248+ VC4_SET_FIELD(mux, SCALER_DISPCTRL_DSP3_MUX));260249 break;261250262251 case 4:263263- dsp4_mux = vc4_state->assigned_channel;252252+ if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED)253253+ mux = 3;254254+ else255255+ mux = vc4_state->assigned_channel;256256+257257+ reg = HVS_READ(SCALER_DISPEOLN);258258+ HVS_WRITE(SCALER_DISPEOLN,259259+ (reg & ~SCALER_DISPEOLN_DSP4_MUX_MASK) |260260+ VC4_SET_FIELD(mux, SCALER_DISPEOLN_DSP4_MUX));261261+264262 break;265263266264 case 5:267267- dsp5_mux = vc4_state->assigned_channel;265265+ if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED)266266+ mux = 3;267267+ else268268+ mux = vc4_state->assigned_channel;269269+270270+ reg = HVS_READ(SCALER_DISPDITHER);271271+ HVS_WRITE(SCALER_DISPDITHER,272272+ (reg & ~SCALER_DISPDITHER_DSP5_MUX_MASK) |273273+ VC4_SET_FIELD(mux, SCALER_DISPDITHER_DSP5_MUX));268274 break;269275270276 default:271277 break;272278 }273279 }274274-275275- reg = HVS_READ(SCALER_DISPECTRL);276276- HVS_WRITE(SCALER_DISPECTRL,277277- (reg & ~SCALER_DISPECTRL_DSP2_MUX_MASK) |278278- VC4_SET_FIELD(dsp2_mux, SCALER_DISPECTRL_DSP2_MUX));279279-280280- reg = 
HVS_READ(SCALER_DISPCTRL);281281- HVS_WRITE(SCALER_DISPCTRL,282282- (reg & ~SCALER_DISPCTRL_DSP3_MUX_MASK) |283283- VC4_SET_FIELD(dsp3_mux, SCALER_DISPCTRL_DSP3_MUX));284284-285285- reg = HVS_READ(SCALER_DISPEOLN);286286- HVS_WRITE(SCALER_DISPEOLN,287287- (reg & ~SCALER_DISPEOLN_DSP4_MUX_MASK) |288288- VC4_SET_FIELD(dsp4_mux, SCALER_DISPEOLN_DSP4_MUX));289289-290290- reg = HVS_READ(SCALER_DISPDITHER);291291- HVS_WRITE(SCALER_DISPDITHER,292292- (reg & ~SCALER_DISPDITHER_DSP5_MUX_MASK) |293293- VC4_SET_FIELD(dsp5_mux, SCALER_DISPDITHER_DSP5_MUX));294280}295281296282static void···689657 &load_state->base,690658 &vc4_load_tracker_state_funcs);691659692692- return drmm_add_action(&vc4->base, vc4_load_tracker_obj_fini, NULL);660660+ return drmm_add_action_or_reset(&vc4->base, vc4_load_tracker_obj_fini, NULL);693661}694662695695-#define NUM_OUTPUTS 6696696-#define NUM_CHANNELS 3697697-698698-static int699699-vc4_atomic_check(struct drm_device *dev, struct drm_atomic_state *state)663663+static struct drm_private_state *664664+vc4_hvs_channels_duplicate_state(struct drm_private_obj *obj)700665{701701- unsigned long unassigned_channels = GENMASK(NUM_CHANNELS - 1, 0);666666+ struct vc4_hvs_state *old_state = to_vc4_hvs_state(obj->state);667667+ struct vc4_hvs_state *state;668668+669669+ state = kzalloc(sizeof(*state), GFP_KERNEL);670670+ if (!state)671671+ return NULL;672672+673673+ __drm_atomic_helper_private_obj_duplicate_state(obj, &state->base);674674+675675+ state->unassigned_channels = old_state->unassigned_channels;676676+677677+ return &state->base;678678+}679679+680680+static void vc4_hvs_channels_destroy_state(struct drm_private_obj *obj,681681+ struct drm_private_state *state)682682+{683683+ struct vc4_hvs_state *hvs_state = to_vc4_hvs_state(state);684684+685685+ kfree(hvs_state);686686+}687687+688688+static const struct drm_private_state_funcs vc4_hvs_state_funcs = {689689+ .atomic_duplicate_state = vc4_hvs_channels_duplicate_state,690690+ .atomic_destroy_state = 
vc4_hvs_channels_destroy_state,691691+};692692+693693+static void vc4_hvs_channels_obj_fini(struct drm_device *dev, void *unused)694694+{695695+ struct vc4_dev *vc4 = to_vc4_dev(dev);696696+697697+ drm_atomic_private_obj_fini(&vc4->hvs_channels);698698+}699699+700700+static int vc4_hvs_channels_obj_init(struct vc4_dev *vc4)701701+{702702+ struct vc4_hvs_state *state;703703+704704+ state = kzalloc(sizeof(*state), GFP_KERNEL);705705+ if (!state)706706+ return -ENOMEM;707707+708708+ state->unassigned_channels = GENMASK(HVS_NUM_CHANNELS - 1, 0);709709+ drm_atomic_private_obj_init(&vc4->base, &vc4->hvs_channels,710710+ &state->base,711711+ &vc4_hvs_state_funcs);712712+713713+ return drmm_add_action_or_reset(&vc4->base, vc4_hvs_channels_obj_fini, NULL);714714+}715715+716716+/*717717+ * The BCM2711 HVS has up to 7 outputs connected to the pixelvalves and718718+ * the TXP (and therefore all the CRTCs found on that platform).719719+ *720720+ * The naive (and our initial) implementation would just iterate over721721+ * all the active CRTCs, try to find a suitable FIFO, and then remove it722722+ * from the pool of available FIFOs. However, there are a few corner723723+ * cases that need to be considered:724724+ *725725+ * - When running in a dual-display setup (so with two CRTCs involved),726726+ * we can update the state of a single CRTC (for example by changing727727+ * its mode using xrandr under X11) without affecting the other. In728728+ * this case, the other CRTC wouldn't be in the state at all, so we729729+ * need to consider all the running CRTCs in the DRM device to assign730730+ * a FIFO, not just the one in the state.731731+ *732732+ * - To fix the above, we can't use drm_atomic_get_crtc_state on all733733+ * enabled CRTCs to pull their CRTC state into the global state, since734734+ * a page flip would start considering their vblank to complete. 
Since735735+ * we don't have a guarantee that they are actually active, that736736+ * vblank might never happen, and shouldn't even be considered if we737737+ * want to do a page flip on a single CRTC. That can be tested by738738+ * doing a modetest -v first on HDMI1 and then on HDMI0.739739+ *740740+ * - Since we need the pixelvalve to be disabled and enabled back when741741+ * the FIFO is changed, we should keep the FIFO assigned for as long742742+ * as the CRTC is enabled, only considering it free again once that743743+ * CRTC has been disabled. This can be tested by booting X11 on a744744+ * single display, and changing the resolution down and then back up.745745+ */746746+static int vc4_pv_muxing_atomic_check(struct drm_device *dev,747747+ struct drm_atomic_state *state)748748+{749749+ struct vc4_hvs_state *hvs_new_state;702750 struct drm_crtc_state *old_crtc_state, *new_crtc_state;703751 struct drm_crtc *crtc;704704- int i, ret;752752+ unsigned int i;705753706706- /*707707- * Since the HVS FIFOs are shared across all the pixelvalves and708708- * the TXP (and thus all the CRTCs), we need to pull the current709709- * state of all the enabled CRTCs so that an update to a single710710- * CRTC still keeps the previous FIFOs enabled and assigned to711711- * the same CRTCs, instead of evaluating only the CRTC being712712- * modified.713713- */714714- list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {715715- struct drm_crtc_state *crtc_state;716716-717717- if (!crtc->state->enable)718718- continue;719719-720720- crtc_state = drm_atomic_get_crtc_state(state, crtc);721721- if (IS_ERR(crtc_state))722722- return PTR_ERR(crtc_state);723723- }754754+ hvs_new_state = vc4_hvs_get_global_state(state);755755+ if (!hvs_new_state)756756+ return -EINVAL;724757725758 for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {759759+ struct vc4_crtc_state *old_vc4_crtc_state =760760+ to_vc4_crtc_state(old_crtc_state);726761 struct vc4_crtc_state 
*new_vc4_crtc_state =727762 to_vc4_crtc_state(new_crtc_state);728763 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);729764 unsigned int matching_channels;730765731731- if (old_crtc_state->enable && !new_crtc_state->enable)732732- new_vc4_crtc_state->assigned_channel = VC4_HVS_CHANNEL_DISABLED;733733-734734- if (!new_crtc_state->enable)766766+ /* Nothing to do here, let's skip it */767767+ if (old_crtc_state->enable == new_crtc_state->enable)735768 continue;736769737737- if (new_vc4_crtc_state->assigned_channel != VC4_HVS_CHANNEL_DISABLED) {738738- unassigned_channels &= ~BIT(new_vc4_crtc_state->assigned_channel);770770+ /* Muxing will need to be modified, mark it as such */771771+ new_vc4_crtc_state->update_muxing = true;772772+773773+ /* If we're disabling our CRTC, we put back our channel */774774+ if (!new_crtc_state->enable) {775775+ hvs_new_state->unassigned_channels |= BIT(old_vc4_crtc_state->assigned_channel);776776+ new_vc4_crtc_state->assigned_channel = VC4_HVS_CHANNEL_DISABLED;739777 continue;740778 }741779···833731 * the future, we will need to have something smarter,834732 * but it works so far.835733 */836836- matching_channels = unassigned_channels & vc4_crtc->data->hvs_available_channels;734734+ matching_channels = hvs_new_state->unassigned_channels & vc4_crtc->data->hvs_available_channels;837735 if (matching_channels) {838736 unsigned int channel = ffs(matching_channels) - 1;839737840738 new_vc4_crtc_state->assigned_channel = channel;841841- unassigned_channels &= ~BIT(channel);739739+ hvs_new_state->unassigned_channels &= ~BIT(channel);842740 } else {843741 return -EINVAL;844742 }845743 }744744+745745+ return 0;746746+}747747+748748+static int749749+vc4_atomic_check(struct drm_device *dev, struct drm_atomic_state *state)750750+{751751+ int ret;752752+753753+ ret = vc4_pv_muxing_atomic_check(dev, state);754754+ if (ret)755755+ return ret;846756847757 ret = vc4_ctm_atomic_check(dev, state);848758 if (ret < 0)···919805 return ret;920806921807 ret = 
vc4_load_tracker_obj_init(vc4);808808+ if (ret)809809+ return ret;810810+811811+ ret = vc4_hvs_channels_obj_init(vc4);922812 if (ret)923813 return ret;924814
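The FIFO assignment at the heart of `vc4_pv_muxing_atomic_check()` reduces to a first-set-bit search over the intersection of the global free mask and the CRTC's `hvs_available_channels`. A small model written as a pure function for clarity; on success the driver additionally clears `BIT(channel)` from `unassigned_channels`, which is not shown here:

```c
#include <assert.h>
#include <strings.h> /* ffs() */

/* Returns the lowest free channel usable by this CRTC, or -1 when
 * none matches (the driver returns -EINVAL in that case). */
static int pick_channel(unsigned int unassigned_channels,
			unsigned int hvs_available_channels)
{
	unsigned int matching = unassigned_channels & hvs_available_channels;

	return matching ? ffs(matching) - 1 : -1;
}
```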
+39-5
drivers/hid/hid-cypress.c
···2323#define CP_2WHEEL_MOUSE_HACK 0x022424#define CP_2WHEEL_MOUSE_HACK_ON 0x0425252626+#define VA_INVAL_LOGICAL_BOUNDARY 0x082727+2628/*2729 * Some USB barcode readers from cypress have usage min and usage max in2830 * the wrong order2931 */3030-static __u8 *cp_report_fixup(struct hid_device *hdev, __u8 *rdesc,3232+static __u8 *cp_rdesc_fixup(struct hid_device *hdev, __u8 *rdesc,3133 unsigned int *rsize)3234{3333- unsigned long quirks = (unsigned long)hid_get_drvdata(hdev);3435 unsigned int i;3535-3636- if (!(quirks & CP_RDESC_SWAPPED_MIN_MAX))3737- return rdesc;38363937 if (*rsize < 4)4038 return rdesc;···4345 rdesc[i + 2] = 0x29;4446 swap(rdesc[i + 3], rdesc[i + 1]);4547 }4848+ return rdesc;4949+}5050+5151+static __u8 *va_logical_boundary_fixup(struct hid_device *hdev, __u8 *rdesc,5252+ unsigned int *rsize)5353+{5454+ /*5555+ * Varmilo VA104M (with VID Cypress and device ID 07B1) incorrectly5656+ * reports Logical Minimum of its Consumer Control device as 5725757+ * (0x02 0x3c). Fix this by setting its Logical Minimum to zero.5858+ */5959+ if (*rsize == 25 &&6060+ rdesc[0] == 0x05 && rdesc[1] == 0x0c &&6161+ rdesc[2] == 0x09 && rdesc[3] == 0x01 &&6262+ rdesc[6] == 0x19 && rdesc[7] == 0x00 &&6363+ rdesc[11] == 0x16 && rdesc[12] == 0x3c && rdesc[13] == 0x02) {6464+ hid_info(hdev,6565+ "fixing up varmilo VA104M consumer control report descriptor\n");6666+ rdesc[12] = 0x00;6767+ rdesc[13] = 0x00;6868+ }6969+ return rdesc;7070+}7171+7272+static __u8 *cp_report_fixup(struct hid_device *hdev, __u8 *rdesc,7373+ unsigned int *rsize)7474+{7575+ unsigned long quirks = (unsigned long)hid_get_drvdata(hdev);7676+7777+ if (quirks & CP_RDESC_SWAPPED_MIN_MAX)7878+ rdesc = cp_rdesc_fixup(hdev, rdesc, rsize);7979+ if (quirks & VA_INVAL_LOGICAL_BOUNDARY)8080+ rdesc = va_logical_boundary_fixup(hdev, rdesc, rsize);8181+4682 return rdesc;4783}4884···160128 .driver_data = CP_RDESC_SWAPPED_MIN_MAX },161129 { HID_USB_DEVICE(USB_VENDOR_ID_CYPRESS, USB_DEVICE_ID_CYPRESS_MOUSE),162130 
.driver_data = CP_2WHEEL_MOUSE_HACK },131131+ { HID_USB_DEVICE(USB_VENDOR_ID_CYPRESS, USB_DEVICE_ID_CYPRESS_VARMILO_VA104M_07B1),132132+ .driver_data = VA_INVAL_LOGICAL_BOUNDARY },163133 { }164134};165135MODULE_DEVICE_TABLE(hid, cp_devices);
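The Varmilo fixup above is a pure byte patch, which makes it easy to model: match the 25-byte Consumer Control descriptor whose Logical Minimum is 572 (`0x16 0x3c 0x02`, a 2-byte little-endian item) and zero the value bytes. A userspace sketch with the offsets copied from the driver's checks; `fixup_logical_min` is an invented name:

```c
#include <assert.h>
#include <stddef.h>

/* Zero out a bogus Logical Minimum of 572 in a Consumer Control
 * report descriptor; offsets mirror va_logical_boundary_fixup(). */
static void fixup_logical_min(unsigned char *rdesc, size_t rsize)
{
	if (rsize == 25 &&
	    rdesc[0] == 0x05 && rdesc[1] == 0x0c &&  /* Usage Page (Consumer) */
	    rdesc[2] == 0x09 && rdesc[3] == 0x01 &&  /* Usage (Consumer Control) */
	    rdesc[6] == 0x19 && rdesc[7] == 0x00 &&  /* Usage Minimum (0) */
	    rdesc[11] == 0x16 && rdesc[12] == 0x3c && rdesc[13] == 0x02) {
		rdesc[12] = 0x00;                    /* Logical Minimum = 0 */
		rdesc[13] = 0x00;
	}
}
```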
···11111212#include "hid-ids.h"13131414+#define QUIRK_TOUCHPAD_ON_OFF_REPORT BIT(0)1515+1616+static __u8 *ite_report_fixup(struct hid_device *hdev, __u8 *rdesc, unsigned int *rsize)1717+{1818+ unsigned long quirks = (unsigned long)hid_get_drvdata(hdev);1919+2020+ if (quirks & QUIRK_TOUCHPAD_ON_OFF_REPORT) {2121+ if (*rsize == 188 && rdesc[162] == 0x81 && rdesc[163] == 0x02) {2222+ hid_info(hdev, "Fixing up ITE keyboard report descriptor\n");2323+ rdesc[163] = HID_MAIN_ITEM_RELATIVE;2424+ }2525+ }2626+2727+ return rdesc;2828+}2929+3030+static int ite_input_mapping(struct hid_device *hdev,3131+ struct hid_input *hi, struct hid_field *field,3232+ struct hid_usage *usage, unsigned long **bit,3333+ int *max)3434+{3535+3636+ unsigned long quirks = (unsigned long)hid_get_drvdata(hdev);3737+3838+ if ((quirks & QUIRK_TOUCHPAD_ON_OFF_REPORT) &&3939+ (usage->hid & HID_USAGE_PAGE) == 0x00880000) {4040+ if (usage->hid == 0x00880078) {4141+ /* Touchpad on, userspace expects F22 for this */4242+ hid_map_usage_clear(hi, usage, bit, max, EV_KEY, KEY_F22);4343+ return 1;4444+ }4545+ if (usage->hid == 0x00880079) {4646+ /* Touchpad off, userspace expects F23 for this */4747+ hid_map_usage_clear(hi, usage, bit, max, EV_KEY, KEY_F23);4848+ return 1;4949+ }5050+ return -1;5151+ }5252+5353+ return 0;5454+}5555+1456static int ite_event(struct hid_device *hdev, struct hid_field *field,1557 struct hid_usage *usage, __s32 value)1658{···7937 return 0;8038}81394040+static int ite_probe(struct hid_device *hdev, const struct hid_device_id *id)4141+{4242+ int ret;4343+4444+ hid_set_drvdata(hdev, (void *)id->driver_data);4545+4646+ ret = hid_open_report(hdev);4747+ if (ret)4848+ return ret;4949+5050+ return hid_hw_start(hdev, HID_CONNECT_DEFAULT);5151+}5252+8253static const struct hid_device_id ite_devices[] = {8354 { HID_USB_DEVICE(USB_VENDOR_ID_ITE, USB_DEVICE_ID_ITE8595) },8455 { HID_USB_DEVICE(USB_VENDOR_ID_258A, USB_DEVICE_ID_258A_6A88) },8556 /* ITE8595 USB kbd ctlr, with Synaptics touchpad 
connected to it. */8657 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,8758 USB_VENDOR_ID_SYNAPTICS,8888- USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012) },5959+ USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012),6060+ .driver_data = QUIRK_TOUCHPAD_ON_OFF_REPORT },8961 /* ITE8910 USB kbd ctlr, with Synaptics touchpad connected to it. */9062 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,9163 USB_VENDOR_ID_SYNAPTICS,···11155static struct hid_driver ite_driver = {11256 .name = "itetech",11357 .id_table = ite_devices,5858+ .probe = ite_probe,5959+ .report_fixup = ite_report_fixup,6060+ .input_mapping = ite_input_mapping,11461 .event = ite_event,11562};11663module_hid_driver(ite_driver);
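The mapping in `ite_input_mapping()` is a straight translation of two vendor usages (page 0x0088) into key codes userspace already understands. A userspace model; `map_touchpad_toggle` is an invented name, and the KEY_F22/KEY_F23 values 192/193 follow the input-event-codes numbering:

```c
#include <assert.h>

#define HID_USAGE_PAGE 0xffff0000
#define KEY_F22 192
#define KEY_F23 193

/* Returns the mapped key code, 0 for "not handled here",
 * -1 for "ignore this usage", mirroring the driver's returns. */
static int map_touchpad_toggle(unsigned int hid_usage)
{
	if ((hid_usage & HID_USAGE_PAGE) != 0x00880000)
		return 0;
	if (hid_usage == 0x00880078)	/* touchpad on */
		return KEY_F22;
	if (hid_usage == 0x00880079)	/* touchpad off */
		return KEY_F23;
	return -1;
}
```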
+21-1
drivers/hid/hid-logitech-dj.c
···328328	0x25, 0x01,		/* LOGICAL_MAX (1) */329329	0x75, 0x01,		/* REPORT_SIZE (1) */330330	0x95, 0x04,		/* REPORT_COUNT (4) */331331-	0x81, 0x06,		/* INPUT */331331+	0x81, 0x02,		/* INPUT (Data,Var,Abs) */332332	0xC0,			/* END_COLLECTION */333333	0xC0,			/* END_COLLECTION */334334};···866866		schedule_work(&djrcv_dev->work);867867}868868869869+/*870870+ * Some quad/bluetooth keyboards have a builtin touchpad; in this case we see871871+ * only 1 paired device with a device_type of REPORT_TYPE_KEYBOARD. For the872872+ * touchpad to work we must also forward mouse input reports to the dj_hiddev873873+ * created for the keyboard (instead of forwarding them to a second paired874874+ * device with a device_type of REPORT_TYPE_MOUSE as we normally would).875875+ */876876+static const u16 kbd_builtin_touchpad_ids[] = {877877+	0xb309, /* Dinovo Edge */878878+	0xb30c, /* Dinovo Mini */879879+};880880+869881static void logi_hidpp_dev_conn_notif_equad(struct hid_device *hdev,870882					    struct hidpp_event *hidpp_report,871883					    struct dj_workitem *workitem)872884{873885	struct dj_receiver_dev *djrcv_dev = hid_get_drvdata(hdev);886886+	int i, id;874887875888	workitem->type = WORKITEM_TYPE_PAIRED;876889	workitem->device_type = hidpp_report->params[HIDPP_PARAM_DEVICE_INFO] &
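The lookup added to `logi_hidpp_dev_conn_notif_equad()` just reassembles the quad ID from its two bytes and scans a short table. Modeled in userspace (`kbd_has_builtin_touchpad` is an invented wrapper name):

```c
#include <assert.h>
#include <stddef.h>

static const unsigned short kbd_builtin_touchpad_ids[] = {
	0xb309, /* Dinovo Edge */
	0xb30c, /* Dinovo Mini */
};

/* Does this paired keyboard also carry a builtin touchpad, so its
 * mouse reports must be forwarded to the keyboard's dj_hiddev? */
static int kbd_has_builtin_touchpad(unsigned char quad_id_msb,
				    unsigned char quad_id_lsb)
{
	unsigned int id = (quad_id_msb << 8) | quad_id_lsb;
	size_t i;

	for (i = 0; i < sizeof(kbd_builtin_touchpad_ids) /
			sizeof(kbd_builtin_touchpad_ids[0]); i++)
		if (id == kbd_builtin_touchpad_ids[i])
			return 1;
	return 0;
}
```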
+32
drivers/hid/hid-logitech-hidpp.c
···9393#define HIDPP_CAPABILITY_BATTERY_LEVEL_STATUS BIT(3)9494#define HIDPP_CAPABILITY_BATTERY_VOLTAGE BIT(4)95959696+#define lg_map_key_clear(c) hid_map_usage_clear(hi, usage, bit, max, EV_KEY, (c))9797+9698/*9799 * There are two hidpp protocols in use, the first version hidpp10 is known98100 * as register access protocol or RAP, the second version hidpp20 is known as···29532951}2954295229552953/* -------------------------------------------------------------------------- */29542954+/* Logitech Dinovo Mini keyboard with builtin touchpad */29552955+/* -------------------------------------------------------------------------- */29562956+#define DINOVO_MINI_PRODUCT_ID 0xb30c29572957+29582958+static int lg_dinovo_input_mapping(struct hid_device *hdev, struct hid_input *hi,29592959+ struct hid_field *field, struct hid_usage *usage,29602960+ unsigned long **bit, int *max)29612961+{29622962+ if ((usage->hid & HID_USAGE_PAGE) != HID_UP_LOGIVENDOR)29632963+ return 0;29642964+29652965+ switch (usage->hid & HID_USAGE) {29662966+ case 0x00d: lg_map_key_clear(KEY_MEDIA); break;29672967+ default:29682968+ return 0;29692969+ }29702970+ return 1;29712971+}29722972+29732973+/* -------------------------------------------------------------------------- */29562974/* HID++1.0 devices which use HID++ reports for their wheels */29572975/* -------------------------------------------------------------------------- */29582976static int hidpp10_wheel_connect(struct hidpp_device *hidpp)···32063184 else if (hidpp->quirks & HIDPP_QUIRK_CLASS_M560 &&32073185 field->application != HID_GD_MOUSE)32083186 return m560_input_mapping(hdev, hi, field, usage, bit, max);31873187+31883188+ if (hdev->product == DINOVO_MINI_PRODUCT_ID)31893189+ return lg_dinovo_input_mapping(hdev, hi, field, usage, bit, max);3209319032103191 return 0;32113192}···39723947 LDJ_DEVICE(0x405e), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },39733948 { /* Mouse Logitech MX Anywhere 2 */39743949 LDJ_DEVICE(0x404a), .driver_data 
= HIDPP_QUIRK_HI_RES_SCROLL_X2121 },39503950+ { LDJ_DEVICE(0x4072), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },39753951 { LDJ_DEVICE(0xb013), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },39763952 { LDJ_DEVICE(0xb018), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },39773953 { LDJ_DEVICE(0xb01f), .driver_data = HIDPP_QUIRK_HI_RES_SCROLL_X2121 },···39963970 .driver_data = HIDPP_QUIRK_CLASS_K750 },39973971 { /* Keyboard MX5000 (Bluetooth-receiver in HID proxy mode) */39983972 LDJ_DEVICE(0xb305),39733973+ .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS },39743974+ { /* Dinovo Edge (Bluetooth-receiver in HID proxy mode) */39753975+ LDJ_DEVICE(0xb309),39993976 .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS },40003977 { /* Keyboard MX5500 (Bluetooth-receiver in HID proxy mode) */40013978 LDJ_DEVICE(0xb30b),···4041401240424013 { /* MX5000 keyboard over Bluetooth */40434014 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb305),40154015+ .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS },40164016+ { /* Dinovo Edge keyboard over Bluetooth */40174017+ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb309),40444018 .driver_data = HIDPP_QUIRK_HIDPP_CONSUMER_VENDOR_KEYS },40454019 { /* MX5500 keyboard over Bluetooth */40464020 HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb30b),
···997997 break;998998 case VID_PID(USB_VENDOR_ID_UGTIZER,999999 USB_DEVICE_ID_UGTIZER_TABLET_GP0610):10001000+ case VID_PID(USB_VENDOR_ID_UGTIZER,10011001+ USB_DEVICE_ID_UGTIZER_TABLET_GT5040):10001002 case VID_PID(USB_VENDOR_ID_UGEE,10011003 USB_DEVICE_ID_UGEE_XPPEN_TABLET_G540):10021004 case VID_PID(USB_VENDOR_ID_UGEE,
···2929#include <asm/iommu_table.h>3030#include <asm/io_apic.h>3131#include <asm/irq_remapping.h>3232+#include <asm/set_memory.h>32333334#include <linux/crash_dump.h>3435···673672 free_pages((unsigned long)iommu->cmd_buf, get_order(CMD_BUFFER_SIZE));674673}675674675675+static void *__init iommu_alloc_4k_pages(struct amd_iommu *iommu,676676+ gfp_t gfp, size_t size)677677+{678678+ int order = get_order(size);679679+ void *buf = (void *)__get_free_pages(gfp, order);680680+681681+ if (buf &&682682+ iommu_feature(iommu, FEATURE_SNP) &&683683+ set_memory_4k((unsigned long)buf, (1 << order))) {684684+ free_pages((unsigned long)buf, order);685685+ buf = NULL;686686+ }687687+688688+ return buf;689689+}690690+676691/* allocates the memory where the IOMMU will log its events to */677692static int __init alloc_event_buffer(struct amd_iommu *iommu)678693{679679- iommu->evt_buf = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,680680- get_order(EVT_BUFFER_SIZE));694694+ iommu->evt_buf = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO,695695+ EVT_BUFFER_SIZE);681696682697 return iommu->evt_buf ? 0 : -ENOMEM;683698}···732715/* allocates the memory where the IOMMU will log its events to */733716static int __init alloc_ppr_log(struct amd_iommu *iommu)734717{735735- iommu->ppr_log = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,736736- get_order(PPR_LOG_SIZE));718718+ iommu->ppr_log = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO,719719+ PPR_LOG_SIZE);737720738721 return iommu->ppr_log ? 0 : -ENOMEM;739722}···855838856839static int __init alloc_cwwb_sem(struct amd_iommu *iommu)857840{858858- iommu->cmd_sem = (void *)get_zeroed_page(GFP_KERNEL);841841+ iommu->cmd_sem = iommu_alloc_4k_pages(iommu, GFP_KERNEL | __GFP_ZERO, 1);859842860843 return iommu->cmd_sem ? 0 : -ENOMEM;861844}
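`iommu_alloc_4k_pages()` above converts a byte size to a page-allocation order before splitting the kernel mapping with `set_memory_4k()` when the SNP feature is present. The order math is worth making explicit; a userspace re-implementation under the usual 4 KiB page assumption (`get_order_4k` is an invented name for the kernel's `get_order()`; size must be non-zero):

```c
#include <assert.h>
#include <stddef.h>

/* Smallest order such that (4096 << order) >= size; matches the
 * kernel's get_order() for a 4 KiB PAGE_SIZE and size > 0. */
static int get_order_4k(size_t size)
{
	int order = 0;

	size = (size - 1) >> 12;	/* full 4 KiB pages beyond the first */
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}
```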
+4
drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
···6969{7070 struct qcom_smmu *qsmmu;71717272+ /* Check to make sure qcom_scm has finished probing */7373+ if (!qcom_scm_is_available())7474+ return ERR_PTR(-EPROBE_DEFER);7575+7276 qsmmu = devm_kzalloc(smmu->dev, sizeof(*qsmmu), GFP_KERNEL);7377 if (!qsmmu)7478 return ERR_PTR(-ENOMEM);
+5-2
drivers/iommu/intel/dmar.c
···335335336336static inline void vf_inherit_msi_domain(struct pci_dev *pdev)337337{338338- dev_set_msi_domain(&pdev->dev, dev_get_msi_domain(&pdev->physfn->dev));338338+ struct pci_dev *physfn = pci_physfn(pdev);339339+340340+ dev_set_msi_domain(&pdev->dev, dev_get_msi_domain(&physfn->dev));339341}340342341343static int dmar_pci_bus_notifier(struct notifier_block *nb,···986984 warn_invalid_dmar(phys_addr, " returns all ones");987985 goto unmap;988986 }989989- iommu->vccap = dmar_readq(iommu->reg + DMAR_VCCAP_REG);987987+ if (ecap_vcs(iommu->ecap))988988+ iommu->vccap = dmar_readq(iommu->reg + DMAR_VCCAP_REG);990989991990 /* the registers might be more than one page */992991 map_size = max_t(int, ecap_max_iotlb_offset(iommu->ecap),
+5-4
drivers/iommu/intel/iommu.c
···179179 * (used when kernel is launched w/ TXT)180180 */181181static int force_on = 0;182182-int intel_iommu_tboot_noforce;182182+static int intel_iommu_tboot_noforce;183183static int no_platform_optin;184184185185#define ROOT_ENTRY_NR (VTD_PAGE_SIZE/sizeof(struct root_entry))···18331833 if (ecap_prs(iommu->ecap))18341834 intel_svm_finish_prq(iommu);18351835 }18361836- if (ecap_vcs(iommu->ecap) && vccap_pasid(iommu->vccap))18361836+ if (vccap_pasid(iommu->vccap))18371837 ioasid_unregister_allocator(&iommu->pasid_allocator);1838183818391839#endif···32123212 * is active. All vIOMMU allocators will eventually be calling the same32133213 * host allocator.32143214 */32153215- if (!ecap_vcs(iommu->ecap) || !vccap_pasid(iommu->vccap))32153215+ if (!vccap_pasid(iommu->vccap))32163216 return;3217321732183218 pr_info("Register custom PASID allocator\n");···48844884 * Intel IOMMU is required for a TXT/tboot launch or platform48854885 * opt in, so enforce that.48864886 */48874887- force_on = tboot_force_iommu() || platform_optin_force_iommu();48874887+ force_on = (!intel_iommu_tboot_noforce && tboot_force_iommu()) ||48884888+ platform_optin_force_iommu();4888488948894890 if (iommu_init_mempool()) {48904891 if (force_on)
+6-4
drivers/iommu/iommu.c
···264264 */265265 iommu_alloc_default_domain(group, dev);266266267267- if (group->default_domain)267267+ if (group->default_domain) {268268 ret = __iommu_attach_device(group->default_domain, dev);269269+ if (ret) {270270+ iommu_group_put(group);271271+ goto err_release;272272+ }273273+ }269274270275 iommu_create_device_direct_mappings(group, dev);271276272277 iommu_group_put(group);273273-274274- if (ret)275275- goto err_release;276278277279 if (ops->probe_finalize)278280 ops->probe_finalize(dev);
+21-7
drivers/media/platform/Kconfig
···253253 depends on MTK_IOMMU || COMPILE_TEST254254 depends on VIDEO_DEV && VIDEO_V4L2255255 depends on ARCH_MEDIATEK || COMPILE_TEST256256+ depends on VIDEO_MEDIATEK_VPU || MTK_SCP257257+ # The two following lines ensure we have the same state ("m" or "y") as258258+ # our dependencies, to avoid missing symbols during link.259259+ depends on VIDEO_MEDIATEK_VPU || !VIDEO_MEDIATEK_VPU260260+ depends on MTK_SCP || !MTK_SCP256261 select VIDEOBUF2_DMA_CONTIG257262 select V4L2_MEM2MEM_DEV258258- select VIDEO_MEDIATEK_VPU259259- select MTK_SCP263263+ select VIDEO_MEDIATEK_VCODEC_VPU if VIDEO_MEDIATEK_VPU264264+ select VIDEO_MEDIATEK_VCODEC_SCP if MTK_SCP260265 help261261- Mediatek video codec driver provides HW capability to262262- encode and decode in a range of video formats263263- This driver rely on VPU driver to communicate with VPU.266266+ Mediatek video codec driver provides HW capability to267267+ encode and decode in a range of video formats on MT8173268268+ and MT8183.264269265265- To compile this driver as modules, choose M here: the266266- modules will be called mtk-vcodec-dec and mtk-vcodec-enc.270270+ Note that support for MT8173 requires VIDEO_MEDIATEK_VPU to271271+ also be selected. Support for MT8183 depends on MTK_SCP.272272+273273+ To compile this driver as modules, choose M here: the274274+ modules will be called mtk-vcodec-dec and mtk-vcodec-enc.275275+276276+config VIDEO_MEDIATEK_VCODEC_VPU277277+ bool278278+279279+config VIDEO_MEDIATEK_VCODEC_SCP280280+ bool267281268282config VIDEO_MEM2MEM_DEINTERLACE269283 tristate "Deinterlace support"
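The `depends on FOO || !FOO` construct that the new Kconfig comment refers to is a standard kernel idiom for keeping two tristate symbols in compatible states. In Kconfig's ternary logic `!m` evaluates to `m`, so the expression is `y` when FOO is `y` or `n` but `m` when FOO is `m`, which caps the consumer at `m` and prevents a built-in consumer from linking against symbols that only exist in a module. A minimal fragment illustrating the idiom (symbol names hypothetical):

```kconfig
# FOO=n  -> !FOO is y, so BAR is unrestricted
# FOO=m  -> FOO || !FOO is m, so BAR can be at most m (never built-in)
# FOO=y  -> FOO is y, so BAR is unrestricted
config BAR
	tristate "Example consumer of FOO"
	depends on FOO || !FOO
```

This is why the patch uses `depends on` with this guard for the link-time dependency while reserving `select` for the new internal helper booleans.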
···794794 return 0;795795796796opp_dl_add_err:797797- dev_pm_domain_detach(core->opp_pmdomain, true);797797+ dev_pm_opp_detach_genpd(core->opp_table);798798opp_attach_err:799799 if (core->pd_dl_venus) {800800 device_link_del(core->pd_dl_venus);···832832 if (core->opp_dl_venus)833833 device_link_del(core->opp_dl_venus);834834835835- dev_pm_domain_detach(core->opp_pmdomain, true);835835+ dev_pm_opp_detach_genpd(core->opp_table);836836}837837838838static int core_get_v4(struct device *dev)
+30-1
drivers/media/platform/qcom/venus/venc.c
···537537 struct hfi_quantization quant;538538 struct hfi_quantization_range quant_range;539539 u32 ptype, rate_control, bitrate;540540+ u32 profile, level;540541 int ret;541542542543 ret = venus_helper_set_work_mode(inst, VIDC_WORK_MODE_2);···685684 if (ret)686685 return ret;687686688688- ret = venus_helper_set_profile_level(inst, ctr->profile, ctr->level);687687+ switch (inst->hfi_codec) {688688+ case HFI_VIDEO_CODEC_H264:689689+ profile = ctr->profile.h264;690690+ level = ctr->level.h264;691691+ break;692692+ case HFI_VIDEO_CODEC_MPEG4:693693+ profile = ctr->profile.mpeg4;694694+ level = ctr->level.mpeg4;695695+ break;696696+ case HFI_VIDEO_CODEC_VP8:697697+ profile = ctr->profile.vp8;698698+ level = 0;699699+ break;700700+ case HFI_VIDEO_CODEC_VP9:701701+ profile = ctr->profile.vp9;702702+ level = ctr->level.vp9;703703+ break;704704+ case HFI_VIDEO_CODEC_HEVC:705705+ profile = ctr->profile.hevc;706706+ level = ctr->level.hevc;707707+ break;708708+ case HFI_VIDEO_CODEC_MPEG2:709709+ default:710710+ profile = 0;711711+ level = 0;712712+ break;713713+ }714714+715715+ ret = venus_helper_set_profile_level(inst, profile, level);689716 if (ret)690717 return ret;691718
+12-2
drivers/media/platform/qcom/venus/venc_ctrls.c
···103103 ctr->h264_entropy_mode = ctrl->val;104104 break;105105 case V4L2_CID_MPEG_VIDEO_MPEG4_PROFILE:106106+ ctr->profile.mpeg4 = ctrl->val;107107+ break;106108 case V4L2_CID_MPEG_VIDEO_H264_PROFILE:109109+ ctr->profile.h264 = ctrl->val;110110+ break;107111 case V4L2_CID_MPEG_VIDEO_HEVC_PROFILE:112112+ ctr->profile.hevc = ctrl->val;113113+ break;108114 case V4L2_CID_MPEG_VIDEO_VP8_PROFILE:109109- ctr->profile = ctrl->val;115115+ ctr->profile.vp8 = ctrl->val;110116 break;111117 case V4L2_CID_MPEG_VIDEO_MPEG4_LEVEL:118118+ ctr->level.mpeg4 = ctrl->val;119119+ break;112120 case V4L2_CID_MPEG_VIDEO_H264_LEVEL:121121+ ctr->level.h264 = ctrl->val;122122+ break;113123 case V4L2_CID_MPEG_VIDEO_HEVC_LEVEL:114114- ctr->level = ctrl->val;124124+ ctr->level.hevc = ctrl->val;115125 break;116126 case V4L2_CID_MPEG_VIDEO_H264_I_FRAME_QP:117127 ctr->h264_i_qp = ctrl->val;
+66-52
drivers/media/test-drivers/vidtv/vidtv_bridge.c
···44 * validate the existing APIs in the media subsystem. It can also aid55 * developers working on userspace applications.66 *77- * When this module is loaded, it will attempt to modprobe 'dvb_vidtv_tuner' and 'dvb_vidtv_demod'.77+ * When this module is loaded, it will attempt to modprobe 'dvb_vidtv_tuner'88+ * and 'dvb_vidtv_demod'.89 *910 * Copyright (C) 2020 Daniel W. S. Almeida1011 */11121313+#include <linux/dev_printk.h>1214#include <linux/moduleparam.h>1315#include <linux/mutex.h>1416#include <linux/platform_device.h>1515-#include <linux/dev_printk.h>1617#include <linux/time.h>1718#include <linux/types.h>1819#include <linux/workqueue.h>19202021#include "vidtv_bridge.h"2121-#include "vidtv_demod.h"2222-#include "vidtv_tuner.h"2323-#include "vidtv_ts.h"2424-#include "vidtv_mux.h"2522#include "vidtv_common.h"2323+#include "vidtv_demod.h"2424+#include "vidtv_mux.h"2525+#include "vidtv_ts.h"2626+#include "vidtv_tuner.h"26272727-//#define MUX_BUF_MAX_SZ2828-//#define MUX_BUF_MIN_SZ2828+#define MUX_BUF_MIN_SZ 901642929+#define MUX_BUF_MAX_SZ (MUX_BUF_MIN_SZ * 10)2930#define TUNER_DEFAULT_ADDR 0x683031#define DEMOD_DEFAULT_ADDR 0x603232+#define VIDTV_DEFAULT_NETWORK_ID 0xff443333+#define VIDTV_DEFAULT_NETWORK_NAME "LinuxTV.org"3434+#define VIDTV_DEFAULT_TS_ID 0x408131353232-/* LNBf fake parameters: ranges used by an Universal (extended) European LNBf */3333-#define LNB_CUT_FREQUENCY 117000003434-#define LNB_LOW_FREQ 97500003535-#define LNB_HIGH_FREQ 106000003636-3636+/*3737+ * The LNBf fake parameters here are the ranges used by an3838+ * Universal (extended) European LNBf, which is likely the most common LNBf3939+ * found on Satellite digital TV system nowadays.4040+ */4141+#define LNB_CUT_FREQUENCY 11700000 /* high IF frequency */4242+#define LNB_LOW_FREQ 9750000 /* low IF frequency */4343+#define LNB_HIGH_FREQ 10600000 /* transition frequency */37443845static unsigned int drop_tslock_prob_on_low_snr;3946module_param(drop_tslock_prob_on_low_snr, uint, 
0);···999210093static unsigned int pcr_period_msec = 40;10194module_param(pcr_period_msec, uint, 0);102102-MODULE_PARM_DESC(pcr_period_msec, "How often to send PCR packets. Default: 40ms");9595+MODULE_PARM_DESC(pcr_period_msec,9696+ "How often to send PCR packets. Default: 40ms");1039710498static unsigned int mux_rate_kbytes_sec = 4096;10599module_param(mux_rate_kbytes_sec, uint, 0);···112104113105static unsigned int mux_buf_sz_pkts;114106module_param(mux_buf_sz_pkts, uint, 0);115115-MODULE_PARM_DESC(mux_buf_sz_pkts, "Size for the internal mux buffer in multiples of 188 bytes");116116-117117-#define MUX_BUF_MIN_SZ 90164118118-#define MUX_BUF_MAX_SZ (MUX_BUF_MIN_SZ * 10)107107+MODULE_PARM_DESC(mux_buf_sz_pkts,108108+ "Size for the internal mux buffer in multiples of 188 bytes");119109120110static u32 vidtv_bridge_mux_buf_sz_for_mux_rate(void)121111{122112 u32 max_elapsed_time_msecs = VIDTV_MAX_SLEEP_USECS / USEC_PER_MSEC;123123- u32 nbytes_expected;124113 u32 mux_buf_sz = mux_buf_sz_pkts * TS_PACKET_LEN;114114+ u32 nbytes_expected;125115126116 nbytes_expected = mux_rate_kbytes_sec;127117 nbytes_expected *= max_elapsed_time_msecs;···149143 FE_HAS_LOCK);150144}151145152152-static void153153-vidtv_bridge_on_new_pkts_avail(void *priv, u8 *buf, u32 npkts)146146+/*147147+ * called on a separate thread by the mux when new packets become available148148+ */149149+static void vidtv_bridge_on_new_pkts_avail(void *priv, u8 *buf, u32 npkts)154150{155155- /*156156- * called on a separate thread by the mux when new packets become157157- * available158158- */159159- struct vidtv_dvb *dvb = (struct vidtv_dvb *)priv;151151+ struct vidtv_dvb *dvb = priv;160152161153 /* drop packets if we lose the lock */162154 if (vidtv_bridge_check_demod_lock(dvb, 0))···163159164160static int vidtv_start_streaming(struct vidtv_dvb *dvb)165161{166166- struct vidtv_mux_init_args mux_args = {0};162162+ struct vidtv_mux_init_args mux_args = {163163+ .mux_rate_kbytes_sec = mux_rate_kbytes_sec,164164+ 
.on_new_packets_available_cb = vidtv_bridge_on_new_pkts_avail,165165+ .pcr_period_usecs = pcr_period_msec * USEC_PER_MSEC,166166+ .si_period_usecs = si_period_msec * USEC_PER_MSEC,167167+ .pcr_pid = pcr_pid,168168+ .transport_stream_id = VIDTV_DEFAULT_TS_ID,169169+ .network_id = VIDTV_DEFAULT_NETWORK_ID,170170+ .network_name = VIDTV_DEFAULT_NETWORK_NAME,171171+ .priv = dvb,172172+ };167173 struct device *dev = &dvb->pdev->dev;168174 u32 mux_buf_sz;169175···182168 return 0;183169 }184170185185- mux_buf_sz = (mux_buf_sz_pkts) ? mux_buf_sz_pkts : vidtv_bridge_mux_buf_sz_for_mux_rate();171171+ if (mux_buf_sz_pkts)172172+ mux_buf_sz = mux_buf_sz_pkts;173173+ else174174+ mux_buf_sz = vidtv_bridge_mux_buf_sz_for_mux_rate();186175187187- mux_args.mux_rate_kbytes_sec = mux_rate_kbytes_sec;188188- mux_args.on_new_packets_available_cb = vidtv_bridge_on_new_pkts_avail;189189- mux_args.mux_buf_sz = mux_buf_sz;190190- mux_args.pcr_period_usecs = pcr_period_msec * 1000;191191- mux_args.si_period_usecs = si_period_msec * 1000;192192- mux_args.pcr_pid = pcr_pid;193193- mux_args.transport_stream_id = VIDTV_DEFAULT_TS_ID;194194- mux_args.priv = dvb;176176+ mux_args.mux_buf_sz = mux_buf_sz;195177196178 dvb->streaming = true;197197- dvb->mux = vidtv_mux_init(dvb->fe[0], dev, mux_args);179179+ dvb->mux = vidtv_mux_init(dvb->fe[0], dev, &mux_args);180180+ if (!dvb->mux)181181+ return -ENOMEM;198182 vidtv_mux_start_thread(dvb->mux);199183200184 dev_dbg_ratelimited(dev, "Started streaming\n");···216204{217205 struct dvb_demux *demux = feed->demux;218206 struct vidtv_dvb *dvb = demux->priv;219219- int rc;220207 int ret;208208+ int rc;221209222210 if (!demux->dmx.frontend)223211 return -EINVAL;···255243256244static struct dvb_frontend *vidtv_get_frontend_ptr(struct i2c_client *c)257245{258258- /* the demod will set this when its probe function runs */259246 struct vidtv_demod_state *state = i2c_get_clientdata(c);260247248248+ /* the demod will set this when its probe function runs */261249 
return &state->frontend;262250}263251···265253 struct i2c_msg msgs[],266254 int num)267255{256256+ /*257257+ * Right now, this virtual driver doesn't really send or receive258258+ * messages from I2C. A real driver will require an implementation259259+ * here.260260+ */268261 return 0;269262}270263···337320338321static int vidtv_bridge_probe_demod(struct vidtv_dvb *dvb, u32 n)339322{340340- struct vidtv_demod_config cfg = {};341341-342342- cfg.drop_tslock_prob_on_low_snr = drop_tslock_prob_on_low_snr;343343- cfg.recover_tslock_prob_on_good_snr = recover_tslock_prob_on_good_snr;344344-323323+ struct vidtv_demod_config cfg = {324324+ .drop_tslock_prob_on_low_snr = drop_tslock_prob_on_low_snr,325325+ .recover_tslock_prob_on_good_snr = recover_tslock_prob_on_good_snr,326326+ };345327 dvb->i2c_client_demod[n] = dvb_module_probe("dvb_vidtv_demod",346328 NULL,347329 &dvb->i2c_adapter,···359343360344static int vidtv_bridge_probe_tuner(struct vidtv_dvb *dvb, u32 n)361345{362362- struct vidtv_tuner_config cfg = {};346346+ struct vidtv_tuner_config cfg = {347347+ .fe = dvb->fe[n],348348+ .mock_power_up_delay_msec = mock_power_up_delay_msec,349349+ .mock_tune_delay_msec = mock_tune_delay_msec,350350+ };363351 u32 freq;364352 int i;365365-366366- cfg.fe = dvb->fe[n];367367- cfg.mock_power_up_delay_msec = mock_power_up_delay_msec;368368- cfg.mock_tune_delay_msec = mock_tune_delay_msec;369353370354 /* TODO: check if the frequencies are at a valid range */371355···405389406390static int vidtv_bridge_dvb_init(struct vidtv_dvb *dvb)407391{408408- int ret;409409- int i;410410- int j;392392+ int ret, i, j;411393412394 ret = vidtv_bridge_i2c_register_adap(dvb);413395 if (ret < 0)
+3-1
drivers/media/test-drivers/vidtv/vidtv_bridge.h
···2020#include <linux/i2c.h>2121#include <linux/platform_device.h>2222#include <linux/types.h>2323+2324#include <media/dmxdev.h>2425#include <media/dvb_demux.h>2526#include <media/dvb_frontend.h>2727+2628#include "vidtv_mux.h"27292830/**···3432 * @adapter: Represents a DTV adapter. See 'dvb_register_adapter'.3533 * @demux: The demux used by the dvb_dmx_swfilter_packets() call.3634 * @dmx_dev: Represents a demux device.3737- * @dmx_frontend: The frontends associated with the demux.3535+ * @dmx_fe: The frontends associated with the demux.3836 * @i2c_adapter: The i2c_adapter associated with the bridge driver.3937 * @i2c_client_demod: The i2c_clients associated with the demodulator modules.4038 * @i2c_client_tuner: The i2c_clients associated with the tuner modules.
+274-38
drivers/media/test-drivers/vidtv/vidtv_channel.c
···99 * When vidtv boots, it will create some hardcoded channels.1010 * Their services will be concatenated to populate the SDT.1111 * Their programs will be concatenated to populate the PAT1212+ * Their events will be concatenated to populate the EIT1213 * For each program in the PAT, a PMT section will be created1314 * The PMT section for a channel will be assigned its streams.1415 * Every stream will have its corresponding encoder polled to produce TS packets···1918 * Copyright (C) 2020 Daniel W. S. Almeida2019 */21202222-#include <linux/types.h>2323-#include <linux/slab.h>2421#include <linux/dev_printk.h>2522#include <linux/ratelimit.h>2323+#include <linux/slab.h>2424+#include <linux/types.h>26252726#include "vidtv_channel.h"2828-#include "vidtv_psi.h"2727+#include "vidtv_common.h"2928#include "vidtv_encoder.h"3029#include "vidtv_mux.h"3131-#include "vidtv_common.h"3030+#include "vidtv_psi.h"3231#include "vidtv_s302m.h"33323433static void vidtv_channel_encoder_destroy(struct vidtv_encoder *e)3534{3636- struct vidtv_encoder *curr = e;3735 struct vidtv_encoder *tmp = NULL;3636+ struct vidtv_encoder *curr = e;38373938 while (curr) {4039 /* forward the call to the derived type */···4544}46454746#define ENCODING_ISO8859_15 "\x0b"4747+#define TS_NIT_PID 0x1048484949+/*5050+ * init an audio only channel with a s302m encoder5151+ */4952struct vidtv_channel5053*vidtv_channel_s302m_init(struct vidtv_channel *head, u16 transport_stream_id)5154{5252- /*5353- * init an audio only channel with a s302m encoder5454- */5555+ const __be32 s302m_fid = cpu_to_be32(VIDTV_S302M_FORMAT_IDENTIFIER);5656+ char *event_text = ENCODING_ISO8859_15 "Bagatelle No. 
25 in A minor for solo piano, also known as F\xfcr Elise, composed by Ludwig van Beethoven";5757+ char *event_name = ENCODING_ISO8859_15 "Ludwig van Beethoven: F\xfcr Elise";5858+ struct vidtv_s302m_encoder_init_args encoder_args = {};5959+ char *iso_language_code = ENCODING_ISO8859_15 "eng";6060+ char *provider = ENCODING_ISO8859_15 "LinuxTV.org";6161+ char *name = ENCODING_ISO8859_15 "Beethoven";6262+ const u16 s302m_es_pid = 0x111; /* packet id for the ES */6363+ const u16 s302m_program_pid = 0x101; /* packet id for PMT*/5564 const u16 s302m_service_id = 0x880;5665 const u16 s302m_program_num = 0x880;5757- const u16 s302m_program_pid = 0x101; /* packet id for PMT*/5858- const u16 s302m_es_pid = 0x111; /* packet id for the ES */5959- const __be32 s302m_fid = cpu_to_be32(VIDTV_S302M_FORMAT_IDENTIFIER);6666+ const u16 s302m_beethoven_event_id = 1;6767+ struct vidtv_channel *s302m;60686161- char *name = ENCODING_ISO8859_15 "Beethoven";6262- char *provider = ENCODING_ISO8859_15 "LinuxTV.org";6363-6464- struct vidtv_channel *s302m = kzalloc(sizeof(*s302m), GFP_KERNEL);6565- struct vidtv_s302m_encoder_init_args encoder_args = {};6969+ s302m = kzalloc(sizeof(*s302m), GFP_KERNEL);7070+ if (!s302m)7171+ return NULL;66726773 s302m->name = kstrdup(name, GFP_KERNEL);7474+ if (!s302m->name)7575+ goto free_s302m;68766969- s302m->service = vidtv_psi_sdt_service_init(NULL, s302m_service_id);7777+ s302m->service = vidtv_psi_sdt_service_init(NULL, s302m_service_id, false, true);7878+ if (!s302m->service)7979+ goto free_name;70807181 s302m->service->descriptor = (struct vidtv_psi_desc *)7282 vidtv_psi_service_desc_init(NULL,7373- DIGITAL_TELEVISION_SERVICE,8383+ DIGITAL_RADIO_SOUND_SERVICE,7484 name,7585 provider);8686+ if (!s302m->service->descriptor)8787+ goto free_service;76887789 s302m->transport_stream_id = transport_stream_id;78907991 s302m->program = vidtv_psi_pat_program_init(NULL,8092 s302m_service_id,8193 s302m_program_pid);9494+ if (!s302m->program)9595+ goto 
free_service;82968397 s302m->program_num = s302m_program_num;84988599 s302m->streams = vidtv_psi_pmt_stream_init(NULL,86100 STREAM_PRIVATE_DATA,87101 s302m_es_pid);102102+ if (!s302m->streams)103103+ goto free_program;8810489105 s302m->streams->descriptor = (struct vidtv_psi_desc *)90106 vidtv_psi_registration_desc_init(NULL,91107 s302m_fid,92108 NULL,93109 0);110110+ if (!s302m->streams->descriptor)111111+ goto free_streams;112112+94113 encoder_args.es_pid = s302m_es_pid;9511496115 s302m->encoders = vidtv_s302m_encoder_init(encoder_args);116116+ if (!s302m->encoders)117117+ goto free_streams;118118+119119+ s302m->events = vidtv_psi_eit_event_init(NULL, s302m_beethoven_event_id);120120+ if (!s302m->events)121121+ goto free_encoders;122122+ s302m->events->descriptor = (struct vidtv_psi_desc *)123123+ vidtv_psi_short_event_desc_init(NULL,124124+ iso_language_code,125125+ event_name,126126+ event_text);127127+ if (!s302m->events->descriptor)128128+ goto free_events;9712998130 if (head) {99131 while (head->next)···136102 }137103138104 return s302m;105105+106106+free_events:107107+ vidtv_psi_eit_event_destroy(s302m->events);108108+free_encoders:109109+ vidtv_s302m_encoder_destroy(s302m->encoders);110110+free_streams:111111+ vidtv_psi_pmt_stream_destroy(s302m->streams);112112+free_program:113113+ vidtv_psi_pat_program_destroy(s302m->program);114114+free_service:115115+ vidtv_psi_sdt_service_destroy(s302m->service);116116+free_name:117117+ kfree(s302m->name);118118+free_s302m:119119+ kfree(s302m);120120+121121+ return NULL;122122+}123123+124124+static struct vidtv_psi_table_eit_event125125+*vidtv_channel_eit_event_cat_into_new(struct vidtv_mux *m)126126+{127127+ /* Concatenate the events */128128+ const struct vidtv_channel *cur_chnl = m->channels;129129+ struct vidtv_psi_table_eit_event *curr = NULL;130130+ struct vidtv_psi_table_eit_event *head = NULL;131131+ struct vidtv_psi_table_eit_event *tail = NULL;132132+ struct vidtv_psi_desc *desc = NULL;133133+ u16 
event_id;134134+135135+ if (!cur_chnl)136136+ return NULL;137137+138138+ while (cur_chnl) {139139+ curr = cur_chnl->events;140140+141141+ if (!curr)142142+ dev_warn_ratelimited(m->dev,143143+ "No events found for channel %s\n",144144+ cur_chnl->name);145145+146146+ while (curr) {147147+ event_id = be16_to_cpu(curr->event_id);148148+ tail = vidtv_psi_eit_event_init(tail, event_id);149149+ if (!tail) {150150+ vidtv_psi_eit_event_destroy(head);151151+ return NULL;152152+ }153153+154154+ desc = vidtv_psi_desc_clone(curr->descriptor);155155+ vidtv_psi_desc_assign(&tail->descriptor, desc);156156+157157+ if (!head)158158+ head = tail;159159+160160+ curr = curr->next;161161+ }162162+163163+ cur_chnl = cur_chnl->next;164164+ }165165+166166+ return head;139167}140168141169static struct vidtv_psi_table_sdt_service···221125222126 if (!curr)223127 dev_warn_ratelimited(m->dev,224224- "No services found for channel %s\n", cur_chnl->name);128128+ "No services found for channel %s\n",129129+ cur_chnl->name);225130226131 while (curr) {227132 service_id = be16_to_cpu(curr->service_id);228228- tail = vidtv_psi_sdt_service_init(tail, service_id);133133+ tail = vidtv_psi_sdt_service_init(tail,134134+ service_id,135135+ curr->EIT_schedule,136136+ curr->EIT_present_following);137137+ if (!tail)138138+ goto free;229139230140 desc = vidtv_psi_desc_clone(curr->descriptor);141141+ if (!desc)142142+ goto free_tail;231143 vidtv_psi_desc_assign(&tail->descriptor, desc);232144233145 if (!head)···248144 }249145250146 return head;147147+148148+free_tail:149149+ vidtv_psi_sdt_service_destroy(tail);150150+free:151151+ vidtv_psi_sdt_service_destroy(head);152152+ return NULL;251153}252154253155static struct vidtv_psi_table_pat_program*···284174 tail = vidtv_psi_pat_program_init(tail,285175 serv_id,286176 pid);177177+ if (!tail) {178178+ vidtv_psi_pat_program_destroy(head);179179+ return NULL;180180+ }287181288182 if (!head)289183 head = tail;···297183298184 cur_chnl = cur_chnl->next;299185 }186186+ /* 
Add the NIT table */187187+ vidtv_psi_pat_program_init(tail, 0, TS_NIT_PID);300188301189 return head;302190}303191192192+/*193193+ * Match channels to their respective PMT sections, then assign the194194+ * streams195195+ */304196static void305197vidtv_channel_pmt_match_sections(struct vidtv_channel *channels,306198 struct vidtv_psi_table_pmt **sections,307199 u32 nsections)308200{309309- /*310310- * Match channels to their respective PMT sections, then assign the311311- * streams312312- */313201 struct vidtv_psi_table_pmt *curr_section = NULL;314314- struct vidtv_channel *cur_chnl = channels;315315-316316- struct vidtv_psi_table_pmt_stream *s = NULL;317202 struct vidtv_psi_table_pmt_stream *head = NULL;318203 struct vidtv_psi_table_pmt_stream *tail = NULL;319319-204204+ struct vidtv_psi_table_pmt_stream *s = NULL;205205+ struct vidtv_channel *cur_chnl = channels;320206 struct vidtv_psi_desc *desc = NULL;321321- u32 j;322322- u16 curr_id;323207 u16 e_pid; /* elementary stream pid */208208+ u16 curr_id;209209+ u32 j;324210325211 while (cur_chnl) {326212 for (j = 0; j < nsections; ++j) {···346232 head = tail;347233348234 desc = vidtv_psi_desc_clone(s->descriptor);349349- vidtv_psi_desc_assign(&tail->descriptor, desc);235235+ vidtv_psi_desc_assign(&tail->descriptor,236236+ desc);350237351238 s = s->next;352239 }···361246 }362247}363248364364-void vidtv_channel_si_init(struct vidtv_mux *m)249249+static void250250+vidtv_channel_destroy_service_list(struct vidtv_psi_desc_service_list_entry *e)365251{252252+ struct vidtv_psi_desc_service_list_entry *tmp;253253+254254+ while (e) {255255+ tmp = e;256256+ e = e->next;257257+ kfree(tmp);258258+ }259259+}260260+261261+static struct vidtv_psi_desc_service_list_entry262262+*vidtv_channel_build_service_list(struct vidtv_psi_table_sdt_service *s)263263+{264264+ struct vidtv_psi_desc_service_list_entry *curr_e = NULL;265265+ struct vidtv_psi_desc_service_list_entry *head_e = NULL;266266+ struct vidtv_psi_desc_service_list_entry 
*prev_e = NULL;267267+ struct vidtv_psi_desc *desc = s->descriptor;268268+ struct vidtv_psi_desc_service *s_desc;269269+270270+ while (s) {271271+ while (desc) {272272+ if (s->descriptor->type != SERVICE_DESCRIPTOR)273273+ goto next_desc;274274+275275+ s_desc = (struct vidtv_psi_desc_service *)desc;276276+277277+ curr_e = kzalloc(sizeof(*curr_e), GFP_KERNEL);278278+ if (!curr_e) {279279+ vidtv_channel_destroy_service_list(head_e);280280+ return NULL;281281+ }282282+283283+ curr_e->service_id = s->service_id;284284+ curr_e->service_type = s_desc->service_type;285285+286286+ if (!head_e)287287+ head_e = curr_e;288288+ if (prev_e)289289+ prev_e->next = curr_e;290290+291291+ prev_e = curr_e;292292+293293+next_desc:294294+ desc = desc->next;295295+ }296296+ s = s->next;297297+ }298298+ return head_e;299299+}300300+301301+int vidtv_channel_si_init(struct vidtv_mux *m)302302+{303303+ struct vidtv_psi_desc_service_list_entry *service_list = NULL;366304 struct vidtv_psi_table_pat_program *programs = NULL;367305 struct vidtv_psi_table_sdt_service *services = NULL;306306+ struct vidtv_psi_table_eit_event *events = NULL;368307369308 m->si.pat = vidtv_psi_pat_table_init(m->transport_stream_id);309309+ if (!m->si.pat)310310+ return -ENOMEM;370311371371- m->si.sdt = vidtv_psi_sdt_table_init(m->transport_stream_id);312312+ m->si.sdt = vidtv_psi_sdt_table_init(m->network_id,313313+ m->transport_stream_id);314314+ if (!m->si.sdt)315315+ goto free_pat;372316373317 programs = vidtv_channel_pat_prog_cat_into_new(m);318318+ if (!programs)319319+ goto free_sdt;374320 services = vidtv_channel_sdt_serv_cat_into_new(m);321321+ if (!services)322322+ goto free_programs;323323+324324+ events = vidtv_channel_eit_event_cat_into_new(m);325325+ if (!events)326326+ goto free_services;327327+328328+ /* look for a service descriptor for every service */329329+ service_list = vidtv_channel_build_service_list(services);330330+ if (!service_list)331331+ goto free_events;332332+333333+ /* use these 
descriptors to build the NIT */334334+ m->si.nit = vidtv_psi_nit_table_init(m->network_id,335335+ m->transport_stream_id,336336+ m->network_name,337337+ service_list);338338+ if (!m->si.nit)339339+ goto free_service_list;340340+341341+ m->si.eit = vidtv_psi_eit_table_init(m->network_id,342342+ m->transport_stream_id,343343+ programs->service_id);344344+ if (!m->si.eit)345345+ goto free_nit;375346376347 /* assemble all programs and assign to PAT */377348 vidtv_psi_pat_program_assign(m->si.pat, programs);···465264 /* assemble all services and assign to SDT */466265 vidtv_psi_sdt_service_assign(m->si.sdt, services);467266468468- m->si.pmt_secs = vidtv_psi_pmt_create_sec_for_each_pat_entry(m->si.pat, m->pcr_pid);267267+ /* assemble all events and assign to EIT */268268+ vidtv_psi_eit_event_assign(m->si.eit, events);269269+270270+ m->si.pmt_secs = vidtv_psi_pmt_create_sec_for_each_pat_entry(m->si.pat,271271+ m->pcr_pid);272272+ if (!m->si.pmt_secs)273273+ goto free_eit;469274470275 vidtv_channel_pmt_match_sections(m->channels,471276 m->si.pmt_secs,472472- m->si.pat->programs);277277+ m->si.pat->num_pmt);278278+279279+ vidtv_channel_destroy_service_list(service_list);280280+281281+ return 0;282282+283283+free_eit:284284+ vidtv_psi_eit_table_destroy(m->si.eit);285285+free_nit:286286+ vidtv_psi_nit_table_destroy(m->si.nit);287287+free_service_list:288288+ vidtv_channel_destroy_service_list(service_list);289289+free_events:290290+ vidtv_psi_eit_event_destroy(events);291291+free_services:292292+ vidtv_psi_sdt_service_destroy(services);293293+free_programs:294294+ vidtv_psi_pat_program_destroy(programs);295295+free_sdt:296296+ vidtv_psi_sdt_table_destroy(m->si.sdt);297297+free_pat:298298+ vidtv_psi_pat_table_destroy(m->si.pat);299299+ return 0;473300}474301475302void vidtv_channel_si_destroy(struct vidtv_mux *m)476303{477304 u32 i;478478- u16 num_programs = m->si.pat->programs;479305480306 vidtv_psi_pat_table_destroy(m->si.pat);481307482482- for (i = 0; i < num_programs; 
++i)308308+ for (i = 0; i < m->si.pat->num_pmt; ++i)483309 vidtv_psi_pmt_table_destroy(m->si.pmt_secs[i]);484310485311 kfree(m->si.pmt_secs);486312 vidtv_psi_sdt_table_destroy(m->si.sdt);313313+ vidtv_psi_nit_table_destroy(m->si.nit);314314+ vidtv_psi_eit_table_destroy(m->si.eit);487315}488316489489-void vidtv_channels_init(struct vidtv_mux *m)317317+int vidtv_channels_init(struct vidtv_mux *m)490318{491319 /* this is the place to add new 'channels' for vidtv */492320 m->channels = vidtv_channel_s302m_init(NULL, m->transport_stream_id);321321+322322+ if (!m->channels)323323+ return -ENOMEM;324324+325325+ return 0;493326}494327495328void vidtv_channels_destroy(struct vidtv_mux *m)···537302 vidtv_psi_pat_program_destroy(curr->program);538303 vidtv_psi_pmt_stream_destroy(curr->streams);539304 vidtv_channel_encoder_destroy(curr->encoders);305305+ vidtv_psi_eit_event_destroy(curr->events);540306541307 tmp = curr;542308 curr = curr->next;
+8-3
drivers/media/test-drivers/vidtv/vidtv_channel.h
···99 * When vidtv boots, it will create some hardcoded channels.1010 * Their services will be concatenated to populate the SDT.1111 * Their programs will be concatenated to populate the PAT1212+ * Their events will be concatenated to populate the EIT1213 * For each program in the PAT, a PMT section will be created1314 * The PMT section for a channel will be assigned its streams.1415 * Every stream will have its corresponding encoder polled to produce TS packets···2322#define VIDTV_CHANNEL_H24232524#include <linux/types.h>2626-#include "vidtv_psi.h"2525+2726#include "vidtv_encoder.h"2827#include "vidtv_mux.h"2828+#include "vidtv_psi.h"29293030/**3131 * struct vidtv_channel - A 'channel' abstraction···3937 * Every stream will have its corresponding encoder polled to produce TS packets4038 * These packets may be interleaved by the mux and then delivered to the bridge4139 *4040+ * @name: name of the channel4241 * @transport_stream_id: a number to identify the TS, chosen at will.4342 * @service: A _single_ service. Will be concatenated into the SDT.4443 * @program_num: The link between PAT, PMT and SDT.···4744 * Will be concatenated into the PAT.4845 * @streams: A stream loop used to populate the PMT section for 'program'4946 * @encoders: A encoder loop. There must be one encoder for each stream.4747+ * @events: Optional event information. 
This will feed into the EIT.5048 * @next: Optionally chain this channel.5149 */5250struct vidtv_channel {···5854 struct vidtv_psi_table_pat_program *program;5955 struct vidtv_psi_table_pmt_stream *streams;6056 struct vidtv_encoder *encoders;5757+ struct vidtv_psi_table_eit_event *events;6158 struct vidtv_channel *next;6259};6360···6661 * vidtv_channel_si_init - Init the PSI tables from the channels in the mux6762 * @m: The mux containing the channels.6863 */6969-void vidtv_channel_si_init(struct vidtv_mux *m);6464+int vidtv_channel_si_init(struct vidtv_mux *m);7065void vidtv_channel_si_destroy(struct vidtv_mux *m);71667267/**7368 * vidtv_channels_init - Init hardcoded, fake 'channels'.7469 * @m: The mux to store the channels into.7570 */7676-void vidtv_channels_init(struct vidtv_mux *m);7171+int vidtv_channels_init(struct vidtv_mux *m);7772struct vidtv_channel7873*vidtv_channel_s302m_init(struct vidtv_channel *head, u16 transport_stream_id);7974void vidtv_channels_destroy(struct vidtv_mux *m);
drivers/media/test-drivers/vidtv/vidtv_demod.h
···1212#define VIDTV_DEMOD_H13131414#include <linux/dvb/frontend.h>1515+1516#include <media/dvb_frontend.h>16171718/**···2019 * modulation and fec_inner2120 * @modulation: see enum fe_modulation2221 * @fec: see enum fe_fec_rate2222+ * @cnr_ok: S/N threshold to consider the signal as OK. Below that, there's2323+ * a chance of losing sync.2424+ * @cnr_good: S/N threshold to consider the signal strong.2325 *2426 * This struct matches values for 'good' and 'ok' CNRs given the combination2527 * of modulation and fec_inner in use. We might simulate some noise if the···5652 * struct vidtv_demod_state - The demodulator state5753 * @frontend: The frontend structure allocated by the demod.5854 * @config: The config used to init the demod.5959- * @poll_snr: The task responsible for periodically checking the simulated6060- * signal quality, eventually dropping or reacquiring the TS lock.6155 * @status: the demod status.6262- * @cold_start: Whether the demod has not been init yet.6363- * @poll_snr_thread_running: Whether the task responsible for periodically6464- * checking the simulated signal quality is running.6565- * @poll_snr_thread_restart: Whether we should restart the poll_snr task.5656+ * @tuner_cnr: current S/N ratio for the signal carrier6657 */6758struct vidtv_demod_state {6859 struct dvb_frontend frontend;
+4-5
drivers/media/test-drivers/vidtv/vidtv_encoder.h
···2828 struct vidtv_access_unit *next;2929};30303131-/* Some musical notes, used by a tone generator */3131+/* Some musical notes, used by a tone generator. Values are in Hz */3232enum musical_notes {3333 NOTE_SILENT = 0,3434···103103 * @encoder_buf_sz: The encoder buffer size, in bytes104104 * @encoder_buf_offset: Our byte position in the encoder buffer.105105 * @sample_count: How many samples we have encoded in total.106106+ * @access_units: encoder payload units, used for clock references106107 * @src_buf: The source of raw data to be encoded, encoder might set a107108 * default if null.109109+ * @src_buf_sz: size of @src_buf.108110 * @src_buf_offset: Our position in the source buffer.109111 * @is_video_encoder: Whether this a video encoder (as opposed to audio)110112 * @ctx: Encoder-specific state.111113 * @stream_id: Examples: Audio streams (0xc0-0xdf), Video streams112114 * (0xe0-0xef).113113- * @es_id: The TS PID to use for the elementary stream in this encoder.115115+ * @es_pid: The TS PID to use for the elementary stream in this encoder.114116 * @encode: Prepare enough AUs for the given amount of time.115117 * @clear: Clear the encoder output.116118 * @sync: Attempt to synchronize with this encoder.···133131 u32 encoder_buf_offset;134132135133 u64 sample_count;136136- int last_duration;137137- int note_offset;138138- enum musical_notes last_tone;139134140135 struct vidtv_access_unit *access_units;141136
drivers/media/test-drivers/vidtv/vidtv_mux.h
···
 #ifndef VIDTV_MUX_H
 #define VIDTV_MUX_H
 
-#include <linux/types.h>
 #include <linux/hashtable.h>
+#include <linux/types.h>
 #include <linux/workqueue.h>
+
 #include <media/dvb_frontend.h>
 
 #include "vidtv_psi.h"
···
  * @pat: The PAT in use by the muxer.
  * @pmt_secs: The PMT sections in use by the muxer. One for each program in the PAT.
  * @sdt: The SDT in use by the muxer.
+ * @nit: The NIT in use by the muxer.
+ * @eit: the EIT in use by the muxer.
  */
 struct vidtv_mux_si {
 	/* the SI tables */
 	struct vidtv_psi_table_pat *pat;
 	struct vidtv_psi_table_pmt **pmt_secs; /* the PMT sections */
 	struct vidtv_psi_table_sdt *sdt;
+	struct vidtv_psi_table_nit *nit;
+	struct vidtv_psi_table_eit *eit;
 };
 
 /**
···
 
 /**
  * struct vidtv_mux - A muxer abstraction loosely based in libavcodec/mpegtsenc.c
- * @mux_rate_kbytes_sec: The bit rate for the TS, in kbytes.
+ * @fe: The frontend structure allocated by the muxer.
+ * @dev: pointer to struct device.
  * @timing: Keeps track of timing related information.
+ * @mux_rate_kbytes_sec: The bit rate for the TS, in kbytes.
  * @pid_ctx: A hash table to keep track of per-PID metadata.
  * @on_new_packets_available_cb: A callback to inform of new TS packets ready.
  * @mux_buf: A pointer to a buffer for this muxer. TS packets are stored there
···
  * @pcr_pid: The TS PID used for the PSI packets. All channels will share the
  * same PCR.
  * @transport_stream_id: The transport stream ID
+ * @network_id: The network ID
+ * @network_name: The network name
  * @priv: Private data.
  */
 struct vidtv_mux {
···
 
 	u16 pcr_pid;
 	u16 transport_stream_id;
+	u16 network_id;
+	char *network_name;
 	void *priv;
 };
···
  * same PCR.
  * @transport_stream_id: The transport stream ID
  * @channels: an optional list of channels to use
+ * @network_id: The network ID
+ * @network_name: The network name
  * @priv: Private data.
  */
 struct vidtv_mux_init_args {
···
 	u16 pcr_pid;
 	u16 transport_stream_id;
 	struct vidtv_channel *channels;
+	u16 network_id;
+	char *network_name;
 	void *priv;
 };
 
 struct vidtv_mux *vidtv_mux_init(struct dvb_frontend *fe,
 				 struct device *dev,
-				 struct vidtv_mux_init_args args);
+				 struct vidtv_mux_init_args *args);
 void vidtv_mux_destroy(struct vidtv_mux *m);
 
 void vidtv_mux_start_thread(struct vidtv_mux *m);
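A recurring change in this series, visible in the new `vidtv_mux_init()` prototype and in the PES/PSI writers further down, is passing the large argument structs by pointer instead of by value, and building derived argument structs with C99 designated initializers. A reduced, userspace sketch of that pattern (the struct names here are hypothetical stand-ins, not the driver's actual types):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, reduced stand-ins for the driver's argument structs. */
struct pes_args {
	uint8_t *dest_buf;
	size_t dest_buf_sz;
	size_t dest_offset;
	uint16_t pid;
};

struct ts_args {
	uint8_t *dest_buf;
	size_t dest_buf_sz;
	uint16_t pid;
};

/* Callee takes a pointer: no struct copy on every call, and updates
 * made through the pointer are visible to the caller. */
static size_t space_left(const struct pes_args *args)
{
	return args->dest_buf_sz - args->dest_offset;
}

/* Derived arguments are built once, up front, with designated
 * initializers instead of field-by-field assignment in the body. */
static struct ts_args make_ts_args(const struct pes_args *args)
{
	struct ts_args ts = {
		.dest_buf = args->dest_buf,
		.dest_buf_sz = args->dest_buf_sz,
		.pid = args->pid,
	};
	return ts;
}
```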
+83-96
drivers/media/test-drivers/vidtv/vidtv_pes.c
···
 #include <linux/types.h>
 #include <linux/printk.h>
 #include <linux/ratelimit.h>
-#include <asm/byteorder.h>
 
 #include "vidtv_pes.h"
 #include "vidtv_common.h"
···
 	return len;
 }
 
-static u32 vidtv_pes_write_header_stuffing(struct pes_header_write_args args)
+static u32 vidtv_pes_write_header_stuffing(struct pes_header_write_args *args)
 {
 	/*
 	 * This is a fixed 8-bit value equal to '0xFF' that can be inserted
···
 	 * It is discarded by the decoder. No more than 32 stuffing bytes shall
 	 * be present in one PES packet header.
 	 */
-	if (args.n_pes_h_s_bytes > PES_HEADER_MAX_STUFFING_BYTES) {
+	if (args->n_pes_h_s_bytes > PES_HEADER_MAX_STUFFING_BYTES) {
 		pr_warn_ratelimited("More than %d stuffing bytes in PES packet header\n",
 				    PES_HEADER_MAX_STUFFING_BYTES);
-		args.n_pes_h_s_bytes = PES_HEADER_MAX_STUFFING_BYTES;
+		args->n_pes_h_s_bytes = PES_HEADER_MAX_STUFFING_BYTES;
 	}
 
-	return vidtv_memset(args.dest_buf,
-			    args.dest_offset,
-			    args.dest_buf_sz,
+	return vidtv_memset(args->dest_buf,
+			    args->dest_offset,
+			    args->dest_buf_sz,
 			    TS_FILL_BYTE,
-			    args.n_pes_h_s_bytes);
+			    args->n_pes_h_s_bytes);
 }
 
-static u32 vidtv_pes_write_pts_dts(struct pes_header_write_args args)
+static u32 vidtv_pes_write_pts_dts(struct pes_header_write_args *args)
 {
 	u32 nbytes = 0;  /* the number of bytes written by this function */
···
 	u64 mask2;
 	u64 mask3;
 
-	if (!args.send_pts && args.send_dts)
+	if (!args->send_pts && args->send_dts)
 		return 0;
 
 	mask1 = GENMASK_ULL(32, 30);
···
 	mask3 = GENMASK_ULL(14, 0);
 
 	/* see ISO/IEC 13818-1 : 2000 p. 32 */
-	if (args.send_pts && args.send_dts) {
-		pts_dts.pts1 = (0x3 << 4) | ((args.pts & mask1) >> 29) | 0x1;
-		pts_dts.pts2 = cpu_to_be16(((args.pts & mask2) >> 14) | 0x1);
-		pts_dts.pts3 = cpu_to_be16(((args.pts & mask3) << 1) | 0x1);
+	if (args->send_pts && args->send_dts) {
+		pts_dts.pts1 = (0x3 << 4) | ((args->pts & mask1) >> 29) | 0x1;
+		pts_dts.pts2 = cpu_to_be16(((args->pts & mask2) >> 14) | 0x1);
+		pts_dts.pts3 = cpu_to_be16(((args->pts & mask3) << 1) | 0x1);
 
-		pts_dts.dts1 = (0x1 << 4) | ((args.dts & mask1) >> 29) | 0x1;
-		pts_dts.dts2 = cpu_to_be16(((args.dts & mask2) >> 14) | 0x1);
-		pts_dts.dts3 = cpu_to_be16(((args.dts & mask3) << 1) | 0x1);
+		pts_dts.dts1 = (0x1 << 4) | ((args->dts & mask1) >> 29) | 0x1;
+		pts_dts.dts2 = cpu_to_be16(((args->dts & mask2) >> 14) | 0x1);
+		pts_dts.dts3 = cpu_to_be16(((args->dts & mask3) << 1) | 0x1);
 
 		op = &pts_dts;
 		op_sz = sizeof(pts_dts);
 
-	} else if (args.send_pts) {
-		pts.pts1 = (0x1 << 5) | ((args.pts & mask1) >> 29) | 0x1;
-		pts.pts2 = cpu_to_be16(((args.pts & mask2) >> 14) | 0x1);
-		pts.pts3 = cpu_to_be16(((args.pts & mask3) << 1) | 0x1);
+	} else if (args->send_pts) {
+		pts.pts1 = (0x1 << 5) | ((args->pts & mask1) >> 29) | 0x1;
+		pts.pts2 = cpu_to_be16(((args->pts & mask2) >> 14) | 0x1);
+		pts.pts3 = cpu_to_be16(((args->pts & mask3) << 1) | 0x1);
 
 		op = &pts;
 		op_sz = sizeof(pts);
 	}
 
 	/* copy PTS/DTS optional */
-	nbytes += vidtv_memcpy(args.dest_buf,
-			       args.dest_offset + nbytes,
-			       args.dest_buf_sz,
+	nbytes += vidtv_memcpy(args->dest_buf,
+			       args->dest_offset + nbytes,
+			       args->dest_buf_sz,
 			       op,
 			       op_sz);
 
 	return nbytes;
 }
 
-static u32 vidtv_pes_write_h(struct pes_header_write_args args)
+static u32 vidtv_pes_write_h(struct pes_header_write_args *args)
 {
 	u32 nbytes = 0;  /* the number of bytes written by this function */
 
 	struct vidtv_mpeg_pes pes_header = {};
 	struct vidtv_pes_optional pes_optional = {};
-	struct pes_header_write_args pts_dts_args = args;
-	u32 stream_id = (args.encoder_id == S302M) ? PRIVATE_STREAM_1_ID : args.stream_id;
+	struct pes_header_write_args pts_dts_args;
+	u32 stream_id = (args->encoder_id == S302M) ? PRIVATE_STREAM_1_ID : args->stream_id;
 	u16 pes_opt_bitfield = 0x01 << 15;
 
 	pes_header.bitfield = cpu_to_be32((PES_START_CODE_PREFIX << 8) | stream_id);
 
-	pes_header.length = cpu_to_be16(vidtv_pes_op_get_len(args.send_pts,
-							     args.send_dts) +
-					args.access_unit_len);
+	pes_header.length = cpu_to_be16(vidtv_pes_op_get_len(args->send_pts,
+							     args->send_dts) +
+					args->access_unit_len);
 
-	if (args.send_pts && args.send_dts)
+	if (args->send_pts && args->send_dts)
 		pes_opt_bitfield |= (0x3 << 6);
-	else if (args.send_pts)
+	else if (args->send_pts)
 		pes_opt_bitfield |= (0x1 << 7);
 
 	pes_optional.bitfield = cpu_to_be16(pes_opt_bitfield);
-	pes_optional.length = vidtv_pes_op_get_len(args.send_pts, args.send_dts) +
-			      args.n_pes_h_s_bytes -
+	pes_optional.length = vidtv_pes_op_get_len(args->send_pts, args->send_dts) +
+			      args->n_pes_h_s_bytes -
 			      sizeof(struct vidtv_pes_optional);
 
 	/* copy header */
-	nbytes += vidtv_memcpy(args.dest_buf,
-			       args.dest_offset + nbytes,
-			       args.dest_buf_sz,
+	nbytes += vidtv_memcpy(args->dest_buf,
+			       args->dest_offset + nbytes,
+			       args->dest_buf_sz,
 			       &pes_header,
 			       sizeof(pes_header));
 
 	/* copy optional header bits */
-	nbytes += vidtv_memcpy(args.dest_buf,
-			       args.dest_offset + nbytes,
-			       args.dest_buf_sz,
+	nbytes += vidtv_memcpy(args->dest_buf,
+			       args->dest_offset + nbytes,
+			       args->dest_buf_sz,
 			       &pes_optional,
 			       sizeof(pes_optional));
 
 	/* copy the timing information */
-	pts_dts_args.dest_offset = args.dest_offset + nbytes;
-	nbytes += vidtv_pes_write_pts_dts(pts_dts_args);
+	pts_dts_args = *args;
+	pts_dts_args.dest_offset = args->dest_offset + nbytes;
+	nbytes += vidtv_pes_write_pts_dts(&pts_dts_args);
 
 	/* write any PES header stuffing */
 	nbytes += vidtv_pes_write_header_stuffing(args);
···
 	return nbytes;
 }
 
-u32 vidtv_pes_write_into(struct pes_write_args args)
+u32 vidtv_pes_write_into(struct pes_write_args *args)
 {
-	u32 unaligned_bytes = (args.dest_offset % TS_PACKET_LEN);
-	struct pes_ts_header_write_args ts_header_args = {};
-	struct pes_header_write_args pes_header_args = {};
-	u32 remaining_len = args.access_unit_len;
+	u32 unaligned_bytes = (args->dest_offset % TS_PACKET_LEN);
+	struct pes_ts_header_write_args ts_header_args = {
+		.dest_buf = args->dest_buf,
+		.dest_buf_sz = args->dest_buf_sz,
+		.pid = args->pid,
+		.pcr = args->pcr,
+		.continuity_counter = args->continuity_counter,
+	};
+	struct pes_header_write_args pes_header_args = {
+		.dest_buf = args->dest_buf,
+		.dest_buf_sz = args->dest_buf_sz,
+		.encoder_id = args->encoder_id,
+		.send_pts = args->send_pts,
+		.pts = args->pts,
+		.send_dts = args->send_dts,
+		.dts = args->dts,
+		.stream_id = args->stream_id,
+		.n_pes_h_s_bytes = args->n_pes_h_s_bytes,
+		.access_unit_len = args->access_unit_len,
+	};
+	u32 remaining_len = args->access_unit_len;
 	bool wrote_pes_header = false;
-	u64 last_pcr = args.pcr;
+	u64 last_pcr = args->pcr;
 	bool need_pcr = true;
 	u32 available_space;
 	u32 payload_size;
···
 		pr_warn_ratelimited("buffer is misaligned, while starting PES\n");
 
 		/* forcibly align and hope for the best */
-		nbytes += vidtv_memset(args.dest_buf,
-				       args.dest_offset + nbytes,
-				       args.dest_buf_sz,
+		nbytes += vidtv_memset(args->dest_buf,
+				       args->dest_offset + nbytes,
+				       args->dest_buf_sz,
 				       TS_FILL_BYTE,
 				       TS_PACKET_LEN - unaligned_bytes);
-	}
-
-	if (args.send_dts && !args.send_pts) {
-		pr_warn_ratelimited("forbidden value '01' for PTS_DTS flags\n");
-		args.send_pts = true;
-		args.pts = args.dts;
-	}
-
-	/* see SMPTE 302M clause 6.4 */
-	if (args.encoder_id == S302M) {
-		args.send_dts = false;
-		args.send_pts = true;
 	}
 
 	while (remaining_len) {
···
 		 * the space needed for the TS header _and_ for the PES header
 		 */
 		if (!wrote_pes_header)
-			available_space -= vidtv_pes_h_get_len(args.send_pts,
-							       args.send_dts);
+			available_space -= vidtv_pes_h_get_len(args->send_pts,
+							       args->send_dts);
 
 		/*
 		 * if the encoder has inserted stuffing bytes in the PES
 		 * header, account for them.
 		 */
-		available_space -= args.n_pes_h_s_bytes;
+		available_space -= args->n_pes_h_s_bytes;
 
 		/* Take the extra adaptation into account if need to send PCR */
 		if (need_pcr) {
···
 		}
 
 		/* write ts header */
-		ts_header_args.dest_buf = args.dest_buf;
-		ts_header_args.dest_offset = args.dest_offset + nbytes;
-		ts_header_args.dest_buf_sz = args.dest_buf_sz;
-		ts_header_args.pid = args.pid;
-		ts_header_args.pcr = args.pcr;
-		ts_header_args.continuity_counter = args.continuity_counter;
-		ts_header_args.wrote_pes_header = wrote_pes_header;
-		ts_header_args.n_stuffing_bytes = stuff_bytes;
+		ts_header_args.dest_offset = args->dest_offset + nbytes;
+		ts_header_args.wrote_pes_header = wrote_pes_header;
+		ts_header_args.n_stuffing_bytes = stuff_bytes;
 
 		nbytes += vidtv_pes_write_ts_h(ts_header_args, need_pcr,
 					       &last_pcr);
···
 
 		if (!wrote_pes_header) {
 			/* write the PES header only once */
-			pes_header_args.dest_buf = args.dest_buf;
-
-			pes_header_args.dest_offset = args.dest_offset +
-						      nbytes;
-
-			pes_header_args.dest_buf_sz = args.dest_buf_sz;
-			pes_header_args.encoder_id = args.encoder_id;
-			pes_header_args.send_pts = args.send_pts;
-			pes_header_args.pts = args.pts;
-			pes_header_args.send_dts = args.send_dts;
-			pes_header_args.dts = args.dts;
-			pes_header_args.stream_id = args.stream_id;
-			pes_header_args.n_pes_h_s_bytes = args.n_pes_h_s_bytes;
-			pes_header_args.access_unit_len = args.access_unit_len;
-
-			nbytes += vidtv_pes_write_h(pes_header_args);
-			wrote_pes_header = true;
+			pes_header_args.dest_offset = args->dest_offset +
+						      nbytes;
+			nbytes += vidtv_pes_write_h(&pes_header_args);
+			wrote_pes_header = true;
 		}
 
 		/* write as much of the payload as we possibly can */
-		nbytes += vidtv_memcpy(args.dest_buf,
-				       args.dest_offset + nbytes,
-				       args.dest_buf_sz,
-				       args.from,
+		nbytes += vidtv_memcpy(args->dest_buf,
+				       args->dest_offset + nbytes,
+				       args->dest_buf_sz,
+				       args->from,
 				       payload_size);
 
-		args.from += payload_size;
+		args->from += payload_size;
 
 		remaining_len -= payload_size;
 	}
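The mask-and-shift logic in `vidtv_pes_write_pts_dts()` implements the PES timestamp encoding of ISO/IEC 13818-1: the 33-bit PTS/DTS is split into a 3-bit, a 15-bit and a 15-bit field, each terminated by a '1' marker bit, behind a 4-bit prefix ('0010' for PTS alone). A standalone userspace sketch of the same packing (the helper name is hypothetical, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Pack a 33-bit PES timestamp into its 5-byte wire form: bits 32..30,
 * 29..15 and 14..0, each field followed by a '1' marker bit, after a
 * 4-bit prefix. Mirrors the GENMASK_ULL(32, 30) / (29, 15) / (14, 0)
 * split used by vidtv_pes_write_pts_dts().
 */
static void encode_pes_timestamp(uint64_t ts, uint8_t prefix, uint8_t out[5])
{
	uint16_t mid = (uint16_t)((((ts >> 15) & 0x7fff) << 1) | 0x1);
	uint16_t low = (uint16_t)(((ts & 0x7fff) << 1) | 0x1);

	out[0] = (uint8_t)((prefix << 4) | (((ts >> 30) & 0x7) << 1) | 0x1);
	out[1] = (uint8_t)(mid >> 8);
	out[2] = (uint8_t)(mid & 0xff);
	out[3] = (uint8_t)(low >> 8);
	out[4] = (uint8_t)(low & 0xff);
}
```

Only the marker bits survive for a zero timestamp, which makes the framing easy to eyeball in a hex dump.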
+5-3
drivers/media/test-drivers/vidtv/vidtv_pes.h
···
 #ifndef VIDTV_PES_H
 #define VIDTV_PES_H
 
-#include <asm/byteorder.h>
 #include <linux/types.h>
 
 #include "vidtv_common.h"
···
  * @dest_buf_sz: The size of the dest_buffer
  * @pid: The PID to use for the TS packets.
  * @continuity_counter: Incremented on every new TS packet.
- * @n_pes_h_s_bytes: Padding bytes. Might be used by an encoder if needed, gets
+ * @wrote_pes_header: Flag to indicate that the PES header was written
+ * @n_stuffing_bytes: Padding bytes. Might be used by an encoder if needed, gets
  * discarded by the decoder.
+ * @pcr: counter driven by a 27Mhz clock.
  */
 struct pes_ts_header_write_args {
 	void *dest_buf;
···
  * @dts: DTS value to send.
  * @n_pes_h_s_bytes: Padding bytes. Might be used by an encoder if needed, gets
  * discarded by the decoder.
+ * @pcr: counter driven by a 27Mhz clock.
  */
 struct pes_write_args {
 	void *dest_buf;
···
  * equal to the size of the access unit, since we need space for PES headers, TS headers
  * and padding bytes, if any.
  */
-u32 vidtv_pes_write_into(struct pes_write_args args);
+u32 vidtv_pes_write_into(struct pes_write_args *args);
 
 #endif // VIDTV_PES_H
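The PSI changes that follow keep using `dvb_crc32()`, the table-driven CRC-32/MPEG-2 (polynomial 0x04c11db7, initial value 0xffffffff, which the patch names `INITIAL_CRC`, no bit reflection, no final XOR) that every PSI section carries as its last four bytes. A bitwise userspace equivalent, useful for cross-checking generated sections (the function name is hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * CRC-32/MPEG-2: poly 0x04c11db7, init 0xffffffff, no input/output
 * reflection, no final XOR. The running value is threaded through
 * successive calls, matching how vidtv accumulates the CRC while
 * writing a section chunk by chunk.
 */
static uint32_t mpeg2_crc32(uint32_t crc, const uint8_t *data, size_t len)
{
	while (len--) {
		int i;

		crc ^= (uint32_t)*data++ << 24;
		for (i = 0; i < 8; i++)
			crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04c11db7u
						  : crc << 1;
	}
	return crc;
}
```

A table-driven version like `CRC_LUT` in vidtv_psi.c computes the same values, one byte per lookup instead of eight shifts.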
+1110-407
drivers/media/test-drivers/vidtv/vidtv_psi.c
···
  * technically be broken into one or more sections, we do not do this here,
  * hence 'table' and 'section' are interchangeable for vidtv.
  *
- * This code currently supports three tables: PAT, PMT and SDT. These are the
- * bare minimum to get userspace to recognize our MPEG transport stream. It can
- * be extended to support more PSI tables in the future.
- *
  * Copyright (C) 2020 Daniel W. S. Almeida
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ":%s, %d: " fmt, __func__, __LINE__
 
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/slab.h>
+#include <linux/bcd.h>
 #include <linux/crc32.h>
-#include <linux/string.h>
+#include <linux/kernel.h>
+#include <linux/ktime.h>
 #include <linux/printk.h>
 #include <linux/ratelimit.h>
+#include <linux/slab.h>
 #include <linux/string.h>
-#include <asm/byteorder.h>
+#include <linux/string.h>
+#include <linux/time.h>
+#include <linux/types.h>
 
-#include "vidtv_psi.h"
 #include "vidtv_common.h"
+#include "vidtv_psi.h"
 #include "vidtv_ts.h"
 
 #define CRC_SIZE_IN_BYTES 4
 #define MAX_VERSION_NUM 32
+#define INITIAL_CRC 0xffffffff
+#define ISO_LANGUAGE_CODE_LEN 3
 
 static const u32 CRC_LUT[256] = {
 	/* from libdvbv5 */
···
 	0xbcb4666d, 0xb8757bda, 0xb5365d03, 0xb1f740b4
 };
 
-static inline u32 dvb_crc32(u32 crc, u8 *data, u32 len)
+static u32 dvb_crc32(u32 crc, u8 *data, u32 len)
 {
 	/* from libdvbv5 */
 	while (len--)
···
 	h->version++;
 }
 
-static inline u16 vidtv_psi_sdt_serv_get_desc_loop_len(struct vidtv_psi_table_sdt_service *s)
-{
-	u16 mask;
-	u16 ret;
-
-	mask = GENMASK(11, 0);
-
-	ret = be16_to_cpu(s->bitfield) & mask;
-	return ret;
-}
-
-static inline u16 vidtv_psi_pmt_stream_get_desc_loop_len(struct vidtv_psi_table_pmt_stream *s)
-{
-	u16 mask;
-	u16 ret;
-
-	mask = GENMASK(9, 0);
-
-	ret = be16_to_cpu(s->bitfield2) & mask;
-	return ret;
-}
-
-static inline u16 vidtv_psi_pmt_get_desc_loop_len(struct vidtv_psi_table_pmt *p)
-{
-	u16 mask;
-	u16 ret;
-
-	mask = GENMASK(9, 0);
-
-	ret = be16_to_cpu(p->bitfield2) & mask;
-	return ret;
-}
-
-static inline u16 vidtv_psi_get_sec_len(struct vidtv_psi_table_header *h)
+static u16 vidtv_psi_get_sec_len(struct vidtv_psi_table_header *h)
 {
 	u16 mask;
 	u16 ret;
···
 	return ret;
 }
 
-inline u16 vidtv_psi_get_pat_program_pid(struct vidtv_psi_table_pat_program *p)
+u16 vidtv_psi_get_pat_program_pid(struct vidtv_psi_table_pat_program *p)
 {
 	u16 mask;
 	u16 ret;
···
 	return ret;
 }
 
-inline u16 vidtv_psi_pmt_stream_get_elem_pid(struct vidtv_psi_table_pmt_stream *s)
+u16 vidtv_psi_pmt_stream_get_elem_pid(struct vidtv_psi_table_pmt_stream *s)
 {
 	u16 mask;
 	u16 ret;
···
 	return ret;
 }
 
-static inline void vidtv_psi_set_desc_loop_len(__be16 *bitfield, u16 new_len, u8 desc_len_nbits)
+static void vidtv_psi_set_desc_loop_len(__be16 *bitfield, u16 new_len,
+					u8 desc_len_nbits)
 {
-	u16 mask;
 	__be16 new;
+	u16 mask;
 
 	mask = GENMASK(15, desc_len_nbits);
···
 	h->bitfield = new;
 }
 
-static u32 vidtv_psi_ts_psi_write_into(struct psi_write_args args)
+/*
+ * Packetize PSI sections into TS packets:
+ * push a TS header (4bytes) every 184 bytes
+ * manage the continuity_counter
+ * add stuffing (i.e. padding bytes) after the CRC
+ */
+static u32 vidtv_psi_ts_psi_write_into(struct psi_write_args *args)
 {
-	/*
-	 * Packetize PSI sections into TS packets:
-	 * push a TS header (4bytes) every 184 bytes
-	 * manage the continuity_counter
-	 * add stuffing (i.e. padding bytes) after the CRC
-	 */
-
-	u32 nbytes_past_boundary = (args.dest_offset % TS_PACKET_LEN);
+	struct vidtv_mpeg_ts ts_header = {
+		.sync_byte = TS_SYNC_BYTE,
+		.bitfield = cpu_to_be16((args->new_psi_section << 14) | args->pid),
+		.scrambling = 0,
+		.payload = 1,
+		.adaptation_field = 0, /* no adaptation field */
+	};
+	u32 nbytes_past_boundary = (args->dest_offset % TS_PACKET_LEN);
 	bool aligned = (nbytes_past_boundary == 0);
-	struct vidtv_mpeg_ts ts_header = {};
-
-	/* number of bytes written by this function */
-	u32 nbytes = 0;
-	/* how much there is left to write */
-	u32 remaining_len = args.len;
-	/* how much we can be written in this packet */
+	u32 remaining_len = args->len;
 	u32 payload_write_len = 0;
-	/* where we are in the source */
 	u32 payload_offset = 0;
+	u32 nbytes = 0;
 
-	const u16 PAYLOAD_START = args.new_psi_section;
-
-	if (!args.crc && !args.is_crc)
+	if (!args->crc && !args->is_crc)
 		pr_warn_ratelimited("Missing CRC for chunk\n");
 
-	if (args.crc)
-		*args.crc = dvb_crc32(*args.crc, args.from, args.len);
+	if (args->crc)
+		*args->crc = dvb_crc32(*args->crc, args->from, args->len);
 
-	if (args.new_psi_section && !aligned) {
+	if (args->new_psi_section && !aligned) {
 		pr_warn_ratelimited("Cannot write a new PSI section in a misaligned buffer\n");
 
 		/* forcibly align and hope for the best */
-		nbytes += vidtv_memset(args.dest_buf,
-				       args.dest_offset + nbytes,
-				       args.dest_buf_sz,
+		nbytes += vidtv_memset(args->dest_buf,
+				       args->dest_offset + nbytes,
+				       args->dest_buf_sz,
 				       TS_FILL_BYTE,
 				       TS_PACKET_LEN - nbytes_past_boundary);
 	}
 
 	while (remaining_len) {
-		nbytes_past_boundary = (args.dest_offset + nbytes) % TS_PACKET_LEN;
+		nbytes_past_boundary = (args->dest_offset + nbytes) % TS_PACKET_LEN;
 		aligned = (nbytes_past_boundary == 0);
 
 		if (aligned) {
 			/* if at a packet boundary, write a new TS header */
-			ts_header.sync_byte = TS_SYNC_BYTE;
-			ts_header.bitfield = cpu_to_be16((PAYLOAD_START << 14) | args.pid);
-			ts_header.scrambling = 0;
-			ts_header.continuity_counter = *args.continuity_counter;
-			ts_header.payload = 1;
-			/* no adaptation field */
-			ts_header.adaptation_field = 0;
+			ts_header.continuity_counter = *args->continuity_counter;
 
-			/* copy the header */
-			nbytes += vidtv_memcpy(args.dest_buf,
-					       args.dest_offset + nbytes,
-					       args.dest_buf_sz,
+			nbytes += vidtv_memcpy(args->dest_buf,
+					       args->dest_offset + nbytes,
+					       args->dest_buf_sz,
 					       &ts_header,
 					       sizeof(ts_header));
 			/*
 			 * This will trigger a discontinuity if the buffer is full,
 			 * effectively dropping the packet.
 			 */
-			vidtv_ts_inc_cc(args.continuity_counter);
+			vidtv_ts_inc_cc(args->continuity_counter);
 		}
 
 		/* write the pointer_field in the first byte of the payload */
-		if (args.new_psi_section)
-			nbytes += vidtv_memset(args.dest_buf,
-					       args.dest_offset + nbytes,
-					       args.dest_buf_sz,
+		if (args->new_psi_section)
+			nbytes += vidtv_memset(args->dest_buf,
+					       args->dest_offset + nbytes,
+					       args->dest_buf_sz,
 					       0x0,
 					       1);
 
 		/* write as much of the payload as possible */
-		nbytes_past_boundary = (args.dest_offset + nbytes) % TS_PACKET_LEN;
+		nbytes_past_boundary = (args->dest_offset + nbytes) % TS_PACKET_LEN;
 		payload_write_len = min(TS_PACKET_LEN - nbytes_past_boundary, remaining_len);
 
-		nbytes += vidtv_memcpy(args.dest_buf,
-				       args.dest_offset + nbytes,
-				       args.dest_buf_sz,
-				       args.from + payload_offset,
+		nbytes += vidtv_memcpy(args->dest_buf,
+				       args->dest_offset + nbytes,
+				       args->dest_buf_sz,
+				       args->from + payload_offset,
 				       payload_write_len);
 
 		/* 'payload_write_len' written from a total of 'len' requested*/
···
 	 * fill the rest of the packet if there is any remaining space unused
 	 */
 
-	nbytes_past_boundary = (args.dest_offset + nbytes) % TS_PACKET_LEN;
+	nbytes_past_boundary = (args->dest_offset + nbytes) % TS_PACKET_LEN;
 
-	if (args.is_crc)
-		nbytes += vidtv_memset(args.dest_buf,
-				       args.dest_offset + nbytes,
-				       args.dest_buf_sz,
+	if (args->is_crc)
+		nbytes += vidtv_memset(args->dest_buf,
+				       args->dest_offset + nbytes,
+				       args->dest_buf_sz,
 				       TS_FILL_BYTE,
 				       TS_PACKET_LEN - nbytes_past_boundary);
 
 	return nbytes;
 }
 
-static u32 table_section_crc32_write_into(struct crc32_write_args args)
+static u32 table_section_crc32_write_into(struct crc32_write_args *args)
 {
+	struct psi_write_args psi_args = {
+		.dest_buf = args->dest_buf,
+		.from = &args->crc,
+		.len = CRC_SIZE_IN_BYTES,
+		.dest_offset = args->dest_offset,
+		.pid = args->pid,
+		.new_psi_section = false,
+		.continuity_counter = args->continuity_counter,
+		.is_crc = true,
+		.dest_buf_sz = args->dest_buf_sz,
+	};
+
 	/* the CRC is the last entry in the section */
-	u32 nbytes = 0;
-	struct psi_write_args psi_args = {};
 
-	psi_args.dest_buf = args.dest_buf;
-	psi_args.from = &args.crc;
-	psi_args.len = CRC_SIZE_IN_BYTES;
-	psi_args.dest_offset = args.dest_offset;
-	psi_args.pid = args.pid;
-	psi_args.new_psi_section = false;
-	psi_args.continuity_counter = args.continuity_counter;
-	psi_args.is_crc = true;
-	psi_args.dest_buf_sz = args.dest_buf_sz;
+	return vidtv_psi_ts_psi_write_into(&psi_args);
+}
 
-	nbytes += vidtv_psi_ts_psi_write_into(psi_args);
+static void vidtv_psi_desc_chain(struct vidtv_psi_desc *head, struct vidtv_psi_desc *desc)
+{
+	if (head) {
+		while (head->next)
+			head = head->next;
 
-	return nbytes;
+		head->next = desc;
+	}
 }
 
 struct vidtv_psi_desc_service *vidtv_psi_service_desc_init(struct vidtv_psi_desc *head,
···
 	u32 provider_name_len = provider_name ? strlen(provider_name) : 0;
 
 	desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+	if (!desc)
+		return NULL;
 
 	desc->type = SERVICE_DESCRIPTOR;
···
 	if (provider_name && provider_name_len)
 		desc->provider_name = kstrdup(provider_name, GFP_KERNEL);
 
-	if (head) {
-		while (head->next)
-			head = head->next;
-
-		head->next = (struct vidtv_psi_desc *)desc;
-	}
+	vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc);
 	return desc;
 }
···
 	struct vidtv_psi_desc_registration *desc;
 
 	desc = kzalloc(sizeof(*desc) + sizeof(format_id) + additional_info_len, GFP_KERNEL);
+	if (!desc)
+		return NULL;
 
 	desc->type = REGISTRATION_DESCRIPTOR;
···
 			       additional_ident_info,
 			       additional_info_len);
 
-	if (head) {
-		while (head->next)
-			head = head->next;
+	vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc);
+	return desc;
+}
 
-		head->next = (struct vidtv_psi_desc *)desc;
+struct vidtv_psi_desc_network_name
+*vidtv_psi_network_name_desc_init(struct vidtv_psi_desc *head, char *network_name)
+{
+	u32 network_name_len = network_name ? strlen(network_name) : 0;
+	struct vidtv_psi_desc_network_name *desc;
+
+	desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+	if (!desc)
+		return NULL;
+
+	desc->type = NETWORK_NAME_DESCRIPTOR;
+
+	desc->length = network_name_len;
+
+	if (network_name && network_name_len)
+		desc->network_name = kstrdup(network_name, GFP_KERNEL);
+
+	vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc);
+	return desc;
+}
+
+struct vidtv_psi_desc_service_list
+*vidtv_psi_service_list_desc_init(struct vidtv_psi_desc *head,
+				  struct vidtv_psi_desc_service_list_entry *entry)
+{
+	struct vidtv_psi_desc_service_list_entry *curr_e = NULL;
+	struct vidtv_psi_desc_service_list_entry *head_e = NULL;
+	struct vidtv_psi_desc_service_list_entry *prev_e = NULL;
+	struct vidtv_psi_desc_service_list *desc;
+	u16 length = 0;
+
+	desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+	if (!desc)
+		return NULL;
+
+	desc->type = SERVICE_LIST_DESCRIPTOR;
+
+	while (entry) {
+		curr_e = kzalloc(sizeof(*curr_e), GFP_KERNEL);
+		if (!curr_e) {
+			while (head_e) {
+				curr_e = head_e;
+				head_e = head_e->next;
+				kfree(curr_e);
+			}
+			kfree(desc);
+			return NULL;
+		}
+
+		curr_e->service_id = entry->service_id;
+		curr_e->service_type = entry->service_type;
+
+		length += sizeof(struct vidtv_psi_desc_service_list_entry) -
+			  sizeof(struct vidtv_psi_desc_service_list_entry *);
+
+		if (!head_e)
+			head_e = curr_e;
+		if (prev_e)
+			prev_e->next = curr_e;
+
+		prev_e = curr_e;
+		entry = entry->next;
 	}
 
+	desc->length = length;
+	desc->service_list = head_e;
+
+	vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc);
+	return desc;
+}
+
+struct vidtv_psi_desc_short_event
+*vidtv_psi_short_event_desc_init(struct vidtv_psi_desc *head,
+				 char *iso_language_code,
+				 char *event_name,
+				 char *text)
+{
+	u32 iso_len = iso_language_code ? strlen(iso_language_code) : 0;
+	u32 event_name_len = event_name ? strlen(event_name) : 0;
+	struct vidtv_psi_desc_short_event *desc;
+	u32 text_len = text ? strlen(text) : 0;
+
+	desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+	if (!desc)
+		return NULL;
+
+	desc->type = SHORT_EVENT_DESCRIPTOR;
+
+	desc->length = ISO_LANGUAGE_CODE_LEN +
+		       sizeof_field(struct vidtv_psi_desc_short_event, event_name_len) +
+		       event_name_len +
+		       sizeof_field(struct vidtv_psi_desc_short_event, text_len) +
+		       text_len;
+
+	desc->event_name_len = event_name_len;
+	desc->text_len = text_len;
+
+	if (iso_len != ISO_LANGUAGE_CODE_LEN)
+		iso_language_code = "eng";
+
+	desc->iso_language_code = kstrdup(iso_language_code, GFP_KERNEL);
+
+	if (event_name && event_name_len)
+		desc->event_name = kstrdup(event_name, GFP_KERNEL);
+
+	if (text && text_len)
+		desc->text = kstrdup(text, GFP_KERNEL);
+
+	vidtv_psi_desc_chain(head, (struct vidtv_psi_desc *)desc);
 	return desc;
 }
 
 struct vidtv_psi_desc *vidtv_psi_desc_clone(struct vidtv_psi_desc *desc)
 {
+	struct vidtv_psi_desc_network_name *desc_network_name;
+	struct vidtv_psi_desc_service_list *desc_service_list;
+	struct vidtv_psi_desc_short_event *desc_short_event;
+	struct vidtv_psi_desc_service *service;
 	struct vidtv_psi_desc *head = NULL;
 	struct vidtv_psi_desc *prev = NULL;
 	struct vidtv_psi_desc *curr = NULL;
-
-	struct vidtv_psi_desc_service *service;
 
 	while (desc) {
 		switch (desc->type) {
 		case SERVICE_DESCRIPTOR:
 			service = (struct vidtv_psi_desc_service *)desc;
 			curr = (struct vidtv_psi_desc *)
-			       vidtv_psi_service_desc_init(head,
-							   service->service_type,
-							   service->service_name,
-							   service->provider_name);
+				vidtv_psi_service_desc_init(head,
+							    service->service_type,
+							    service->service_name,
+							    service->provider_name);
+			break;
+
+		case NETWORK_NAME_DESCRIPTOR:
+			desc_network_name = (struct vidtv_psi_desc_network_name *)desc;
+			curr = (struct vidtv_psi_desc *)
+				vidtv_psi_network_name_desc_init(head,
+								 desc_network_name->network_name);
+			break;
+
+		case SERVICE_LIST_DESCRIPTOR:
+			desc_service_list = (struct vidtv_psi_desc_service_list *)desc;
+			curr = (struct vidtv_psi_desc *)
+				vidtv_psi_service_list_desc_init(head,
+								 desc_service_list->service_list);
+			break;
+
+		case SHORT_EVENT_DESCRIPTOR:
+			desc_short_event = (struct vidtv_psi_desc_short_event *)desc;
+			curr = (struct vidtv_psi_desc *)
+				vidtv_psi_short_event_desc_init(head,
+								desc_short_event->iso_language_code,
+								desc_short_event->event_name,
+								desc_short_event->text);
 			break;
 
 		case REGISTRATION_DESCRIPTOR:
 		default:
 			curr = kzalloc(sizeof(*desc) + desc->length, GFP_KERNEL);
+			if (!curr)
+				return NULL;
 			memcpy(curr, desc, sizeof(*desc) + desc->length);
-			break;
-		}
+		}
 
-		if (curr)
-			curr->next = NULL;
+		if (!curr)
+			return NULL;
+
+		curr->next = NULL;
 		if (!head)
 			head = curr;
 		if (prev)
···
 
 void vidtv_psi_desc_destroy(struct vidtv_psi_desc *desc)
 {
+	struct vidtv_psi_desc_service_list_entry *sl_entry_tmp = NULL;
+	struct vidtv_psi_desc_service_list_entry *sl_entry = NULL;
 	struct vidtv_psi_desc *curr = desc;
 	struct vidtv_psi_desc *tmp = NULL;
···
 		case REGISTRATION_DESCRIPTOR:
 			/* nothing to do */
 			break;
+
+		case NETWORK_NAME_DESCRIPTOR:
+			kfree(((struct vidtv_psi_desc_network_name *)tmp)->network_name);
+			break;
+
+		case SERVICE_LIST_DESCRIPTOR:
+			sl_entry = ((struct vidtv_psi_desc_service_list *)tmp)->service_list;
+			while (sl_entry) {
+				sl_entry_tmp = sl_entry;
+				sl_entry = sl_entry->next;
+				kfree(sl_entry_tmp);
+			}
+			break;
+
+		case SHORT_EVENT_DESCRIPTOR:
+			kfree(((struct vidtv_psi_desc_short_event *)tmp)->iso_language_code);
+			kfree(((struct vidtv_psi_desc_short_event *)tmp)->event_name);
+			kfree(((struct vidtv_psi_desc_short_event *)tmp)->text);
+			break;
 
 		default:
 			pr_warn_ratelimited("Possible leak: not handling descriptor type %d\n",
···
 	vidtv_psi_update_version_num(&sdt->header);
 }
 
-static u32 vidtv_psi_desc_write_into(struct desc_write_args args)
+static u32 vidtv_psi_desc_write_into(struct desc_write_args *args)
 {
-	/* the number of bytes written by this function */
+	struct psi_write_args psi_args = {
+		.dest_buf = args->dest_buf,
+		.from = &args->desc->type,
+		.pid = args->pid,
+		.new_psi_section = false,
+		.continuity_counter = args->continuity_counter,
+		.is_crc = false,
+		.dest_buf_sz = args->dest_buf_sz,
+		.crc = args->crc,
+		.len = sizeof_field(struct vidtv_psi_desc, type) +
+		       sizeof_field(struct vidtv_psi_desc, length),
+	};
+	struct vidtv_psi_desc_service_list_entry *serv_list_entry = NULL;
 	u32 nbytes = 0;
-	struct psi_write_args psi_args = {};
 
-	psi_args.dest_buf = args.dest_buf;
-	psi_args.from = &args.desc->type;
+	psi_args.dest_offset = args->dest_offset + nbytes;
 
-	psi_args.len = sizeof_field(struct vidtv_psi_desc, type) +
-		       sizeof_field(struct vidtv_psi_desc, length);
+	nbytes += vidtv_psi_ts_psi_write_into(&psi_args);
 
-	psi_args.dest_offset = args.dest_offset + nbytes;
-	psi_args.pid = args.pid;
-	psi_args.new_psi_section
= false;652652- psi_args.continuity_counter = args.continuity_counter;653653- psi_args.is_crc = false;654654- psi_args.dest_buf_sz = args.dest_buf_sz;655655- psi_args.crc = args.crc;656656-657657- nbytes += vidtv_psi_ts_psi_write_into(psi_args);658658-659659- switch (args.desc->type) {537537+ switch (args->desc->type) {660538 case SERVICE_DESCRIPTOR:661661- psi_args.dest_offset = args.dest_offset + nbytes;539539+ psi_args.dest_offset = args->dest_offset + nbytes;662540 psi_args.len = sizeof_field(struct vidtv_psi_desc_service, service_type) +663541 sizeof_field(struct vidtv_psi_desc_service, provider_name_len);664664- psi_args.from = &((struct vidtv_psi_desc_service *)args.desc)->service_type;542542+ psi_args.from = &((struct vidtv_psi_desc_service *)args->desc)->service_type;665543666666- nbytes += vidtv_psi_ts_psi_write_into(psi_args);544544+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);667545668668- psi_args.dest_offset = args.dest_offset + nbytes;669669- psi_args.len = ((struct vidtv_psi_desc_service *)args.desc)->provider_name_len;670670- psi_args.from = ((struct vidtv_psi_desc_service *)args.desc)->provider_name;546546+ psi_args.dest_offset = args->dest_offset + nbytes;547547+ psi_args.len = ((struct vidtv_psi_desc_service *)args->desc)->provider_name_len;548548+ psi_args.from = ((struct vidtv_psi_desc_service *)args->desc)->provider_name;671549672672- nbytes += vidtv_psi_ts_psi_write_into(psi_args);550550+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);673551674674- psi_args.dest_offset = args.dest_offset + nbytes;552552+ psi_args.dest_offset = args->dest_offset + nbytes;675553 psi_args.len = sizeof_field(struct vidtv_psi_desc_service, service_name_len);676676- psi_args.from = &((struct vidtv_psi_desc_service *)args.desc)->service_name_len;554554+ psi_args.from = &((struct vidtv_psi_desc_service *)args->desc)->service_name_len;677555678678- nbytes += vidtv_psi_ts_psi_write_into(psi_args);556556+ nbytes += 
vidtv_psi_ts_psi_write_into(&psi_args);679557680680- psi_args.dest_offset = args.dest_offset + nbytes;681681- psi_args.len = ((struct vidtv_psi_desc_service *)args.desc)->service_name_len;682682- psi_args.from = ((struct vidtv_psi_desc_service *)args.desc)->service_name;558558+ psi_args.dest_offset = args->dest_offset + nbytes;559559+ psi_args.len = ((struct vidtv_psi_desc_service *)args->desc)->service_name_len;560560+ psi_args.from = ((struct vidtv_psi_desc_service *)args->desc)->service_name;683561684684- nbytes += vidtv_psi_ts_psi_write_into(psi_args);562562+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);563563+ break;564564+565565+ case NETWORK_NAME_DESCRIPTOR:566566+ psi_args.dest_offset = args->dest_offset + nbytes;567567+ psi_args.len = args->desc->length;568568+ psi_args.from = ((struct vidtv_psi_desc_network_name *)args->desc)->network_name;569569+570570+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);571571+ break;572572+573573+ case SERVICE_LIST_DESCRIPTOR:574574+ serv_list_entry = ((struct vidtv_psi_desc_service_list *)args->desc)->service_list;575575+ while (serv_list_entry) {576576+ psi_args.dest_offset = args->dest_offset + nbytes;577577+ psi_args.len = sizeof(struct vidtv_psi_desc_service_list_entry) -578578+ sizeof(struct vidtv_psi_desc_service_list_entry *);579579+ psi_args.from = serv_list_entry;580580+581581+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);582582+583583+ serv_list_entry = serv_list_entry->next;584584+ }585585+ break;586586+587587+ case SHORT_EVENT_DESCRIPTOR:588588+ psi_args.dest_offset = args->dest_offset + nbytes;589589+ psi_args.len = ISO_LANGUAGE_CODE_LEN;590590+ psi_args.from = ((struct vidtv_psi_desc_short_event *)591591+ args->desc)->iso_language_code;592592+593593+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);594594+595595+ psi_args.dest_offset = args->dest_offset + nbytes;596596+ psi_args.len = sizeof_field(struct vidtv_psi_desc_short_event, event_name_len);597597+ psi_args.from = &((struct 
vidtv_psi_desc_short_event *)598598+ args->desc)->event_name_len;599599+600600+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);601601+602602+ psi_args.dest_offset = args->dest_offset + nbytes;603603+ psi_args.len = ((struct vidtv_psi_desc_short_event *)args->desc)->event_name_len;604604+ psi_args.from = ((struct vidtv_psi_desc_short_event *)args->desc)->event_name;605605+606606+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);607607+608608+ psi_args.dest_offset = args->dest_offset + nbytes;609609+ psi_args.len = sizeof_field(struct vidtv_psi_desc_short_event, text_len);610610+ psi_args.from = &((struct vidtv_psi_desc_short_event *)args->desc)->text_len;611611+612612+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);613613+614614+ psi_args.dest_offset = args->dest_offset + nbytes;615615+ psi_args.len = ((struct vidtv_psi_desc_short_event *)args->desc)->text_len;616616+ psi_args.from = ((struct vidtv_psi_desc_short_event *)args->desc)->text;617617+618618+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);619619+685620 break;686621687622 case REGISTRATION_DESCRIPTOR:688623 default:689689- psi_args.dest_offset = args.dest_offset + nbytes;690690- psi_args.len = args.desc->length;691691- psi_args.from = &args.desc->data;624624+ psi_args.dest_offset = args->dest_offset + nbytes;625625+ psi_args.len = args->desc->length;626626+ psi_args.from = &args->desc->data;692627693693- nbytes += vidtv_psi_ts_psi_write_into(psi_args);628628+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);694629 break;695630 }696631···754577}755578756579static u32757757-vidtv_psi_table_header_write_into(struct header_write_args args)580580+vidtv_psi_table_header_write_into(struct header_write_args *args)758581{759759- /* the number of bytes written by this function */760760- u32 nbytes = 0;761761- struct psi_write_args psi_args = {};582582+ struct psi_write_args psi_args = {583583+ .dest_buf = args->dest_buf,584584+ .from = args->h,585585+ .len = sizeof(struct vidtv_psi_table_header),586586+ 
.dest_offset = args->dest_offset,587587+ .pid = args->pid,588588+ .new_psi_section = true,589589+ .continuity_counter = args->continuity_counter,590590+ .is_crc = false,591591+ .dest_buf_sz = args->dest_buf_sz,592592+ .crc = args->crc,593593+ };762594763763- psi_args.dest_buf = args.dest_buf;764764- psi_args.from = args.h;765765- psi_args.len = sizeof(struct vidtv_psi_table_header);766766- psi_args.dest_offset = args.dest_offset;767767- psi_args.pid = args.pid;768768- psi_args.new_psi_section = true;769769- psi_args.continuity_counter = args.continuity_counter;770770- psi_args.is_crc = false;771771- psi_args.dest_buf_sz = args.dest_buf_sz;772772- psi_args.crc = args.crc;773773-774774- nbytes += vidtv_psi_ts_psi_write_into(psi_args);775775-776776- return nbytes;595595+ return vidtv_psi_ts_psi_write_into(&psi_args);777596}778597779598void780599vidtv_psi_pat_table_update_sec_len(struct vidtv_psi_table_pat *pat)781600{782782- /* see ISO/IEC 13818-1 : 2000 p.43 */783601 u16 length = 0;784602 u32 i;603603+604604+ /* see ISO/IEC 13818-1 : 2000 p.43 */785605786606 /* from immediately after 'section_length' until 'last_section_number'*/787607 length += PAT_LEN_UNTIL_LAST_SECTION_NUMBER;788608789609 /* do not count the pointer */790790- for (i = 0; i < pat->programs; ++i)610610+ for (i = 0; i < pat->num_pat; ++i)791611 length += sizeof(struct vidtv_psi_table_pat_program) -792612 sizeof(struct vidtv_psi_table_pat_program *);793613···795621796622void vidtv_psi_pmt_table_update_sec_len(struct vidtv_psi_table_pmt *pmt)797623{798798- /* see ISO/IEC 13818-1 : 2000 p.46 */799799- u16 length = 0;800624 struct vidtv_psi_table_pmt_stream *s = pmt->stream;801625 u16 desc_loop_len;626626+ u16 length = 0;627627+628628+ /* see ISO/IEC 13818-1 : 2000 p.46 */802629803630 /* from immediately after 'section_length' until 'program_info_length'*/804631 length += PMT_LEN_UNTIL_PROGRAM_INFO_LENGTH;···830655831656void vidtv_psi_sdt_table_update_sec_len(struct vidtv_psi_table_sdt 
*sdt)832657{833833- /* see ETSI EN 300 468 V 1.10.1 p.24 */834834- u16 length = 0;835658 struct vidtv_psi_table_sdt_service *s = sdt->service;836659 u16 desc_loop_len;660660+ u16 length = 0;661661+662662+ /* see ETSI EN 300 468 V 1.10.1 p.24 */837663838664 /*839665 * from immediately after 'section_length' until···857681 }858682859683 length += CRC_SIZE_IN_BYTES;860860-861684 vidtv_psi_set_sec_len(&sdt->header, length);862685}863686···869694 const u16 RESERVED = 0x07;870695871696 program = kzalloc(sizeof(*program), GFP_KERNEL);697697+ if (!program)698698+ return NULL;872699873700 program->service_id = cpu_to_be16(service_id);874701···891714void892715vidtv_psi_pat_program_destroy(struct vidtv_psi_table_pat_program *p)893716{894894- struct vidtv_psi_table_pat_program *curr = p;895717 struct vidtv_psi_table_pat_program *tmp = NULL;718718+ struct vidtv_psi_table_pat_program *curr = p;896719897720 while (curr) {898721 tmp = curr;···901724 }902725}903726727727+/* This function transfers ownership of p to the table */904728void905729vidtv_psi_pat_program_assign(struct vidtv_psi_table_pat *pat,906730 struct vidtv_psi_table_pat_program *p)907731{908908- /* This function transfers ownership of p to the table */732732+ struct vidtv_psi_table_pat_program *program;733733+ u16 program_count;909734910910- u16 program_count = 0;911911- struct vidtv_psi_table_pat_program *program = p;735735+ do {736736+ program_count = 0;737737+ program = p;912738913913- if (p == pat->program)914914- return;739739+ if (p == pat->program)740740+ return;915741916916- while (program) {917917- ++program_count;918918- program = program->next;919919- }742742+ while (program) {743743+ ++program_count;744744+ program = program->next;745745+ }920746921921- pat->programs = program_count;922922- pat->program = p;747747+ pat->num_pat = program_count;748748+ pat->program = p;923749924924- /* Recompute section length */925925- vidtv_psi_pat_table_update_sec_len(pat);750750+ /* Recompute section length */751751+ 
vidtv_psi_pat_table_update_sec_len(pat);926752927927- if (vidtv_psi_get_sec_len(&pat->header) > MAX_SECTION_LEN)928928- vidtv_psi_pat_program_assign(pat, NULL);753753+ p = NULL;754754+ } while (vidtv_psi_get_sec_len(&pat->header) > MAX_SECTION_LEN);929755930756 vidtv_psi_update_version_num(&pat->header);931757}932758933759struct vidtv_psi_table_pat *vidtv_psi_pat_table_init(u16 transport_stream_id)934760{935935- struct vidtv_psi_table_pat *pat = kzalloc(sizeof(*pat), GFP_KERNEL);761761+ struct vidtv_psi_table_pat *pat;936762 const u16 SYNTAX = 0x1;937763 const u16 ZERO = 0x0;938764 const u16 ONES = 0x03;765765+766766+ pat = kzalloc(sizeof(*pat), GFP_KERNEL);767767+ if (!pat)768768+ return NULL;939769940770 pat->header.table_id = 0x0;941771···956772 pat->header.section_id = 0x0;957773 pat->header.last_section = 0x0;958774959959- pat->programs = 0;960960-961775 vidtv_psi_pat_table_update_sec_len(pat);962776963777 return pat;964778}965779966966-u32 vidtv_psi_pat_write_into(struct vidtv_psi_pat_write_args args)780780+u32 vidtv_psi_pat_write_into(struct vidtv_psi_pat_write_args *args)967781{968968- /* the number of bytes written by this function */782782+ struct vidtv_psi_table_pat_program *p = args->pat->program;783783+ struct header_write_args h_args = {784784+ .dest_buf = args->buf,785785+ .dest_offset = args->offset,786786+ .pid = VIDTV_PAT_PID,787787+ .h = &args->pat->header,788788+ .continuity_counter = args->continuity_counter,789789+ .dest_buf_sz = args->buf_sz,790790+ };791791+ struct psi_write_args psi_args = {792792+ .dest_buf = args->buf,793793+ .pid = VIDTV_PAT_PID,794794+ .new_psi_section = false,795795+ .continuity_counter = args->continuity_counter,796796+ .is_crc = false,797797+ .dest_buf_sz = args->buf_sz,798798+ };799799+ struct crc32_write_args c_args = {800800+ .dest_buf = args->buf,801801+ .pid = VIDTV_PAT_PID,802802+ .dest_buf_sz = args->buf_sz,803803+ };804804+ u32 crc = INITIAL_CRC;969805 u32 nbytes = 0;970970- const u16 pat_pid = 
VIDTV_PAT_PID;971971- u32 crc = 0xffffffff;972806973973- struct vidtv_psi_table_pat_program *p = args.pat->program;807807+ vidtv_psi_pat_table_update_sec_len(args->pat);974808975975- struct header_write_args h_args = {};976976- struct psi_write_args psi_args = {};977977- struct crc32_write_args c_args = {};978978-979979- vidtv_psi_pat_table_update_sec_len(args.pat);980980-981981- h_args.dest_buf = args.buf;982982- h_args.dest_offset = args.offset;983983- h_args.h = &args.pat->header;984984- h_args.pid = pat_pid;985985- h_args.continuity_counter = args.continuity_counter;986986- h_args.dest_buf_sz = args.buf_sz;987809 h_args.crc = &crc;988810989989- nbytes += vidtv_psi_table_header_write_into(h_args);811811+ nbytes += vidtv_psi_table_header_write_into(&h_args);990812991813 /* note that the field 'u16 programs' is not really part of the PAT */992814993993- psi_args.dest_buf = args.buf;994994- psi_args.pid = pat_pid;995995- psi_args.new_psi_section = false;996996- psi_args.continuity_counter = args.continuity_counter;997997- psi_args.is_crc = false;998998- psi_args.dest_buf_sz = args.buf_sz;999999- psi_args.crc = &crc;815815+ psi_args.crc = &crc;10008161001817 while (p) {1002818 /* copy the PAT programs */1003819 psi_args.from = p;1004820 /* skip the pointer */1005821 psi_args.len = sizeof(*p) -10061006- sizeof(struct vidtv_psi_table_pat_program *);10071007- psi_args.dest_offset = args.offset + nbytes;822822+ sizeof(struct vidtv_psi_table_pat_program *);823823+ psi_args.dest_offset = args->offset + nbytes;824824+ psi_args.continuity_counter = args->continuity_counter;100882510091009- nbytes += vidtv_psi_ts_psi_write_into(psi_args);826826+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);10108271011828 p = p->next;1012829 }101383010141014- c_args.dest_buf = args.buf;10151015- c_args.dest_offset = args.offset + nbytes;831831+ c_args.dest_offset = args->offset + nbytes;832832+ c_args.continuity_counter = args->continuity_counter;1016833 c_args.crc = 
cpu_to_be32(crc);10171017- c_args.pid = pat_pid;10181018- c_args.continuity_counter = args.continuity_counter;10191019- c_args.dest_buf_sz = args.buf_sz;10208341021835 /* Write the CRC32 at the end */10221022- nbytes += table_section_crc32_write_into(c_args);836836+ nbytes += table_section_crc32_write_into(&c_args);10238371024838 return nbytes;1025839}···1041859 u16 desc_loop_len;10428601043861 stream = kzalloc(sizeof(*stream), GFP_KERNEL);862862+ if (!stream)863863+ return NULL;10448641045865 stream->type = stream_type;1046866···10678831068884void vidtv_psi_pmt_stream_destroy(struct vidtv_psi_table_pmt_stream *s)1069885{10701070- struct vidtv_psi_table_pmt_stream *curr_stream = s;1071886 struct vidtv_psi_table_pmt_stream *tmp_stream = NULL;887887+ struct vidtv_psi_table_pmt_stream *curr_stream = s;10728881073889 while (curr_stream) {1074890 tmp_stream = curr_stream;···1081897void vidtv_psi_pmt_stream_assign(struct vidtv_psi_table_pmt *pmt,1082898 struct vidtv_psi_table_pmt_stream *s)1083899{10841084- /* This function transfers ownership of s to the table */10851085- if (s == pmt->stream)10861086- return;900900+ do {901901+ /* This function transfers ownership of s to the table */902902+ if (s == pmt->stream)903903+ return;108790410881088- pmt->stream = s;10891089- vidtv_psi_pmt_table_update_sec_len(pmt);905905+ pmt->stream = s;906906+ vidtv_psi_pmt_table_update_sec_len(pmt);109090710911091- if (vidtv_psi_get_sec_len(&pmt->header) > MAX_SECTION_LEN)10921092- vidtv_psi_pmt_stream_assign(pmt, NULL);908908+ s = NULL;909909+ } while (vidtv_psi_get_sec_len(&pmt->header) > MAX_SECTION_LEN);10939101094911 vidtv_psi_update_version_num(&pmt->header);1095912}···1118933struct vidtv_psi_table_pmt *vidtv_psi_pmt_table_init(u16 program_number,1119934 u16 pcr_pid)1120935{11211121- struct vidtv_psi_table_pmt *pmt = kzalloc(sizeof(*pmt), GFP_KERNEL);11221122- const u16 SYNTAX = 0x1;11231123- const u16 ZERO = 0x0;11241124- const u16 ONES = 0x03;936936+ struct vidtv_psi_table_pmt 
*pmt;1125937 const u16 RESERVED1 = 0x07;1126938 const u16 RESERVED2 = 0x0f;939939+ const u16 SYNTAX = 0x1;940940+ const u16 ONES = 0x03;941941+ const u16 ZERO = 0x0;1127942 u16 desc_loop_len;943943+944944+ pmt = kzalloc(sizeof(*pmt), GFP_KERNEL);945945+ if (!pmt)946946+ return NULL;11289471129948 if (!pcr_pid)1130949 pcr_pid = 0x1fff;···1159970 return pmt;1160971}116197211621162-u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args args)973973+u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args *args)1163974{11641164- /* the number of bytes written by this function */975975+ struct vidtv_psi_desc *table_descriptor = args->pmt->descriptor;976976+ struct vidtv_psi_table_pmt_stream *stream = args->pmt->stream;977977+ struct vidtv_psi_desc *stream_descriptor;978978+ struct header_write_args h_args = {979979+ .dest_buf = args->buf,980980+ .dest_offset = args->offset,981981+ .h = &args->pmt->header,982982+ .pid = args->pid,983983+ .continuity_counter = args->continuity_counter,984984+ .dest_buf_sz = args->buf_sz,985985+ };986986+ struct psi_write_args psi_args = {987987+ .dest_buf = args->buf,988988+ .from = &args->pmt->bitfield,989989+ .len = sizeof_field(struct vidtv_psi_table_pmt, bitfield) +990990+ sizeof_field(struct vidtv_psi_table_pmt, bitfield2),991991+ .pid = args->pid,992992+ .new_psi_section = false,993993+ .is_crc = false,994994+ .dest_buf_sz = args->buf_sz,995995+ };996996+ struct desc_write_args d_args = {997997+ .dest_buf = args->buf,998998+ .desc = table_descriptor,999999+ .pid = args->pid,10001000+ .dest_buf_sz = args->buf_sz,10011001+ };10021002+ struct crc32_write_args c_args = {10031003+ .dest_buf = args->buf,10041004+ .pid = args->pid,10051005+ .dest_buf_sz = args->buf_sz,10061006+ };10071007+ u32 crc = INITIAL_CRC;11651008 u32 nbytes = 0;11661166- u32 crc = 0xffffffff;1167100911681168- struct vidtv_psi_desc *table_descriptor = args.pmt->descriptor;11691169- struct vidtv_psi_table_pmt_stream *stream = args.pmt->stream;11701170- struct 
vidtv_psi_desc *stream_descriptor = (stream) ?11711171- args.pmt->stream->descriptor :11721172- NULL;10101010+ vidtv_psi_pmt_table_update_sec_len(args->pmt);1173101111741174- struct header_write_args h_args = {};11751175- struct psi_write_args psi_args = {};11761176- struct desc_write_args d_args = {};11771177- struct crc32_write_args c_args = {};11781178-11791179- vidtv_psi_pmt_table_update_sec_len(args.pmt);11801180-11811181- h_args.dest_buf = args.buf;11821182- h_args.dest_offset = args.offset;11831183- h_args.h = &args.pmt->header;11841184- h_args.pid = args.pid;11851185- h_args.continuity_counter = args.continuity_counter;11861186- h_args.dest_buf_sz = args.buf_sz;11871012 h_args.crc = &crc;1188101311891189- nbytes += vidtv_psi_table_header_write_into(h_args);10141014+ nbytes += vidtv_psi_table_header_write_into(&h_args);1190101511911016 /* write the two bitfields */11921192- psi_args.dest_buf = args.buf;11931193- psi_args.from = &args.pmt->bitfield;11941194- psi_args.len = sizeof_field(struct vidtv_psi_table_pmt, bitfield) +11951195- sizeof_field(struct vidtv_psi_table_pmt, bitfield2);11961196-11971197- psi_args.dest_offset = args.offset + nbytes;11981198- psi_args.pid = args.pid;11991199- psi_args.new_psi_section = false;12001200- psi_args.continuity_counter = args.continuity_counter;12011201- psi_args.is_crc = false;12021202- psi_args.dest_buf_sz = args.buf_sz;12031203- psi_args.crc = &crc;12041204-12051205- nbytes += vidtv_psi_ts_psi_write_into(psi_args);10171017+ psi_args.dest_offset = args->offset + nbytes;10181018+ psi_args.continuity_counter = args->continuity_counter;10191019+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);1206102012071021 while (table_descriptor) {12081022 /* write the descriptors, if any */12091209- d_args.dest_buf = args.buf;12101210- d_args.dest_offset = args.offset + nbytes;12111211- d_args.desc = table_descriptor;12121212- d_args.pid = args.pid;12131213- d_args.continuity_counter = args.continuity_counter;12141214- 
d_args.dest_buf_sz = args.buf_sz;10231023+ d_args.dest_offset = args->offset + nbytes;10241024+ d_args.continuity_counter = args->continuity_counter;12151025 d_args.crc = &crc;1216102612171217- nbytes += vidtv_psi_desc_write_into(d_args);10271027+ nbytes += vidtv_psi_desc_write_into(&d_args);1218102812191029 table_descriptor = table_descriptor->next;12201030 }1221103110321032+ psi_args.len += sizeof_field(struct vidtv_psi_table_pmt_stream, type);12221033 while (stream) {12231034 /* write the streams, if any */12241035 psi_args.from = stream;12251225- psi_args.len = sizeof_field(struct vidtv_psi_table_pmt_stream, type) +12261226- sizeof_field(struct vidtv_psi_table_pmt_stream, bitfield) +12271227- sizeof_field(struct vidtv_psi_table_pmt_stream, bitfield2);12281228- psi_args.dest_offset = args.offset + nbytes;10361036+ psi_args.dest_offset = args->offset + nbytes;10371037+ psi_args.continuity_counter = args->continuity_counter;1229103812301230- nbytes += vidtv_psi_ts_psi_write_into(psi_args);10391039+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);10401040+10411041+ stream_descriptor = stream->descriptor;1231104212321043 while (stream_descriptor) {12331044 /* write the stream descriptors, if any */12341234- d_args.dest_buf = args.buf;12351235- d_args.dest_offset = args.offset + nbytes;10451045+ d_args.dest_offset = args->offset + nbytes;12361046 d_args.desc = stream_descriptor;12371237- d_args.pid = args.pid;12381238- d_args.continuity_counter = args.continuity_counter;12391239- d_args.dest_buf_sz = args.buf_sz;10471047+ d_args.continuity_counter = args->continuity_counter;12401048 d_args.crc = &crc;1241104912421242- nbytes += vidtv_psi_desc_write_into(d_args);10501050+ nbytes += vidtv_psi_desc_write_into(&d_args);1243105112441052 stream_descriptor = stream_descriptor->next;12451053 }···12441058 stream = stream->next;12451059 }1246106012471247- c_args.dest_buf = args.buf;12481248- c_args.dest_offset = args.offset + nbytes;10611061+ c_args.dest_offset = args->offset 
+ nbytes;12491062 c_args.crc = cpu_to_be32(crc);12501250- c_args.pid = args.pid;12511251- c_args.continuity_counter = args.continuity_counter;12521252- c_args.dest_buf_sz = args.buf_sz;10631063+ c_args.continuity_counter = args->continuity_counter;1253106412541065 /* Write the CRC32 at the end */12551255- nbytes += table_section_crc32_write_into(c_args);10661066+ nbytes += table_section_crc32_write_into(&c_args);1256106712571068 return nbytes;12581069}···12611078 kfree(pmt);12621079}1263108012641264-struct vidtv_psi_table_sdt *vidtv_psi_sdt_table_init(u16 transport_stream_id)10811081+struct vidtv_psi_table_sdt *vidtv_psi_sdt_table_init(u16 network_id,10821082+ u16 transport_stream_id)12651083{12661266- struct vidtv_psi_table_sdt *sdt = kzalloc(sizeof(*sdt), GFP_KERNEL);12671267- const u16 SYNTAX = 0x1;12681268- const u16 ONE = 0x1;12691269- const u16 ONES = 0x03;10841084+ struct vidtv_psi_table_sdt *sdt;12701085 const u16 RESERVED = 0xff;10861086+ const u16 SYNTAX = 0x1;10871087+ const u16 ONES = 0x03;10881088+ const u16 ONE = 0x1;10891089+10901090+ sdt = kzalloc(sizeof(*sdt), GFP_KERNEL);10911091+ if (!sdt)10921092+ return NULL;1271109312721094 sdt->header.table_id = 0x42;12731273-12741095 sdt->header.bitfield = cpu_to_be16((SYNTAX << 15) | (ONE << 14) | (ONES << 12));1275109612761097 /*···12981111 * This can be changed to something more useful, when support for12991112 * NIT gets added13001113 */13011301- sdt->network_id = cpu_to_be16(0xff01);11141114+ sdt->network_id = cpu_to_be16(network_id);13021115 sdt->reserved = RESERVED;1303111613041117 vidtv_psi_sdt_table_update_sec_len(sdt);···13061119 return sdt;13071120}1308112113091309-u32 vidtv_psi_sdt_write_into(struct vidtv_psi_sdt_write_args args)11221122+u32 vidtv_psi_sdt_write_into(struct vidtv_psi_sdt_write_args *args)13101123{11241124+ struct header_write_args h_args = {11251125+ .dest_buf = args->buf,11261126+ .dest_offset = args->offset,11271127+ .h = &args->sdt->header,11281128+ .pid = 
VIDTV_SDT_PID,11291129+ .dest_buf_sz = args->buf_sz,11301130+ };11311131+ struct psi_write_args psi_args = {11321132+ .dest_buf = args->buf,11331133+ .len = sizeof_field(struct vidtv_psi_table_sdt, network_id) +11341134+ sizeof_field(struct vidtv_psi_table_sdt, reserved),11351135+ .pid = VIDTV_SDT_PID,11361136+ .new_psi_section = false,11371137+ .is_crc = false,11381138+ .dest_buf_sz = args->buf_sz,11391139+ };11401140+ struct desc_write_args d_args = {11411141+ .dest_buf = args->buf,11421142+ .pid = VIDTV_SDT_PID,11431143+ .dest_buf_sz = args->buf_sz,11441144+ };11451145+ struct crc32_write_args c_args = {11461146+ .dest_buf = args->buf,11471147+ .pid = VIDTV_SDT_PID,11481148+ .dest_buf_sz = args->buf_sz,11491149+ };11501150+ struct vidtv_psi_table_sdt_service *service = args->sdt->service;11511151+ struct vidtv_psi_desc *service_desc;13111152 u32 nbytes = 0;13121312- u16 sdt_pid = VIDTV_SDT_PID; /* see ETSI EN 300 468 v1.15.1 p. 11 */11531153+ u32 crc = INITIAL_CRC;1313115413141314- u32 crc = 0xffffffff;11551155+ /* see ETSI EN 300 468 v1.15.1 p. 
11 */1315115613161316- struct vidtv_psi_table_sdt_service *service = args.sdt->service;13171317- struct vidtv_psi_desc *service_desc = (args.sdt->service) ?13181318- args.sdt->service->descriptor :13191319- NULL;11571157+ vidtv_psi_sdt_table_update_sec_len(args->sdt);1320115813211321- struct header_write_args h_args = {};13221322- struct psi_write_args psi_args = {};13231323- struct desc_write_args d_args = {};13241324- struct crc32_write_args c_args = {};13251325-13261326- vidtv_psi_sdt_table_update_sec_len(args.sdt);13271327-13281328- h_args.dest_buf = args.buf;13291329- h_args.dest_offset = args.offset;13301330- h_args.h = &args.sdt->header;13311331- h_args.pid = sdt_pid;13321332- h_args.continuity_counter = args.continuity_counter;13331333- h_args.dest_buf_sz = args.buf_sz;11591159+ h_args.continuity_counter = args->continuity_counter;13341160 h_args.crc = &crc;1335116113361336- nbytes += vidtv_psi_table_header_write_into(h_args);11621162+ nbytes += vidtv_psi_table_header_write_into(&h_args);1337116313381338- psi_args.dest_buf = args.buf;13391339- psi_args.from = &args.sdt->network_id;13401340-13411341- psi_args.len = sizeof_field(struct vidtv_psi_table_sdt, network_id) +13421342- sizeof_field(struct vidtv_psi_table_sdt, reserved);13431343-13441344- psi_args.dest_offset = args.offset + nbytes;13451345- psi_args.pid = sdt_pid;13461346- psi_args.new_psi_section = false;13471347- psi_args.continuity_counter = args.continuity_counter;13481348- psi_args.is_crc = false;13491349- psi_args.dest_buf_sz = args.buf_sz;11641164+ psi_args.from = &args->sdt->network_id;11651165+ psi_args.dest_offset = args->offset + nbytes;11661166+ psi_args.continuity_counter = args->continuity_counter;13501167 psi_args.crc = &crc;1351116813521169 /* copy u16 network_id + u8 reserved)*/13531353- nbytes += vidtv_psi_ts_psi_write_into(psi_args);11701170+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);11711171+11721172+ /* skip both pointers at the end */11731173+ psi_args.len = 
sizeof(struct vidtv_psi_table_sdt_service) -11741174+ sizeof(struct vidtv_psi_desc *) -11751175+ sizeof(struct vidtv_psi_table_sdt_service *);1354117613551177 while (service) {13561178 /* copy the services, if any */13571179 psi_args.from = service;13581358- /* skip both pointers at the end */13591359- psi_args.len = sizeof(struct vidtv_psi_table_sdt_service) -13601360- sizeof(struct vidtv_psi_desc *) -13611361- sizeof(struct vidtv_psi_table_sdt_service *);13621362- psi_args.dest_offset = args.offset + nbytes;11801180+ psi_args.dest_offset = args->offset + nbytes;11811181+ psi_args.continuity_counter = args->continuity_counter;1363118213641364- nbytes += vidtv_psi_ts_psi_write_into(psi_args);11831183+ nbytes += vidtv_psi_ts_psi_write_into(&psi_args);11841184+11851185+ service_desc = service->descriptor;1365118613661187 while (service_desc) {13671188 /* copy the service descriptors, if any */13681368- d_args.dest_buf = args.buf;13691369- d_args.dest_offset = args.offset + nbytes;11891189+ d_args.dest_offset = args->offset + nbytes;13701190 d_args.desc = service_desc;13711371- d_args.pid = sdt_pid;13721372- d_args.continuity_counter = args.continuity_counter;13731373- d_args.dest_buf_sz = args.buf_sz;11911191+ d_args.continuity_counter = args->continuity_counter;13741192 d_args.crc = &crc;1375119313761376- nbytes += vidtv_psi_desc_write_into(d_args);11941194+ nbytes += vidtv_psi_desc_write_into(&d_args);1377119513781196 service_desc = service_desc->next;13791197 }···13861194 service = service->next;13871195 }1388119613891389- c_args.dest_buf = args.buf;13901390- c_args.dest_offset = args.offset + nbytes;11971197+ c_args.dest_offset = args->offset + nbytes;13911198 c_args.crc = cpu_to_be32(crc);13921392- c_args.pid = sdt_pid;13931393- c_args.continuity_counter = args.continuity_counter;13941394- c_args.dest_buf_sz = args.buf_sz;11991199+ c_args.continuity_counter = args->continuity_counter;1395120013961201 /* Write the CRC at the end */13971397- nbytes += 
table_section_crc32_write_into(c_args);12021202+ nbytes += table_section_crc32_write_into(&c_args);1398120313991204 return nbytes;14001205}···1404121514051216struct vidtv_psi_table_sdt_service14061217*vidtv_psi_sdt_service_init(struct vidtv_psi_table_sdt_service *head,14071407- u16 service_id)12181218+ u16 service_id,12191219+ bool eit_schedule,12201220+ bool eit_present_following)14081221{14091222 struct vidtv_psi_table_sdt_service *service;1410122314111224 service = kzalloc(sizeof(*service), GFP_KERNEL);12251225+ if (!service)12261226+ return NULL;1412122714131228 /*14141229 * ETSI 300 468: this is a 16bit field which serves as a label to···14211228 * corresponding program_map_section14221229 */14231230 service->service_id = cpu_to_be16(service_id);14241424- service->EIT_schedule = 0x0;14251425- service->EIT_present_following = 0x0;12311231+ service->EIT_schedule = eit_schedule;12321232+ service->EIT_present_following = eit_present_following;14261233 service->reserved = 0x3f;1427123414281235 service->bitfield = cpu_to_be16(RUNNING << 13);···14551262vidtv_psi_sdt_service_assign(struct vidtv_psi_table_sdt *sdt,14561263 struct vidtv_psi_table_sdt_service *service)14571264{14581458- if (service == sdt->service)14591459- return;12651265+ do {12661266+ if (service == sdt->service)12671267+ return;1460126814611461- sdt->service = service;12691269+ sdt->service = service;1462127014631463- /* recompute section length */14641464- vidtv_psi_sdt_table_update_sec_len(sdt);12711271+ /* recompute section length */12721272+ vidtv_psi_sdt_table_update_sec_len(sdt);1465127314661466- if (vidtv_psi_get_sec_len(&sdt->header) > MAX_SECTION_LEN)14671467- vidtv_psi_sdt_service_assign(sdt, NULL);12741274+ service = NULL;12751275+ } while (vidtv_psi_get_sec_len(&sdt->header) > MAX_SECTION_LEN);1468127614691277 vidtv_psi_update_version_num(&sdt->header);14701278}1471127912801280+/*12811281+ * PMTs contain information about programs. For each program,12821282+ * there is one PMT section. 
+ * This function will create a section
+ * for each program found in the PAT
+ */
 struct vidtv_psi_table_pmt**
-vidtv_psi_pmt_create_sec_for_each_pat_entry(struct vidtv_psi_table_pat *pat, u16 pcr_pid)
+vidtv_psi_pmt_create_sec_for_each_pat_entry(struct vidtv_psi_table_pat *pat,
+					    u16 pcr_pid)
 {
-	/*
-	 * PMTs contain information about programs. For each program,
-	 * there is one PMT section. This function will create a section
-	 * for each program found in the PAT
-	 */
-	struct vidtv_psi_table_pat_program *program = pat->program;
+	struct vidtv_psi_table_pat_program *program;
 	struct vidtv_psi_table_pmt **pmt_secs;
-	u32 i = 0;
+	u32 i = 0, num_pmt = 0;

-	/* a section for each program_id */
-	pmt_secs = kcalloc(pat->programs,
-			   sizeof(struct vidtv_psi_table_pmt *),
-			   GFP_KERNEL);
-
+	/*
+	 * The number of PMT entries is the number of PAT entries
+	 * that contain a service_id. That excludes special tables, like the NIT
+	 */
+	program = pat->program;
 	while (program) {
-		pmt_secs[i] = vidtv_psi_pmt_table_init(be16_to_cpu(program->service_id), pcr_pid);
-		++i;
+		if (program->service_id)
+			num_pmt++;
 		program = program->next;
 	}
+
+	pmt_secs = kcalloc(num_pmt,
+			   sizeof(struct vidtv_psi_table_pmt *),
+			   GFP_KERNEL);
+	if (!pmt_secs)
+		return NULL;
+
+	for (program = pat->program; program; program = program->next) {
+		if (!program->service_id)
+			continue;
+		pmt_secs[i] = vidtv_psi_pmt_table_init(be16_to_cpu(program->service_id),
+						       pcr_pid);
+
+		if (!pmt_secs[i]) {
+			while (i > 0) {
+				i--;
+				vidtv_psi_pmt_table_destroy(pmt_secs[i]);
+			}
+			return NULL;
+		}
+		i++;
+	}
+	pat->num_pmt = num_pmt;

 	return pmt_secs;
 }

+/* find the PMT section associated with 'program_num' */
 struct vidtv_psi_table_pmt
 *vidtv_psi_find_pmt_sec(struct vidtv_psi_table_pmt **pmt_sections,
 			u16 nsections,
 			u16 program_num)
 {
-	/* find the PMT section associated with 'program_num' */
 	struct vidtv_psi_table_pmt *sec = NULL;
 	u32 i;
···
 	}

 	return NULL; /* not found */
+}
+
+static void vidtv_psi_nit_table_update_sec_len(struct vidtv_psi_table_nit *nit)
+{
+	u16 length = 0;
+	struct vidtv_psi_table_transport *t = nit->transport;
+	u16 desc_loop_len;
+	u16 transport_loop_len = 0;
+
+	/*
+	 * from immediately after 'section_length' until
+	 * 'network_descriptor_length'
+	 */
+	length += NIT_LEN_UNTIL_NETWORK_DESCRIPTOR_LEN;
+
+	desc_loop_len = vidtv_psi_desc_comp_loop_len(nit->descriptor);
+	vidtv_psi_set_desc_loop_len(&nit->bitfield, desc_loop_len, 12);
+
+	length += desc_loop_len;
+
+	length += sizeof_field(struct vidtv_psi_table_nit, bitfield2);
+
+	while (t) {
+		/* skip both pointers at the end */
+		transport_loop_len += sizeof(struct vidtv_psi_table_transport) -
+				      sizeof(struct vidtv_psi_desc *) -
+				      sizeof(struct vidtv_psi_table_transport *);
+
+		length += transport_loop_len;
+
+		desc_loop_len = vidtv_psi_desc_comp_loop_len(t->descriptor);
+		vidtv_psi_set_desc_loop_len(&t->bitfield, desc_loop_len, 12);
+
+		length += desc_loop_len;
+
+		t = t->next;
+	}
+
+	/* actually sets the transport stream loop len, maybe rename this function later */
+	vidtv_psi_set_desc_loop_len(&nit->bitfield2, transport_loop_len, 12);
+	length += CRC_SIZE_IN_BYTES;
+
+	vidtv_psi_set_sec_len(&nit->header, length);
+}
+
+struct vidtv_psi_table_nit
+*vidtv_psi_nit_table_init(u16 network_id,
+			  u16 transport_stream_id,
+			  char *network_name,
+			  struct vidtv_psi_desc_service_list_entry *service_list)
+{
+	struct vidtv_psi_table_transport *transport;
+	struct vidtv_psi_table_nit *nit;
+	const u16 SYNTAX = 0x1;
+	const u16 ONES = 0x03;
+	const u16 ONE = 0x1;
+
+	nit = kzalloc(sizeof(*nit), GFP_KERNEL);
+	if (!nit)
+		return NULL;
+
+	transport = kzalloc(sizeof(*transport), GFP_KERNEL);
+	if (!transport)
+		goto free_nit;
+
+	nit->header.table_id = 0x40; /* ACTUAL_NETWORK */
+
+	nit->header.bitfield = cpu_to_be16((SYNTAX << 15) | (ONE << 14) | (ONES << 12));
+
+	nit->header.id = cpu_to_be16(network_id);
+	nit->header.current_next = ONE;
+
+	nit->header.version = 0x1f;
+
+	nit->header.one2 = ONES;
+	nit->header.section_id = 0;
+	nit->header.last_section = 0;
+
+	nit->bitfield = cpu_to_be16(0xf);
+	nit->bitfield2 = cpu_to_be16(0xf);
+
+	nit->descriptor = (struct vidtv_psi_desc *)
+			  vidtv_psi_network_name_desc_init(NULL, network_name);
+	if (!nit->descriptor)
+		goto free_transport;
+
+	transport->transport_id = cpu_to_be16(transport_stream_id);
+	transport->network_id = cpu_to_be16(network_id);
+	transport->bitfield = cpu_to_be16(0xf);
+	transport->descriptor = (struct vidtv_psi_desc *)
+				vidtv_psi_service_list_desc_init(NULL, service_list);
+	if (!transport->descriptor)
+		goto free_nit_desc;
+
+	nit->transport = transport;
+
+	vidtv_psi_nit_table_update_sec_len(nit);
+
+	return nit;
+
+free_nit_desc:
+	vidtv_psi_desc_destroy((struct vidtv_psi_desc *)nit->descriptor);
+
+free_transport:
+	kfree(transport);
+free_nit:
+	kfree(nit);
+	return NULL;
+}
+
+u32 vidtv_psi_nit_write_into(struct vidtv_psi_nit_write_args *args)
+{
+	struct header_write_args h_args = {
+		.dest_buf = args->buf,
+		.dest_offset = args->offset,
+		.h = &args->nit->header,
+		.pid = VIDTV_NIT_PID,
+		.dest_buf_sz = args->buf_sz,
+	};
+	struct psi_write_args psi_args = {
+		.dest_buf = args->buf,
+		.from = &args->nit->bitfield,
+		.len = sizeof_field(struct vidtv_psi_table_nit, bitfield),
+		.pid = VIDTV_NIT_PID,
+		.new_psi_section = false,
+		.is_crc = false,
+		.dest_buf_sz = args->buf_sz,
+	};
+	struct desc_write_args d_args = {
+		.dest_buf = args->buf,
+		.pid = VIDTV_NIT_PID,
+		.dest_buf_sz = args->buf_sz,
+	};
+	struct crc32_write_args c_args = {
+		.dest_buf = args->buf,
+		.pid = VIDTV_NIT_PID,
+		.dest_buf_sz = args->buf_sz,
+	};
+	struct vidtv_psi_desc *table_descriptor = args->nit->descriptor;
+	struct vidtv_psi_table_transport *transport = args->nit->transport;
+	struct vidtv_psi_desc *transport_descriptor;
+	u32 crc = INITIAL_CRC;
+	u32 nbytes = 0;
+
+	vidtv_psi_nit_table_update_sec_len(args->nit);
+
+	h_args.continuity_counter = args->continuity_counter;
+	h_args.crc = &crc;
+
+	nbytes += vidtv_psi_table_header_write_into(&h_args);
+
+	/* write the bitfield */
+
+	psi_args.dest_offset = args->offset + nbytes;
+	psi_args.continuity_counter = args->continuity_counter;
+	psi_args.crc = &crc;
+
+	nbytes += vidtv_psi_ts_psi_write_into(&psi_args);
+
+	while (table_descriptor) {
+		/* write the descriptors, if any */
+		d_args.dest_offset = args->offset + nbytes;
+		d_args.desc = table_descriptor;
+		d_args.continuity_counter = args->continuity_counter;
+		d_args.crc = &crc;
+
+		nbytes += vidtv_psi_desc_write_into(&d_args);
+
+		table_descriptor = table_descriptor->next;
+	}
+
+	/* write the second bitfield */
+	psi_args.from = &args->nit->bitfield2;
+	psi_args.len = sizeof_field(struct vidtv_psi_table_nit, bitfield2);
+	psi_args.dest_offset = args->offset + nbytes;
+
+	nbytes += vidtv_psi_ts_psi_write_into(&psi_args);
+
+	psi_args.len = sizeof_field(struct vidtv_psi_table_transport, transport_id) +
+		       sizeof_field(struct vidtv_psi_table_transport, network_id) +
+		       sizeof_field(struct vidtv_psi_table_transport, bitfield);
+	while (transport) {
+		/* write the transport sections, if any */
+		psi_args.from = transport;
+		psi_args.dest_offset = args->offset + nbytes;
+
+		nbytes += vidtv_psi_ts_psi_write_into(&psi_args);
+
+		transport_descriptor = transport->descriptor;
+
+		while (transport_descriptor) {
+			/* write the transport descriptors, if any */
+			d_args.dest_offset = args->offset + nbytes;
+			d_args.desc = transport_descriptor;
+			d_args.continuity_counter = args->continuity_counter;
+			d_args.crc = &crc;
+
+			nbytes += vidtv_psi_desc_write_into(&d_args);
+
+			transport_descriptor = transport_descriptor->next;
+		}
+
+		transport = transport->next;
+	}
+
+	c_args.dest_offset = args->offset + nbytes;
+	c_args.crc = cpu_to_be32(crc);
+	c_args.continuity_counter = args->continuity_counter;
+
+	/* Write the CRC32 at the end */
+	nbytes += table_section_crc32_write_into(&c_args);
+
+	return nbytes;
+}
+
+static void vidtv_psi_transport_destroy(struct vidtv_psi_table_transport *t)
+{
+	struct vidtv_psi_table_transport *tmp_t = NULL;
+	struct vidtv_psi_table_transport *curr_t = t;
+
+	while (curr_t) {
+		tmp_t = curr_t;
+		curr_t = curr_t->next;
+		vidtv_psi_desc_destroy(tmp_t->descriptor);
+		kfree(tmp_t);
+	}
+}
+
+void vidtv_psi_nit_table_destroy(struct vidtv_psi_table_nit *nit)
+{
+	vidtv_psi_desc_destroy(nit->descriptor);
+	vidtv_psi_transport_destroy(nit->transport);
+	kfree(nit);
+}
+
+void vidtv_psi_eit_table_update_sec_len(struct vidtv_psi_table_eit *eit)
+{
+	struct vidtv_psi_table_eit_event *e = eit->event;
+	u16 desc_loop_len;
+	u16 length = 0;
+
+	/*
+	 * from immediately after 'section_length' until
+	 * 'last_table_id'
+	 */
+	length += EIT_LEN_UNTIL_LAST_TABLE_ID;
+
+	while (e) {
+		/* skip both pointers at the end */
+		length += sizeof(struct vidtv_psi_table_eit_event) -
+			  sizeof(struct vidtv_psi_desc *) -
+			  sizeof(struct vidtv_psi_table_eit_event *);
+
+		desc_loop_len = vidtv_psi_desc_comp_loop_len(e->descriptor);
+		vidtv_psi_set_desc_loop_len(&e->bitfield, desc_loop_len, 12);
+
+		length += desc_loop_len;
+
+		e = e->next;
+	}
+
+	length += CRC_SIZE_IN_BYTES;
+
+	vidtv_psi_set_sec_len(&eit->header, length);
+}
+
+void vidtv_psi_eit_event_assign(struct vidtv_psi_table_eit *eit,
+				struct vidtv_psi_table_eit_event *e)
+{
+	do {
+		if (e == eit->event)
+			return;
+
+		eit->event = e;
+		vidtv_psi_eit_table_update_sec_len(eit);
+
+		e = NULL;
+	} while (vidtv_psi_get_sec_len(&eit->header) > EIT_MAX_SECTION_LEN);
+
+	vidtv_psi_update_version_num(&eit->header);
+}
+
+struct vidtv_psi_table_eit
+*vidtv_psi_eit_table_init(u16 network_id,
+			  u16 transport_stream_id,
+			  __be16 service_id)
+{
+	struct vidtv_psi_table_eit *eit;
+	const u16 SYNTAX = 0x1;
+	const u16 ONE = 0x1;
+	const u16 ONES = 0x03;
+
+	eit = kzalloc(sizeof(*eit), GFP_KERNEL);
+	if (!eit)
+		return NULL;
+
+	eit->header.table_id = 0x4e; /* actual_transport_stream: present/following */
+
+	eit->header.bitfield = cpu_to_be16((SYNTAX << 15) | (ONE << 14) | (ONES << 12));
+
+	eit->header.id = service_id;
+	eit->header.current_next = ONE;
+
+	eit->header.version = 0x1f;
+
+	eit->header.one2 = ONES;
+	eit->header.section_id = 0;
+	eit->header.last_section = 0;
+
+	eit->transport_id = cpu_to_be16(transport_stream_id);
+	eit->network_id = cpu_to_be16(network_id);
+
+	eit->last_segment = eit->header.last_section; /* not implemented */
+	eit->last_table_id = eit->header.table_id; /* not implemented */
+
+	vidtv_psi_eit_table_update_sec_len(eit);
+
+	return eit;
+}
+
+u32 vidtv_psi_eit_write_into(struct vidtv_psi_eit_write_args *args)
+{
+	struct header_write_args h_args = {
+		.dest_buf = args->buf,
+		.dest_offset = args->offset,
+		.h = &args->eit->header,
+		.pid = VIDTV_EIT_PID,
+		.dest_buf_sz = args->buf_sz,
+	};
+	struct psi_write_args psi_args = {
+		.dest_buf = args->buf,
+		.len = sizeof_field(struct vidtv_psi_table_eit, transport_id) +
+		       sizeof_field(struct vidtv_psi_table_eit, network_id) +
+		       sizeof_field(struct vidtv_psi_table_eit, last_segment) +
+		       sizeof_field(struct vidtv_psi_table_eit, last_table_id),
+		.pid = VIDTV_EIT_PID,
+		.new_psi_section = false,
+		.is_crc = false,
+		.dest_buf_sz = args->buf_sz,
+	};
+	struct desc_write_args d_args = {
+		.dest_buf = args->buf,
+		.pid = VIDTV_EIT_PID,
+		.dest_buf_sz = args->buf_sz,
+	};
+	struct crc32_write_args c_args = {
+		.dest_buf = args->buf,
+		.pid = VIDTV_EIT_PID,
+		.dest_buf_sz = args->buf_sz,
+	};
+	struct vidtv_psi_table_eit_event *event = args->eit->event;
+	struct vidtv_psi_desc *event_descriptor;
+	u32 crc = INITIAL_CRC;
+	u32 nbytes = 0;
+
+	vidtv_psi_eit_table_update_sec_len(args->eit);
+
+	h_args.continuity_counter = args->continuity_counter;
+	h_args.crc = &crc;
+
+	nbytes += vidtv_psi_table_header_write_into(&h_args);
+
+	psi_args.from = &args->eit->transport_id;
+	psi_args.dest_offset = args->offset + nbytes;
+	psi_args.continuity_counter = args->continuity_counter;
+	psi_args.crc = &crc;
+
+	nbytes += vidtv_psi_ts_psi_write_into(&psi_args);
+
+	/* skip both pointers at the end */
+	psi_args.len = sizeof(struct vidtv_psi_table_eit_event) -
+		       sizeof(struct vidtv_psi_desc *) -
+		       sizeof(struct vidtv_psi_table_eit_event *);
+	while (event) {
+		/* copy the events, if any */
+		psi_args.from = event;
+		psi_args.dest_offset = args->offset + nbytes;
+
+		nbytes += vidtv_psi_ts_psi_write_into(&psi_args);
+
+		event_descriptor = event->descriptor;
+
+		while (event_descriptor) {
+			/* copy the event descriptors, if any */
+			d_args.dest_offset = args->offset + nbytes;
+			d_args.desc = event_descriptor;
+			d_args.continuity_counter = args->continuity_counter;
+			d_args.crc = &crc;
+
+			nbytes += vidtv_psi_desc_write_into(&d_args);
+
+			event_descriptor = event_descriptor->next;
+		}
+
+		event = event->next;
+	}
+
+	c_args.dest_offset = args->offset + nbytes;
+	c_args.crc = cpu_to_be32(crc);
+	c_args.continuity_counter = args->continuity_counter;
+
+	/* Write the CRC at the end */
+	nbytes += table_section_crc32_write_into(&c_args);
+
+	return nbytes;
+}
+
+struct vidtv_psi_table_eit_event
+*vidtv_psi_eit_event_init(struct vidtv_psi_table_eit_event *head, u16 event_id)
+{
+	const u8 DURATION[] = {0x23, 0x59, 0x59}; /* BCD encoded */
+	struct vidtv_psi_table_eit_event *e;
+	struct timespec64 ts;
+	struct tm time;
+	int mjd, l;
+	__be16 mjd_be;
+
+	e = kzalloc(sizeof(*e), GFP_KERNEL);
+	if (!e)
+		return NULL;
+
+	e->event_id = cpu_to_be16(event_id);
+
+	ts = ktime_to_timespec64(ktime_get_real());
+	time64_to_tm(ts.tv_sec, 0, &time);
+
+	/* Convert date to Modified Julian Date - per EN 300 468 Annex C */
+	if (time.tm_mon < 2)
+		l = 1;
+	else
+		l = 0;
+
+	mjd = 14956 + time.tm_mday;
+	mjd += (time.tm_year - l) * 36525 / 100;
+	mjd += (time.tm_mon + 2 + l * 12) * 306001 / 10000;
+	mjd_be = cpu_to_be16(mjd);
+
+	/*
+	 * Store MJD and hour/min/sec to the event.
+	 *
+	 * Let's make the event start on a full hour
+	 */
+	memcpy(e->start_time, &mjd_be, sizeof(mjd_be));
+	e->start_time[2] = bin2bcd(time.tm_hour);
+	e->start_time[3] = 0;
+	e->start_time[4] = 0;
+
+	/*
+	 * TODO: for now, the event will last for a day. Should be
+	 * enough for testing purposes, but if one runs the driver
+	 * for more than that, the current event will become invalid.
+	 * So, we need better code here in order to change the start
+	 * time once the event expires.
+	 */
+	memcpy(e->duration, DURATION, sizeof(e->duration));
+
+	e->bitfield = cpu_to_be16(RUNNING << 13);
+
+	if (head) {
+		while (head->next)
+			head = head->next;
+
+		head->next = e;
+	}
+
+	return e;
+}
+
+void vidtv_psi_eit_event_destroy(struct vidtv_psi_table_eit_event *e)
+{
+	struct vidtv_psi_table_eit_event *tmp_e = NULL;
+	struct vidtv_psi_table_eit_event *curr_e = e;
+
+	while (curr_e) {
+		tmp_e = curr_e;
+		curr_e = curr_e->next;
+		vidtv_psi_desc_destroy(tmp_e->descriptor);
+		kfree(tmp_e);
+	}
+}
+
+void vidtv_psi_eit_table_destroy(struct vidtv_psi_table_eit *eit)
+{
+	vidtv_psi_eit_event_destroy(eit->event);
+	kfree(eit);
 }
+257-25
drivers/media/test-drivers/vidtv/vidtv_psi.h
···
  * technically be broken into one or more sections, we do not do this here,
  * hence 'table' and 'section' are interchangeable for vidtv.
  *
- * This code currently supports three tables: PAT, PMT and SDT. These are the
- * bare minimum to get userspace to recognize our MPEG transport stream. It can
- * be extended to support more PSI tables in the future.
- *
  * Copyright (C) 2020 Daniel W. S. Almeida
  */
···
 #define VIDTV_PSI_H

 #include <linux/types.h>
-#include <asm/byteorder.h>

 /*
  * all section lengths start immediately after the 'section_length' field
···
 #define PAT_LEN_UNTIL_LAST_SECTION_NUMBER 5
 #define PMT_LEN_UNTIL_PROGRAM_INFO_LENGTH 9
 #define SDT_LEN_UNTIL_RESERVED_FOR_FUTURE_USE 8
+#define NIT_LEN_UNTIL_NETWORK_DESCRIPTOR_LEN 7
+#define EIT_LEN_UNTIL_LAST_TABLE_ID 11
 #define MAX_SECTION_LEN 1021
+#define EIT_MAX_SECTION_LEN 4093 /* see ETSI 300 468 v.1.10.1 p. 26 */
 #define VIDTV_PAT_PID 0 /* mandated by the specs */
 #define VIDTV_SDT_PID 0x0011 /* mandated by the specs */
+#define VIDTV_NIT_PID 0x0010 /* mandated by the specs */
+#define VIDTV_EIT_PID 0x0012 /* mandated by the specs */

 enum vidtv_psi_descriptors {
 	REGISTRATION_DESCRIPTOR = 0x05, /* See ISO/IEC 13818-1 section 2.6.8 */
+	NETWORK_NAME_DESCRIPTOR = 0x40, /* See ETSI EN 300 468 section 6.2.27 */
+	SERVICE_LIST_DESCRIPTOR = 0x41, /* See ETSI EN 300 468 section 6.2.35 */
 	SERVICE_DESCRIPTOR = 0x48, /* See ETSI EN 300 468 section 6.2.33 */
+	SHORT_EVENT_DESCRIPTOR = 0x4d, /* See ETSI EN 300 468 section 6.2.37 */
 };

 enum vidtv_psi_stream_types {
 	STREAM_PRIVATE_DATA = 0x06, /* see ISO/IEC 13818-1 2000 p. 48 */
 };

-/**
+/*
  * struct vidtv_psi_desc - A generic PSI descriptor type.
  * The descriptor length is an 8-bit field specifying the total number of bytes of the data portion
  * of the descriptor following the byte defining the value of this field.
···
 	u8 data[];
 } __packed;

-/**
+/*
  * struct vidtv_psi_desc_service - Service descriptor.
  * See ETSI EN 300 468 section 6.2.33.
  */
···
 	char *service_name;
 } __packed;

-/**
+/*
  * struct vidtv_psi_desc_registration - A registration descriptor.
  * See ISO/IEC 13818-1 section 2.6.8
  */
···
 	u8 additional_identification_info[];
 } __packed;

-/**
+/*
+ * struct vidtv_psi_desc_network_name - A network name descriptor
+ * see ETSI EN 300 468 v1.15.1 section 6.2.27
+ */
+struct vidtv_psi_desc_network_name {
+	struct vidtv_psi_desc *next;
+	u8 type;
+	u8 length;
+	char *network_name;
+} __packed;
+
+struct vidtv_psi_desc_service_list_entry {
+	__be16 service_id;
+	u8 service_type;
+	struct vidtv_psi_desc_service_list_entry *next;
+} __packed;
+
+/*
+ * struct vidtv_psi_desc_service_list - A service list descriptor
+ * see ETSI EN 300 468 v1.15.1 section 6.2.35
+ */
+struct vidtv_psi_desc_service_list {
+	struct vidtv_psi_desc *next;
+	u8 type;
+	u8 length;
+	struct vidtv_psi_desc_service_list_entry *service_list;
+} __packed;
+
+/*
+ * struct vidtv_psi_desc_short_event - A short event descriptor
+ * see ETSI EN 300 468 v1.15.1 section 6.2.37
+ */
+struct vidtv_psi_desc_short_event {
+	struct vidtv_psi_desc *next;
+	u8 type;
+	u8 length;
+	char *iso_language_code;
+	u8 event_name_len;
+	char *event_name;
+	u8 text_len;
+	char *text;
+} __packed;
+
+struct vidtv_psi_desc_short_event
+*vidtv_psi_short_event_desc_init(struct vidtv_psi_desc *head,
+				 char *iso_language_code,
+				 char *event_name,
+				 char *text);
+
+/*
  * struct vidtv_psi_table_header - A header that is present for all PSI tables.
  */
 struct vidtv_psi_table_header {
···
 	u8 last_section; /* last_section_number */
 } __packed;

-/**
+/*
  * struct vidtv_psi_table_pat_program - A single program in the PAT
  * See ISO/IEC 13818-1 : 2000 p.43
  */
···
 	struct vidtv_psi_table_pat_program *next;
 } __packed;

-/**
+/*
  * struct vidtv_psi_table_pat - The Program Allocation Table (PAT)
  * See ISO/IEC 13818-1 : 2000 p.43
  */
 struct vidtv_psi_table_pat {
 	struct vidtv_psi_table_header header;
-	u16 programs; /* Included by libdvbv5, not part of the table and not actually serialized */
+	u16 num_pat;
+	u16 num_pmt;
 	struct vidtv_psi_table_pat_program *program;
 } __packed;

-/**
+/*
  * struct vidtv_psi_table_sdt_service - Represents a service in the SDT.
  * see ETSI EN 300 468 v1.15.1 section 5.2.3.
  */
···
 	struct vidtv_psi_table_sdt_service *next;
 } __packed;

-/**
+/*
  * struct vidtv_psi_table_sdt - Represents the Service Description Table
  * see ETSI EN 300 468 v1.15.1 section 5.2.3.
  */
···
 	struct vidtv_psi_table_sdt_service *service;
 } __packed;

-/**
+/*
  * enum service_running_status - Status of a SDT service.
  * see ETSI EN 300 468 v1.15.1 section 5.2.3 table 6.
  */
···
 	RUNNING = 0x4,
 };

-/**
+/*
  * enum service_type - The type of a SDT service.
  * see ETSI EN 300 468 v1.15.1 section 6.2.33, table 81.
  */
 enum service_type {
 	/* see ETSI EN 300 468 v1.15.1 p. 77 */
 	DIGITAL_TELEVISION_SERVICE = 0x1,
+	DIGITAL_RADIO_SOUND_SERVICE = 0x2,
 };

-/**
+/*
  * struct vidtv_psi_table_pmt_stream - A single stream in the PMT.
  * See ISO/IEC 13818-1 : 2000 p.46.
  */
···
 	struct vidtv_psi_table_pmt_stream *next;
 } __packed;

-/**
+/*
  * struct vidtv_psi_table_pmt - The Program Map Table (PMT).
  * See ISO/IEC 13818-1 : 2000 p.46.
  */
···
 			u8 *additional_ident_info,
 			u32 additional_info_len);

+struct vidtv_psi_desc_network_name
+*vidtv_psi_network_name_desc_init(struct vidtv_psi_desc *head, char *network_name);
+
+struct vidtv_psi_desc_service_list
+*vidtv_psi_service_list_desc_init(struct vidtv_psi_desc *head,
+				  struct vidtv_psi_desc_service_list_entry *entry);
+
 struct vidtv_psi_table_pat_program
 *vidtv_psi_pat_program_init(struct vidtv_psi_table_pat_program *head,
 			    u16 service_id,
···
 struct vidtv_psi_table_pmt *vidtv_psi_pmt_table_init(u16 program_number,
 						     u16 pcr_pid);

-struct vidtv_psi_table_sdt *vidtv_psi_sdt_table_init(u16 transport_stream_id);
+struct vidtv_psi_table_sdt *vidtv_psi_sdt_table_init(u16 network_id,
+						     u16 transport_stream_id);

 struct vidtv_psi_table_sdt_service*
 vidtv_psi_sdt_service_init(struct vidtv_psi_table_sdt_service *head,
-			   u16 service_id);
+			   u16 service_id,
+			   bool eit_schedule,
+			   bool eit_present_following);

 void
 vidtv_psi_desc_destroy(struct vidtv_psi_desc *desc);
···
  * vidtv_psi_create_sec_for_each_pat_entry - Create a PMT section for each
  * program found in the PAT
  * @pat: The PAT to look for programs.
- * @s: The stream loop (one or more streams)
  * @pcr_pid: packet ID for the PCR to be used for the program described in this
  * PMT section
  */
···
  * equal to the size of the PAT, since more space is needed for TS headers during TS
  * encapsulation.
  */
-u32 vidtv_psi_pat_write_into(struct vidtv_psi_pat_write_args args);
+u32 vidtv_psi_pat_write_into(struct vidtv_psi_pat_write_args *args);

 /**
  * struct vidtv_psi_sdt_write_args - Arguments for writing a SDT table
···
  * equal to the size of the SDT, since more space is needed for TS headers during TS
  * encapsulation.
  */
-u32 vidtv_psi_sdt_write_into(struct vidtv_psi_sdt_write_args args);
+u32 vidtv_psi_sdt_write_into(struct vidtv_psi_sdt_write_args *args);

 /**
  * struct vidtv_psi_pmt_write_args - Arguments for writing a PMT section
  * @buf: The destination buffer.
  * @offset: The offset into the destination buffer.
  * @pmt: A pointer to the PMT.
+ * @pid: Program ID
  * @buf_sz: The size of the destination buffer.
  * @continuity_counter: A pointer to the CC. Incremented on every new packet.
- *
+ * @pcr_pid: The TS PID used for the PSI packets. All channels will share the
+ * same PCR.
  */
 struct vidtv_psi_pmt_write_args {
 	char *buf;
···
  * equal to the size of the PMT section, since more space is needed for TS headers
  * during TS encapsulation.
  */
-u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args args);
+u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args *args);

 /**
  * vidtv_psi_find_pmt_sec - Finds the PMT section for 'program_num'
···

 u16 vidtv_psi_get_pat_program_pid(struct vidtv_psi_table_pat_program *p);
 u16 vidtv_psi_pmt_stream_get_elem_pid(struct vidtv_psi_table_pmt_stream *s);
+
+/**
+ * struct vidtv_psi_table_transport - An entry in the TS loop for the NIT and/or other tables.
+ * See ETSI 300 468 section 5.2.1
+ * @transport_id: The TS ID being described
+ * @network_id: The network_id that contains the TS ID
+ * @bitfield: Contains the descriptor loop length
+ * @descriptor: A descriptor loop
+ * @next: Pointer to the next entry
+ *
+ */
+struct vidtv_psi_table_transport {
+	__be16 transport_id;
+	__be16 network_id;
+	__be16 bitfield; /* desc_len: 12, reserved: 4 */
+	struct vidtv_psi_desc *descriptor;
+	struct vidtv_psi_table_transport *next;
+} __packed;
+
+/**
+ * struct vidtv_psi_table_nit - A Network Information Table (NIT). See ETSI 300
+ * 468 section 5.2.1
+ * @header: A PSI table header
+ * @bitfield: Contains the network descriptor length
+ * @descriptor: A descriptor loop describing the network
+ * @bitfield2: Contains the transport stream loop length
+ * @transport: The transport stream loop
+ *
+ */
+struct vidtv_psi_table_nit {
+	struct vidtv_psi_table_header header;
+	__be16 bitfield; /* network_desc_len: 12, reserved: 4 */
+	struct vidtv_psi_desc *descriptor;
+	__be16 bitfield2; /* ts_loop_len: 12, reserved: 4 */
+	struct vidtv_psi_table_transport *transport;
+} __packed;
+
+struct vidtv_psi_table_nit
+*vidtv_psi_nit_table_init(u16 network_id,
+			  u16 transport_stream_id,
+			  char *network_name,
+			  struct vidtv_psi_desc_service_list_entry *service_list);
+
+/**
+ * struct vidtv_psi_nit_write_args - Arguments for writing a NIT section
+ * @buf: The destination buffer.
+ * @offset: The offset into the destination buffer.
+ * @nit: A pointer to the NIT
+ * @buf_sz: The size of the destination buffer.
+ * @continuity_counter: A pointer to the CC. Incremented on every new packet.
Incremented on every new packet.626626+ *627627+ */628628+struct vidtv_psi_nit_write_args {629629+ char *buf;630630+ u32 offset;631631+ struct vidtv_psi_table_nit *nit;632632+ u32 buf_sz;633633+ u8 *continuity_counter;634634+};635635+636636+/**637637+ * vidtv_psi_nit_write_into - Write NIT as MPEG-TS packets into a buffer.638638+ * @args: an instance of struct vidtv_psi_nit_write_args639639+ *640640+ * This function writes the MPEG TS packets for a NIT table into a buffer.641641+ * Calling code will usually generate the NIT via a call to its init function642642+ * and thus is responsible for freeing it.643643+ *644644+ * Return: The number of bytes written into the buffer. This is NOT645645+ * equal to the size of the NIT, since more space is needed for TS headers during TS646646+ * encapsulation.647647+ */648648+u32 vidtv_psi_nit_write_into(struct vidtv_psi_nit_write_args *args);649649+650650+void vidtv_psi_nit_table_destroy(struct vidtv_psi_table_nit *nit);651651+652652+/*653653+ * struct vidtv_psi_desc_short_event - A short event descriptor654654+ * see ETSI EN 300 468 v1.15.1 section 6.2.37655655+ */656656+struct vidtv_psi_table_eit_event {657657+ __be16 event_id;658658+ u8 start_time[5];659659+ u8 duration[3];660660+ __be16 bitfield; /* desc_length: 12, free_CA_mode: 1, running_status: 1 */661661+ struct vidtv_psi_desc *descriptor;662662+ struct vidtv_psi_table_eit_event *next;663663+} __packed;664664+665665+/*666666+ * struct vidtv_psi_table_eit - A Event Information Table (EIT)667667+ * See ETSI 300 468 section 5.2.4668668+ */669669+struct vidtv_psi_table_eit {670670+ struct vidtv_psi_table_header header;671671+ __be16 transport_id;672672+ __be16 network_id;673673+ u8 last_segment;674674+ u8 last_table_id;675675+ struct vidtv_psi_table_eit_event *event;676676+} __packed;677677+678678+struct vidtv_psi_table_eit679679+*vidtv_psi_eit_table_init(u16 network_id,680680+ u16 transport_stream_id,681681+ u16 service_id);682682+683683+/**684684+ * struct 
vidtv_psi_eit_write_args - Arguments for writing an EIT section685685+ * @buf: The destination buffer.686686+ * @offset: The offset into the destination buffer.687687+ * @eit: A pointer to the EIT688688+ * @buf_sz: The size of the destination buffer.689689+ * @continuity_counter: A pointer to the CC. Incremented on every new packet.690690+ *691691+ */692692+struct vidtv_psi_eit_write_args {693693+ char *buf;694694+ u32 offset;695695+ struct vidtv_psi_table_eit *eit;696696+ u32 buf_sz;697697+ u8 *continuity_counter;698698+};699699+700700+/**701701+ * vidtv_psi_eit_write_into - Write EIT as MPEG-TS packets into a buffer.702702+ * @args: an instance of struct vidtv_psi_eit_write_args703703+ *704704+ * This function writes the MPEG TS packets for an EIT table into a buffer.705705+ * Calling code will usually generate the EIT via a call to its init function706706+ * and thus is responsible for freeing it.707707+ *708708+ * Return: The number of bytes written into the buffer. This is NOT709709+ * equal to the size of the EIT, since more space is needed for TS headers during TS710710+ * encapsulation.711711+ */712712+u32 vidtv_psi_eit_write_into(struct vidtv_psi_eit_write_args *args);713713+714714+void vidtv_psi_eit_table_destroy(struct vidtv_psi_table_eit *eit);715715+716716+/**717717+ * vidtv_psi_eit_table_update_sec_len - Recompute and update the EIT section length.718718+ * @eit: The EIT whose length is to be updated.719719+ *720720+ * This will traverse the table and accumulate the length of its components,721721+ * which is then used to replace the 'section_length' field.722722+ *723723+ * If section_length > EIT_MAX_SECTION_LEN, the operation fails.724724+ */725725+void vidtv_psi_eit_table_update_sec_len(struct vidtv_psi_table_eit *eit);726726+727727+/**728728+ * vidtv_psi_eit_event_assign - Assigns the event loop to the EIT.729729+ * @eit: The EIT to assign to.730730+ * @e: The event loop731731+ *732732+ * This will free the previous event loop in the table.733733+ 
* This will assign ownership of the event loop to the table, i.e. the table734734+ * will free this event loop when a call to its destroy function is made.735735+ */736736+void vidtv_psi_eit_event_assign(struct vidtv_psi_table_eit *eit,737737+ struct vidtv_psi_table_eit_event *e);738738+739739+struct vidtv_psi_table_eit_event740740+*vidtv_psi_eit_event_init(struct vidtv_psi_table_eit_event *head, u16 event_id);741741+742742+void vidtv_psi_eit_event_destroy(struct vidtv_psi_table_eit_event *e);641743642744#endif // VIDTV_PSI_H
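The section-length bookkeeping that vidtv_psi_eit_table_update_sec_len() performs boils down to walking the event loop, summing each entry's fixed part plus its descriptor-loop length, and refusing to exceed the DVB maximum. A minimal userspace sketch of that traversal; the 12-byte fixed event size matches the packed struct above, but the helper names, the simplified struct, and the standalone form are mine, not vidtv's:

```c
#include <stddef.h>
#include <assert.h>

/* Toy model of the EIT section-length update: each event contributes a
 * fixed part (event_id 2 + start_time 5 + duration 3 + bitfield 2 = 12
 * bytes, per the struct layout above) plus its descriptor-loop length.
 * EIT sections may use the extended 4096-byte section size, so the
 * section_length field is capped at 4093.
 */
#define EIT_EVENT_FIXED_LEN 12
#define EIT_MAX_SECTION_LEN 4093

struct eit_event {
	unsigned int desc_loop_len;	/* length of this event's descriptor loop */
	struct eit_event *next;
};

/* Returns the accumulated section length, or -1 if it would exceed the
 * maximum (the kernel helper instead refuses to update the field).
 */
static int eit_section_len(const struct eit_event *head, unsigned int fixed_hdr)
{
	unsigned int len = fixed_hdr;

	for (; head; head = head->next)
		len += EIT_EVENT_FIXED_LEN + head->desc_loop_len;

	return len > EIT_MAX_SECTION_LEN ? -1 : (int)len;
}
```

The same walk-and-accumulate shape applies to the PAT/PMT/SDT/NIT length updates; only the per-entry fixed sizes differ.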
···1919#define VIDTV_S302M_H20202121#include <linux/types.h>2222-#include <asm/byteorder.h>23222423#include "vidtv_encoder.h"2524···3334 * @enc: A pointer to the containing encoder structure.3435 * @frame_index: The current frame in a block3536 * @au_count: The total number of access units encoded up to now3737+ * @last_duration: Duration of the tone currently being played3838+ * @note_offset: Position at the music tone array3939+ * @last_tone: Tone currently being played3640 */3741struct vidtv_s302m_ctx {3842 struct vidtv_encoder *enc;3943 u32 frame_index;4044 u32 au_count;4545+ int last_duration;4646+ unsigned int note_offset;4747+ enum musical_notes last_tone;4148};42494343-/**5050+/*4451 * struct vidtv_smpte_s302m_es - s302m MPEG Elementary Stream header.4552 *4653 * See SMPTE 302M 2007 table 1.
···1111#define VIDTV_TS_H12121313#include <linux/types.h>1414-#include <asm/byteorder.h>15141615#define TS_SYNC_BYTE 0x471716#define TS_PACKET_LEN 188···5354 * @dest_offset: The byte offset into the buffer.5455 * @pid: The TS PID for the PCR packets.5556 * @buf_sz: The size of the buffer in bytes.5656- * @countinuity_counter: The TS continuity_counter.5757+ * @continuity_counter: The TS continuity_counter.5758 * @pcr: A sample from the system clock.5859 */5960struct pcr_write_args {···7071 * @dest_buf: The buffer to write into.7172 * @dest_offset: The byte offset into the buffer.7273 * @buf_sz: The size of the buffer in bytes.7373- * @countinuity_counter: The TS continuity_counter.7474+ * @continuity_counter: The TS continuity_counter.7475 */7576struct null_packet_write_args {7677 void *dest_buf;
···3030#define SDHCI_ARASAN_VENDOR_REGISTER 0x7831313232#define SDHCI_ARASAN_ITAPDLY_REGISTER 0xF0F83333+#define SDHCI_ARASAN_ITAPDLY_SEL_MASK 0xFF3434+3335#define SDHCI_ARASAN_OTAPDLY_REGISTER 0xF0FC3636+#define SDHCI_ARASAN_OTAPDLY_SEL_MASK 0x3F34373538#define SDHCI_ARASAN_CQE_BASE_ADDR 0x2003639#define VENDOR_ENHANCED_STROBE BIT(0)···603600 u8 tap_delay, tap_max = 0;604601 int ret;605602606606- /*607607- * This is applicable for SDHCI_SPEC_300 and above608608- * ZynqMP does not set phase for <=25MHz clock.609609- * If degrees is zero, no need to do anything.610610- */611611- if (host->version < SDHCI_SPEC_300 ||612612- host->timing == MMC_TIMING_LEGACY ||613613- host->timing == MMC_TIMING_UHS_SDR12 || !degrees)603603+ /* This is applicable for SDHCI_SPEC_300 and above */604604+ if (host->version < SDHCI_SPEC_300)614605 return 0;615606616607 switch (host->timing) {···634637 ret = zynqmp_pm_set_sd_tapdelay(node_id, PM_TAPDELAY_OUTPUT, tap_delay);635638 if (ret)636639 pr_err("Error setting Output Tap Delay\n");640640+641641+ /* Release DLL Reset */642642+ zynqmp_pm_sd_dll_reset(node_id, PM_DLL_RESET_RELEASE);637643638644 return ret;639645}···668668 u8 tap_delay, tap_max = 0;669669 int ret;670670671671- /*672672- * This is applicable for SDHCI_SPEC_300 and above673673- * ZynqMP does not set phase for <=25MHz clock.674674- * If degrees is zero, no need to do anything.675675- */676676- if (host->version < SDHCI_SPEC_300 ||677677- host->timing == MMC_TIMING_LEGACY ||678678- host->timing == MMC_TIMING_UHS_SDR12 || !degrees)671671+ /* This is applicable for SDHCI_SPEC_300 and above */672672+ if (host->version < SDHCI_SPEC_300)679673 return 0;674674+675675+ /* Assert DLL Reset */676676+ zynqmp_pm_sd_dll_reset(node_id, PM_DLL_RESET_ASSERT);680677681678 switch (host->timing) {682679 case MMC_TIMING_MMC_HS:···730733 struct sdhci_host *host = sdhci_arasan->host;731734 u8 tap_delay, tap_max = 0;732735733733- /*734734- * This is applicable for SDHCI_SPEC_300 and above735735- * 
Versal does not set phase for <=25MHz clock.736736- * If degrees is zero, no need to do anything.737737- */738738- if (host->version < SDHCI_SPEC_300 ||739739- host->timing == MMC_TIMING_LEGACY ||740740- host->timing == MMC_TIMING_UHS_SDR12 || !degrees)736736+ /* This is applicable for SDHCI_SPEC_300 and above */737737+ if (host->version < SDHCI_SPEC_300)741738 return 0;742739743740 switch (host->timing) {···764773 regval = sdhci_readl(host, SDHCI_ARASAN_OTAPDLY_REGISTER);765774 regval |= SDHCI_OTAPDLY_ENABLE;766775 sdhci_writel(host, regval, SDHCI_ARASAN_OTAPDLY_REGISTER);776776+ regval &= ~SDHCI_ARASAN_OTAPDLY_SEL_MASK;767777 regval |= tap_delay;768778 sdhci_writel(host, regval, SDHCI_ARASAN_OTAPDLY_REGISTER);769779 }···796804 struct sdhci_host *host = sdhci_arasan->host;797805 u8 tap_delay, tap_max = 0;798806799799- /*800800- * This is applicable for SDHCI_SPEC_300 and above801801- * Versal does not set phase for <=25MHz clock.802802- * If degrees is zero, no need to do anything.803803- */804804- if (host->version < SDHCI_SPEC_300 ||805805- host->timing == MMC_TIMING_LEGACY ||806806- host->timing == MMC_TIMING_UHS_SDR12 || !degrees)807807+ /* This is applicable for SDHCI_SPEC_300 and above */808808+ if (host->version < SDHCI_SPEC_300)807809 return 0;808810809811 switch (host->timing) {···832846 sdhci_writel(host, regval, SDHCI_ARASAN_ITAPDLY_REGISTER);833847 regval |= SDHCI_ITAPDLY_ENABLE;834848 sdhci_writel(host, regval, SDHCI_ARASAN_ITAPDLY_REGISTER);849849+ regval &= ~SDHCI_ARASAN_ITAPDLY_SEL_MASK;835850 regval |= tap_delay;836851 sdhci_writel(host, regval, SDHCI_ARASAN_ITAPDLY_REGISTER);837852 regval &= ~SDHCI_ITAPDLY_CHGWIN;
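The two `regval &= ~..._SEL_MASK;` additions above are the substance of this sdhci-arasan fix: a read-modify-write that ORs a new tap-delay selection into the register without first clearing the field leaves stale bits from the previous setting behind. A hedged standalone sketch of the corrected update; the mask value and function name are illustrative, not the driver's:

```c
#include <stdint.h>
#include <assert.h>

#define TAPDLY_SEL_MASK 0x3Fu	/* example 6-bit selection field */

/* Clear-then-set update of a multi-bit register field. Without the
 * clear step, e.g. an old selection of 0x3F followed by a new one of
 * 0x05 would leave the field reading 0x3F (0x3F | 0x05).
 */
static uint32_t set_tap_delay(uint32_t regval, uint32_t tap_delay)
{
	regval &= ~TAPDLY_SEL_MASK;		/* drop the old selection */
	regval |= tap_delay & TAPDLY_SEL_MASK;	/* install the new one */
	return regval;
}
```

Bits outside the field (here 0x100) are preserved, which is why the driver re-reads the register rather than writing the delay value directly.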
···544544 /* need to wait for hw response, can't free tx_info yet. */545545 if (tx_info->open_state == CH_KTLS_OPEN_PENDING)546546 tx_info->pending_close = true;547547- /* free the lock after the cleanup */547547+ else548548+ spin_unlock_bh(&tx_info->lock);549549+ /* if in pending close, free the lock after the cleanup */548550 goto put_module;549551 }550552 spin_unlock_bh(&tx_info->lock);
+2
drivers/net/ethernet/freescale/dpaa2/Kconfig
···44 depends on FSL_MC_BUS && FSL_MC_DPIO55 select PHYLINK66 select PCS_LYNX77+ select FSL_XGMAC_MDIO88+ select NET_DEVLINK79 help810 This is the DPAA2 Ethernet driver supporting Freescale SoCs911 with DPAA2 (DataPath Acceleration Architecture v2).
···140140 __I40E_CLIENT_RESET,141141 __I40E_VIRTCHNL_OP_PENDING,142142 __I40E_RECOVERY_MODE,143143+ __I40E_VF_RESETS_DISABLED, /* disable resets during i40e_remove */143144 /* This must be last as it determines the size of the BITMAP */144145 __I40E_STATE_SIZE__,145146};
+15-7
drivers/net/ethernet/intel/i40e/i40e_main.c
···40104010 }4011401140124012 if (icr0 & I40E_PFINT_ICR0_VFLR_MASK) {40134013- ena_mask &= ~I40E_PFINT_ICR0_ENA_VFLR_MASK;40144014- set_bit(__I40E_VFLR_EVENT_PENDING, pf->state);40134013+ /* disable any further VFLR event notifications */40144014+ if (test_bit(__I40E_VF_RESETS_DISABLED, pf->state)) {40154015+ u32 reg = rd32(hw, I40E_PFINT_ICR0_ENA);40164016+40174017+ reg &= ~I40E_PFINT_ICR0_VFLR_MASK;40184018+ wr32(hw, I40E_PFINT_ICR0_ENA, reg);40194019+ } else {40204020+ ena_mask &= ~I40E_PFINT_ICR0_ENA_VFLR_MASK;40214021+ set_bit(__I40E_VFLR_EVENT_PENDING, pf->state);40224022+ }40154023 }4016402440174025 if (icr0 & I40E_PFINT_ICR0_GRST_MASK) {···1531915311 while (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state))1532015312 usleep_range(1000, 2000);15321153131531415314+ if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {1531515315+ set_bit(__I40E_VF_RESETS_DISABLED, pf->state);1531615316+ i40e_free_vfs(pf);1531715317+ pf->flags &= ~I40E_FLAG_SRIOV_ENABLED;1531815318+ }1532215319 /* no more scheduling of any task */1532315320 set_bit(__I40E_SUSPENDED, pf->state);1532415321 set_bit(__I40E_DOWN, pf->state);···1534915336 * has been stopped.1535015337 */1535115338 i40e_notify_client_of_netdev_close(pf->vsi[pf->lan_vsi], false);1535215352-1535315353- if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {1535415354- i40e_free_vfs(pf);1535515355- pf->flags &= ~I40E_FLAG_SRIOV_ENABLED;1535615356- }15357153391535815340 i40e_fdir_teardown(pf);1535915341
···14031403 * @vf: pointer to the VF structure14041404 * @flr: VFLR was issued or not14051405 *14061406- * Returns true if the VF is reset, false otherwise.14061406+ * Returns true if the VF is in reset, resets successfully, or resets14071407+ * are disabled and false otherwise.14071408 **/14081409bool i40e_reset_vf(struct i40e_vf *vf, bool flr)14091410{···14141413 u32 reg;14151414 int i;1416141514161416+ if (test_bit(__I40E_VF_RESETS_DISABLED, pf->state))14171417+ return true;14181418+14171419 /* If the VFs have been disabled, this means something else is14181420 * resetting the VF, so we shouldn't continue.14191421 */14201422 if (test_and_set_bit(__I40E_VF_DISABLE, pf->state))14211421- return false;14231423+ return true;1422142414231425 i40e_trigger_vf_reset(vf, flr);14241426···1585158115861582 i40e_notify_client_of_vf_enable(pf, 0);1587158315841584+ /* Disable IOV before freeing resources. This lets any VF drivers15851585+ * running in the host get themselves cleaned up before we yank15861586+ * the carpet out from underneath their feet.15871587+ */15881588+ if (!pci_vfs_assigned(pf->pdev))15891589+ pci_disable_sriov(pf->pdev);15901590+ else15911591+ dev_warn(&pf->pdev->dev, "VFs are assigned - not disabling SR-IOV\n");15921592+15881593 /* Amortize wait time by stopping all VFs at the same time */15891594 for (i = 0; i < pf->num_alloc_vfs; i++) {15901595 if (test_bit(I40E_VF_STATE_INIT, &pf->vf[i].vf_states))···1608159516091596 i40e_vsi_wait_queues_disabled(pf->vsi[pf->vf[i].lan_vsi_idx]);16101597 }16111611-16121612- /* Disable IOV before freeing resources. This lets any VF drivers16131613- * running in the host get themselves cleaned up before we yank16141614- * the carpet out from underneath their feet.16151615- */16161616- if (!pci_vfs_assigned(pf->pdev))16171617- pci_disable_sriov(pf->pdev);16181618- else16191619- dev_warn(&pf->pdev->dev, "VFs are assigned - not disabling SR-IOV\n");1620159816211599 /* free up VF resources */16221600 tmp = pf->num_alloc_vfs;
···88 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.99 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH1010 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH1111- * Copyright(c) 2018 - 2019 Intel Corporation1111+ * Copyright(c) 2018 - 2020 Intel Corporation1212 *1313 * This program is free software; you can redistribute it and/or modify1414 * it under the terms of version 2 of the GNU General Public License as···3131 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.3232 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH3333 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH3434- * Copyright(c) 2018 - 2019 Intel Corporation3434+ * Copyright(c) 2018 - 2020 Intel Corporation3535 * All rights reserved.3636 *3737 * Redistribution and use in source and binary forms, with or without···421421 * able to run the GO Negotiation. Will not be fragmented and not422422 * repetitive. Valid only on the P2P Device MAC. Only the duration will423423 * be taken into account.424424+ * @SESSION_PROTECT_CONF_MAX_ID: not used424425 */425426enum iwl_mvm_session_prot_conf_id {426427 SESSION_PROTECT_CONF_ASSOC,427428 SESSION_PROTECT_CONF_GO_CLIENT_ASSOC,428429 SESSION_PROTECT_CONF_P2P_DEVICE_DISCOV,429430 SESSION_PROTECT_CONF_P2P_GO_NEGOTIATION,431431+ SESSION_PROTECT_CONF_MAX_ID,430432}; /* SESSION_PROTECTION_CONF_ID_E_VER_1 */431433432434/**···461459 * @mac_id: the mac id for which the session protection started / ended462460 * @status: 1 means success, 0 means failure463461 * @start: 1 means the session protection started, 0 means it ended464464- * @conf_id: the configuration id of the session that started / eneded462462+ * @conf_id: see &enum iwl_mvm_session_prot_conf_id465463 *466464 * Note that any session protection will always get two notifications: start467465 * and end even the firmware could not schedule it.
···3080308030813081 /* this would be a mac80211 bug ... but don't crash */30823082 if (WARN_ON_ONCE(!mvmvif->phy_ctxt))30833083- return -EINVAL;30833083+ return test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status) ? 0 : -EINVAL;3084308430853085 /*30863086 * If we are in a STA removal flow and in DQA mode:···31263126 ret = -EINVAL;31273127 goto out_unlock;31283128 }31293129+31303130+ if (vif->type == NL80211_IFTYPE_STATION)31313131+ vif->bss_conf.he_support = sta->he_cap.has_he;3129313231303133 if (sta->tdls &&31313134 (vif->p2p ||
+18
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
···196196 mpdu_dens = sta->ht_cap.ampdu_density;197197 }198198199199+199200 if (sta->vht_cap.vht_supported) {200201 agg_size = sta->vht_cap.cap &201202 IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;···205204 } else if (sta->ht_cap.ht_supported) {206205 agg_size = sta->ht_cap.ampdu_factor;207206 }207207+208208+ /* D6.0 10.12.2 A-MPDU length limit rules209209+ * A STA indicates the maximum length of the A-MPDU preEOF padding210210+ * that it can receive in an HE PPDU in the Maximum A-MPDU Length211211+ * Exponent field in its HT Capabilities, VHT Capabilities,212212+ * and HE 6 GHz Band Capabilities elements (if present) and the213213+ * Maximum AMPDU Length Exponent Extension field in its HE214214+ * Capabilities element215215+ */216216+ if (sta->he_cap.has_he)217217+ agg_size += u8_get_bits(sta->he_cap.he_cap_elem.mac_cap_info[3],218218+ IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_MASK);219219+220220+ /* Limit to max A-MPDU supported by FW */221221+ if (agg_size > (STA_FLG_MAX_AGG_SIZE_4M >> STA_FLG_MAX_AGG_SIZE_SHIFT))222222+ agg_size = (STA_FLG_MAX_AGG_SIZE_4M >>223223+ STA_FLG_MAX_AGG_SIZE_SHIFT);208224209225 add_sta_cmd.station_flags |=210226 cpu_to_le32(agg_size << STA_FLG_MAX_AGG_SIZE_SHIFT);
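The comment block above describes the arithmetic the patch adds: the HT/VHT A-MPDU length exponent is extended by the HE "Maximum A-MPDU Length Exponent Extension" field (extracted from mac_cap_info[3] with u8_get_bits()), then clamped to the largest aggregate the firmware supports. A userspace sketch of that rule; the mask value, the clamp exponent, and all names here are illustrative assumptions, not the mac80211/iwlwifi definitions:

```c
#include <stdint.h>
#include <assert.h>

/* Stand-in for the kernel's u8_get_bits() (linux/bitfield.h): extract
 * a field by masking and shifting down by the mask's lowest set bit.
 */
static unsigned int get_bits8(uint8_t v, uint8_t mask)
{
	return (unsigned int)(v & mask) >> __builtin_ctz(mask);
}

#define HE_MAX_AMPDU_LEN_EXP_MASK 0x18u	/* example mask: bits 3..4 */

/* The patch's rule: base HT/VHT exponent + HE extension, clamped to
 * the firmware maximum (here 7, i.e. 4M frames, as an example).
 */
static unsigned int ampdu_agg_exp(unsigned int base_exp, uint8_t mac_cap3,
				  unsigned int fw_max_exp)
{
	unsigned int exp = base_exp +
			   get_bits8(mac_cap3, HE_MAX_AMPDU_LEN_EXP_MASK);

	return exp > fw_max_exp ? fw_max_exp : exp;
}
```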
···641641 }642642}643643644644+static void iwl_mvm_cancel_session_protection(struct iwl_mvm *mvm,645645+ struct iwl_mvm_vif *mvmvif)646646+{647647+ struct iwl_mvm_session_prot_cmd cmd = {648648+ .id_and_color =649649+ cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,650650+ mvmvif->color)),651651+ .action = cpu_to_le32(FW_CTXT_ACTION_REMOVE),652652+ .conf_id = cpu_to_le32(mvmvif->time_event_data.id),653653+ };654654+ int ret;655655+656656+ ret = iwl_mvm_send_cmd_pdu(mvm, iwl_cmd_id(SESSION_PROTECTION_CMD,657657+ MAC_CONF_GROUP, 0),658658+ 0, sizeof(cmd), &cmd);659659+ if (ret)660660+ IWL_ERR(mvm,661661+ "Couldn't send the SESSION_PROTECTION_CMD: %d\n", ret);662662+}663663+644664static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm,645665 struct iwl_mvm_time_event_data *te_data,646666 u32 *uid)647667{648668 u32 id;669669+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);649670650671 /*651672 * It is possible that by the time we got to this point the time···684663 iwl_mvm_te_clear_data(mvm, te_data);685664 spin_unlock_bh(&mvm->time_event_lock);686665687687- /*688688- * It is possible that by the time we try to remove it, the time event689689- * has already ended and removed. In such a case there is no need to690690- * send a removal command.666666+ /* When session protection is supported, the te_data->id field667667+ * is reused to save session protection's configuration.691668 */692692- if (id == TE_MAX) {693693- IWL_DEBUG_TE(mvm, "TE 0x%x has already ended\n", *uid);669669+ if (fw_has_capa(&mvm->fw->ucode_capa,670670+ IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD)) {671671+ if (mvmvif && id < SESSION_PROTECT_CONF_MAX_ID) {672672+ /* Session protection is still ongoing. 
Cancel it */673673+ iwl_mvm_cancel_session_protection(mvm, mvmvif);674674+ if (te_data->vif->type == NL80211_IFTYPE_P2P_DEVICE) {675675+ set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status);676676+ iwl_mvm_roc_finished(mvm);677677+ }678678+ }694679 return false;680680+ } else {681681+ /* It is possible that by the time we try to remove it, the682682+ * time event has already ended and removed. In such a case683683+ * there is no need to send a removal command.684684+ */685685+ if (id == TE_MAX) {686686+ IWL_DEBUG_TE(mvm, "TE 0x%x has already ended\n", *uid);687687+ return false;688688+ }695689 }696690697691 return true;···807771 struct iwl_rx_packet *pkt = rxb_addr(rxb);808772 struct iwl_mvm_session_prot_notif *notif = (void *)pkt->data;809773 struct ieee80211_vif *vif;774774+ struct iwl_mvm_vif *mvmvif;810775811776 rcu_read_lock();812777 vif = iwl_mvm_rcu_dereference_vif_id(mvm, le32_to_cpu(notif->mac_id),···816779 if (!vif)817780 goto out_unlock;818781782782+ mvmvif = iwl_mvm_vif_from_mac80211(vif);783783+819784 /* The vif is not a P2P_DEVICE, maintain its time_event_data */820785 if (vif->type != NL80211_IFTYPE_P2P_DEVICE) {821821- struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);822786 struct iwl_mvm_time_event_data *te_data =823787 &mvmvif->time_event_data;824788···854816855817 if (!le32_to_cpu(notif->status) || !le32_to_cpu(notif->start)) {856818 /* End TE, notify mac80211 */819819+ mvmvif->time_event_data.id = SESSION_PROTECT_CONF_MAX_ID;857820 ieee80211_remain_on_channel_expired(mvm->hw);858821 set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status);859822 iwl_mvm_roc_finished(mvm);860823 } else if (le32_to_cpu(notif->start)) {824824+ if (WARN_ON(mvmvif->time_event_data.id !=825825+ le32_to_cpu(notif->conf_id)))826826+ goto out_unlock;861827 set_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status);862828 ieee80211_ready_on_channel(mvm->hw); /* Start TE */863829 }···887845888846 lockdep_assert_held(&mvm->mutex);889847848848+ /* The time_event_data.id field 
is reused to save session849849+ * protection's configuration.850850+ */890851 switch (type) {891852 case IEEE80211_ROC_TYPE_NORMAL:892892- cmd.conf_id =893893- cpu_to_le32(SESSION_PROTECT_CONF_P2P_DEVICE_DISCOV);853853+ mvmvif->time_event_data.id =854854+ SESSION_PROTECT_CONF_P2P_DEVICE_DISCOV;894855 break;895856 case IEEE80211_ROC_TYPE_MGMT_TX:896896- cmd.conf_id =897897- cpu_to_le32(SESSION_PROTECT_CONF_P2P_GO_NEGOTIATION);857857+ mvmvif->time_event_data.id =858858+ SESSION_PROTECT_CONF_P2P_GO_NEGOTIATION;898859 break;899860 default:900861 WARN_ONCE(1, "Got an invalid ROC type\n");901862 return -EINVAL;902863 }903864865865+ cmd.conf_id = cpu_to_le32(mvmvif->time_event_data.id);904866 return iwl_mvm_send_cmd_pdu(mvm, iwl_cmd_id(SESSION_PROTECTION_CMD,905867 MAC_CONF_GROUP, 0),906868 0, sizeof(cmd), &cmd);···1006960 __iwl_mvm_remove_time_event(mvm, te_data, &uid);1007961}100896210091009-static void iwl_mvm_cancel_session_protection(struct iwl_mvm *mvm,10101010- struct iwl_mvm_vif *mvmvif)10111011-{10121012- struct iwl_mvm_session_prot_cmd cmd = {10131013- .id_and_color =10141014- cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,10151015- mvmvif->color)),10161016- .action = cpu_to_le32(FW_CTXT_ACTION_REMOVE),10171017- };10181018- int ret;10191019-10201020- ret = iwl_mvm_send_cmd_pdu(mvm, iwl_cmd_id(SESSION_PROTECTION_CMD,10211021- MAC_CONF_GROUP, 0),10221022- 0, sizeof(cmd), &cmd);10231023- if (ret)10241024- IWL_ERR(mvm,10251025- "Couldn't send the SESSION_PROTECTION_CMD: %d\n", ret);10261026-}10271027-1028963void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif)1029964{1030965 struct iwl_mvm_vif *mvmvif;···1015988 IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD)) {1016989 mvmvif = iwl_mvm_vif_from_mac80211(vif);101799010181018- iwl_mvm_cancel_session_protection(mvm, mvmvif);10191019-10201020- if (vif->type == NL80211_IFTYPE_P2P_DEVICE)991991+ if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {992992+ iwl_mvm_cancel_session_protection(mvm, mvmvif);1021993 
set_bit(IWL_MVM_STATUS_NEED_FLUSH_P2P, &mvm->status);994994+ } else {995995+ iwl_mvm_remove_aux_roc_te(mvm, mvmvif,996996+ &mvmvif->time_event_data);997997+ }10229981023999 iwl_mvm_roc_finished(mvm);10241000···11561126 cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,11571127 mvmvif->color)),11581128 .action = cpu_to_le32(FW_CTXT_ACTION_ADD),11591159- .conf_id = cpu_to_le32(SESSION_PROTECT_CONF_ASSOC),11601129 .duration_tu = cpu_to_le32(MSEC_TO_TU(duration)),11611130 };11311131+11321132+ /* The time_event_data.id field is reused to save session11331133+ * protection's configuration.11341134+ */11351135+ mvmvif->time_event_data.id = SESSION_PROTECT_CONF_ASSOC;11361136+ cmd.conf_id = cpu_to_le32(mvmvif->time_event_data.id);1162113711631138 lockdep_assert_held(&mvm->mutex);11641139
···252252253253 iwl_set_bit(trans, CSR_CTXT_INFO_BOOT_CTRL,254254 CSR_AUTO_FUNC_BOOT_ENA);255255+256256+ if (trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210) {257257+ /*258258+ * The firmware initializes this again later (to a smaller259259+ * value), but for the boot process initialize the LTR to260260+ * ~250 usec.261261+ */262262+ u32 val = CSR_LTR_LONG_VAL_AD_NO_SNOOP_REQ |263263+ u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,264264+ CSR_LTR_LONG_VAL_AD_NO_SNOOP_SCALE) |265265+ u32_encode_bits(250,266266+ CSR_LTR_LONG_VAL_AD_NO_SNOOP_VAL) |267267+ CSR_LTR_LONG_VAL_AD_SNOOP_REQ |268268+ u32_encode_bits(CSR_LTR_LONG_VAL_AD_SCALE_USEC,269269+ CSR_LTR_LONG_VAL_AD_SNOOP_SCALE) |270270+ u32_encode_bits(250, CSR_LTR_LONG_VAL_AD_SNOOP_VAL);271271+272272+ iwl_write32(trans, CSR_LTR_LONG_VAL_AD, val);273273+ }274274+255275 if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)256276 iwl_write_umac_prph(trans, UREG_CPU_INIT_RUN, 1);257277 else
+27-9
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
···21562156 void *buf, int dwords)21572157{21582158 unsigned long flags;21592159- int offs, ret = 0;21592159+ int offs = 0;21602160 u32 *vals = buf;2161216121622162- if (iwl_trans_grab_nic_access(trans, &flags)) {21632163- iwl_write32(trans, HBUS_TARG_MEM_RADDR, addr);21642164- for (offs = 0; offs < dwords; offs++)21652165- vals[offs] = iwl_read32(trans, HBUS_TARG_MEM_RDAT);21662166- iwl_trans_release_nic_access(trans, &flags);21672167- } else {21682168- ret = -EBUSY;21622162+ while (offs < dwords) {21632163+ /* limit the time we spin here under lock to 1/2s */21642164+ ktime_t timeout = ktime_add_us(ktime_get(), 500 * USEC_PER_MSEC);21652165+21662166+ if (iwl_trans_grab_nic_access(trans, &flags)) {21672167+ iwl_write32(trans, HBUS_TARG_MEM_RADDR,21682168+ addr + 4 * offs);21692169+21702170+ while (offs < dwords) {21712171+ vals[offs] = iwl_read32(trans,21722172+ HBUS_TARG_MEM_RDAT);21732173+ offs++;21742174+21752175+ /* calling ktime_get is expensive so21762176+ * do it once in 128 reads21772177+ */21782178+ if (offs % 128 == 0 && ktime_after(ktime_get(),21792179+ timeout))21802180+ break;21812181+ }21822182+ iwl_trans_release_nic_access(trans, &flags);21832183+ } else {21842184+ return -EBUSY;21852185+ }21692186 }21702170- return ret;21872187+21882188+ return 0;21712189}2172219021732191static int iwl_trans_pcie_write_mem(struct iwl_trans *trans, u32 addr,
+1-1
drivers/net/wireless/realtek/rtw88/fw.c
···14821482int rtw_fw_dump_fifo(struct rtw_dev *rtwdev, u8 fifo_sel, u32 addr, u32 size,14831483 u32 *buffer)14841484{14851485- if (!rtwdev->chip->fw_fifo_addr) {14851485+ if (!rtwdev->chip->fw_fifo_addr[0]) {14861486 rtw_dbg(rtwdev, RTW_DBG_FW, "chip not support dump fw fifo\n");14871487 return -ENOTSUPP;14881488 }
+2-2
drivers/nfc/s3fwrn5/i2c.c
···2525 struct i2c_client *i2c_dev;2626 struct nci_dev *ndev;27272828- unsigned int gpio_en;2929- unsigned int gpio_fw_wake;2828+ int gpio_en;2929+ int gpio_fw_wake;30303131 struct mutex mutex;3232
···103103 return 0;104104}105105106106-static int idtcm_strverscmp(const char *ver1, const char *ver2)106106+static int idtcm_strverscmp(const char *version1, const char *version2)107107{108108- u8 num1;109109- u8 num2;110110- int result = 0;108108+ u8 ver1[3], ver2[3];109109+ int i;111110112112- /* loop through each level of the version string */113113- while (result == 0) {114114- /* extract leading version numbers */115115- if (kstrtou8(ver1, 10, &num1) < 0)111111+ if (sscanf(version1, "%hhu.%hhu.%hhu",112112+ &ver1[0], &ver1[1], &ver1[2]) != 3)113113+ return -1;114114+ if (sscanf(version2, "%hhu.%hhu.%hhu",115115+ &ver2[0], &ver2[1], &ver2[2]) != 3)116116+ return -1;117117+118118+ for (i = 0; i < 3; i++) {119119+ if (ver1[i] > ver2[i])120120+ return 1;121121+ if (ver1[i] < ver2[i])116122 return -1;117117-118118- if (kstrtou8(ver2, 10, &num2) < 0)119119- return -1;120120-121121- /* if numbers differ, then set the result */122122- if (num1 < num2)123123- result = -1;124124- else if (num1 > num2)125125- result = 1;126126- else {127127- /* if numbers are the same, go to next level */128128- ver1 = strchr(ver1, '.');129129- ver2 = strchr(ver2, '.');130130- if (!ver1 && !ver2)131131- break;132132- else if (!ver1)133133- result = -1;134134- else if (!ver2)135135- result = 1;136136- else {137137- ver1++;138138- ver2++;139139- }140140- }141123 }142142- return result;124124+125125+ return 0;143126}144127145128static int idtcm_xfer_read(struct idtcm *idtcm,
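The idtcm_strverscmp() rework above replaces the token-by-token kstrtou8() loop, which rejects any string still containing the '.' separators, with a single three-field sscanf() parse followed by a most-significant-first compare. The same logic as a userspace sketch (the function name is mine):

```c
#include <stdio.h>
#include <assert.h>

/* Compare two "major.minor.hotfix" version strings the way the patched
 * idtcm_strverscmp() does: parse exactly three numeric fields, then
 * compare field by field. Returns 1, -1 or 0; malformed input compares
 * as "older" (-1), matching the driver's behaviour.
 */
static int verscmp3(const char *a, const char *b)
{
	unsigned int va[3], vb[3];
	int i;

	if (sscanf(a, "%u.%u.%u", &va[0], &va[1], &va[2]) != 3)
		return -1;
	if (sscanf(b, "%u.%u.%u", &vb[0], &vb[1], &vb[2]) != 3)
		return -1;

	for (i = 0; i < 3; i++) {
		if (va[i] > vb[i])
			return 1;
		if (va[i] < vb[i])
			return -1;
	}
	return 0;
}
```

Note that numeric comparison also fixes what a plain strcmp() would get wrong: "4.8.10" must sort after "4.8.9", not before it.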
+6
drivers/s390/block/dasd.c
···2980298029812981 if (!block)29822982 return -EINVAL;29832983+ /*29842984+ * If the request is an ERP request there is nothing to requeue.29852985+ * This will be done with the remaining original request.29862986+ */29872987+ if (cqr->refers)29882988+ return 0;29832989 spin_lock_irq(&cqr->dq->lock);29842990 req = (struct request *) cqr->callback_data;29852991 blk_mq_requeue_request(req, false);
+6-3
drivers/s390/net/qeth_core.h
···417417 QETH_QDIO_BUF_EMPTY,418418 /* Filled by driver; owned by hardware in order to be sent. */419419 QETH_QDIO_BUF_PRIMED,420420- /* Identified to be pending in TPQ. */420420+ /* Discovered by the TX completion code: */421421 QETH_QDIO_BUF_PENDING,422422- /* Found in completion queue. */423423- QETH_QDIO_BUF_IN_CQ,422422+ /* Finished by the TX completion code: */423423+ QETH_QDIO_BUF_NEED_QAOB,424424+ /* Received QAOB notification on CQ: */425425+ QETH_QDIO_BUF_QAOB_OK,426426+ QETH_QDIO_BUF_QAOB_ERROR,424427 /* Handled via transfer pending / completion queue. */425428 QETH_QDIO_BUF_HANDLED_DELAYED,426429};
+54-28
drivers/s390/net/qeth_core_main.c
···33333434#include <net/iucv/af_iucv.h>3535#include <net/dsfield.h>3636+#include <net/sock.h>36373738#include <asm/ebcdic.h>3839#include <asm/chpid.h>···500499501500 }502501 }503503- if (forced_cleanup && (atomic_read(&(q->bufs[bidx]->state)) ==504504- QETH_QDIO_BUF_HANDLED_DELAYED)) {505505- /* for recovery situations */506506- qeth_init_qdio_out_buf(q, bidx);507507- QETH_CARD_TEXT(q->card, 2, "clprecov");508508- }509502}510503511504static void qeth_qdio_handle_aob(struct qeth_card *card,512505 unsigned long phys_aob_addr)513506{507507+ enum qeth_qdio_out_buffer_state new_state = QETH_QDIO_BUF_QAOB_OK;514508 struct qaob *aob;515509 struct qeth_qdio_out_buffer *buffer;516510 enum iucv_tx_notify notification;···516520 QETH_CARD_TEXT_(card, 5, "%lx", phys_aob_addr);517521 buffer = (struct qeth_qdio_out_buffer *) aob->user1;518522 QETH_CARD_TEXT_(card, 5, "%lx", aob->user1);519519-520520- if (atomic_cmpxchg(&buffer->state, QETH_QDIO_BUF_PRIMED,521521- QETH_QDIO_BUF_IN_CQ) == QETH_QDIO_BUF_PRIMED) {522522- notification = TX_NOTIFY_OK;523523- } else {524524- WARN_ON_ONCE(atomic_read(&buffer->state) !=525525- QETH_QDIO_BUF_PENDING);526526- atomic_set(&buffer->state, QETH_QDIO_BUF_IN_CQ);527527- notification = TX_NOTIFY_DELAYED_OK;528528- }529529-530530- if (aob->aorc != 0) {531531- QETH_CARD_TEXT_(card, 2, "aorc%02X", aob->aorc);532532- notification = qeth_compute_cq_notification(aob->aorc, 1);533533- }534534- qeth_notify_skbs(buffer->q, buffer, notification);535523536524 /* Free dangling allocations. 
The attached skbs are handled by537525 * qeth_cleanup_handled_pending().···528548 if (data && buffer->is_header[i])529549 kmem_cache_free(qeth_core_header_cache, data);530550 }531531- atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);551551+552552+ if (aob->aorc) {553553+ QETH_CARD_TEXT_(card, 2, "aorc%02X", aob->aorc);554554+ new_state = QETH_QDIO_BUF_QAOB_ERROR;555555+ }556556+557557+ switch (atomic_xchg(&buffer->state, new_state)) {558558+ case QETH_QDIO_BUF_PRIMED:559559+ /* Faster than TX completion code. */560560+ notification = qeth_compute_cq_notification(aob->aorc, 0);561561+ qeth_notify_skbs(buffer->q, buffer, notification);562562+ atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);563563+ break;564564+ case QETH_QDIO_BUF_PENDING:565565+ /* TX completion code is active and will handle the async566566+ * completion for us.567567+ */568568+ break;569569+ case QETH_QDIO_BUF_NEED_QAOB:570570+ /* TX completion code is already finished. */571571+ notification = qeth_compute_cq_notification(aob->aorc, 1);572572+ qeth_notify_skbs(buffer->q, buffer, notification);573573+ atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);574574+ break;575575+ default:576576+ WARN_ON_ONCE(1);577577+ }532578533579 qdio_release_aob(aob);534580}···14111405 skb_queue_walk(&buf->skb_list, skb) {14121406 QETH_CARD_TEXT_(q->card, 5, "skbn%d", notification);14131407 QETH_CARD_TEXT_(q->card, 5, "%lx", (long) skb);14141414- if (skb->protocol == htons(ETH_P_AF_IUCV) && skb->sk)14081408+ if (skb->sk && skb->sk->sk_family == PF_IUCV)14151409 iucv_sk(skb->sk)->sk_txnotify(skb, notification);14161410 }14171411}···14211415{14221416 struct qeth_qdio_out_q *queue = buf->q;14231417 struct sk_buff *skb;14241424-14251425- /* release may never happen from within CQ tasklet scope */14261426- WARN_ON_ONCE(atomic_read(&buf->state) == QETH_QDIO_BUF_IN_CQ);1427141814281419 if (atomic_read(&buf->state) == QETH_QDIO_BUF_PENDING)14291420 qeth_notify_skbs(queue, buf, 
TX_NOTIFY_GENERALERROR);···6081607860826079 if (atomic_cmpxchg(&buffer->state, QETH_QDIO_BUF_PRIMED,60836080 QETH_QDIO_BUF_PENDING) ==60846084- QETH_QDIO_BUF_PRIMED)60816081+ QETH_QDIO_BUF_PRIMED) {60856082 qeth_notify_skbs(queue, buffer, TX_NOTIFY_PENDING);60836083+60846084+ /* Handle race with qeth_qdio_handle_aob(): */60856085+ switch (atomic_xchg(&buffer->state,60866086+ QETH_QDIO_BUF_NEED_QAOB)) {60876087+ case QETH_QDIO_BUF_PENDING:60886088+ /* No concurrent QAOB notification. */60896089+ break;60906090+ case QETH_QDIO_BUF_QAOB_OK:60916091+ qeth_notify_skbs(queue, buffer,60926092+ TX_NOTIFY_DELAYED_OK);60936093+ atomic_set(&buffer->state,60946094+ QETH_QDIO_BUF_HANDLED_DELAYED);60956095+ break;60966096+ case QETH_QDIO_BUF_QAOB_ERROR:60976097+ qeth_notify_skbs(queue, buffer,60986098+ TX_NOTIFY_DELAYED_GENERALERROR);60996099+ atomic_set(&buffer->state,61006100+ QETH_QDIO_BUF_HANDLED_DELAYED);61016101+ break;61026102+ default:61036103+ WARN_ON_ONCE(1);61046104+ }61056105+ }6086610660876107 QETH_CARD_TEXT_(card, 5, "pel%u", bidx);60886108
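The qeth change above resolves the race between the TX completion path and the asynchronous QAOB notification with a single atomic state exchange: whichever side runs second observes the state the other left behind and delivers the one final notification. A minimal single-buffer model of that handshake, using C11 atomics in place of the kernel's atomic_t; the state names and functions are illustrative, not the driver's:

```c
#include <stdatomic.h>
#include <assert.h>

enum buf_state { BUF_PRIMED, BUF_PENDING, BUF_NEED_QAOB, BUF_QAOB_OK, BUF_DONE };

struct buf {
	_Atomic enum buf_state state;
	int notified;		/* count of final notifications delivered */
};

/* QAOB (async completion) notification arrives on the CQ */
static void qaob_side(struct buf *b)
{
	switch (atomic_exchange(&b->state, BUF_QAOB_OK)) {
	case BUF_PRIMED:	/* faster than TX completion: finish here */
	case BUF_NEED_QAOB:	/* TX completion already done: finish here */
		b->notified++;
		atomic_store(&b->state, BUF_DONE);
		break;
	case BUF_PENDING:	/* TX completion is active, it will finish */
		break;
	default:
		break;
	}
}

/* TX completion finds the buffer pending and hands off via the state */
static void txc_side(struct buf *b)
{
	switch (atomic_exchange(&b->state, BUF_NEED_QAOB)) {
	case BUF_PENDING:	/* no QAOB yet, the QAOB side will finish */
		break;
	case BUF_QAOB_OK:	/* QAOB already arrived: finish here */
		b->notified++;
		atomic_store(&b->state, BUF_DONE);
		break;
	default:
		break;
	}
}
```

Either ordering of the two sides yields exactly one notification, which is the property the WARN_ON_ONCE() default cases in the patch are guarding.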
+2-16
drivers/s390/net/qeth_l2_main.c
···
  * change notification' and thus can support the learning_sync bridgeport
  * attribute
  * @card: qeth_card structure pointer
- *
- * This is a destructive test and must be called before dev2br or
- * bridgeport address notification is enabled!
  */
 static void qeth_l2_detect_dev2br_support(struct qeth_card *card)
 {
 	struct qeth_priv *priv = netdev_priv(card->dev);
 	bool dev2br_supported;
-	int rc;
 
 	QETH_CARD_TEXT(card, 2, "d2brsup");
 	if (!IS_IQD(card))
 		return;
 
 	/* dev2br requires valid cssid,iid,chid */
-	if (!card->info.ids_valid) {
-		dev2br_supported = false;
-	} else if (css_general_characteristics.enarf) {
-		dev2br_supported = true;
-	} else {
-		/* Old machines don't have the feature bit:
-		 * Probe by testing whether a disable succeeds
-		 */
-		rc = qeth_l2_pnso(card, PNSO_OC_NET_ADDR_INFO, 0, NULL, NULL);
-		dev2br_supported = !rc;
-	}
+	dev2br_supported = card->info.ids_valid &&
+			   css_general_characteristics.enarf;
 	QETH_CARD_TEXT_(card, 2, "D2Bsup%02x", dev2br_supported);
 
 	if (dev2br_supported)
···
 	struct net_device *dev = card->dev;
 	int rc = 0;
 
-	/* query before bridgeport_notification may be enabled */
 	qeth_l2_detect_dev2br_support(card);
 
 	mutex_lock(&card->sbp_lock);
+15-8
drivers/scsi/libiscsi.c
···
 	if (conn->task == task)
 		conn->task = NULL;
 
-	if (conn->ping_task == task)
-		conn->ping_task = NULL;
+	if (READ_ONCE(conn->ping_task) == task)
+		WRITE_ONCE(conn->ping_task, NULL);
 
 	/* release get from queueing */
 	__iscsi_put_task(task);
···
 		task->hdr->itt = build_itt(task->itt,
 					   task->conn->session->age);
 	}
+
+	if (unlikely(READ_ONCE(conn->ping_task) == INVALID_SCSI_TASK))
+		WRITE_ONCE(conn->ping_task, task);
 
 	if (!ihost->workq) {
 		if (iscsi_prep_mgmt_task(conn, task))
···
 	struct iscsi_nopout hdr;
 	struct iscsi_task *task;
 
-	if (!rhdr && conn->ping_task)
-		return -EINVAL;
+	if (!rhdr) {
+		if (READ_ONCE(conn->ping_task))
+			return -EINVAL;
+		WRITE_ONCE(conn->ping_task, INVALID_SCSI_TASK);
+	}
 
 	memset(&hdr, 0, sizeof(struct iscsi_nopout));
 	hdr.opcode = ISCSI_OP_NOOP_OUT | ISCSI_OP_IMMEDIATE;
···
 
 	task = __iscsi_conn_send_pdu(conn, (struct iscsi_hdr *)&hdr, NULL, 0);
 	if (!task) {
+		if (!rhdr)
+			WRITE_ONCE(conn->ping_task, NULL);
 		iscsi_conn_printk(KERN_ERR, conn, "Could not send nopout\n");
 		return -EIO;
 	} else if (!rhdr) {
 		/* only track our nops */
-		conn->ping_task = task;
 		conn->last_ping = jiffies;
 	}
···
 	struct iscsi_conn *conn = task->conn;
 	int rc = 0;
 
-	if (conn->ping_task != task) {
+	if (READ_ONCE(conn->ping_task) != task) {
 		/*
 		 * If this is not in response to one of our
 		 * nops then it must be from userspace.
···
 */
static int iscsi_has_ping_timed_out(struct iscsi_conn *conn)
{
-	if (conn->ping_task &&
+	if (READ_ONCE(conn->ping_task) &&
 	    time_before_eq(conn->last_recv + (conn->recv_timeout * HZ) +
 			   (conn->ping_timeout * HZ), jiffies))
 		return 1;
···
 	 * Checking the transport already or nop from a cmd timeout still
 	 * running
 	 */
-	if (conn->ping_task) {
+	if (READ_ONCE(conn->ping_task)) {
 		task->have_checked_conn = true;
 		rc = BLK_EH_RESET_TIMER;
 		goto done;
+23-14
drivers/scsi/ufs/ufshcd.c
···
 	}
 	spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
 
+	pm_runtime_get_noresume(hba->dev);
+	if (!pm_runtime_active(hba->dev)) {
+		pm_runtime_put_noidle(hba->dev);
+		ret = -EAGAIN;
+		goto out;
+	}
 	start = ktime_get();
 	ret = ufshcd_devfreq_scale(hba, scale_up);
+	pm_runtime_put(hba->dev);
 
 	trace_ufshcd_profile_clk_scaling(dev_name(hba->dev),
 		(scale_up ? "up" : "down"),
···
 	/* Get the length of descriptor */
 	ufshcd_map_desc_id_to_length(hba, desc_id, &buff_len);
 	if (!buff_len) {
-		dev_err(hba->dev, "%s: Failed to get desc length", __func__);
+		dev_err(hba->dev, "%s: Failed to get desc length\n", __func__);
+		return -EINVAL;
+	}
+
+	if (param_offset >= buff_len) {
+		dev_err(hba->dev, "%s: Invalid offset 0x%x in descriptor IDN 0x%x, length 0x%x\n",
+			__func__, param_offset, desc_id, buff_len);
 		return -EINVAL;
 	}
 
 	/* Check whether we need temp memory */
 	if (param_offset != 0 || param_size < buff_len) {
-		desc_buf = kmalloc(buff_len, GFP_KERNEL);
+		desc_buf = kzalloc(buff_len, GFP_KERNEL);
 		if (!desc_buf)
 			return -ENOMEM;
 	} else {
···
 			desc_buf, &buff_len);
 
 	if (ret) {
-		dev_err(hba->dev, "%s: Failed reading descriptor. desc_id %d, desc_index %d, param_offset %d, ret %d",
+		dev_err(hba->dev, "%s: Failed reading descriptor. desc_id %d, desc_index %d, param_offset %d, ret %d\n",
 			__func__, desc_id, desc_index, param_offset, ret);
 		goto out;
 	}
 
 	/* Sanity check */
 	if (desc_buf[QUERY_DESC_DESC_TYPE_OFFSET] != desc_id) {
-		dev_err(hba->dev, "%s: invalid desc_id %d in descriptor header",
+		dev_err(hba->dev, "%s: invalid desc_id %d in descriptor header\n",
 			__func__, desc_buf[QUERY_DESC_DESC_TYPE_OFFSET]);
 		ret = -EINVAL;
 		goto out;
···
 	buff_len = desc_buf[QUERY_DESC_LENGTH_OFFSET];
 	ufshcd_update_desc_length(hba, desc_id, desc_index, buff_len);
 
-	/* Check wherher we will not copy more data, than available */
-	if (is_kmalloc && (param_offset + param_size) > buff_len)
-		param_size = buff_len - param_offset;
-
-	if (is_kmalloc)
+	if (is_kmalloc) {
+		/* Make sure we don't copy more data than available */
+		if (param_offset + param_size > buff_len)
+			param_size = buff_len - param_offset;
 		memcpy(param_read_buf, &desc_buf[param_offset], param_size);
+	}
out:
 	if (is_kmalloc)
 		kfree(desc_buf);
···
 	if (ufshcd_is_ufs_dev_poweroff(hba) && ufshcd_is_link_off(hba))
 		goto out;
 
-	if (pm_runtime_suspended(hba->dev)) {
-		ret = ufshcd_runtime_resume(hba);
-		if (ret)
-			goto out;
-	}
+	pm_runtime_get_sync(hba->dev);
 
 	ret = ufshcd_suspend(hba, UFS_SHUTDOWN_PM);
out:
+1-4
drivers/soc/fsl/dpio/dpio-driver.c
···
 {
 	int error;
 	struct fsl_mc_device_irq *irq;
-	cpumask_t mask;
 
 	irq = dpio_dev->irqs[0];
 	error = devm_request_irq(&dpio_dev->dev,
···
 	}
 
 	/* set the affinity hint */
-	cpumask_clear(&mask);
-	cpumask_set_cpu(cpu, &mask);
-	if (irq_set_affinity_hint(irq->msi_desc->irq, &mask))
+	if (irq_set_affinity_hint(irq->msi_desc->irq, cpumask_of(cpu)))
 		dev_err(&dpio_dev->dev,
 			"irq_set_affinity failed irq %d cpu %d\n",
 			irq->msi_desc->irq, cpu);
drivers/tty/serial/imx.c
···
 	struct imx_port *sport = dev_id;
 	unsigned int usr1, usr2, ucr1, ucr2, ucr3, ucr4;
 	irqreturn_t ret = IRQ_NONE;
+	unsigned long flags = 0;
 
-	spin_lock(&sport->port.lock);
+	/*
+	 * IRQs might not be disabled upon entering this interrupt handler,
+	 * e.g. when interrupt handlers are forced to be threaded. To support
+	 * this scenario as well, disable IRQs when acquiring the spinlock.
+	 */
+	spin_lock_irqsave(&sport->port.lock, flags);
 
 	usr1 = imx_uart_readl(sport, USR1);
 	usr2 = imx_uart_readl(sport, USR2);
···
 		ret = IRQ_HANDLED;
 	}
 
-	spin_unlock(&sport->port.lock);
+	spin_unlock_irqrestore(&sport->port.lock, flags);
 
 	return ret;
}
···
 	unsigned int ucr1;
 	unsigned long flags = 0;
 	int locked = 1;
-	int retval;
-
-	retval = clk_enable(sport->clk_per);
-	if (retval)
-		return;
-	retval = clk_enable(sport->clk_ipg);
-	if (retval) {
-		clk_disable(sport->clk_per);
-		return;
-	}
 
 	if (sport->port.sysrq)
 		locked = 0;
···
 
 	if (locked)
 		spin_unlock_irqrestore(&sport->port.lock, flags);
-
-	clk_disable(sport->clk_ipg);
-	clk_disable(sport->clk_per);
}
 
/*
···
 
 	retval = uart_set_options(&sport->port, co, baud, parity, bits, flow);
 
-	clk_disable(sport->clk_ipg);
 	if (retval) {
-		clk_unprepare(sport->clk_ipg);
+		clk_disable_unprepare(sport->clk_ipg);
 		goto error_console;
 	}
 
-	retval = clk_prepare(sport->clk_per);
+	retval = clk_prepare_enable(sport->clk_per);
 	if (retval)
-		clk_unprepare(sport->clk_ipg);
+		clk_disable_unprepare(sport->clk_ipg);
 
error_console:
 	return retval;
+6-1
drivers/video/fbdev/hyperv_fb.c
···
 		goto err1;
 	}
 
-	fb_virt = ioremap(par->mem->start, screen_fb_size);
+	/*
+	 * Map the VRAM cacheable for performance. This is also required for
+	 * VM Connect to display properly for ARM64 Linux VM, as the host also
+	 * maps the VRAM cacheable.
+	 */
+	fb_virt = ioremap_cache(par->mem->start, screen_fb_size);
 	if (!fb_virt)
 		goto err2;
fs/afs/inode.c
···
 			op->flags &= ~AFS_OPERATION_DIR_CONFLICT;
 		}
 	} else if (vp->scb.have_status) {
+		if (vp->dv_before + vp->dv_delta != vp->scb.status.data_version &&
+		    vp->speculative)
+			/* Ignore the result of a speculative bulk status fetch
+			 * if it splits around a modification op, thereby
+			 * appearing to regress the data version.
+			 */
+			goto out;
 		afs_apply_status(op, vp);
 		if (vp->scb.have_cb)
 			afs_apply_callback(op, vp);
···
 		}
 	}
 
+out:
 	write_sequnlock(&vnode->cb_lock);
 
 	if (vp->scb.have_status)
+1
fs/afs/internal.h
···
 	bool update_ctime:1;		/* Need to update the ctime */
 	bool set_size:1;		/* Must update i_size */
 	bool op_unlinked:1;		/* True if file was unlinked by op */
+	bool speculative:1;		/* T if speculative status fetch (no vnode lock) */
};
 
/*
+4-1
fs/btrfs/ctree.h
···
 	 */
 	struct ulist *qgroup_ulist;
 
-	/* protect user change for quota operations */
+	/*
+	 * Protect user change for quota operations. If a transaction is needed,
+	 * it must be started before locking this lock.
+	 */
 	struct mutex qgroup_ioctl_lock;
 
 	/* list of dirty qgroups to be written at next commit */
-57
fs/btrfs/file.c
···
 	}
}
 
-static int btrfs_find_new_delalloc_bytes(struct btrfs_inode *inode,
-					 const u64 start,
-					 const u64 len,
-					 struct extent_state **cached_state)
-{
-	u64 search_start = start;
-	const u64 end = start + len - 1;
-
-	while (search_start < end) {
-		const u64 search_len = end - search_start + 1;
-		struct extent_map *em;
-		u64 em_len;
-		int ret = 0;
-
-		em = btrfs_get_extent(inode, NULL, 0, search_start, search_len);
-		if (IS_ERR(em))
-			return PTR_ERR(em);
-
-		if (em->block_start != EXTENT_MAP_HOLE)
-			goto next;
-
-		em_len = em->len;
-		if (em->start < search_start)
-			em_len -= search_start - em->start;
-		if (em_len > search_len)
-			em_len = search_len;
-
-		ret = set_extent_bit(&inode->io_tree, search_start,
-				     search_start + em_len - 1,
-				     EXTENT_DELALLOC_NEW,
-				     NULL, cached_state, GFP_NOFS);
-next:
-		search_start = extent_map_end(em);
-		free_extent_map(em);
-		if (ret)
-			return ret;
-	}
-	return 0;
-}
-
/*
 * after copy_from_user, pages need to be dirtied and we need to make
 * sure holes are created between the current EOF and the start of
···
 	clear_extent_bit(&inode->io_tree, start_pos, end_of_last_block,
 			 EXTENT_DELALLOC | EXTENT_DO_ACCOUNTING | EXTENT_DEFRAG,
 			 0, 0, cached);
-
-	if (!btrfs_is_free_space_inode(inode)) {
-		if (start_pos >= isize &&
-		    !(inode->flags & BTRFS_INODE_PREALLOC)) {
-			/*
-			 * There can't be any extents following eof in this case
-			 * so just set the delalloc new bit for the range
-			 * directly.
-			 */
-			extra_bits |= EXTENT_DELALLOC_NEW;
-		} else {
-			err = btrfs_find_new_delalloc_bytes(inode, start_pos,
-							    num_bytes, cached);
-			if (err)
-				return err;
-		}
-	}
 
 	err = btrfs_set_extent_delalloc(inode, start_pos, end_of_last_block,
 					extra_bits, cached);
+58
fs/btrfs/inode.c
···
 	return 0;
}
 
+static int btrfs_find_new_delalloc_bytes(struct btrfs_inode *inode,
+					 const u64 start,
+					 const u64 len,
+					 struct extent_state **cached_state)
+{
+	u64 search_start = start;
+	const u64 end = start + len - 1;
+
+	while (search_start < end) {
+		const u64 search_len = end - search_start + 1;
+		struct extent_map *em;
+		u64 em_len;
+		int ret = 0;
+
+		em = btrfs_get_extent(inode, NULL, 0, search_start, search_len);
+		if (IS_ERR(em))
+			return PTR_ERR(em);
+
+		if (em->block_start != EXTENT_MAP_HOLE)
+			goto next;
+
+		em_len = em->len;
+		if (em->start < search_start)
+			em_len -= search_start - em->start;
+		if (em_len > search_len)
+			em_len = search_len;
+
+		ret = set_extent_bit(&inode->io_tree, search_start,
+				     search_start + em_len - 1,
+				     EXTENT_DELALLOC_NEW,
+				     NULL, cached_state, GFP_NOFS);
+next:
+		search_start = extent_map_end(em);
+		free_extent_map(em);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
int btrfs_set_extent_delalloc(struct btrfs_inode *inode, u64 start, u64 end,
			      unsigned int extra_bits,
			      struct extent_state **cached_state)
{
 	WARN_ON(PAGE_ALIGNED(end));
+
+	if (start >= i_size_read(&inode->vfs_inode) &&
+	    !(inode->flags & BTRFS_INODE_PREALLOC)) {
+		/*
+		 * There can't be any extents following eof in this case so just
+		 * set the delalloc new bit for the range directly.
+		 */
+		extra_bits |= EXTENT_DELALLOC_NEW;
+	} else {
+		int ret;
+
+		ret = btrfs_find_new_delalloc_bytes(inode, start,
+						    end + 1 - start,
+						    cached_state);
+		if (ret)
+			return ret;
+	}
+
 	return set_extent_delalloc(&inode->io_tree, start, end, extra_bits,
 				   cached_state);
}
+78-10
fs/btrfs/qgroup.c
···
#include <linux/slab.h>
#include <linux/workqueue.h>
#include <linux/btrfs.h>
+#include <linux/sched/mm.h>
 
#include "ctree.h"
#include "transaction.h"
···
 		break;
 	}
out:
+	btrfs_free_path(path);
 	fs_info->qgroup_flags |= flags;
 	if (!(fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_ON))
 		clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
 	else if (fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN &&
 		 ret >= 0)
 		ret = qgroup_rescan_init(fs_info, rescan_progress, 0);
-	btrfs_free_path(path);
 
 	if (ret < 0) {
 		ulist_free(fs_info->qgroup_ulist);
···
 	struct btrfs_key found_key;
 	struct btrfs_qgroup *qgroup = NULL;
 	struct btrfs_trans_handle *trans = NULL;
+	struct ulist *ulist = NULL;
 	int ret = 0;
 	int slot;
···
 	if (fs_info->quota_root)
 		goto out;
 
-	fs_info->qgroup_ulist = ulist_alloc(GFP_KERNEL);
-	if (!fs_info->qgroup_ulist) {
+	ulist = ulist_alloc(GFP_KERNEL);
+	if (!ulist) {
 		ret = -ENOMEM;
 		goto out;
 	}
 
 	ret = btrfs_sysfs_add_qgroups(fs_info);
 	if (ret < 0)
 		goto out;
+
+	/*
+	 * Unlock qgroup_ioctl_lock before starting the transaction. This is to
+	 * avoid lock acquisition inversion problems (reported by lockdep) between
+	 * qgroup_ioctl_lock and the vfs freeze semaphores, acquired when we
+	 * start a transaction.
+	 * After we started the transaction lock qgroup_ioctl_lock again and
+	 * check if someone else created the quota root in the meanwhile. If so,
+	 * just return success and release the transaction handle.
+	 *
+	 * Also we don't need to worry about someone else calling
+	 * btrfs_sysfs_add_qgroups() after we unlock and getting an error because
+	 * that function returns 0 (success) when the sysfs entries already exist.
+	 */
+	mutex_unlock(&fs_info->qgroup_ioctl_lock);
+
 	/*
 	 * 1 for quota root item
 	 * 1 for BTRFS_QGROUP_STATUS item
···
 	 * would be a lot of overkill.
 	 */
 	trans = btrfs_start_transaction(tree_root, 2);
+
+	mutex_lock(&fs_info->qgroup_ioctl_lock);
 	if (IS_ERR(trans)) {
 		ret = PTR_ERR(trans);
 		trans = NULL;
 		goto out;
 	}
+
+	if (fs_info->quota_root)
+		goto out;
+
+	fs_info->qgroup_ulist = ulist;
+	ulist = NULL;
 
 	/*
 	 * initially create the quota tree
···
 	if (ret) {
 		ulist_free(fs_info->qgroup_ulist);
 		fs_info->qgroup_ulist = NULL;
-		if (trans)
-			btrfs_end_transaction(trans);
 		btrfs_sysfs_del_qgroups(fs_info);
 	}
 	mutex_unlock(&fs_info->qgroup_ioctl_lock);
+	if (ret && trans)
+		btrfs_end_transaction(trans);
+	else if (trans)
+		ret = btrfs_end_transaction(trans);
+	ulist_free(ulist);
 	return ret;
}
···
 	mutex_lock(&fs_info->qgroup_ioctl_lock);
 	if (!fs_info->quota_root)
 		goto out;
+	mutex_unlock(&fs_info->qgroup_ioctl_lock);
 
 	/*
 	 * 1 For the root item
 	 *
 	 * We should also reserve enough items for the quota tree deletion in
 	 * btrfs_clean_quota_tree but this is not done.
+	 *
+	 * Also, we must always start a transaction without holding the mutex
+	 * qgroup_ioctl_lock, see btrfs_quota_enable().
 	 */
 	trans = btrfs_start_transaction(fs_info->tree_root, 1);
+
+	mutex_lock(&fs_info->qgroup_ioctl_lock);
 	if (IS_ERR(trans)) {
 		ret = PTR_ERR(trans);
+		trans = NULL;
 		goto out;
 	}
+
+	if (!fs_info->quota_root)
+		goto out;
 
 	clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
 	btrfs_qgroup_wait_for_completion(fs_info, false);
···
 	ret = btrfs_clean_quota_tree(trans, quota_root);
 	if (ret) {
 		btrfs_abort_transaction(trans, ret);
-		goto end_trans;
+		goto out;
 	}
 
 	ret = btrfs_del_root(trans, &quota_root->root_key);
 	if (ret) {
 		btrfs_abort_transaction(trans, ret);
-		goto end_trans;
+		goto out;
 	}
 
 	list_del(&quota_root->dirty_list);
···
 
 	btrfs_put_root(quota_root);
 
-end_trans:
-	ret = btrfs_end_transaction(trans);
out:
 	mutex_unlock(&fs_info->qgroup_ioctl_lock);
+	if (ret && trans)
+		btrfs_end_transaction(trans);
+	else if (trans)
+		ret = btrfs_end_transaction(trans);
+
 	return ret;
}
···
 	struct btrfs_qgroup *member;
 	struct btrfs_qgroup_list *list;
 	struct ulist *tmp;
+	unsigned int nofs_flag;
 	int ret = 0;
 
 	/* Check the level of src and dst first */
 	if (btrfs_qgroup_level(src) >= btrfs_qgroup_level(dst))
 		return -EINVAL;
 
+	/* We hold a transaction handle open, must do a NOFS allocation. */
+	nofs_flag = memalloc_nofs_save();
 	tmp = ulist_alloc(GFP_KERNEL);
+	memalloc_nofs_restore(nofs_flag);
 	if (!tmp)
 		return -ENOMEM;
···
 	struct btrfs_qgroup_list *list;
 	struct ulist *tmp;
 	bool found = false;
+	unsigned int nofs_flag;
 	int ret = 0;
 	int ret2;
 
+	/* We hold a transaction handle open, must do a NOFS allocation. */
+	nofs_flag = memalloc_nofs_save();
 	tmp = ulist_alloc(GFP_KERNEL);
+	memalloc_nofs_restore(nofs_flag);
 	if (!tmp)
 		return -ENOMEM;
···
{
 	struct btrfs_trans_handle *trans;
 	int ret;
+	bool can_commit = true;
 
 	/*
 	 * We don't want to run flush again and again, so if there is a running
···
 			!test_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state));
 		return 0;
 	}
+
+	/*
+	 * If current process holds a transaction, we shouldn't flush, as we
+	 * assume all space reservation happens before a transaction handle is
+	 * held.
+	 *
+	 * But there are cases like btrfs_delayed_item_reserve_metadata() where
+	 * we try to reserve space with one transction handle already held.
+	 * In that case we can't commit transaction, but at least try to end it
+	 * and hope the started data writes can free some space.
+	 */
+	if (current->journal_info &&
+	    current->journal_info != BTRFS_SEND_TRANS_STUB)
+		can_commit = false;
 
 	ret = btrfs_start_delalloc_snapshot(root);
 	if (ret < 0)
···
 		goto out;
 	}
 
-	ret = btrfs_commit_transaction(trans);
+	if (can_commit)
+		ret = btrfs_commit_transaction(trans);
+	else
+		ret = btrfs_end_transaction(trans);
out:
 	clear_bit(BTRFS_ROOT_QGROUP_FLUSHING, &root->state);
 	wake_up(&root->qgroup_flush_wait);
fs/btrfs/tree-checker.c
···
 			    "invalid root item size, have %u expect %zu or %u",
 			    btrfs_item_size_nr(leaf, slot), sizeof(ri),
 			    btrfs_legacy_root_item_size());
+		return -EUCLEAN;
 	}
 
 	/*
···
 			    "invalid item size, have %u expect aligned to %zu for key type %u",
 			    btrfs_item_size_nr(leaf, slot),
 			    sizeof(*dref), key->type);
+		return -EUCLEAN;
 	}
 	if (!IS_ALIGNED(key->objectid, leaf->fs_info->sectorsize)) {
 		generic_err(leaf, slot,
···
 			extent_err(leaf, slot,
 				   "invalid extent data backref offset, have %llu expect aligned to %u",
 				   offset, leaf->fs_info->sectorsize);
+			return -EUCLEAN;
 		}
 	}
 	return 0;
+7-1
fs/btrfs/volumes.c
···
 		if (device->bdev != path_bdev) {
 			bdput(path_bdev);
 			mutex_unlock(&fs_devices->device_list_mutex);
-			btrfs_warn_in_rcu(device->fs_info,
+			/*
+			 * device->fs_info may not be reliable here, so
+			 * pass in a NULL instead. This avoids a
+			 * possible use-after-free when the fs_info and
+			 * fs_info->sb are already torn down.
+			 */
+			btrfs_warn_in_rcu(NULL,
	"duplicate device %s devid %llu generation %llu scanned by %s (%d)",
 					  path, devid, found_transid,
 					  current->comm,
fs/ext4/ext4.h
···
 			     struct ext4_filename *fname);
static inline void ext4_update_dx_flag(struct inode *inode)
{
-	if (!ext4_has_feature_dir_index(inode->i_sb)) {
+	if (!ext4_has_feature_dir_index(inode->i_sb) &&
+	    ext4_test_inode_flag(inode, EXT4_INODE_INDEX)) {
 		/* ext4_iget() should have caught this... */
 		WARN_ON_ONCE(ext4_has_feature_metadata_csum(inode->i_sb));
 		ext4_clear_inode_flag(inode, EXT4_INODE_INDEX);
-4
fs/ext4/super.c
···
 	} else if (test_opt2(sb, DAX_INODE)) {
 		SEQ_OPTS_PUTS("dax=inode");
 	}
-
-	if (test_opt2(sb, JOURNAL_FAST_COMMIT))
-		SEQ_OPTS_PUTS("fast_commit");
-
 	ext4_show_quota_options(seq, sb);
 	return 0;
}
+62-34
fs/io_uring.c
···
 	struct list_head file_list;
 	struct fixed_file_data *file_data;
 	struct llist_node llist;
+	bool done;
};
 
struct fixed_file_data {
···
struct io_open {
 	struct file *file;
 	int dfd;
+	bool ignore_nonblock;
 	struct filename *filename;
 	struct open_how how;
 	unsigned long nofile;
···
 			return false;
 		req->work.flags |= IO_WQ_WORK_FSIZE;
 	}
-
-	if (!(req->work.flags & IO_WQ_WORK_FILES) &&
-	    (def->work_flags & IO_WQ_WORK_FILES) &&
-	    !(req->flags & REQ_F_NO_FILE_TABLE)) {
-		if (id->files != current->files ||
-		    id->nsproxy != current->nsproxy)
-			return false;
-		atomic_inc(&id->files->count);
-		get_nsproxy(id->nsproxy);
-		req->flags |= REQ_F_INFLIGHT;
-
-		spin_lock_irq(&ctx->inflight_lock);
-		list_add(&req->inflight_entry, &ctx->inflight_list);
-		spin_unlock_irq(&ctx->inflight_lock);
-		req->work.flags |= IO_WQ_WORK_FILES;
-	}
#ifdef CONFIG_BLK_CGROUP
 	if (!(req->work.flags & IO_WQ_WORK_BLKCG) &&
 	    (def->work_flags & IO_WQ_WORK_BLKCG)) {
···
 			req->work.flags |= IO_WQ_WORK_CANCEL;
 		}
 		spin_unlock(&current->fs->lock);
+	}
+	if (!(req->work.flags & IO_WQ_WORK_FILES) &&
+	    (def->work_flags & IO_WQ_WORK_FILES) &&
+	    !(req->flags & REQ_F_NO_FILE_TABLE)) {
+		if (id->files != current->files ||
+		    id->nsproxy != current->nsproxy)
+			return false;
+		atomic_inc(&id->files->count);
+		get_nsproxy(id->nsproxy);
+		req->flags |= REQ_F_INFLIGHT;
+
+		spin_lock_irq(&ctx->inflight_lock);
+		list_add(&req->inflight_entry, &ctx->inflight_list);
+		spin_unlock_irq(&ctx->inflight_lock);
+		req->work.flags |= IO_WQ_WORK_FILES;
 	}
 
 	return true;
···
 	}
end_req:
 	req_set_fail_links(req);
-	io_req_complete(req, ret);
 	return false;
}
#endif
···
 	rw->free_iovec = iovec;
 	rw->bytes_done = 0;
 	/* can only be fixed buffers, no need to do anything */
-	if (iter->type == ITER_BVEC)
+	if (iov_iter_is_bvec(iter))
 		return;
 	if (!iovec) {
 		unsigned iov_off = 0;
···
 		return ret;
 	}
 	req->open.nofile = rlimit(RLIMIT_NOFILE);
+	req->open.ignore_nonblock = false;
 	req->flags |= REQ_F_NEED_CLEANUP;
 	return 0;
}
···
 	struct file *file;
 	int ret;
 
-	if (force_nonblock)
+	if (force_nonblock && !req->open.ignore_nonblock)
 		return -EAGAIN;
 
 	ret = build_open_flags(&req->open.how, &op);
···
 	if (IS_ERR(file)) {
 		put_unused_fd(ret);
 		ret = PTR_ERR(file);
+		/*
+		 * A work-around to ensure that /proc/self works that way
+		 * that it should - if we get -EOPNOTSUPP back, then assume
+		 * that proc_self_get_link() failed us because we're in async
+		 * context. We should be safe to retry this from the task
+		 * itself with force_nonblock == false set, as it should not
+		 * block on lookup. Would be nice to know this upfront and
+		 * avoid the async dance, but doesn't seem feasible.
+		 */
+		if (ret == -EOPNOTSUPP && io_wq_current_is_worker()) {
+			req->open.ignore_nonblock = true;
+			refcount_inc(&req->refs);
+			io_req_task_queue(req);
+			return 0;
+		}
 	} else {
 		fsnotify_open(file);
 		fd_install(ret, file);
···
 		return -ENXIO;
 
 	spin_lock(&data->lock);
-	if (!list_empty(&data->ref_list))
-		ref_node = list_first_entry(&data->ref_list,
-					    struct fixed_file_ref_node, node);
+	ref_node = data->node;
 	spin_unlock(&data->lock);
 	if (ref_node)
 		percpu_ref_kill(&ref_node->refs);
···
 		kfree(pfile);
 	}
 
-	spin_lock(&file_data->lock);
-	list_del(&ref_node->node);
-	spin_unlock(&file_data->lock);
-
 	percpu_ref_exit(&ref_node->refs);
 	kfree(ref_node);
 	percpu_ref_put(&file_data->refs);
···
static void io_file_data_ref_zero(struct percpu_ref *ref)
{
 	struct fixed_file_ref_node *ref_node;
+	struct fixed_file_data *data;
 	struct io_ring_ctx *ctx;
-	bool first_add;
+	bool first_add = false;
 	int delay = HZ;
 
 	ref_node = container_of(ref, struct fixed_file_ref_node, refs);
-	ctx = ref_node->file_data->ctx;
+	data = ref_node->file_data;
+	ctx = data->ctx;
 
-	if (percpu_ref_is_dying(&ctx->file_data->refs))
+	spin_lock(&data->lock);
+	ref_node->done = true;
+
+	while (!list_empty(&data->ref_list)) {
+		ref_node = list_first_entry(&data->ref_list,
+					    struct fixed_file_ref_node, node);
+		/* recycle ref nodes in order */
+		if (!ref_node->done)
+			break;
+		list_del(&ref_node->node);
+		first_add |= llist_add(&ref_node->llist, &ctx->file_put_llist);
+	}
+	spin_unlock(&data->lock);
+
+	if (percpu_ref_is_dying(&data->refs))
 		delay = 0;
 
-	first_add = llist_add(&ref_node->llist, &ctx->file_put_llist);
 	if (!delay)
 		mod_delayed_work(system_wq, &ctx->file_put_work, 0);
 	else if (first_add)
···
 	INIT_LIST_HEAD(&ref_node->node);
 	INIT_LIST_HEAD(&ref_node->file_list);
 	ref_node->file_data = ctx->file_data;
+	ref_node->done = false;
 	return ref_node;
}
···
 
 	file_data->node = ref_node;
 	spin_lock(&file_data->lock);
-	list_add(&ref_node->node, &file_data->ref_list);
+	list_add_tail(&ref_node->node, &file_data->ref_list);
 	spin_unlock(&file_data->lock);
 	percpu_ref_get(&file_data->refs);
 	return ret;
···
 	if (needs_switch) {
 		percpu_ref_kill(&data->node->refs);
 		spin_lock(&data->lock);
-		list_add(&ref_node->node, &data->ref_list);
+		list_add_tail(&ref_node->node, &data->ref_list);
 		data->node = ref_node;
 		spin_unlock(&data->lock);
 		percpu_ref_get(&ctx->file_data->refs);
···
 		 * to a power-of-two, if it isn't already. We do NOT impose
 		 * any cq vs sq ring sizing.
 		 */
-		p->cq_entries = roundup_pow_of_two(p->cq_entries);
-		if (p->cq_entries < p->sq_entries)
+		if (!p->cq_entries)
 			return -EINVAL;
 		if (p->cq_entries > IORING_MAX_CQ_ENTRIES) {
 			if (!(p->flags & IORING_SETUP_CLAMP))
 				return -EINVAL;
 			p->cq_entries = IORING_MAX_CQ_ENTRIES;
 		}
+		p->cq_entries = roundup_pow_of_two(p->cq_entries);
+		if (p->cq_entries < p->sq_entries)
+			return -EINVAL;
 	} else {
 		p->cq_entries = 2 * p->sq_entries;
 	}
+18-16
fs/jbd2/journal.c
@@ -566 +566 @@
 }
 
 /**
- * Force and wait upon a commit if the calling process is not within
- * transaction. This is used for forcing out undo-protected data which contains
- * bitmaps, when the fs is running out of space.
+ * jbd2_journal_force_commit_nested - Force and wait upon a commit if the
+ * calling process is not within transaction.
  *
  * @journal: journal to force
  * Returns true if progress was made.
+ *
+ * This is used for forcing out undo-protected data which contains
+ * bitmaps, when the fs is running out of space.
  */
 int jbd2_journal_force_commit_nested(journal_t *journal)
 {

@@ -584 +582 @@
 }
 
 /**
- * int journal_force_commit() - force any uncommitted transactions
+ * jbd2_journal_force_commit() - force any uncommitted transactions
  * @journal: journal to force
  *
  * Caller want unconditional commit. We can only force the running transaction

@@ -1885 +1883 @@
 /**
- * int jbd2_journal_load() - Read journal from disk.
+ * jbd2_journal_load() - Read journal from disk.
  * @journal: Journal to act on.
  *
  * Given a journal_t structure which tells us which disk blocks contain

@@ -1953 +1951 @@
 }
 
 /**
- * void jbd2_journal_destroy() - Release a journal_t structure.
+ * jbd2_journal_destroy() - Release a journal_t structure.
  * @journal: Journal to act on.
  *
  * Release a journal_t structure once it is no longer in use by the

@@ -2032 +2030 @@
 /**
- *int jbd2_journal_check_used_features() - Check if features specified are used.
+ * jbd2_journal_check_used_features() - Check if features specified are used.
  * @journal: Journal to check.
  * @compat: bitmask of compatible features
  * @ro: bitmask of features that force read-only mount

@@ -2065 +2063 @@
 }
 
 /**
- * int jbd2_journal_check_available_features() - Check feature set in journalling layer
+ * jbd2_journal_check_available_features() - Check feature set in journalling layer
  * @journal: Journal to check.
  * @compat: bitmask of compatible features
  * @ro: bitmask of features that force read-only mount

@@ -2128 +2126 @@
 }
 
 /**
- * int jbd2_journal_set_features() - Mark a given journal feature in the superblock
+ * jbd2_journal_set_features() - Mark a given journal feature in the superblock
  * @journal: Journal to act on.
  * @compat: bitmask of compatible features
  * @ro: bitmask of features that force read-only mount

@@ -2219 +2217 @@
 }
 
 /*
- * jbd2_journal_clear_features () - Clear a given journal feature in the
+ * jbd2_journal_clear_features() - Clear a given journal feature in the
  * superblock
  * @journal: Journal to act on.
  * @compat: bitmask of compatible features

@@ -2248 +2246 @@
 EXPORT_SYMBOL(jbd2_journal_clear_features);
 
 /**
- * int jbd2_journal_flush () - Flush journal
+ * jbd2_journal_flush() - Flush journal
  * @journal: Journal to act on.
  *
  * Flush all data for a given journal to disk and empty the journal.

@@ -2323 +2321 @@
 }
 
 /**
- * int jbd2_journal_wipe() - Wipe journal contents
+ * jbd2_journal_wipe() - Wipe journal contents
  * @journal: Journal to act on.
  * @write: flag (see below)
  *

@@ -2364 +2362 @@
 }
 
 /**
- * void jbd2_journal_abort () - Shutdown the journal immediately.
+ * jbd2_journal_abort () - Shutdown the journal immediately.
  * @journal: the journal to shutdown.
  * @errno: an error number to record in the journal indicating
  *         the reason for the shutdown.

@@ -2455 +2453 @@
 }
 
 /**
- * int jbd2_journal_errno () - returns the journal's error state.
+ * jbd2_journal_errno() - returns the journal's error state.
  * @journal: journal to examine.
  *
  * This is the errno number set with jbd2_journal_abort(), the last

@@ -2479 +2477 @@
 }
 
 /**
- * int jbd2_journal_clear_err () - clears the journal's error state
+ * jbd2_journal_clear_err() - clears the journal's error state
  * @journal: journal to act on.
  *
  * An error must be cleared or acked to take a FS out of readonly

@@ -2499 +2497 @@
 }
 
 /**
- * void jbd2_journal_ack_err() - Ack journal err.
+ * jbd2_journal_ack_err() - Ack journal err.
  * @journal: journal to act on.
  *
  * An error must be cleared or acked to take a FS out of readonly
+16-15
fs/jbd2/transaction.c
@@ -521 +521 @@
 /**
- * handle_t *jbd2_journal_start() - Obtain a new handle.
+ * jbd2_journal_start() - Obtain a new handle.
  * @journal: Journal to start transaction on.
  * @nblocks: number of block buffer we might modify
  *

@@ -566 +566 @@
 EXPORT_SYMBOL(jbd2_journal_free_reserved);
 
 /**
- * int jbd2_journal_start_reserved() - start reserved handle
+ * jbd2_journal_start_reserved() - start reserved handle
  * @handle: handle to start
  * @type: for handle statistics
  * @line_no: for handle statistics

@@ -620 +620 @@
 EXPORT_SYMBOL(jbd2_journal_start_reserved);
 
 /**
- * int jbd2_journal_extend() - extend buffer credits.
+ * jbd2_journal_extend() - extend buffer credits.
  * @handle:  handle to 'extend'
  * @nblocks: nr blocks to try to extend by.
  * @revoke_records: number of revoke records to try to extend by.

@@ -745 +745 @@
 }
 
 /**
- * int jbd2_journal_restart() - restart a handle .
+ * jbd2__journal_restart() - restart a handle .
  * @handle:  handle to restart
  * @nblocks: nr credits requested
  * @revoke_records: number of revoke record credits requested

@@ -815 +815 @@
 EXPORT_SYMBOL(jbd2_journal_restart);
 
 /**
- * void jbd2_journal_lock_updates () - establish a transaction barrier.
+ * jbd2_journal_lock_updates () - establish a transaction barrier.
  * @journal:  Journal to establish a barrier on.
  *
  * This locks out any further updates from being started, and blocks

@@ -874 +874 @@
 }
 
 /**
- * void jbd2_journal_unlock_updates (journal_t* journal) - release barrier
+ * jbd2_journal_unlock_updates () - release barrier
  * @journal:  Journal to release the barrier on.
  *
  * Release a transaction barrier obtained with jbd2_journal_lock_updates().

@@ -1182 +1182 @@
 }
 
 /**
- * int jbd2_journal_get_write_access() - notify intent to modify a buffer for metadata (not data) update.
+ * jbd2_journal_get_write_access() - notify intent to modify a buffer
+ *				     for metadata (not data) update.
  * @handle: transaction to add buffer modifications to
  * @bh:     bh to be used for metadata writes
  *

@@ -1227 +1226 @@
  * unlocked buffer beforehand. */
 
 /**
- * int jbd2_journal_get_create_access () - notify intent to use newly created bh
+ * jbd2_journal_get_create_access () - notify intent to use newly created bh
  * @handle: transaction to new buffer to
  * @bh: new buffer.
  *

@@ -1307 +1306 @@
 }
 
 /**
- * int jbd2_journal_get_undo_access() -  Notify intent to modify metadata with
+ * jbd2_journal_get_undo_access() -  Notify intent to modify metadata with
  *     non-rewindable consequences
  * @handle: transaction
  * @bh: buffer to undo

@@ -1384 +1383 @@
 }
 
 /**
- * void jbd2_journal_set_triggers() - Add triggers for commit writeout
+ * jbd2_journal_set_triggers() - Add triggers for commit writeout
  * @bh: buffer to trigger on
  * @type: struct jbd2_buffer_trigger_type containing the trigger(s).
  *

@@ -1426 +1425 @@
 }
 
 /**
- * int jbd2_journal_dirty_metadata() -  mark a buffer as containing dirty metadata
+ * jbd2_journal_dirty_metadata() -  mark a buffer as containing dirty metadata
  * @handle: transaction to add buffer to.
  * @bh: buffer to mark
  *

@@ -1594 +1593 @@
 }
 
 /**
- * void jbd2_journal_forget() - bforget() for potentially-journaled buffers.
+ * jbd2_journal_forget() - bforget() for potentially-journaled buffers.
  * @handle: transaction handle
  * @bh:     bh to 'forget'
  *

@@ -1763 +1762 @@
 }
 
 /**
- * int jbd2_journal_stop() - complete a transaction
+ * jbd2_journal_stop() - complete a transaction
  * @handle: transaction to complete.
  *
  * All done for a particular handle.

@@ -2081 +2080 @@
 }
 
 /**
- * int jbd2_journal_try_to_free_buffers() - try to free page buffers.
+ * jbd2_journal_try_to_free_buffers() - try to free page buffers.
  * @journal: journal for operation
  * @page: to try and free
  *

@@ -2412 +2411 @@
 }
 
 /**
- * void jbd2_journal_invalidatepage()
+ * jbd2_journal_invalidatepage()
  * @journal: journal to use for flush...
  * @page:    page to flush
  * @offset:  start of the range to invalidate
+4-2
fs/libfs.c
@@ -959 +959 @@
 			  size_t len, loff_t *ppos)
 {
 	struct simple_attr *attr;
-	u64 val;
+	unsigned long long val;
 	size_t size;
 	ssize_t ret;

@@ -977 +977 @@
 		goto out;
 
 	attr->set_buf[size] = '\0';
-	val = simple_strtoll(attr->set_buf, NULL, 0);
+	ret = kstrtoull(attr->set_buf, 0, &val);
+	if (ret)
+		goto out;
 	ret = attr->set(attr->data, val);
 	if (ret == 0)
 		ret = len; /* on success, claim we got the whole input */
+7-5
fs/notify/fsnotify.c
@@ -178 +178 @@
 	struct inode *inode = d_inode(dentry);
 	struct dentry *parent;
 	bool parent_watched = dentry->d_flags & DCACHE_FSNOTIFY_PARENT_WATCHED;
+	bool parent_needed, parent_interested;
 	__u32 p_mask;
 	struct inode *p_inode = NULL;
 	struct name_snapshot name;

@@ -194 +193 @@
 		return 0;
 
 	parent = NULL;
-	if (!parent_watched && !fsnotify_event_needs_parent(inode, mnt, mask))
+	parent_needed = fsnotify_event_needs_parent(inode, mnt, mask);
+	if (!parent_watched && !parent_needed)
 		goto notify;
 
 	/* Does parent inode care about events on children? */

@@ -207 +205 @@
 
 	/*
 	 * Include parent/name in notification either if some notification
-	 * groups require parent info (!parent_watched case) or the parent is
-	 * interested in this event.
+	 * groups require parent info or the parent is interested in this event.
 	 */
-	if (!parent_watched || (mask & p_mask & ALL_FSNOTIFY_EVENTS)) {
+	parent_interested = mask & p_mask & ALL_FSNOTIFY_EVENTS;
+	if (parent_needed || parent_interested) {
 		/* When notifying parent, child should be passed as data */
 		WARN_ON_ONCE(inode != fsnotify_data_inode(data, data_type));
 
 		/* Notify both parent and child with child name info */
 		take_dentry_name_snapshot(&name, dentry);
 		file_name = &name.name;
-		if (parent_watched)
+		if (parent_interested)
 			mask |= FS_EVENT_ON_CHILD;
 	}
+7
fs/proc/self.c
@@ -16 +16 @@
 	pid_t tgid = task_tgid_nr_ns(current, ns);
 	char *name;
 
+	/*
+	 * Not currently supported. Once we can inherit all of struct pid,
+	 * we can allow this.
+	 */
+	if (current->flags & PF_KTHREAD)
+		return ERR_PTR(-EOPNOTSUPP);
+
 	if (!tgid)
 		return ERR_PTR(-ENOENT);
 	/* max length of unsigned int in decimal + NULL term */
+7-1
fs/xfs/libxfs/xfs_attr_leaf.c
@@ -515 +515 @@
  *========================================================================*/
 
 /*
- * Query whether the requested number of additional bytes of extended
+ * Query whether the total requested number of attr fork bytes of extended
  * attribute space will be able to fit inline.
  *
  * Returns zero if not, else the di_forkoff fork offset to be used in the

@@ -534 +534 @@
 	int			minforkoff;
 	int			maxforkoff;
 	int			offset;
+
+	/*
+	 * Check if the new size could fit at all first:
+	 */
+	if (bytes > XFS_LITINO(mp))
+		return 0;
 
 	/* rounded down */
 	offset = (XFS_LITINO(mp) - bytes) >> 3;
+8-8
fs/xfs/libxfs/xfs_rmap_btree.c
@@ -243 +243 @@
 	else if (y > x)
 		return -1;
 
-	x = be64_to_cpu(kp->rm_offset);
-	y = xfs_rmap_irec_offset_pack(rec);
+	x = XFS_RMAP_OFF(be64_to_cpu(kp->rm_offset));
+	y = rec->rm_offset;
 	if (x > y)
 		return 1;
 	else if (y > x)

@@ -275 +275 @@
 	else if (y > x)
 		return -1;
 
-	x = be64_to_cpu(kp1->rm_offset);
-	y = be64_to_cpu(kp2->rm_offset);
+	x = XFS_RMAP_OFF(be64_to_cpu(kp1->rm_offset));
+	y = XFS_RMAP_OFF(be64_to_cpu(kp2->rm_offset));
 	if (x > y)
 		return 1;
 	else if (y > x)

@@ -390 +390 @@
 		return 1;
 	else if (a > b)
 		return 0;
-	a = be64_to_cpu(k1->rmap.rm_offset);
-	b = be64_to_cpu(k2->rmap.rm_offset);
+	a = XFS_RMAP_OFF(be64_to_cpu(k1->rmap.rm_offset));
+	b = XFS_RMAP_OFF(be64_to_cpu(k2->rmap.rm_offset));
 	if (a <= b)
 		return 1;
 	return 0;

@@ -420 +420 @@
 		return 1;
 	else if (a > b)
 		return 0;
-	a = be64_to_cpu(r1->rmap.rm_offset);
-	b = be64_to_cpu(r2->rmap.rm_offset);
+	a = XFS_RMAP_OFF(be64_to_cpu(r1->rmap.rm_offset));
+	b = XFS_RMAP_OFF(be64_to_cpu(r2->rmap.rm_offset));
 	if (a <= b)
 		return 1;
 	return 0;
fs/xfs/scrub/btree.c

@@ -452 +452 @@
 	int			level,
 	struct xfs_btree_block	*block)
 {
-	unsigned int		numrecs;
-	int			ok_level;
-
-	numrecs = be16_to_cpu(block->bb_numrecs);
+	struct xfs_btree_cur	*cur = bs->cur;
+	unsigned int		root_level = cur->bc_nlevels - 1;
+	unsigned int		numrecs = be16_to_cpu(block->bb_numrecs);
 
 	/* More records than minrecs means the block is ok. */
-	if (numrecs >= bs->cur->bc_ops->get_minrecs(bs->cur, level))
+	if (numrecs >= cur->bc_ops->get_minrecs(cur, level))
 		return;
 
 	/*
-	 * Certain btree blocks /can/ have fewer than minrecs records.  Any
-	 * level greater than or equal to the level of the highest dedicated
-	 * btree block are allowed to violate this constraint.
-	 *
-	 * For a btree rooted in a block, the btree root can have fewer than
-	 * minrecs records.  If the btree is rooted in an inode and does not
-	 * store records in the root, the direct children of the root and the
-	 * root itself can have fewer than minrecs records.
+	 * For btrees rooted in the inode, it's possible that the root block
+	 * contents spilled into a regular ondisk block because there wasn't
+	 * enough space in the inode root.  The number of records in that
+	 * child block might be less than the standard minrecs, but that's ok
+	 * provided that there's only one direct child of the root.
 	 */
-	ok_level = bs->cur->bc_nlevels - 1;
-	if (bs->cur->bc_flags & XFS_BTREE_ROOT_IN_INODE)
-		ok_level--;
-	if (level >= ok_level)
-		return;
+	if ((cur->bc_flags & XFS_BTREE_ROOT_IN_INODE) &&
+	    level == cur->bc_nlevels - 2) {
+		struct xfs_btree_block	*root_block;
+		struct xfs_buf		*root_bp;
+		int			root_maxrecs;
 
-	xchk_btree_set_corrupt(bs->sc, bs->cur, level);
+		root_block = xfs_btree_get_block(cur, root_level, &root_bp);
+		root_maxrecs = cur->bc_ops->get_dmaxrecs(cur, root_level);
+		if (be16_to_cpu(root_block->bb_numrecs) != 1 ||
+		    numrecs <= root_maxrecs)
+			xchk_btree_set_corrupt(bs->sc, cur, level);
+		return;
+	}
+
+	/*
+	 * Otherwise, only the root level is allowed to have fewer than minrecs
+	 * records or keyptrs.
+	 */
+	if (level < root_level)
+		xchk_btree_set_corrupt(bs->sc, cur, level);
 }
 
 /*
+17-4
fs/xfs/scrub/dir.c
@@ -558 +558 @@
 	/* Check all the bestfree entries. */
 	for (i = 0; i < bestcount; i++, bestp++) {
 		best = be16_to_cpu(*bestp);
-		if (best == NULLDATAOFF)
-			continue;
 		error = xfs_dir3_data_read(sc->tp, sc->ip,
-				i * args->geo->fsbcount, 0, &dbp);
+				xfs_dir2_db_to_da(args->geo, i),
+				XFS_DABUF_MAP_HOLE_OK,
+				&dbp);
 		if (!xchk_fblock_process_error(sc, XFS_DATA_FORK, lblk,
 				&error))
 			break;
-		xchk_directory_check_freesp(sc, lblk, dbp, best);
+
+		if (!dbp) {
+			if (best != NULLDATAOFF) {
+				xchk_fblock_set_corrupt(sc, XFS_DATA_FORK,
+						lblk);
+				break;
+			}
+			continue;
+		}
+
+		if (best == NULLDATAOFF)
+			xchk_fblock_set_corrupt(sc, XFS_DATA_FORK, lblk);
+		else
+			xchk_directory_check_freesp(sc, lblk, dbp, best);
 		xfs_trans_brelse(sc->tp, dbp);
 		if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
 			break;
+29
fs/xfs/xfs_iomap.c
@@ -706 +706 @@
 	return 0;
 }
 
+/*
+ * Check that the imap we are going to return to the caller spans the entire
+ * range that the caller requested for the IO.
+ */
+static bool
+imap_spans_range(
+	struct xfs_bmbt_irec	*imap,
+	xfs_fileoff_t		offset_fsb,
+	xfs_fileoff_t		end_fsb)
+{
+	if (imap->br_startoff > offset_fsb)
+		return false;
+	if (imap->br_startoff + imap->br_blockcount < end_fsb)
+		return false;
+	return true;
+}
+
 static int
 xfs_direct_write_iomap_begin(
 	struct inode		*inode,

@@ -782 +765 @@
 
 	if (imap_needs_alloc(inode, flags, &imap, nimaps))
 		goto allocate_blocks;
+
+	/*
+	 * NOWAIT IO needs to span the entire requested IO with a single map so
+	 * that we avoid partial IO failures due to the rest of the IO range not
+	 * covered by this map triggering an EAGAIN condition when it is
+	 * subsequently mapped and aborting the IO.
+	 */
+	if ((flags & IOMAP_NOWAIT) &&
+	    !imap_spans_range(&imap, offset_fsb, end_fsb)) {
+		error = -EAGAIN;
+		goto out_unlock;
+	}
 
 	xfs_iunlock(ip, lockmode);
 	trace_xfs_iomap_found(ip, offset, length, XFS_DATA_FORK, &imap);
+24-3
fs/xfs/xfs_iwalk.c
@@ -55 +55 @@
 	/* Where do we start the traversal? */
 	xfs_ino_t			startino;
 
+	/* What was the last inode number we saw when iterating the inobt? */
+	xfs_ino_t			lastino;
+
 	/* Array of inobt records we cache. */
 	struct xfs_inobt_rec_incore	*recs;

@@ -304 +301 @@
 	if (XFS_IS_CORRUPT(mp, *has_more != 1))
 		return -EFSCORRUPTED;
 
+	iwag->lastino = XFS_AGINO_TO_INO(mp, agno,
+				irec->ir_startino + XFS_INODES_PER_CHUNK - 1);
+
 	/*
 	 * If the LE lookup yielded an inobt record before the cursor position,
 	 * skip it and see if there's another one after it.

@@ -353 +347 @@
 	struct xfs_mount		*mp = iwag->mp;
 	struct xfs_trans		*tp = iwag->tp;
 	struct xfs_inobt_rec_incore	*irec;
-	xfs_agino_t			restart;
+	xfs_agino_t			next_agino;
 	int				error;
+
+	next_agino = XFS_INO_TO_AGINO(mp, iwag->lastino) + 1;
 
 	ASSERT(iwag->nr_recs > 0);
 
 	/* Delete cursor but remember the last record we cached... */
 	xfs_iwalk_del_inobt(tp, curpp, agi_bpp, 0);
 	irec = &iwag->recs[iwag->nr_recs - 1];
-	restart = irec->ir_startino + XFS_INODES_PER_CHUNK - 1;
+	ASSERT(next_agino == irec->ir_startino + XFS_INODES_PER_CHUNK);
 
 	error = xfs_iwalk_ag_recs(iwag);
 	if (error)

@@ -380 +372 @@
 	if (error)
 		return error;
 
-	return xfs_inobt_lookup(*curpp, restart, XFS_LOOKUP_GE, has_more);
+	return xfs_inobt_lookup(*curpp, next_agino, XFS_LOOKUP_GE, has_more);
 }
 
 /* Walk all inodes in a single AG, from @iwag->startino to the end of the AG. */

@@ -404 +396 @@
 
 	while (!error && has_more) {
 		struct xfs_inobt_rec_incore	*irec;
+		xfs_ino_t			rec_fsino;
 
 		cond_resched();
 		if (xfs_pwork_want_abort(&iwag->pwork))

@@ -415 +406 @@
 		error = xfs_inobt_get_rec(cur, irec, &has_more);
 		if (error || !has_more)
 			break;
+
+		/* Make sure that we always move forward. */
+		rec_fsino = XFS_AGINO_TO_INO(mp, agno, irec->ir_startino);
+		if (iwag->lastino != NULLFSINO &&
+		    XFS_IS_CORRUPT(mp, iwag->lastino >= rec_fsino)) {
+			error = -EFSCORRUPTED;
+			goto out;
+		}
+		iwag->lastino = rec_fsino + XFS_INODES_PER_CHUNK - 1;
 
 		/* No allocated inodes in this chunk; skip it. */
 		if (iwag->skip_empty && irec->ir_freecount == irec->ir_count) {

@@ -553 +535 @@
 		.trim_start	= 1,
 		.skip_empty	= 1,
 		.pwork		= XFS_PWORK_SINGLE_THREADED,
+		.lastino	= NULLFSINO,
 	};
 	xfs_agnumber_t		agno = XFS_INO_TO_AGNO(mp, startino);
 	int			error;

@@ -642 +623 @@
 		iwag->data = data;
 		iwag->startino = startino;
 		iwag->sz_recs = xfs_iwalk_prefetch(inode_records);
+		iwag->lastino = NULLFSINO;
 		xfs_pwork_queue(&pctl, &iwag->pwork);
 		startino = XFS_AGINO_TO_INO(mp, agno + 1, 0);
 		if (flags & XFS_INOBT_WALK_SAME_AG)

@@ -716 +696 @@
 		.startino	= startino,
 		.sz_recs	= xfs_inobt_walk_prefetch(inobt_records),
 		.pwork		= XFS_PWORK_SINGLE_THREADED,
+		.lastino	= NULLFSINO,
 	};
 	xfs_agnumber_t		agno = XFS_INO_TO_AGNO(mp, startino);
 	int			error;
+8-3
fs/xfs/xfs_mount.c
@@ -194 +194 @@
 	}
 
 	pag = kmem_zalloc(sizeof(*pag), KM_MAYFAIL);
-	if (!pag)
+	if (!pag) {
+		error = -ENOMEM;
 		goto out_unwind_new_pags;
+	}
 	pag->pag_agno = index;
 	pag->pag_mount = mp;
 	spin_lock_init(&pag->pag_ici_lock);
 	INIT_RADIX_TREE(&pag->pag_ici_root, GFP_ATOMIC);
-	if (xfs_buf_hash_init(pag))
+
+	error = xfs_buf_hash_init(pag);
+	if (error)
 		goto out_free_pag;
 	init_waitqueue_head(&pag->pagb_wait);
 	spin_lock_init(&pag->pagb_lock);
 	pag->pagb_count = 0;
 	pag->pagb_tree = RB_ROOT;
 
-	if (radix_tree_preload(GFP_NOFS))
+	error = radix_tree_preload(GFP_NOFS);
+	if (error)
 		goto out_hash_destroy;
 
 	spin_lock(&mp->m_perag_lock);
+2
include/linux/compiler-clang.h
@@ -8 +8 @@
 	     + __clang_patchlevel__)
 
 #if CLANG_VERSION < 100001
+#ifndef __BPF_TRACING__
 # error Sorry, your version of Clang is too old - please use 10.0.1 or newer.
+#endif
 #endif
 
 /* Compiler specific definitions for Clang compiler */
include/linux/intel-iommu.h

@@ -798 +798 @@
 extern int iommu_calculate_max_sagaw(struct intel_iommu *iommu);
 extern int dmar_disabled;
 extern int intel_iommu_enabled;
-extern int intel_iommu_tboot_noforce;
 extern int intel_iommu_gfx_mapped;
 #else
 static inline int iommu_calculate_agaw(struct intel_iommu *iommu)
+1-1
include/linux/jbd2.h
@@ -401 +401 @@
 #define JI_WAIT_DATA (1 << __JI_WAIT_DATA)
 
 /**
- * struct jbd_inode - The jbd_inode type is the structure linking inodes in
+ * struct jbd2_inode - The jbd_inode type is the structure linking inodes in
  * ordered mode present in a transaction so that we can sync them during commit.
  */
 struct jbd2_inode {
+14-14
include/linux/memcontrol.h
@@ -282 +282 @@
 
 	MEMCG_PADDING(_pad1_);
 
-	/*
-	 * set > 0 if pages under this cgroup are moving to other cgroup.
-	 */
-	atomic_t		moving_account;
-	struct task_struct	*move_lock_task;
-
-	/* Legacy local VM stats and events */
-	struct memcg_vmstats_percpu __percpu *vmstats_local;
-
-	/* Subtree VM stats and events (batched updates) */
-	struct memcg_vmstats_percpu __percpu *vmstats_percpu;
-
-	MEMCG_PADDING(_pad2_);
-
 	atomic_long_t		vmstats[MEMCG_NR_STAT];
 	atomic_long_t		vmevents[NR_VM_EVENT_ITEMS];

@@ -302 +316 @@
 	struct obj_cgroup __rcu *objcg;
 	struct list_head objcg_list; /* list of inherited objcgs */
 #endif
+
+	MEMCG_PADDING(_pad2_);
+
+	/*
+	 * set > 0 if pages under this cgroup are moving to other cgroup.
+	 */
+	atomic_t		moving_account;
+	struct task_struct	*move_lock_task;
+
+	/* Legacy local VM stats and events */
+	struct memcg_vmstats_percpu __percpu *vmstats_local;
+
+	/* Subtree VM stats and events (batched updates) */
+	struct memcg_vmstats_percpu __percpu *vmstats_percpu;
 
 #ifdef CONFIG_CGROUP_WRITEBACK
 	struct list_head cgwb_list;
-14
include/linux/memory_hotplug.h
@@ -281 +281 @@
 }
 #endif /* ! CONFIG_MEMORY_HOTPLUG */
 
-#ifdef CONFIG_NUMA
-extern int memory_add_physaddr_to_nid(u64 start);
-extern int phys_to_target_node(u64 start);
-#else
-static inline int memory_add_physaddr_to_nid(u64 start)
-{
-	return 0;
-}
-static inline int phys_to_target_node(u64 start)
-{
-	return 0;
-}
-#endif
-
 #if defined(CONFIG_MEMORY_HOTPLUG) || defined(CONFIG_DEFERRED_STRUCT_PAGE_INIT)
 /*
  * pgdat resizing functions
+5
include/linux/netdevice.h
@@ -3163 +3163 @@
 	return false;
 }
 
+static inline bool dev_has_header(const struct net_device *dev)
+{
+	return dev->header_ops && dev->header_ops->create;
+}
+
 typedef int gifconf_func_t(struct net_device * dev, char __user * bufptr,
 			   int len, int size);
 int register_gifconf(unsigned int family, gifconf_func_t *gifconf);
+29-1
include/linux/numa.h
@@ -21 +21 @@
 #endif
 
 #ifdef CONFIG_NUMA
+#include <linux/printk.h>
+#include <asm/sparsemem.h>
+
 /* Generic implementation available */
 int numa_map_to_online_node(int node);
-#else
+
+#ifndef memory_add_physaddr_to_nid
+static inline int memory_add_physaddr_to_nid(u64 start)
+{
+	pr_info_once("Unknown online node for memory at 0x%llx, assuming node 0\n",
+			start);
+	return 0;
+}
+#endif
+#ifndef phys_to_target_node
+static inline int phys_to_target_node(u64 start)
+{
+	pr_info_once("Unknown target node for memory at 0x%llx, assuming node 0\n",
+			start);
+	return 0;
+}
+#endif
+#else /* !CONFIG_NUMA */
 static inline int numa_map_to_online_node(int node)
 {
 	return NUMA_NO_NODE;
+}
+static inline int memory_add_physaddr_to_nid(u64 start)
+{
+	return 0;
+}
+static inline int phys_to_target_node(u64 start)
+{
+	return 0;
 }
 #endif
include/linux/sched.h

@@ -552 +552 @@
 	 * overruns.
 	 */
 	unsigned int			dl_throttled      : 1;
-	unsigned int			dl_boosted        : 1;
 	unsigned int			dl_yielded        : 1;
 	unsigned int			dl_non_contending : 1;
 	unsigned int			dl_overrun	  : 1;

@@ -570 +571 @@
 	 * time.
 	 */
 	struct hrtimer inactive_timer;
+
+#ifdef CONFIG_RT_MUTEXES
+	/*
+	 * Priority Inheritance. When a DEADLINE scheduling entity is boosted
+	 * pi_se points to the donor, otherwise points to the dl_se it belongs
+	 * to (the original one/itself).
+	 */
+	struct sched_dl_entity *pi_se;
+#endif
 };
 
 #ifdef CONFIG_UCLAMP_TASK

@@ -778 +770 @@
 	unsigned			sched_reset_on_fork:1;
 	unsigned			sched_contributes_to_load:1;
 	unsigned			sched_migrated:1;
-	unsigned			sched_remote_wakeup:1;
 #ifdef CONFIG_PSI
 	unsigned			sched_psi_wake_requeue:1;
 #endif

@@ -786 +779 @@
 	unsigned			:0;
 
 	/* Unserialized, strictly 'current' */
+
+	/*
+	 * This field must not be in the scheduler word above due to wakelist
+	 * queueing no longer being serialized by p->on_cpu. However:
+	 *
+	 * p->XXX = X;			ttwu()
+	 * schedule()			  if (p->on_rq && ..) // false
+	 *   smp_mb__after_spinlock();	  if (smp_load_acquire(&p->on_cpu) && //true
+	 *   deactivate_task()		     ttwu_queue_wakelist())
+	 *     p->on_rq = 0;			p->sched_remote_wakeup = Y;
+	 *
+	 * guarantees all stores of 'current' are visible before
+	 * ->sched_remote_wakeup gets used, so it can be in this word.
+	 */
+	unsigned			sched_remote_wakeup:1;
 
 	/* Bit to tell LSMs we're in execve(): */
 	unsigned			in_execve:1;
include/net/tls.h

@@ -199 +199 @@
 	 * to be atomic.
	 */
 	TLS_TX_SYNC_SCHED = 1,
+	/* tls_dev_del was called for the RX side, device state was released,
+	 * but tls_ctx->netdev might still be kept, because TX-side driver
+	 * resources might not be released yet. Used to prevent the second
+	 * tls_dev_del call in tls_device_down if it happens simultaneously.
+	 */
+	TLS_RX_DEV_CLOSED = 2,
 };
 
 struct cipher_context {
init/Kconfig

@@ -719 +719 @@
 	  with more CPUs. Therefore this value is used only when the sum of
 	  contributions is greater than the half of the default kernel ring
 	  buffer as defined by LOG_BUF_SHIFT. The default values are set
-	  so that more than 64 CPUs are needed to trigger the allocation.
+	  so that more than 16 CPUs are needed to trigger the allocation.
 
 	  Also this option is ignored when "log_buf_len" kernel parameter is
 	  used as it forces an exact (power of two) size of the ring buffer.
kernel/printk/printk.c

@@ -528 +528 @@
 	if (dev_info)
 		memcpy(&r.info->dev_info, dev_info, sizeof(r.info->dev_info));
 
-	/* insert message */
-	if ((flags & LOG_CONT) || !(flags & LOG_NEWLINE))
+	/* A message without a trailing newline can be continued. */
+	if (!(flags & LOG_NEWLINE))
 		prb_commit(&e);
 	else
 		prb_final_commit(&e);
kernel/ptrace.c

@@ -264 +264 @@
 	return ret;
 }
 
-static bool ptrace_has_cap(const struct cred *cred, struct user_namespace *ns,
-			   unsigned int mode)
+static bool ptrace_has_cap(struct user_namespace *ns, unsigned int mode)
 {
-	int ret;
-
 	if (mode & PTRACE_MODE_NOAUDIT)
-		ret = security_capable(cred, ns, CAP_SYS_PTRACE, CAP_OPT_NOAUDIT);
-	else
-		ret = security_capable(cred, ns, CAP_SYS_PTRACE, CAP_OPT_NONE);
-
-	return ret == 0;
+		return ns_capable_noaudit(ns, CAP_SYS_PTRACE);
+	return ns_capable(ns, CAP_SYS_PTRACE);
 }
 
 /* Returns 0 on success, -errno on denial. */

@@ -320 +326 @@
 	    gid_eq(caller_gid, tcred->sgid) &&
 	    gid_eq(caller_gid, tcred->gid))
 		goto ok;
-	if (ptrace_has_cap(cred, tcred->user_ns, mode))
+	if (ptrace_has_cap(tcred->user_ns, mode))
 		goto ok;
 	rcu_read_unlock();
 	return -EPERM;

@@ -339 +345 @@
 	mm = task->mm;
 	if (mm &&
 	    ((get_dumpable(mm) != SUID_DUMP_USER) &&
-	     !ptrace_has_cap(cred, mm->user_ns, mode)))
+	     !ptrace_has_cap(mm->user_ns, mode)))
 		return -EPERM;
 
 	return security_ptrace_access_check(task, mode);
+16-10
kernel/sched/core.c
@@ -2501 +2501 @@
 #ifdef CONFIG_SMP
 	if (wake_flags & WF_MIGRATED)
 		en_flags |= ENQUEUE_MIGRATED;
+	else
 #endif
+	if (p->in_iowait) {
+		delayacct_blkio_end(p);
+		atomic_dec(&task_rq(p)->nr_iowait);
+	}
 
 	activate_task(rq, p, en_flags);
 	ttwu_do_wakeup(rq, p, wake_flags, rf);

@@ -2893 +2888 @@
 	if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
 		goto unlock;
 
-	if (p->in_iowait) {
-		delayacct_blkio_end(p);
-		atomic_dec(&task_rq(p)->nr_iowait);
-	}
-
 #ifdef CONFIG_SMP
 	/*
 	 * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be

@@ -2963 +2963 @@
 
 	cpu = select_task_rq(p, p->wake_cpu, SD_BALANCE_WAKE, wake_flags);
 	if (task_cpu(p) != cpu) {
+		if (p->in_iowait) {
+			delayacct_blkio_end(p);
+			atomic_dec(&task_rq(p)->nr_iowait);
+		}
+
 		wake_flags |= WF_MIGRATED;
 		psi_ttwu_dequeue(p);
 		set_task_cpu(p, cpu);

@@ -4912 +4907 @@
 		if (!dl_prio(p->normal_prio) ||
 		    (pi_task && dl_prio(pi_task->prio) &&
 		     dl_entity_preempt(&pi_task->dl, &p->dl))) {
-			p->dl.dl_boosted = 1;
+			p->dl.pi_se = pi_task->dl.pi_se;
 			queue_flag |= ENQUEUE_REPLENISH;
-		} else
-			p->dl.dl_boosted = 0;
+		} else {
+			p->dl.pi_se = &p->dl;
+		}
 		p->sched_class = &dl_sched_class;
 	} else if (rt_prio(prio)) {
 		if (dl_prio(oldprio))
-			p->dl.dl_boosted = 0;
+			p->dl.pi_se = &p->dl;
 		if (oldprio < prio)
 			queue_flag |= ENQUEUE_HEAD;
 		p->sched_class = &rt_sched_class;
 	} else {
 		if (dl_prio(oldprio))
-			p->dl.dl_boosted = 0;
+			p->dl.pi_se = &p->dl;
 		if (rt_prio(oldprio))
 			p->rt.timeout = 0;
 		p->sched_class = &fair_sched_class;
+53-44
kernel/sched/deadline.c
@@ -43 +43 @@
 	return !RB_EMPTY_NODE(&dl_se->rb_node);
 }
 
+#ifdef CONFIG_RT_MUTEXES
+static inline struct sched_dl_entity *pi_of(struct sched_dl_entity *dl_se)
+{
+	return dl_se->pi_se;
+}
+
+static inline bool is_dl_boosted(struct sched_dl_entity *dl_se)
+{
+	return pi_of(dl_se) != dl_se;
+}
+#else
+static inline struct sched_dl_entity *pi_of(struct sched_dl_entity *dl_se)
+{
+	return dl_se;
+}
+
+static inline bool is_dl_boosted(struct sched_dl_entity *dl_se)
+{
+	return false;
+}
+#endif
+
 #ifdef CONFIG_SMP
 static inline struct dl_bw *dl_bw_of(int i)
 {

@@ -720 +698 @@
 	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
 	struct rq *rq = rq_of_dl_rq(dl_rq);
 
-	WARN_ON(dl_se->dl_boosted);
+	WARN_ON(is_dl_boosted(dl_se));
 	WARN_ON(dl_time_before(rq_clock(rq), dl_se->deadline));
 
 	/*

@@ -758 +736 @@
  * could happen are, typically, a entity voluntarily trying to overcome its
  * runtime, or it just underestimated it during sched_setattr().
  */
-static void replenish_dl_entity(struct sched_dl_entity *dl_se,
-				struct sched_dl_entity *pi_se)
+static void replenish_dl_entity(struct sched_dl_entity *dl_se)
 {
 	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
 	struct rq *rq = rq_of_dl_rq(dl_rq);
 
-	BUG_ON(pi_se->dl_runtime <= 0);
+	BUG_ON(pi_of(dl_se)->dl_runtime <= 0);
 
 	/*
 	 * This could be the case for a !-dl task that is boosted.
 	 * Just go with full inherited parameters.
 	 */
 	if (dl_se->dl_deadline == 0) {
-		dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
-		dl_se->runtime = pi_se->dl_runtime;
+		dl_se->deadline = rq_clock(rq) + pi_of(dl_se)->dl_deadline;
+		dl_se->runtime = pi_of(dl_se)->dl_runtime;
 	}
 
 	if (dl_se->dl_yielded && dl_se->runtime > 0)

@@ -784 +763 @@
 	 * arbitrary large.
 	 */
 	while (dl_se->runtime <= 0) {
-		dl_se->deadline += pi_se->dl_period;
-		dl_se->runtime += pi_se->dl_runtime;
+		dl_se->deadline += pi_of(dl_se)->dl_period;
+		dl_se->runtime += pi_of(dl_se)->dl_runtime;
 	}
 
 	/*

@@ -799 +778 @@
 	 */
 	if (dl_time_before(dl_se->deadline, rq_clock(rq))) {
 		printk_deferred_once("sched: DL replenish lagged too much\n");
-		dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
-		dl_se->runtime = pi_se->dl_runtime;
+		dl_se->deadline = rq_clock(rq) + pi_of(dl_se)->dl_deadline;
+		dl_se->runtime = pi_of(dl_se)->dl_runtime;
 	}
 
 	if (dl_se->dl_yielded)

@@ -833 +812 @@
  * task with deadline equal to period this is the same of using
  * dl_period instead of dl_deadline in the equation above.
  */
-static bool dl_entity_overflow(struct sched_dl_entity *dl_se,
-			       struct sched_dl_entity *pi_se, u64 t)
+static bool dl_entity_overflow(struct sched_dl_entity *dl_se, u64 t)
 {
 	u64 left, right;

@@ -855 +835 @@
 	 * of anything below microseconds resolution is actually fiction
 	 * (but still we want to give the user that illusion >;).
 	 */
-	left = (pi_se->dl_deadline >> DL_SCALE) * (dl_se->runtime >> DL_SCALE);
+	left = (pi_of(dl_se)->dl_deadline >> DL_SCALE) * (dl_se->runtime >> DL_SCALE);
 	right = ((dl_se->deadline - t) >> DL_SCALE) *
-		(pi_se->dl_runtime >> DL_SCALE);
+		(pi_of(dl_se)->dl_runtime >> DL_SCALE);
 
 	return dl_time_before(right, left);
 }

@@ -942 +922 @@
 * Please refer to the comments update_dl_revised_wakeup() function to find
 * more about the Revised CBS rule.
 */
-static void update_dl_entity(struct sched_dl_entity *dl_se,
-			     struct sched_dl_entity *pi_se)
+static void update_dl_entity(struct sched_dl_entity *dl_se)
 {
 	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
 	struct rq *rq = rq_of_dl_rq(dl_rq);
 
 	if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
-	    dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
+	    dl_entity_overflow(dl_se, rq_clock(rq))) {
 
 		if (unlikely(!dl_is_implicit(dl_se) &&
 			     !dl_time_before(dl_se->deadline, rq_clock(rq)) &&
-			     !dl_se->dl_boosted)){
+			     !is_dl_boosted(dl_se))) {
 			update_dl_revised_wakeup(dl_se, rq);
 			return;
 		}
 
-		dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
-		dl_se->runtime = pi_se->dl_runtime;
+		dl_se->deadline = rq_clock(rq) + pi_of(dl_se)->dl_deadline;
+		dl_se->runtime = pi_of(dl_se)->dl_runtime;
 	}
 }

@@ -1057 +1038 @@
 	 * The task might have been boosted by someone else and might be in the
 	 * boosting/deboosting path, its not throttled.
 	 */
-	if (dl_se->dl_boosted)
+	if (is_dl_boosted(dl_se))
 		goto unlock;
 
 	/*

@@ -1085 +1066 @@
 	 * but do not enqueue -- wait for our wakeup to do that.
 	 */
 	if (!task_on_rq_queued(p)) {
-		replenish_dl_entity(dl_se, dl_se);
+		replenish_dl_entity(dl_se);
 		goto unlock;
 	}

@@ -1175 +1156 @@
 
 	if (dl_time_before(dl_se->deadline, rq_clock(rq)) &&
 	    dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
-		if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
+		if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(p)))
 			return;
 		dl_se->dl_throttled = 1;
 		if (dl_se->runtime > 0)

@@ -1306 +1287 @@
 		dl_se->dl_overrun = 1;
 
 	__dequeue_task_dl(rq, curr, 0);
-	if (unlikely(dl_se->dl_boosted || !start_dl_timer(curr)))
+	if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(curr)))
 		enqueue_task_dl(rq, curr, ENQUEUE_REPLENISH);
 
 	if (!is_leftmost(curr, &rq->dl))

@@ -1500 +1481 @@
 }
 
 static void
-enqueue_dl_entity(struct sched_dl_entity *dl_se,
-		  struct sched_dl_entity *pi_se, int flags)
+enqueue_dl_entity(struct sched_dl_entity *dl_se, int flags)
 {
 	BUG_ON(on_dl_rq(dl_se));
 
@@ -1511 +1493 @@
 	 */
 	if (flags & ENQUEUE_WAKEUP) {
task_contending(dl_se, flags);15141514- update_dl_entity(dl_se, pi_se);14961496+ update_dl_entity(dl_se);15151497 } else if (flags & ENQUEUE_REPLENISH) {15161516- replenish_dl_entity(dl_se, pi_se);14981498+ replenish_dl_entity(dl_se);15171499 } else if ((flags & ENQUEUE_RESTORE) &&15181500 dl_time_before(dl_se->deadline,15191501 rq_clock(rq_of_dl_rq(dl_rq_of_se(dl_se))))) {···1530151215311513static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)15321514{15331533- struct task_struct *pi_task = rt_mutex_get_top_task(p);15341534- struct sched_dl_entity *pi_se = &p->dl;15351535-15361536- /*15371537- * Use the scheduling parameters of the top pi-waiter task if:15381538- * - we have a top pi-waiter which is a SCHED_DEADLINE task AND15391539- * - our dl_boosted is set (i.e. the pi-waiter's (absolute) deadline is15401540- * smaller than our deadline OR we are a !SCHED_DEADLINE task getting15411541- * boosted due to a SCHED_DEADLINE pi-waiter).15421542- * Otherwise we keep our runtime and deadline.15431543- */15441544- if (pi_task && dl_prio(pi_task->normal_prio) && p->dl.dl_boosted) {15451545- pi_se = &pi_task->dl;15151515+ if (is_dl_boosted(&p->dl)) {15461516 /*15471517 * Because of delays in the detection of the overrun of a15481518 * thread's runtime, it might be the case that a thread···15631557 * the throttle.15641558 */15651559 p->dl.dl_throttled = 0;15661566- BUG_ON(!p->dl.dl_boosted || flags != ENQUEUE_REPLENISH);15601560+ BUG_ON(!is_dl_boosted(&p->dl) || flags != ENQUEUE_REPLENISH);15671561 return;15681562 }15691563···16001594 return;16011595 }1602159616031603- enqueue_dl_entity(&p->dl, pi_se, flags);15971597+ enqueue_dl_entity(&p->dl, flags);1604159816051599 if (!task_current(rq, p) && p->nr_cpus_allowed > 1)16061600 enqueue_pushable_dl_task(rq, p);···27932787 dl_se->dl_bw = 0;27942788 dl_se->dl_density = 0;2795278927962796- dl_se->dl_boosted = 0;27972790 dl_se->dl_throttled = 0;27982791 dl_se->dl_yielded = 0;27992792 
dl_se->dl_non_contending = 0;28002793 dl_se->dl_overrun = 0;27942794+27952795+#ifdef CONFIG_RT_MUTEXES27962796+ dl_se->pi_se = dl_se;27972797+#endif28012798}2802279928032800bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
+2-1
kernel/sched/fair.c
···54775477 struct cfs_rq *cfs_rq;54785478 struct sched_entity *se = &p->se;54795479 int idle_h_nr_running = task_has_idle_policy(p);54805480+ int task_new = !(flags & ENQUEUE_WAKEUP);5480548154815482 /*54825483 * The code below (indirectly) updates schedutil which looks at···55505549 * into account, but that is not straightforward to implement,55515550 * and the following generally works well enough in practice.55525551 */55535553- if (flags & ENQUEUE_WAKEUP)55525552+ if (!task_new)55545553 update_overutilized_status(rq);5555555455565555enqueue_throttle:
+2-3
kernel/seccomp.c
···3838#include <linux/filter.h>3939#include <linux/pid.h>4040#include <linux/ptrace.h>4141-#include <linux/security.h>4141+#include <linux/capability.h>4242#include <linux/tracehook.h>4343#include <linux/uaccess.h>4444#include <linux/anon_inodes.h>···558558 * behavior of privileged children.559559 */560560 if (!task_no_new_privs(current) &&561561- security_capable(current_cred(), current_user_ns(),562562- CAP_SYS_ADMIN, CAP_OPT_NOAUDIT) != 0)561561+ !ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN))563562 return ERR_PTR(-EACCES);564563565564 /* Allocate a new seccomp_filter */
+22-4
mm/filemap.c
···14841484 rotate_reclaimable_page(page);14851485 }1486148614871487+ /*14881488+ * Writeback does not hold a page reference of its own, relying14891489+ * on truncation to wait for the clearing of PG_writeback.14901490+ * But here we must make sure that the page is not freed and14911491+ * reused before the wake_up_page().14921492+ */14931493+ get_page(page);14871494 if (!test_clear_page_writeback(page))14881495 BUG();1489149614901497 smp_mb__after_atomic();14911498 wake_up_page(page, PG_writeback);14991499+ put_page(page);14921500}14931501EXPORT_SYMBOL(end_page_writeback);14941502···2355234723562348page_not_up_to_date:23572349 /* Get exclusive access to the page ... */23582358- if (iocb->ki_flags & IOCB_WAITQ)23502350+ if (iocb->ki_flags & IOCB_WAITQ) {23512351+ if (written) {23522352+ put_page(page);23532353+ goto out;23542354+ }23592355 error = lock_page_async(page, iocb->ki_waitq);23602360- else23562356+ } else {23612357 error = lock_page_killable(page);23582358+ }23622359 if (unlikely(error))23632360 goto readpage_error;23642361···24062393 }2407239424082395 if (!PageUptodate(page)) {24092409- if (iocb->ki_flags & IOCB_WAITQ)23962396+ if (iocb->ki_flags & IOCB_WAITQ) {23972397+ if (written) {23982398+ put_page(page);23992399+ goto out;24002400+ }24102401 error = lock_page_async(page, iocb->ki_waitq);24112411- else24022402+ } else {24122403 error = lock_page_killable(page);24042404+ }2413240524142406 if (unlikely(error))24152407 goto readpage_error;
+4-5
mm/huge_memory.c
···710710 transparent_hugepage_use_zero_page()) {711711 pgtable_t pgtable;712712 struct page *zero_page;713713- bool set;714713 vm_fault_t ret;715714 pgtable = pte_alloc_one(vma->vm_mm);716715 if (unlikely(!pgtable))···722723 }723724 vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);724725 ret = 0;725725- set = false;726726 if (pmd_none(*vmf->pmd)) {727727 ret = check_stable_address_space(vma->vm_mm);728728 if (ret) {729729 spin_unlock(vmf->ptl);730730+ pte_free(vma->vm_mm, pgtable);730731 } else if (userfaultfd_missing(vma)) {731732 spin_unlock(vmf->ptl);733733+ pte_free(vma->vm_mm, pgtable);732734 ret = handle_userfault(vmf, VM_UFFD_MISSING);733735 VM_BUG_ON(ret & VM_FAULT_FALLBACK);734736 } else {735737 set_huge_zero_page(pgtable, vma->vm_mm, vma,736738 haddr, vmf->pmd, zero_page);737739 spin_unlock(vmf->ptl);738738- set = true;739740 }740740- } else741741+ } else {741742 spin_unlock(vmf->ptl);742742- if (!set)743743 pte_free(vma->vm_mm, pgtable);744744+ }744745 return ret;745746 }746747 gfp = alloc_hugepage_direct_gfpmask(vma);
···867867 rcu_read_lock();868868 memcg = mem_cgroup_from_obj(p);869869870870- /* Untracked pages have no memcg, no lruvec. Update only the node */871871- if (!memcg || memcg == root_mem_cgroup) {870870+ /*871871+ * Untracked pages have no memcg, no lruvec. Update only the872872+ * node. If we reparent the slab objects to the root memcg,873873+ * when we free the slab object, we need to update the per-memcg874874+ * vmstats to keep it correct for the root memcg.875875+ */876876+ if (!memcg) {872877 __mod_node_page_state(pgdat, idx, val);873878 } else {874879 lruvec = mem_cgroup_lruvec(memcg, pgdat);
-18
mm/memory_hotplug.c
···350350 return err;351351}352352353353-#ifdef CONFIG_NUMA354354-int __weak memory_add_physaddr_to_nid(u64 start)355355-{356356- pr_info_once("Unknown online node for memory at 0x%llx, assuming node 0\n",357357- start);358358- return 0;359359-}360360-EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);361361-362362-int __weak phys_to_target_node(u64 start)363363-{364364- pr_info_once("Unknown target node for memory at 0x%llx, assuming node 0\n",365365- start);366366- return 0;367367-}368368-EXPORT_SYMBOL_GPL(phys_to_target_node);369369-#endif370370-371353/* find the smallest valid pfn in the range [start_pfn, end_pfn) */372354static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,373355 unsigned long start_pfn,
-6
mm/page-writeback.c
···27542754 } else {27552755 ret = TestClearPageWriteback(page);27562756 }27572757- /*27582758- * NOTE: Page might be free now! Writeback doesn't hold a page27592759- * reference on its own, it relies on truncation to wait for27602760- * the clearing of PG_writeback. The below can only access27612761- * page state that is static across allocation cycles.27622762- */27632757 if (ret) {27642758 dec_lruvec_state(lruvec, NR_WRITEBACK);27652759 dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);
net/can/af_can.c
···541541542542 /* Check for bugs in CAN protocol implementations using af_can.c:543543 * 'rcv' will be NULL if no matching list item was found for removal.544544+ * As this case may potentially happen when closing a socket while545545+ * the notifier for removing the CAN netdev is running we just print546546+ * a warning here.544547 */545548 if (!rcv) {546546- WARN(1, "BUG: receive list entry not found for dev %s, id %03X, mask %03X\n",547547- DNAME(dev), can_id, mask);549549+ pr_warn("can: receive list entry not found for dev %s, id %03X, mask %03X\n",550550+ DNAME(dev), can_id, mask);548551 goto out;549552 }550553
+39-17
net/core/devlink.c
···517517 return test_bit(limit, &devlink->ops->reload_limits);518518}519519520520-static int devlink_reload_stat_put(struct sk_buff *msg, enum devlink_reload_action action,520520+static int devlink_reload_stat_put(struct sk_buff *msg,521521 enum devlink_reload_limit limit, u32 value)522522{523523 struct nlattr *reload_stats_entry;···526526 if (!reload_stats_entry)527527 return -EMSGSIZE;528528529529- if (nla_put_u8(msg, DEVLINK_ATTR_RELOAD_ACTION, action) ||530530- nla_put_u8(msg, DEVLINK_ATTR_RELOAD_STATS_LIMIT, limit) ||529529+ if (nla_put_u8(msg, DEVLINK_ATTR_RELOAD_STATS_LIMIT, limit) ||531530 nla_put_u32(msg, DEVLINK_ATTR_RELOAD_STATS_VALUE, value))532531 goto nla_put_failure;533532 nla_nest_end(msg, reload_stats_entry);···539540540541static int devlink_reload_stats_put(struct sk_buff *msg, struct devlink *devlink, bool is_remote)541542{542542- struct nlattr *reload_stats_attr;543543+ struct nlattr *reload_stats_attr, *act_info, *act_stats;543544 int i, j, stat_idx;544545 u32 value;545546···551552 if (!reload_stats_attr)552553 return -EMSGSIZE;553554554554- for (j = 0; j <= DEVLINK_RELOAD_LIMIT_MAX; j++) {555555- /* Remote stats are shown even if not locally supported. 
Stats556556- * of actions with unspecified limit are shown though drivers557557- * don't need to register unspecified limit.558558- */559559- if (!is_remote && j != DEVLINK_RELOAD_LIMIT_UNSPEC &&560560- !devlink_reload_limit_is_supported(devlink, j))555555+ for (i = 0; i <= DEVLINK_RELOAD_ACTION_MAX; i++) {556556+ if ((!is_remote &&557557+ !devlink_reload_action_is_supported(devlink, i)) ||558558+ i == DEVLINK_RELOAD_ACTION_UNSPEC)561559 continue;562562- for (i = 0; i <= DEVLINK_RELOAD_ACTION_MAX; i++) {563563- if ((!is_remote && !devlink_reload_action_is_supported(devlink, i)) ||564564- i == DEVLINK_RELOAD_ACTION_UNSPEC ||560560+ act_info = nla_nest_start(msg, DEVLINK_ATTR_RELOAD_ACTION_INFO);561561+ if (!act_info)562562+ goto nla_put_failure;563563+564564+ if (nla_put_u8(msg, DEVLINK_ATTR_RELOAD_ACTION, i))565565+ goto action_info_nest_cancel;566566+ act_stats = nla_nest_start(msg, DEVLINK_ATTR_RELOAD_ACTION_STATS);567567+ if (!act_stats)568568+ goto action_info_nest_cancel;569569+570570+ for (j = 0; j <= DEVLINK_RELOAD_LIMIT_MAX; j++) {571571+ /* Remote stats are shown even if not locally supported.572572+ * Stats of actions with unspecified limit are shown573573+ * though drivers don't need to register unspecified574574+ * limit.575575+ */576576+ if ((!is_remote && j != DEVLINK_RELOAD_LIMIT_UNSPEC &&577577+ !devlink_reload_limit_is_supported(devlink, j)) ||565578 devlink_reload_combination_is_invalid(i, j))566579 continue;567580···582571 value = devlink->stats.reload_stats[stat_idx];583572 else584573 value = devlink->stats.remote_reload_stats[stat_idx];585585- if (devlink_reload_stat_put(msg, i, j, value))586586- goto nla_put_failure;574574+ if (devlink_reload_stat_put(msg, j, value))575575+ goto action_stats_nest_cancel;587576 }577577+ nla_nest_end(msg, act_stats);578578+ nla_nest_end(msg, act_info);588579 }589580 nla_nest_end(msg, reload_stats_attr);590581 return 0;591582583583+action_stats_nest_cancel:584584+ nla_nest_cancel(msg, 
act_stats);585585+action_info_nest_cancel:586586+ nla_nest_cancel(msg, act_info);592587nla_put_failure:593588 nla_nest_cancel(msg, reload_stats_attr);594589 return -EMSGSIZE;···772755 if (nla_put_u32(msg, DEVLINK_ATTR_PORT_INDEX, devlink_port->index))773756 goto nla_put_failure;774757758758+ /* Hold rtnl lock while accessing port's netdev attributes. */759759+ rtnl_lock();775760 spin_lock_bh(&devlink_port->type_lock);776761 if (nla_put_u16(msg, DEVLINK_ATTR_PORT_TYPE, devlink_port->type))777762 goto nla_put_failure_type_locked;···782763 devlink_port->desired_type))783764 goto nla_put_failure_type_locked;784765 if (devlink_port->type == DEVLINK_PORT_TYPE_ETH) {766766+ struct net *net = devlink_net(devlink_port->devlink);785767 struct net_device *netdev = devlink_port->type_dev;786768787787- if (netdev &&769769+ if (netdev && net_eq(net, dev_net(netdev)) &&788770 (nla_put_u32(msg, DEVLINK_ATTR_PORT_NETDEV_IFINDEX,789771 netdev->ifindex) ||790772 nla_put_string(msg, DEVLINK_ATTR_PORT_NETDEV_NAME,···801781 goto nla_put_failure_type_locked;802782 }803783 spin_unlock_bh(&devlink_port->type_lock);784784+ rtnl_unlock();804785 if (devlink_nl_port_attrs_put(msg, devlink_port))805786 goto nla_put_failure;806787 if (devlink_nl_port_function_attrs_put(msg, devlink_port, extack))···812791813792nla_put_failure_type_locked:814793 spin_unlock_bh(&devlink_port->type_lock);794794+ rtnl_unlock();815795nla_put_failure:816796 genlmsg_cancel(msg, hdr);817797 return -EMSGSIZE;
+6-1
net/core/gro_cells.c
···9999 struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);100100101101 napi_disable(&cell->napi);102102- netif_napi_del(&cell->napi);102102+ __netif_napi_del(&cell->napi);103103 __skb_queue_purge(&cell->napi_skbs);104104 }105105+ /* This barrier is needed because netpoll could access dev->napi_list106106+ * under rcu protection.107107+ */108108+ synchronize_net();109109+105110 free_percpu(gcells->cells);106111 gcells->cells = NULL;107112}
···533533 dccp_done(newsk);534534 goto out;535535 }536536- *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash));536536+ *own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), NULL);537537 /* Clone pktoptions received with SYN, if we own the req */538538 if (*own_req && ireq->pktopts) {539539 newnp->pktoptions = skb_clone(ireq->pktopts, GFP_ATOMIC);
+1-1
net/ipv4/inet_connection_sock.c
···787787 timer_setup(&req->rsk_timer, reqsk_timer_handler, TIMER_PINNED);788788 mod_timer(&req->rsk_timer, jiffies + timeout);789789790790- inet_ehash_insert(req_to_sk(req), NULL);790790+ inet_ehash_insert(req_to_sk(req), NULL, NULL);791791 /* before letting lookups find us, make sure all req fields792792 * are committed to memory and refcnt initialized.793793 */
+60-8
net/ipv4/inet_hashtables.c
···2020#include <net/addrconf.h>2121#include <net/inet_connection_sock.h>2222#include <net/inet_hashtables.h>2323+#if IS_ENABLED(CONFIG_IPV6)2424+#include <net/inet6_hashtables.h>2525+#endif2326#include <net/secure_seq.h>2427#include <net/ip.h>2528#include <net/tcp.h>···511508 inet->inet_dport);512509}513510514514-/* insert a socket into ehash, and eventually remove another one515515- * (The another one can be a SYN_RECV or TIMEWAIT511511+/* Searches for an exsiting socket in the ehash bucket list.512512+ * Returns true if found, false otherwise.516513 */517517-bool inet_ehash_insert(struct sock *sk, struct sock *osk)514514+static bool inet_ehash_lookup_by_sk(struct sock *sk,515515+ struct hlist_nulls_head *list)516516+{517517+ const __portpair ports = INET_COMBINED_PORTS(sk->sk_dport, sk->sk_num);518518+ const int sdif = sk->sk_bound_dev_if;519519+ const int dif = sk->sk_bound_dev_if;520520+ const struct hlist_nulls_node *node;521521+ struct net *net = sock_net(sk);522522+ struct sock *esk;523523+524524+ INET_ADDR_COOKIE(acookie, sk->sk_daddr, sk->sk_rcv_saddr);525525+526526+ sk_nulls_for_each_rcu(esk, node, list) {527527+ if (esk->sk_hash != sk->sk_hash)528528+ continue;529529+ if (sk->sk_family == AF_INET) {530530+ if (unlikely(INET_MATCH(esk, net, acookie,531531+ sk->sk_daddr,532532+ sk->sk_rcv_saddr,533533+ ports, dif, sdif))) {534534+ return true;535535+ }536536+ }537537+#if IS_ENABLED(CONFIG_IPV6)538538+ else if (sk->sk_family == AF_INET6) {539539+ if (unlikely(INET6_MATCH(esk, net,540540+ &sk->sk_v6_daddr,541541+ &sk->sk_v6_rcv_saddr,542542+ ports, dif, sdif))) {543543+ return true;544544+ }545545+ }546546+#endif547547+ }548548+ return false;549549+}550550+551551+/* Insert a socket into ehash, and eventually remove another one552552+ * (The another one can be a SYN_RECV or TIMEWAIT)553553+ * If an existing socket already exists, socket sk is not inserted,554554+ * and sets found_dup_sk parameter to true.555555+ */556556+bool inet_ehash_insert(struct sock 
*sk, struct sock *osk, bool *found_dup_sk)518557{519558 struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;520559 struct hlist_nulls_head *list;···575530 if (osk) {576531 WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);577532 ret = sk_nulls_del_node_init_rcu(osk);533533+ } else if (found_dup_sk) {534534+ *found_dup_sk = inet_ehash_lookup_by_sk(sk, list);535535+ if (*found_dup_sk)536536+ ret = false;578537 }538538+579539 if (ret)580540 __sk_nulls_add_node_rcu(sk, list);541541+581542 spin_unlock(lock);543543+582544 return ret;583545}584546585585-bool inet_ehash_nolisten(struct sock *sk, struct sock *osk)547547+bool inet_ehash_nolisten(struct sock *sk, struct sock *osk, bool *found_dup_sk)586548{587587- bool ok = inet_ehash_insert(sk, osk);549549+ bool ok = inet_ehash_insert(sk, osk, found_dup_sk);588550589551 if (ok) {590552 sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);···635583 int err = 0;636584637585 if (sk->sk_state != TCP_LISTEN) {638638- inet_ehash_nolisten(sk, osk);586586+ inet_ehash_nolisten(sk, osk, NULL);639587 return 0;640588 }641589 WARN_ON(!sk_unhashed(sk));···731679 tb = inet_csk(sk)->icsk_bind_hash;732680 spin_lock_bh(&head->lock);733681 if (sk_head(&tb->owners) == sk && !sk->sk_bind_node.next) {734734- inet_ehash_nolisten(sk, NULL);682682+ inet_ehash_nolisten(sk, NULL, NULL);735683 spin_unlock_bh(&head->lock);736684 return 0;737685 }···810758 inet_bind_hash(sk, tb, port);811759 if (sk_unhashed(sk)) {812760 inet_sk(sk)->inet_sport = htons(port);813813- inet_ehash_nolisten(sk, (struct sock *)tw);761761+ inet_ehash_nolisten(sk, (struct sock *)tw, NULL);814762 }815763 if (tw)816764 inet_twsk_bind_unhash(tw, hinfo);
···958958{959959 /* The first action is always 'OVS_DEC_TTL_ATTR_ARG'. */960960 struct nlattr *dec_ttl_arg = nla_data(attr);961961- int rem = nla_len(attr);962961963962 if (nla_len(dec_ttl_arg)) {964964- struct nlattr *actions = nla_next(dec_ttl_arg, &rem);963963+ struct nlattr *actions = nla_data(dec_ttl_arg);965964966965 if (actions)967967- return clone_execute(dp, skb, key, 0, actions, rem,968968- last, false);966966+ return clone_execute(dp, skb, key, 0, nla_data(actions),967967+ nla_len(actions), last, false);969968 }970969 consume_skb(skb);971970 return 0;
+55-19
net/openvswitch/flow_netlink.c
···25032503 __be16 eth_type, __be16 vlan_tci,25042504 u32 mpls_label_count, bool log)25052505{25062506- int start, err;25072507- u32 nested = true;25062506+ const struct nlattr *attrs[OVS_DEC_TTL_ATTR_MAX + 1];25072507+ int start, action_start, err, rem;25082508+ const struct nlattr *a, *actions;2508250925092509- if (!nla_len(attr))25102510- return ovs_nla_add_action(sfa, OVS_ACTION_ATTR_DEC_TTL,25112511- NULL, 0, log);25102510+ memset(attrs, 0, sizeof(attrs));25112511+ nla_for_each_nested(a, attr, rem) {25122512+ int type = nla_type(a);25132513+25142514+ /* Ignore unknown attributes to be future proof. */25152515+ if (type > OVS_DEC_TTL_ATTR_MAX)25162516+ continue;25172517+25182518+ if (!type || attrs[type])25192519+ return -EINVAL;25202520+25212521+ attrs[type] = a;25222522+ }25232523+25242524+ actions = attrs[OVS_DEC_TTL_ATTR_ACTION];25252525+ if (rem || !actions || (nla_len(actions) && nla_len(actions) < NLA_HDRLEN))25262526+ return -EINVAL;2512252725132528 start = add_nested_action_start(sfa, OVS_ACTION_ATTR_DEC_TTL, log);25142529 if (start < 0)25152530 return start;2516253125172517- err = ovs_nla_add_action(sfa, OVS_DEC_TTL_ATTR_ACTION, &nested,25182518- sizeof(nested), log);25322532+ action_start = add_nested_action_start(sfa, OVS_DEC_TTL_ATTR_ACTION, log);25332533+ if (action_start < 0)25342534+ return start;2519253525202520- if (err)25212521- return err;25222522-25232523- err = __ovs_nla_copy_actions(net, attr, key, sfa, eth_type,25362536+ err = __ovs_nla_copy_actions(net, actions, key, sfa, eth_type,25242537 vlan_tci, mpls_label_count, log);25252538 if (err)25262539 return err;2527254025412541+ add_nested_action_end(*sfa, action_start);25282542 add_nested_action_end(*sfa, start);25292543 return 0;25302544}···35013487static int dec_ttl_action_to_attr(const struct nlattr *attr,35023488 struct sk_buff *skb)35033489{35043504- int err = 0, rem = nla_len(attr);35053505- struct nlattr *start;34903490+ struct nlattr *start, *action_start;34913491+ const struct 
nlattr *a;34923492+ int err = 0, rem;3506349335073494 start = nla_nest_start_noflag(skb, OVS_ACTION_ATTR_DEC_TTL);35083508-35093495 if (!start)35103496 return -EMSGSIZE;3511349735123512- err = ovs_nla_put_actions(nla_data(attr), rem, skb);35133513- if (err)35143514- nla_nest_cancel(skb, start);35153515- else35163516- nla_nest_end(skb, start);34983498+ nla_for_each_attr(a, nla_data(attr), nla_len(attr), rem) {34993499+ switch (nla_type(a)) {35003500+ case OVS_DEC_TTL_ATTR_ACTION:3517350135023502+ action_start = nla_nest_start_noflag(skb, OVS_DEC_TTL_ATTR_ACTION);35033503+ if (!action_start) {35043504+ err = -EMSGSIZE;35053505+ goto out;35063506+ }35073507+35083508+ err = ovs_nla_put_actions(nla_data(a), nla_len(a), skb);35093509+ if (err)35103510+ goto out;35113511+35123512+ nla_nest_end(skb, action_start);35133513+ break;35143514+35153515+ default:35163516+ /* Ignore all other option to be future compatible */35173517+ break;35183518+ }35193519+ }35203520+35213521+ nla_nest_end(skb, start);35223522+ return 0;35233523+35243524+out:35253525+ nla_nest_cancel(skb, start);35183526 return err;35193527}35203528
+9-9
net/packet/af_packet.c
···94949595/*9696 Assumptions:9797- - If the device has no dev->header_ops, there is no LL header visible9898- above the device. In this case, its hard_header_len should be 0.9797+ - If the device has no dev->header_ops->create, there is no LL header9898+ visible above the device. In this case, its hard_header_len should be 0.9999 The device may prepend its own header internally. In this case, its100100 needed_headroom should be set to the space needed for it to add its101101 internal header.···109109On receive:110110-----------111111112112-Incoming, dev->header_ops != NULL112112+Incoming, dev_has_header(dev) == true113113 mac_header -> ll header114114 data -> data115115116116-Outgoing, dev->header_ops != NULL116116+Outgoing, dev_has_header(dev) == true117117 mac_header -> ll header118118 data -> ll header119119120120-Incoming, dev->header_ops == NULL120120+Incoming, dev_has_header(dev) == false121121 mac_header -> data122122 However drivers often make it point to the ll header.123123 This is incorrect because the ll header should be invisible to us.124124 data -> data125125126126-Outgoing, dev->header_ops == NULL126126+Outgoing, dev_has_header(dev) == false127127 mac_header -> data. ll header is invisible to us.128128 data -> data129129130130Resume131131- If dev->header_ops == NULL we are unable to restore the ll header,131131+ If dev_has_header(dev) == false we are unable to restore the ll header,132132 because it is invisible to us.133133134134···2083208320842084 skb->dev = dev;2085208520862086- if (dev->header_ops) {20862086+ if (dev_has_header(dev)) {20872087 /* The device has an explicit notion of ll header,20882088 * exported to higher levels.20892089 *···22122212 if (!net_eq(dev_net(dev), sock_net(sk)))22132213 goto drop;2214221422152215- if (dev->header_ops) {22152215+ if (dev_has_header(dev)) {22162216 if (sk->sk_type != SOCK_DGRAM)22172217 skb_push(skb, skb->data - skb_mac_header(skb));22182218 else if (skb->pkt_type == PACKET_OUTGOING) {
+13-4
net/rose/rose_loopback.c
···9696 }97979898 if (frametype == ROSE_CALL_REQUEST) {9999- if ((dev = rose_dev_get(dest)) != NULL) {100100- if (rose_rx_call_request(skb, dev, rose_loopback_neigh, lci_o) == 0)101101- kfree_skb(skb);102102- } else {9999+ if (!rose_loopback_neigh->dev) {100100+ kfree_skb(skb);101101+ continue;102102+ }103103+104104+ dev = rose_dev_get(dest);105105+ if (!dev) {106106+ kfree_skb(skb);107107+ continue;108108+ }109109+110110+ if (rose_rx_call_request(skb, dev, rose_loopback_neigh, lci_o) == 0) {111111+ dev_put(dev);103112 kfree_skb(skb);104113 }105114 } else {
+4-1
net/tls/tls_device.c
···12621262 if (tls_ctx->tx_conf != TLS_HW) {12631263 dev_put(netdev);12641264 tls_ctx->netdev = NULL;12651265+ } else {12661266+ set_bit(TLS_RX_DEV_CLOSED, &tls_ctx->flags);12651267 }12661268out:12671269 up_read(&device_offload_lock);···12931291 if (ctx->tx_conf == TLS_HW)12941292 netdev->tlsdev_ops->tls_dev_del(netdev, ctx,12951293 TLS_OFFLOAD_CTX_DIR_TX);12961296- if (ctx->rx_conf == TLS_HW)12941294+ if (ctx->rx_conf == TLS_HW &&12951295+ !test_bit(TLS_RX_DEV_CLOSED, &ctx->flags))12971296 netdev->tlsdev_ops->tls_dev_del(netdev, ctx,12981297 TLS_OFFLOAD_CTX_DIR_RX);12991298 WRITE_ONCE(ctx->netdev, NULL);
···841841 virtio_transport_free_pkt(pkt);842842 }843843844844- if (remove_sock)844844+ if (remove_sock) {845845+ sock_set_flag(sk, SOCK_DONE);845846 vsock_remove_sock(vsk);847847+ }846848}847849EXPORT_SYMBOL_GPL(virtio_transport_release);848850···1134113211351133 lock_sock(sk);1136113411371137- /* Check if sk has been released before lock_sock */11381138- if (sk->sk_shutdown == SHUTDOWN_MASK) {11351135+ /* Check if sk has been closed before lock_sock */11361136+ if (sock_flag(sk, SOCK_DONE)) {11391137 (void)virtio_transport_reset_no_sock(t, pkt);11401138 release_sock(sk);11411139 sock_put(sk);
···700700 switch (level) {701701 case SND_SOC_BIAS_PREPARE:702702 if (dapm->bias_level == SND_SOC_BIAS_ON) {703703+ if (!__clk_is_enabled(priv->mclk))704704+ return 0;703705 dev_dbg(card->dev, "Disable mclk");704706 clk_disable_unprepare(priv->mclk);705707 } else {
+4-5
sound/soc/intel/catpt/pcm.c
···458458 if (ret)459459 return CATPT_IPC_ERROR(ret);460460461461- ret = catpt_dsp_update_lpclock(cdev);462462- if (ret)463463- return ret;464464-465461 ret = catpt_dai_apply_usettings(dai, stream);466462 if (ret)467463 return ret;···496500 case SNDRV_PCM_TRIGGER_RESUME:497501 case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:498502 resume_stream:503503+ catpt_dsp_update_lpclock(cdev);499504 ret = catpt_ipc_resume_stream(cdev, stream->info.stream_hw_id);500505 if (ret)501506 return CATPT_IPC_ERROR(ret);···504507505508 case SNDRV_PCM_TRIGGER_STOP:506509 stream->prepared = false;507507- catpt_dsp_update_lpclock(cdev);508510 fallthrough;509511 case SNDRV_PCM_TRIGGER_SUSPEND:510512 case SNDRV_PCM_TRIGGER_PAUSE_PUSH:511513 ret = catpt_ipc_pause_stream(cdev, stream->info.stream_hw_id);514514+ catpt_dsp_update_lpclock(cdev);512515 if (ret)513516 return CATPT_IPC_ERROR(ret);514517 break;···531534532535 dsppos = bytes_to_frames(r, pos->stream_position);533536537537+ if (!stream->prepared)538538+ goto exit;534539 /* only offload is set_write_pos driven */535540 if (stream->template->type != CATPT_STRM_TYPE_RENDER)536541 goto exit;