···
   <partintro>
     <para>
       This first part of the DRM Developer's Guide documents core DRM code,
-      helper libraries for writting drivers and generic userspace interfaces
+      helper libraries for writing drivers and generic userspace interfaces
       exposed by DRM drivers.
     </para>
   </partintro>
···
       providing a solution to every graphics memory-related problems, GEM
       identified common code between drivers and created a support library to
       share it. GEM has simpler initialization and execution requirements than
-      TTM, but has no video RAM management capabitilies and is thus limited to
+      TTM, but has no video RAM management capabilities and is thus limited to
       UMA devices.
     </para>
     <sect2>
···
       vice versa. Drivers must use the kernel dma-buf buffer sharing framework
       to manage the PRIME file descriptors. Similar to the mode setting
       API PRIME is agnostic to the underlying buffer object manager, as
-      long as handles are 32bit unsinged integers.
+      long as handles are 32bit unsigned integers.
     </para>
     <para>
       While non-GEM drivers must implement the operations themselves, GEM
···
       first create properties and then create and associate individual instances
       of those properties to objects. A property can be instantiated multiple
       times and associated with different objects. Values are stored in property
-      instances, and all other property information are stored in the propery
+      instances, and all other property information are stored in the property
       and shared between all instances of the property.
     </para>
     <para>
···
   <sect1>
     <title>Legacy Support Code</title>
     <para>
-      The section very brievely covers some of the old legacy support code which
+      The section very briefly covers some of the old legacy support code which
       is only used by old DRM drivers which have done a so-called shadow-attach
       to the underlying device instead of registering as a real driver. This
-      also includes some of the old generic buffer mangement and command
+      also includes some of the old generic buffer management and command
       submission code. Do not use any of this in new and modern drivers.
     </para>
···

-Edit your Thunderbird config settings so that it won't use format=flowed.
 Go to "edit->preferences->advanced->config editor" to bring up the
-thunderbird's registry editor, and set "mailnews.send_plaintext_flowed" to
-"false".
+thunderbird's registry editor.

-- Disable HTML Format: Set "mail.identity.id1.compose_html" to "false".
+- Set "mailnews.send_plaintext_flowed" to "false"

-- Enable "preformat" mode: Set "editor.quotesPreformatted" to "true".
+- Set "mailnews.wraplength" from "72" to "0"

-- Enable UTF8: Set "prefs.converted-to-utf8" to "true".
+- "View" > "Message Body As" > "Plain Text"

-- Install the "toggle wordwrap" extension. Download the file from:
-  https://addons.mozilla.org/thunderbird/addon/2351/
-  Then go to "tools->add ons", select "install" at the bottom of the screen,
-  and browse to where you saved the .xul file. This adds an "Enable
-  Wordwrap" entry under the Options menu of the message composer.
+- "View" > "Character Encoding" > "Unicode (UTF-8)"

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
TkRat (GUI)
Documentation/filesystems/proc.txt | +3 -2
···

 The "intr" line gives counts of interrupts serviced since boot time, for each
 of the possible system interrupts. The first column is the total of all
-interrupts serviced; each subsequent column is the total for that particular
-interrupt.
+interrupts serviced including unnumbered architecture specific interrupts;
+each subsequent column is the total for that particular numbered interrupt.
+Unnumbered interrupts are not shown, only summed into the total.

 The "ctxt" line gives the total number of context switches across all CPUs.
Documentation/hwmon/sysfs-interface | +14
···
		from the max value.
		RW

+temp[1-*]_min_hyst
+		Temperature hysteresis value for min limit.
+		Unit: millidegree Celsius
+		Must be reported as an absolute temperature, NOT a delta
+		from the min value.
+		RW
+
 temp[1-*]_input	Temperature input value.
		Unit: millidegree Celsius
		RO
···
 temp[1-*]_lcrit	Temperature critical min value, typically lower than
		corresponding temp_min values.
		Unit: millidegree Celsius
+		RW
+
+temp[1-*]_lcrit_hyst
+		Temperature hysteresis value for critical min limit.
+		Unit: millidegree Celsius
+		Must be reported as an absolute temperature, NOT a delta
+		from the critical min value.
		RW

 temp[1-*]_offset
Documentation/java.txt | +8
···
 #define CP_METHODREF		10
 #define CP_INTERFACEMETHODREF	11
 #define CP_NAMEANDTYPE		12
+#define CP_METHODHANDLE	15
+#define CP_METHODTYPE	16
+#define CP_INVOKEDYNAMIC	18

 /* Define some commonly used error messages */
···
		break;
	case CP_CLASS:
	case CP_STRING:
+	case CP_METHODTYPE:
		seekerr = fseek(classfile, 2, SEEK_CUR);
+		break;
+	case CP_METHODHANDLE:
+		seekerr = fseek(classfile, 3, SEEK_CUR);
		break;
	case CP_INTEGER:
	case CP_FLOAT:
···
	case CP_METHODREF:
	case CP_INTERFACEMETHODREF:
	case CP_NAMEANDTYPE:
+	case CP_INVOKEDYNAMIC:
		seekerr = fseek(classfile, 4, SEEK_CUR);
		break;
	case CP_LONG:
Documentation/networking/filter.txt | +1 -1
···
  mark                              skb->mark
  queue                             skb->queue_mapping
  hatype                            skb->dev->type
- rxhash                            skb->rxhash
+ rxhash                            skb->hash
  cpu                               raw_smp_processor_id()
  vlan_tci                          vlan_tx_tag_get(skb)
  vlan_pr                           vlan_tx_tag_present(skb)
Documentation/networking/packet_mmap.txt | +1 -1
···

 Currently implemented fanout policies are:

-  - PACKET_FANOUT_HASH: schedule to socket by skb's rxhash
+  - PACKET_FANOUT_HASH: schedule to socket by skb's packet hash
   - PACKET_FANOUT_LB: schedule to socket by round-robin
   - PACKET_FANOUT_CPU: schedule to socket by CPU packet arrives on
   - PACKET_FANOUT_RND: schedule to socket by random selection
···
		return NULL;
	memset(header, 0, sz);
	header->pages = sz / PAGE_SIZE;
-	hole = sz - (bpfsize + sizeof(*header));
+	hole = min(sz - (bpfsize + sizeof(*header)), PAGE_SIZE - sizeof(*header));
	/* Insert random number of illegal instructions before BPF code
	 * and make sure the first instruction starts at an even address.
	 */
arch/sparc/include/asm/pgtable_64.h | +4 -2
···

 /* The kernel image occupies 0x4000000 to 0x6000000 (4MB --> 96MB).
  * The page copy blockops can use 0x6000000 to 0x8000000.
- * The TSB is mapped in the 0x8000000 to 0xa000000 range.
+ * The 8K TSB is mapped in the 0x8000000 to 0x8400000 range.
+ * The 4M TSB is mapped in the 0x8400000 to 0x8800000 range.
  * The PROM resides in an area spanning 0xf0000000 to 0x100000000.
  * The vmalloc area spans 0x100000000 to 0x200000000.
  * Since modules need to be in the lowest 32-bits of the address space,
···
  * 0x400000000.
  */
 #define TLBTEMP_BASE		_AC(0x0000000006000000,UL)
-#define TSBMAP_BASE		_AC(0x0000000008000000,UL)
+#define TSBMAP_8K_BASE		_AC(0x0000000008000000,UL)
+#define TSBMAP_4M_BASE		_AC(0x0000000008400000,UL)
 #define MODULES_VADDR		_AC(0x0000000010000000,UL)
 #define MODULES_LEN		_AC(0x00000000e0000000,UL)
 #define MODULES_END		_AC(0x00000000f0000000,UL)
arch/sparc/kernel/sysfs.c | +1 -1
···
			  size_t count)
 {
	unsigned long val, err;
-	int ret = sscanf(buf, "%ld", &val);
+	int ret = sscanf(buf, "%lu", &val);

	if (ret != 1)
		return -EINVAL;
···
		return r;
	}

-	r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);
-	if (r) {
-		radeon_vm_fini(rdev, &fpriv->vm);
-		kfree(fpriv);
-		return r;
+	if (rdev->accel_working) {
+		r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);
+		if (r) {
+			radeon_vm_fini(rdev, &fpriv->vm);
+			kfree(fpriv);
+			return r;
+		}
+
+		/* map the ib pool buffer read only into
+		 * virtual address space */
+		bo_va = radeon_vm_bo_add(rdev, &fpriv->vm,
+					 rdev->ring_tmp_bo.bo);
+		r = radeon_vm_bo_set_addr(rdev, bo_va, RADEON_VA_IB_OFFSET,
+					  RADEON_VM_PAGE_READABLE |
+					  RADEON_VM_PAGE_SNOOPED);
+
+		radeon_bo_unreserve(rdev->ring_tmp_bo.bo);
+		if (r) {
+			radeon_vm_fini(rdev, &fpriv->vm);
+			kfree(fpriv);
+			return r;
+		}
	}
-
-	/* map the ib pool buffer read only into
-	 * virtual address space */
-	bo_va = radeon_vm_bo_add(rdev, &fpriv->vm,
-				 rdev->ring_tmp_bo.bo);
-	r = radeon_vm_bo_set_addr(rdev, bo_va, RADEON_VA_IB_OFFSET,
-				  RADEON_VM_PAGE_READABLE |
-				  RADEON_VM_PAGE_SNOOPED);
-
-	radeon_bo_unreserve(rdev->ring_tmp_bo.bo);
-	if (r) {
-		radeon_vm_fini(rdev, &fpriv->vm);
-		kfree(fpriv);
-		return r;
-	}
-
	file_priv->driver_priv = fpriv;
	}
···
	struct radeon_bo_va *bo_va;
	int r;

-	r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);
-	if (!r) {
-		bo_va = radeon_vm_bo_find(&fpriv->vm,
-					  rdev->ring_tmp_bo.bo);
-		if (bo_va)
-			radeon_vm_bo_rmv(rdev, bo_va);
-		radeon_bo_unreserve(rdev->ring_tmp_bo.bo);
+	if (rdev->accel_working) {
+		r = radeon_bo_reserve(rdev->ring_tmp_bo.bo, false);
+		if (!r) {
+			bo_va = radeon_vm_bo_find(&fpriv->vm,
+						  rdev->ring_tmp_bo.bo);
+			if (bo_va)
+				radeon_vm_bo_rmv(rdev, bo_va);
+			radeon_bo_unreserve(rdev->ring_tmp_bo.bo);
+		}
	}

	radeon_vm_fini(rdev, &fpriv->vm);
drivers/gpu/drm/radeon/radeon_object.c | +24 -16
···
	 * into account. We don't want to disallow buffer moves
	 * completely.
	 */
-	if (current_domain != RADEON_GEM_DOMAIN_CPU &&
+	if ((lobj->alt_domain & current_domain) != 0 &&
	    (domain & current_domain) == 0 && /* will be moved */
	    bytes_moved > bytes_moved_threshold) {
		/* don't move it */
···
	rbo = container_of(bo, struct radeon_bo, tbo);
	radeon_bo_check_tiling(rbo, 0, 0);
	rdev = rbo->rdev;
-	if (bo->mem.mem_type == TTM_PL_VRAM) {
-		size = bo->mem.num_pages << PAGE_SHIFT;
-		offset = bo->mem.start << PAGE_SHIFT;
-		if ((offset + size) > rdev->mc.visible_vram_size) {
-			/* hurrah the memory is not visible ! */
-			radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_VRAM);
-			rbo->placement.lpfn = rdev->mc.visible_vram_size >> PAGE_SHIFT;
-			r = ttm_bo_validate(bo, &rbo->placement, false, false);
-			if (unlikely(r != 0))
-				return r;
-			offset = bo->mem.start << PAGE_SHIFT;
-			/* this should not happen */
-			if ((offset + size) > rdev->mc.visible_vram_size)
-				return -EINVAL;
-		}
+	if (bo->mem.mem_type != TTM_PL_VRAM)
+		return 0;
+
+	size = bo->mem.num_pages << PAGE_SHIFT;
+	offset = bo->mem.start << PAGE_SHIFT;
+	if ((offset + size) <= rdev->mc.visible_vram_size)
+		return 0;
+
+	/* hurrah the memory is not visible ! */
+	radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_VRAM);
+	rbo->placement.lpfn = rdev->mc.visible_vram_size >> PAGE_SHIFT;
+	r = ttm_bo_validate(bo, &rbo->placement, false, false);
+	if (unlikely(r == -ENOMEM)) {
+		radeon_ttm_placement_from_domain(rbo, RADEON_GEM_DOMAIN_GTT);
+		return ttm_bo_validate(bo, &rbo->placement, false, false);
+	} else if (unlikely(r != 0)) {
+		return r;
	}
+
+	offset = bo->mem.start << PAGE_SHIFT;
+	/* this should never happen */
+	if ((offset + size) > rdev->mc.visible_vram_size)
+		return -EINVAL;
+
	return 0;
 }
drivers/gpu/drm/radeon/radeon_pm.c | +41 -1
···
	struct drm_device *ddev = dev_get_drvdata(dev);
	struct radeon_device *rdev = ddev->dev_private;

+	/* Can't set profile when the card is off */
+	if ((rdev->flags & RADEON_IS_PX) &&
+	    (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
+		return -EINVAL;
+
	mutex_lock(&rdev->pm.mutex);
	if (rdev->pm.pm_method == PM_METHOD_PROFILE) {
		if (strncmp("default", buf, strlen("default")) == 0)
···
	struct drm_device *ddev = dev_get_drvdata(dev);
	struct radeon_device *rdev = ddev->dev_private;

+	/* Can't set method when the card is off */
+	if ((rdev->flags & RADEON_IS_PX) &&
+	    (ddev->switch_power_state != DRM_SWITCH_POWER_ON)) {
+		count = -EINVAL;
+		goto fail;
+	}
+
	/* we don't support the legacy modes with dpm */
	if (rdev->pm.pm_method == PM_METHOD_DPM) {
		count = -EINVAL;
···
	struct radeon_device *rdev = ddev->dev_private;
	enum radeon_pm_state_type pm = rdev->pm.dpm.user_state;

+	if ((rdev->flags & RADEON_IS_PX) &&
+	    (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
+		return snprintf(buf, PAGE_SIZE, "off\n");
+
	return snprintf(buf, PAGE_SIZE, "%s\n",
			(pm == POWER_STATE_TYPE_BATTERY) ? "battery" :
			(pm == POWER_STATE_TYPE_BALANCED) ? "balanced" : "performance");
···
 {
	struct drm_device *ddev = dev_get_drvdata(dev);
	struct radeon_device *rdev = ddev->dev_private;
+
+	/* Can't set dpm state when the card is off */
+	if ((rdev->flags & RADEON_IS_PX) &&
+	    (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
+		return -EINVAL;

	mutex_lock(&rdev->pm.mutex);
	if (strncmp("battery", buf, strlen("battery")) == 0)
···
	struct radeon_device *rdev = ddev->dev_private;
	enum radeon_dpm_forced_level level = rdev->pm.dpm.forced_level;

+	if ((rdev->flags & RADEON_IS_PX) &&
+	    (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
+		return snprintf(buf, PAGE_SIZE, "off\n");
+
	return snprintf(buf, PAGE_SIZE, "%s\n",
			(level == RADEON_DPM_FORCED_LEVEL_AUTO) ? "auto" :
			(level == RADEON_DPM_FORCED_LEVEL_LOW) ? "low" : "high");
···
	struct radeon_device *rdev = ddev->dev_private;
	enum radeon_dpm_forced_level level;
	int ret = 0;
+
+	/* Can't force performance level when the card is off */
+	if ((rdev->flags & RADEON_IS_PX) &&
+	    (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
+		return -EINVAL;

	mutex_lock(&rdev->pm.mutex);
	if (strncmp("low", buf, strlen("low")) == 0) {
···
			  char *buf)
 {
	struct radeon_device *rdev = dev_get_drvdata(dev);
+	struct drm_device *ddev = rdev->ddev;
	int temp;
+
+	/* Can't get temperature when the card is off */
+	if ((rdev->flags & RADEON_IS_PX) &&
+	    (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
+		return -EINVAL;

	if (rdev->asic->pm.get_temperature)
		temp = radeon_get_temperature(rdev);
···
	struct drm_info_node *node = (struct drm_info_node *) m->private;
	struct drm_device *dev = node->minor->dev;
	struct radeon_device *rdev = dev->dev_private;
+	struct drm_device *ddev = rdev->ddev;

-	if (rdev->pm.dpm_enabled) {
+	if ((rdev->flags & RADEON_IS_PX) &&
+	    (ddev->switch_power_state != DRM_SWITCH_POWER_ON)) {
+		seq_printf(m, "PX asic powered off\n");
+	} else if (rdev->pm.dpm_enabled) {
		mutex_lock(&rdev->pm.mutex);
		if (rdev->asic->dpm.debugfs_print_current_performance_level)
			radeon_dpm_debugfs_print_current_performance_level(rdev, m);
drivers/gpu/drm/radeon/radeon_vce.c | +100 -30
···
  * @p: parser context
  * @lo: address of lower dword
  * @hi: address of higher dword
+ * @size: size of checker for relocation buffer
  *
  * Patch relocation inside command stream with real buffer address
  */
-int radeon_vce_cs_reloc(struct radeon_cs_parser *p, int lo, int hi)
+int radeon_vce_cs_reloc(struct radeon_cs_parser *p, int lo, int hi,
+			unsigned size)
 {
	struct radeon_cs_chunk *relocs_chunk;
-	uint64_t offset;
+	struct radeon_cs_reloc *reloc;
+	uint64_t start, end, offset;
	unsigned idx;

	relocs_chunk = &p->chunks[p->chunk_relocs_idx];
···
		return -EINVAL;
	}

-	offset += p->relocs_ptr[(idx / 4)]->gpu_offset;
+	reloc = p->relocs_ptr[(idx / 4)];
+	start = reloc->gpu_offset;
+	end = start + radeon_bo_size(reloc->robj);
+	start += offset;

-	p->ib.ptr[lo] = offset & 0xFFFFFFFF;
-	p->ib.ptr[hi] = offset >> 32;
+	p->ib.ptr[lo] = start & 0xFFFFFFFF;
+	p->ib.ptr[hi] = start >> 32;
+
+	if (end <= start) {
+		DRM_ERROR("invalid reloc offset %llX!\n", offset);
+		return -EINVAL;
+	}
+	if ((end - start) < size) {
+		DRM_ERROR("buffer to small (%d / %d)!\n",
+			  (unsigned)(end - start), size);
+		return -EINVAL;
+	}

	return 0;
+}
+
+/**
+ * radeon_vce_validate_handle - validate stream handle
+ *
+ * @p: parser context
+ * @handle: handle to validate
+ *
+ * Validates the handle and return the found session index or -EINVAL
+ * we we don't have another free session index.
+ */
+int radeon_vce_validate_handle(struct radeon_cs_parser *p, uint32_t handle)
+{
+	unsigned i;
+
+	/* validate the handle */
+	for (i = 0; i < RADEON_MAX_VCE_HANDLES; ++i) {
+		if (atomic_read(&p->rdev->vce.handles[i]) == handle)
+			return i;
+	}
+
+	/* handle not found try to alloc a new one */
+	for (i = 0; i < RADEON_MAX_VCE_HANDLES; ++i) {
+		if (!atomic_cmpxchg(&p->rdev->vce.handles[i], 0, handle)) {
+			p->rdev->vce.filp[i] = p->filp;
+			p->rdev->vce.img_size[i] = 0;
+			return i;
+		}
+	}
+
+	DRM_ERROR("No more free VCE handles!\n");
+	return -EINVAL;
 }

 /**
···
  */
 int radeon_vce_cs_parse(struct radeon_cs_parser *p)
 {
-	uint32_t handle = 0;
-	bool destroy = false;
+	int session_idx = -1;
+	bool destroyed = false;
+	uint32_t tmp, handle = 0;
+	uint32_t *size = &tmp;
	int i, r;

	while (p->idx < p->chunks[p->chunk_ib_idx].length_dw) {
···
			return -EINVAL;
		}

+		if (destroyed) {
+			DRM_ERROR("No other command allowed after destroy!\n");
+			return -EINVAL;
+		}
+
		switch (cmd) {
		case 0x00000001: // session
			handle = radeon_get_ib_value(p, p->idx + 2);
+			session_idx = radeon_vce_validate_handle(p, handle);
+			if (session_idx < 0)
+				return session_idx;
+			size = &p->rdev->vce.img_size[session_idx];
			break;

		case 0x00000002: // task info
+			break;
+
		case 0x01000001: // create
+			*size = radeon_get_ib_value(p, p->idx + 8) *
+				radeon_get_ib_value(p, p->idx + 10) *
+				8 * 3 / 2;
+			break;
+
		case 0x04000001: // config extension
		case 0x04000002: // pic control
		case 0x04000005: // rate control
···
			break;

		case 0x03000001: // encode
-			r = radeon_vce_cs_reloc(p, p->idx + 10, p->idx + 9);
+			r = radeon_vce_cs_reloc(p, p->idx + 10, p->idx + 9,
+						*size);
			if (r)
				return r;

-			r = radeon_vce_cs_reloc(p, p->idx + 12, p->idx + 11);
+			r = radeon_vce_cs_reloc(p, p->idx + 12, p->idx + 11,
+						*size / 3);
			if (r)
				return r;
			break;

		case 0x02000001: // destroy
-			destroy = true;
+			destroyed = true;
			break;

		case 0x05000001: // context buffer
+			r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2,
+						*size * 2);
+			if (r)
+				return r;
+			break;
+
		case 0x05000004: // video bitstream buffer
+			tmp = radeon_get_ib_value(p, p->idx + 4);
+			r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2,
+						tmp);
+			if (r)
+				return r;
+			break;
+
		case 0x05000005: // feedback buffer
-			r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2);
+			r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2,
+						4096);
			if (r)
				return r;
			break;
···
			return -EINVAL;
		}

+		if (session_idx == -1) {
+			DRM_ERROR("no session command at start of IB\n");
+			return -EINVAL;
+		}
+
		p->idx += len / 4;
	}

-	if (destroy) {
+	if (destroyed) {
		/* IB contains a destroy msg, free the handle */
		for (i = 0; i < RADEON_MAX_VCE_HANDLES; ++i)
			atomic_cmpxchg(&p->rdev->vce.handles[i], handle, 0);
-
-		return 0;
-	}
-
-	/* create or encode, validate the handle */
-	for (i = 0; i < RADEON_MAX_VCE_HANDLES; ++i) {
-		if (atomic_read(&p->rdev->vce.handles[i]) == handle)
-			return 0;
	}

-	/* handle not found try to alloc a new one */
-	for (i = 0; i < RADEON_MAX_VCE_HANDLES; ++i) {
-		if (!atomic_cmpxchg(&p->rdev->vce.handles[i], 0, handle)) {
-			p->rdev->vce.filp[i] = p->filp;
-			return 0;
-		}
-	}
-
-	DRM_ERROR("No more free VCE handles!\n");
-	return -EINVAL;
+	return 0;
 }

 /**
drivers/gpu/drm/radeon/radeon_vm.c | +1 -1
···
		ndw = 64;

	/* assume the worst case */
-	ndw += vm->max_pde_used * 12;
+	ndw += vm->max_pde_used * 16;

	/* update too big for an IB */
	if (ndw > 0xfffff)
···

 config SENSORS_NTC_THERMISTOR
	tristate "NTC thermistor support"
-	depends on (!OF && !IIO) || (OF && IIO)
+	depends on !OF || IIO=n || IIO
	help
	  This driver supports NTC thermistors sensor reading and its
	  interpretation. The driver can also monitor the temperature and
···
	int steer_qpn_count;
	int steer_qpn_base;
	int steering_support;
+	struct mlx4_ib_qp      *qp1_proxy[MLX4_MAX_PORTS];
+	/* lock when destroying qp1_proxy and getting netdev events */
+	struct mutex		qp1_proxy_lock[MLX4_MAX_PORTS];
 };

 struct ib_event_work {
···
 }

 /* Forward declaration */
-static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[]);
+static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[],
+				      bool strict_match);
 static void rlb_purge_src_ip(struct bonding *bond, struct arp_pkt *arp);
 static void rlb_src_unlink(struct bonding *bond, u32 index);
 static void rlb_src_link(struct bonding *bond, u32 ip_src_hash,
···

	bond->alb_info.rlb_promisc_timeout_counter = 0;

-	alb_send_learning_packets(bond->curr_active_slave, addr);
+	alb_send_learning_packets(bond->curr_active_slave, addr, true);
 }

 /* slave being removed should not be active at this point
···
 /*********************** tlb/rlb shared functions *********************/

 static void alb_send_lp_vid(struct slave *slave, u8 mac_addr[],
-			    u16 vid)
+			    __be16 vlan_proto, u16 vid)
 {
	struct learning_pkt pkt;
	struct sk_buff *skb;
···
	skb->dev = slave->dev;

	if (vid) {
-		skb = vlan_put_tag(skb, htons(ETH_P_8021Q), vid);
+		skb = vlan_put_tag(skb, vlan_proto, vid);
		if (!skb) {
			pr_err("%s: Error: failed to insert VLAN tag\n",
			       slave->bond->dev->name);
···
	dev_queue_xmit(skb);
 }

-
-static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[])
+static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[],
+				      bool strict_match)
 {
	struct bonding *bond = bond_get_bond_by_slave(slave);
	struct net_device *upper;
	struct list_head *iter;

	/* send untagged */
-	alb_send_lp_vid(slave, mac_addr, 0);
+	alb_send_lp_vid(slave, mac_addr, 0, 0);

	/* loop through vlans and send one packet for each */
	rcu_read_lock();
	netdev_for_each_all_upper_dev_rcu(bond->dev, upper, iter) {
-		if (upper->priv_flags & IFF_802_1Q_VLAN)
-			alb_send_lp_vid(slave, mac_addr,
-					vlan_dev_vlan_id(upper));
+		if (is_vlan_dev(upper) && vlan_get_encap_level(upper) == 0) {
+			if (strict_match &&
+			    ether_addr_equal_64bits(mac_addr,
+						    upper->dev_addr)) {
+				alb_send_lp_vid(slave, mac_addr,
+						vlan_dev_vlan_proto(upper),
+						vlan_dev_vlan_id(upper));
+			} else if (!strict_match) {
+				alb_send_lp_vid(slave, upper->dev_addr,
+						vlan_dev_vlan_proto(upper),
+						vlan_dev_vlan_id(upper));
+			}
+		}
	}
	rcu_read_unlock();
 }
···

	/* fasten the change in the switch */
	if (SLAVE_IS_OK(slave1)) {
-		alb_send_learning_packets(slave1, slave1->dev->dev_addr);
+		alb_send_learning_packets(slave1, slave1->dev->dev_addr, false);
		if (bond->alb_info.rlb_enabled) {
			/* inform the clients that the mac address
			 * has changed
···
	}

	if (SLAVE_IS_OK(slave2)) {
-		alb_send_learning_packets(slave2, slave2->dev->dev_addr);
+		alb_send_learning_packets(slave2, slave2->dev->dev_addr, false);
		if (bond->alb_info.rlb_enabled) {
			/* inform the clients that the mac address
			 * has changed
···

	/* send learning packets */
	if (bond_info->lp_counter >= BOND_ALB_LP_TICKS(bond)) {
+		bool strict_match;
+
		/* change of curr_active_slave involves swapping of mac addresses.
		 * in order to avoid this swapping from happening while
		 * sending the learning packets, the curr_slave_lock must be held for
···
		 */
		read_lock(&bond->curr_slave_lock);

-		bond_for_each_slave_rcu(bond, slave, iter)
-			alb_send_learning_packets(slave, slave->dev->dev_addr);
+		bond_for_each_slave_rcu(bond, slave, iter) {
+			/* If updating current_active, use all currently
+			 * user mac addreses (!strict_match).  Otherwise, only
+			 * use mac of the slave device.
+			 */
+			strict_match = (slave != bond->curr_active_slave);
+			alb_send_learning_packets(slave, slave->dev->dev_addr,
+						  strict_match);
+		}

		read_unlock(&bond->curr_slave_lock);
···
	} else {
		/* set the new_slave to the bond mac address */
		alb_set_slave_mac_addr(new_slave, bond->dev->dev_addr);
-		alb_send_learning_packets(new_slave, bond->dev->dev_addr);
+		alb_send_learning_packets(new_slave, bond->dev->dev_addr,
+					  false);
	}

	write_lock_bh(&bond->curr_slave_lock);
···
	alb_set_slave_mac_addr(bond->curr_active_slave, bond_dev->dev_addr);

	read_lock(&bond->lock);
-	alb_send_learning_packets(bond->curr_active_slave, bond_dev->dev_addr);
+	alb_send_learning_packets(bond->curr_active_slave,
+				  bond_dev->dev_addr, false);
	if (bond->alb_info.rlb_enabled) {
		/* inform clients mac address has changed */
		rlb_req_update_slave_clients(bond, bond->curr_active_slave);
drivers/net/bonding/bond_main.c | +65 -69
···
  */
 static void bond_arp_send(struct net_device *slave_dev, int arp_op,
			  __be32 dest_ip, __be32 src_ip,
-			  struct bond_vlan_tag *inner,
-			  struct bond_vlan_tag *outer)
+			  struct bond_vlan_tag *tags)
 {
	struct sk_buff *skb;
+	int i;

	pr_debug("arp %d on slave %s: dst %pI4 src %pI4\n",
		 arp_op, slave_dev->name, &dest_ip, &src_ip);
···
		net_err_ratelimited("ARP packet allocation failed\n");
		return;
	}
-	if (outer->vlan_id) {
-		if (inner->vlan_id) {
-			pr_debug("inner tag: proto %X vid %X\n",
-				 ntohs(inner->vlan_proto), inner->vlan_id);
-			skb = __vlan_put_tag(skb, inner->vlan_proto,
-					     inner->vlan_id);
-			if (!skb) {
-				net_err_ratelimited("failed to insert inner VLAN tag\n");
-				return;
-			}
-		}

-		pr_debug("outer reg: proto %X vid %X\n",
-			 ntohs(outer->vlan_proto), outer->vlan_id);
-		skb = vlan_put_tag(skb, outer->vlan_proto, outer->vlan_id);
+	/* Go through all the tags backwards and add them to the packet */
+	for (i = BOND_MAX_VLAN_ENCAP - 1; i > 0; i--) {
+		if (!tags[i].vlan_id)
+			continue;
+
+		pr_debug("inner tag: proto %X vid %X\n",
+			 ntohs(tags[i].vlan_proto), tags[i].vlan_id);
+		skb = __vlan_put_tag(skb, tags[i].vlan_proto,
+				     tags[i].vlan_id);
+		if (!skb) {
+			net_err_ratelimited("failed to insert inner VLAN tag\n");
+			return;
+		}
+	}
+	/* Set the outer tag */
+	if (tags[0].vlan_id) {
+		pr_debug("outer tag: proto %X vid %X\n",
+			 ntohs(tags[0].vlan_proto), tags[0].vlan_id);
+		skb = vlan_put_tag(skb, tags[0].vlan_proto, tags[0].vlan_id);
		if (!skb) {
			net_err_ratelimited("failed to insert outer VLAN tag\n");
			return;
···
	arp_xmit(skb);
 }

+/* Validate the device path between the @start_dev and the @end_dev.
+ * The path is valid if the @end_dev is reachable through device
+ * stacking.
+ * When the path is validated, collect any vlan information in the
+ * path.
+ */
+static bool bond_verify_device_path(struct net_device *start_dev,
+				    struct net_device *end_dev,
+				    struct bond_vlan_tag *tags)
+{
+	struct net_device *upper;
+	struct list_head *iter;
+	int idx;
+
+	if (start_dev == end_dev)
+		return true;
+
+	netdev_for_each_upper_dev_rcu(start_dev, upper, iter) {
+		if (bond_verify_device_path(upper, end_dev, tags)) {
+			if (is_vlan_dev(upper)) {
+				idx = vlan_get_encap_level(upper);
+				if (idx >= BOND_MAX_VLAN_ENCAP)
+					return false;
+
+				tags[idx].vlan_proto =
+						vlan_dev_vlan_proto(upper);
+				tags[idx].vlan_id = vlan_dev_vlan_id(upper);
+			}
+			return true;
+		}
+	}
+
+	return false;
+}

 static void bond_arp_send_all(struct bonding *bond, struct slave *slave)
 {
-	struct net_device *upper, *vlan_upper;
-	struct list_head *iter, *vlan_iter;
	struct rtable *rt;
-	struct bond_vlan_tag inner, outer;
+	struct bond_vlan_tag tags[BOND_MAX_VLAN_ENCAP];
	__be32 *targets = bond->params.arp_targets, addr;
	int i;
+	bool ret;

	for (i = 0; i < BOND_MAX_ARP_TARGETS && targets[i]; i++) {
		pr_debug("basa: target %pI4\n", &targets[i]);
-		inner.vlan_proto = 0;
-		inner.vlan_id = 0;
-		outer.vlan_proto = 0;
-		outer.vlan_id = 0;
+		memset(tags, 0, sizeof(tags));

		/* Find out through which dev should the packet go */
		rt = ip_route_output(dev_net(bond->dev), targets[i], 0,
···
			net_warn_ratelimited("%s: no route to arp_ip_target %pI4 and arp_validate is set\n",
					     bond->dev->name,
					     &targets[i]);
-			bond_arp_send(slave->dev, ARPOP_REQUEST, targets[i], 0, &inner, &outer);
+			bond_arp_send(slave->dev, ARPOP_REQUEST, targets[i],
+				      0, tags);
			continue;
		}
···
			goto found;

		rcu_read_lock();
-		/* first we search only for vlan devices. for every vlan
-		 * found we verify its upper dev list, searching for the
-		 * rt->dst.dev. If found we save the tag of the vlan and
-		 * proceed to send the packet.
-		 */
-		netdev_for_each_all_upper_dev_rcu(bond->dev, vlan_upper,
-						  vlan_iter) {
-			if (!is_vlan_dev(vlan_upper))
-				continue;
-
-			if (vlan_upper == rt->dst.dev) {
-				outer.vlan_proto = vlan_dev_vlan_proto(vlan_upper);
-				outer.vlan_id = vlan_dev_vlan_id(vlan_upper);
-				rcu_read_unlock();
-				goto found;
-			}
-			netdev_for_each_all_upper_dev_rcu(vlan_upper, upper,
-							  iter) {
-				if (upper == rt->dst.dev) {
-					/* If the upper dev is a vlan dev too,
-					 *  set the vlan tag to inner tag.
-					 */
-					if (is_vlan_dev(upper)) {
-						inner.vlan_proto = vlan_dev_vlan_proto(upper);
-						inner.vlan_id = vlan_dev_vlan_id(upper);
-					}
-					outer.vlan_proto = vlan_dev_vlan_proto(vlan_upper);
-					outer.vlan_id = vlan_dev_vlan_id(vlan_upper);
-					rcu_read_unlock();
-					goto found;
-				}
-			}
-		}
-
-		/* if the device we're looking for is not on top of any of
-		 * our upper vlans, then just search for any dev that
-		 * matches, and in case it's a vlan - save the id
-		 */
-		netdev_for_each_all_upper_dev_rcu(bond->dev, upper, iter) {
-			if (upper == rt->dst.dev) {
-				rcu_read_unlock();
-				goto found;
-			}
-		}
+		ret = bond_verify_device_path(bond->dev, rt->dst.dev, tags);
		rcu_read_unlock();
+
+		if (ret)
+			goto found;

		/* Not our device - skip */
		pr_debug("%s: no path to arp_ip_target %pI4 via rt.dev %s\n",
···
		addr = bond_confirm_addr(rt->dst.dev, targets[i], 0);
		ip_rt_put(rt);
		bond_arp_send(slave->dev, ARPOP_REQUEST, targets[i],
-			      addr, &inner, &outer);
+			      addr, tags);
	}
 }
···
 	  SPEAr1310 and SPEAr320 evaluation boards & TI (www.ti.com)
 	  boards like am335x, dm814x, dm813x and dm811x.

-config CAN_C_CAN_STRICT_FRAME_ORDERING
-	bool "Force a strict RX CAN frame order (may cause frame loss)"
-	---help---
-	  The RX split buffer prevents packet reordering but can cause packet
-	  loss. Only enable this option when you accept to lose CAN frames
-	  in favour of getting the received CAN frames in the correct order.
-
 config CAN_C_CAN_PCI
 	tristate "Generic PCI Bus based C_CAN/D_CAN driver"
 	depends on PCI
drivers/net/can/c_can/c_can.c (-36)
···
 static inline void c_can_rx_object_get(struct net_device *dev,
 				       struct c_can_priv *priv, u32 obj)
 {
-#ifdef CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING
-	if (obj < C_CAN_MSG_RX_LOW_LAST)
-		c_can_object_get(dev, IF_RX, obj, IF_COMM_RCV_LOW);
-	else
-#endif
 		c_can_object_get(dev, IF_RX, obj, priv->comm_rcv_high);
 }

 static inline void c_can_rx_finalize(struct net_device *dev,
 				     struct c_can_priv *priv, u32 obj)
 {
-#ifdef CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING
-	if (obj < C_CAN_MSG_RX_LOW_LAST)
-		priv->rxmasked |= BIT(obj - 1);
-	else if (obj == C_CAN_MSG_RX_LOW_LAST) {
-		priv->rxmasked = 0;
-		/* activate all lower message objects */
-		c_can_activate_all_lower_rx_msg_obj(dev, IF_RX);
-	}
-#endif
 	if (priv->type != BOSCH_D_CAN)
 		c_can_object_get(dev, IF_RX, obj, IF_COMM_CLR_NEWDAT);
 }
···
 {
 	u32 pend = priv->read_reg(priv, C_CAN_NEWDAT1_REG);

-#ifdef CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING
-	pend &= ~priv->rxmasked;
-#endif
 	return pend;
 }
···
 * INTPND are set for this message object indicating that a new message
 * has arrived. To work-around this issue, we keep two groups of message
 * objects whose partitioning is defined by C_CAN_MSG_OBJ_RX_SPLIT.
- *
- * If CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING = y
- *
- * To ensure in-order frame reception we use the following
- * approach while re-activating a message object to receive further
- * frames:
- * - if the current message object number is lower than
- *   C_CAN_MSG_RX_LOW_LAST, do not clear the NEWDAT bit while clearing
- *   the INTPND bit.
- * - if the current message object number is equal to
- *   C_CAN_MSG_RX_LOW_LAST then clear the NEWDAT bit of all lower
- *   receive message objects.
- * - if the current message object number is greater than
- *   C_CAN_MSG_RX_LOW_LAST then clear the NEWDAT bit of
- *   only this message object.
- *
- * This can cause packet loss!
- *
- * If CONFIG_CAN_C_CAN_STRICT_FRAME_ORDERING = n
 *
 * We clear the newdat bit right away.
 *
drivers/net/can/sja1000/peak_pci.c (+9, -5)
···
 {
 	struct sja1000_priv *priv;
 	struct peak_pci_chan *chan;
-	struct net_device *dev;
+	struct net_device *dev, *prev_dev;
 	void __iomem *cfg_base, *reg_base;
 	u16 sub_sys_id, icr;
 	int i, err, channels;
···
 	writew(0x0, cfg_base + PITA_ICR + 2);

 	chan = NULL;
-	for (dev = pci_get_drvdata(pdev); dev; dev = chan->prev_dev) {
-		unregister_sja1000dev(dev);
-		free_sja1000dev(dev);
+	for (dev = pci_get_drvdata(pdev); dev; dev = prev_dev) {
 		priv = netdev_priv(dev);
 		chan = priv->priv;
+		prev_dev = chan->prev_dev;
+
+		unregister_sja1000dev(dev);
+		free_sja1000dev(dev);
 	}

 	/* free any PCIeC resources too */
···

 	/* Loop over all registered devices */
 	while (1) {
+		struct net_device *prev_dev = chan->prev_dev;
+
 		dev_info(&pdev->dev, "removing device %s\n", dev->name);
 		unregister_sja1000dev(dev);
 		free_sja1000dev(dev);
-		dev = chan->prev_dev;
+		dev = prev_dev;

 		if (!dev) {
 			/* do that only for first channel */
drivers/net/ethernet/Kconfig (+12)
···
 source "drivers/net/ethernet/chelsio/Kconfig"
 source "drivers/net/ethernet/cirrus/Kconfig"
 source "drivers/net/ethernet/cisco/Kconfig"
+
+config CX_ECAT
+	tristate "Beckhoff CX5020 EtherCAT master support"
+	depends on PCI
+	---help---
+	  Driver for EtherCAT master module located on CCAT FPGA
+	  that can be found on Beckhoff CX5020, and possibly other
+	  Beckhoff CX series industrial PCs.
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called ec_bhf.
+
 source "drivers/net/ethernet/davicom/Kconfig"

 config DNET
···
 out:
 	bnx2x_vfpf_finalize(bp, &req->first_tlv);

-	return 0;
+	return rc;
 }

 /* request pf to config rss table for vf queues*/
drivers/net/ethernet/ec_bhf.c (+706)
···
+/*
+ * drivers/net/ethernet/beckhoff/ec_bhf.c
+ *
+ * Copyright (C) 2014 Darek Marcinkiewicz <reksio@newterm.pl>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+/* This is a driver for EtherCAT master module present on CCAT FPGA.
+ * Those can be found on Beckhoff CX50xx industrial PCs.
+ */
+
+#if 0
+#define DEBUG
+#endif
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/ip.h>
+#include <linux/skbuff.h>
+#include <linux/hrtimer.h>
+#include <linux/interrupt.h>
+#include <linux/stat.h>
+
+#define TIMER_INTERVAL_NSEC	20000
+
+#define INFO_BLOCK_SIZE		0x10
+#define INFO_BLOCK_TYPE		0x0
+#define INFO_BLOCK_REV		0x2
+#define INFO_BLOCK_BLK_CNT	0x4
+#define INFO_BLOCK_TX_CHAN	0x4
+#define INFO_BLOCK_RX_CHAN	0x5
+#define INFO_BLOCK_OFFSET	0x8
+
+#define EC_MII_OFFSET		0x4
+#define EC_FIFO_OFFSET		0x8
+#define EC_MAC_OFFSET		0xc
+
+#define MAC_FRAME_ERR_CNT	0x0
+#define MAC_RX_ERR_CNT		0x1
+#define MAC_CRC_ERR_CNT		0x2
+#define MAC_LNK_LST_ERR_CNT	0x3
+#define MAC_TX_FRAME_CNT	0x10
+#define MAC_RX_FRAME_CNT	0x14
+#define MAC_TX_FIFO_LVL		0x20
+#define MAC_DROPPED_FRMS	0x28
+#define MAC_CONNECTED_CCAT_FLAG	0x78
+
+#define MII_MAC_ADDR		0x8
+#define MII_MAC_FILT_FLAG	0xe
+#define MII_LINK_STATUS		0xf
+
+#define FIFO_TX_REG		0x0
+#define FIFO_TX_RESET		0x8
+#define FIFO_RX_REG		0x10
+#define FIFO_RX_ADDR_VALID	(1u << 31)
+#define FIFO_RX_RESET		0x18
+
+#define DMA_CHAN_OFFSET		0x1000
+#define DMA_CHAN_SIZE		0x8
+
+#define DMA_WINDOW_SIZE_MASK	0xfffffffc
+
+static struct pci_device_id ids[] = {
+	{ PCI_DEVICE(0x15ec, 0x5000), },
+	{ 0, }
+};
+MODULE_DEVICE_TABLE(pci, ids);
+
+struct rx_header {
+#define RXHDR_NEXT_ADDR_MASK	0xffffffu
+#define RXHDR_NEXT_VALID	(1u << 31)
+	__le32 next;
+#define RXHDR_NEXT_RECV_FLAG	0x1
+	__le32 recv;
+#define RXHDR_LEN_MASK		0xfffu
+	__le16 len;
+	__le16 port;
+	__le32 reserved;
+	u8 timestamp[8];
+} __packed;
+
+#define PKT_PAYLOAD_SIZE	0x7e8
+struct rx_desc {
+	struct rx_header header;
+	u8 data[PKT_PAYLOAD_SIZE];
+} __packed;
+
+struct tx_header {
+	__le16 len;
+#define TX_HDR_PORT_0		0x1
+#define TX_HDR_PORT_1		0x2
+	u8 port;
+	u8 ts_enable;
+#define TX_HDR_SENT		0x1
+	__le32 sent;
+	u8 timestamp[8];
+} __packed;
+
+struct tx_desc {
+	struct tx_header header;
+	u8 data[PKT_PAYLOAD_SIZE];
+} __packed;
+
+#define FIFO_SIZE		64
+
+static long polling_frequency = TIMER_INTERVAL_NSEC;
+
+struct bhf_dma {
+	u8 *buf;
+	size_t len;
+	dma_addr_t buf_phys;
+
+	u8 *alloc;
+	size_t alloc_len;
+	dma_addr_t alloc_phys;
+};
+
+struct ec_bhf_priv {
+	struct net_device *net_dev;
+
+	struct pci_dev *dev;
+
+	void * __iomem io;
+	void * __iomem dma_io;
+
+	struct hrtimer hrtimer;
+
+	int tx_dma_chan;
+	int rx_dma_chan;
+	void * __iomem ec_io;
+	void * __iomem fifo_io;
+	void * __iomem mii_io;
+	void * __iomem mac_io;
+
+	struct bhf_dma rx_buf;
+	struct rx_desc *rx_descs;
+	int rx_dnext;
+	int rx_dcount;
+
+	struct bhf_dma tx_buf;
+	struct tx_desc *tx_descs;
+	int tx_dcount;
+	int tx_dnext;
+
+	u64 stat_rx_bytes;
+	u64 stat_tx_bytes;
+};
+
+#define PRIV_TO_DEV(priv) (&(priv)->dev->dev)
+
+#define ETHERCAT_MASTER_ID	0x14
+
+static void ec_bhf_print_status(struct ec_bhf_priv *priv)
+{
+	struct device *dev = PRIV_TO_DEV(priv);
+
+	dev_dbg(dev, "Frame error counter: %d\n",
+		ioread8(priv->mac_io + MAC_FRAME_ERR_CNT));
+	dev_dbg(dev, "RX error counter: %d\n",
+		ioread8(priv->mac_io + MAC_RX_ERR_CNT));
+	dev_dbg(dev, "CRC error counter: %d\n",
+		ioread8(priv->mac_io + MAC_CRC_ERR_CNT));
+	dev_dbg(dev, "TX frame counter: %d\n",
+		ioread32(priv->mac_io + MAC_TX_FRAME_CNT));
+	dev_dbg(dev, "RX frame counter: %d\n",
+		ioread32(priv->mac_io + MAC_RX_FRAME_CNT));
+	dev_dbg(dev, "TX fifo level: %d\n",
+		ioread8(priv->mac_io + MAC_TX_FIFO_LVL));
+	dev_dbg(dev, "Dropped frames: %d\n",
+		ioread8(priv->mac_io + MAC_DROPPED_FRMS));
+	dev_dbg(dev, "Connected with CCAT slot: %d\n",
+		ioread8(priv->mac_io + MAC_CONNECTED_CCAT_FLAG));
+	dev_dbg(dev, "Link status: %d\n",
+		ioread8(priv->mii_io + MII_LINK_STATUS));
+}
+
+static void ec_bhf_reset(struct ec_bhf_priv *priv)
+{
+	iowrite8(0, priv->mac_io + MAC_FRAME_ERR_CNT);
+	iowrite8(0, priv->mac_io + MAC_RX_ERR_CNT);
+	iowrite8(0, priv->mac_io + MAC_CRC_ERR_CNT);
+	iowrite8(0, priv->mac_io + MAC_LNK_LST_ERR_CNT);
+	iowrite32(0, priv->mac_io + MAC_TX_FRAME_CNT);
+	iowrite32(0, priv->mac_io + MAC_RX_FRAME_CNT);
+	iowrite8(0, priv->mac_io + MAC_DROPPED_FRMS);
+
+	iowrite8(0, priv->fifo_io + FIFO_TX_RESET);
+	iowrite8(0, priv->fifo_io + FIFO_RX_RESET);
+
+	iowrite8(0, priv->mac_io + MAC_TX_FIFO_LVL);
+}
+
+static void ec_bhf_send_packet(struct ec_bhf_priv *priv, struct tx_desc *desc)
+{
+	u32 len = le16_to_cpu(desc->header.len) + sizeof(desc->header);
+	u32 addr = (u8 *)desc - priv->tx_buf.buf;
+
+	iowrite32((ALIGN(len, 8) << 24) | addr, priv->fifo_io + FIFO_TX_REG);
+
+	dev_dbg(PRIV_TO_DEV(priv), "Done sending packet\n");
+}
+
+static int ec_bhf_desc_sent(struct tx_desc *desc)
+{
+	return le32_to_cpu(desc->header.sent) & TX_HDR_SENT;
+}
+
+static void ec_bhf_process_tx(struct ec_bhf_priv *priv)
+{
+	if (unlikely(netif_queue_stopped(priv->net_dev))) {
+		/* Make sure that we perceive changes to tx_dnext. */
+		smp_rmb();
+
+		if (ec_bhf_desc_sent(&priv->tx_descs[priv->tx_dnext]))
+			netif_wake_queue(priv->net_dev);
+	}
+}
+
+static int ec_bhf_pkt_received(struct rx_desc *desc)
+{
+	return le32_to_cpu(desc->header.recv) & RXHDR_NEXT_RECV_FLAG;
+}
+
+static void ec_bhf_add_rx_desc(struct ec_bhf_priv *priv, struct rx_desc *desc)
+{
+	iowrite32(FIFO_RX_ADDR_VALID | ((u8 *)(desc) - priv->rx_buf.buf),
+		  priv->fifo_io + FIFO_RX_REG);
+}
+
+static void ec_bhf_process_rx(struct ec_bhf_priv *priv)
+{
+	struct rx_desc *desc = &priv->rx_descs[priv->rx_dnext];
+	struct device *dev = PRIV_TO_DEV(priv);
+
+	while (ec_bhf_pkt_received(desc)) {
+		int pkt_size = (le16_to_cpu(desc->header.len) &
+			       RXHDR_LEN_MASK) - sizeof(struct rx_header) - 4;
+		u8 *data = desc->data;
+		struct sk_buff *skb;
+
+		skb = netdev_alloc_skb_ip_align(priv->net_dev, pkt_size);
+		dev_dbg(dev, "Received packet, size: %d\n", pkt_size);
+
+		if (skb) {
+			memcpy(skb_put(skb, pkt_size), data, pkt_size);
+			skb->protocol = eth_type_trans(skb, priv->net_dev);
+			dev_dbg(dev, "Protocol type: %x\n", skb->protocol);
+
+			priv->stat_rx_bytes += pkt_size;
+
+			netif_rx(skb);
+		} else {
+			dev_err_ratelimited(dev,
+				"Couldn't allocate an sk_buff for a packet of size %u\n",
+				pkt_size);
+		}
+
+		desc->header.recv = 0;
+
+		ec_bhf_add_rx_desc(priv, desc);
+
+		priv->rx_dnext = (priv->rx_dnext + 1) % priv->rx_dcount;
+		desc = &priv->rx_descs[priv->rx_dnext];
+	}
+
+}
+
+static enum hrtimer_restart ec_bhf_timer_fun(struct hrtimer *timer)
+{
+	struct ec_bhf_priv *priv = container_of(timer, struct ec_bhf_priv,
+						hrtimer);
+	ec_bhf_process_rx(priv);
+	ec_bhf_process_tx(priv);
+
+	if (!netif_running(priv->net_dev))
+		return HRTIMER_NORESTART;
+
+	hrtimer_forward_now(timer, ktime_set(0, polling_frequency));
+	return HRTIMER_RESTART;
+}
+
+static int ec_bhf_setup_offsets(struct ec_bhf_priv *priv)
+{
+	struct device *dev = PRIV_TO_DEV(priv);
+	unsigned block_count, i;
+	void * __iomem ec_info;
+
+	dev_dbg(dev, "Info block:\n");
+	dev_dbg(dev, "Type of function: %x\n", (unsigned)ioread16(priv->io));
+	dev_dbg(dev, "Revision of function: %x\n",
+		(unsigned)ioread16(priv->io + INFO_BLOCK_REV));
+
+	block_count = ioread8(priv->io + INFO_BLOCK_BLK_CNT);
+	dev_dbg(dev, "Number of function blocks: %x\n", block_count);
+
+	for (i = 0; i < block_count; i++) {
+		u16 type = ioread16(priv->io + i * INFO_BLOCK_SIZE +
+				    INFO_BLOCK_TYPE);
+		if (type == ETHERCAT_MASTER_ID)
+			break;
+	}
+	if (i == block_count) {
+		dev_err(dev, "EtherCAT master with DMA block not found\n");
+		return -ENODEV;
+	}
+	dev_dbg(dev, "EtherCAT master with DMA block found at pos: %d\n", i);
+
+	ec_info = priv->io + i * INFO_BLOCK_SIZE;
+	dev_dbg(dev, "EtherCAT master revision: %d\n",
+		ioread16(ec_info + INFO_BLOCK_REV));
+
+	priv->tx_dma_chan = ioread8(ec_info + INFO_BLOCK_TX_CHAN);
+	dev_dbg(dev, "EtherCAT master tx dma channel: %d\n",
+		priv->tx_dma_chan);
+
+	priv->rx_dma_chan = ioread8(ec_info + INFO_BLOCK_RX_CHAN);
+	dev_dbg(dev, "EtherCAT master rx dma channel: %d\n",
+		priv->rx_dma_chan);
+
+	priv->ec_io = priv->io + ioread32(ec_info + INFO_BLOCK_OFFSET);
+	priv->mii_io = priv->ec_io + ioread32(priv->ec_io + EC_MII_OFFSET);
+	priv->fifo_io = priv->ec_io + ioread32(priv->ec_io + EC_FIFO_OFFSET);
+	priv->mac_io = priv->ec_io + ioread32(priv->ec_io + EC_MAC_OFFSET);
+
+	dev_dbg(dev,
+		"EtherCAT block address: %p, fifo address: %p, mii address: %p, mac address: %p\n",
+		priv->ec_io, priv->fifo_io, priv->mii_io, priv->mac_io);
+
+	return 0;
+}
+
+static netdev_tx_t ec_bhf_start_xmit(struct sk_buff *skb,
+				     struct net_device *net_dev)
+{
+	struct ec_bhf_priv *priv = netdev_priv(net_dev);
+	struct tx_desc *desc;
+	unsigned len;
+
+	dev_dbg(PRIV_TO_DEV(priv), "Starting xmit\n");
+
+	desc = &priv->tx_descs[priv->tx_dnext];
+
+	skb_copy_and_csum_dev(skb, desc->data);
+	len = skb->len;
+
+	memset(&desc->header, 0, sizeof(desc->header));
+	desc->header.len = cpu_to_le16(len);
+	desc->header.port = TX_HDR_PORT_0;
+
+	ec_bhf_send_packet(priv, desc);
+
+	priv->tx_dnext = (priv->tx_dnext + 1) % priv->tx_dcount;
+
+	if (!ec_bhf_desc_sent(&priv->tx_descs[priv->tx_dnext])) {
+		/* Make sure that updates to tx_dnext are perceived
+		 * by timer routine.
+		 */
+		smp_wmb();
+
+		netif_stop_queue(net_dev);
+
+		dev_dbg(PRIV_TO_DEV(priv), "Stopping netif queue\n");
+		ec_bhf_print_status(priv);
+	}
+
+	priv->stat_tx_bytes += len;
+
+	dev_kfree_skb(skb);
+
+	return NETDEV_TX_OK;
+}
+
+static int ec_bhf_alloc_dma_mem(struct ec_bhf_priv *priv,
+				struct bhf_dma *buf,
+				int channel,
+				int size)
+{
+	int offset = channel * DMA_CHAN_SIZE + DMA_CHAN_OFFSET;
+	struct device *dev = PRIV_TO_DEV(priv);
+	u32 mask;
+
+	iowrite32(0xffffffff, priv->dma_io + offset);
+
+	mask = ioread32(priv->dma_io + offset);
+	mask &= DMA_WINDOW_SIZE_MASK;
+	dev_dbg(dev, "Read mask %x for channel %d\n", mask, channel);
+
+	/* We want to allocate a chunk of memory that is:
+	 * - aligned to the mask we just read
+	 * - is of size 2^mask bytes (at most)
+	 * In order to ensure that we will allocate buffer of
+	 * 2 * 2^mask bytes.
+	 */
+	buf->len = min_t(int, ~mask + 1, size);
+	buf->alloc_len = 2 * buf->len;
+
+	dev_dbg(dev, "Allocating %d bytes for channel %d",
+		(int)buf->alloc_len, channel);
+	buf->alloc = dma_alloc_coherent(dev, buf->alloc_len, &buf->alloc_phys,
+					GFP_KERNEL);
+	if (buf->alloc == NULL) {
+		dev_info(dev, "Failed to allocate buffer\n");
+		return -ENOMEM;
+	}
+
+	buf->buf_phys = (buf->alloc_phys + buf->len) & mask;
+	buf->buf = buf->alloc + (buf->buf_phys - buf->alloc_phys);
+
+	iowrite32(0, priv->dma_io + offset + 4);
+	iowrite32(buf->buf_phys, priv->dma_io + offset);
+	dev_dbg(dev, "Buffer: %x and read from dev: %x",
+		(unsigned)buf->buf_phys, ioread32(priv->dma_io + offset));
+
+	return 0;
+}
+
+static void ec_bhf_setup_tx_descs(struct ec_bhf_priv *priv)
+{
+	int i = 0;
+
+	priv->tx_dcount = priv->tx_buf.len / sizeof(struct tx_desc);
+	priv->tx_descs = (struct tx_desc *)priv->tx_buf.buf;
+	priv->tx_dnext = 0;
+
+	for (i = 0; i < priv->tx_dcount; i++)
+		priv->tx_descs[i].header.sent = cpu_to_le32(TX_HDR_SENT);
+}
+
+static void ec_bhf_setup_rx_descs(struct ec_bhf_priv *priv)
+{
+	int i;
+
+	priv->rx_dcount = priv->rx_buf.len / sizeof(struct rx_desc);
+	priv->rx_descs = (struct rx_desc *)priv->rx_buf.buf;
+	priv->rx_dnext = 0;
+
+	for (i = 0; i < priv->rx_dcount; i++) {
+		struct rx_desc *desc = &priv->rx_descs[i];
+		u32 next;
+
+		if (i != priv->rx_dcount - 1)
+			next = (u8 *)(desc + 1) - priv->rx_buf.buf;
+		else
+			next = 0;
+		next |= RXHDR_NEXT_VALID;
+		desc->header.next = cpu_to_le32(next);
+		desc->header.recv = 0;
+		ec_bhf_add_rx_desc(priv, desc);
+	}
+}
+
+static int ec_bhf_open(struct net_device *net_dev)
+{
+	struct ec_bhf_priv *priv = netdev_priv(net_dev);
+	struct device *dev = PRIV_TO_DEV(priv);
+	int err = 0;
+
+	dev_info(dev, "Opening device\n");
+
+	ec_bhf_reset(priv);
+
+	err = ec_bhf_alloc_dma_mem(priv, &priv->rx_buf, priv->rx_dma_chan,
+				   FIFO_SIZE * sizeof(struct rx_desc));
+	if (err) {
+		dev_err(dev, "Failed to allocate rx buffer\n");
+		goto out;
+	}
+	ec_bhf_setup_rx_descs(priv);
+
+	dev_info(dev, "RX buffer allocated, address: %x\n",
+		 (unsigned)priv->rx_buf.buf_phys);
+
+	err = ec_bhf_alloc_dma_mem(priv, &priv->tx_buf, priv->tx_dma_chan,
+				   FIFO_SIZE * sizeof(struct tx_desc));
+	if (err) {
+		dev_err(dev, "Failed to allocate tx buffer\n");
+		goto error_rx_free;
+	}
+	dev_dbg(dev, "TX buffer allocated, address: %x\n",
+		(unsigned)priv->tx_buf.buf_phys);
+
+	iowrite8(0, priv->mii_io + MII_MAC_FILT_FLAG);
+
+	ec_bhf_setup_tx_descs(priv);
+
+	netif_start_queue(net_dev);
+
+	hrtimer_init(&priv->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	priv->hrtimer.function = ec_bhf_timer_fun;
+	hrtimer_start(&priv->hrtimer, ktime_set(0, polling_frequency),
+		      HRTIMER_MODE_REL);
+
+	dev_info(PRIV_TO_DEV(priv), "Device open\n");
+
+	ec_bhf_print_status(priv);
+
+	return 0;
+
+error_rx_free:
+	dma_free_coherent(dev, priv->rx_buf.alloc_len, priv->rx_buf.alloc,
+			  priv->rx_buf.alloc_phys);
+out:
+	return err;
+}
+
+static int ec_bhf_stop(struct net_device *net_dev)
+{
+	struct ec_bhf_priv *priv = netdev_priv(net_dev);
+	struct device *dev = PRIV_TO_DEV(priv);
+
+	hrtimer_cancel(&priv->hrtimer);
+
+	ec_bhf_reset(priv);
+
+	netif_tx_disable(net_dev);
+
+	dma_free_coherent(dev, priv->tx_buf.alloc_len,
+			  priv->tx_buf.alloc, priv->tx_buf.alloc_phys);
+	dma_free_coherent(dev, priv->rx_buf.alloc_len,
+			  priv->rx_buf.alloc, priv->rx_buf.alloc_phys);
+
+	return 0;
+}
+
+static struct rtnl_link_stats64 *
+ec_bhf_get_stats(struct net_device *net_dev,
+		 struct rtnl_link_stats64 *stats)
+{
+	struct ec_bhf_priv *priv = netdev_priv(net_dev);
+
+	stats->rx_errors = ioread8(priv->mac_io + MAC_RX_ERR_CNT) +
+				ioread8(priv->mac_io + MAC_CRC_ERR_CNT) +
+				ioread8(priv->mac_io + MAC_FRAME_ERR_CNT);
+	stats->rx_packets = ioread32(priv->mac_io + MAC_RX_FRAME_CNT);
+	stats->tx_packets = ioread32(priv->mac_io + MAC_TX_FRAME_CNT);
+	stats->rx_dropped = ioread8(priv->mac_io + MAC_DROPPED_FRMS);
+
+	stats->tx_bytes = priv->stat_tx_bytes;
+	stats->rx_bytes = priv->stat_rx_bytes;
+
+	return stats;
+}
+
+static const struct net_device_ops ec_bhf_netdev_ops = {
+	.ndo_start_xmit		= ec_bhf_start_xmit,
+	.ndo_open		= ec_bhf_open,
+	.ndo_stop		= ec_bhf_stop,
+	.ndo_get_stats64	= ec_bhf_get_stats,
+	.ndo_change_mtu		= eth_change_mtu,
+	.ndo_validate_addr	= eth_validate_addr,
+	.ndo_set_mac_address	= eth_mac_addr
+};
+
+static int ec_bhf_probe(struct pci_dev *dev, const struct pci_device_id *id)
+{
+	struct net_device *net_dev;
+	struct ec_bhf_priv *priv;
+	void * __iomem dma_io;
+	void * __iomem io;
+	int err = 0;
+
+	err = pci_enable_device(dev);
+	if (err)
+		return err;
+
+	pci_set_master(dev);
+
+	err = pci_set_dma_mask(dev, DMA_BIT_MASK(32));
+	if (err) {
+		dev_err(&dev->dev,
+			"Required dma mask not supported, failed to initialize device\n");
+		err = -EIO;
+		goto err_disable_dev;
+	}
+
+	err = pci_set_consistent_dma_mask(dev, DMA_BIT_MASK(32));
+	if (err) {
+		dev_err(&dev->dev,
+			"Required dma mask not supported, failed to initialize device\n");
+		goto err_disable_dev;
+	}
+
+	err = pci_request_regions(dev, "ec_bhf");
+	if (err) {
+		dev_err(&dev->dev, "Failed to request pci memory regions\n");
+		goto err_disable_dev;
+	}
+
+	io = pci_iomap(dev, 0, 0);
+	if (!io) {
+		dev_err(&dev->dev, "Failed to map pci card memory bar 0");
+		err = -EIO;
+		goto err_release_regions;
+	}
+
+	dma_io = pci_iomap(dev, 2, 0);
+	if (!dma_io) {
+		dev_err(&dev->dev, "Failed to map pci card memory bar 2");
+		err = -EIO;
+		goto err_unmap;
+	}
+
+	net_dev = alloc_etherdev(sizeof(struct ec_bhf_priv));
+	if (net_dev == 0) {
+		err = -ENOMEM;
+		goto err_unmap_dma_io;
+	}
+
+	pci_set_drvdata(dev, net_dev);
+	SET_NETDEV_DEV(net_dev, &dev->dev);
+
+	net_dev->features = 0;
+	net_dev->flags |= IFF_NOARP;
+
+	net_dev->netdev_ops = &ec_bhf_netdev_ops;
+
+	priv = netdev_priv(net_dev);
+	priv->net_dev = net_dev;
+	priv->io = io;
+	priv->dma_io = dma_io;
+	priv->dev = dev;
+
+	err = ec_bhf_setup_offsets(priv);
+	if (err < 0)
+		goto err_free_net_dev;
+
+	memcpy_fromio(net_dev->dev_addr, priv->mii_io + MII_MAC_ADDR, 6);
+
+	dev_dbg(&dev->dev, "CX5020 Ethercat master address: %pM\n",
+		net_dev->dev_addr);
+
+	err = register_netdev(net_dev);
+	if (err < 0)
+		goto err_free_net_dev;
+
+	return 0;
+
+err_free_net_dev:
+	free_netdev(net_dev);
+err_unmap_dma_io:
+	pci_iounmap(dev, dma_io);
+err_unmap:
+	pci_iounmap(dev, io);
+err_release_regions:
+	pci_release_regions(dev);
+err_disable_dev:
+	pci_clear_master(dev);
+	pci_disable_device(dev);
+
+	return err;
+}
+
+static void ec_bhf_remove(struct pci_dev *dev)
+{
+	struct net_device *net_dev = pci_get_drvdata(dev);
+	struct ec_bhf_priv *priv = netdev_priv(net_dev);
+
+	unregister_netdev(net_dev);
+	free_netdev(net_dev);
+
+	pci_iounmap(dev, priv->dma_io);
+	pci_iounmap(dev, priv->io);
+	pci_release_regions(dev);
+	pci_clear_master(dev);
+	pci_disable_device(dev);
+}
+
+static struct pci_driver pci_driver = {
+	.name		= "ec_bhf",
+	.id_table	= ids,
+	.probe		= ec_bhf_probe,
+	.remove		= ec_bhf_remove,
+};
+
+static int __init ec_bhf_init(void)
+{
+	return pci_register_driver(&pci_driver);
+}
+
+static void __exit ec_bhf_exit(void)
+{
+	pci_unregister_driver(&pci_driver);
+}
+
+module_init(ec_bhf_init);
+module_exit(ec_bhf_exit);
+
+module_param(polling_frequency, long, S_IRUGO);
+MODULE_PARM_DESC(polling_frequency, "Polling timer frequency in ns");
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Dariusz Marcinkiewicz <reksio@newterm.pl>");
drivers/net/ethernet/emulex/benet/be_main.c (+6)
···
 	if (status)
 		goto err;

+	/* On some BE3 FW versions, after a HW reset,
+	 * interrupts will remain disabled for each function.
+	 * So, explicitly enable interrupts
+	 */
+	be_intr_set(adapter, true);
+
 	/* tell fw we're ready to fire cmds */
 	status = be_cmd_fw_init(adapter);
 	if (status)
drivers/net/ethernet/jme.c (+47, -6)
···
 	return idx;
 }

-static void
+static int
 jme_fill_tx_map(struct pci_dev *pdev,
 		struct txdesc *txdesc,
 		struct jme_buffer_info *txbi,
···
 				page_offset,
 				len,
 				PCI_DMA_TODEVICE);
+
+	if (unlikely(pci_dma_mapping_error(pdev, dmaaddr)))
+		return -EINVAL;

 	pci_dma_sync_single_for_device(pdev,
 				       dmaaddr,
···

 	txbi->mapping = dmaaddr;
 	txbi->len = len;
+	return 0;
 }

-static void
+static void jme_drop_tx_map(struct jme_adapter *jme, int startidx, int count)
+{
+	struct jme_ring *txring = &(jme->txring[0]);
+	struct jme_buffer_info *txbi = txring->bufinf, *ctxbi;
+	int mask = jme->tx_ring_mask;
+	int j;
+
+	for (j = 0 ; j < count ; j++) {
+		ctxbi = txbi + ((startidx + j + 2) & (mask));
+		pci_unmap_page(jme->pdev,
+			       ctxbi->mapping,
+			       ctxbi->len,
+			       PCI_DMA_TODEVICE);
+
+		ctxbi->mapping = 0;
+		ctxbi->len = 0;
+	}
+
+}
+
+static int
 jme_map_tx_skb(struct jme_adapter *jme, struct sk_buff *skb, int idx)
 {
 	struct jme_ring *txring = &(jme->txring[0]);
···
 	int mask = jme->tx_ring_mask;
 	const struct skb_frag_struct *frag;
 	u32 len;
+	int ret = 0;

 	for (i = 0 ; i < nr_frags ; ++i) {
 		frag = &skb_shinfo(skb)->frags[i];
 		ctxdesc = txdesc + ((idx + i + 2) & (mask));
 		ctxbi = txbi + ((idx + i + 2) & (mask));

-		jme_fill_tx_map(jme->pdev, ctxdesc, ctxbi,
+		ret = jme_fill_tx_map(jme->pdev, ctxdesc, ctxbi,
 				skb_frag_page(frag),
 				frag->page_offset, skb_frag_size(frag), hidma);
+		if (ret) {
+			jme_drop_tx_map(jme, idx, i);
+			goto out;
+		}
+
 	}

 	len = skb_is_nonlinear(skb) ? skb_headlen(skb) : skb->len;
 	ctxdesc = txdesc + ((idx + 1) & (mask));
 	ctxbi = txbi + ((idx + 1) & (mask));
-	jme_fill_tx_map(jme->pdev, ctxdesc, ctxbi, virt_to_page(skb->data),
+	ret = jme_fill_tx_map(jme->pdev, ctxdesc, ctxbi, virt_to_page(skb->data),
 			offset_in_page(skb->data), len, hidma);
+	if (ret)
+		jme_drop_tx_map(jme, idx, i);
+
+out:
+	return ret;

 }
+

 static int
 jme_tx_tso(struct sk_buff *skb, __le16 *mss, u8 *flags)
···
 	struct txdesc *txdesc;
 	struct jme_buffer_info *txbi;
 	u8 flags;
+	int ret = 0;

 	txdesc = (struct txdesc *)txring->desc + idx;
 	txbi = txring->bufinf + idx;
···
 	if (jme_tx_tso(skb, &txdesc->desc1.mss, &flags))
 		jme_tx_csum(jme, skb, &flags);
 	jme_tx_vlan(skb, &txdesc->desc1.vlan, &flags);
-	jme_map_tx_skb(jme, skb, idx);
+	ret = jme_map_tx_skb(jme, skb, idx);
+	if (ret)
+		return ret;
+
 	txdesc->desc1.flags = flags;
 	/*
 	 * Set tx buffer info after telling NIC to send
···
 		return NETDEV_TX_BUSY;
 	}

-	jme_fill_tx_desc(jme, skb, idx);
+	if (jme_fill_tx_desc(jme, skb, idx))
+		return NETDEV_TX_OK;

 	jwrite32(jme, JME_TXCS, jme->reg_txcs |
 		 TXCS_SELECT_QUEUE0 |
···
 	int i;

 	for (i = 0; i < N_TX_RINGS; i++)
-		spin_lock(&cp->tx_lock[i]);
+		spin_lock_nested(&cp->tx_lock[i], i);
 }

 static inline void cas_lock_all(struct cas *cp)
···
 *	this number of packets were received (typically 1)
 * @passive2active: is auto switching from passive to active during scan allowed
 * @rxchain_sel_flags: RXON_RX_CHAIN_*
- * @max_out_time: in usecs, max out of serving channel time
+ * @max_out_time: in TUs, max out of serving channel time
 * @suspend_time: how long to pause scan when returning to service channel:
- *	bits 0-19: beacon interval in usecs (suspend before executing)
+ *	bits 0-19: beacon interval in TUs (suspend before executing)
 *	bits 20-23: reserved
 *	bits 24-31: number of beacons (suspend between channels)
 * @rxon_flags: RXON_FLG_*
···
 * @quiet_plcp_th: quiet channel num of packets threshold
 * @good_CRC_th: passive to active promotion threshold
 * @rx_chain: RXON rx chain.
- * @max_out_time: max uSec to be out of associated channel
- * @suspend_time: pause scan this long when returning to service channel
+ * @max_out_time: max TUs to be out of associated channel
+ * @suspend_time: pause scan for this many TUs when returning to service channel
 * @flags: RXON flags
 * @filter_flags: RXONfilter
 * @tx_cmd: tx command for active scan; for 2GHz and for 5GHz.
drivers/net/wireless/iwlwifi/mvm/mac80211.c (+7, -2)
···
 	memcpy(cmd->bssid, vif->bss_conf.bssid, ETH_ALEN);
 	len = roundup(sizeof(*cmd) + cmd->count * ETH_ALEN, 4);
 
-	ret = iwl_mvm_send_cmd_pdu(mvm, MCAST_FILTER_CMD, CMD_SYNC, len, cmd);
+	ret = iwl_mvm_send_cmd_pdu(mvm, MCAST_FILTER_CMD, CMD_ASYNC, len, cmd);
 	if (ret)
 		IWL_ERR(mvm, "mcast filter cmd error. ret=%d\n", ret);
 }
···
 	if (WARN_ON_ONCE(!mvm->mcast_filter_cmd))
 		return;
 
-	ieee80211_iterate_active_interfaces(
+	ieee80211_iterate_active_interfaces_atomic(
 		mvm->hw, IEEE80211_IFACE_ITER_NORMAL,
 		iwl_mvm_mc_iface_iterator, &iter_data);
 }
···
 	int ret;
 
 	mutex_lock(&mvm->mutex);
+
+	if (!iwl_mvm_is_idle(mvm)) {
+		ret = -EBUSY;
+		goto out;
+	}
 
 	switch (mvm->scan_status) {
 	case IWL_MVM_SCAN_OS:
···
 		return;
 	}
 
-#ifdef CPTCFG_MAC80211_DEBUGFS
+#ifdef CONFIG_MAC80211_DEBUGFS
 	/* Disable last tx check if we are debugging with fixed rate */
 	if (lq_sta->dbg_fixed_rate) {
 		IWL_DEBUG_RATE(mvm, "Fixed rate. avoid rate scaling\n");
drivers/net/wireless/iwlwifi/mvm/scan.c (+13, -42)
···
 					    IEEE80211_IFACE_ITER_NORMAL,
 					    iwl_mvm_scan_condition_iterator,
 					    &global_bound);
-	/*
-	 * Under low latency traffic passive scan is fragmented meaning
-	 * that dwell on a particular channel will be fragmented. Each fragment
-	 * dwell time is 20ms and fragments period is 105ms. Skipping to next
-	 * channel will be delayed by the same period - 105ms. So suspend_time
-	 * parameter describing both fragments and channels skipping periods is
-	 * set to 105ms. This value is chosen so that overall passive scan
-	 * duration will not be too long. Max_out_time in this case is set to
-	 * 70ms, so for active scanning operating channel will be left for 70ms
-	 * while for passive still for 20ms (fragment dwell).
-	 */
-	if (global_bound) {
-		if (!iwl_mvm_low_latency(mvm)) {
-			params->suspend_time = ieee80211_tu_to_usec(100);
-			params->max_out_time = ieee80211_tu_to_usec(600);
-		} else {
-			params->suspend_time = ieee80211_tu_to_usec(105);
-			/* P2P doesn't support fragmented passive scan, so
-			 * configure max_out_time to be at least longest dwell
-			 * time for passive scan.
-			 */
-			if (vif->type == NL80211_IFTYPE_STATION && !vif->p2p) {
-				params->max_out_time = ieee80211_tu_to_usec(70);
-				params->passive_fragmented = true;
-			} else {
-				u32 passive_dwell;
 
-				/*
-				 * Use band G so that passive channel dwell time
-				 * will be assigned with maximum value.
-				 */
-				band = IEEE80211_BAND_2GHZ;
-				passive_dwell = iwl_mvm_get_passive_dwell(band);
-				params->max_out_time =
-					ieee80211_tu_to_usec(passive_dwell);
-			}
-		}
+	if (!global_bound)
+		goto not_bound;
+
+	params->suspend_time = 100;
+	params->max_out_time = 600;
+
+	if (iwl_mvm_low_latency(mvm)) {
+		params->suspend_time = 250;
+		params->max_out_time = 250;
 	}
 
+not_bound:
+
 	for (band = IEEE80211_BAND_2GHZ; band < IEEE80211_NUM_BANDS; band++) {
-		if (params->passive_fragmented)
-			params->dwell[band].passive = 20;
-		else
-			params->dwell[band].passive =
-				iwl_mvm_get_passive_dwell(band);
+		params->dwell[band].passive = iwl_mvm_get_passive_dwell(band);
 		params->dwell[band].active = iwl_mvm_get_active_dwell(band,
 								      n_ssids);
 	}
···
 	int band_2ghz = mvm->nvm_data->bands[IEEE80211_BAND_2GHZ].n_channels;
 	int band_5ghz = mvm->nvm_data->bands[IEEE80211_BAND_5GHZ].n_channels;
 	int head = 0;
-	int tail = band_2ghz + band_5ghz;
+	int tail = band_2ghz + band_5ghz - 1;
 	u32 ssid_bitmap;
 	int cmd_len;
 	int ret;
···
 			    grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_check_rx_xenvif(struct xenvif *vif);
+void xenvif_napi_schedule_or_enable_events(struct xenvif *vif);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
drivers/net/xen-netback/interface.c (+3, -27)
···
 	work_done = xenvif_tx_action(vif, budget);
 
 	if (work_done < budget) {
-		int more_to_do = 0;
-		unsigned long flags;
-
-		/* It is necessary to disable IRQ before calling
-		 * RING_HAS_UNCONSUMED_REQUESTS. Otherwise we might
-		 * lose event from the frontend.
-		 *
-		 * Consider:
-		 *   RING_HAS_UNCONSUMED_REQUESTS
-		 *   <frontend generates event to trigger napi_schedule>
-		 *   __napi_complete
-		 *
-		 * This handler is still in scheduled state so the
-		 * event has no effect at all. After __napi_complete
-		 * this handler is descheduled and cannot get
-		 * scheduled again. We lose event in this case and the ring
-		 * will be completely stalled.
-		 */
-
-		local_irq_save(flags);
-
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
-		if (!more_to_do)
-			__napi_complete(napi);
-
-		local_irq_restore(flags);
+		napi_complete(napi);
+		xenvif_napi_schedule_or_enable_events(vif);
 	}
 
 	return work_done;
···
 	enable_irq(vif->tx_irq);
 	if (vif->tx_irq != vif->rx_irq)
 		enable_irq(vif->rx_irq);
-	xenvif_check_rx_xenvif(vif);
+	xenvif_napi_schedule_or_enable_events(vif);
 }
 
 static void xenvif_down(struct xenvif *vif)
drivers/net/xen-netback/netback.c (+82, -20)
···
 
 /* Find the containing VIF's structure from a pointer in pending_tx_info array
  */
-static inline struct xenvif* ubuf_to_vif(struct ubuf_info *ubuf)
+static inline struct xenvif *ubuf_to_vif(const struct ubuf_info *ubuf)
 {
 	u16 pending_idx = ubuf->desc;
 	struct pending_tx_info *temp =
···
 }
 
 /*
+ * Find the grant ref for a given frag in a chain of struct ubuf_info's
+ * skb: the skb itself
+ * i: the frag's number
+ * ubuf: a pointer to an element in the chain. It should not be NULL
+ *
+ * Returns a pointer to the element in the chain where the page were found. If
+ * not found, returns NULL.
+ * See the definition of callback_struct in common.h for more details about
+ * the chain.
+ */
+static const struct ubuf_info *xenvif_find_gref(const struct sk_buff *const skb,
+						const int i,
+						const struct ubuf_info *ubuf)
+{
+	struct xenvif *foreign_vif = ubuf_to_vif(ubuf);
+
+	do {
+		u16 pending_idx = ubuf->desc;
+
+		if (skb_shinfo(skb)->frags[i].page.p ==
+		    foreign_vif->mmap_pages[pending_idx])
+			break;
+		ubuf = (struct ubuf_info *) ubuf->ctx;
+	} while (ubuf);
+
+	return ubuf;
+}
+
+/*
  * Prepare an SKB to be transmitted to the frontend.
  *
  * This function is responsible for allocating grant operations, meta
···
 	int head = 1;
 	int old_meta_prod;
 	int gso_type;
-	struct ubuf_info *ubuf = skb_shinfo(skb)->destructor_arg;
-	grant_ref_t foreign_grefs[MAX_SKB_FRAGS];
-	struct xenvif *foreign_vif = NULL;
+	const struct ubuf_info *ubuf = skb_shinfo(skb)->destructor_arg;
+	const struct ubuf_info *const head_ubuf = ubuf;
 
 	old_meta_prod = npo->meta_prod;
 
···
 	npo->copy_off = 0;
 	npo->copy_gref = req->gref;
 
-	if ((skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) &&
-		 (ubuf->callback == &xenvif_zerocopy_callback)) {
-		int i = 0;
-		foreign_vif = ubuf_to_vif(ubuf);
-
-		do {
-			u16 pending_idx = ubuf->desc;
-			foreign_grefs[i++] =
-				foreign_vif->pending_tx_info[pending_idx].req.gref;
-			ubuf = (struct ubuf_info *) ubuf->ctx;
-		} while (ubuf);
-	}
-
 	data = skb->data;
 	while (data < skb_tail_pointer(skb)) {
 		unsigned int offset = offset_in_page(data);
···
 	}
 
 	for (i = 0; i < nr_frags; i++) {
+		/* This variable also signals whether foreign_gref has a real
+		 * value or not.
+		 */
+		struct xenvif *foreign_vif = NULL;
+		grant_ref_t foreign_gref;
+
+		if ((skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) &&
+		    (ubuf->callback == &xenvif_zerocopy_callback)) {
+			const struct ubuf_info *const startpoint = ubuf;
+
+			/* Ideally ubuf points to the chain element which
+			 * belongs to this frag. Or if frags were removed from
+			 * the beginning, then shortly before it.
+			 */
+			ubuf = xenvif_find_gref(skb, i, ubuf);
+
+			/* Try again from the beginning of the list, if we
+			 * haven't tried from there. This only makes sense in
+			 * the unlikely event of reordering the original frags.
+			 * For injected local pages it's an unnecessary second
+			 * run.
+			 */
+			if (unlikely(!ubuf) && startpoint != head_ubuf)
+				ubuf = xenvif_find_gref(skb, i, head_ubuf);
+
+			if (likely(ubuf)) {
+				u16 pending_idx = ubuf->desc;
+
+				foreign_vif = ubuf_to_vif(ubuf);
+				foreign_gref = foreign_vif->pending_tx_info[pending_idx].req.gref;
+				/* Just a safety measure. If this was the last
+				 * element on the list, the for loop will
+				 * iterate again if a local page were added to
+				 * the end. Using head_ubuf here prevents the
+				 * second search on the chain. Or the original
+				 * frags changed order, but that's less likely.
+				 * In any way, ubuf shouldn't be NULL.
+				 */
+				ubuf = ubuf->ctx ?
+					(struct ubuf_info *) ubuf->ctx :
+					head_ubuf;
+			} else
+				/* This frag was a local page, added to the
+				 * array after the skb left netback.
+				 */
+				ubuf = head_ubuf;
+		}
 		xenvif_gop_frag_copy(vif, skb, npo,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
 				     &head,
 				     foreign_vif,
-				     foreign_grefs[i]);
+				     foreign_vif ? foreign_gref : UINT_MAX);
 	}
 
 	return npo->meta_prod - old_meta_prod;
···
 		notify_remote_via_irq(vif->rx_irq);
 }
 
-void xenvif_check_rx_xenvif(struct xenvif *vif)
+void xenvif_napi_schedule_or_enable_events(struct xenvif *vif)
 {
 	int more_to_do;
 
···
 {
 	struct xenvif *vif = (struct xenvif *)data;
 	tx_add_credit(vif);
-	xenvif_check_rx_xenvif(vif);
+	xenvif_napi_schedule_or_enable_events(vif);
 }
 
 static void xenvif_tx_err(struct xenvif *vif,
drivers/ptp/Kconfig (+2, -1)
···
 
 config PTP_1588_CLOCK
 	tristate "PTP clock support"
+	depends on NET
 	select PPS
 	select NET_PTP_CLASSIFY
 	help
···
 config PTP_1588_CLOCK_PCH
 	tristate "Intel PCH EG20T as PTP clock"
 	depends on X86 || COMPILE_TEST
-	depends on HAS_IOMEM
+	depends on HAS_IOMEM && NET
 	select PTP_1588_CLOCK
 	help
 	  This driver adds support for using the PCH EG20T as a PTP
···
 {
 	_enter("");
 
+	/* Break the callbacks here so that we do it after the final ACK is
+	 * received.  The step number here must match the final number in
+	 * afs_deliver_cb_callback().
+	 */
+	if (call->unmarshall == 6) {
+		ASSERT(call->server && call->count && call->request);
+		afs_break_callbacks(call->server, call->count, call->request);
+	}
+
 	afs_put_server(call->server);
 	call->server = NULL;
 	kfree(call->buffer);
···
 		_debug("trailer");
 		if (skb->len != 0)
 			return -EBADMSG;
+
+		/* Record that the message was unmarshalled successfully so
+		 * that the call destructor can know do the callback breaking
+		 * work, even if the final ACK isn't received.
+		 *
+		 * If the step number changes, then afs_cm_destructor() must be
+		 * updated also.
+		 */
+		call->unmarshall++;
+	case 6:
 		break;
 	}
 
fs/afs/internal.h (+1, -1)
···
 	const struct afs_call_type *type;	/* type of call */
 	const struct afs_wait_mode *wait_mode;	/* completion wait mode */
 	wait_queue_head_t	waitq;		/* processes awaiting completion */
-	work_func_t		async_workfn;
+	void (*async_workfn)(struct afs_call *call); /* asynchronous work function */
 	struct work_struct	async_work;	/* asynchronous work processor */
 	struct work_struct	work;		/* actual work processor */
 	struct sk_buff_head	rx_queue;	/* received packets */
fs/afs/rxrpc.c (+43, -43)
···
 static int afs_wait_for_call_to_complete(struct afs_call *);
 static void afs_wake_up_async_call(struct afs_call *);
 static int afs_dont_wait_for_call_to_complete(struct afs_call *);
-static void afs_process_async_call(struct work_struct *);
+static void afs_process_async_call(struct afs_call *);
 static void afs_rx_interceptor(struct sock *, unsigned long, struct sk_buff *);
 static int afs_deliver_cm_op_id(struct afs_call *, struct sk_buff *, bool);
 
···
 
 static struct sk_buff_head afs_incoming_calls;
 static DECLARE_WORK(afs_collect_incoming_call_work, afs_collect_incoming_call);
+
+static void afs_async_workfn(struct work_struct *work)
+{
+	struct afs_call *call = container_of(work, struct afs_call, async_work);
+
+	call->async_workfn(call);
+}
 
 /*
  * open an RxRPC socket and bind it to be a server for callback notifications
···
 
 	kfree(call->request);
 	kfree(call);
+}
+
+/*
+ * End a call but do not free it
+ */
+static void afs_end_call_nofree(struct afs_call *call)
+{
+	if (call->rxcall) {
+		rxrpc_kernel_end_call(call->rxcall);
+		call->rxcall = NULL;
+	}
+	if (call->type->destructor)
+		call->type->destructor(call);
+}
+
+/*
+ * End a call and free it
+ */
+static void afs_end_call(struct afs_call *call)
+{
+	afs_end_call_nofree(call);
+	afs_free_call(call);
 }
 
 /*
···
 	       atomic_read(&afs_outstanding_calls));
 
 	call->wait_mode = wait_mode;
-	INIT_WORK(&call->async_work, afs_process_async_call);
+	call->async_workfn = afs_process_async_call;
+	INIT_WORK(&call->async_work, afs_async_workfn);
 
 	memset(&srx, 0, sizeof(srx));
 	srx.srx_family = AF_RXRPC;
···
 	rxrpc_kernel_abort_call(rxcall, RX_USER_ABORT);
 	while ((skb = skb_dequeue(&call->rx_queue)))
 		afs_free_skb(skb);
-	rxrpc_kernel_end_call(rxcall);
-	call->rxcall = NULL;
 error_kill_call:
-	call->type->destructor(call);
-	afs_free_call(call);
+	afs_end_call(call);
 	_leave(" = %d", ret);
 	return ret;
 }
···
 	if (call->state >= AFS_CALL_COMPLETE) {
 		while ((skb = skb_dequeue(&call->rx_queue)))
 			afs_free_skb(skb);
-		if (call->incoming) {
-			rxrpc_kernel_end_call(call->rxcall);
-			call->rxcall = NULL;
-			call->type->destructor(call);
-			afs_free_call(call);
-		}
+		if (call->incoming)
+			afs_end_call(call);
 	}
 
 	_leave("");
···
 	}
 
 	_debug("call complete");
-	rxrpc_kernel_end_call(call->rxcall);
-	call->rxcall = NULL;
-	call->type->destructor(call);
-	afs_free_call(call);
+	afs_end_call(call);
 	_leave(" = %d", ret);
 	return ret;
 }
···
 /*
  * delete an asynchronous call
  */
-static void afs_delete_async_call(struct work_struct *work)
+static void afs_delete_async_call(struct afs_call *call)
 {
-	struct afs_call *call =
-		container_of(work, struct afs_call, async_work);
-
 	_enter("");
 
 	afs_free_call(call);
···
  * - on a multiple-thread workqueue this work item may try to run on several
  *   CPUs at the same time
  */
-static void afs_process_async_call(struct work_struct *work)
+static void afs_process_async_call(struct afs_call *call)
 {
-	struct afs_call *call =
-		container_of(work, struct afs_call, async_work);
-
 	_enter("");
 
 	if (!skb_queue_empty(&call->rx_queue))
···
 		call->reply = NULL;
 
 		/* kill the call */
-		rxrpc_kernel_end_call(call->rxcall);
-		call->rxcall = NULL;
-		if (call->type->destructor)
-			call->type->destructor(call);
+		afs_end_call_nofree(call);
 
 		/* we can't just delete the call because the work item may be
 		 * queued */
···
 	if (skb_copy_bits(skb, 0, call->buffer + call->reply_size, len) < 0)
 		BUG();
 	call->reply_size += len;
-}
-
-static void afs_async_workfn(struct work_struct *work)
-{
-	struct afs_call *call = container_of(work, struct afs_call, async_work);
-
-	call->async_workfn(work);
 }
 
 /*
···
 		_debug("oom");
 		rxrpc_kernel_abort_call(call->rxcall, RX_USER_ABORT);
 	default:
-		rxrpc_kernel_end_call(call->rxcall);
-		call->rxcall = NULL;
-		call->type->destructor(call);
-		afs_free_call(call);
+		afs_end_call(call);
 		_leave(" [error]");
 		return;
 	}
···
 		call->state = AFS_CALL_AWAIT_ACK;
 	n = rxrpc_kernel_send_data(call->rxcall, &msg, len);
 	if (n >= 0) {
+		/* Success */
 		_leave(" [replied]");
 		return;
 	}
+
 	if (n == -ENOMEM) {
 		_debug("oom");
 		rxrpc_kernel_abort_call(call->rxcall, RX_USER_ABORT);
 	}
-	rxrpc_kernel_end_call(call->rxcall);
-	call->rxcall = NULL;
-	call->type->destructor(call);
-	afs_free_call(call);
+	afs_end_call(call);
 	_leave(" [error]");
 }
 
···
 	int			numqueues;
 	netdev_features_t	tap_features;
 	int			minor;
+	int			nest_level;
 };
 
 static inline void macvlan_count_rx(const struct macvlan_dev *vlan,
···
 #else /* CONFIG_OF */
 static inline int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
 {
-	return -ENOSYS;
+	/*
+	 * Fall back to the non-DT function to register a bus.
+	 * This way, we don't have to keep compat bits around in drivers.
+	 */
+
+	return mdiobus_register(mdio);
 }
 
 static inline struct phy_device *of_phy_find_device(struct device_node *phy_np)
include/linux/perf_event.h (+2)
···
 
 	struct ring_buffer		*rb;
 	struct list_head		rb_entry;
+	unsigned long			rcu_batches;
+	int				rcu_pending;
 
 	/* poll related */
 	wait_queue_head_t		waitq;
include/linux/rtnetlink.h (+5)
···
 
 #include <linux/mutex.h>
 #include <linux/netdevice.h>
+#include <linux/wait.h>
 #include <uapi/linux/rtnetlink.h>
 
 extern int rtnetlink_send(struct sk_buff *skb, struct net *net, u32 pid, u32 group, int echo);
···
 extern void rtnl_unlock(void);
 extern int rtnl_trylock(void);
 extern int rtnl_is_locked(void);
+
+extern wait_queue_head_t netdev_unregistering_wq;
+extern struct mutex net_mutex;
+
 #ifdef CONFIG_PROVE_LOCKING
 extern int lockdep_rtnl_is_held(void);
 #else
include/linux/sched.h (+6, -3)
···
 #define TASK_PARKED		512
 #define TASK_STATE_MAX		1024
 
-#define TASK_STATE_TO_CHAR_STR "RSDTtZXxKWP"
+#define TASK_STATE_TO_CHAR_STR "RSDTtXZxKWP"
 
 extern char ___assert_task_state[1 - 2*!!(
 		sizeof(TASK_STATE_TO_CHAR_STR)-1 != ilog2(TASK_STATE_MAX)+1)];
···
 	 *
 	 * @dl_boosted tells if we are boosted due to DI. If so we are
 	 * outside bandwidth enforcement mechanism (but only until we
-	 * exit the critical section).
+	 * exit the critical section);
+	 *
+	 * @dl_yielded tells if task gave up the cpu before consuming
+	 * all its available runtime during the last job.
 	 */
-	int dl_throttled, dl_new, dl_boosted;
+	int dl_throttled, dl_new, dl_boosted, dl_yielded;
 
 	/*
 	 * Bandwidth enforcement timer. Each -deadline task has its
include/net/cfg80211.h (+12)
···
 void cfg80211_sched_scan_stopped(struct wiphy *wiphy);
 
 /**
+ * cfg80211_sched_scan_stopped_rtnl - notify that the scheduled scan has stopped
+ *
+ * @wiphy: the wiphy on which the scheduled scan stopped
+ *
+ * The driver can call this function to inform cfg80211 that the
+ * scheduled scan had to be stopped, for whatever reason.  The driver
+ * is then called back via the sched_scan_stop operation when done.
+ * This function should be called with rtnl locked.
+ */
+void cfg80211_sched_scan_stopped_rtnl(struct wiphy *wiphy);
+
+/**
  * cfg80211_inform_bss_width_frame - inform cfg80211 of a received BSS frame
  *
  * @wiphy: the wiphy reporting the BSS
include/net/ip6_route.h (+1)
···
 void rt6_ifdown(struct net *net, struct net_device *dev);
 void rt6_mtu_change(struct net_device *dev, unsigned int mtu);
 void rt6_remove_prefsrc(struct inet6_ifaddr *ifp);
+void rt6_clean_tohost(struct net *net, struct in6_addr *gateway);
 
 
 /*
include/net/netns/ipv4.h (+7, -2)
···
 	int range[2];
 };
 
+struct ping_group_range {
+	seqlock_t lock;
+	kgid_t range[2];
+};
+
 struct netns_ipv4 {
 #ifdef CONFIG_SYSCTL
 	struct ctl_table_header	*forw_hdr;
···
 	int sysctl_icmp_ratemask;
 	int sysctl_icmp_errors_use_inbound_ifaddr;
 
-	struct local_ports sysctl_local_ports;
+	struct local_ports ip_local_ports;
 
 	int sysctl_tcp_ecn;
 	int sysctl_ip_no_pmtu_disc;
 	int sysctl_ip_fwd_use_pmtu;
 
-	kgid_t sysctl_ping_group_range[2];
+	struct ping_group_range ping_group_range;
 
 	atomic_t dev_addr_genid;
include/uapi/linux/nl80211.h (+3, -1)
···
 * @NL80211_FEATURE_CELL_BASE_REG_HINTS: This driver has been tested
 *	to work properly to suppport receiving regulatory hints from
 *	cellular base stations.
+ * @NL80211_FEATURE_P2P_DEVICE_NEEDS_CHANNEL: (no longer available, only
+ *	here to reserve the value for API/ABI compatibility)
 * @NL80211_FEATURE_SAE: This driver supports simultaneous authentication of
 *	equals (SAE) with user space SME (NL80211_CMD_AUTHENTICATE) in station
 *	mode
···
 	NL80211_FEATURE_HT_IBSS				= 1 << 1,
 	NL80211_FEATURE_INACTIVITY_TIMER		= 1 << 2,
 	NL80211_FEATURE_CELL_BASE_REG_HINTS		= 1 << 3,
-	/* bit 4 is reserved - don't use */
+	NL80211_FEATURE_P2P_DEVICE_NEEDS_CHANNEL	= 1 << 4,
 	NL80211_FEATURE_SAE				= 1 << 5,
 	NL80211_FEATURE_LOW_PRIORITY_SCAN		= 1 << 6,
 	NL80211_FEATURE_SCAN_FLUSH			= 1 << 7,
kernel/events/core.c (+92, -82)
···
 	cpuctx->exclusive = 0;
 }
 
+struct remove_event {
+	struct perf_event *event;
+	bool detach_group;
+};
+
 /*
  * Cross CPU call to remove a performance event
  *
···
 */
 static int __perf_remove_from_context(void *info)
 {
-	struct perf_event *event = info;
+	struct remove_event *re = info;
+	struct perf_event *event = re->event;
 	struct perf_event_context *ctx = event->ctx;
 	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
 
 	raw_spin_lock(&ctx->lock);
 	event_sched_out(event, cpuctx, ctx);
+	if (re->detach_group)
+		perf_group_detach(event);
 	list_del_event(event, ctx);
 	if (!ctx->nr_events && cpuctx->task_ctx == ctx) {
 		ctx->is_active = 0;
···
 * When called from perf_event_exit_task, it's OK because the
 * context has been detached from its task.
 */
-static void perf_remove_from_context(struct perf_event *event)
+static void perf_remove_from_context(struct perf_event *event, bool detach_group)
 {
 	struct perf_event_context *ctx = event->ctx;
 	struct task_struct *task = ctx->task;
+	struct remove_event re = {
+		.event = event,
+		.detach_group = detach_group,
+	};
 
 	lockdep_assert_held(&ctx->mutex);
 
···
 	 * Per cpu events are removed via an smp call and
 	 * the removal is always successful.
 	 */
-		cpu_function_call(event->cpu, __perf_remove_from_context, event);
+		cpu_function_call(event->cpu, __perf_remove_from_context, &re);
 		return;
 	}
 
 retry:
-	if (!task_function_call(task, __perf_remove_from_context, event))
+	if (!task_function_call(task, __perf_remove_from_context, &re))
 		return;
 
 	raw_spin_lock_irq(&ctx->lock);
···
 	 * Since the task isn't running, its safe to remove the event, us
 	 * holding the ctx->lock ensures the task won't get scheduled in.
 	 */
+	if (detach_group)
+		perf_group_detach(event);
 	list_del_event(event, ctx);
 	raw_spin_unlock_irq(&ctx->lock);
 }
···
 }
 
 static void ring_buffer_put(struct ring_buffer *rb);
-static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb);
+static void ring_buffer_attach(struct perf_event *event,
+			       struct ring_buffer *rb);
 
 static void unaccount_event_cpu(struct perf_event *event, int cpu)
 {
···
 	unaccount_event(event);
 
 	if (event->rb) {
-		struct ring_buffer *rb;
-
 		/*
 		 * Can happen when we close an event with re-directed output.
 		 *
···
 		 * over us; possibly making our ring_buffer_put() the last.
 		 */
 		mutex_lock(&event->mmap_mutex);
-		rb = event->rb;
-		if (rb) {
-			rcu_assign_pointer(event->rb, NULL);
-			ring_buffer_detach(event, rb);
-			ring_buffer_put(rb); /* could be last */
-		}
+		ring_buffer_attach(event, NULL);
 		mutex_unlock(&event->mmap_mutex);
 	}
 
···
 	 * to trigger the AB-BA case.
 	 */
 	mutex_lock_nested(&ctx->mutex, SINGLE_DEPTH_NESTING);
-	raw_spin_lock_irq(&ctx->lock);
-	perf_group_detach(event);
-	raw_spin_unlock_irq(&ctx->lock);
-	perf_remove_from_context(event);
+	perf_remove_from_context(event, true);
 	mutex_unlock(&ctx->mutex);
 
 	free_event(event);
···
 static void ring_buffer_attach(struct perf_event *event,
 			       struct ring_buffer *rb)
 {
+	struct ring_buffer *old_rb = NULL;
 	unsigned long flags;
 
-	if (!list_empty(&event->rb_entry))
-		return;
+	if (event->rb) {
+		/*
+		 * Should be impossible, we set this when removing
+		 * event->rb_entry and wait/clear when adding event->rb_entry.
+		 */
+		WARN_ON_ONCE(event->rcu_pending);
 
-	spin_lock_irqsave(&rb->event_lock, flags);
-	if (list_empty(&event->rb_entry))
-		list_add(&event->rb_entry, &rb->event_list);
-	spin_unlock_irqrestore(&rb->event_lock, flags);
-}
+		old_rb = event->rb;
+		event->rcu_batches = get_state_synchronize_rcu();
+		event->rcu_pending = 1;
 
-static void ring_buffer_detach(struct perf_event *event, struct ring_buffer *rb)
-{
-	unsigned long flags;
+		spin_lock_irqsave(&old_rb->event_lock, flags);
+		list_del_rcu(&event->rb_entry);
+		spin_unlock_irqrestore(&old_rb->event_lock, flags);
+	}
 
-	if (list_empty(&event->rb_entry))
-		return;
+	if (event->rcu_pending && rb) {
+		cond_synchronize_rcu(event->rcu_batches);
+		event->rcu_pending = 0;
+	}
 
-	spin_lock_irqsave(&rb->event_lock, flags);
-	list_del_init(&event->rb_entry);
-	wake_up_all(&event->waitq);
-	spin_unlock_irqrestore(&rb->event_lock, flags);
+	if (rb) {
+		spin_lock_irqsave(&rb->event_lock, flags);
+		list_add_rcu(&event->rb_entry, &rb->event_list);
+		spin_unlock_irqrestore(&rb->event_lock, flags);
+	}
+
+	rcu_assign_pointer(event->rb, rb);
+
+	if (old_rb) {
+		ring_buffer_put(old_rb);
+		/*
+		 * Since we detached before setting the new rb, so that we
+		 * could attach the new rb, we could have missed a wakeup.
+		 * Provide it now.
+		 */
+		wake_up_all(&event->waitq);
+	}
 }
 
 static void ring_buffer_wakeup(struct perf_event *event)
···
 {
 	struct perf_event *event = vma->vm_file->private_data;
 
-	struct ring_buffer *rb = event->rb;
+	struct ring_buffer *rb = ring_buffer_get(event);
 	struct user_struct *mmap_user = rb->mmap_user;
 	int mmap_locked = rb->mmap_locked;
 	unsigned long size = perf_data_size(rb);
···
 	atomic_dec(&rb->mmap_count);
 
 	if (!atomic_dec_and_mutex_lock(&event->mmap_count, &event->mmap_mutex))
-		return;
+		goto out_put;
 
-	/* Detach current event from the buffer. */
-	rcu_assign_pointer(event->rb, NULL);
-	ring_buffer_detach(event, rb);
+	ring_buffer_attach(event, NULL);
 	mutex_unlock(&event->mmap_mutex);
 
 	/* If there's still other mmap()s of this buffer, we're done. */
-	if (atomic_read(&rb->mmap_count)) {
-		ring_buffer_put(rb); /* can't be last */
-		return;
-	}
+	if (atomic_read(&rb->mmap_count))
+		goto out_put;
 
 	/*
 	 * No other mmap()s, detach from all other events that might redirect
···
 	 * still restart the iteration to make sure we're not now
 	 * iterating the wrong list.
 	 */
-	if (event->rb == rb) {
-		rcu_assign_pointer(event->rb, NULL);
-		ring_buffer_detach(event, rb);
-		ring_buffer_put(rb); /* can't be last, we still have one */
-	}
+	if (event->rb == rb)
+		ring_buffer_attach(event, NULL);
+
 	mutex_unlock(&event->mmap_mutex);
 	put_event(event);
 
···
 	vma->vm_mm->pinned_vm -= mmap_locked;
 	free_uid(mmap_user);
 
+out_put:
 	ring_buffer_put(rb); /* could be last */
 }
 
···
 		vma->vm_mm->pinned_vm += extra;
 
 	ring_buffer_attach(event, rb);
-	rcu_assign_pointer(event->rb, rb);
 
 	perf_event_init_userpage(event);
 	perf_event_update_userpage(event);
···
 
 	/* Recursion avoidance in each contexts */
 	int				recursion[PERF_NR_CONTEXTS];
+
+	/* Keeps track of cpu being initialized/exited */
+	bool				online;
 };
 
 static DEFINE_PER_CPU(struct swevent_htable, swevent_htable);
···
 	hwc->state = !(flags & PERF_EF_START);
 
 	head = find_swevent_head(swhash, event);
-	if (WARN_ON_ONCE(!head))
+	if (!head) {
+		/*
+		 * We can race with cpu hotplug code. Do not
+		 * WARN if the cpu just got unplugged.
+		 */
+		WARN_ON_ONCE(swhash->online);
 		return -EINVAL;
+	}
 
 	hlist_add_head_rcu(&event->hlist_entry, head);
 
···
 static int
 perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
 {
-	struct ring_buffer *rb = NULL, *old_rb = NULL;
+	struct ring_buffer *rb = NULL;
 	int ret = -EINVAL;
 
 	if (!output_event)
···
 	if (atomic_read(&event->mmap_count))
 		goto unlock;
 
-	old_rb = event->rb;
-
 	if (output_event) {
 		/* get the rb we want to redirect to */
 		rb = ring_buffer_get(output_event);
···
 			goto unlock;
 	}
 
-	if (old_rb)
-		ring_buffer_detach(event, old_rb);
-
-	if (rb)
-		ring_buffer_attach(event, rb);
-
-	rcu_assign_pointer(event->rb, rb);
-
-	if (old_rb) {
-		ring_buffer_put(old_rb);
-		/*
-		 * Since we detached before setting the new rb, so that we
-		 * could attach the new rb, we could have missed a wakeup.
-		 * Provide it now.
-		 */
-		wake_up_all(&event->waitq);
-	}
+	ring_buffer_attach(event, rb);
 
 	ret = 0;
 unlock:
···
 
 	if (attr.freq) {
 		if (attr.sample_freq > sysctl_perf_event_sample_rate)
+			return -EINVAL;
+	} else {
+		if (attr.sample_period & (1ULL << 63))
 			return -EINVAL;
 	}
 
···
 		struct perf_event_context *gctx = group_leader->ctx;
 
 		mutex_lock(&gctx->mutex);
-		perf_remove_from_context(group_leader);
+		perf_remove_from_context(group_leader, false);
 
 		/*
 		 * Removing from the context ends up with disabled
···
 		perf_event__state_init(group_leader);
 		list_for_each_entry(sibling, &group_leader->sibling_list,
 				    group_entry) {
-			perf_remove_from_context(sibling);
+			perf_remove_from_context(sibling, false);
 			perf_event__state_init(sibling);
 			put_ctx(gctx);
 		}
···
 	mutex_lock(&src_ctx->mutex);
 	list_for_each_entry_safe(event, tmp, &src_ctx->event_list,
 				 event_entry) {
-		perf_remove_from_context(event);
+		perf_remove_from_context(event, false);
 		unaccount_event_cpu(event, src_cpu);
 		put_ctx(src_ctx);
 		list_add(&event->migrate_entry, &events);
···
 			 struct perf_event_context *child_ctx,
 			 struct task_struct *child)
 {
-	if (child_event->parent) {
-		raw_spin_lock_irq(&child_ctx->lock);
-		perf_group_detach(child_event);
-		raw_spin_unlock_irq(&child_ctx->lock);
-	}
-
-	perf_remove_from_context(child_event);
+	perf_remove_from_context(child_event, !!child_event->parent);
 
 	/*
 	 * It can happen that the parent exits first, and has events
···
 	 * swapped under us.
 	 */
 	parent_ctx = perf_pin_task_context(parent, ctxn);
+	if (!parent_ctx)
+		return 0;
 
 	/*
 	 * No need to check if parent_ctx != NULL here; since we saw
···
 	struct swevent_htable *swhash = &per_cpu(swevent_htable, cpu);
 
 	mutex_lock(&swhash->hlist_mutex);
+	swhash->online = true;
 	if (swhash->hlist_refcount > 0) {
 		struct swevent_hlist *hlist;
 
···
 
 static void __perf_event_exit_context(void *__info)
 {
+	struct remove_event re = { .detach_group = false };
 	struct perf_event_context *ctx = __info;
-	struct perf_event *event;
 
 	perf_pmu_rotate_stop(ctx->pmu);
 
 	rcu_read_lock();
-	list_for_each_entry_rcu(event, &ctx->event_list, event_entry)
-		__perf_remove_from_context(event);
+	list_for_each_entry_rcu(re.event, &ctx->event_list, event_entry)
+		__perf_remove_from_context(&re);
 	rcu_read_unlock();
 }
···
 	perf_event_exit_cpu_context(cpu);
 
 	mutex_lock(&swhash->hlist_mutex);
+	swhash->online = false;
 	swevent_hlist_release(swhash);
 	mutex_unlock(&swhash->hlist_mutex);
 }
+13-2
kernel/sched/core.c
···
 	if (likely(prev->sched_class == class &&
 		   rq->nr_running == rq->cfs.h_nr_running)) {
 		p = fair_sched_class.pick_next_task(rq, prev);
-		if (likely(p && p != RETRY_TASK))
-			return p;
+		if (unlikely(p == RETRY_TASK))
+			goto again;
+
+		/* assumes fair_sched_class->next == idle_sched_class */
+		if (unlikely(!p))
+			p = idle_sched_class.pick_next_task(rq, prev);
+
+		return p;
 	}

again:
···
 	dl_se->dl_bw = to_ratio(dl_se->dl_period, dl_se->dl_runtime);
 	dl_se->dl_throttled = 0;
 	dl_se->dl_new = 1;
+	dl_se->dl_yielded = 0;
}

static void __setscheduler_params(struct task_struct *p,
···
 * sys_sched_setattr - same as above, but with extended sched_attr
 * @pid: the pid in question.
 * @uattr: structure containing the extended parameters.
+ * @flags: for future extension.
 */
SYSCALL_DEFINE3(sched_setattr, pid_t, pid, struct sched_attr __user *, uattr,
		unsigned int, flags)
···
 * @pid: the pid in question.
 * @uattr: structure containing the extended parameters.
 * @size: sizeof(attr) for fwd/bwd comp.
+ * @flags: for future extension.
 */
SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
		unsigned int, size, unsigned int, flags)
···
 		,
 		.last_balance		= jiffies,
 		.balance_interval	= sd_weight,
+		.max_newidle_lb_cost	= 0,
+		.next_decay_max_lb_cost	= jiffies,
 	};
 	SD_INIT_NAME(sd, NUMA);
 	sd->private = &tl->data;
+1-3
kernel/sched/cpudeadline.c
···
 */
void cpudl_cleanup(struct cpudl *cp)
{
-	/*
-	 * nothing to do for the moment
-	 */
+	free_cpumask_var(cp->free_cpus);
}
+1-2
kernel/sched/cpupri.c
···
 	int idx = 0;
 	int task_pri = convert_prio(p->prio);

-	if (task_pri >= MAX_RT_PRIO)
-		return 0;
+	BUG_ON(task_pri >= CPUPRI_NR_PRIORITIES);

 	for (idx = 0; idx < task_pri; idx++) {
 		struct cpupri_vec *vec = &cp->pri_to_cpu[idx];
+16-16
kernel/sched/cputime.c
···
 * softirq as those do not count in task exec_runtime any more.
 */
static void irqtime_account_process_tick(struct task_struct *p, int user_tick,
-					 struct rq *rq)
+					 struct rq *rq, int ticks)
{
-	cputime_t one_jiffy_scaled = cputime_to_scaled(cputime_one_jiffy);
+	cputime_t scaled = cputime_to_scaled(cputime_one_jiffy);
+	u64 cputime = (__force u64) cputime_one_jiffy;
 	u64 *cpustat = kcpustat_this_cpu->cpustat;

 	if (steal_account_process_tick())
 		return;

+	cputime *= ticks;
+	scaled *= ticks;
+
 	if (irqtime_account_hi_update()) {
-		cpustat[CPUTIME_IRQ] += (__force u64) cputime_one_jiffy;
+		cpustat[CPUTIME_IRQ] += cputime;
 	} else if (irqtime_account_si_update()) {
-		cpustat[CPUTIME_SOFTIRQ] += (__force u64) cputime_one_jiffy;
+		cpustat[CPUTIME_SOFTIRQ] += cputime;
 	} else if (this_cpu_ksoftirqd() == p) {
 		/*
 		 * ksoftirqd time do not get accounted in cpu_softirq_time.
 		 * So, we have to handle it separately here.
 		 * Also, p->stime needs to be updated for ksoftirqd.
 		 */
-		__account_system_time(p, cputime_one_jiffy, one_jiffy_scaled,
-					CPUTIME_SOFTIRQ);
+		__account_system_time(p, cputime, scaled, CPUTIME_SOFTIRQ);
 	} else if (user_tick) {
-		account_user_time(p, cputime_one_jiffy, one_jiffy_scaled);
+		account_user_time(p, cputime, scaled);
 	} else if (p == rq->idle) {
-		account_idle_time(cputime_one_jiffy);
+		account_idle_time(cputime);
 	} else if (p->flags & PF_VCPU) { /* System time or guest time */
-		account_guest_time(p, cputime_one_jiffy, one_jiffy_scaled);
+		account_guest_time(p, cputime, scaled);
 	} else {
-		__account_system_time(p, cputime_one_jiffy, one_jiffy_scaled,
-					CPUTIME_SYSTEM);
+		__account_system_time(p, cputime, scaled, CPUTIME_SYSTEM);
 	}
}

static void irqtime_account_idle_ticks(int ticks)
{
-	int i;
 	struct rq *rq = this_rq();

-	for (i = 0; i < ticks; i++)
-		irqtime_account_process_tick(current, 0, rq);
+	irqtime_account_process_tick(current, 0, rq, ticks);
}
#else /* CONFIG_IRQ_TIME_ACCOUNTING */
static inline void irqtime_account_idle_ticks(int ticks) {}
static inline void irqtime_account_process_tick(struct task_struct *p, int user_tick,
-						struct rq *rq) {}
+						struct rq *rq, int nr_ticks) {}
#endif /* CONFIG_IRQ_TIME_ACCOUNTING */

/*
···
 		return;

 	if (sched_clock_irqtime) {
-		irqtime_account_process_tick(p, user_tick, rq);
+		irqtime_account_process_tick(p, user_tick, rq, 1);
 		return;
 	}
+3-2
kernel/sched/deadline.c
···
 	sched_clock_tick();
 	update_rq_clock(rq);
 	dl_se->dl_throttled = 0;
+	dl_se->dl_yielded = 0;
 	if (p->on_rq) {
 		enqueue_task_dl(rq, p, ENQUEUE_REPLENISH);
 		if (task_has_dl_policy(rq->curr))
···
 	 * We make the task go to sleep until its current deadline by
 	 * forcing its runtime to zero. This way, update_curr_dl() stops
 	 * it and the bandwidth timer will wake it up and will give it
-	 * new scheduling parameters (thanks to dl_new=1).
+	 * new scheduling parameters (thanks to dl_yielded=1).
 	 */
 	if (p->dl.runtime > 0) {
-		rq->curr->dl.dl_new = 1;
+		rq->curr->dl.dl_yielded = 1;
 		p->dl.runtime = 0;
 	}
 	update_curr_dl(rq);
+8-8
kernel/sched/fair.c
···
 	int this_cpu = this_rq->cpu;

 	idle_enter_fair(this_rq);
+
 	/*
 	 * We must set idle_stamp _before_ calling idle_balance(), such that we
 	 * measure the duration of idle_balance() as idle time.
···

 	raw_spin_lock(&this_rq->lock);

+	if (curr_cost > this_rq->max_idle_balance_cost)
+		this_rq->max_idle_balance_cost = curr_cost;
+
 	/*
-	 * While browsing the domains, we released the rq lock.
-	 * A task could have be enqueued in the meantime
+	 * While browsing the domains, we released the rq lock, a task could
+	 * have been enqueued in the meantime. Since we're not going idle,
+	 * pretend we pulled a task.
 	 */
-	if (this_rq->cfs.h_nr_running && !pulled_task) {
+	if (this_rq->cfs.h_nr_running && !pulled_task)
 		pulled_task = 1;
-		goto out;
-	}

 	if (pulled_task || time_after(jiffies, this_rq->next_balance)) {
 		/*
···
 		 */
 		this_rq->next_balance = next_balance;
 	}
-
-	if (curr_cost > this_rq->max_idle_balance_cost)
-		this_rq->max_idle_balance_cost = curr_cost;

out:
 	/* Is there a task of a high priority class? */
+4-2
mm/filemap.c
···
{
 	int ret = 0;
 	/* Check for outstanding write errors */
-	if (test_and_clear_bit(AS_ENOSPC, &mapping->flags))
+	if (test_bit(AS_ENOSPC, &mapping->flags) &&
+	    test_and_clear_bit(AS_ENOSPC, &mapping->flags))
 		ret = -ENOSPC;
-	if (test_and_clear_bit(AS_EIO, &mapping->flags))
+	if (test_bit(AS_EIO, &mapping->flags) &&
+	    test_and_clear_bit(AS_EIO, &mapping->flags))
 		ret = -EIO;
 	return ret;
}
+1-1
mm/madvise.c
···
 	for (; start < end; start += PAGE_SIZE) {
 		index = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;

-		page = find_get_page(mapping, index);
+		page = find_get_entry(mapping, index);
 		if (!radix_tree_exceptional_entry(page)) {
 			if (page)
 				page_cache_release(page);
+14-13
mm/memcontrol.c
···

 	rcu_read_lock();
 	do {
-		memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
-		if (unlikely(!memcg))
+		/*
+		 * Page cache insertions can happen without an
+		 * actual mm context, e.g. during disk probing
+		 * on boot, loopback IO, acct() writes etc.
+		 */
+		if (unlikely(!mm))
 			memcg = root_mem_cgroup;
+		else {
+			memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
+			if (unlikely(!memcg))
+				memcg = root_mem_cgroup;
+		}
 	} while (!css_tryget(&memcg->css));
 	rcu_read_unlock();
 	return memcg;
···
 		return 0;
 	}

-	/*
-	 * Page cache insertions can happen without an actual mm
-	 * context, e.g. during disk probing on boot.
-	 */
-	if (unlikely(!mm))
-		memcg = root_mem_cgroup;
-	else {
-		memcg = mem_cgroup_try_charge_mm(mm, gfp_mask, 1, true);
-		if (!memcg)
-			return -ENOMEM;
-	}
+	memcg = mem_cgroup_try_charge_mm(mm, gfp_mask, 1, true);
+	if (!memcg)
+		return -ENOMEM;
 	__mem_cgroup_commit_charge(memcg, page, 1, type, false);
 	return 0;
}
+10-7
mm/memory-failure.c
···
 		return 0;
 	} else if (PageHuge(hpage)) {
 		/*
-		 * Check "just unpoisoned", "filter hit", and
-		 * "race with other subpage."
+		 * Check "filter hit" and "race with other subpage."
 		 */
 		lock_page(hpage);
-		if (!PageHWPoison(hpage)
-		    || (hwpoison_filter(p) && TestClearPageHWPoison(p))
-		    || (p != hpage && TestSetPageHWPoison(hpage))) {
-			atomic_long_sub(nr_pages, &num_poisoned_pages);
-			return 0;
+		if (PageHWPoison(hpage)) {
+			if ((hwpoison_filter(p) && TestClearPageHWPoison(p))
+			    || (p != hpage && TestSetPageHWPoison(hpage))) {
+				atomic_long_sub(nr_pages, &num_poisoned_pages);
+				unlock_page(hpage);
+				return 0;
+			}
 		}
 		set_page_hwpoison_huge_page(hpage);
 		res = dequeue_hwpoisoned_huge_page(hpage);
···
 	 */
 	if (!PageHWPoison(p)) {
 		printk(KERN_ERR "MCE %#lx: just unpoisoned\n", pfn);
+		atomic_long_sub(nr_pages, &num_poisoned_pages);
+		put_page(hpage);
 		res = 0;
 		goto out;
 	}
···
 	if ((orig_neigh_node) && (!is_single_hop_neigh))
 		batadv_orig_node_free_ref(orig_neigh_node);
out:
+	if (router_ifinfo)
+		batadv_neigh_ifinfo_free_ref(router_ifinfo);
 	if (router)
 		batadv_neigh_node_free_ref(router);
 	if (router_router)
+1-2
net/batman-adv/distributed-arp-table.c
···
 	 * additional DAT answer may trigger kernel warnings about
 	 * a packet coming from the wrong port.
 	 */
-	if (batadv_is_my_client(bat_priv, dat_entry->mac_addr,
-				BATADV_NO_FLAGS)) {
+	if (batadv_is_my_client(bat_priv, dat_entry->mac_addr, vid)) {
 		ret = true;
 		goto out;
 	}
+8-3
net/batman-adv/fragmentation.c
···
 			    struct batadv_neigh_node *neigh_node)
{
 	struct batadv_priv *bat_priv;
-	struct batadv_hard_iface *primary_if;
+	struct batadv_hard_iface *primary_if = NULL;
 	struct batadv_frag_packet frag_header;
 	struct sk_buff *skb_fragment;
 	unsigned mtu = neigh_node->if_incoming->net_dev->mtu;
 	unsigned header_size = sizeof(frag_header);
 	unsigned max_fragment_size, max_packet_size;
+	bool ret = false;

 	/* To avoid merge and refragmentation at next-hops we never send
 	 * fragments larger than BATADV_FRAG_MAX_FRAG_SIZE
···
 			   skb->len + ETH_HLEN);
 	batadv_send_skb_packet(skb, neigh_node->if_incoming, neigh_node->addr);

-	return true;
+	ret = true;
+
out_err:
-	return false;
+	if (primary_if)
+		batadv_hardif_free_ref(primary_if);
+
+	return ret;
}
+10-3
net/batman-adv/gateway_client.c
···

static void batadv_gw_node_free_ref(struct batadv_gw_node *gw_node)
{
-	if (atomic_dec_and_test(&gw_node->refcount))
+	if (atomic_dec_and_test(&gw_node->refcount)) {
+		batadv_orig_node_free_ref(gw_node->orig_node);
 		kfree_rcu(gw_node, rcu);
+	}
}

static struct batadv_gw_node *
···
 	if (gateway->bandwidth_down == 0)
 		return;

-	gw_node = kzalloc(sizeof(*gw_node), GFP_ATOMIC);
-	if (!gw_node)
+	if (!atomic_inc_not_zero(&orig_node->refcount))
 		return;
+
+	gw_node = kzalloc(sizeof(*gw_node), GFP_ATOMIC);
+	if (!gw_node) {
+		batadv_orig_node_free_ref(orig_node);
+		return;
+	}

 	INIT_HLIST_NODE(&gw_node->list);
 	gw_node->orig_node = orig_node;
+1-1
net/batman-adv/hard-interface.c
···
 		return true;

 	/* no more parents..stop recursion */
-	if (net_dev->iflink == net_dev->ifindex)
+	if (net_dev->iflink == 0 || net_dev->iflink == net_dev->ifindex)
 		return false;

 	/* recurse over the parent device */
+57-5
net/batman-adv/originator.c
···
static void batadv_orig_ifinfo_free_rcu(struct rcu_head *rcu)
{
 	struct batadv_orig_ifinfo *orig_ifinfo;
+	struct batadv_neigh_node *router;

 	orig_ifinfo = container_of(rcu, struct batadv_orig_ifinfo, rcu);

 	if (orig_ifinfo->if_outgoing != BATADV_IF_DEFAULT)
 		batadv_hardif_free_ref_now(orig_ifinfo->if_outgoing);

+	/* this is the last reference to this object */
+	router = rcu_dereference_protected(orig_ifinfo->router, true);
+	if (router)
+		batadv_neigh_node_free_ref_now(router);
 	kfree(orig_ifinfo);
}
···
}

/**
+ * batadv_purge_neigh_ifinfo - purge obsolete ifinfo entries from neighbor
+ * @bat_priv: the bat priv with all the soft interface information
+ * @neigh: orig node which is to be checked
+ */
+static void
+batadv_purge_neigh_ifinfo(struct batadv_priv *bat_priv,
+			  struct batadv_neigh_node *neigh)
+{
+	struct batadv_neigh_ifinfo *neigh_ifinfo;
+	struct batadv_hard_iface *if_outgoing;
+	struct hlist_node *node_tmp;
+
+	spin_lock_bh(&neigh->ifinfo_lock);
+
+	/* for all ifinfo objects for this neighinator */
+	hlist_for_each_entry_safe(neigh_ifinfo, node_tmp,
+				  &neigh->ifinfo_list, list) {
+		if_outgoing = neigh_ifinfo->if_outgoing;
+
+		/* always keep the default interface */
+		if (if_outgoing == BATADV_IF_DEFAULT)
+			continue;
+
+		/* don't purge if the interface is not (going) down */
+		if ((if_outgoing->if_status != BATADV_IF_INACTIVE) &&
+		    (if_outgoing->if_status != BATADV_IF_NOT_IN_USE) &&
+		    (if_outgoing->if_status != BATADV_IF_TO_BE_REMOVED))
+			continue;
+
+		batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
+			   "neighbor/ifinfo purge: neighbor %pM, iface: %s\n",
+			   neigh->addr, if_outgoing->net_dev->name);
+
+		hlist_del_rcu(&neigh_ifinfo->list);
+		batadv_neigh_ifinfo_free_ref(neigh_ifinfo);
+	}
+
+	spin_unlock_bh(&neigh->ifinfo_lock);
+}
+
+/**
 * batadv_purge_orig_ifinfo - purge obsolete ifinfo entries from originator
 * @bat_priv: the bat priv with all the soft interface information
 * @orig_node: orig node which is to be checked
···

 			hlist_del_rcu(&neigh_node->list);
 			batadv_neigh_node_free_ref(neigh_node);
+		} else {
+			/* only necessary if not the whole neighbor is to be
+			 * deleted, but some interface has been removed.
+			 */
+			batadv_purge_neigh_ifinfo(bat_priv, neigh_node);
 		}
 	}
···
{
 	struct batadv_neigh_node *best_neigh_node;
 	struct batadv_hard_iface *hard_iface;
-	bool changed;
+	bool changed_ifinfo, changed_neigh;

 	if (batadv_has_timed_out(orig_node->last_seen,
 				 2 * BATADV_PURGE_TIMEOUT)) {
···
 			   jiffies_to_msecs(orig_node->last_seen));
 		return true;
 	}
-	changed = batadv_purge_orig_ifinfo(bat_priv, orig_node);
-	changed = changed || batadv_purge_orig_neighbors(bat_priv, orig_node);
+	changed_ifinfo = batadv_purge_orig_ifinfo(bat_priv, orig_node);
+	changed_neigh = batadv_purge_orig_neighbors(bat_priv, orig_node);

-	if (!changed)
+	if (!changed_ifinfo && !changed_neigh)
 		return false;

 	/* first for NULL ... */
···
 		bat_priv->bat_algo_ops->bat_orig_print(bat_priv, seq, hard_iface);

out:
-	batadv_hardif_free_ref(hard_iface);
+	if (hard_iface)
+		batadv_hardif_free_ref(hard_iface);
 	return 0;
}
+2-2
net/bridge/br_netfilter.c
···
 	return NF_STOLEN;
}

-#if IS_ENABLED(CONFIG_NF_CONNTRACK_IPV4)
+#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4)
static int br_nf_dev_queue_xmit(struct sk_buff *skb)
{
 	int ret;

-	if (skb->nfct != NULL && skb->protocol == htons(ETH_P_IP) &&
+	if (skb->protocol == htons(ETH_P_IP) &&
 	    skb->len + nf_bridge_mtu_reduction(skb) > skb->dev->mtu &&
 	    !skb_is_gso(skb)) {
 		if (br_parse_ip_options(skb))
+88-14
net/core/dev.c
···
 *	2. No high memory really exists on this machine.
 */

-static int illegal_highdma(const struct net_device *dev, struct sk_buff *skb)
+static int illegal_highdma(struct net_device *dev, struct sk_buff *skb)
{
#ifdef CONFIG_HIGHMEM
 	int i;
···
}

static netdev_features_t harmonize_features(struct sk_buff *skb,
-					    const struct net_device *dev,
-					    netdev_features_t features)
+					    netdev_features_t features)
{
 	int tmp;

 	if (skb->ip_summed != CHECKSUM_NONE &&
 	    !can_checksum_protocol(features, skb_network_protocol(skb, &tmp))) {
 		features &= ~NETIF_F_ALL_CSUM;
-	} else if (illegal_highdma(dev, skb)) {
+	} else if (illegal_highdma(skb->dev, skb)) {
 		features &= ~NETIF_F_SG;
 	}

 	return features;
}

-netdev_features_t netif_skb_dev_features(struct sk_buff *skb,
-					 const struct net_device *dev)
+netdev_features_t netif_skb_features(struct sk_buff *skb)
{
 	__be16 protocol = skb->protocol;
-	netdev_features_t features = dev->features;
+	netdev_features_t features = skb->dev->features;

-	if (skb_shinfo(skb)->gso_segs > dev->gso_max_segs)
+	if (skb_shinfo(skb)->gso_segs > skb->dev->gso_max_segs)
 		features &= ~NETIF_F_GSO_MASK;

 	if (protocol == htons(ETH_P_8021Q) || protocol == htons(ETH_P_8021AD)) {
 		struct vlan_ethhdr *veh = (struct vlan_ethhdr *)skb->data;
 		protocol = veh->h_vlan_encapsulated_proto;
 	} else if (!vlan_tx_tag_present(skb)) {
-		return harmonize_features(skb, dev, features);
+		return harmonize_features(skb, features);
 	}

-	features &= (dev->vlan_features | NETIF_F_HW_VLAN_CTAG_TX |
+	features &= (skb->dev->vlan_features | NETIF_F_HW_VLAN_CTAG_TX |
 		     NETIF_F_HW_VLAN_STAG_TX);

 	if (protocol == htons(ETH_P_8021Q) || protocol == htons(ETH_P_8021AD))
···
 			NETIF_F_GEN_CSUM | NETIF_F_HW_VLAN_CTAG_TX |
 			NETIF_F_HW_VLAN_STAG_TX;

-	return harmonize_features(skb, dev, features);
+	return harmonize_features(skb, features);
}
-EXPORT_SYMBOL(netif_skb_dev_features);
+EXPORT_SYMBOL(netif_skb_features);

int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
			struct netdev_queue *txq)
···
 	}
 	NAPI_GRO_CB(skb)->count = 1;
 	NAPI_GRO_CB(skb)->age = jiffies;
+	NAPI_GRO_CB(skb)->last = skb;
 	skb_shinfo(skb)->gso_size = skb_gro_len(skb);
 	skb->next = napi->gro_list;
 	napi->gro_list = skb;
···
EXPORT_SYMBOL(netdev_adjacent_get_private);

/**
+ * netdev_upper_get_next_dev_rcu - Get the next dev from upper list
+ * @dev: device
+ * @iter: list_head ** of the current position
+ *
+ * Gets the next device from the dev's upper list, starting from iter
+ * position. The caller must hold RCU read lock.
+ */
+struct net_device *netdev_upper_get_next_dev_rcu(struct net_device *dev,
+						 struct list_head **iter)
+{
+	struct netdev_adjacent *upper;
+
+	WARN_ON_ONCE(!rcu_read_lock_held() && !lockdep_rtnl_is_held());
+
+	upper = list_entry_rcu((*iter)->next, struct netdev_adjacent, list);
+
+	if (&upper->list == &dev->adj_list.upper)
+		return NULL;
+
+	*iter = &upper->list;
+
+	return upper->dev;
+}
+EXPORT_SYMBOL(netdev_upper_get_next_dev_rcu);
+
+/**
 * netdev_all_upper_get_next_dev_rcu - Get the next dev from upper list
 * @dev: device
 * @iter: list_head ** of the current position
···
 	return lower->private;
}
EXPORT_SYMBOL(netdev_lower_get_next_private_rcu);
+
+/**
+ * netdev_lower_get_next - Get the next device from the lower neighbour
+ *			   list
+ * @dev: device
+ * @iter: list_head ** of the current position
+ *
+ * Gets the next netdev_adjacent from the dev's lower neighbour
+ * list, starting from iter position. The caller must hold RTNL lock or
+ * its own locking that guarantees that the neighbour lower
+ * list will remain unchanged.
+ */
+void *netdev_lower_get_next(struct net_device *dev, struct list_head **iter)
+{
+	struct netdev_adjacent *lower;
+
+	lower = list_entry((*iter)->next, struct netdev_adjacent, list);
+
+	if (&lower->list == &dev->adj_list.lower)
+		return NULL;
+
+	*iter = &lower->list;
+
+	return lower->dev;
+}
+EXPORT_SYMBOL(netdev_lower_get_next);

/**
 * netdev_lower_get_first_private_rcu - Get the first ->private from the
···
}
EXPORT_SYMBOL(netdev_lower_dev_get_private);

+
+int dev_get_nest_level(struct net_device *dev,
+		       bool (*type_check)(struct net_device *dev))
+{
+	struct net_device *lower = NULL;
+	struct list_head *iter;
+	int max_nest = -1;
+	int nest;
+
+	ASSERT_RTNL();
+
+	netdev_for_each_lower_dev(dev, lower, iter) {
+		nest = dev_get_nest_level(lower, type_check);
+		if (max_nest < nest)
+			max_nest = nest;
+	}
+
+	if (type_check(dev))
+		max_nest++;
+
+	return max_nest;
+}
+EXPORT_SYMBOL(dev_get_nest_level);
+
static void dev_change_rx_flags(struct net_device *dev, int flags)
{
 	const struct net_device_ops *ops = dev->netdev_ops;
···
 	if (ops->ndo_set_rx_mode)
 		ops->ndo_set_rx_mode(dev);
}
-EXPORT_SYMBOL(__dev_set_rx_mode);

void dev_set_rx_mode(struct net_device *dev)
{
···

/* Delayed registration/unregisteration */
static LIST_HEAD(net_todo_list);
-static DECLARE_WAIT_QUEUE_HEAD(netdev_unregistering_wq);
+DECLARE_WAIT_QUEUE_HEAD(netdev_unregistering_wq);

static void net_set_todo(struct net_device *dev)
{
···
}
EXPORT_SYMBOL_GPL(__rtnl_link_unregister);

+/* Return with the rtnl_lock held when there are no network
+ * devices unregistering in any network namespace.
+ */
+static void rtnl_lock_unregistering_all(void)
+{
+	struct net *net;
+	bool unregistering;
+	DEFINE_WAIT(wait);
+
+	for (;;) {
+		prepare_to_wait(&netdev_unregistering_wq, &wait,
+				TASK_UNINTERRUPTIBLE);
+		unregistering = false;
+		rtnl_lock();
+		for_each_net(net) {
+			if (net->dev_unreg_count > 0) {
+				unregistering = true;
+				break;
+			}
+		}
+		if (!unregistering)
+			break;
+		__rtnl_unlock();
+		schedule();
+	}
+	finish_wait(&netdev_unregistering_wq, &wait);
+}
+
/**
 * rtnl_link_unregister - Unregister rtnl_link_ops from rtnetlink.
 * @ops: struct rtnl_link_ops * to unregister
 */
void rtnl_link_unregister(struct rtnl_link_ops *ops)
{
-	rtnl_lock();
+	/* Close the race with cleanup_net() */
+	mutex_lock(&net_mutex);
+	rtnl_lock_unregistering_all();
 	__rtnl_link_unregister(ops);
 	rtnl_unlock();
+	mutex_unlock(&net_mutex);
}
EXPORT_SYMBOL_GPL(rtnl_link_unregister);
+2-2
net/core/skbuff.c
···
 	if (unlikely(p->len + len >= 65536))
 		return -E2BIG;

-	lp = NAPI_GRO_CB(p)->last ?: p;
+	lp = NAPI_GRO_CB(p)->last;
 	pinfo = skb_shinfo(lp);

 	if (headlen <= offset) {
···

 	__skb_pull(skb, offset);

-	if (!NAPI_GRO_CB(p)->last)
+	if (NAPI_GRO_CB(p)->last == p)
 		skb_shinfo(p)->frag_list = skb;
 	else
 		NAPI_GRO_CB(p)->last->next = skb;
···
 	return register_pernet_subsys(&ipv4_mib_ops);
}

+static __net_init int inet_init_net(struct net *net)
+{
+	/*
+	 * Set defaults for local port range
+	 */
+	seqlock_init(&net->ipv4.ip_local_ports.lock);
+	net->ipv4.ip_local_ports.range[0] = 32768;
+	net->ipv4.ip_local_ports.range[1] = 61000;
+
+	seqlock_init(&net->ipv4.ping_group_range.lock);
+	/*
+	 * Sane defaults - nobody may create ping sockets.
+	 * Boot scripts should set this to distro-specific group.
+	 */
+	net->ipv4.ping_group_range.range[0] = make_kgid(&init_user_ns, 1);
+	net->ipv4.ping_group_range.range[1] = make_kgid(&init_user_ns, 0);
+	return 0;
+}
+
+static __net_exit void inet_exit_net(struct net *net)
+{
+}
+
+static __net_initdata struct pernet_operations af_inet_ops = {
+	.init = inet_init_net,
+	.exit = inet_exit_net,
+};
+
+static int __init init_inet_pernet_ops(void)
+{
+	return register_pernet_subsys(&af_inet_ops);
+}
+
static int ipv4_proc_init(void);

/*
···
 	if (ip_mr_init())
 		pr_crit("%s: Cannot init ipv4 mroute\n", __func__);
#endif
+
+	if (init_inet_pernet_ops())
+		pr_crit("%s: Cannot init ipv4 inet pernet ops\n", __func__);
 	/*
 	 * Initialise per-cpu ipv4 mibs
 	 */
+1-1
net/ipv4/fib_semantics.c
···
 	fi = kzalloc(sizeof(*fi)+nhs*sizeof(struct fib_nh), GFP_KERNEL);
 	if (fi == NULL)
 		goto failure;
+	fib_info_cnt++;
 	if (cfg->fc_mx) {
 		fi->fib_metrics = kzalloc(sizeof(u32) * RTAX_MAX, GFP_KERNEL);
 		if (!fi->fib_metrics)
 			goto failure;
 	} else
 		fi->fib_metrics = (u32 *) dst_default_metrics;
-	fib_info_cnt++;

 	fi->fib_net = hold_net(net);
 	fi->fib_protocol = cfg->fc_protocol;
+4-4
net/ipv4/inet_connection_sock.c
···
 	unsigned int seq;

 	do {
-		seq = read_seqbegin(&net->ipv4.sysctl_local_ports.lock);
+		seq = read_seqbegin(&net->ipv4.ip_local_ports.lock);

-		*low = net->ipv4.sysctl_local_ports.range[0];
-		*high = net->ipv4.sysctl_local_ports.range[1];
-	} while (read_seqretry(&net->ipv4.sysctl_local_ports.lock, seq));
+		*low = net->ipv4.ip_local_ports.range[0];
+		*high = net->ipv4.ip_local_ports.range[1];
+	} while (read_seqretry(&net->ipv4.ip_local_ports.lock, seq));
}
EXPORT_SYMBOL(inet_get_local_port_range);
+2-52
net/ipv4/ip_forward.c
···
static bool ip_may_fragment(const struct sk_buff *skb)
{
 	return unlikely((ip_hdr(skb)->frag_off & htons(IP_DF)) == 0) ||
-		!skb->local_df;
+	       skb->local_df;
}

static bool ip_exceeds_mtu(const struct sk_buff *skb, unsigned int mtu)
{
-	if (skb->len <= mtu || skb->local_df)
+	if (skb->len <= mtu)
 		return false;

 	if (skb_is_gso(skb) && skb_gso_network_seglen(skb) <= mtu)
···
 	return true;
}

-static bool ip_gso_exceeds_dst_mtu(const struct sk_buff *skb)
-{
-	unsigned int mtu;
-
-	if (skb->local_df || !skb_is_gso(skb))
-		return false;
-
-	mtu = ip_dst_mtu_maybe_forward(skb_dst(skb), true);
-
-	/* if seglen > mtu, do software segmentation for IP fragmentation on
-	 * output. DF bit cannot be set since ip_forward would have sent
-	 * icmp error.
-	 */
-	return skb_gso_network_seglen(skb) > mtu;
-}
-
-/* called if GSO skb needs to be fragmented on forward */
-static int ip_forward_finish_gso(struct sk_buff *skb)
-{
-	struct dst_entry *dst = skb_dst(skb);
-	netdev_features_t features;
-	struct sk_buff *segs;
-	int ret = 0;
-
-	features = netif_skb_dev_features(skb, dst->dev);
-	segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
-	if (IS_ERR(segs)) {
-		kfree_skb(skb);
-		return -ENOMEM;
-	}
-
-	consume_skb(skb);
-
-	do {
-		struct sk_buff *nskb = segs->next;
-		int err;
-
-		segs->next = NULL;
-		err = dst_output(segs);
-
-		if (err && ret == 0)
-			ret = err;
-		segs = nskb;
-	} while (segs);
-
-	return ret;
-}
-
static int ip_forward_finish(struct sk_buff *skb)
{
···
 	if (unlikely(opt->optlen))
 		ip_forward_options(skb);
-
-	if (ip_gso_exceeds_dst_mtu(skb))
-		return ip_forward_finish_gso(skb);

 	return dst_output(skb);
}
···
 	return -EINVAL;
 }
 
+static int ip_finish_output_gso(struct sk_buff *skb)
+{
+	netdev_features_t features;
+	struct sk_buff *segs;
+	int ret = 0;
+
+	/* common case: locally created skb or seglen is <= mtu */
+	if (((IPCB(skb)->flags & IPSKB_FORWARDED) == 0) ||
+	    skb_gso_network_seglen(skb) <= ip_skb_dst_mtu(skb))
+		return ip_finish_output2(skb);
+
+	/* Slowpath - GSO segment length is exceeding the dst MTU.
+	 *
+	 * This can happen in two cases:
+	 * 1) TCP GRO packet, DF bit not set
+	 * 2) skb arrived via virtio-net, we thus get TSO/GSO skbs directly
+	 *    from host network stack.
+	 */
+	features = netif_skb_features(skb);
+	segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+	if (IS_ERR(segs)) {
+		kfree_skb(skb);
+		return -ENOMEM;
+	}
+
+	consume_skb(skb);
+
+	do {
+		struct sk_buff *nskb = segs->next;
+		int err;
+
+		segs->next = NULL;
+		err = ip_fragment(segs, ip_finish_output2);
+
+		if (err && ret == 0)
+			ret = err;
+		segs = nskb;
+	} while (segs);
+
+	return ret;
+}
+
 static int ip_finish_output(struct sk_buff *skb)
 {
 #if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM)
···
 		return dst_output(skb);
 	}
 #endif
-	if (skb->len > ip_skb_dst_mtu(skb) && !skb_is_gso(skb))
+	if (skb_is_gso(skb))
+		return ip_finish_output_gso(skb);
+
+	if (skb->len > ip_skb_dst_mtu(skb))
 		return ip_fragment(skb, ip_finish_output2);
-	else
-		return ip_finish_output2(skb);
+
+	return ip_finish_output2(skb);
 }
 
 int ip_mc_output(struct sock *sk, struct sk_buff *skb)
net/ipv4/ip_tunnel.c | +3 -1
···
 	unsigned int max_headroom;	/* The extra header space needed */
 	__be32 dst;
 	int err;
-	bool connected = true;
+	bool connected;
 
 	inner_iph = (const struct iphdr *)skb_inner_network_header(skb);
+	connected = (tunnel->parms.iph.daddr != 0);
 
 	dst = tnl_params->daddr;
 	if (dst == 0) {
···
 	 */
 	if (!IS_ERR(itn->fb_tunnel_dev)) {
 		itn->fb_tunnel_dev->features |= NETIF_F_NETNS_LOCAL;
+		itn->fb_tunnel_dev->mtu = ip_tunnel_bind_dev(itn->fb_tunnel_dev);
 		ip_tunnel_add(itn, netdev_priv(itn->fb_tunnel_dev));
 	}
 	rtnl_unlock();
net/ipv4/ip_vti.c | +4 -1
···
 static int vti4_err(struct sk_buff *skb, u32 info)
 {
 	__be32 spi;
+	__u32 mark;
 	struct xfrm_state *x;
 	struct ip_tunnel *tunnel;
 	struct ip_esp_hdr *esph;
···
 				  iph->daddr, iph->saddr, 0);
 	if (!tunnel)
 		return -1;
+
+	mark = be32_to_cpu(tunnel->parms.o_key);
 
 	switch (protocol) {
 	case IPPROTO_ESP:
···
 		return 0;
 	}
 
-	x = xfrm_state_lookup(net, skb->mark, (const xfrm_address_t *)&iph->daddr,
+	x = xfrm_state_lookup(net, mark, (const xfrm_address_t *)&iph->daddr,
 			      spi, protocol, AF_INET);
 	if (!x)
 		return 0;
net/ipv4/netfilter/nf_defrag_ipv4.c | +3 -2
···
 #endif
 #include <net/netfilter/nf_conntrack_zones.h>
 
-/* Returns new sk_buff, or NULL */
 static int nf_ct_ipv4_gather_frags(struct sk_buff *skb, u_int32_t user)
 {
 	int err;
···
 	err = ip_defrag(skb, user);
 	local_bh_enable();
 
-	if (!err)
+	if (!err) {
 		ip_send_check(ip_hdr(skb));
+		skb->local_df = 1;
+	}
 
 	return err;
 }
net/ipv4/ping.c | +3 -3
···
 static void inet_get_ping_group_range_net(struct net *net, kgid_t *low,
 					  kgid_t *high)
 {
-	kgid_t *data = net->ipv4.sysctl_ping_group_range;
+	kgid_t *data = net->ipv4.ping_group_range.range;
 	unsigned int seq;
 
 	do {
-		seq = read_seqbegin(&net->ipv4.sysctl_local_ports.lock);
+		seq = read_seqbegin(&net->ipv4.ping_group_range.lock);
 
 		*low = data[0];
 		*high = data[1];
-	} while (read_seqretry(&net->ipv4.sysctl_local_ports.lock, seq));
+	} while (read_seqretry(&net->ipv4.ping_group_range.lock, seq));
 }
 
net/ipv4/route.c | +1 -1
···
 	struct in_device *out_dev;
 	unsigned int flags = 0;
 	bool do_cache;
-	u32 itag;
+	u32 itag = 0;
 
 	/* get a working reference to the output device */
 	out_dev = __in_dev_get_rcu(FIB_RES_DEV(*res));
net/ipv4/sysctl_net_ipv4.c | +14 -28
···
 /* Update system visible IP port range */
 static void set_local_port_range(struct net *net, int range[2])
 {
-	write_seqlock(&net->ipv4.sysctl_local_ports.lock);
-	net->ipv4.sysctl_local_ports.range[0] = range[0];
-	net->ipv4.sysctl_local_ports.range[1] = range[1];
-	write_sequnlock(&net->ipv4.sysctl_local_ports.lock);
+	write_seqlock(&net->ipv4.ip_local_ports.lock);
+	net->ipv4.ip_local_ports.range[0] = range[0];
+	net->ipv4.ip_local_ports.range[1] = range[1];
+	write_sequnlock(&net->ipv4.ip_local_ports.lock);
 }
 
 /* Validate changes from /proc interface. */
···
 				 size_t *lenp, loff_t *ppos)
 {
 	struct net *net =
-		container_of(table->data, struct net, ipv4.sysctl_local_ports.range);
+		container_of(table->data, struct net, ipv4.ip_local_ports.range);
 	int ret;
 	int range[2];
 	struct ctl_table tmp = {
···
 {
 	kgid_t *data = table->data;
 	struct net *net =
-		container_of(table->data, struct net, ipv4.sysctl_ping_group_range);
+		container_of(table->data, struct net, ipv4.ping_group_range.range);
 	unsigned int seq;
 	do {
-		seq = read_seqbegin(&net->ipv4.sysctl_local_ports.lock);
+		seq = read_seqbegin(&net->ipv4.ip_local_ports.lock);
 
 		*low = data[0];
 		*high = data[1];
-	} while (read_seqretry(&net->ipv4.sysctl_local_ports.lock, seq));
+	} while (read_seqretry(&net->ipv4.ip_local_ports.lock, seq));
 }
 
 /* Update system visible IP port range */
···
 {
 	kgid_t *data = table->data;
 	struct net *net =
-		container_of(table->data, struct net, ipv4.sysctl_ping_group_range);
-	write_seqlock(&net->ipv4.sysctl_local_ports.lock);
+		container_of(table->data, struct net, ipv4.ping_group_range.range);
+	write_seqlock(&net->ipv4.ip_local_ports.lock);
 	data[0] = low;
 	data[1] = high;
-	write_sequnlock(&net->ipv4.sysctl_local_ports.lock);
+	write_sequnlock(&net->ipv4.ip_local_ports.lock);
 }
 
 /* Validate changes from /proc interface. */
···
 	},
 	{
 		.procname	= "ping_group_range",
-		.data		= &init_net.ipv4.sysctl_ping_group_range,
+		.data		= &init_net.ipv4.ping_group_range.range,
 		.maxlen		= sizeof(gid_t)*2,
 		.mode		= 0644,
 		.proc_handler	= ipv4_ping_group_range,
···
 	},
 	{
 		.procname	= "ip_local_port_range",
-		.maxlen		= sizeof(init_net.ipv4.sysctl_local_ports.range),
-		.data		= &init_net.ipv4.sysctl_local_ports.range,
+		.maxlen		= sizeof(init_net.ipv4.ip_local_ports.range),
+		.data		= &init_net.ipv4.ip_local_ports.range,
 		.mode		= 0644,
 		.proc_handler	= ipv4_local_port_range,
 	},
···
 		for (i = 0; i < ARRAY_SIZE(ipv4_net_table) - 1; i++)
 			table[i].data += (void *)net - (void *)&init_net;
 	}
-
-	/*
-	 * Sane defaults - nobody may create ping sockets.
-	 * Boot scripts should set this to distro-specific group.
-	 */
-	net->ipv4.sysctl_ping_group_range[0] = make_kgid(&init_user_ns, 1);
-	net->ipv4.sysctl_ping_group_range[1] = make_kgid(&init_user_ns, 0);
-
-	/*
-	 * Set defaults for local port range
-	 */
-	seqlock_init(&net->ipv4.sysctl_local_ports.lock);
-	net->ipv4.sysctl_local_ports.range[0] = 32768;
-	net->ipv4.sysctl_local_ports.range[1] = 61000;
 
 	net->ipv4.ipv4_hdr = register_net_sysctl(net, "net/ipv4", table);
 	if (net->ipv4.ipv4_hdr == NULL)
···
 
 	sdata_lock(sdata);
 
-	if (ifmgd->auth_data) {
+	if (ifmgd->auth_data || ifmgd->assoc_data) {
+		const u8 *bssid = ifmgd->auth_data ?
+				ifmgd->auth_data->bss->bssid :
+				ifmgd->assoc_data->bss->bssid;
+
 		/*
-		 * If we are trying to authenticate while suspending, cfg80211
-		 * won't know and won't actually abort those attempts, thus we
-		 * need to do that ourselves.
+		 * If we are trying to authenticate / associate while suspending,
+		 * cfg80211 won't know and won't actually abort those attempts,
+		 * thus we need to do that ourselves.
 		 */
-		ieee80211_send_deauth_disassoc(sdata,
-					       ifmgd->auth_data->bss->bssid,
+		ieee80211_send_deauth_disassoc(sdata, bssid,
 					       IEEE80211_STYPE_DEAUTH,
 					       WLAN_REASON_DEAUTH_LEAVING,
 					       false, frame_buf);
-		ieee80211_destroy_auth_data(sdata, false);
+		if (ifmgd->assoc_data)
+			ieee80211_destroy_assoc_data(sdata, false);
+		if (ifmgd->auth_data)
+			ieee80211_destroy_auth_data(sdata, false);
 		cfg80211_tx_mlme_mgmt(sdata->dev, frame_buf,
 				      IEEE80211_DEAUTH_FRAME_LEN);
 	}
net/mac80211/offchannel.c | +20 -7
···
 		container_of(work, struct ieee80211_roc_work, work.work);
 	struct ieee80211_sub_if_data *sdata = roc->sdata;
 	struct ieee80211_local *local = sdata->local;
-	bool started;
+	bool started, on_channel;
 
 	mutex_lock(&local->mtx);
···
 		if (!roc->started) {
 			struct ieee80211_roc_work *dep;
 
-			/* start this ROC */
-			ieee80211_offchannel_stop_vifs(local);
+			WARN_ON(local->use_chanctx);
 
-			/* switch channel etc */
+			/* If actually operating on the desired channel (with at least
+			 * 20 MHz channel width) don't stop all the operations but still
+			 * treat it as though the ROC operation started properly, so
+			 * other ROC operations won't interfere with this one.
+			 */
+			roc->on_channel = roc->chan == local->_oper_chandef.chan &&
+					  local->_oper_chandef.width != NL80211_CHAN_WIDTH_5 &&
+					  local->_oper_chandef.width != NL80211_CHAN_WIDTH_10;
+
+			/* start this ROC */
 			ieee80211_recalc_idle(local);
 
-			local->tmp_channel = roc->chan;
-			ieee80211_hw_config(local, 0);
+			if (!roc->on_channel) {
+				ieee80211_offchannel_stop_vifs(local);
+
+				local->tmp_channel = roc->chan;
+				ieee80211_hw_config(local, 0);
+			}
 
 			/* tell userspace or send frame */
 			ieee80211_handle_roc_started(roc);
···
  finish:
 		list_del(&roc->list);
 		started = roc->started;
+		on_channel = roc->on_channel;
 		ieee80211_roc_notify_destroy(roc, !roc->abort);
 
-		if (started) {
+		if (started && !on_channel) {
 			ieee80211_flush_queues(local, NULL);
 
 			local->tmp_channel = NULL;
net/mac80211/rx.c | +2 -1
···
 	if (ether_addr_equal(bssid, rx->sdata->u.ibss.bssid) &&
 	    test_sta_flag(sta, WLAN_STA_AUTHORIZED)) {
 		sta->last_rx = jiffies;
-		if (ieee80211_is_data(hdr->frame_control)) {
+		if (ieee80211_is_data(hdr->frame_control) &&
+		    !is_multicast_ether_addr(hdr->addr1)) {
 			sta->last_rx_rate_idx = status->rate_idx;
 			sta->last_rx_rate_flag = status->flag;
 			sta->last_rx_rate_vht_flag = status->vht_flag;
net/mac80211/sta_info.c | +2 -1
···
 		atomic_dec(&ps->num_sta_ps);
 
 	/* This station just woke up and isn't aware of our SMPS state */
-	if (!ieee80211_smps_is_restrictive(sta->known_smps_mode,
+	if (!ieee80211_vif_is_mesh(&sdata->vif) &&
+	    !ieee80211_smps_is_restrictive(sta->known_smps_mode,
 					   sdata->smps_mode) &&
 	    sta->known_smps_mode != sdata->bss->req_smps &&
 	    sta_info_tx_streams(sta) != 1) {
net/mac80211/status.c | +2 -3
···
 	    !is_multicast_ether_addr(hdr->addr1))
 		txflags |= IEEE80211_RADIOTAP_F_TX_FAIL;
 
-	if ((info->status.rates[0].flags & IEEE80211_TX_RC_USE_RTS_CTS) ||
-	    (info->status.rates[0].flags & IEEE80211_TX_RC_USE_CTS_PROTECT))
+	if (info->status.rates[0].flags & IEEE80211_TX_RC_USE_CTS_PROTECT)
 		txflags |= IEEE80211_RADIOTAP_F_TX_CTS;
-	else if (info->status.rates[0].flags & IEEE80211_TX_RC_USE_RTS_CTS)
+	if (info->status.rates[0].flags & IEEE80211_TX_RC_USE_RTS_CTS)
 		txflags |= IEEE80211_RADIOTAP_F_TX_RTS;
 
 	put_unaligned_le16(txflags, pos);
···
 	mutex_unlock(&local->mtx);
 
 	if (sched_scan_stopped)
-		cfg80211_sched_scan_stopped(local->hw.wiphy);
+		cfg80211_sched_scan_stopped_rtnl(local->hw.wiphy);
 
 	/*
 	 * If this is for hw restart things are still running.
net/mac80211/vht.c | +6 -3
···
 	if (!vht_cap_ie || !sband->vht_cap.vht_supported)
 		return;
 
-	/* A VHT STA must support 40 MHz */
-	if (!(sta->sta.ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40))
-		return;
+	/*
+	 * A VHT STA must support 40 MHz, but if we verify that here
+	 * then we break a few things - some APs (e.g. Netgear R6300v2
+	 * and others based on the BCM4360 chipset) will unset this
+	 * capability bit when operating in 20 MHz.
+	 */
 
 	vht_cap->vht_supported = true;
 
net/netfilter/nf_conntrack_netlink.c | +3
···
 #ifdef CONFIG_NF_NAT_NEEDED
 	int ret;
 
+	if (!cda[CTA_NAT_DST] && !cda[CTA_NAT_SRC])
+		return 0;
+
 	ret = ctnetlink_parse_nat_setup(ct, NF_NAT_MANIP_DST,
 					cda[CTA_NAT_DST]);
 	if (ret < 0)
net/netfilter/nf_tables_core.c | +23 -26
···
 	int rulenum;
 };
 
-static inline void
-nft_chain_stats(const struct nft_chain *this, const struct nft_pktinfo *pkt,
-		struct nft_jumpstack *jumpstack, unsigned int stackptr)
-{
-	struct nft_stats __percpu *stats;
-	const struct nft_chain *chain = stackptr ? jumpstack[0].chain : this;
-
-	rcu_read_lock_bh();
-	stats = rcu_dereference(nft_base_chain(chain)->stats);
-	__this_cpu_inc(stats->pkts);
-	__this_cpu_add(stats->bytes, pkt->skb->len);
-	rcu_read_unlock_bh();
-}
-
 enum nft_trace {
 	NFT_TRACE_RULE,
 	NFT_TRACE_RETURN,
···
 unsigned int
 nft_do_chain(struct nft_pktinfo *pkt, const struct nf_hook_ops *ops)
 {
-	const struct nft_chain *chain = ops->priv;
+	const struct nft_chain *chain = ops->priv, *basechain = chain;
 	const struct nft_rule *rule;
 	const struct nft_expr *expr, *last;
 	struct nft_data data[NFT_REG_MAX + 1];
 	unsigned int stackptr = 0;
 	struct nft_jumpstack jumpstack[NFT_JUMP_STACK_SIZE];
-	int rulenum = 0;
+	struct nft_stats __percpu *stats;
+	int rulenum;
 	/*
 	 * Cache cursor to avoid problems in case that the cursor is updated
 	 * while traversing the ruleset.
···
 	unsigned int gencursor = ACCESS_ONCE(chain->net->nft.gencursor);
 
 do_chain:
+	rulenum = 0;
 	rule = list_entry(&chain->rules, struct nft_rule, list);
 next_rule:
 	data[NFT_REG_VERDICT].verdict = NFT_CONTINUE;
···
 		switch (data[NFT_REG_VERDICT].verdict) {
 		case NFT_BREAK:
 			data[NFT_REG_VERDICT].verdict = NFT_CONTINUE;
-			/* fall through */
+			continue;
 		case NFT_CONTINUE:
+			if (unlikely(pkt->skb->nf_trace))
+				nft_trace_packet(pkt, chain, rulenum, NFT_TRACE_RULE);
 			continue;
 		}
 		break;
···
 		jumpstack[stackptr].rule  = rule;
 		jumpstack[stackptr].rulenum = rulenum;
 		stackptr++;
-		/* fall through */
+		chain = data[NFT_REG_VERDICT].chain;
+		goto do_chain;
 	case NFT_GOTO:
+		if (unlikely(pkt->skb->nf_trace))
+			nft_trace_packet(pkt, chain, rulenum, NFT_TRACE_RULE);
+
 		chain = data[NFT_REG_VERDICT].chain;
 		goto do_chain;
 	case NFT_RETURN:
 		if (unlikely(pkt->skb->nf_trace))
 			nft_trace_packet(pkt, chain, rulenum, NFT_TRACE_RETURN);
-
-		/* fall through */
+		break;
 	case NFT_CONTINUE:
+		if (unlikely(pkt->skb->nf_trace && !(chain->flags & NFT_BASE_CHAIN)))
+			nft_trace_packet(pkt, chain, ++rulenum, NFT_TRACE_RETURN);
 		break;
 	default:
 		WARN_ON(1);
 	}
 
 	if (stackptr > 0) {
-		if (unlikely(pkt->skb->nf_trace))
-			nft_trace_packet(pkt, chain, ++rulenum, NFT_TRACE_RETURN);
-
 		stackptr--;
 		chain = jumpstack[stackptr].chain;
 		rule = jumpstack[stackptr].rule;
 		rulenum = jumpstack[stackptr].rulenum;
 		goto next_rule;
 	}
-	nft_chain_stats(chain, pkt, jumpstack, stackptr);
 
 	if (unlikely(pkt->skb->nf_trace))
-		nft_trace_packet(pkt, chain, ++rulenum, NFT_TRACE_POLICY);
+		nft_trace_packet(pkt, basechain, -1, NFT_TRACE_POLICY);
 
-	return nft_base_chain(chain)->policy;
+	rcu_read_lock_bh();
+	stats = rcu_dereference(nft_base_chain(basechain)->stats);
+	__this_cpu_inc(stats->pkts);
+	__this_cpu_add(stats->bytes, pkt->skb->len);
+	rcu_read_unlock_bh();
+
+	return nft_base_chain(basechain)->policy;
 }
 EXPORT_SYMBOL_GPL(nft_do_chain);
···
 # file format version
 FILE_VERSION = 1
 
-MAKEFLAGS += --no-print-directory
-LIBLOCKDEP_VERSION=$(shell make -sC ../../.. kernelversion)
+LIBLOCKDEP_VERSION=$(shell make --no-print-directory -sC ../../.. kernelversion)
 
 # Makefiles suck: This macro sets a default value of $(2) for the
 # variable named by $(1), unless the variable has been set by
···
 install: install_lib
 
 clean:
-	$(RM) *.o *~ $(TARGETS) *.a *.so $(VERSION_FILES) .*.d
+	$(RM) *.o *~ $(TARGETS) *.a *liblockdep*.so* $(VERSION_FILES) .*.d
 	$(RM) tags TAGS
 
 endif # skip-makefile