 (DSA_MAX_SWITCHES).
 Each of these switch child nodes should have the following required properties:

-- reg			: Describes the switch address on the MII bus
+- reg			: Contains two fields. The first one describes the
+			  address on the MII bus. The second is the switch
+			  number that must be unique in cascaded configurations
 - #address-cells	: Must be 1
 - #size-cells		: Must be 0
Documentation/input/alps.txt (+8)

 byte 4:    0   y6   y5   y4   y3   y2   y1   y0
 byte 5:    0   z6   z5   z4   z3   z2   z1   z0

+Protocol Version 2 DualPoint devices send standard PS/2 mouse packets for
+the DualPoint Stick.
+
 Dualpoint device -- interleaved packet format
 ---------------------------------------------
···
 byte 6:    0   y9   y8   y7    1    m    r    l
 byte 7:    0   y6   y5   y4   y3   y2   y1   y0
 byte 8:    0   z6   z5   z4   z3   z2   z1   z0
+
+Devices which use the interleaving format normally send standard PS/2 mouse
+packets for the DualPoint Stick + ALPS Absolute Mode packets for the
+touchpad, switching to the interleaved packet format when both the stick and
+the touchpad are used at the same time.

 ALPS Absolute Mode - Protocol Version 3
 ---------------------------------------
Documentation/input/event-codes.txt (+6)

 The kernel does not provide button emulation for such devices but treats
 them as any other INPUT_PROP_BUTTONPAD device.

+INPUT_PROP_ACCELEROMETER
+-------------------------
+Directional axes on this device (absolute and/or relative x, y, z) represent
+accelerometer data. All other axes retain their meaning. A device must not mix
+regular directional axes and accelerometer axes on the same event node.
+
 Guidelines:
 ==========
 The guidelines below ensure proper single-touch and multi-finger functionality.
Documentation/input/multi-touch-protocol.txt (+6 -3)

 The type of approaching tool. A lot of kernel drivers cannot distinguish
 between different tool types, such as a finger or a pen. In such cases, the
-event should be omitted. The protocol currently supports MT_TOOL_FINGER and
-MT_TOOL_PEN [2]. For type B devices, this event is handled by input core;
-drivers should instead use input_mt_report_slot_state().
+event should be omitted. The protocol currently supports MT_TOOL_FINGER,
+MT_TOOL_PEN, and MT_TOOL_PALM [2]. For type B devices, this event is handled
+by input core; drivers should instead use input_mt_report_slot_state().
+A contact's ABS_MT_TOOL_TYPE may change over time while still touching the
+device, because the firmware may not be able to determine which tool is being
+used when it first appears.

 ABS_MT_BLOB_ID
MAINTAINERS (+14 -16)

 F:	include/uapi/linux/kfd_ioctl.h

 AMD MICROCODE UPDATE SUPPORT
-M:	Andreas Herrmann <herrmann.der.user@googlemail.com>
-L:	amd64-microcode@amd64.org
+M:	Borislav Petkov <bp@alien8.de>
 S:	Maintained
 F:	arch/x86/kernel/cpu/microcode/amd*
···
 F:	drivers/platform/x86/intel_menlow.c

 INTEL IA32 MICROCODE UPDATE SUPPORT
-M:	Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
+M:	Borislav Petkov <bp@alien8.de>
 S:	Maintained
 F:	arch/x86/kernel/cpu/microcode/core*
 F:	arch/x86/kernel/cpu/microcode/intel*
···
 S:	Maintained
 F:	drivers/char/hw_random/ixp4xx-rng.c

-INTEL ETHERNET DRIVERS (e100/e1000/e1000e/fm10k/igb/igbvf/ixgb/ixgbe/ixgbevf/i40e/i40evf)
+INTEL ETHERNET DRIVERS
 M:	Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-M:	Jesse Brandeburg <jesse.brandeburg@intel.com>
-M:	Bruce Allan <bruce.w.allan@intel.com>
-M:	Carolyn Wyborny <carolyn.wyborny@intel.com>
-M:	Don Skidmore <donald.c.skidmore@intel.com>
-M:	Greg Rose <gregory.v.rose@intel.com>
-M:	Matthew Vick <matthew.vick@intel.com>
-M:	John Ronciak <john.ronciak@intel.com>
-M:	Mitch Williams <mitch.a.williams@intel.com>
-M:	Linux NICS <linux.nics@intel.com>
-L:	e1000-devel@lists.sourceforge.net
+R:	Jesse Brandeburg <jesse.brandeburg@intel.com>
+R:	Shannon Nelson <shannon.nelson@intel.com>
+R:	Carolyn Wyborny <carolyn.wyborny@intel.com>
+R:	Don Skidmore <donald.c.skidmore@intel.com>
+R:	Matthew Vick <matthew.vick@intel.com>
+R:	John Ronciak <john.ronciak@intel.com>
+R:	Mitch Williams <mitch.a.williams@intel.com>
+L:	intel-wired-lan@lists.osuosl.org
 W:	http://www.intel.com/support/feedback.htm
 W:	http://e1000.sourceforge.net/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net.git
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next.git
+Q:	http://patchwork.ozlabs.org/project/intel-wired-lan/list/
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue.git
 S:	Supported
 F:	Documentation/networking/e100.txt
 F:	Documentation/networking/e1000.txt
 	INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */
 	INTEL_EVENT_CONSTRAINT(0xcd, 0x8), /* MEM_TRANS_RETIRED.LOAD_LATENCY */
 	/* CYCLE_ACTIVITY.CYCLES_L1D_PENDING */
-	INTEL_EVENT_CONSTRAINT(0x08a3, 0x4),
+	INTEL_UEVENT_CONSTRAINT(0x08a3, 0x4),
 	/* CYCLE_ACTIVITY.STALLS_L1D_PENDING */
-	INTEL_EVENT_CONSTRAINT(0x0ca3, 0x4),
+	INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4),
 	/* CYCLE_ACTIVITY.CYCLES_NO_EXECUTE */
-	INTEL_EVENT_CONSTRAINT(0x04a3, 0xf),
+	INTEL_UEVENT_CONSTRAINT(0x04a3, 0xf),
 	EVENT_CONSTRAINT_END
 };
···
 	if (c)
 		return c;

-	c = intel_pebs_constraints(event);
+	c = intel_shared_regs_constraints(cpuc, event);
 	if (c)
 		return c;

-	c = intel_shared_regs_constraints(cpuc, event);
+	c = intel_pebs_constraints(event);
 	if (c)
 		return c;
arch/x86/kernel/entry_64.S (+15 -1)

 	cmpq %r11,(EFLAGS-ARGOFFSET)(%rsp)	/* R11 == RFLAGS */
 	jne opportunistic_sysret_failed

-	testq $X86_EFLAGS_RF,%r11	/* sysret can't restore RF */
+	/*
+	 * SYSRET can't restore RF. SYSRET can restore TF, but unlike IRET,
+	 * restoring TF results in a trap from userspace immediately after
+	 * SYSRET. This would cause an infinite loop whenever #DB happens
+	 * with register state that satisfies the opportunistic SYSRET
+	 * conditions. For example, single-stepping this user code:
+	 *
+	 *           movq $stuck_here,%rcx
+	 *           pushfq
+	 *           popq %r11
+	 * stuck_here:
+	 *
+	 * would never get past 'stuck_here'.
+	 */
+	testq $(X86_EFLAGS_RF|X86_EFLAGS_TF), %r11
 	jnz opportunistic_sysret_failed

 	/* nothing to check for RSP */
 	 */
 	if (echan->edesc) {
 		int cyclic = echan->edesc->cyclic;
+
+		/*
+		 * free the running request descriptor
+		 * since it is not in any of the vdesc lists
+		 */
+		edma_desc_free(&echan->edesc->vdesc);
+
 		echan->edesc = NULL;
 		edma_stop(echan->ch_num);
 		/* Move the cyclic channel back to default queue */
 	 * c->desc is NULL and exit.)
 	 */
 	if (c->desc) {
+		omap_dma_desc_free(&c->desc->vd);
 		c->desc = NULL;
 		/* Avoid stopping the dma twice */
 		if (!c->paused)
drivers/firmware/dmi_scan.c (+7 -15)

 	int i = 0;

 	/*
-	 *	Stop when we see all the items the table claimed to have
-	 *	OR we run off the end of the table (also happens)
+	 * Stop when we have seen all the items the table claimed to have
+	 * (SMBIOS < 3.0 only) OR we reach an end-of-table marker OR we run
+	 * off the end of the table (should never happen but sometimes does
+	 * on bogus implementations.)
 	 */
-	while ((i < num) && (data - buf + sizeof(struct dmi_header)) <= len) {
+	while ((!num || i < num) &&
+	       (data - buf + sizeof(struct dmi_header)) <= len) {
 		const struct dmi_header *dm = (const struct dmi_header *)data;

 		/*
···
 	if (memcmp(buf, "_SM3_", 5) == 0 &&
 	    buf[6] < 32 && dmi_checksum(buf, buf[6])) {
 		dmi_ver = get_unaligned_be16(buf + 7);
+		dmi_num = 0;			/* No longer specified */
 		dmi_len = get_unaligned_le32(buf + 12);
 		dmi_base = get_unaligned_le64(buf + 16);
-
-		/*
-		 * The 64-bit SMBIOS 3.0 entry point no longer has a field
-		 * containing the number of structures present in the table.
-		 * Instead, it defines the table size as a maximum size, and
-		 * relies on the end-of-table structure type (#127) to be used
-		 * to signal the end of the table.
-		 * So let's define dmi_num as an upper bound as well: each
-		 * structure has a 4 byte header, so dmi_len / 4 is an upper
-		 * bound for the number of structures in the table.
-		 */
-		dmi_num = dmi_len / 4;

 		if (dmi_walk_early(dmi_decode) == 0) {
 			pr_info("SMBIOS %d.%d present.\n",
 		ret = of_property_read_u32_index(np, "gpio,syscon-dev", 2,
 						 &priv->dir_reg_offset);
 		if (ret)
-			dev_err(dev, "can't read the dir register offset!\n");
+			dev_dbg(dev, "can't read the dir register offset!\n");

 		priv->dir_reg_offset <<= 3;
 	}
drivers/gpio/gpiolib-acpi.c (+10)

 	if (!handler)
 		return AE_BAD_PARAMETER;

+	pin = acpi_gpiochip_pin_to_gpio_offset(chip, pin);
+	if (pin < 0)
+		return AE_BAD_PARAMETER;
+
 	desc = gpiochip_request_own_desc(chip, pin, "ACPI:Event");
 	if (IS_ERR(desc)) {
 		dev_err(chip->dev, "Failed to request GPIO\n");
···
 		struct acpi_gpio_connection *conn;
 		struct gpio_desc *desc;
 		bool found;
+
+		pin = acpi_gpiochip_pin_to_gpio_offset(chip, pin);
+		if (pin < 0) {
+			status = AE_BAD_PARAMETER;
+			goto out;
+		}

 		mutex_lock(&achip->conn_lock);
 	drm_modeset_lock_all(dev);

 	plane = drm_plane_find(dev, set->plane_id);
-	if (!plane) {
+	if (!plane || plane->type != DRM_PLANE_TYPE_OVERLAY) {
 		ret = -ENOENT;
 		goto out_unlock;
 	}
drivers/gpu/drm/radeon/radeon_mn.c (+4 -7)

 	it = interval_tree_iter_first(&rmn->objects, start, end);
 	while (it) {
 		struct radeon_bo *bo;
-		struct fence *fence;
 		int r;

 		bo = container_of(it, struct radeon_bo, mn_it);
···
 			continue;
 		}

-		fence = reservation_object_get_excl(bo->tbo.resv);
-		if (fence) {
-			r = radeon_fence_wait((struct radeon_fence *)fence, false);
-			if (r)
-				DRM_ERROR("(%d) failed to wait for user bo\n", r);
-		}
+		r = reservation_object_wait_timeout_rcu(bo->tbo.resv, true,
+			false, MAX_SCHEDULE_TIMEOUT);
+		if (r)
+			DRM_ERROR("(%d) failed to wait for user bo\n", r);

 		radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
 		r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
drivers/gpu/drm/radeon/radeon_ttm.c (+4)

 	enum dma_data_direction direction = write ?
 		DMA_BIDIRECTIONAL : DMA_TO_DEVICE;

+	/* double check that we don't free the table twice */
+	if (!ttm->sg->sgl)
+		return;
+
 	/* free the sg table and pages again */
 	dma_unmap_sg(rdev->dev, ttm->sg->sgl, ttm->sg->nents, direction);
drivers/iio/accel/bma180.c (+1 -1)

 	mutex_lock(&data->mutex);

-	for_each_set_bit(bit, indio_dev->buffer->scan_mask,
+	for_each_set_bit(bit, indio_dev->active_scan_mask,
 			 indio_dev->masklength) {
 		ret = bma180_get_data_reg(data, bit);
 		if (ret < 0) {
 	struct regulator *vref;
 	struct vf610_adc_feature adc_feature;

+	u32 sample_freq_avail[5];
+
 	struct completion completion;
 };
+
+static const u32 vf610_hw_avgs[] = { 1, 4, 8, 16, 32 };

 #define VF610_ADC_CHAN(_idx, _chan_type) {			\
 	.type = (_chan_type),					\
···
 	/* sentinel */
 };

-/*
- * ADC sample frequency, unit is ADCK cycles.
- * ADC clk source is ipg clock, which is the same as bus clock.
- *
- * ADC conversion time = SFCAdder + AverageNum x (BCT + LSTAdder)
- * SFCAdder: fixed to 6 ADCK cycles
- * AverageNum: 1, 4, 8, 16, 32 samples for hardware average.
- * BCT (Base Conversion Time): fixed to 25 ADCK cycles for 12 bit mode
- * LSTAdder(Long Sample Time): fixed to 3 ADCK cycles
- *
- * By default, enable 12 bit resolution mode, clock source
- * set to ipg clock, So get below frequency group:
- */
-static const u32 vf610_sample_freq_avail[5] =
-{1941176, 559332, 286957, 145374, 73171};
+static inline void vf610_adc_calculate_rates(struct vf610_adc *info)
+{
+	unsigned long adck_rate, ipg_rate = clk_get_rate(info->clk);
+	int i;
+
+	/*
+	 * Calculate ADC sample frequencies
+	 * Sample time unit is ADCK cycles. ADCK clk source is ipg clock,
+	 * which is the same as bus clock.
+	 *
+	 * ADC conversion time = SFCAdder + AverageNum x (BCT + LSTAdder)
+	 * SFCAdder: fixed to 6 ADCK cycles
+	 * AverageNum: 1, 4, 8, 16, 32 samples for hardware average.
+	 * BCT (Base Conversion Time): fixed to 25 ADCK cycles for 12 bit mode
+	 * LSTAdder(Long Sample Time): fixed to 3 ADCK cycles
+	 */
+	adck_rate = ipg_rate / info->adc_feature.clk_div;
+	for (i = 0; i < ARRAY_SIZE(vf610_hw_avgs); i++)
+		info->sample_freq_avail[i] =
+			adck_rate / (6 + vf610_hw_avgs[i] * (25 + 3));
+}

 static inline void vf610_adc_cfg_init(struct vf610_adc *info)
 {
+	struct vf610_adc_feature *adc_feature = &info->adc_feature;
+
 	/* set default Configuration for ADC controller */
-	info->adc_feature.clk_sel = VF610_ADCIOC_BUSCLK_SET;
-	info->adc_feature.vol_ref = VF610_ADCIOC_VR_VREF_SET;
+	adc_feature->clk_sel = VF610_ADCIOC_BUSCLK_SET;
+	adc_feature->vol_ref = VF610_ADCIOC_VR_VREF_SET;

-	info->adc_feature.calibration = true;
-	info->adc_feature.ovwren = true;
+	adc_feature->calibration = true;
+	adc_feature->ovwren = true;

-	info->adc_feature.clk_div = 1;
-	info->adc_feature.res_mode = 12;
-	info->adc_feature.sample_rate = 1;
-	info->adc_feature.lpm = true;
+	adc_feature->res_mode = 12;
+	adc_feature->sample_rate = 1;
+	adc_feature->lpm = true;
+
+	/* Use a save ADCK which is below 20MHz on all devices */
+	adc_feature->clk_div = 8;
+
+	vf610_adc_calculate_rates(info);
 }

 static void vf610_adc_cfg_post_set(struct vf610_adc *info)
···
 	cfg_data = readl(info->regs + VF610_REG_ADC_CFG);

-	/* low power configuration */
 	cfg_data &= ~VF610_ADC_ADLPC_EN;
 	if (adc_feature->lpm)
 		cfg_data |= VF610_ADC_ADLPC_EN;

-	/* disable high speed */
 	cfg_data &= ~VF610_ADC_ADHSC_EN;

 	writel(cfg_data, info->regs + VF610_REG_ADC_CFG);
···
 	return IRQ_HANDLED;
 }

-static IIO_CONST_ATTR_SAMP_FREQ_AVAIL("1941176, 559332, 286957, 145374, 73171");
+static ssize_t vf610_show_samp_freq_avail(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct vf610_adc *info = iio_priv(dev_to_iio_dev(dev));
+	size_t len = 0;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(info->sample_freq_avail); i++)
+		len += scnprintf(buf + len, PAGE_SIZE - len,
+			"%u ", info->sample_freq_avail[i]);
+
+	/* replace trailing space by newline */
+	buf[len - 1] = '\n';
+
+	return len;
+}
+
+static IIO_DEV_ATTR_SAMP_FREQ_AVAIL(vf610_show_samp_freq_avail);

 static struct attribute *vf610_attributes[] = {
-	&iio_const_attr_sampling_frequency_available.dev_attr.attr,
+	&iio_dev_attr_sampling_frequency_available.dev_attr.attr,
 	NULL
 };
···
 		return IIO_VAL_FRACTIONAL_LOG2;

 	case IIO_CHAN_INFO_SAMP_FREQ:
-		*val = vf610_sample_freq_avail[info->adc_feature.sample_rate];
+		*val = info->sample_freq_avail[info->adc_feature.sample_rate];
 		*val2 = 0;
 		return IIO_VAL_INT;
···
 	switch (mask) {
 	case IIO_CHAN_INFO_SAMP_FREQ:
 		for (i = 0;
-			i < ARRAY_SIZE(vf610_sample_freq_avail);
+			i < ARRAY_SIZE(info->sample_freq_avail);
 			i++)
-			if (val == vf610_sample_freq_avail[i]) {
+			if (val == info->sample_freq_avail[i]) {
 				info->adc_feature.sample_rate = i;
 				vf610_adc_sample_set(info);
 				return 0;
drivers/iio/gyro/bmg160.c (+1 -1)

 	int bit, ret, i = 0;

 	mutex_lock(&data->mutex);
-	for_each_set_bit(bit, indio_dev->buffer->scan_mask,
+	for_each_set_bit(bit, indio_dev->active_scan_mask,
 			 indio_dev->masklength) {
 		ret = i2c_smbus_read_word_data(data->client,
 					       BMG160_AXIS_TO_REG(bit));
drivers/iio/imu/adis_trigger.c (+1 -1)

 	iio_trigger_set_drvdata(adis->trig, adis);
 	ret = iio_trigger_register(adis->trig);

-	indio_dev->trig = adis->trig;
+	indio_dev->trig = iio_trigger_get(adis->trig);
 	if (ret)
 		goto error_free_irq;
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c (+30 -26)

 	}
 }

-static int inv_mpu6050_write_fsr(struct inv_mpu6050_state *st, int fsr)
+static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val)
 {
-	int result;
+	int result, i;
 	u8 d;

-	if (fsr < 0 || fsr > INV_MPU6050_MAX_GYRO_FS_PARAM)
-		return -EINVAL;
-	if (fsr == st->chip_config.fsr)
-		return 0;
+	for (i = 0; i < ARRAY_SIZE(gyro_scale_6050); ++i) {
+		if (gyro_scale_6050[i] == val) {
+			d = (i << INV_MPU6050_GYRO_CONFIG_FSR_SHIFT);
+			result = inv_mpu6050_write_reg(st,
+					st->reg->gyro_config, d);
+			if (result)
+				return result;

-	d = (fsr << INV_MPU6050_GYRO_CONFIG_FSR_SHIFT);
-	result = inv_mpu6050_write_reg(st, st->reg->gyro_config, d);
-	if (result)
-		return result;
-	st->chip_config.fsr = fsr;
+			st->chip_config.fsr = i;
+			return 0;
+		}
+	}

-	return 0;
+	return -EINVAL;
 }

-static int inv_mpu6050_write_accel_fs(struct inv_mpu6050_state *st, int fs)
+static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val)
 {
-	int result;
+	int result, i;
 	u8 d;

-	if (fs < 0 || fs > INV_MPU6050_MAX_ACCL_FS_PARAM)
-		return -EINVAL;
-	if (fs == st->chip_config.accl_fs)
-		return 0;
+	for (i = 0; i < ARRAY_SIZE(accel_scale); ++i) {
+		if (accel_scale[i] == val) {
+			d = (i << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT);
+			result = inv_mpu6050_write_reg(st,
+					st->reg->accl_config, d);
+			if (result)
+				return result;

-	d = (fs << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT);
-	result = inv_mpu6050_write_reg(st, st->reg->accl_config, d);
-	if (result)
-		return result;
-	st->chip_config.accl_fs = fs;
+			st->chip_config.accl_fs = i;
+			return 0;
+		}
+	}

-	return 0;
+	return -EINVAL;
 }

 static int inv_mpu6050_write_raw(struct iio_dev *indio_dev,
···
 	case IIO_CHAN_INFO_SCALE:
 		switch (chan->type) {
 		case IIO_ANGL_VEL:
-			result = inv_mpu6050_write_fsr(st, val);
+			result = inv_mpu6050_write_gyro_scale(st, val2);
 			break;
 		case IIO_ACCEL:
-			result = inv_mpu6050_write_accel_fs(st, val);
+			result = inv_mpu6050_write_accel_scale(st, val2);
 			break;
 		default:
 			result = -EINVAL;
drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c (+14 -11)

 #include <linux/poll.h>
 #include "inv_mpu_iio.h"

+static void inv_clear_kfifo(struct inv_mpu6050_state *st)
+{
+	unsigned long flags;
+
+	/* take the spin lock sem to avoid interrupt kick in */
+	spin_lock_irqsave(&st->time_stamp_lock, flags);
+	kfifo_reset(&st->timestamps);
+	spin_unlock_irqrestore(&st->time_stamp_lock, flags);
+}
+
 int inv_reset_fifo(struct iio_dev *indio_dev)
 {
 	int result;
···
 				     INV_MPU6050_BIT_FIFO_RST);
 	if (result)
 		goto reset_fifo_fail;
+
+	/* clear timestamps fifo */
+	inv_clear_kfifo(st);
+
 	/* enable interrupt */
 	if (st->chip_config.accl_fifo_enable ||
 	    st->chip_config.gyro_fifo_enable) {
···
 					     INV_MPU6050_BIT_DATA_RDY_EN);

 	return result;
-}
-
-static void inv_clear_kfifo(struct inv_mpu6050_state *st)
-{
-	unsigned long flags;
-
-	/* take the spin lock sem to avoid interrupt kick in */
-	spin_lock_irqsave(&st->time_stamp_lock, flags);
-	kfifo_reset(&st->timestamps);
-	spin_unlock_irqrestore(&st->time_stamp_lock, flags);
 }

 /**
···
flush_fifo:
 	/* Flush HW and SW FIFOs. */
 	inv_reset_fifo(indio_dev);
-	inv_clear_kfifo(st);
 	mutex_unlock(&indio_dev->mlock);
 	iio_trigger_notify_done(indio_dev->trig);
drivers/iio/imu/kmx61.c (+1 -1)

 		base = KMX61_MAG_XOUT_L;

 	mutex_lock(&data->lock);
-	for_each_set_bit(bit, indio_dev->buffer->scan_mask,
+	for_each_set_bit(bit, indio_dev->active_scan_mask,
 			 indio_dev->masklength) {
 		ret = kmx61_read_measurement(data, base, bit);
 		if (ret < 0) {
drivers/iio/industrialio-core.c (+3 -2)

  * @attr_list: List of IIO device attributes
  *
  * This function frees the memory allocated for each of the IIO device
- * attributes in the list. Note: if you want to reuse the list after calling
- * this function you have to reinitialize it using INIT_LIST_HEAD().
+ * attributes in the list.
  */
 void iio_free_chan_devattr_list(struct list_head *attr_list)
 {
···
 	list_for_each_entry_safe(p, n, attr_list, l) {
 		kfree(p->dev_attr.attr.name);
+		list_del(&p->l);
 		kfree(p);
 	}
 }
···
 	iio_free_chan_devattr_list(&indio_dev->channel_attr_list);
 	kfree(indio_dev->chan_attr_group.attrs);
+	indio_dev->chan_attr_group.attrs = NULL;
 }

 static void iio_dev_release(struct device *device)
 	if (dmasync)
 		dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);

+	/*
+	 * If the combination of the addr and size requested for this memory
+	 * region causes an integer overflow, return error.
+	 */
+	if ((PAGE_ALIGN(addr + size) <= size) ||
+	    (PAGE_ALIGN(addr + size) <= addr))
+		return ERR_PTR(-EINVAL);
+
 	if (!can_do_mlock())
 		return ERR_PTR(-EPERM);
drivers/input/mouse/alps.c (+29 -19)

 	mutex_unlock(&alps_mutex);
 }

-static void alps_report_bare_ps2_packet(struct input_dev *dev,
+static void alps_report_bare_ps2_packet(struct psmouse *psmouse,
 					unsigned char packet[],
 					bool report_buttons)
 {
+	struct alps_data *priv = psmouse->private;
+	struct input_dev *dev;
+
+	/* Figure out which device to use to report the bare packet */
+	if (priv->proto_version == ALPS_PROTO_V2 &&
+	    (priv->flags & ALPS_DUALPOINT)) {
+		/* On V2 devices the DualPoint Stick reports bare packets */
+		dev = priv->dev2;
+	} else if (unlikely(IS_ERR_OR_NULL(priv->dev3))) {
+		/* Register dev3 mouse if we received PS/2 packet first time */
+		if (!IS_ERR(priv->dev3))
+			psmouse_queue_work(psmouse, &priv->dev3_register_work,
+					   0);
+		return;
+	} else {
+		dev = priv->dev3;
+	}
+
 	if (report_buttons)
 		alps_report_buttons(dev, NULL,
 				packet[0] & 1, packet[0] & 2, packet[0] & 4);
···
 		 * de-synchronization.
 		 */

-		alps_report_bare_ps2_packet(priv->dev2,
-					    &psmouse->packet[3], false);
+		alps_report_bare_ps2_packet(psmouse, &psmouse->packet[3],
+					    false);

 		/*
 		 * Continue with the standard ALPS protocol handling,
···
 	 * properly we only do this if the device is fully synchronized.
 	 */
 	if (!psmouse->out_of_sync_cnt && (psmouse->packet[0] & 0xc8) == 0x08) {
-
-		/* Register dev3 mouse if we received PS/2 packet first time */
-		if (unlikely(!priv->dev3))
-			psmouse_queue_work(psmouse,
-					   &priv->dev3_register_work, 0);
-
 		if (psmouse->pktcnt == 3) {
-			/* Once dev3 mouse device is registered report data */
-			if (likely(!IS_ERR_OR_NULL(priv->dev3)))
-				alps_report_bare_ps2_packet(priv->dev3,
-							    psmouse->packet,
-							    true);
+			alps_report_bare_ps2_packet(psmouse, psmouse->packet,
+						    true);
 			return PSMOUSE_FULL_PACKET;
 		}
 		return PSMOUSE_GOOD_DATA;
···
 		priv->set_abs_params = alps_set_abs_params_mt;
 		priv->nibble_commands = alps_v3_nibble_commands;
 		priv->addr_command = PSMOUSE_CMD_RESET_WRAP;
-		priv->x_max = 1360;
-		priv->y_max = 660;
 		priv->x_bits = 23;
 		priv->y_bits = 12;
+
+		if (alps_dolphin_get_device_area(psmouse, priv))
+			return -EIO;
+
 		break;

 	case ALPS_PROTO_V6:
···
 		priv->set_abs_params = alps_set_abs_params_mt;
 		priv->nibble_commands = alps_v3_nibble_commands;
 		priv->addr_command = PSMOUSE_CMD_RESET_WRAP;
-
-		if (alps_dolphin_get_device_area(psmouse, priv))
-			return -EIO;
+		priv->x_max = 0xfff;
+		priv->y_max = 0x7ff;

 		if (priv->fw_ver[1] != 0xba)
 			priv->flags |= ALPS_BUTTONPAD;
 static void its_encode_devid(struct its_cmd_block *cmd, u32 devid)
 {
-	cmd->raw_cmd[0] &= ~(0xffffUL << 32);
+	cmd->raw_cmd[0] &= BIT_ULL(32) - 1;
 	cmd->raw_cmd[0] |= ((u64)devid) << 32;
 }
···
 	int i;
 	int psz = SZ_64K;
 	u64 shr = GITS_BASER_InnerShareable;
+	u64 cache = GITS_BASER_WaWb;

 	for (i = 0; i < GITS_BASER_NR_REGS; i++) {
 		u64 val = readq_relaxed(its->base + GITS_BASER + i * 8);
···
 		val = (virt_to_phys(base) |
 		       (type << GITS_BASER_TYPE_SHIFT) |
 		       ((entry_size - 1) << GITS_BASER_ENTRY_SIZE_SHIFT) |
-		       GITS_BASER_WaWb |
+		       cache |
 		       shr |
 		       GITS_BASER_VALID);
···
 			 * Shareability didn't stick. Just use
 			 * whatever the read reported, which is likely
 			 * to be the only thing this redistributor
-			 * supports.
+			 * supports. If that's zero, make it
+			 * non-cacheable as well.
 			 */
 			shr = tmp & GITS_BASER_SHAREABILITY_MASK;
+			if (!shr)
+				cache = GITS_BASER_nC;
 			goto retry_baser;
 		}
···
 	tmp = readq_relaxed(rbase + GICR_PROPBASER);

 	if ((tmp ^ val) & GICR_PROPBASER_SHAREABILITY_MASK) {
+		if (!(tmp & GICR_PROPBASER_SHAREABILITY_MASK)) {
+			/*
+			 * The HW reports non-shareable, we must
+			 * remove the cacheability attributes as
+			 * well.
+			 */
+			val &= ~(GICR_PROPBASER_SHAREABILITY_MASK |
+				 GICR_PROPBASER_CACHEABILITY_MASK);
+			val |= GICR_PROPBASER_nC;
+			writeq_relaxed(val, rbase + GICR_PROPBASER);
+		}
 		pr_info_once("GIC: using cache flushing for LPI property table\n");
 		gic_rdists->flags |= RDIST_FLAGS_PROPBASE_NEEDS_FLUSHING;
 	}

 	/* set PENDBASE */
 	val = (page_to_phys(pend_page) |
-	       GICR_PROPBASER_InnerShareable |
-	       GICR_PROPBASER_WaWb);
+	       GICR_PENDBASER_InnerShareable |
+	       GICR_PENDBASER_WaWb);

 	writeq_relaxed(val, rbase + GICR_PENDBASER);
+	tmp = readq_relaxed(rbase + GICR_PENDBASER);
+
+	if (!(tmp & GICR_PENDBASER_SHAREABILITY_MASK)) {
+		/*
+		 * The HW reports non-shareable, we must remove the
+		 * cacheability attributes as well.
+		 */
+		val &= ~(GICR_PENDBASER_SHAREABILITY_MASK |
+			 GICR_PENDBASER_CACHEABILITY_MASK);
+		val |= GICR_PENDBASER_nC;
+		writeq_relaxed(val, rbase + GICR_PENDBASER);
+	}

 	/* Enable LPIs */
 	val = readl_relaxed(rbase + GICR_CTLR);
···
 	 * This ITS wants a linear CPU number.
 	 */
 	target = readq_relaxed(gic_data_rdist_rd_base() + GICR_TYPER);
-	target = GICR_TYPER_CPU_NUMBER(target);
+	target = GICR_TYPER_CPU_NUMBER(target) << 16;

 	/* Perform collection mapping */
···
 	writeq_relaxed(baser, its->base + GITS_CBASER);
 	tmp = readq_relaxed(its->base + GITS_CBASER);
-	writeq_relaxed(0, its->base + GITS_CWRITER);
-	writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR);

-	if ((tmp ^ baser) & GITS_BASER_SHAREABILITY_MASK) {
+	if ((tmp ^ baser) & GITS_CBASER_SHAREABILITY_MASK) {
+		if (!(tmp & GITS_CBASER_SHAREABILITY_MASK)) {
+			/*
+			 * The HW reports non-shareable, we must
+			 * remove the cacheability attributes as
+			 * well.
+			 */
+			baser &= ~(GITS_CBASER_SHAREABILITY_MASK |
+				   GITS_CBASER_CACHEABILITY_MASK);
+			baser |= GITS_CBASER_nC;
+			writeq_relaxed(baser, its->base + GITS_CBASER);
+		}
 		pr_info("ITS: using cache flushing for cmd queue\n");
 		its->flags |= ITS_FLAGS_CMDQ_NEEDS_FLUSHING;
 	}
+
+	writeq_relaxed(0, its->base + GITS_CWRITER);
+	writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR);

 	if (of_property_read_bool(its->msi_chip.of_node, "msi-controller")) {
 		its->domain = irq_domain_add_tree(NULL, &its_domain_ops, its);
drivers/lguest/Kconfig (+1 -1)

 config LGUEST
 	tristate "Linux hypervisor example code"
-	depends on X86_32 && EVENTFD && TTY
+	depends on X86_32 && EVENTFD && TTY && PCI_DIRECT
 	select HVC_DRIVER
 	---help---
 	  This is a very simple module which allows you to run
drivers/net/bonding/bond_main.c (+2 -1)

 	/* Find out if any slaves have the same mapping as this skb. */
 	bond_for_each_slave_rcu(bond, slave, iter) {
 		if (slave->queue_id == skb->queue_mapping) {
-			if (bond_slave_can_tx(slave)) {
+			if (bond_slave_is_up(slave) &&
+			    slave->link == BOND_LINK_UP) {
 				bond_dev_queue_xmit(bond, skb, slave->dev);
 				return 0;
 			}
 	u8	unused[5];
 };

-/* Extended usage of uCAN commands CMD_RX_FRAME_xxxABLE for PCAN-USB Pro FD */
+/* Extended usage of uCAN commands CMD_xxx_xx_OPTION for PCAN-USB Pro FD */
 #define PCAN_UFD_FLTEXT_CALIBRATION	0x8000

-struct __packed pcan_ufd_filter_ext {
+struct __packed pcan_ufd_options {
 	__le16	opcode_channel;

-	__le16	ext_mask;
+	__le16	ucan_mask;
 	u16	unused;
 	__le16	usb_mask;
 };
···
 	/* moves the pointer forward */
 	pc += sizeof(struct pucan_wr_err_cnt);

+	/* add command to switch from ISO to non-ISO mode, if fw allows it */
+	if (dev->can.ctrlmode_supported & CAN_CTRLMODE_FD_NON_ISO) {
+		struct pucan_options *puo = (struct pucan_options *)pc;
+
+		puo->opcode_channel =
+			(dev->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO) ?
+			pucan_cmd_opcode_channel(dev,
+						 PUCAN_CMD_CLR_DIS_OPTION) :
+			pucan_cmd_opcode_channel(dev, PUCAN_CMD_SET_EN_OPTION);
+
+		puo->options = cpu_to_le16(PUCAN_OPTION_CANDFDISO);
+
+		/* to be sure that no other extended bits will be taken into
+		 * account
+		 */
+		puo->unused = 0;
+
+		/* moves the pointer forward */
+		pc += sizeof(struct pucan_options);
+	}
+
 	/* next, go back to operational mode */
 	cmd = (struct pucan_command *)pc;
 	cmd->opcode_channel = pucan_cmd_opcode_channel(dev,
···
 	return pcan_usb_fd_send_cmd(dev, cmd);
 }

-/* set/unset notifications filter:
+/* set/unset options
  *
- *	onoff	sets(1)/unset(0) notifications
- *	mask	each bit defines a kind of notification to set/unset
+ *	onoff	set(1)/unset(0) options
+ *	mask	each bit defines a kind of options to set/unset
  */
-static int pcan_usb_fd_set_filter_ext(struct peak_usb_device *dev,
-				      bool onoff, u16 ext_mask, u16 usb_mask)
+static int pcan_usb_fd_set_options(struct peak_usb_device *dev,
+				   bool onoff, u16 ucan_mask, u16 usb_mask)
 {
-	struct pcan_ufd_filter_ext *cmd = pcan_usb_fd_cmd_buffer(dev);
+	struct pcan_ufd_options *cmd = pcan_usb_fd_cmd_buffer(dev);

 	cmd->opcode_channel = pucan_cmd_opcode_channel(dev,
-					(onoff) ? PUCAN_CMD_RX_FRAME_ENABLE :
-						  PUCAN_CMD_RX_FRAME_DISABLE);
+					(onoff) ? PUCAN_CMD_SET_EN_OPTION :
+						  PUCAN_CMD_CLR_DIS_OPTION);

-	cmd->ext_mask = cpu_to_le16(ext_mask);
+	cmd->ucan_mask = cpu_to_le16(ucan_mask);
 	cmd->usb_mask = cpu_to_le16(usb_mask);

 	/* send the command */
···
 					&pcan_usb_pro_fd);

 		/* enable USB calibration messages */
-		err = pcan_usb_fd_set_filter_ext(dev, 1,
-						 PUCAN_FLTEXT_ERROR,
-						 PCAN_UFD_FLTEXT_CALIBRATION);
+		err = pcan_usb_fd_set_options(dev, 1,
+					      PUCAN_OPTION_ERROR,
+					      PCAN_UFD_FLTEXT_CALIBRATION);
 	}

 	pdev->usb_if->dev_opened_count++;
···
 	/* turn off special msgs for that interface if no other dev opened */
 	if (pdev->usb_if->dev_opened_count == 1)
-		pcan_usb_fd_set_filter_ext(dev, 0,
-					   PUCAN_FLTEXT_ERROR,
-					   PCAN_UFD_FLTEXT_CALIBRATION);
+		pcan_usb_fd_set_options(dev, 0,
+					PUCAN_OPTION_ERROR,
+					PCAN_UFD_FLTEXT_CALIBRATION);
 	pdev->usb_if->dev_opened_count--;

 	return 0;
···
 			 pdev->usb_if->fw_info.fw_version[2],
 			 dev->adapter->ctrl_count);

-		/* the currently supported hw is non-ISO */
-		dev->can.ctrlmode = CAN_CTRLMODE_FD_NON_ISO;
+		/* check for ability to switch between ISO/non-ISO modes */
+		if (pdev->usb_if->fw_info.fw_version[0] >= 2) {
+			/* firmware >= 2.x supports ISO/non-ISO switching */
+			dev->can.ctrlmode_supported |= CAN_CTRLMODE_FD_NON_ISO;
+		} else {
+			/* firmware < 2.x only supports fixed(!) non-ISO */
+			dev->can.ctrlmode |= CAN_CTRLMODE_FD_NON_ISO;
+		}

 		/* tell the hardware the can driver is running */
 		err = pcan_usb_fd_drv_loaded(dev, 1);
···
 	if (dev->ctrl_idx == 0) {
 		/* turn off calibration message if any device were opened */
 		if (pdev->usb_if->dev_opened_count > 0)
-			pcan_usb_fd_set_filter_ext(dev, 0,
-						   PUCAN_FLTEXT_ERROR,
-						   PCAN_UFD_FLTEXT_CALIBRATION);
+			pcan_usb_fd_set_options(dev, 0,
+						PUCAN_OPTION_ERROR,
+						PCAN_UFD_FLTEXT_CALIBRATION);

 		/* tell USB adapter that the driver is being unloaded */
 		pcan_usb_fd_drv_loaded(dev, 0);
+1-3
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
···18111811	int stats_state;
18121812
18131813	/* used for synchronization of concurrent threads statistics handling */
18141814-	spinlock_t stats_lock;
18141814+	struct mutex stats_lock;
18151815
18161816	/* used by dmae command loader */
18171817	struct dmae_command stats_dmae;
···19351935
19361936	int fp_array_size;
19371937	u32 dump_preset_idx;
19381938-	bool stats_started;
19391939-	struct semaphore stats_sema;
19401938
19411939	u8 phys_port_id[ETH_ALEN];
19421940
+54-45
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
···129129 u32 xmac_val;130130 u32 emac_addr;131131 u32 emac_val;132132- u32 umac_addr;133133- u32 umac_val;132132+ u32 umac_addr[2];133133+ u32 umac_val[2];134134 u32 bmac_addr;135135 u32 bmac_val[2];136136};···78667866 return 0;78677867}7868786878697869+/* previous driver DMAE transaction may have occurred when pre-boot stage ended78707870+ * and boot began, or when kdump kernel was loaded. Either case would invalidate78717871+ * the addresses of the transaction, resulting in was-error bit set in the pci78727872+ * causing all hw-to-host pcie transactions to timeout. If this happened we want78737873+ * to clear the interrupt which detected this from the pglueb and the was done78747874+ * bit78757875+ */78767876+static void bnx2x_clean_pglue_errors(struct bnx2x *bp)78777877+{78787878+ if (!CHIP_IS_E1x(bp))78797879+ REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR,78807880+ 1 << BP_ABS_FUNC(bp));78817881+}78827882+78697883static int bnx2x_init_hw_func(struct bnx2x *bp)78707884{78717885 int port = BP_PORT(bp);···7972795879737959 bnx2x_init_block(bp, BLOCK_PGLUE_B, init_phase);7974796079757975- if (!CHIP_IS_E1x(bp))79767976- REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, func);79617961+ bnx2x_clean_pglue_errors(bp);7977796279787963 bnx2x_init_block(bp, BLOCK_ATC, init_phase);79797964 bnx2x_init_block(bp, BLOCK_DMAE, init_phase);···1015410141 return base + (BP_ABS_FUNC(bp)) * stride;1015510142}10156101431014410144+static bool bnx2x_prev_unload_close_umac(struct bnx2x *bp,1014510145+ u8 port, u32 reset_reg,1014610146+ struct bnx2x_mac_vals *vals)1014710147+{1014810148+ u32 mask = MISC_REGISTERS_RESET_REG_2_UMAC0 << port;1014910149+ u32 base_addr;1015010150+1015110151+ if (!(mask & reset_reg))1015210152+ return false;1015310153+1015410154+ BNX2X_DEV_INFO("Disable umac Rx %02x\n", port);1015510155+ base_addr = port ? 
GRCBASE_UMAC1 : GRCBASE_UMAC0;1015610156+ vals->umac_addr[port] = base_addr + UMAC_REG_COMMAND_CONFIG;1015710157+ vals->umac_val[port] = REG_RD(bp, vals->umac_addr[port]);1015810158+ REG_WR(bp, vals->umac_addr[port], 0);1015910159+1016010160+ return true;1016110161+}1016210162+1015710163static void bnx2x_prev_unload_close_mac(struct bnx2x *bp,1015810164 struct bnx2x_mac_vals *vals)1015910165{···1018110149 u8 port = BP_PORT(bp);10182101501018310151 /* reset addresses as they also mark which values were changed */1018410184- vals->bmac_addr = 0;1018510185- vals->umac_addr = 0;1018610186- vals->xmac_addr = 0;1018710187- vals->emac_addr = 0;1015210152+ memset(vals, 0, sizeof(*vals));10188101531018910154 reset_reg = REG_RD(bp, MISC_REG_RESET_REG_2);1019010155···1023010201 REG_WR(bp, vals->xmac_addr, 0);1023110202 mac_stopped = true;1023210203 }1023310233- mask = MISC_REGISTERS_RESET_REG_2_UMAC0 << port;1023410234- if (mask & reset_reg) {1023510235- BNX2X_DEV_INFO("Disable umac Rx\n");1023610236- base_addr = BP_PORT(bp) ? 
GRCBASE_UMAC1 : GRCBASE_UMAC0;1023710237- vals->umac_addr = base_addr + UMAC_REG_COMMAND_CONFIG;1023810238- vals->umac_val = REG_RD(bp, vals->umac_addr);1023910239- REG_WR(bp, vals->umac_addr, 0);1024010240- mac_stopped = true;1024110241- }1020410204+1020510205+ mac_stopped |= bnx2x_prev_unload_close_umac(bp, 0,1020610206+ reset_reg, vals);1020710207+ mac_stopped |= bnx2x_prev_unload_close_umac(bp, 1,1020810208+ reset_reg, vals);1024210209 }10243102101024410211 if (mac_stopped)···1053010505 /* Close the MAC Rx to prevent BRB from filling up */1053110506 bnx2x_prev_unload_close_mac(bp, &mac_vals);10532105071053310533- /* close LLH filters towards the BRB */1050810508+ /* close LLH filters for both ports towards the BRB */1053410509 bnx2x_set_rx_filter(&bp->link_params, 0);1051010510+ bp->link_params.port ^= 1;1051110511+ bnx2x_set_rx_filter(&bp->link_params, 0);1051210512+ bp->link_params.port ^= 1;10535105131053610514 /* Check if the UNDI driver was previously loaded */1053710515 if (bnx2x_prev_is_after_undi(bp)) {···10581105531058210554 if (mac_vals.xmac_addr)1058310555 REG_WR(bp, mac_vals.xmac_addr, mac_vals.xmac_val);1058410584- if (mac_vals.umac_addr)1058510585- REG_WR(bp, mac_vals.umac_addr, mac_vals.umac_val);1055610556+ if (mac_vals.umac_addr[0])1055710557+ REG_WR(bp, mac_vals.umac_addr[0], mac_vals.umac_val[0]);1055810558+ if (mac_vals.umac_addr[1])1055910559+ REG_WR(bp, mac_vals.umac_addr[1], mac_vals.umac_val[1]);1058610560 if (mac_vals.emac_addr)1058710561 REG_WR(bp, mac_vals.emac_addr, mac_vals.emac_val);1058810562 if (mac_vals.bmac_addr) {···1060110571 return bnx2x_prev_mcp_done(bp);1060210572}10603105731060410604-/* previous driver DMAE transaction may have occurred when pre-boot stage ended1060510605- * and boot began, or when kdump kernel was loaded. Either case would invalidate1060610606- * the addresses of the transaction, resulting in was-error bit set in the pci1060710607- * causing all hw-to-host pcie transactions to timeout. 
If this happened we want1060810608- * to clear the interrupt which detected this from the pglueb and the was done1060910609- * bit1061010610- */1061110611-static void bnx2x_prev_interrupted_dmae(struct bnx2x *bp)1061210612-{1061310613- if (!CHIP_IS_E1x(bp)) {1061410614- u32 val = REG_RD(bp, PGLUE_B_REG_PGLUE_B_INT_STS);1061510615- if (val & PGLUE_B_PGLUE_B_INT_STS_REG_WAS_ERROR_ATTN) {1061610616- DP(BNX2X_MSG_SP,1061710617- "'was error' bit was found to be set in pglueb upon startup. Clearing\n");1061810618- REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR,1061910619- 1 << BP_FUNC(bp));1062010620- }1062110621- }1062210622-}1062310623-1062410574static int bnx2x_prev_unload(struct bnx2x *bp)1062510575{1062610576 int time_counter = 10;···1061010600 /* clear hw from errors which may have resulted from an interrupted1061110601 * dmae transaction.1061210602 */1061310613- bnx2x_prev_interrupted_dmae(bp);1060310603+ bnx2x_clean_pglue_errors(bp);10614106041061510605 /* Release previously held locks */1061610606 hw_lock_reg = (BP_FUNC(bp) <= 5) ?···1204712037 mutex_init(&bp->port.phy_mutex);1204812038 mutex_init(&bp->fw_mb_mutex);1204912039 mutex_init(&bp->drv_info_mutex);1204012040+ mutex_init(&bp->stats_lock);1205012041 bp->drv_info_mng_owner = false;1205112051- spin_lock_init(&bp->stats_lock);1205212052- sema_init(&bp->stats_sema, 1);12053120421205412043 INIT_DELAYED_WORK(&bp->sp_task, bnx2x_sp_task);1205512044 INIT_DELAYED_WORK(&bp->sp_rtnl_task, bnx2x_sp_rtnl_task);···1367713668 cancel_delayed_work_sync(&bp->sp_task);1367813669 cancel_delayed_work_sync(&bp->period_task);13679136701368013680- spin_lock_bh(&bp->stats_lock);1367113671+ mutex_lock(&bp->stats_lock);1368113672 bp->stats_state = STATS_STATE_DISABLED;1368213682- spin_unlock_bh(&bp->stats_lock);1367313673+ mutex_unlock(&bp->stats_lock);13683136741368413675 bnx2x_save_statistics(bp);1368513676
+3-1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
···22382238
22392239		cookie.vf = vf;
22402240		cookie.state = VF_ACQUIRED;
22412241-		bnx2x_stats_safe_exec(bp, bnx2x_set_vf_state, &cookie);
22412241+		rc = bnx2x_stats_safe_exec(bp, bnx2x_set_vf_state, &cookie);
22422242+		if (rc)
22432243+			goto op_err;
22422244	}
22432245
22442246	DP(BNX2X_MSG_IOV, "set state to acquired\n");
+74-90
drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
···123123 */124124static void bnx2x_storm_stats_post(struct bnx2x *bp)125125{126126- if (!bp->stats_pending) {127127- int rc;126126+ int rc;128127129129- spin_lock_bh(&bp->stats_lock);128128+ if (bp->stats_pending)129129+ return;130130131131- if (bp->stats_pending) {132132- spin_unlock_bh(&bp->stats_lock);133133- return;134134- }131131+ bp->fw_stats_req->hdr.drv_stats_counter =132132+ cpu_to_le16(bp->stats_counter++);135133136136- bp->fw_stats_req->hdr.drv_stats_counter =137137- cpu_to_le16(bp->stats_counter++);134134+ DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n",135135+ le16_to_cpu(bp->fw_stats_req->hdr.drv_stats_counter));138136139139- DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n",140140- le16_to_cpu(bp->fw_stats_req->hdr.drv_stats_counter));137137+ /* adjust the ramrod to include VF queues statistics */138138+ bnx2x_iov_adjust_stats_req(bp);139139+ bnx2x_dp_stats(bp);141140142142- /* adjust the ramrod to include VF queues statistics */143143- bnx2x_iov_adjust_stats_req(bp);144144- bnx2x_dp_stats(bp);145145-146146- /* send FW stats ramrod */147147- rc = bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_STAT_QUERY, 0,148148- U64_HI(bp->fw_stats_req_mapping),149149- U64_LO(bp->fw_stats_req_mapping),150150- NONE_CONNECTION_TYPE);151151- if (rc == 0)152152- bp->stats_pending = 1;153153-154154- spin_unlock_bh(&bp->stats_lock);155155- }141141+ /* send FW stats ramrod */142142+ rc = bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_STAT_QUERY, 0,143143+ U64_HI(bp->fw_stats_req_mapping),144144+ U64_LO(bp->fw_stats_req_mapping),145145+ NONE_CONNECTION_TYPE);146146+ if (rc == 0)147147+ bp->stats_pending = 1;156148}157149158150static void bnx2x_hw_stats_post(struct bnx2x *bp)···213221 */214222215223/* should be called under stats_sema */216216-static void __bnx2x_stats_pmf_update(struct bnx2x *bp)224224+static void bnx2x_stats_pmf_update(struct bnx2x *bp)217225{218226 struct dmae_command *dmae;219227 u32 opcode;···511519}512520513521/* should be called under stats_sema 
*/514514-static void __bnx2x_stats_start(struct bnx2x *bp)522522+static void bnx2x_stats_start(struct bnx2x *bp)515523{516524 if (IS_PF(bp)) {517525 if (bp->port.pmf)···523531 bnx2x_hw_stats_post(bp);524532 bnx2x_storm_stats_post(bp);525533 }526526-527527- bp->stats_started = true;528528-}529529-530530-static void bnx2x_stats_start(struct bnx2x *bp)531531-{532532- if (down_timeout(&bp->stats_sema, HZ/10))533533- BNX2X_ERR("Unable to acquire stats lock\n");534534- __bnx2x_stats_start(bp);535535- up(&bp->stats_sema);536534}537535538536static void bnx2x_stats_pmf_start(struct bnx2x *bp)539537{540540- if (down_timeout(&bp->stats_sema, HZ/10))541541- BNX2X_ERR("Unable to acquire stats lock\n");542538 bnx2x_stats_comp(bp);543543- __bnx2x_stats_pmf_update(bp);544544- __bnx2x_stats_start(bp);545545- up(&bp->stats_sema);546546-}547547-548548-static void bnx2x_stats_pmf_update(struct bnx2x *bp)549549-{550550- if (down_timeout(&bp->stats_sema, HZ/10))551551- BNX2X_ERR("Unable to acquire stats lock\n");552552- __bnx2x_stats_pmf_update(bp);553553- up(&bp->stats_sema);539539+ bnx2x_stats_pmf_update(bp);540540+ bnx2x_stats_start(bp);554541}555542556543static void bnx2x_stats_restart(struct bnx2x *bp)···539568 */540569 if (IS_VF(bp))541570 return;542542- if (down_timeout(&bp->stats_sema, HZ/10))543543- BNX2X_ERR("Unable to acquire stats lock\n");571571+544572 bnx2x_stats_comp(bp);545545- __bnx2x_stats_start(bp);546546- up(&bp->stats_sema);573573+ bnx2x_stats_start(bp);547574}548575549576static void bnx2x_bmac_stats_update(struct bnx2x *bp)···12151246{12161247 u32 *stats_comp = bnx2x_sp(bp, stats_comp);1217124812181218- /* we run update from timer context, so give up12191219- * if somebody is in the middle of transition12201220- */12211221- if (down_trylock(&bp->stats_sema))12491249+ if (bnx2x_edebug_stats_stopped(bp))12221250 return;12231223-12241224- if (bnx2x_edebug_stats_stopped(bp) || !bp->stats_started)12251225- goto out;1226125112271252 if (IS_PF(bp)) {12281253 if 
(*stats_comp != DMAE_COMP_VAL)12291229- goto out;12541254+ return;1230125512311256 if (bp->port.pmf)12321257 bnx2x_hw_stats_update(bp);···12301267 BNX2X_ERR("storm stats were not updated for 3 times\n");12311268 bnx2x_panic();12321269 }12331233- goto out;12701270+ return;12341271 }12351272 } else {12361273 /* vf doesn't collect HW statistics, and doesn't get completions···1244128112451282 /* vf is done */12461283 if (IS_VF(bp))12471247- goto out;12841284+ return;1248128512491286 if (netif_msg_timer(bp)) {12501287 struct bnx2x_eth_stats *estats = &bp->eth_stats;···1255129212561293 bnx2x_hw_stats_post(bp);12571294 bnx2x_storm_stats_post(bp);12581258-12591259-out:12601260- up(&bp->stats_sema);12611295}1262129612631297static void bnx2x_port_stats_stop(struct bnx2x *bp)···1318135813191359static void bnx2x_stats_stop(struct bnx2x *bp)13201360{13211321- int update = 0;13221322-13231323- if (down_timeout(&bp->stats_sema, HZ/10))13241324- BNX2X_ERR("Unable to acquire stats lock\n");13251325-13261326- bp->stats_started = false;13611361+ bool update = false;1327136213281363 bnx2x_stats_comp(bp);13291364···13361381 bnx2x_hw_stats_post(bp);13371382 bnx2x_stats_comp(bp);13381383 }13391339-13401340- up(&bp->stats_sema);13411384}1342138513431386static void bnx2x_stats_do_nothing(struct bnx2x *bp)···1363141013641411void bnx2x_stats_handle(struct bnx2x *bp, enum bnx2x_stats_event event)13651412{13661366- enum bnx2x_stats_state state;13671367- void (*action)(struct bnx2x *bp);14131413+ enum bnx2x_stats_state state = bp->stats_state;14141414+13681415 if (unlikely(bp->panic))13691416 return;1370141713711371- spin_lock_bh(&bp->stats_lock);13721372- state = bp->stats_state;13731373- bp->stats_state = bnx2x_stats_stm[state][event].next_state;13741374- action = bnx2x_stats_stm[state][event].action;13751375- spin_unlock_bh(&bp->stats_lock);14181418+ /* Statistics update run from timer context, and we don't want to stop14191419+ * that context in case someone is in the middle of a 
transition.14201420+ * For other events, wait a bit until lock is taken.14211421+ */14221422+ if (!mutex_trylock(&bp->stats_lock)) {14231423+ if (event == STATS_EVENT_UPDATE)14241424+ return;1376142513771377- action(bp);14261426+ DP(BNX2X_MSG_STATS,14271427+ "Unlikely stats' lock contention [event %d]\n", event);14281428+ mutex_lock(&bp->stats_lock);14291429+ }14301430+14311431+ bnx2x_stats_stm[state][event].action(bp);14321432+ bp->stats_state = bnx2x_stats_stm[state][event].next_state;14331433+14341434+ mutex_unlock(&bp->stats_lock);1378143513791436 if ((event != STATS_EVENT_UPDATE) || netif_msg_timer(bp))13801437 DP(BNX2X_MSG_STATS, "state %d -> event %d -> state %d\n",···19611998 }19621999}1963200019641964-void bnx2x_stats_safe_exec(struct bnx2x *bp,19651965- void (func_to_exec)(void *cookie),19661966- void *cookie){19671967- if (down_timeout(&bp->stats_sema, HZ/10))19681968- BNX2X_ERR("Unable to acquire stats lock\n");20012001+int bnx2x_stats_safe_exec(struct bnx2x *bp,20022002+ void (func_to_exec)(void *cookie),20032003+ void *cookie)20042004+{20052005+ int cnt = 10, rc = 0;20062006+20072007+ /* Wait for statistics to end [while blocking further requests],20082008+ * then run supplied function 'safely'.20092009+ */20102010+ mutex_lock(&bp->stats_lock);20112011+19692012 bnx2x_stats_comp(bp);20132013+ while (bp->stats_pending && cnt--)20142014+ if (bnx2x_storm_stats_update(bp))20152015+ usleep_range(1000, 2000);20162016+ if (bp->stats_pending) {20172017+ BNX2X_ERR("Failed to wait for stats pending to clear [possibly FW is stuck]\n");20182018+ rc = -EBUSY;20192019+ goto out;20202020+ }20212021+19702022 func_to_exec(cookie);19711971- __bnx2x_stats_start(bp);19721972- up(&bp->stats_sema);20232023+20242024+out:20252025+ /* No need to restart statistics - if they're enabled, the timer20262026+ * will restart the statistics.20272027+ */20282028+ mutex_unlock(&bp->stats_lock);20292029+20302030+ return rc;19732031}
···920920{921921 int i;922922923923- for (i = 0; i < ARRAY_SIZE(adap->sge.ingr_map); i++) {923923+ for (i = 0; i < adap->sge.ingr_sz; i++) {924924 struct sge_rspq *q = adap->sge.ingr_map[i];925925926926 if (q && q->handler) {···934934 }935935}936936937937+/* Disable interrupt and napi handler */938938+static void disable_interrupts(struct adapter *adap)939939+{940940+ if (adap->flags & FULL_INIT_DONE) {941941+ t4_intr_disable(adap);942942+ if (adap->flags & USING_MSIX) {943943+ free_msix_queue_irqs(adap);944944+ free_irq(adap->msix_info[0].vec, adap);945945+ } else {946946+ free_irq(adap->pdev->irq, adap);947947+ }948948+ quiesce_rx(adap);949949+ }950950+}951951+937952/*938953 * Enable NAPI scheduling and interrupt generation for all Rx queues.939954 */···956941{957942 int i;958943959959- for (i = 0; i < ARRAY_SIZE(adap->sge.ingr_map); i++) {944944+ for (i = 0; i < adap->sge.ingr_sz; i++) {960945 struct sge_rspq *q = adap->sge.ingr_map[i];961946962947 if (!q)···985970 int err, msi_idx, i, j;986971 struct sge *s = &adap->sge;987972988988- bitmap_zero(s->starving_fl, MAX_EGRQ);989989- bitmap_zero(s->txq_maperr, MAX_EGRQ);973973+ bitmap_zero(s->starving_fl, s->egr_sz);974974+ bitmap_zero(s->txq_maperr, s->egr_sz);990975991976 if (adap->flags & USING_MSIX)992977 msi_idx = 1; /* vector 0 is for non-queue interrupts */···998983 msi_idx = -((int)s->intrq.abs_id + 1);999984 }1000985986986+ /* NOTE: If you add/delete any Ingress/Egress Queue allocations in here,987987+ * don't forget to update the following which need to be988988+ * synchronized to and changes here.989989+ *990990+ * 1. The calculations of MAX_INGQ in cxgb4.h.991991+ *992992+ * 2. Update enable_msix/name_msix_vecs/request_msix_queue_irqs993993+ * to accommodate any new/deleted Ingress Queues994994+ * which need MSI-X Vectors.995995+ *996996+ * 3. 
Update sge_qinfo_show() to include information on the997997+ * new/deleted queues.998998+ */1001999 err = t4_sge_alloc_rxq(adap, &s->fw_evtq, true, adap->port[0],10021000 msi_idx, NULL, fwevtq_handler);10031001 if (err) {···4272424442734245static void cxgb_down(struct adapter *adapter)42744246{42754275- t4_intr_disable(adapter);42764247 cancel_work_sync(&adapter->tid_release_task);42774248 cancel_work_sync(&adapter->db_full_task);42784249 cancel_work_sync(&adapter->db_drop_task);42794250 adapter->tid_release_task_busy = false;42804251 adapter->tid_release_head = NULL;4281425242824282- if (adapter->flags & USING_MSIX) {42834283- free_msix_queue_irqs(adapter);42844284- free_irq(adapter->msix_info[0].vec, adapter);42854285- } else42864286- free_irq(adapter->pdev->irq, adapter);42874287- quiesce_rx(adapter);42884253 t4_sge_stop(adapter);42894254 t4_free_sge_resources(adapter);42904255 adapter->flags &= ~FULL_INIT_DONE;···47544733 if (ret < 0)47554734 return ret;4756473547574757- ret = t4_cfg_pfvf(adap, adap->fn, adap->fn, 0, MAX_EGRQ, 64, MAX_INGQ,47584758- 0, 0, 4, 0xf, 0xf, 16, FW_CMD_CAP_PF, FW_CMD_CAP_PF);47364736+ ret = t4_cfg_pfvf(adap, adap->fn, adap->fn, 0, adap->sge.egr_sz, 64,47374737+ MAX_INGQ, 0, 0, 4, 0xf, 0xf, 16, FW_CMD_CAP_PF,47384738+ FW_CMD_CAP_PF);47594739 if (ret < 0)47604740 return ret;47614741···51105088 enum dev_state state;51115089 u32 params[7], val[7];51125090 struct fw_caps_config_cmd caps_cmd;51135113- struct fw_devlog_cmd devlog_cmd;51145114- u32 devlog_meminfo;51155091 int reset = 1;50925092+50935093+ /* Grab Firmware Device Log parameters as early as possible so we have50945094+ * access to it for debugging, etc.50955095+ */50965096+ ret = t4_init_devlog_params(adap);50975097+ if (ret < 0)50985098+ return ret;5116509951175100 /* Contact FW, advertising Master capability */51185101 ret = t4_fw_hello(adap, adap->mbox, adap->mbox, MASTER_MAY, &state);···51955168 ret = get_vpd_params(adap, &adap->params.vpd);51965169 if (ret < 0)51975170 goto 
bye;51985198-51995199- /* Read firmware device log parameters. We really need to find a way52005200- * to get these parameters initialized with some default values (which52015201- * are likely to be correct) for the case where we either don't52025202- * attache to the firmware or it's crashed when we probe the adapter.52035203- * That way we'll still be able to perform early firmware startup52045204- * debugging ... If the request to get the Firmware's Device Log52055205- * parameters fails, we'll live so we don't make that a fatal error.52065206- */52075207- memset(&devlog_cmd, 0, sizeof(devlog_cmd));52085208- devlog_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_DEVLOG_CMD) |52095209- FW_CMD_REQUEST_F | FW_CMD_READ_F);52105210- devlog_cmd.retval_len16 = htonl(FW_LEN16(devlog_cmd));52115211- ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd),52125212- &devlog_cmd);52135213- if (ret == 0) {52145214- devlog_meminfo =52155215- ntohl(devlog_cmd.memtype_devlog_memaddr16_devlog);52165216- adap->params.devlog.memtype =52175217- FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(devlog_meminfo);52185218- adap->params.devlog.start =52195219- FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(devlog_meminfo) << 4;52205220- adap->params.devlog.size = ntohl(devlog_cmd.memsize_devlog);52215221- }5222517152235172 /*52245173 * Find out what ports are available to us. Note that we need to do···52955292 adap->tids.ftid_base = val[3];52965293 adap->tids.nftids = val[4] - val[3] + 1;52975294 adap->sge.ingr_start = val[5];52955295+52965296+ /* qids (ingress/egress) returned from firmware can be anywhere52975297+ * in the range from EQ(IQFLINT)_START to EQ(IQFLINT)_END.52985298+ * Hence driver needs to allocate memory for this range to52995299+ * store the queue info. 
Get the highest IQFLINT/EQ index returned53005300+ * in FW_EQ_*_CMD.alloc command.53015301+ */53025302+ params[0] = FW_PARAM_PFVF(EQ_END);53035303+ params[1] = FW_PARAM_PFVF(IQFLINT_END);53045304+ ret = t4_query_params(adap, adap->mbox, adap->fn, 0, 2, params, val);53055305+ if (ret < 0)53065306+ goto bye;53075307+ adap->sge.egr_sz = val[0] - adap->sge.egr_start + 1;53085308+ adap->sge.ingr_sz = val[1] - adap->sge.ingr_start + 1;53095309+53105310+ adap->sge.egr_map = kcalloc(adap->sge.egr_sz,53115311+ sizeof(*adap->sge.egr_map), GFP_KERNEL);53125312+ if (!adap->sge.egr_map) {53135313+ ret = -ENOMEM;53145314+ goto bye;53155315+ }53165316+53175317+ adap->sge.ingr_map = kcalloc(adap->sge.ingr_sz,53185318+ sizeof(*adap->sge.ingr_map), GFP_KERNEL);53195319+ if (!adap->sge.ingr_map) {53205320+ ret = -ENOMEM;53215321+ goto bye;53225322+ }53235323+53245324+ /* Allocate the memory for the vaious egress queue bitmaps53255325+ * ie starving_fl and txq_maperr.53265326+ */53275327+ adap->sge.starving_fl = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz),53285328+ sizeof(long), GFP_KERNEL);53295329+ if (!adap->sge.starving_fl) {53305330+ ret = -ENOMEM;53315331+ goto bye;53325332+ }53335333+53345334+ adap->sge.txq_maperr = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz),53355335+ sizeof(long), GFP_KERNEL);53365336+ if (!adap->sge.txq_maperr) {53375337+ ret = -ENOMEM;53385338+ goto bye;53395339+ }5298534052995341 params[0] = FW_PARAM_PFVF(CLIP_START);53005342 params[1] = FW_PARAM_PFVF(CLIP_END);···55495501 * happened to HW/FW, stop issuing commands.55505502 */55515503bye:55045504+ kfree(adap->sge.egr_map);55055505+ kfree(adap->sge.ingr_map);55065506+ kfree(adap->sge.starving_fl);55075507+ kfree(adap->sge.txq_maperr);55525508 if (ret != -ETIMEDOUT && ret != -EIO)55535509 t4_fw_bye(adap, adap->mbox);55545510 return ret;···55805528 netif_carrier_off(dev);55815529 }55825530 spin_unlock(&adap->stats_lock);55315531+ disable_interrupts(adap);55835532 if (adap->flags & FULL_INIT_DONE)55845533 
cxgb_down(adap);55855534 rtnl_unlock();···5965591259665913 t4_free_mem(adapter->l2t);59675914 t4_free_mem(adapter->tids.tid_tab);59155915+ kfree(adapter->sge.egr_map);59165916+ kfree(adapter->sge.ingr_map);59175917+ kfree(adapter->sge.starving_fl);59185918+ kfree(adapter->sge.txq_maperr);59685919 disable_msi(adapter);5969592059705921 for_each_port(adapter, i)···6293623662946237 if (is_offload(adapter))62956238 detach_ulds(adapter);62396239+62406240+ disable_interrupts(adapter);6296624162976242 for_each_port(adapter, i)62986243 if (adapter->port[i]->reg_state == NETREG_REGISTERED)
+4-3
drivers/net/ethernet/chelsio/cxgb4/sge.c
···21712171	struct adapter *adap = (struct adapter *)data;
21722172	struct sge *s = &adap->sge;
21732173
21742174-	for (i = 0; i < ARRAY_SIZE(s->starving_fl); i++)
21742174+	for (i = 0; i < BITS_TO_LONGS(s->egr_sz); i++)
21752175		for (m = s->starving_fl[i]; m; m &= m - 1) {
21762176			struct sge_eth_rxq *rxq;
21772177			unsigned int id = __ffs(m) + i * BITS_PER_LONG;
···22592259	struct adapter *adap = (struct adapter *)data;
22602260	struct sge *s = &adap->sge;
22612261
22622262-	for (i = 0; i < ARRAY_SIZE(s->txq_maperr); i++)
22622262+	for (i = 0; i < BITS_TO_LONGS(s->egr_sz); i++)
22632263		for (m = s->txq_maperr[i]; m; m &= m - 1) {
22642264			unsigned long id = __ffs(m) + i * BITS_PER_LONG;
22652265			struct sge_ofld_txq *txq = s->egr_map[id];
···27412741	free_rspq_fl(adap, &adap->sge.intrq, NULL);
27422742
27432743	/* clear the reverse egress queue map */
27442744-	memset(adap->sge.egr_map, 0, sizeof(adap->sge.egr_map));
27442744+	memset(adap->sge.egr_map, 0,
27452745+	       adap->sge.egr_sz * sizeof(*adap->sge.egr_map));
27452746}
27462747
27472748void t4_sge_start(struct adapter *adap)
+53
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
···44594459}
44604460
44614461/**
44624462+ * t4_init_devlog_params - initialize adapter->params.devlog
44634463+ * @adap: the adapter
44644464+ *
44654465+ * Initialize various fields of the adapter's Firmware Device Log
44664466+ * Parameters structure.
44674467+ */
44684468+int t4_init_devlog_params(struct adapter *adap)
44694469+{
44704470+	struct devlog_params *dparams = &adap->params.devlog;
44714471+	u32 pf_dparams;
44724472+	unsigned int devlog_meminfo;
44734473+	struct fw_devlog_cmd devlog_cmd;
44744474+	int ret;
44754475+
44764476+	/* If we're dealing with newer firmware, the Device Log Parameters
44774477+	 * are stored in a designated register which allows us to access the
44784478+	 * Device Log even if we can't talk to the firmware.
44794479+	 */
44804480+	pf_dparams =
44814481+		t4_read_reg(adap, PCIE_FW_REG(PCIE_FW_PF_A, PCIE_FW_PF_DEVLOG));
44824482+	if (pf_dparams) {
44834483+		unsigned int nentries, nentries128;
44844484+
44854485+		dparams->memtype = PCIE_FW_PF_DEVLOG_MEMTYPE_G(pf_dparams);
44864486+		dparams->start = PCIE_FW_PF_DEVLOG_ADDR16_G(pf_dparams) << 4;
44874487+
44884488+		nentries128 = PCIE_FW_PF_DEVLOG_NENTRIES128_G(pf_dparams);
44894489+		nentries = (nentries128 + 1) * 128;
44904490+		dparams->size = nentries * sizeof(struct fw_devlog_e);
44914491+
44924492+		return 0;
44934493+	}
44944494+
44954495+	/* Otherwise, ask the firmware for its Device Log Parameters.
44964496+	 */
44974497+	memset(&devlog_cmd, 0, sizeof(devlog_cmd));
44984498+	devlog_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_DEVLOG_CMD) |
44994499+				       FW_CMD_REQUEST_F | FW_CMD_READ_F);
45004500+	devlog_cmd.retval_len16 = htonl(FW_LEN16(devlog_cmd));
45014501+	ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd),
45024502+			 &devlog_cmd);
45034503+	if (ret)
45044504+		return ret;
45054505+
45064506+	devlog_meminfo = ntohl(devlog_cmd.memtype_devlog_memaddr16_devlog);
45074507+	dparams->memtype = FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(devlog_meminfo);
45084508+	dparams->start = FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(devlog_meminfo) << 4;
45094509+	dparams->size = ntohl(devlog_cmd.memsize_devlog);
45104510+
45114511+	return 0;
45124512+}
45134513+
45144514+/**
44624515 * t4_init_sge_params - initialize adap->params.sge
44634516 * @adapter: the adapter
44644517 *
···101101	FW_RI_BIND_MW_WR = 0x18,
102102	FW_RI_FR_NSMR_WR = 0x19,
103103	FW_RI_INV_LSTAG_WR = 0x1a,
104104-	FW_LASTC2E_WR = 0x40
104104+	FW_LASTC2E_WR = 0x70
105105};
106106
107107struct fw_wr_hdr {
···993993	FW_MEMTYPE_CF_EXTMEM = 0x2,
994994	FW_MEMTYPE_CF_FLASH = 0x4,
995995	FW_MEMTYPE_CF_INTERNAL = 0x5,
996996+	FW_MEMTYPE_CF_EXTMEM1 = 0x6,
996997};
998999struct fw_caps_config_cmd {
···10361035	FW_PARAMS_MNEM_PFVF = 2, /* function params */
10371036	FW_PARAMS_MNEM_REG = 3, /* limited register access */
10381037	FW_PARAMS_MNEM_DMAQ = 4, /* dma queue params */
10381038+	FW_PARAMS_MNEM_CHNET = 5, /* chnet params */
10391039	FW_PARAMS_MNEM_LAST
10401040};
10411041
···31043102	FW_DEVLOG_FACILITY_FCOE = 0x2E,
31053103	FW_DEVLOG_FACILITY_FOISCSI = 0x30,
31063104	FW_DEVLOG_FACILITY_FOFCOE = 0x32,
31073107-	FW_DEVLOG_FACILITY_MAX = 0x32,
31053105+	FW_DEVLOG_FACILITY_CHNET = 0x34,
31063106+	FW_DEVLOG_FACILITY_MAX = 0x34,
31083107};
31093108
31103109/* log message format */
···31413138#define FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(x) \
31423139	(((x) >> FW_DEVLOG_CMD_MEMADDR16_DEVLOG_S) & \
31433140	 FW_DEVLOG_CMD_MEMADDR16_DEVLOG_M)
31413141+
31423142+/* P C I E   F W   P F 7   R E G I S T E R */
31433143+
31443144+/* PF7 stores the Firmware Device Log parameters which allow Host Drivers to
31453145+ * access the "devlog" without needing to contact firmware. The encoding is
31463146+ * mostly the same as that returned by the DEVLOG command except for the size
31473147+ * which is encoded here as the number of 128-entry multiples minus 1 rather
31483148+ * than the memory size as is done in the DEVLOG command. Thus, 0 means 128
31493149+ * and 15 means 2048. This of course in turn constrains the allowed values
31503150+ * for the devlog size ...
31513151+ */
31523152+#define PCIE_FW_PF_DEVLOG		7
31533153+
31543154+#define PCIE_FW_PF_DEVLOG_NENTRIES128_S	28
31553155+#define PCIE_FW_PF_DEVLOG_NENTRIES128_M	0xf
31563156+#define PCIE_FW_PF_DEVLOG_NENTRIES128_V(x) \
31573157+	((x) << PCIE_FW_PF_DEVLOG_NENTRIES128_S)
31583158+#define PCIE_FW_PF_DEVLOG_NENTRIES128_G(x) \
31593159+	(((x) >> PCIE_FW_PF_DEVLOG_NENTRIES128_S) & \
31603160+	 PCIE_FW_PF_DEVLOG_NENTRIES128_M)
31613161+
31623162+#define PCIE_FW_PF_DEVLOG_ADDR16_S	4
31633163+#define PCIE_FW_PF_DEVLOG_ADDR16_M	0xffffff
31643164+#define PCIE_FW_PF_DEVLOG_ADDR16_V(x)	((x) << PCIE_FW_PF_DEVLOG_ADDR16_S)
31653165+#define PCIE_FW_PF_DEVLOG_ADDR16_G(x) \
31663166+	(((x) >> PCIE_FW_PF_DEVLOG_ADDR16_S) & PCIE_FW_PF_DEVLOG_ADDR16_M)
31673167+
31683168+#define PCIE_FW_PF_DEVLOG_MEMTYPE_S	0
31693169+#define PCIE_FW_PF_DEVLOG_MEMTYPE_M	0xf
31703170+#define PCIE_FW_PF_DEVLOG_MEMTYPE_V(x)	((x) << PCIE_FW_PF_DEVLOG_MEMTYPE_S)
31713171+#define PCIE_FW_PF_DEVLOG_MEMTYPE_G(x) \
31723172+	(((x) >> PCIE_FW_PF_DEVLOG_MEMTYPE_S) & PCIE_FW_PF_DEVLOG_MEMTYPE_M)
31443173
31453174#endif /* _T4FW_INTERFACE_H_ */
···210210211211 if (rpl) {212212 /* request bit in high-order BE word */213213- WARN_ON((be32_to_cpu(*(const u32 *)cmd)213213+ WARN_ON((be32_to_cpu(*(const __be32 *)cmd)214214 & FW_CMD_REQUEST_F) == 0);215215 get_mbox_rpl(adapter, rpl, size, mbox_data);216216- WARN_ON((be32_to_cpu(*(u32 *)rpl)216216+ WARN_ON((be32_to_cpu(*(__be32 *)rpl)217217 & FW_CMD_REQUEST_F) != 0);218218 }219219 t4_write_reg(adapter, mbox_ctl,···484484 * o The BAR2 Queue ID.485485 * o The BAR2 Queue ID Offset into the BAR2 page.486486 */487487- bar2_page_offset = ((qid >> qpp_shift) << page_shift);487487+ bar2_page_offset = ((u64)(qid >> qpp_shift) << page_shift);488488 bar2_qid = qid & qpp_mask;489489 bar2_qid_offset = bar2_qid * SGE_UDB_SIZE;490490
+27-3
drivers/net/ethernet/freescale/fec_main.c
···19541954	struct fec_enet_private *fep = netdev_priv(ndev);19551955	struct device_node *node;19561956	int err = -ENXIO, i;19571957+	u32 mii_speed, holdtime;1957195819581959	/*19591960	 * The i.MX28 dual fec interfaces are not equal.···19921991	 * Reference Manual has an error on this, and gets fixed on i.MX6Q19931992	 * document.19941993	 */19951995-	fep->phy_speed = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 5000000);19941994+	mii_speed = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 5000000);19961995	if (fep->quirks & FEC_QUIRK_ENET_MAC)19971997-		fep->phy_speed--;19981998-	fep->phy_speed <<= 1;19961996+		mii_speed--;19971997+	if (mii_speed > 63) {19981998+		dev_err(&pdev->dev,19991999+			"fec clock (%lu) too fast to get right mii speed\n",20002000+			clk_get_rate(fep->clk_ipg));20012001+		err = -EINVAL;20022002+		goto err_out;20032003+	}20042004+20052005+	/*20062006+	 * The i.MX28 and i.MX6 types have another field in the MSCR (aka20072007+	 * MII_SPEED) register that defines the MDIO output hold time. Earlier20082008+	 * versions are RAZ there, so just ignore the difference and write the20092009+	 * register always.20102010+	 * The minimal hold time according to IEEE 802.3 (clause 22) is 10 ns.20112011+	 * HOLDTIME + 1 is the number of clk cycles the fec is holding the20122012+	 * output.20132013+	 * The HOLDTIME bitfield takes values between 0 and 7 (inclusive).20142014+	 * Given that ceil(clkrate / 5000000) <= 64, the calculation for20152015+	 * holdtime cannot result in a value greater than 3.20162016+	 */20172017+	holdtime = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 100000000) - 1;20182018+20192019+	fep->phy_speed = mii_speed << 1 | holdtime << 8;20202020+19992021	writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);2000202220012023	fep->mii_bus = mdiobus_alloc();
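The MSCR arithmetic above can be checked in isolation. The sketch below is a standalone, illustrative version for the FEC_QUIRK_ENET_MAC case only: `fec_mscr_value()` and the sample clock rates are assumptions for the example, while the field layout (MII_SPEED in bits [6:1], HOLDTIME in bits [10:8]) follows the hunk.

```c
#include <assert.h>
#include <stdint.h>

/* Kernel-style round-up division, reproduced locally. */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Illustrative MSCR value for the FEC_QUIRK_ENET_MAC case:
 * MII_SPEED (bits [6:1]) divides the ipg clock down to an MDC of at
 * most 2.5 MHz; HOLDTIME (bits [10:8]) keeps the MDIO output stable
 * for at least 10 ns (HOLDTIME + 1 clk cycles). */
static uint32_t fec_mscr_value(unsigned long clk_ipg_hz)
{
	uint32_t mii_speed = DIV_ROUND_UP(clk_ipg_hz, 5000000UL) - 1;
	uint32_t holdtime = DIV_ROUND_UP(clk_ipg_hz, 100000000UL) - 1;

	return mii_speed << 1 | holdtime << 8;
}
```

For a 66 MHz ipg clock this gives mii_speed 13 and holdtime 0; at 200 MHz, mii_speed 39 and holdtime 1, which matches the comment's claim that holdtime never exceeds 3 while mii_speed stays within its 6-bit field.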
+3
drivers/net/ethernet/freescale/ucc_geth.c
···38933893 ugeth->phy_interface = phy_interface;38943894 ugeth->max_speed = max_speed;3895389538963896+ /* Carrier starts down, phylib will bring it up */38973897+ netif_carrier_off(dev);38983898+38963899 err = register_netdev(dev);38973900 if (err) {38983901 if (netif_msg_probe(ugeth))
+1-6
drivers/net/ethernet/marvell/mvneta.c
···26582658static int mvneta_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)26592659{26602660 struct mvneta_port *pp = netdev_priv(dev);26612661- int ret;2662266126632662 if (!pp->phy_dev)26642663 return -ENOTSUPP;2665266426662666- ret = phy_mii_ioctl(pp->phy_dev, ifr, cmd);26672667- if (!ret)26682668- mvneta_adjust_link(dev);26692669-26702670- return ret;26652665+ return phy_mii_ioctl(pp->phy_dev, ifr, cmd);26712666}2672266726732668/* Ethtool methods */
+3-2
drivers/net/ethernet/mellanox/mlx4/cmd.c
···724724 * on the host, we deprecate the error message for this725725 * specific command/input_mod/opcode_mod/fw-status to be debug.726726 */727727- if (op == MLX4_CMD_SET_PORT && in_modifier == 1 &&727727+ if (op == MLX4_CMD_SET_PORT &&728728+ (in_modifier == 1 || in_modifier == 2) &&728729 op_modifier == 0 && context->fw_status == CMD_STAT_BAD_SIZE)729730 mlx4_dbg(dev, "command 0x%x failed: fw status = 0x%x\n",730731 op, context->fw_status);···19941993 goto reset_slave;19951994 slave_state[slave].vhcr_dma = ((u64) param) << 48;19961995 priv->mfunc.master.slave_state[slave].cookie = 0;19971997- mutex_init(&priv->mfunc.master.gen_eqe_mutex[slave]);19981996 break;19991997 case MLX4_COMM_CMD_VHCR1:20001998 if (slave_state[slave].last_cmd != MLX4_COMM_CMD_VHCR0)···22252225 for (i = 0; i < dev->num_slaves; ++i) {22262226 s_state = &priv->mfunc.master.slave_state[i];22272227 s_state->last_cmd = MLX4_COMM_CMD_RESET;22282228+ mutex_init(&priv->mfunc.master.gen_eqe_mutex[i]);22282229 for (j = 0; j < MLX4_EVENT_TYPES_NUM; ++j)22292230 s_state->event_eq[j].eqn = -1;22302231 __raw_writel((__force u32) 0,
+8-7
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
···28052805 netif_carrier_off(dev);28062806 mlx4_en_set_default_moderation(priv);2807280728082808- err = register_netdev(dev);28092809- if (err) {28102810- en_err(priv, "Netdev registration failed for port %d\n", port);28112811- goto out;28122812- }28132813- priv->registered = 1;28142814-28152808 en_warn(priv, "Using %d TX rings\n", prof->tx_ring_num);28162809 en_warn(priv, "Using %d RX rings\n", prof->rx_ring_num);28172810···28452852 SERVICE_TASK_DELAY);2846285328472854 mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap);28552855+28562856+ err = register_netdev(dev);28572857+ if (err) {28582858+ en_err(priv, "Netdev registration failed for port %d\n", port);28592859+ goto out;28602860+ }28612861+28622862+ priv->registered = 1;2848286328492864 return 0;28502865
+7-11
drivers/net/ethernet/mellanox/mlx4/eq.c
···153153154154 /* All active slaves need to receive the event */155155 if (slave == ALL_SLAVES) {156156- for (i = 0; i < dev->num_slaves; i++) {157157- if (i != dev->caps.function &&158158- master->slave_state[i].active)159159- if (mlx4_GEN_EQE(dev, i, eqe))160160- mlx4_warn(dev, "Failed to generate event for slave %d\n",161161- i);156156+ for (i = 0; i <= dev->persist->num_vfs; i++) {157157+ if (mlx4_GEN_EQE(dev, i, eqe))158158+ mlx4_warn(dev, "Failed to generate event for slave %d\n",159159+ i);162160 }163161 } else {164162 if (mlx4_GEN_EQE(dev, slave, eqe))···201203 struct mlx4_eqe *eqe)202204{203205 struct mlx4_priv *priv = mlx4_priv(dev);204204- struct mlx4_slave_state *s_slave =205205- &priv->mfunc.master.slave_state[slave];206206207207- if (!s_slave->active) {208208- /*mlx4_warn(dev, "Trying to pass event to inactive slave\n");*/207207+ if (slave < 0 || slave > dev->persist->num_vfs ||208208+ slave == dev->caps.function ||209209+ !priv->mfunc.master.slave_state[slave].active)209210 return;210210- }211211212212 slave_event(dev, slave, eqe);213213}
···30953095 if (!priv->mfunc.master.slave_state)30963096 return -EINVAL;3097309730983098+ /* check for slave valid, slave not PF, and slave active */30993099+ if (slave < 0 || slave > dev->persist->num_vfs ||31003100+ slave == dev->caps.function ||31013101+ !priv->mfunc.master.slave_state[slave].active)31023102+ return 0;31033103+30983104 event_eq = &priv->mfunc.master.slave_state[slave].event_eq[eqe->type];3099310531003106 /* Create the event only if the slave is registered */
+7-1
drivers/net/ethernet/rocker/rocker.c
···44684468 struct net_device *master = netdev_master_upper_dev_get(dev);44694469 int err = 0;4470447044714471+ /* There are currently three cases handled here:44724472+ * 1. Joining a bridge44734473+ * 2. Leaving a previously joined bridge44744474+ * 3. Other, e.g. being added to or removed from a bond or openvswitch,44754475+ * in which case nothing is done44764476+ */44714477 if (master && master->rtnl_link_ops &&44724478 !strcmp(master->rtnl_link_ops->kind, "bridge"))44734479 err = rocker_port_bridge_join(rocker_port, master);44744474- else44804480+ else if (rocker_port_is_bridged(rocker_port))44754481 err = rocker_port_bridge_leave(rocker_port);4476448244774483 return err;
···1172117211731173 /* return skb */11741174 ctx->tx_curr_skb = NULL;11751175- dev->net->stats.tx_packets += ctx->tx_curr_frame_num;1176117511771176 /* keep private stats: framing overhead and number of NTBs */11781177 ctx->tx_overhead += skb_out->len - ctx->tx_curr_frame_payload;11791178 ctx->tx_ntbs++;1180117911811181- /* usbnet has already counted all the framing overhead.11801180+ /* usbnet will count all the framing overhead by default.11821181 * Adjust the stats so that the tx_bytes counter show real11831182 * payload data instead.11841183 */11851185- dev->net->stats.tx_bytes -= skb_out->len - ctx->tx_curr_frame_payload;11841184+ usbnet_set_skb_tx_stats(skb_out, n,11851185+ ctx->tx_curr_frame_payload - skb_out->len);1186118611871187 return skb_out;11881188
+2
drivers/net/usb/r8152.c
···492492/* Define these values to match your device */493493#define VENDOR_ID_REALTEK 0x0bda494494#define VENDOR_ID_SAMSUNG 0x04e8495495+#define VENDOR_ID_LENOVO 0x17ef495496496497#define MCU_TYPE_PLA 0x0100497498#define MCU_TYPE_USB 0x0000···40384037 {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8152)},40394038 {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8153)},40404039 {REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101)},40404040+ {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)},40414041 {}40424042};40434043
···126126 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_MCHAN, "mchan");127127 if (drvr->bus_if->wowl_supported)128128 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_WOWL, "wowl");129129- brcmf_feat_iovar_int_set(ifp, BRCMF_FEAT_MBSS, "mbss", 0);129129+ if (drvr->bus_if->chip != BRCM_CC_43362_CHIP_ID)130130+ brcmf_feat_iovar_int_set(ifp, BRCMF_FEAT_MBSS, "mbss", 0);130131131132 /* set chip related quirks */132133 switch (drvr->bus_if->chip) {
-1
drivers/net/wireless/iwlwifi/dvm/dev.h
···708708 unsigned long reload_jiffies;709709 int reload_count;710710 bool ucode_loaded;711711- bool init_ucode_run; /* Don't run init uCode again */712711713712 u8 plcp_delta_threshold;714713
···11241124 /*This is for new trx flow*/11251125 struct rtl_tx_buffer_desc *pbuffer_desc = NULL;11261126 u8 temp_one = 1;11271127+ u8 *entry;1127112811281129 memset(&tcb_desc, 0, sizeof(struct rtl_tcb_desc));11291130 ring = &rtlpci->tx_ring[BEACON_QUEUE];11301131 pskb = __skb_dequeue(&ring->queue);11311131- if (pskb)11321132+ if (rtlpriv->use_new_trx_flow)11331133+ entry = (u8 *)(&ring->buffer_desc[ring->idx]);11341134+ else11351135+ entry = (u8 *)(&ring->desc[ring->idx]);11361136+ if (pskb) {11371137+ pci_unmap_single(rtlpci->pdev,11381138+ rtlpriv->cfg->ops->get_desc(11391139+ (u8 *)entry, true, HW_DESC_TXBUFF_ADDR),11401140+ pskb->len, PCI_DMA_TODEVICE);11321141 kfree_skb(pskb);11421142+ }1133114311341144 /*NB: the beacon data buffer must be 32-bit aligned. */11351145 pskb = ieee80211_beacon_get(hw, mac->vif);
+1-4
drivers/net/xen-netfront.c
···1008100810091009static int xennet_change_mtu(struct net_device *dev, int mtu)10101010{10111011- int max = xennet_can_sg(dev) ?10121012- XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER : ETH_DATA_LEN;10111011+ int max = xennet_can_sg(dev) ? XEN_NETIF_MAX_TX_SIZE : ETH_DATA_LEN;1013101210141013 if (mtu > max)10151014 return -EINVAL;···1277127812781279 netdev->ethtool_ops = &xennet_ethtool_ops;12791280 SET_NETDEV_DEV(netdev, &dev->dev);12801280-12811281- netif_set_gso_max_size(netdev, XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER);1282128112831282 np->netdev = netdev;12841283
+8-3
drivers/of/address.c
···450450 return NULL;451451}452452453453-static int of_empty_ranges_quirk(void)453453+static int of_empty_ranges_quirk(struct device_node *np)454454{455455 if (IS_ENABLED(CONFIG_PPC)) {456456- /* To save cycles, we cache the result */456456+ /* To save cycles, we cache the result for global "Mac" setting */457457 static int quirk_state = -1;458458459459+ /* PA-SEMI sdc DT bug */460460+ if (of_device_is_compatible(np, "1682m-sdc"))461461+ return true;462462+463463+ /* Make quirk cached */459464 if (quirk_state < 0)460465 quirk_state =461466 of_machine_is_compatible("Power Macintosh") ||···495490 * This code is only enabled on powerpc. --gcl496491 */497492 ranges = of_get_property(parent, rprop, &rlen);498498- if (ranges == NULL && !of_empty_ranges_quirk()) {493493+ if (ranges == NULL && !of_empty_ranges_quirk(parent)) {499494 pr_debug("OF: no ranges; cannot translate\n");500495 return 1;501496 }
+1
drivers/staging/iio/Kconfig
···3838config IIO_SIMPLE_DUMMY_BUFFER3939 bool "Buffered capture support"4040 select IIO_BUFFER4141+ select IIO_TRIGGER4142 select IIO_KFIFO_BUF4243 help4344 Add buffered data capture to the simple dummy driver.
···387387 status = PORT_PLC;388388 port_change_bit = "link state";389389 break;390390+ case USB_PORT_FEAT_C_PORT_CONFIG_ERROR:391391+ status = PORT_CEC;392392+ port_change_bit = "config error";393393+ break;390394 default:391395 /* Should never happen */392396 return;···592588 status |= USB_PORT_STAT_C_LINK_STATE << 16;593589 if ((raw_port_status & PORT_WRC))594590 status |= USB_PORT_STAT_C_BH_RESET << 16;591591+ if ((raw_port_status & PORT_CEC))592592+ status |= USB_PORT_STAT_C_CONFIG_ERROR << 16;595593 }596594597595 if (hcd->speed != HCD_USB3) {···10111005 case USB_PORT_FEAT_C_OVER_CURRENT:10121006 case USB_PORT_FEAT_C_ENABLE:10131007 case USB_PORT_FEAT_C_PORT_LINK_STATE:10081008+ case USB_PORT_FEAT_C_PORT_CONFIG_ERROR:10141009 xhci_clear_port_change_bit(xhci, wValue, wIndex,10151010 port_array[wIndex], temp);10161011 break;···10761069 */10771070 status = bus_state->resuming_ports;1078107110791079- mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC;10721072+ mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC;1080107310811074 spin_lock_irqsave(&xhci->lock, flags);10821075 /* For each port, did anything change? If so, set that bit in buf. */
···6161/* For Xircom PGSDB9 and older Entrega version of the same device */6262#define XIRCOM_VENDOR_ID 0x085a6363#define XIRCOM_FAKE_ID 0x80276464+#define XIRCOM_FAKE_ID_2 0x8025 /* "PGMFHUB" serial */6465#define ENTREGA_VENDOR_ID 0x16456566#define ENTREGA_FAKE_ID 0x80936667···7170#endif7271#ifdef XIRCOM7372 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) },7373+ { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) },7474 { USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) },7575#endif7676 { USB_DEVICE(KEYSPAN_VENDOR_ID, KEYSPAN_PDA_ID) },···9593#ifdef XIRCOM9694static const struct usb_device_id id_table_fake_xircom[] = {9795 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) },9696+ { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) },9897 { USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) },9998 { }10099};
+17
drivers/xen/Kconfig
···55555656	  In that case step 3 should be omitted.57575858+config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT5959+	int "Hotplugged memory limit (in GiB) for a PV guest"6060+	default 512 if X86_646161+	default 4 if X86_326262+	range 0 64 if X86_326363+	depends on XEN_HAVE_PVMMU6464+	depends on XEN_BALLOON_MEMORY_HOTPLUG6565+	help6666+	  Maximum amount of memory (in GiB) that a PV guest can be6767+	  expanded to when using memory hotplug.6868+6969+	  A PV guest can have more memory than this limit if it is7070+	  started with a larger maximum.7171+7272+	  This value is used to allocate enough space in internal7373+	  tables needed for physical memory administration.7474+5875config XEN_SCRUB_PAGES5976	bool "Scrub pages before returning them to system"6077	depends on XEN_BALLOON
+23
drivers/xen/balloon.c
···229229 balloon_hotplug = round_up(balloon_hotplug, PAGES_PER_SECTION);230230 nid = memory_add_physaddr_to_nid(hotplug_start_paddr);231231232232+#ifdef CONFIG_XEN_HAVE_PVMMU233233+ /*234234+ * add_memory() will build page tables for the new memory so235235+ * the p2m must contain invalid entries so the correct236236+ * non-present PTEs will be written.237237+ *238238+ * If a failure occurs, the original (identity) p2m entries239239+ * are not restored since this region is now known not to240240+ * conflict with any devices.241241+ */ 242242+ if (!xen_feature(XENFEAT_auto_translated_physmap)) {243243+ unsigned long pfn, i;244244+245245+ pfn = PFN_DOWN(hotplug_start_paddr);246246+ for (i = 0; i < balloon_hotplug; i++) {247247+ if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) {248248+ pr_warn("set_phys_to_machine() failed, no memory added\n");249249+ return BP_ECANCELED;250250+ }251251+ }252252+ }253253+#endif254254+232255 rc = add_memory(nid, hotplug_start_paddr, balloon_hotplug << PAGE_SHIFT);233256234257 if (rc) {
+5-1
fs/cifs/cifsencrypt.c
···11/*22 * fs/cifs/cifsencrypt.c33 *44+ * Encryption and hashing operations relating to NTLM, NTLMv2. See MS-NLMP55+ * for more detailed information66+ *47 * Copyright (C) International Business Machines Corp., 2005,201358 * Author(s): Steve French (sfrench@us.ibm.com)69 *···518515 __func__);519516 return rc;520517 }521521- } else if (ses->serverName) {518518+ } else {519519+ /* We use ses->serverName if no domain name available */522520 len = strlen(ses->serverName);523521524522 server = kmalloc(2 + (len * 2), GFP_KERNEL);
+11-2
fs/cifs/connect.c
···15991599 pr_warn("CIFS: username too long\n");16001600 goto cifs_parse_mount_err;16011601 }16021602+16031603+ kfree(vol->username);16021604 vol->username = kstrdup(string, GFP_KERNEL);16031605 if (!vol->username)16041606 goto cifs_parse_mount_err;···17021700 goto cifs_parse_mount_err;17031701 }1704170217031703+ kfree(vol->domainname);17051704 vol->domainname = kstrdup(string, GFP_KERNEL);17061705 if (!vol->domainname) {17071706 pr_warn("CIFS: no memory for domainname\n");···17341731 }1735173217361733 if (strncasecmp(string, "default", 7) != 0) {17341734+ kfree(vol->iocharset);17371735 vol->iocharset = kstrdup(string,17381736 GFP_KERNEL);17391737 if (!vol->iocharset) {···29172913 * calling name ends in null (byte 16) from old smb29182914 * convention.29192915 */29202920- if (server->workstation_RFC1001_name &&29212921- server->workstation_RFC1001_name[0] != 0)29162916+ if (server->workstation_RFC1001_name[0] != 0)29222917 rfc1002mangle(ses_init_buf->trailer.29232918 session_req.calling_name,29242919 server->workstation_RFC1001_name,···36953692#endif /* CIFS_WEAK_PW_HASH */36963693 rc = SMBNTencrypt(tcon->password, ses->server->cryptkey,36973694 bcc_ptr, nls_codepage);36953695+ if (rc) {36963696+ cifs_dbg(FYI, "%s Can't generate NTLM rsp. Error: %d\n",36973697+ __func__, rc);36983698+ cifs_buf_release(smb_buffer);36993699+ return rc;37003700+ }3698370136993702 bcc_ptr += CIFS_AUTH_RESP_SIZE;37003703 if (ses->capabilities & CAP_UNICODE) {
···322322323323 /* return pointer to beginning of data area, ie offset from SMB start */324324 if ((*off != 0) && (*len != 0))325325- return hdr->ProtocolId + *off;325325+ return (char *)(&hdr->ProtocolId[0]) + *off;326326 else327327 return NULL;328328}
+2-1
fs/cifs/smb2ops.c
···684684685685 /* No need to change MaxChunks since already set to 1 */686686 chunk_sizes_updated = true;687687- }687687+ } else688688+ goto cchunk_out;688689 }689690690691cchunk_out:
+10-7
fs/cifs/smb2pdu.c
···12181218 struct smb2_ioctl_req *req;12191219 struct smb2_ioctl_rsp *rsp;12201220 struct TCP_Server_Info *server;12211221- struct cifs_ses *ses = tcon->ses;12211221+ struct cifs_ses *ses;12221222 struct kvec iov[2];12231223 int resp_buftype;12241224 int num_iovecs;···12321232 /* zero out returned data len, in case of error */12331233 if (plen)12341234 *plen = 0;12351235+12361236+ if (tcon)12371237+ ses = tcon->ses;12381238+ else12391239+ return -EIO;1235124012361241 if (ses && (ses->server))12371242 server = ses->server;···13011296 rsp = (struct smb2_ioctl_rsp *)iov[0].iov_base;1302129713031298 if ((rc != 0) && (rc != -EINVAL)) {13041304- if (tcon)13051305- cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);12991299+ cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);13061300 goto ioctl_exit;13071301 } else if (rc == -EINVAL) {13081302 if ((opcode != FSCTL_SRV_COPYCHUNK_WRITE) &&13091303 (opcode != FSCTL_SRV_COPYCHUNK)) {13101310- if (tcon)13111311- cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);13041304+ cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE);13121305 goto ioctl_exit;13131306 }13141307 }···1632162916331630 rc = SendReceive2(xid, ses, iov, 1, &resp_buftype, 0);1634163116351635- if ((rc != 0) && tcon)16321632+ if (rc != 0)16361633 cifs_stats_fail_inc(tcon, SMB2_FLUSH_HE);1637163416381635 free_rsp_buf(resp_buftype, iov[0].iov_base);···21172114 struct kvec iov[2];21182115 int rc = 0;21192116 int len;21202120- int resp_buftype;21172117+ int resp_buftype = CIFS_NO_BUFFER;21212118 unsigned char *bufptr;21222119 struct TCP_Server_Info *server;21232120 struct cifs_ses *ses = tcon->ses;
+83-10
fs/fs-writeback.c
···5353	struct completion *done;	/* set if the caller waits */5454};55555656+/*5757+ * If an inode is constantly having its pages dirtied, but then the5858+ * updates stop dirtytime_expire_interval seconds in the past, it's5959+ * possible for the worst case time between when an inode has its6060+ * timestamps updated and when they finally get written out to be two6161+ * dirtytime_expire_intervals.  We set the default to 12 hours (in6262+ * seconds), which means most of the time inodes will have their6363+ * timestamps written to disk after 12 hours, but in the worst case a6464+ * few inodes might not have their timestamps updated for 24 hours.6565+ */6666+unsigned int dirtytime_expire_interval = 12 * 60 * 60;6767+5668/**5769 * writeback_in_progress - determine whether there is writeback in progress5870 * @bdi: the device's backing_dev_info structure.···287275288276	if ((flags & EXPIRE_DIRTY_ATIME) == 0)289277		older_than_this = work->older_than_this;290290-	else if ((work->reason == WB_REASON_SYNC) == 0) {291291-		expire_time = jiffies - (HZ * 86400);278278+	else if (!work->for_sync) {279279+		expire_time = jiffies - (dirtytime_expire_interval * HZ);292280		older_than_this = &expire_time;293281	}294282	while (!list_empty(delaying_queue)) {···470458		 */471459		redirty_tail(inode, wb);472460	} else if (inode->i_state & I_DIRTY_TIME) {461461+		inode->dirtied_when = jiffies;473462		list_move(&inode->i_wb_list, &wb->b_dirty_time);474463	} else {475464		/* The inode is clean. Remove from writeback lists. 
*/···518505	spin_lock(&inode->i_lock);519506520507	dirty = inode->i_state & I_DIRTY;521521-	if (((dirty & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) &&522522-	     (inode->i_state & I_DIRTY_TIME)) ||523523-	    (inode->i_state & I_DIRTY_TIME_EXPIRED)) {524524-		dirty |= I_DIRTY_TIME | I_DIRTY_TIME_EXPIRED;525525-		trace_writeback_lazytime(inode);526526-	}508508+	if (inode->i_state & I_DIRTY_TIME) {509509+		if ((dirty & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) ||510510+		    unlikely(inode->i_state & I_DIRTY_TIME_EXPIRED) ||511511+		    unlikely(time_after(jiffies,512512+					(inode->dirtied_time_when +513513+					 dirtytime_expire_interval * HZ)))) {514514+			dirty |= I_DIRTY_TIME | I_DIRTY_TIME_EXPIRED;515515+			trace_writeback_lazytime(inode);516516+		}517517+	} else518518+		inode->i_state &= ~I_DIRTY_TIME_EXPIRED;527519	inode->i_state &= ~dirty;528520529521	/*···11491131	rcu_read_unlock();11501132}1151113311341134+/*11351135+ * Wake up bdi's periodically to make sure dirtytime inodes get11361136+ * written back periodically.  We deliberately do *not* check the11371137+ * b_dirtytime list in wb_has_dirty_io(), since this would cause the11381138+ * kernel to be constantly waking up once there are any dirtytime11391139+ * inodes on the system.  So instead we define a separate delayed work11401140+ * function which gets called much more rarely.  (By default, only11411141+ * once every 12 hours.)11421142+ *11431143+ * If there is any other write activity going on in the file system,11441144+ * this function won't be necessary. 
But if the only thing that has11451145+ * happened on the file system is a dirtytime inode caused by an atime11461146+ * update, we need this infrastructure below to make sure that inode11471147+ * eventually gets pushed out to disk.11481148+ */11491149+static void wakeup_dirtytime_writeback(struct work_struct *w);11501150+static DECLARE_DELAYED_WORK(dirtytime_work, wakeup_dirtytime_writeback);11511151+11521152+static void wakeup_dirtytime_writeback(struct work_struct *w)11531153+{11541154+ struct backing_dev_info *bdi;11551155+11561156+ rcu_read_lock();11571157+ list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) {11581158+ if (list_empty(&bdi->wb.b_dirty_time))11591159+ continue;11601160+ bdi_wakeup_thread(bdi);11611161+ }11621162+ rcu_read_unlock();11631163+ schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ);11641164+}11651165+11661166+static int __init start_dirtytime_writeback(void)11671167+{11681168+ schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ);11691169+ return 0;11701170+}11711171+__initcall(start_dirtytime_writeback);11721172+11731173+int dirtytime_interval_handler(struct ctl_table *table, int write,11741174+ void __user *buffer, size_t *lenp, loff_t *ppos)11751175+{11761176+ int ret;11771177+11781178+ ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);11791179+ if (ret == 0 && write)11801180+ mod_delayed_work(system_wq, &dirtytime_work, 0);11811181+ return ret;11821182+}11831183+11521184static noinline void block_dump___mark_inode_dirty(struct inode *inode)11531185{11541186 if (inode->i_ino || strcmp(inode->i_sb->s_id, "bdev")) {···13371269 }1338127013391271 inode->dirtied_when = jiffies;13401340- list_move(&inode->i_wb_list, dirtytime ?13411341- &bdi->wb.b_dirty_time : &bdi->wb.b_dirty);12721272+ if (dirtytime)12731273+ inode->dirtied_time_when = jiffies;12741274+ if (inode->i_state & (I_DIRTY_INODE | I_DIRTY_PAGES))12751275+ list_move(&inode->i_wb_list, &bdi->wb.b_dirty);12761276+ else12771277+ 
list_move(&inode->i_wb_list,12781278+ &bdi->wb.b_dirty_time);13421279 spin_unlock(&bdi->wb.list_lock);13431280 trace_writeback_dirty_inode_enqueue(inode);13441281
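The expiry condition the hunks above add to `__writeback_single_inode()` boils down to one comparison against `dirtied_time_when`. Here is a toy userspace model of just that check: the names are borrowed from the kernel, but the jiffies arithmetic is simplified (plain unsigned longs, no `time_after()` wraparound handling), so treat it as a sketch of the logic, not the kernel's implementation.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for kernel quantities. */
#define HZ 100
static unsigned int dirtytime_expire_interval = 12 * 60 * 60;

/* A lazytime inode's timestamps are forced out once dirtied_time_when
 * is more than dirtytime_expire_interval seconds in the past. */
static bool dirtytime_expired(unsigned long now_jiffies,
			      unsigned long dirtied_time_when)
{
	return now_jiffies - dirtied_time_when >
	       (unsigned long)dirtytime_expire_interval * HZ;
}
```

Together with the separate 12-hour delayed work that wakes the bdi threads, this bounds the worst-case lag between an atime update and its hitting disk at two intervals (24 hours by default), as the comment at the top of the file explains.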
+2-3
fs/locks.c
···13881388int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)13891389{13901390 int error = 0;13911391- struct file_lock *new_fl;13921391 struct file_lock_context *ctx = inode->i_flctx;13931393- struct file_lock *fl;13921392+ struct file_lock *new_fl, *fl, *tmp;13941393 unsigned long break_time;13951394 int want_write = (mode & O_ACCMODE) != O_RDONLY;13961395 LIST_HEAD(dispose);···14191420 break_time++; /* so that 0 means no break time */14201421 }1421142214221422- list_for_each_entry(fl, &ctx->flc_lease, fl_list) {14231423+ list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) {14231424 if (!leases_conflict(fl, new_fl))14241425 continue;14251426 if (want_write) {
···15621562 p = xdr_decode_hyper(p, &lgp->lg_seg.offset);15631563 p = xdr_decode_hyper(p, &lgp->lg_seg.length);15641564 p = xdr_decode_hyper(p, &lgp->lg_minlength);15651565- nfsd4_decode_stateid(argp, &lgp->lg_sid);15651565+15661566+ status = nfsd4_decode_stateid(argp, &lgp->lg_sid);15671567+ if (status)15681568+ return status;15691569+15661570 READ_BUF(4);15671571 lgp->lg_maxcount = be32_to_cpup(p++);15681572···15841580 p = xdr_decode_hyper(p, &lcp->lc_seg.offset);15851581 p = xdr_decode_hyper(p, &lcp->lc_seg.length);15861582 lcp->lc_reclaim = be32_to_cpup(p++);15871587- nfsd4_decode_stateid(argp, &lcp->lc_sid);15831583+15841584+ status = nfsd4_decode_stateid(argp, &lcp->lc_sid);15851585+ if (status)15861586+ return status;15871587+15881588 READ_BUF(4);15891589 lcp->lc_newoffset = be32_to_cpup(p++);15901590 if (lcp->lc_newoffset) {···16361628 READ_BUF(16);16371629 p = xdr_decode_hyper(p, &lrp->lr_seg.offset);16381630 p = xdr_decode_hyper(p, &lrp->lr_seg.length);16391639- nfsd4_decode_stateid(argp, &lrp->lr_sid);16311631+16321632+ status = nfsd4_decode_stateid(argp, &lrp->lr_sid);16331633+ if (status)16341634+ return status;16351635+16401636 READ_BUF(4);16411637 lrp->lrf_body_len = be32_to_cpup(p++);16421638 if (lrp->lrf_body_len > 0) {···41354123 return nfserr_resource;41364124 *p++ = cpu_to_be32(lrp->lrs_present);41374125 if (lrp->lrs_present)41384138- nfsd4_encode_stateid(xdr, &lrp->lr_sid);41264126+ return nfsd4_encode_stateid(xdr, &lrp->lr_sid);41394127 return nfs_ok;41404128}41414129#endif /* CONFIG_NFSD_PNFS */
+5-1
fs/nfsd/nfscache.c
···165165{166166 unsigned int hashsize;167167 unsigned int i;168168+ int status = 0;168169169170 max_drc_entries = nfsd_cache_size_limit();170171 atomic_set(&num_drc_entries, 0);171172 hashsize = nfsd_hashsize(max_drc_entries);172173 maskbits = ilog2(hashsize);173174174174- register_shrinker(&nfsd_reply_cache_shrinker);175175+ status = register_shrinker(&nfsd_reply_cache_shrinker);176176+ if (status)177177+ return status;178178+175179 drc_slab = kmem_cache_create("nfsd_drc", sizeof(struct svc_cacherep),176180 0, 0, NULL);177181 if (!drc_slab)
+1
include/linux/fs.h
···604604 struct mutex i_mutex;605605606606 unsigned long dirtied_when; /* jiffies of first dirtying */607607+ unsigned long dirtied_time_when;607608608609 struct hlist_node i_hash;609610 struct list_head i_wb_list; /* backing dev IO list */
···44#include <linux/compiler.h>5566unsigned long lcm(unsigned long a, unsigned long b) __attribute_const__;77+unsigned long lcm_not_zero(unsigned long a, unsigned long b) __attribute_const__;7889#endif /* _LCM_H */
+6
include/linux/netdevice.h
···21852185void synchronize_net(void);21862186int init_dummy_netdev(struct net_device *dev);2187218721882188+DECLARE_PER_CPU(int, xmit_recursion);21892189+static inline int dev_recursion_level(void)21902190+{21912191+ return this_cpu_read(xmit_recursion);21922192+}21932193+21882194struct net_device *dev_get_by_index(struct net *net, int ifindex);21892195struct net_device *__dev_get_by_index(struct net *net, int ifindex);21902196struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex);
···227227 struct urb *urb;228228 struct usbnet *dev;229229 enum skb_state state;230230- size_t length;230230+ long length;231231+ unsigned long packets;231232};233233+234234+/* Drivers that set FLAG_MULTI_PACKET must call this in their235235+ * tx_fixup method before returning an skb.236236+ */237237+static inline void238238+usbnet_set_skb_tx_stats(struct sk_buff *skb,239239+ unsigned long packets, long bytes_delta)240240+{241241+ struct skb_data *entry = (struct skb_data *) skb->cb;242242+243243+ entry->packets = packets;244244+ entry->length = bytes_delta;245245+}232246233247extern int usbnet_open(struct net_device *net);234248extern int usbnet_stop(struct net_device *net);
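The usbnet.h hunk above works because `skb->cb` gives each skb 48 bytes of scratch space that survives until the completion handler runs. The sketch below models that pattern in userspace; `sketch_skb`, `sketch_skb_data`, and `sketch_set_tx_stats` are stand-in names, not the kernel's types.

```c
#include <assert.h>
#include <string.h>

/* Stand-in for the 48-byte control buffer every sk_buff carries. */
struct sketch_skb {
	unsigned char cb[48];
};

/* Stand-in for skb_data: per-URB tx accounting stashed in the cb. */
struct sketch_skb_data {
	long length;		/* byte delta applied to tx_bytes; may be
				 * negative when framing overhead is removed */
	unsigned long packets;	/* packets aggregated into this URB */
};

/* Mirrors what usbnet_set_skb_tx_stats() does: overlay the accounting
 * struct on the control buffer and record both counters. */
static void sketch_set_tx_stats(struct sketch_skb *skb,
				unsigned long packets, long bytes_delta)
{
	struct sketch_skb_data *entry = (struct sketch_skb_data *)skb->cb;

	entry->packets = packets;
	entry->length = bytes_delta;
}
```

This is why the cdc_ncm hunk earlier can drop its direct `dev->net->stats` updates: a FLAG_MULTI_PACKET driver records the aggregate count and delta here, and usbnet applies them once on completion.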
+3
include/linux/writeback.h
···130130extern unsigned long vm_dirty_bytes;131131extern unsigned int dirty_writeback_interval;132132extern unsigned int dirty_expire_interval;133133+extern unsigned int dirtytime_expire_interval;133134extern int vm_highmem_is_dirtyable;134135extern int block_dump;135136extern int laptop_mode;···147146extern int dirty_bytes_handler(struct ctl_table *table, int write,148147 void __user *buffer, size_t *lenp,149148 loff_t *ppos);149149+int dirtytime_interval_handler(struct ctl_table *table, int write,150150+ void __user *buffer, size_t *lenp, loff_t *ppos);150151151152struct ctl_table;152153int dirty_writeback_centisecs_handler(struct ctl_table *, int,
-16
include/net/ip.h
···453453454454#endif455455456456-static inline int sk_mc_loop(struct sock *sk)457457-{458458- if (!sk)459459- return 1;460460- switch (sk->sk_family) {461461- case AF_INET:462462- return inet_sk(sk)->mc_loop;463463-#if IS_ENABLED(CONFIG_IPV6)464464- case AF_INET6:465465- return inet6_sk(sk)->mc_loop;466466-#endif467467- }468468- WARN_ON(1);469469- return 1;470470-}471471-472456bool ip_call_ra_chain(struct sk_buff *skb);473457474458/*
···973973 */974974#define MT_TOOL_FINGER 0975975#define MT_TOOL_PEN 1976976-#define MT_TOOL_MAX 1976976+#define MT_TOOL_PALM 2977977+#define MT_TOOL_MAX 2977978978979/*979980 * Values describing the status of a force-feedback effect
+1-1
include/uapi/linux/nfsd/export.h
···4747 * exported filesystem.4848 */4949#define NFSEXP_V4ROOT 0x100005050-#define NFSEXP_NOPNFS 0x200005050+#define NFSEXP_PNFS 0x2000051515252/* All flags that we claim to support. (Note we don't support NOACL.) */5353#define NFSEXP_ALLFLAGS 0x3FE7F
···1212 return 0;1313}1414EXPORT_SYMBOL_GPL(lcm);1515+1616+unsigned long lcm_not_zero(unsigned long a, unsigned long b)1717+{1818+ unsigned long l = lcm(a, b);1919+2020+ if (l)2121+ return l;2222+2323+ return (b ? : a);2424+}2525+EXPORT_SYMBOL_GPL(lcm_not_zero);
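The new `lcm_not_zero()` above only differs from `lcm()` in its zero handling, which is easiest to see in a self-contained re-creation. The `_sketch` suffixed functions below are userspace stand-ins; `gcd_sketch` is a plain Euclid loop in place of the kernel's `gcd()`.

```c
#include <assert.h>

/* Euclid's algorithm, standing in for the kernel's gcd(). */
static unsigned long gcd_sketch(unsigned long a, unsigned long b)
{
	while (b) {
		unsigned long t = a % b;

		a = b;
		b = t;
	}
	return a;
}

/* lcm() returns 0 when either operand is 0. */
static unsigned long lcm_sketch(unsigned long a, unsigned long b)
{
	if (a && b)
		return (a / gcd_sketch(a, b)) * b;
	return 0;
}

/* lcm_not_zero() falls back to the non-zero operand (or 0 only when
 * both are 0), which is handy when combining limits where one side
 * may be unset. */
static unsigned long lcm_not_zero_sketch(unsigned long a, unsigned long b)
{
	unsigned long l = lcm_sketch(a, b);

	return l ? l : (b ? b : a);
}
```

Note the `(b ? : a)` GNU conditional in the original hunk is spelled out as `(b ? b : a)` here for portability.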
···198198 */199199int peernet2id(struct net *net, struct net *peer)200200{201201- int id = __peernet2id(net, peer, true);201201+ bool alloc = atomic_read(&peer->count) == 0 ? false : true;202202+ int id;202203204204+ id = __peernet2id(net, peer, alloc);203205 return id >= 0 ? id : NETNSA_NSID_NOT_ASSIGNED;204206}205207EXPORT_SYMBOL(peernet2id);
@@ -653,6 +653,25 @@
         sock_reset_flag(sk, bit);
 }
 
+bool sk_mc_loop(struct sock *sk)
+{
+        if (dev_recursion_level())
+                return false;
+        if (!sk)
+                return true;
+        switch (sk->sk_family) {
+        case AF_INET:
+                return inet_sk(sk)->mc_loop;
+#if IS_ENABLED(CONFIG_IPV6)
+        case AF_INET6:
+                return inet6_sk(sk)->mc_loop;
+#endif
+        }
+        WARN_ON(1);
+        return true;
+}
+EXPORT_SYMBOL(sk_mc_loop);
+
 /*
  * This is meant for all protocols to use and covers goings on
  * at the socket level. Everything here is generic.
@@ -501,12 +501,10 @@
 #ifdef CONFIG_OF
 static int dsa_of_setup_routing_table(struct dsa_platform_data *pd,
                                       struct dsa_chip_data *cd,
-                                      int chip_index,
+                                      int chip_index, int port_index,
                                       struct device_node *link)
 {
-        int ret;
         const __be32 *reg;
-        int link_port_addr;
         int link_sw_addr;
         struct device_node *parent_sw;
         int len;
@@ -517,6 +519,10 @@
         if (!reg || (len != sizeof(*reg) * 2))
                 return -EINVAL;
 
+        /*
+         * Get the destination switch number from the second field of its 'reg'
+         * property, i.e. for "reg = <0x19 1>" sw_addr is '1'.
+         */
         link_sw_addr = be32_to_cpup(reg + 1);
 
         if (link_sw_addr >= pd->nr_chips)
@@ -537,20 +535,9 @@
                 memset(cd->rtable, -1, pd->nr_chips * sizeof(s8));
         }
 
-        reg = of_get_property(link, "reg", NULL);
-        if (!reg) {
-                ret = -EINVAL;
-                goto out;
-        }
-
-        link_port_addr = be32_to_cpup(reg);
-
-        cd->rtable[link_sw_addr] = link_port_addr;
+        cd->rtable[link_sw_addr] = port_index;
 
         return 0;
-out:
-        kfree(cd->rtable);
-        return ret;
 }
 
 static void dsa_of_free_platform_data(struct dsa_platform_data *pd)
@@ -649,7 +658,7 @@
                 if (!strcmp(port_name, "dsa") && link &&
                     pd->nr_chips > 1) {
                         ret = dsa_of_setup_routing_table(pd, cd,
-                                        chip_index, link);
+                                        chip_index, port_index, link);
                         if (ret)
                                 goto out_free_chip;
                 }
@@ -3105,10 +3105,11 @@
                 if (!first_ackt.v64)
                         first_ackt = last_ackt;
 
-                if (!(sacked & TCPCB_SACKED_ACKED))
+                if (!(sacked & TCPCB_SACKED_ACKED)) {
                         reord = min(pkts_acked, reord);
-                if (!after(scb->end_seq, tp->high_seq))
-                        flag |= FLAG_ORIG_SACK_ACKED;
+                        if (!after(scb->end_seq, tp->high_seq))
+                                flag |= FLAG_ORIG_SACK_ACKED;
+                }
         }
 
         if (sacked & TCPCB_SACKED_ACKED)
@@ -1218,7 +1218,14 @@
         if (rt)
                 rt6_set_expires(rt, jiffies + (HZ * lifetime));
         if (ra_msg->icmph.icmp6_hop_limit) {
-                in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit;
+                /* Only set hop_limit on the interface if it is higher than
+                 * the current hop_limit.
+                 */
+                if (in6_dev->cnf.hop_limit < ra_msg->icmph.icmp6_hop_limit) {
+                        in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit;
+                } else {
+                        ND_PRINTK(2, warn, "RA: Got route advertisement with lower hop_limit than current\n");
+                }
                 if (rt)
                         dst_metric_set(&rt->dst, RTAX_HOPLIMIT,
                                        ra_msg->icmph.icmp6_hop_limit);
+12 -1
net/ipv6/tcp_ipv6.c
@@ -1411,6 +1411,15 @@
         TCP_SKB_CB(skb)->sacked = 0;
 }
 
+static void tcp_v6_restore_cb(struct sk_buff *skb)
+{
+        /* We need to move header back to the beginning if xfrm6_policy_check()
+         * and tcp_v6_fill_cb() are going to be called again.
+         */
+        memmove(IP6CB(skb), &TCP_SKB_CB(skb)->header.h6,
+                sizeof(struct inet6_skb_parm));
+}
+
 static int tcp_v6_rcv(struct sk_buff *skb)
 {
         const struct tcphdr *th;
@@ -1552,6 +1543,7 @@
                         inet_twsk_deschedule(tw, &tcp_death_row);
                         inet_twsk_put(tw);
                         sk = sk2;
+                        tcp_v6_restore_cb(skb);
                         goto process;
                 }
                 /* Fall through to ACK */
@@ -1561,6 +1551,7 @@
                 tcp_v6_timewait_ack(sk, skb);
                 break;
         case TCP_TW_RST:
+                tcp_v6_restore_cb(skb);
                 goto no_tcp_socket;
         case TCP_TW_SUCCESS:
                 ;
@@ -1596,7 +1585,7 @@
                 skb->sk = sk;
                 skb->destructor = sock_edemux;
                 if (sk->sk_state != TCP_TIME_WAIT) {
-                        struct dst_entry *dst = sk->sk_rx_dst;
+                        struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst);
 
                         if (dst)
                                 dst = dst_check(dst, inet6_sk(sk)->rx_dst_cookie);
+1 -3
net/iucv/af_iucv.c
@@ -1114,10 +1114,8 @@
                                            noblock, &err);
         else
                 skb = sock_alloc_send_skb(sk, len, noblock, &err);
-        if (!skb) {
-                err = -ENOMEM;
+        if (!skb)
                 goto out;
-        }
         if (iucv->transport == AF_IUCV_TRANS_HIPER)
                 skb_reserve(skb, sizeof(struct af_iucv_trans_hdr) + ETH_HLEN);
         if (memcpy_from_msg(skb_put(skb, len), msg, len)) {
@@ -175,6 +175,7 @@
  * @reorder_lock: serializes access to reorder buffer, see below.
  * @auto_seq: used for offloaded BA sessions to automatically pick head_seq_and
  *      and ssn.
+ * @removed: this session is removed (but might have been found due to RCU)
  *
  * This structure's lifetime is managed by RCU, assignments to
  * the array holding it must hold the aggregation mutex.
@@ -200,6 +199,7 @@
         u16 timeout;
         u8 dialog_token;
         bool auto_seq;
+        bool removed;
 };
 
 /**