Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'master-2014-07-31' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next

Conflicts:
net/6lowpan/iphc.c

Minor conflicts in iphc.c were changes overlapping with some
style cleanups.

John W. Linville says:

====================
Please pull this last(?) batch of wireless changes intended for the
3.17 stream...

For the NFC bits, Samuel says:

"This is a rather quiet one, we have:

- A new driver from ST Microelectronics for their NCI ST21NFCB,
including device tree support.

- p2p support for the ST21NFCA driver

- A few fixes and enhancements for the NFC digital layer"

For the Atheros bits, Kalle says:

"Michal and Janusz did some important RX aggregation fixes, basically we
were missing RX reordering altogether. The 10.1 firmware doesn't support
Ad-Hoc mode and Michal fixed ath10k so that it doesn't advertise Ad-Hoc
support with that firmware. Also he implemented a workaround for a KVM
issue."

For the Bluetooth bits, Gustavo and Johan say:

"To quote Gustavo from his previous request:

'Some last minute fixes for -next. We have a fix for a use after free in
RFCOMM, another fix to an issue with ADV_DIRECT_IND and one for ADV_IND with
auto-connection handling. Last, we added support for reading the codec and
MWS setting for controllers that support these features.'

Additionally there are fixes to LE scanning, an update to conform to the 4.1
core specification as well as fixes for tracking the page scan state. All
of these fixes are important for 3.17."

And,

"We've got:

- 6lowpan fixes/cleanups
- A couple crash fixes, one for the Marvell HCI driver and another in LE SMP.
- Fix for an incorrect connected state check
- Fix for the bondable requirement during pairing (an issue which had
crept in because of using "pairable" when in fact the actual meaning
was "bondable"; these have different meanings in Bluetooth)"

Along with those are some late-breaking hardware support patches in
brcmfmac and b43 as well as a stray ath9k patch.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

+7546 -375
+33
Documentation/devicetree/bindings/net/nfc/st21nfcb.txt
···
+* STMicroelectronics SAS. ST21NFCB NFC Controller
+
+Required properties:
+- compatible: Should be "st,st21nfcb_i2c".
+- clock-frequency: I²C work frequency.
+- reg: address on the bus
+- interrupt-parent: phandle for the interrupt gpio controller
+- interrupts: GPIO interrupt to which the chip is connected
+- reset-gpios: Output GPIO pin used to reset the ST21NFCB
+
+Optional SoC Specific Properties:
+- pinctrl-names: Contains only one value - "default".
+- pinctrl-0: Specifies the pin control groups used for this controller.
+
+Example (for ARM-based BeagleBoard xM with ST21NFCB on I2C2):
+
+&i2c2 {
+
+	status = "okay";
+
+	st21nfcb: st21nfcb@8 {
+
+		compatible = "st,st21nfcb_i2c";
+
+		reg = <0x08>;
+		clock-frequency = <400000>;
+
+		interrupt-parent = <&gpio5>;
+		interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+
+		reset-gpios = <&gpio5 29 GPIO_ACTIVE_HIGH>;
+	};
+};
+41
Documentation/devicetree/bindings/net/wireless/brcm,bcm43xx-fmac.txt
···
+Broadcom BCM43xx Fullmac wireless SDIO devices
+
+This node provides properties for controlling the Broadcom wireless device. The
+node is expected to be specified as a child node to the SDIO controller that
+connects the device to the system.
+
+Required properties:
+
+ - compatible : Should be "brcm,bcm4329-fmac".
+
+Optional properties:
+ - brcm,drive-strength : drive strength used for SDIO pins on device in mA
+	(default = 6).
+ - interrupt-parent : the phandle for the interrupt controller to which the
+	device interrupts are connected.
+ - interrupts : specifies attributes for the out-of-band interrupt (host-wake).
+	When not specified the device will use in-band SDIO interrupts.
+ - interrupt-names : name of the out-of-band interrupt, which must be set
+	to "host-wake".
+
+Example:
+
+mmc3: mmc@01c12000 {
+	#address-cells = <1>;
+	#size-cells = <0>;
+
+	pinctrl-names = "default";
+	pinctrl-0 = <&mmc3_pins_a>;
+	vmmc-supply = <&reg_vmmc3>;
+	bus-width = <4>;
+	non-removable;
+	status = "okay";
+
+	brcmf: bcrmf@1 {
+		reg = <1>;
+		compatible = "brcm,bcm4329-fmac";
+		interrupt-parent = <&pio>;
+		interrupts = <10 8>; /* PH10 / EINT10 */
+		interrupt-names = "host-wake";
+	};
+};
+3 -1
MAINTAINERS
···
 L:	linux-bluetooth@vger.kernel.org
 S:	Maintained
 F:	net/6lowpan/
+F:	include/net/6lowpan.h
 
 6PACK NETWORK DRIVER FOR AX.25
 M:	Andreas Koensgen <ajk@comnets.uni-bremen.de>
···
 F:	drivers/net/ethernet/marvell/mvneta.*
 
 MARVELL MWIFIEX WIRELESS DRIVER
-M:	Bing Zhao <bzhao@marvell.com>
+M:	Amitkumar Karwar <akarwar@marvell.com>
+M:	Avinash Patil <patila@marvell.com>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
 F:	drivers/net/wireless/mwifiex/
+1
drivers/bcma/driver_chipcommon_pmu.c
···
 		tmp = BCMA_CC_PMU_CTL_PLL_UPD | BCMA_CC_PMU_CTL_NOILPONW;
 		break;
 
+	case BCMA_CHIP_ID_BCM43131:
 	case BCMA_CHIP_ID_BCM43217:
 	case BCMA_CHIP_ID_BCM43227:
 	case BCMA_CHIP_ID_BCM43228:
+1
drivers/bcma/host_pci.c
···
 	{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4359) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4365) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43a9) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x43aa) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, 0x4727) },
 	{ 0, },
 };
+11 -11
drivers/bcma/scan.c
···
 	{ BCMA_CORE_4706_CHIPCOMMON, "BCM4706 ChipCommon" },
 	{ BCMA_CORE_4706_SOC_RAM, "BCM4706 SOC RAM" },
 	{ BCMA_CORE_4706_MAC_GBIT, "BCM4706 GBit MAC" },
-	{ BCMA_CORE_PCIEG2, "PCIe Gen 2" },
-	{ BCMA_CORE_DMA, "DMA" },
-	{ BCMA_CORE_SDIO3, "SDIO3" },
-	{ BCMA_CORE_USB20, "USB 2.0" },
-	{ BCMA_CORE_USB30, "USB 3.0" },
-	{ BCMA_CORE_A9JTAG, "ARM Cortex A9 JTAG" },
-	{ BCMA_CORE_DDR23, "Denali DDR2/DDR3 memory controller" },
-	{ BCMA_CORE_ROM, "ROM" },
-	{ BCMA_CORE_NAND, "NAND flash controller" },
-	{ BCMA_CORE_QSPI, "SPI flash controller" },
-	{ BCMA_CORE_CHIPCOMMON_B, "Chipcommon B" },
+	{ BCMA_CORE_NS_PCIEG2, "PCIe Gen 2" },
+	{ BCMA_CORE_NS_DMA, "DMA" },
+	{ BCMA_CORE_NS_SDIO3, "SDIO3" },
+	{ BCMA_CORE_NS_USB20, "USB 2.0" },
+	{ BCMA_CORE_NS_USB30, "USB 3.0" },
+	{ BCMA_CORE_NS_A9JTAG, "ARM Cortex A9 JTAG" },
+	{ BCMA_CORE_NS_DDR23, "Denali DDR2/DDR3 memory controller" },
+	{ BCMA_CORE_NS_ROM, "ROM" },
+	{ BCMA_CORE_NS_NAND, "NAND flash controller" },
+	{ BCMA_CORE_NS_QSPI, "SPI flash controller" },
+	{ BCMA_CORE_NS_CHIPCOMMON_B, "Chipcommon B" },
 	{ BCMA_CORE_ARMCA9, "ARM Cortex A9 core (ihost)" },
 	{ BCMA_CORE_AMEMC, "AMEMC (DDR)" },
 	{ BCMA_CORE_ALTA, "ALTA (I2S)" },
+1
drivers/bcma/sprom.c
···
 		/* for these chips OTP is always available */
 		present = true;
 		break;
+	case BCMA_CHIP_ID_BCM43131:
 	case BCMA_CHIP_ID_BCM43217:
 	case BCMA_CHIP_ID_BCM43227:
 	case BCMA_CHIP_ID_BCM43228:
+5
drivers/bluetooth/btmrvl_main.c
···
 	init_waitqueue_head(&priv->main_thread.wait_q);
 	priv->main_thread.task = kthread_run(btmrvl_service_main_thread,
 				&priv->main_thread, "btmrvl_main_service");
+	if (IS_ERR(priv->main_thread.task))
+		goto err_thread;
 
 	priv->btmrvl_dev.card = card;
 	priv->btmrvl_dev.tx_dnld_rdy = true;
 
 	return priv;
+
+err_thread:
+	btmrvl_free_adapter(priv);
 
 err_adapter:
 	kfree(priv);
+18 -2
drivers/net/wireless/ath/ath10k/htc.c
···
 
 int ath10k_htc_wait_target(struct ath10k_htc *htc)
 {
-	int status = 0;
+	int i, status = 0;
 	struct ath10k_htc_svc_conn_req conn_req;
 	struct ath10k_htc_svc_conn_resp conn_resp;
 	struct ath10k_htc_msg *msg;
···
 
 	status = wait_for_completion_timeout(&htc->ctl_resp,
 					     ATH10K_HTC_WAIT_TIMEOUT_HZ);
-	if (status <= 0) {
+	if (status == 0) {
+		/* Workaround: In some cases the PCI HIF doesn't
+		 * receive interrupt for the control response message
+		 * even if the buffer was completed. It is suspected
+		 * iomap writes unmasking PCI CE irqs aren't propagated
+		 * properly in KVM PCI-passthrough sometimes.
+		 */
+		ath10k_warn("failed to receive control response completion, polling..\n");
+
+		for (i = 0; i < CE_COUNT; i++)
+			ath10k_hif_send_complete_check(htc->ar, i, 1);
+
+		status = wait_for_completion_timeout(&htc->ctl_resp,
+						     ATH10K_HTC_WAIT_TIMEOUT_HZ);
+
 		if (status == 0)
 			status = -ETIMEDOUT;
+	}
 
+	if (status < 0) {
 		ath10k_err("ctl_resp never came in (%d)\n", status);
 		return status;
 	}
+113 -9
drivers/net/wireless/ath/ath10k/htt_rx.c
···
 #include "txrx.h"
 #include "debug.h"
 #include "trace.h"
+#include "mac.h"
 
 #include <linux/log2.h>
···
 static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt,
 				   u8 **fw_desc, int *fw_desc_len,
 				   struct sk_buff **head_msdu,
-				   struct sk_buff **tail_msdu)
+				   struct sk_buff **tail_msdu,
+				   u32 *attention)
 {
 	int msdu_len, msdu_chaining = 0;
 	struct sk_buff *msdu;
···
 			break;
 		}
 
+		*attention |= __le32_to_cpu(rx_desc->attention.flags) &
+			      (RX_ATTENTION_FLAGS_TKIP_MIC_ERR |
+			       RX_ATTENTION_FLAGS_DECRYPT_ERR |
+			       RX_ATTENTION_FLAGS_FCS_ERR |
+			       RX_ATTENTION_FLAGS_MGMT_TYPE);
 		/*
 		 * Copy the FW rx descriptor for this MSDU from the rx
 		 * indication message into the MSDU's netbuf. HL uses the
···
 		for (j = 0; j < mpdu_ranges[i].mpdu_count; j++) {
 			struct sk_buff *msdu_head, *msdu_tail;
 
+			attention = 0;
 			msdu_head = NULL;
 			msdu_tail = NULL;
 			ret = ath10k_htt_rx_amsdu_pop(htt,
 						      &fw_desc,
 						      &fw_desc_len,
 						      &msdu_head,
-						      &msdu_tail);
+						      &msdu_tail,
+						      &attention);
 
 			if (ret < 0) {
 				ath10k_warn("failed to pop amsdu from htt rx ring %d\n",
···
 			rxd = container_of((void *)msdu_head->data,
 					   struct htt_rx_desc,
 					   msdu_payload);
-			attention = __le32_to_cpu(rxd->attention.flags);
 
 			if (!ath10k_htt_rx_amsdu_allowed(htt, msdu_head,
 							 status,
···
 	u8 *fw_desc;
 	int fw_desc_len, hdrlen, paramlen;
 	int trim;
+	u32 attention = 0;
 
 	fw_desc_len = __le16_to_cpu(frag->fw_rx_desc_bytes);
 	fw_desc = (u8 *)frag->fw_msdu_rx_desc;
···
 
 	spin_lock_bh(&htt->rx_ring.lock);
 	ret = ath10k_htt_rx_amsdu_pop(htt, &fw_desc, &fw_desc_len,
-				      &msdu_head, &msdu_tail);
+				      &msdu_head, &msdu_tail,
+				      &attention);
 	spin_unlock_bh(&htt->rx_ring.lock);
 
 	ath10k_dbg(ATH10K_DBG_HTT_DUMP, "htt rx frag ahead\n");
···
 
 	hdr = (struct ieee80211_hdr *)msdu_head->data;
 	rxd = (void *)msdu_head->data - sizeof(*rxd);
-	tkip_mic_err = !!(__le32_to_cpu(rxd->attention.flags) &
-			  RX_ATTENTION_FLAGS_TKIP_MIC_ERR);
-	decrypt_err = !!(__le32_to_cpu(rxd->attention.flags) &
-			  RX_ATTENTION_FLAGS_DECRYPT_ERR);
+	tkip_mic_err = !!(attention & RX_ATTENTION_FLAGS_TKIP_MIC_ERR);
+	decrypt_err = !!(attention & RX_ATTENTION_FLAGS_DECRYPT_ERR);
 	fmt = MS(__le32_to_cpu(rxd->msdu_start.info1),
 		 RX_MSDU_START_INFO1_DECAP_FORMAT);
···
 	}
 }
 
+static void ath10k_htt_rx_addba(struct ath10k *ar, struct htt_resp *resp)
+{
+	struct htt_rx_addba *ev = &resp->rx_addba;
+	struct ath10k_peer *peer;
+	struct ath10k_vif *arvif;
+	u16 info0, tid, peer_id;
+
+	info0 = __le16_to_cpu(ev->info0);
+	tid = MS(info0, HTT_RX_BA_INFO0_TID);
+	peer_id = MS(info0, HTT_RX_BA_INFO0_PEER_ID);
+
+	ath10k_dbg(ATH10K_DBG_HTT,
+		   "htt rx addba tid %hu peer_id %hu size %hhu\n",
+		   tid, peer_id, ev->window_size);
+
+	spin_lock_bh(&ar->data_lock);
+	peer = ath10k_peer_find_by_id(ar, peer_id);
+	if (!peer) {
+		ath10k_warn("received addba event for invalid peer_id: %hu\n",
+			    peer_id);
+		spin_unlock_bh(&ar->data_lock);
+		return;
+	}
+
+	arvif = ath10k_get_arvif(ar, peer->vdev_id);
+	if (!arvif) {
+		ath10k_warn("received addba event for invalid vdev_id: %u\n",
+			    peer->vdev_id);
+		spin_unlock_bh(&ar->data_lock);
+		return;
+	}
+
+	ath10k_dbg(ATH10K_DBG_HTT,
+		   "htt rx start rx ba session sta %pM tid %hu size %hhu\n",
+		   peer->addr, tid, ev->window_size);
+
+	ieee80211_start_rx_ba_session_offl(arvif->vif, peer->addr, tid);
+	spin_unlock_bh(&ar->data_lock);
+}
+
+static void ath10k_htt_rx_delba(struct ath10k *ar, struct htt_resp *resp)
+{
+	struct htt_rx_delba *ev = &resp->rx_delba;
+	struct ath10k_peer *peer;
+	struct ath10k_vif *arvif;
+	u16 info0, tid, peer_id;
+
+	info0 = __le16_to_cpu(ev->info0);
+	tid = MS(info0, HTT_RX_BA_INFO0_TID);
+	peer_id = MS(info0, HTT_RX_BA_INFO0_PEER_ID);
+
+	ath10k_dbg(ATH10K_DBG_HTT,
+		   "htt rx delba tid %hu peer_id %hu\n",
+		   tid, peer_id);
+
+	spin_lock_bh(&ar->data_lock);
+	peer = ath10k_peer_find_by_id(ar, peer_id);
+	if (!peer) {
+		ath10k_warn("received delba event for invalid peer_id: %hu\n",
+			    peer_id);
+		spin_unlock_bh(&ar->data_lock);
+		return;
+	}
+
+	arvif = ath10k_get_arvif(ar, peer->vdev_id);
+	if (!arvif) {
+		ath10k_warn("received delba event for invalid vdev_id: %u\n",
+			    peer->vdev_id);
+		spin_unlock_bh(&ar->data_lock);
+		return;
+	}
+
+	ath10k_dbg(ATH10K_DBG_HTT,
+		   "htt rx stop rx ba session sta %pM tid %hu\n",
+		   peer->addr, tid);
+
+	ieee80211_stop_rx_ba_session_offl(arvif->vif, peer->addr, tid);
+	spin_unlock_bh(&ar->data_lock);
+}
+
 void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
 {
 	struct ath10k_htt *htt = &ar->htt;
···
 		trace_ath10k_htt_stats(skb->data, skb->len);
 		break;
 	case HTT_T2H_MSG_TYPE_TX_INSPECT_IND:
+		/* Firmware can return tx frames if it's unable to fully
+		 * process them and suspects host may be able to fix it. ath10k
+		 * sends all tx frames as already inspected so this shouldn't
+		 * happen unless fw has a bug.
+		 */
+		ath10k_warn("received an unexpected htt tx inspect event\n");
+		break;
 	case HTT_T2H_MSG_TYPE_RX_ADDBA:
+		ath10k_htt_rx_addba(ar, resp);
+		break;
 	case HTT_T2H_MSG_TYPE_RX_DELBA:
-	case HTT_T2H_MSG_TYPE_RX_FLUSH:
+		ath10k_htt_rx_delba(ar, resp);
+		break;
+	case HTT_T2H_MSG_TYPE_RX_FLUSH: {
+		/* Ignore this event because mac80211 takes care of Rx
+		 * aggregation reordering.
+		 */
+		break;
+	}
 	default:
 		ath10k_dbg(ATH10K_DBG_HTT, "htt event (%d) not handled\n",
 			   resp->hdr.msg_type);
+6
drivers/net/wireless/ath/ath10k/htt_tx.c
···
 	flags1 |= HTT_DATA_TX_DESC_FLAGS1_CKSUM_L3_OFFLOAD;
 	flags1 |= HTT_DATA_TX_DESC_FLAGS1_CKSUM_L4_OFFLOAD;
 
+	/* Prevent firmware from sending up tx inspection requests. There's
+	 * nothing ath10k can do with frames requested for inspection so force
+	 * it to simply rely on a regular tx completion with discard status.
+	 */
+	flags1 |= HTT_DATA_TX_DESC_FLAGS1_POSTPONED;
+
 	skb_cb->htt.txbuf->cmd_hdr.msg_type = HTT_H2T_MSG_TYPE_TX_FRM;
 	skb_cb->htt.txbuf->cmd_tx.flags0 = flags0;
 	skb_cb->htt.txbuf->cmd_tx.flags1 = __cpu_to_le16(flags1);
+67 -31
drivers/net/wireless/ath/ath10k/mac.c
···
 	return 0;
 }
 
-/*
- * Frames sent to the FW have to be in "Native Wifi" format.
- * Strip the QoS field from the 802.11 header.
+/* HTT Tx uses Native Wifi tx mode which expects 802.11 frames without QoS
+ * Control in the header.
  */
-static void ath10k_tx_h_qos_workaround(struct ieee80211_hw *hw,
-				       struct ieee80211_tx_control *control,
-				       struct sk_buff *skb)
+static void ath10k_tx_h_nwifi(struct ieee80211_hw *hw, struct sk_buff *skb)
 {
 	struct ieee80211_hdr *hdr = (void *)skb->data;
+	struct ath10k_skb_cb *cb = ATH10K_SKB_CB(skb);
 	u8 *qos_ctl;
 
 	if (!ieee80211_is_data_qos(hdr->frame_control))
···
 	memmove(skb->data + IEEE80211_QOS_CTL_LEN,
 		skb->data, (void *)qos_ctl - (void *)skb->data);
 	skb_pull(skb, IEEE80211_QOS_CTL_LEN);
+
+	/* Fw/Hw generates a corrupted QoS Control Field for QoS NullFunc
+	 * frames. Powersave is handled by the fw/hw so QoS NullFunc frames are
+	 * used only for CQM purposes (e.g. hostapd station keepalive ping) so
+	 * it is safe to downgrade to NullFunc.
+	 */
+	if (ieee80211_is_qos_nullfunc(hdr->frame_control)) {
+		hdr->frame_control &= ~__cpu_to_le16(IEEE80211_STYPE_QOS_DATA);
+		cb->htt.tid = HTT_DATA_TX_EXT_TID_NON_QOS_MCAST_BCAST;
+	}
 }
 
 static void ath10k_tx_wep_key_work(struct work_struct *work)
···
 	mutex_unlock(&arvif->ar->conf_mutex);
 }
 
-static void ath10k_tx_h_update_wep_key(struct sk_buff *skb)
+static void ath10k_tx_h_update_wep_key(struct ieee80211_vif *vif,
+				       struct ieee80211_key_conf *key,
+				       struct sk_buff *skb)
 {
-	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
-	struct ieee80211_vif *vif = info->control.vif;
 	struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
 	struct ath10k *ar = arvif->ar;
 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
-	struct ieee80211_key_conf *key = info->control.hw_key;
 
 	if (!ieee80211_has_protected(hdr->frame_control))
 		return;
···
 	ieee80211_queue_work(ar->hw, &arvif->wep_key_work);
 }
 
-static void ath10k_tx_h_add_p2p_noa_ie(struct ath10k *ar, struct sk_buff *skb)
+static void ath10k_tx_h_add_p2p_noa_ie(struct ath10k *ar,
+				       struct ieee80211_vif *vif,
+				       struct sk_buff *skb)
 {
 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
-	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
-	struct ieee80211_vif *vif = info->control.vif;
 	struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
 
 	/* This is case only for P2P_GO */
···
 		      struct ieee80211_tx_control *control,
 		      struct sk_buff *skb)
 {
-	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
-	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
 	struct ath10k *ar = hw->priv;
-	u8 tid, vdev_id;
+	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+	struct ieee80211_vif *vif = info->control.vif;
+	struct ieee80211_key_conf *key = info->control.hw_key;
+	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
 
 	/* We should disable CCK RATE due to P2P */
 	if (info->flags & IEEE80211_TX_CTL_NO_CCK_RATE)
 		ath10k_dbg(ATH10K_DBG_MAC, "IEEE80211_TX_CTL_NO_CCK_RATE\n");
 
-	/* we must calculate tid before we apply qos workaround
-	 * as we'd lose the qos control field */
-	tid = ath10k_tx_h_get_tid(hdr);
-	vdev_id = ath10k_tx_h_get_vdev_id(ar, info);
+	ATH10K_SKB_CB(skb)->htt.is_offchan = false;
+	ATH10K_SKB_CB(skb)->htt.tid = ath10k_tx_h_get_tid(hdr);
+	ATH10K_SKB_CB(skb)->vdev_id = ath10k_tx_h_get_vdev_id(ar, info);
 
 	/* it makes no sense to process injected frames like that */
-	if (info->control.vif &&
-	    info->control.vif->type != NL80211_IFTYPE_MONITOR) {
-		ath10k_tx_h_qos_workaround(hw, control, skb);
-		ath10k_tx_h_update_wep_key(skb);
-		ath10k_tx_h_add_p2p_noa_ie(ar, skb);
-		ath10k_tx_h_seq_no(skb);
+	if (vif && vif->type != NL80211_IFTYPE_MONITOR) {
+		ath10k_tx_h_nwifi(hw, skb);
+		ath10k_tx_h_update_wep_key(vif, key, skb);
+		ath10k_tx_h_add_p2p_noa_ie(ar, vif, skb);
+		ath10k_tx_h_seq_no(vif, skb);
 	}
-
-	ATH10K_SKB_CB(skb)->vdev_id = vdev_id;
-	ATH10K_SKB_CB(skb)->htt.is_offchan = false;
-	ATH10K_SKB_CB(skb)->htt.tid = tid;
 
 	if (info->flags & IEEE80211_TX_CTL_TX_OFFCHAN) {
 		spin_lock_bh(&ar->data_lock);
···
 	return 0;
 }
 
+static int ath10k_ampdu_action(struct ieee80211_hw *hw,
+			       struct ieee80211_vif *vif,
+			       enum ieee80211_ampdu_mlme_action action,
+			       struct ieee80211_sta *sta, u16 tid, u16 *ssn,
+			       u8 buf_size)
+{
+	struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
+
+	ath10k_dbg(ATH10K_DBG_MAC, "mac ampdu vdev_id %i sta %pM tid %hu action %d\n",
+		   arvif->vdev_id, sta->addr, tid, action);
+
+	switch (action) {
+	case IEEE80211_AMPDU_RX_START:
+	case IEEE80211_AMPDU_RX_STOP:
+		/* HTT AddBa/DelBa events trigger mac80211 Rx BA session
+		 * creation/removal. Do we need to verify this?
+		 */
+		return 0;
+	case IEEE80211_AMPDU_TX_START:
+	case IEEE80211_AMPDU_TX_STOP_CONT:
+	case IEEE80211_AMPDU_TX_STOP_FLUSH:
+	case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
+	case IEEE80211_AMPDU_TX_OPERATIONAL:
+		/* Firmware offloads Tx aggregation entirely so deny mac80211
+		 * Tx aggregation requests.
+		 */
+		return -EOPNOTSUPP;
+	}
+
+	return -EINVAL;
+}
+
 static const struct ieee80211_ops ath10k_ops = {
 	.tx = ath10k_tx,
 	.start = ath10k_start,
···
 	.set_bitrate_mask = ath10k_set_bitrate_mask,
 	.sta_rc_update = ath10k_sta_rc_update,
 	.get_tsf = ath10k_get_tsf,
+	.ampdu_action = ath10k_ampdu_action,
 #ifdef CONFIG_PM
 	.suspend = ath10k_suspend,
 	.resume = ath10k_resume,
···
 
 	ar->hw->wiphy->interface_modes =
 		BIT(NL80211_IFTYPE_STATION) |
-		BIT(NL80211_IFTYPE_ADHOC) |
 		BIT(NL80211_IFTYPE_AP);
 
 	if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features)) {
···
 		ar->hw->wiphy->iface_combinations = ath10k_if_comb;
 		ar->hw->wiphy->n_iface_combinations =
 			ARRAY_SIZE(ath10k_if_comb);
+
+		ar->hw->wiphy->interface_modes |= BIT(NL80211_IFTYPE_ADHOC);
 	}
 
 	ar->hw->netdev_features = NETIF_F_HW_CSUM;
+2 -2
drivers/net/wireless/ath/ath10k/mac.h
···
 	return (struct ath10k_vif *)vif->drv_priv;
 }
 
-static inline void ath10k_tx_h_seq_no(struct sk_buff *skb)
+static inline void ath10k_tx_h_seq_no(struct ieee80211_vif *vif,
+				      struct sk_buff *skb)
 {
 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
-	struct ieee80211_vif *vif = info->control.vif;
 	struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
 
 	if (info->flags & IEEE80211_TX_CTL_ASSIGN_SEQ) {
+9 -8
drivers/net/wireless/ath/ath10k/pci.c
···
 	unsigned int nbytes, max_nbytes;
 	unsigned int transfer_id;
 	unsigned int flags;
-	int err;
+	int err, num_replenish = 0;
 
 	while (ath10k_ce_completed_recv_next(ce_state, &transfer_context,
 					     &ce_data, &nbytes, &transfer_id,
 					     &flags) == 0) {
-		err = ath10k_pci_post_rx_pipe(pipe_info, 1);
-		if (unlikely(err)) {
-			/* FIXME: retry */
-			ath10k_warn("failed to replenish CE rx ring %d: %d\n",
-				    pipe_info->pipe_num, err);
-		}
-
+		num_replenish++;
 		skb = transfer_context;
 		max_nbytes = skb->len + skb_tailroom(skb);
 		dma_unmap_single(ar->dev, ATH10K_SKB_CB(skb)->paddr,
···
 
 		skb_put(skb, nbytes);
 		cb->rx_completion(ar, skb, pipe_info->pipe_num);
+	}
+
+	err = ath10k_pci_post_rx_pipe(pipe_info, num_replenish);
+	if (unlikely(err)) {
+		/* FIXME: retry */
+		ath10k_warn("failed to replenish CE rx ring %d (%d bufs): %d\n",
+			    pipe_info->pipe_num, num_replenish, err);
 	}
 }
+1 -2
drivers/net/wireless/ath/ath10k/txrx.c
···
 	return NULL;
 }
 
-static struct ath10k_peer *ath10k_peer_find_by_id(struct ath10k *ar,
-						  int peer_id)
+struct ath10k_peer *ath10k_peer_find_by_id(struct ath10k *ar, int peer_id)
 {
 	struct ath10k_peer *peer;
 
+1
drivers/net/wireless/ath/ath10k/txrx.h
···
 
 struct ath10k_peer *ath10k_peer_find(struct ath10k *ar, int vdev_id,
 				     const u8 *addr);
+struct ath10k_peer *ath10k_peer_find_by_id(struct ath10k *ar, int peer_id);
 int ath10k_wait_for_peer_created(struct ath10k *ar, int vdev_id,
 				 const u8 *addr);
 int ath10k_wait_for_peer_deleted(struct ath10k *ar, int vdev_id,
+1 -1
drivers/net/wireless/ath/ath10k/wmi.c
···
 			continue;
 		}
 
-		ath10k_tx_h_seq_no(bcn);
+		ath10k_tx_h_seq_no(arvif->vif, bcn);
 		ath10k_wmi_update_tim(ar, arvif, bcn, bcn_info);
 		ath10k_wmi_update_noa(ar, arvif, bcn, bcn_info);
 
+1
drivers/net/wireless/ath/ath9k/ahb.c
···
 
 	irq = res->start;
 
+	ath9k_fill_chanctx_ops();
 	hw = ieee80211_alloc_hw(sizeof(struct ath_softc), &ath9k_ops);
 	if (hw == NULL) {
 		dev_err(&pdev->dev, "no memory for ieee80211_hw\n");
+9 -13
drivers/net/wireless/b43/Kconfig
···
 	  SoC: BCM4712, BCM5352E
 
 config B43_PHY_N
-	bool "Support for 802.11n (N-PHY) devices"
+	bool "Support for N-PHY (the main 802.11n series) devices"
 	depends on B43
 	default y
 	---help---
-	  Support for the N-PHY.
-
-	  This enables support for devices with N-PHY.
-
-	  Say N if you expect high stability and performance. Saying Y will not
-	  affect other devices support and may provide support for basic needs.
+	  This PHY type can be found in the following chipsets:
+	  PCI: BCM4321, BCM4322,
+	       BCM43222, BCM43224, BCM43225,
+	       BCM43131, BCM43217, BCM43227, BCM43228
+	  SoC: BCM4716, BCM4717, BCM4718, BCM5356, BCM5357, BCM5358
 
 config B43_PHY_LP
-	bool "Support for low-power (LP-PHY) devices"
+	bool "Support for LP-PHY (low-power 802.11g) devices"
 	depends on B43 && B43_SSB
 	default y
 	---help---
-	  Support for the LP-PHY.
 	  The LP-PHY is a low-power PHY built into some notebooks
 	  and embedded devices. It supports 802.11a/b/g
 	  (802.11a support is optional, and currently disabled).
 
 config B43_PHY_HT
-	bool "Support for HT-PHY (high throughput) devices"
+	bool "Support for HT-PHY (high throughput 802.11n) devices"
 	depends on B43 && B43_BCMA
 	default y
 	---help---
-	  Support for the HT-PHY.
-
-	  Enables support for BCM4331 and possibly other chipsets with that PHY.
+	  This PHY type with 3x3:3 MIMO can be found in the BCM4331 PCI chipset.
 
 config B43_PHY_LCN
 	bool "Support for LCN-PHY devices (BROKEN)"
+2 -1
drivers/net/wireless/b43/main.c
···
 {
 	u16 chip_id = dev->dev->chip_id;
 
-	if (chip_id == BCMA_CHIP_ID_BCM43217 ||
+	if (chip_id == BCMA_CHIP_ID_BCM43131 ||
+	    chip_id == BCMA_CHIP_ID_BCM43217 ||
 	    chip_id == BCMA_CHIP_ID_BCM43222 ||
 	    chip_id == BCMA_CHIP_ID_BCM43224 ||
 	    chip_id == BCMA_CHIP_ID_BCM43225 ||
+5 -1
drivers/net/wireless/b43/phy_n.c
···
 	if (dev->phy.rev == 16)
 		b43_nphy_pa_set_tx_dig_filter(dev, 0x186, dig_filter_phy_rev16);
 
-	if (dev->dev->chip_id == BCMA_CHIP_ID_BCM43217) {
+	/* Verified with BCM43131 and BCM43217 */
+	if (dev->phy.rev == 17) {
 		b43_nphy_pa_set_tx_dig_filter(dev, 0x186, dig_filter_phy_rev16);
 		b43_nphy_pa_set_tx_dig_filter(dev, 0x195,
 					      tbl_tx_filter_coef_rev4[1]);
···
 	u16 tmp16;
 
 	if (new_channel->band == IEEE80211_BAND_5GHZ) {
+		/* Switch to 2 GHz for a moment to access B43_PHY_B_BBCFG */
+		b43_phy_mask(dev, B43_NPHY_BANDCTL, ~B43_NPHY_BANDCTL_5GHZ);
+
 		tmp16 = b43_read16(dev, B43_MMIO_PSM_PHY_HDR);
 		b43_write16(dev, B43_MMIO_PSM_PHY_HDR, tmp16 | 4);
 		/* Put BPHY in the reset */
+10
drivers/net/wireless/brcm80211/Kconfig
···
 	  IEEE802.11n embedded FullMAC WLAN driver. Say Y if you want to
 	  use the driver for an USB wireless card.
 
+config BRCMFMAC_PCIE
+	bool "PCIE bus interface support for FullMAC driver"
+	depends on BRCMFMAC
+	depends on PCI
+	select FW_LOADER
+	---help---
+	  This option enables the PCIE bus interface support for Broadcom
+	  IEEE802.11ac embedded FullMAC WLAN driver. Say Y if you want to
+	  use the driver for a PCIE wireless card.
+
 config BRCM_TRACING
 	bool "Broadcom device tracing"
 	depends on BRCMSMAC || BRCMFMAC
+7
drivers/net/wireless/brcm80211/brcmfmac/Makefile
···
 		p2p.o \
 		proto.o \
 		bcdc.o \
+		commonring.o \
+		flowring.o \
+		msgbuf.o \
 		dhd_common.o \
 		dhd_linux.o \
 		firmware.o \
···
 		bcmsdh.o
 brcmfmac-$(CONFIG_BRCMFMAC_USB) += \
 		usb.o
+brcmfmac-$(CONFIG_BRCMFMAC_PCIE) += \
+		pcie.o
 brcmfmac-$(CONFIG_BRCMDBG) += \
 		dhd_dbg.o
 brcmfmac-$(CONFIG_BRCM_TRACING) += \
 		tracepoint.o
+brcmfmac-$(CONFIG_OF) += \
+		of.o
+20
drivers/net/wireless/brcm80211/brcmfmac/bcdc.c
···
 	return brcmf_bus_txdata(drvr->bus_if, pktbuf);
 }
 
+static void
+brcmf_proto_bcdc_configure_addr_mode(struct brcmf_pub *drvr, int ifidx,
+				     enum proto_addr_mode addr_mode)
+{
+}
+
+static void
+brcmf_proto_bcdc_delete_peer(struct brcmf_pub *drvr, int ifidx,
+			     u8 peer[ETH_ALEN])
+{
+}
+
+static void
+brcmf_proto_bcdc_add_tdls_peer(struct brcmf_pub *drvr, int ifidx,
+			       u8 peer[ETH_ALEN])
+{
+}
 
 int brcmf_proto_bcdc_attach(struct brcmf_pub *drvr)
 {
···
 	drvr->proto->query_dcmd = brcmf_proto_bcdc_query_dcmd;
 	drvr->proto->set_dcmd = brcmf_proto_bcdc_set_dcmd;
 	drvr->proto->txdata = brcmf_proto_bcdc_txdata;
+	drvr->proto->configure_addr_mode = brcmf_proto_bcdc_configure_addr_mode;
+	drvr->proto->delete_peer = brcmf_proto_bcdc_delete_peer;
+	drvr->proto->add_tdls_peer = brcmf_proto_bcdc_add_tdls_peer;
 	drvr->proto->pd = bcdc;
 
 	drvr->hdrlen += BCDC_HEADER_LEN + BRCMF_PROT_FW_SIGNAL_MAX_TXBYTES;
+20
drivers/net/wireless/brcm80211/brcmfmac/bcmsdh.c
···
 #include <brcm_hw_ids.h>
 #include <brcmu_utils.h>
 #include <brcmu_wifi.h>
+#include <chipcommon.h>
 #include <soc.h>
+#include "chip.h"
 #include "dhd_bus.h"
 #include "dhd_dbg.h"
 #include "sdio_host.h"
+#include "of.h"
 
 #define SDIOH_API_ACCESS_RETRY_LIMIT	2
 
···
 {
 	int ret = 0;
 	u8 data;
+	u32 addr, gpiocontrol;
 	unsigned long flags;
 
 	if ((sdiodev->pdata) && (sdiodev->pdata->oob_irq_supported)) {
···
 		sdiodev->irq_wake = true;
 
 		sdio_claim_host(sdiodev->func[1]);
+
+		if (sdiodev->bus_if->chip == BRCM_CC_43362_CHIP_ID) {
+			/* assign GPIO to SDIO core */
+			addr = CORE_CC_REG(SI_ENUM_BASE, gpiocontrol);
+			gpiocontrol = brcmf_sdiod_regrl(sdiodev, addr, &ret);
+			gpiocontrol |= 0x2;
+			brcmf_sdiod_regwl(sdiodev, addr, gpiocontrol, &ret);
+
+			brcmf_sdiod_regwb(sdiodev, SBSDIO_GPIO_SELECT, 0xf,
+					  &ret);
+			brcmf_sdiod_regwb(sdiodev, SBSDIO_GPIO_OUT, 0, &ret);
+			brcmf_sdiod_regwb(sdiodev, SBSDIO_GPIO_EN, 0x2, &ret);
+		}
 
 		/* must configure SDIO_CCCR_IENx to enable irq */
 		data = brcmf_sdiod_regrb(sdiodev, SDIO_CCCR_IENx, &ret);
···
 	dev_set_drvdata(&sdiodev->func[1]->dev, bus_if);
 	sdiodev->dev = &sdiodev->func[1]->dev;
 	sdiodev->pdata = brcmfmac_sdio_pdata;
+
+	if (!sdiodev->pdata)
+		brcmf_of_probe(sdiodev);
 
 	atomic_set(&sdiodev->suspend, false);
 	init_waitqueue_head(&sdiodev->request_word_wait);
+8
drivers/net/wireless/brcm80211/brcmfmac/chip.c
··· 506 506 break; 507 507 case BRCM_CC_4339_CHIP_ID: 508 508 case BRCM_CC_4354_CHIP_ID: 509 + case BRCM_CC_4356_CHIP_ID: 510 + case BRCM_CC_43567_CHIP_ID: 511 + case BRCM_CC_43569_CHIP_ID: 512 + case BRCM_CC_43570_CHIP_ID: 509 513 ci->pub.ramsize = 0xc0000; 514 + ci->pub.rambase = 0x180000; 515 + break; 516 + case BRCM_CC_43602_CHIP_ID: 517 + ci->pub.ramsize = 0xf0000; 510 518 ci->pub.rambase = 0x180000; 511 519 break; 512 520 default:
+273
drivers/net/wireless/brcm80211/brcmfmac/commonring.c
··· 1 + /* Copyright (c) 2014 Broadcom Corporation 2 + * 3 + * Permission to use, copy, modify, and/or distribute this software for any 4 + * purpose with or without fee is hereby granted, provided that the above 5 + * copyright notice and this permission notice appear in all copies. 6 + * 7 + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 8 + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 9 + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY 10 + * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 11 + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION 12 + * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN 13 + * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 14 + */ 15 + 16 + #include <linux/types.h> 17 + #include <linux/netdevice.h> 18 + 19 + #include <brcmu_utils.h> 20 + #include <brcmu_wifi.h> 21 + 22 + #include "dhd.h" 23 + #include "commonring.h" 24 + 25 + 26 + /* dma flushing needs implementation for mips and arm platforms. Should 27 + * be put in util. Note, this is not real flushing. It is virtual non 28 + * cached memory. Only write buffers should have to be drained. Though 29 + * this may be different depending on platform...... 
30 + * SEE ALSO msgbuf.c 31 + */ 32 + #define brcmf_dma_flush(addr, len) 33 + #define brcmf_dma_invalidate_cache(addr, len) 34 + 35 + 36 + void brcmf_commonring_register_cb(struct brcmf_commonring *commonring, 37 + int (*cr_ring_bell)(void *ctx), 38 + int (*cr_update_rptr)(void *ctx), 39 + int (*cr_update_wptr)(void *ctx), 40 + int (*cr_write_rptr)(void *ctx), 41 + int (*cr_write_wptr)(void *ctx), void *ctx) 42 + { 43 + commonring->cr_ring_bell = cr_ring_bell; 44 + commonring->cr_update_rptr = cr_update_rptr; 45 + commonring->cr_update_wptr = cr_update_wptr; 46 + commonring->cr_write_rptr = cr_write_rptr; 47 + commonring->cr_write_wptr = cr_write_wptr; 48 + commonring->cr_ctx = ctx; 49 + } 50 + 51 + 52 + void brcmf_commonring_config(struct brcmf_commonring *commonring, u16 depth, 53 + u16 item_len, void *buf_addr) 54 + { 55 + commonring->depth = depth; 56 + commonring->item_len = item_len; 57 + commonring->buf_addr = buf_addr; 58 + if (!commonring->inited) { 59 + spin_lock_init(&commonring->lock); 60 + commonring->inited = true; 61 + } 62 + commonring->r_ptr = 0; 63 + if (commonring->cr_write_rptr) 64 + commonring->cr_write_rptr(commonring->cr_ctx); 65 + commonring->w_ptr = 0; 66 + if (commonring->cr_write_wptr) 67 + commonring->cr_write_wptr(commonring->cr_ctx); 68 + commonring->f_ptr = 0; 69 + } 70 + 71 + 72 + void brcmf_commonring_lock(struct brcmf_commonring *commonring) 73 + __acquires(&commonring->lock) 74 + { 75 + unsigned long flags; 76 + 77 + spin_lock_irqsave(&commonring->lock, flags); 78 + commonring->flags = flags; 79 + } 80 + 81 + 82 + void brcmf_commonring_unlock(struct brcmf_commonring *commonring) 83 + __releases(&commonring->lock) 84 + { 85 + spin_unlock_irqrestore(&commonring->lock, commonring->flags); 86 + } 87 + 88 + 89 + bool brcmf_commonring_write_available(struct brcmf_commonring *commonring) 90 + { 91 + u16 available; 92 + bool retry = true; 93 + 94 + again: 95 + if (commonring->r_ptr <= commonring->w_ptr) 96 + available = commonring->depth 
- commonring->w_ptr + 97 + commonring->r_ptr; 98 + else 99 + available = commonring->r_ptr - commonring->w_ptr; 100 + 101 + if (available > 1) { 102 + if (!commonring->was_full) 103 + return true; 104 + if (available > commonring->depth / 8) { 105 + commonring->was_full = false; 106 + return true; 107 + } 108 + if (retry) { 109 + if (commonring->cr_update_rptr) 110 + commonring->cr_update_rptr(commonring->cr_ctx); 111 + retry = false; 112 + goto again; 113 + } 114 + return false; 115 + } 116 + 117 + if (retry) { 118 + if (commonring->cr_update_rptr) 119 + commonring->cr_update_rptr(commonring->cr_ctx); 120 + retry = false; 121 + goto again; 122 + } 123 + 124 + commonring->was_full = true; 125 + return false; 126 + } 127 + 128 + 129 + void *brcmf_commonring_reserve_for_write(struct brcmf_commonring *commonring) 130 + { 131 + void *ret_ptr; 132 + u16 available; 133 + bool retry = true; 134 + 135 + again: 136 + if (commonring->r_ptr <= commonring->w_ptr) 137 + available = commonring->depth - commonring->w_ptr + 138 + commonring->r_ptr; 139 + else 140 + available = commonring->r_ptr - commonring->w_ptr; 141 + 142 + if (available > 1) { 143 + ret_ptr = commonring->buf_addr + 144 + (commonring->w_ptr * commonring->item_len); 145 + commonring->w_ptr++; 146 + if (commonring->w_ptr == commonring->depth) 147 + commonring->w_ptr = 0; 148 + return ret_ptr; 149 + } 150 + 151 + if (retry) { 152 + if (commonring->cr_update_rptr) 153 + commonring->cr_update_rptr(commonring->cr_ctx); 154 + retry = false; 155 + goto again; 156 + } 157 + 158 + commonring->was_full = true; 159 + return NULL; 160 + } 161 + 162 + 163 + void * 164 + brcmf_commonring_reserve_for_write_multiple(struct brcmf_commonring *commonring, 165 + u16 n_items, u16 *alloced) 166 + { 167 + void *ret_ptr; 168 + u16 available; 169 + bool retry = true; 170 + 171 + again: 172 + if (commonring->r_ptr <= commonring->w_ptr) 173 + available = commonring->depth - commonring->w_ptr + 174 + commonring->r_ptr; 175 + else 176 + 
available = commonring->r_ptr - commonring->w_ptr; 177 + 178 + if (available > 1) { 179 + ret_ptr = commonring->buf_addr + 180 + (commonring->w_ptr * commonring->item_len); 181 + *alloced = min_t(u16, n_items, available - 1); 182 + if (*alloced + commonring->w_ptr > commonring->depth) 183 + *alloced = commonring->depth - commonring->w_ptr; 184 + commonring->w_ptr += *alloced; 185 + if (commonring->w_ptr == commonring->depth) 186 + commonring->w_ptr = 0; 187 + return ret_ptr; 188 + } 189 + 190 + if (retry) { 191 + if (commonring->cr_update_rptr) 192 + commonring->cr_update_rptr(commonring->cr_ctx); 193 + retry = false; 194 + goto again; 195 + } 196 + 197 + commonring->was_full = true; 198 + return NULL; 199 + } 200 + 201 + 202 + int brcmf_commonring_write_complete(struct brcmf_commonring *commonring) 203 + { 204 + void *address; 205 + 206 + address = commonring->buf_addr; 207 + address += (commonring->f_ptr * commonring->item_len); 208 + if (commonring->f_ptr > commonring->w_ptr) { 209 + brcmf_dma_flush(address, 210 + (commonring->depth - commonring->f_ptr) * 211 + commonring->item_len); 212 + address = commonring->buf_addr; 213 + commonring->f_ptr = 0; 214 + } 215 + brcmf_dma_flush(address, (commonring->w_ptr - commonring->f_ptr) * 216 + commonring->item_len); 217 + 218 + commonring->f_ptr = commonring->w_ptr; 219 + 220 + if (commonring->cr_write_wptr) 221 + commonring->cr_write_wptr(commonring->cr_ctx); 222 + if (commonring->cr_ring_bell) 223 + return commonring->cr_ring_bell(commonring->cr_ctx); 224 + 225 + return -EIO; 226 + } 227 + 228 + 229 + void brcmf_commonring_write_cancel(struct brcmf_commonring *commonring, 230 + u16 n_items) 231 + { 232 + if (commonring->w_ptr == 0) 233 + commonring->w_ptr = commonring->depth - n_items; 234 + else 235 + commonring->w_ptr -= n_items; 236 + } 237 + 238 + 239 + void *brcmf_commonring_get_read_ptr(struct brcmf_commonring *commonring, 240 + u16 *n_items) 241 + { 242 + void *ret_addr; 243 + 244 + if 
(commonring->cr_update_wptr) 245 + commonring->cr_update_wptr(commonring->cr_ctx); 246 + 247 + *n_items = (commonring->w_ptr >= commonring->r_ptr) ? 248 + (commonring->w_ptr - commonring->r_ptr) : 249 + (commonring->depth - commonring->r_ptr); 250 + 251 + if (*n_items == 0) 252 + return NULL; 253 + 254 + ret_addr = commonring->buf_addr + 255 + (commonring->r_ptr * commonring->item_len); 256 + 257 + commonring->r_ptr += *n_items; 258 + if (commonring->r_ptr == commonring->depth) 259 + commonring->r_ptr = 0; 260 + 261 + brcmf_dma_invalidate_cache(ret_addr, *n_items * commonring->item_len); 262 + 263 + return ret_addr; 264 + } 265 + 266 + 267 + int brcmf_commonring_read_complete(struct brcmf_commonring *commonring) 268 + { 269 + if (commonring->cr_write_rptr) 270 + return commonring->cr_write_rptr(commonring->cr_ctx); 271 + 272 + return -EIO; 273 + }
+69
drivers/net/wireless/brcm80211/brcmfmac/commonring.h
··· 1 + /* Copyright (c) 2014 Broadcom Corporation 2 + * 3 + * Permission to use, copy, modify, and/or distribute this software for any 4 + * purpose with or without fee is hereby granted, provided that the above 5 + * copyright notice and this permission notice appear in all copies. 6 + * 7 + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 8 + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 9 + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY 10 + * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 11 + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION 12 + * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN 13 + * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 14 + */ 15 + #ifndef BRCMFMAC_COMMONRING_H 16 + #define BRCMFMAC_COMMONRING_H 17 + 18 + 19 + struct brcmf_commonring { 20 + u16 r_ptr; 21 + u16 w_ptr; 22 + u16 f_ptr; 23 + u16 depth; 24 + u16 item_len; 25 + 26 + void *buf_addr; 27 + 28 + int (*cr_ring_bell)(void *ctx); 29 + int (*cr_update_rptr)(void *ctx); 30 + int (*cr_update_wptr)(void *ctx); 31 + int (*cr_write_rptr)(void *ctx); 32 + int (*cr_write_wptr)(void *ctx); 33 + 34 + void *cr_ctx; 35 + 36 + spinlock_t lock; 37 + unsigned long flags; 38 + bool inited; 39 + bool was_full; 40 + }; 41 + 42 + 43 + void brcmf_commonring_register_cb(struct brcmf_commonring *commonring, 44 + int (*cr_ring_bell)(void *ctx), 45 + int (*cr_update_rptr)(void *ctx), 46 + int (*cr_update_wptr)(void *ctx), 47 + int (*cr_write_rptr)(void *ctx), 48 + int (*cr_write_wptr)(void *ctx), void *ctx); 49 + void brcmf_commonring_config(struct brcmf_commonring *commonring, u16 depth, 50 + u16 item_len, void *buf_addr); 51 + void brcmf_commonring_lock(struct brcmf_commonring *commonring); 52 + void brcmf_commonring_unlock(struct brcmf_commonring *commonring); 53 + bool brcmf_commonring_write_available(struct brcmf_commonring *commonring); 
54 + void *brcmf_commonring_reserve_for_write(struct brcmf_commonring *commonring); 55 + void * 56 + brcmf_commonring_reserve_for_write_multiple(struct brcmf_commonring *commonring, 57 + u16 n_items, u16 *alloced); 58 + int brcmf_commonring_write_complete(struct brcmf_commonring *commonring); 59 + void brcmf_commonring_write_cancel(struct brcmf_commonring *commonring, 60 + u16 n_items); 61 + void *brcmf_commonring_get_read_ptr(struct brcmf_commonring *commonring, 62 + u16 *n_items); 63 + int brcmf_commonring_read_complete(struct brcmf_commonring *commonring); 64 + 65 + #define brcmf_commonring_n_items(commonring) (commonring->depth) 66 + #define brcmf_commonring_len_item(commonring) (commonring->item_len) 67 + 68 + 69 + #endif /* BRCMFMAC_COMMONRING_H */
+4 -3
drivers/net/wireless/brcm80211/brcmfmac/dhd.h
··· 121 121 * 122 122 * @BRCMF_NETIF_STOP_REASON_FWS_FC: 123 123 * netif stopped due to firmware signalling flow control. 124 - * @BRCMF_NETIF_STOP_REASON_BLOCK_BUS: 125 - * netif stopped due to bus blocking. 124 + * @BRCMF_NETIF_STOP_REASON_FLOW: 125 + * netif stopped due to flowring full. 126 126 */ 127 127 enum brcmf_netif_stop_reason { 128 128 BRCMF_NETIF_STOP_REASON_FWS_FC = 1, 129 - BRCMF_NETIF_STOP_REASON_BLOCK_BUS = 2 129 + BRCMF_NETIF_STOP_REASON_FLOW = 2 130 130 }; 131 131 132 132 /** ··· 181 181 enum brcmf_netif_stop_reason reason, bool state); 182 182 void brcmf_txfinalize(struct brcmf_pub *drvr, struct sk_buff *txp, u8 ifidx, 183 183 bool success); 184 + void brcmf_netif_rx(struct brcmf_if *ifp, struct sk_buff *skb); 184 185 185 186 /* Sets dongle media info (drv_version, mac address). */ 186 187 int brcmf_c_preinit_dcmds(struct brcmf_if *ifp);
+33
drivers/net/wireless/brcm80211/brcmfmac/dhd_bus.h
··· 19 19 20 20 #include "dhd_dbg.h" 21 21 22 + /* IDs of the 6 default common rings of msgbuf protocol */ 23 + #define BRCMF_H2D_MSGRING_CONTROL_SUBMIT 0 24 + #define BRCMF_H2D_MSGRING_RXPOST_SUBMIT 1 25 + #define BRCMF_D2H_MSGRING_CONTROL_COMPLETE 2 26 + #define BRCMF_D2H_MSGRING_TX_COMPLETE 3 27 + #define BRCMF_D2H_MSGRING_RX_COMPLETE 4 28 + 29 + #define BRCMF_NROF_H2D_COMMON_MSGRINGS 2 30 + #define BRCMF_NROF_D2H_COMMON_MSGRINGS 3 31 + #define BRCMF_NROF_COMMON_MSGRINGS (BRCMF_NROF_H2D_COMMON_MSGRINGS + \ 32 + BRCMF_NROF_D2H_COMMON_MSGRINGS) 33 + 22 34 /* The level of bus communication with the dongle */ 23 35 enum brcmf_bus_state { 24 36 BRCMF_BUS_UNKNOWN, /* Not determined yet */ ··· 82 70 struct pktq * (*gettxq)(struct device *dev); 83 71 }; 84 72 73 + 74 + /** 75 + * struct brcmf_bus_msgbuf - bus ringbuf if in case of msgbuf. 76 + * 77 + * @commonrings: commonrings which are always there. 78 + * @flowrings: commonrings which are dynamically created and destroyed for data. 79 + * @rx_dataoffset: if set then all rx data has this offset. 80 + * @max_rxbufpost: maximum number of buffers to post for rx. 81 + * @nrof_flowrings: number of flowrings. 82 + */ 83 + struct brcmf_bus_msgbuf { 84 + struct brcmf_commonring *commonrings[BRCMF_NROF_COMMON_MSGRINGS]; 85 + struct brcmf_commonring **flowrings; 86 + u32 rx_dataoffset; 87 + u32 max_rxbufpost; 88 + u32 nrof_flowrings; 89 + }; 90 + 91 + 85 92 /** 86 93 * struct brcmf_bus - interface structure between common and bus layer 87 94 * ··· 120 89 union { 121 90 struct brcmf_sdio_dev *sdio; 122 91 struct brcmf_usbdev *usb; 92 + struct brcmf_pciedev *pcie; 123 93 } bus_priv; 124 94 enum brcmf_bus_protocol_type proto_type; 125 95 struct device *dev; ··· 133 101 bool always_use_fws_queue; 134 102 135 103 struct brcmf_bus_ops *ops; 104 + struct brcmf_bus_msgbuf *msgbuf; 136 105 }; 137 106 138 107 /*
+19 -17
drivers/net/wireless/brcm80211/brcmfmac/dhd_dbg.h
··· 18 18 #define _BRCMF_DBG_H_ 19 19 20 20 /* message levels */ 21 - #define BRCMF_TRACE_VAL 0x00000002 22 - #define BRCMF_INFO_VAL 0x00000004 23 - #define BRCMF_DATA_VAL 0x00000008 24 - #define BRCMF_CTL_VAL 0x00000010 25 - #define BRCMF_TIMER_VAL 0x00000020 26 - #define BRCMF_HDRS_VAL 0x00000040 27 - #define BRCMF_BYTES_VAL 0x00000080 28 - #define BRCMF_INTR_VAL 0x00000100 29 - #define BRCMF_GLOM_VAL 0x00000200 30 - #define BRCMF_EVENT_VAL 0x00000400 31 - #define BRCMF_BTA_VAL 0x00000800 32 - #define BRCMF_FIL_VAL 0x00001000 33 - #define BRCMF_USB_VAL 0x00002000 34 - #define BRCMF_SCAN_VAL 0x00004000 35 - #define BRCMF_CONN_VAL 0x00008000 36 - #define BRCMF_BCDC_VAL 0x00010000 37 - #define BRCMF_SDIO_VAL 0x00020000 21 + #define BRCMF_TRACE_VAL 0x00000002 22 + #define BRCMF_INFO_VAL 0x00000004 23 + #define BRCMF_DATA_VAL 0x00000008 24 + #define BRCMF_CTL_VAL 0x00000010 25 + #define BRCMF_TIMER_VAL 0x00000020 26 + #define BRCMF_HDRS_VAL 0x00000040 27 + #define BRCMF_BYTES_VAL 0x00000080 28 + #define BRCMF_INTR_VAL 0x00000100 29 + #define BRCMF_GLOM_VAL 0x00000200 30 + #define BRCMF_EVENT_VAL 0x00000400 31 + #define BRCMF_BTA_VAL 0x00000800 32 + #define BRCMF_FIL_VAL 0x00001000 33 + #define BRCMF_USB_VAL 0x00002000 34 + #define BRCMF_SCAN_VAL 0x00004000 35 + #define BRCMF_CONN_VAL 0x00008000 36 + #define BRCMF_BCDC_VAL 0x00010000 37 + #define BRCMF_SDIO_VAL 0x00020000 38 + #define BRCMF_MSGBUF_VAL 0x00040000 39 + #define BRCMF_PCIE_VAL 0x00080000 38 40 39 41 /* set default print format */ 40 42 #undef pr_fmt
+8 -1
drivers/net/wireless/brcm80211/brcmfmac/dhd_linux.c
··· 32 32 #include "fwsignal.h" 33 33 #include "feature.h" 34 34 #include "proto.h" 35 + #include "pcie.h" 35 36 36 37 MODULE_AUTHOR("Broadcom Corporation"); 37 38 MODULE_DESCRIPTION("Broadcom 802.11 wireless LAN fullmac driver."); ··· 289 288 brcmf_fws_bus_blocked(drvr, state); 290 289 } 291 290 292 - static void brcmf_netif_rx(struct brcmf_if *ifp, struct sk_buff *skb) 291 + void brcmf_netif_rx(struct brcmf_if *ifp, struct sk_buff *skb) 293 292 { 294 293 skb->dev = ifp->ndev; 295 294 skb->protocol = eth_type_trans(skb, skb->dev); ··· 1086 1085 #ifdef CONFIG_BRCMFMAC_USB 1087 1086 brcmf_usb_register(); 1088 1087 #endif 1088 + #ifdef CONFIG_BRCMFMAC_PCIE 1089 + brcmf_pcie_register(); 1090 + #endif 1089 1091 } 1090 1092 static DECLARE_WORK(brcmf_driver_work, brcmf_driver_register); 1091 1093 ··· 1113 1109 #endif 1114 1110 #ifdef CONFIG_BRCMFMAC_USB 1115 1111 brcmf_usb_exit(); 1112 + #endif 1113 + #ifdef CONFIG_BRCMFMAC_PCIE 1114 + brcmf_pcie_exit(); 1116 1115 #endif 1117 1116 brcmf_debugfs_exit(); 1118 1117 }
+17 -6
drivers/net/wireless/brcm80211/brcmfmac/dhd_sdio.c
··· 670 670 struct brcmf_sdio_dev *sdiodev) 671 671 { 672 672 int i; 673 + uint fw_len, nv_len; 674 + char end; 673 675 674 676 for (i = 0; i < ARRAY_SIZE(brcmf_fwname_data); i++) { 675 677 if (brcmf_fwname_data[i].chipid == ci->chip && ··· 684 682 return -ENODEV; 685 683 } 686 684 685 + fw_len = sizeof(sdiodev->fw_name) - 1; 686 + nv_len = sizeof(sdiodev->nvram_name) - 1; 687 687 /* check if firmware path is provided by module parameter */ 688 688 if (brcmf_firmware_path[0] != '\0') { 689 - if (brcmf_firmware_path[strlen(brcmf_firmware_path) - 1] != '/') 690 - strcat(brcmf_firmware_path, "/"); 689 + strncpy(sdiodev->fw_name, brcmf_firmware_path, fw_len); 690 + strncpy(sdiodev->nvram_name, brcmf_firmware_path, nv_len); 691 + fw_len -= strlen(sdiodev->fw_name); 692 + nv_len -= strlen(sdiodev->nvram_name); 691 693 692 - strcpy(sdiodev->fw_name, brcmf_firmware_path); 693 - strcpy(sdiodev->nvram_name, brcmf_firmware_path); 694 + end = brcmf_firmware_path[strlen(brcmf_firmware_path) - 1]; 695 + if (end != '/') { 696 + strncat(sdiodev->fw_name, "/", fw_len); 697 + strncat(sdiodev->nvram_name, "/", nv_len); 698 + fw_len--; 699 + nv_len--; 700 + } 694 701 } 695 - strcat(sdiodev->fw_name, brcmf_fwname_data[i].bin); 696 - strcat(sdiodev->nvram_name, brcmf_fwname_data[i].nv); 702 + strncat(sdiodev->fw_name, brcmf_fwname_data[i].bin, fw_len); 703 + strncat(sdiodev->nvram_name, brcmf_fwname_data[i].nv, nv_len); 697 704 698 705 return 0; 699 706 }
+501
drivers/net/wireless/brcm80211/brcmfmac/flowring.c
··· 1 + /* Copyright (c) 2014 Broadcom Corporation 2 + * 3 + * Permission to use, copy, modify, and/or distribute this software for any 4 + * purpose with or without fee is hereby granted, provided that the above 5 + * copyright notice and this permission notice appear in all copies. 6 + * 7 + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 8 + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 9 + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY 10 + * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 11 + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION 12 + * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN 13 + * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 14 + */ 15 + 16 + 17 + #include <linux/types.h> 18 + #include <linux/netdevice.h> 19 + #include <linux/etherdevice.h> 20 + #include <brcmu_utils.h> 21 + 22 + #include "dhd.h" 23 + #include "dhd_dbg.h" 24 + #include "dhd_bus.h" 25 + #include "proto.h" 26 + #include "flowring.h" 27 + #include "msgbuf.h" 28 + 29 + 30 + #define BRCMF_FLOWRING_HIGH 1024 31 + #define BRCMF_FLOWRING_LOW (BRCMF_FLOWRING_HIGH - 256) 32 + #define BRCMF_FLOWRING_INVALID_IFIDX 0xff 33 + 34 + #define BRCMF_FLOWRING_HASH_AP(da, fifo, ifidx) (da[5] + fifo + ifidx * 16) 35 + #define BRCMF_FLOWRING_HASH_STA(fifo, ifidx) (fifo + ifidx * 16) 36 + 37 + static const u8 ALLZEROMAC[ETH_ALEN] = { 0, 0, 0, 0, 0, 0 }; 38 + static const u8 ALLFFMAC[ETH_ALEN] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }; 39 + 40 + static const u8 brcmf_flowring_prio2fifo[] = { 41 + 1, 42 + 0, 43 + 0, 44 + 1, 45 + 2, 46 + 2, 47 + 3, 48 + 3 49 + }; 50 + 51 + 52 + static bool 53 + brcmf_flowring_is_tdls_mac(struct brcmf_flowring *flow, u8 mac[ETH_ALEN]) 54 + { 55 + struct brcmf_flowring_tdls_entry *search; 56 + 57 + search = flow->tdls_entry; 58 + 59 + while (search) { 60 + if (memcmp(search->mac, mac, ETH_ALEN) == 0) 61 + 
return true; 62 + search = search->next; 63 + } 64 + 65 + return false; 66 + } 67 + 68 + 69 + u32 brcmf_flowring_lookup(struct brcmf_flowring *flow, u8 da[ETH_ALEN], 70 + u8 prio, u8 ifidx) 71 + { 72 + struct brcmf_flowring_hash *hash; 73 + u8 hash_idx; 74 + u32 i; 75 + bool found; 76 + bool sta; 77 + u8 fifo; 78 + u8 *mac; 79 + 80 + fifo = brcmf_flowring_prio2fifo[prio]; 81 + sta = (flow->addr_mode[ifidx] == ADDR_INDIRECT); 82 + mac = da; 83 + if ((!sta) && (is_multicast_ether_addr(da))) { 84 + mac = (u8 *)ALLFFMAC; 85 + fifo = 0; 86 + } 87 + if ((sta) && (flow->tdls_active) && 88 + (brcmf_flowring_is_tdls_mac(flow, da))) { 89 + sta = false; 90 + } 91 + hash_idx = sta ? BRCMF_FLOWRING_HASH_STA(fifo, ifidx) : 92 + BRCMF_FLOWRING_HASH_AP(mac, fifo, ifidx); 93 + found = false; 94 + hash = flow->hash; 95 + for (i = 0; i < BRCMF_FLOWRING_HASHSIZE; i++) { 96 + if ((sta || (memcmp(hash[hash_idx].mac, mac, ETH_ALEN) == 0)) && 97 + (hash[hash_idx].fifo == fifo) && 98 + (hash[hash_idx].ifidx == ifidx)) { 99 + found = true; 100 + break; 101 + } 102 + hash_idx++; 103 + } 104 + if (found) 105 + return hash[hash_idx].flowid; 106 + 107 + return BRCMF_FLOWRING_INVALID_ID; 108 + } 109 + 110 + 111 + u32 brcmf_flowring_create(struct brcmf_flowring *flow, u8 da[ETH_ALEN], 112 + u8 prio, u8 ifidx) 113 + { 114 + struct brcmf_flowring_ring *ring; 115 + struct brcmf_flowring_hash *hash; 116 + u8 hash_idx; 117 + u32 i; 118 + bool found; 119 + u8 fifo; 120 + bool sta; 121 + u8 *mac; 122 + 123 + fifo = brcmf_flowring_prio2fifo[prio]; 124 + sta = (flow->addr_mode[ifidx] == ADDR_INDIRECT); 125 + mac = da; 126 + if ((!sta) && (is_multicast_ether_addr(da))) { 127 + mac = (u8 *)ALLFFMAC; 128 + fifo = 0; 129 + } 130 + if ((sta) && (flow->tdls_active) && 131 + (brcmf_flowring_is_tdls_mac(flow, da))) { 132 + sta = false; 133 + } 134 + hash_idx = sta ? 
BRCMF_FLOWRING_HASH_STA(fifo, ifidx) : 135 + BRCMF_FLOWRING_HASH_AP(mac, fifo, ifidx); 136 + found = false; 137 + hash = flow->hash; 138 + for (i = 0; i < BRCMF_FLOWRING_HASHSIZE; i++) { 139 + if ((hash[hash_idx].ifidx == BRCMF_FLOWRING_INVALID_IFIDX) && 140 + (memcmp(hash[hash_idx].mac, ALLZEROMAC, ETH_ALEN) == 0)) { 141 + found = true; 142 + break; 143 + } 144 + hash_idx++; 145 + } 146 + if (found) { 147 + for (i = 0; i < flow->nrofrings; i++) { 148 + if (flow->rings[i] == NULL) 149 + break; 150 + } 151 + if (i == flow->nrofrings) 152 + return -ENOMEM; 153 + 154 + ring = kzalloc(sizeof(*ring), GFP_ATOMIC); 155 + if (!ring) 156 + return -ENOMEM; 157 + 158 + memcpy(hash[hash_idx].mac, mac, ETH_ALEN); 159 + hash[hash_idx].fifo = fifo; 160 + hash[hash_idx].ifidx = ifidx; 161 + hash[hash_idx].flowid = i; 162 + 163 + ring->hash_id = hash_idx; 164 + ring->status = RING_CLOSED; 165 + skb_queue_head_init(&ring->skblist); 166 + flow->rings[i] = ring; 167 + 168 + return i; 169 + } 170 + return BRCMF_FLOWRING_INVALID_ID; 171 + } 172 + 173 + 174 + u8 brcmf_flowring_tid(struct brcmf_flowring *flow, u8 flowid) 175 + { 176 + struct brcmf_flowring_ring *ring; 177 + 178 + ring = flow->rings[flowid]; 179 + 180 + return flow->hash[ring->hash_id].fifo; 181 + } 182 + 183 + 184 + static void brcmf_flowring_block(struct brcmf_flowring *flow, u8 flowid, 185 + bool blocked) 186 + { 187 + struct brcmf_flowring_ring *ring; 188 + struct brcmf_bus *bus_if; 189 + struct brcmf_pub *drvr; 190 + struct brcmf_if *ifp; 191 + bool currently_blocked; 192 + int i; 193 + u8 ifidx; 194 + unsigned long flags; 195 + 196 + spin_lock_irqsave(&flow->block_lock, flags); 197 + 198 + ring = flow->rings[flowid]; 199 + ifidx = brcmf_flowring_ifidx_get(flow, flowid); 200 + 201 + currently_blocked = false; 202 + for (i = 0; i < flow->nrofrings; i++) { 203 + if (flow->rings[i]) { 204 + ring = flow->rings[i]; 205 + if ((ring->status == RING_OPEN) && 206 + (brcmf_flowring_ifidx_get(flow, i) == ifidx)) { 207 + if 
(ring->blocked) { 208 + currently_blocked = true; 209 + break; 210 + } 211 + } 212 + } 213 + } 214 + ring->blocked = blocked; 215 + if (currently_blocked == blocked) { 216 + spin_unlock_irqrestore(&flow->block_lock, flags); 217 + return; 218 + } 219 + 220 + bus_if = dev_get_drvdata(flow->dev); 221 + drvr = bus_if->drvr; 222 + ifp = drvr->iflist[ifidx]; 223 + brcmf_txflowblock_if(ifp, BRCMF_NETIF_STOP_REASON_FLOW, blocked); 224 + 225 + spin_unlock_irqrestore(&flow->block_lock, flags); 226 + } 227 + 228 + 229 + void brcmf_flowring_delete(struct brcmf_flowring *flow, u8 flowid) 230 + { 231 + struct brcmf_flowring_ring *ring; 232 + u8 hash_idx; 233 + struct sk_buff *skb; 234 + 235 + ring = flow->rings[flowid]; 236 + if (!ring) 237 + return; 238 + brcmf_flowring_block(flow, flowid, false); 239 + hash_idx = ring->hash_id; 240 + flow->hash[hash_idx].ifidx = BRCMF_FLOWRING_INVALID_IFIDX; 241 + memset(flow->hash[hash_idx].mac, 0, ETH_ALEN); 242 + flow->rings[flowid] = NULL; 243 + 244 + skb = skb_dequeue(&ring->skblist); 245 + while (skb) { 246 + brcmu_pkt_buf_free_skb(skb); 247 + skb = skb_dequeue(&ring->skblist); 248 + } 249 + 250 + kfree(ring); 251 + } 252 + 253 + 254 + void brcmf_flowring_enqueue(struct brcmf_flowring *flow, u8 flowid, 255 + struct sk_buff *skb) 256 + { 257 + struct brcmf_flowring_ring *ring; 258 + 259 + ring = flow->rings[flowid]; 260 + 261 + skb_queue_tail(&ring->skblist, skb); 262 + 263 + if (!ring->blocked && 264 + (skb_queue_len(&ring->skblist) > BRCMF_FLOWRING_HIGH)) { 265 + brcmf_flowring_block(flow, flowid, true); 266 + brcmf_dbg(MSGBUF, "Flowcontrol: BLOCK for ring %d\n", flowid); 267 + /* To prevent (work around) possible race condition, check 268 + * queue len again. It is also possible to use locking to 269 + * protect, but that is undesirable for every enqueue and 270 + * dequeue. This simple check will solve a possible race 271 + * condition if it occurs. 
272 + */ 273 + if (skb_queue_len(&ring->skblist) < BRCMF_FLOWRING_LOW) 274 + brcmf_flowring_block(flow, flowid, false); 275 + } 276 + } 277 + 278 + 279 + struct sk_buff *brcmf_flowring_dequeue(struct brcmf_flowring *flow, u8 flowid) 280 + { 281 + struct brcmf_flowring_ring *ring; 282 + struct sk_buff *skb; 283 + 284 + ring = flow->rings[flowid]; 285 + if (ring->status != RING_OPEN) 286 + return NULL; 287 + 288 + skb = skb_dequeue(&ring->skblist); 289 + 290 + if (ring->blocked && 291 + (skb_queue_len(&ring->skblist) < BRCMF_FLOWRING_LOW)) { 292 + brcmf_flowring_block(flow, flowid, false); 293 + brcmf_dbg(MSGBUF, "Flowcontrol: OPEN for ring %d\n", flowid); 294 + } 295 + 296 + return skb; 297 + } 298 + 299 + 300 + void brcmf_flowring_reinsert(struct brcmf_flowring *flow, u8 flowid, 301 + struct sk_buff *skb) 302 + { 303 + struct brcmf_flowring_ring *ring; 304 + 305 + ring = flow->rings[flowid]; 306 + 307 + skb_queue_head(&ring->skblist, skb); 308 + } 309 + 310 + 311 + u32 brcmf_flowring_qlen(struct brcmf_flowring *flow, u8 flowid) 312 + { 313 + struct brcmf_flowring_ring *ring; 314 + 315 + ring = flow->rings[flowid]; 316 + if (!ring) 317 + return 0; 318 + 319 + if (ring->status != RING_OPEN) 320 + return 0; 321 + 322 + return skb_queue_len(&ring->skblist); 323 + } 324 + 325 + 326 + void brcmf_flowring_open(struct brcmf_flowring *flow, u8 flowid) 327 + { 328 + struct brcmf_flowring_ring *ring; 329 + 330 + ring = flow->rings[flowid]; 331 + if (!ring) { 332 + brcmf_err("Ring NULL, for flowid %d\n", flowid); 333 + return; 334 + } 335 + 336 + ring->status = RING_OPEN; 337 + } 338 + 339 + 340 + u8 brcmf_flowring_ifidx_get(struct brcmf_flowring *flow, u8 flowid) 341 + { 342 + struct brcmf_flowring_ring *ring; 343 + u8 hash_idx; 344 + 345 + ring = flow->rings[flowid]; 346 + hash_idx = ring->hash_id; 347 + 348 + return flow->hash[hash_idx].ifidx; 349 + } 350 + 351 + 352 + struct brcmf_flowring *brcmf_flowring_attach(struct device *dev, u16 nrofrings) 353 + { 354 + struct 
brcmf_flowring *flow;
	u32 i;

	flow = kzalloc(sizeof(*flow), GFP_ATOMIC);
	if (flow) {
		flow->dev = dev;
		flow->nrofrings = nrofrings;
		spin_lock_init(&flow->block_lock);
		for (i = 0; i < ARRAY_SIZE(flow->addr_mode); i++)
			flow->addr_mode[i] = ADDR_INDIRECT;
		for (i = 0; i < ARRAY_SIZE(flow->hash); i++)
			flow->hash[i].ifidx = BRCMF_FLOWRING_INVALID_IFIDX;
		flow->rings = kcalloc(nrofrings, sizeof(*flow->rings),
				      GFP_ATOMIC);
		if (!flow->rings) {
			kfree(flow);
			flow = NULL;
		}
	}

	return flow;
}


void brcmf_flowring_detach(struct brcmf_flowring *flow)
{
	struct brcmf_bus *bus_if = dev_get_drvdata(flow->dev);
	struct brcmf_pub *drvr = bus_if->drvr;
	struct brcmf_flowring_tdls_entry *search;
	struct brcmf_flowring_tdls_entry *remove;
	u8 flowid;

	for (flowid = 0; flowid < flow->nrofrings; flowid++) {
		if (flow->rings[flowid])
			brcmf_msgbuf_delete_flowring(drvr, flowid);
	}

	search = flow->tdls_entry;
	while (search) {
		remove = search;
		search = search->next;
		kfree(remove);
	}
	kfree(flow->rings);
	kfree(flow);
}


void brcmf_flowring_configure_addr_mode(struct brcmf_flowring *flow, int ifidx,
					enum proto_addr_mode addr_mode)
{
	struct brcmf_bus *bus_if = dev_get_drvdata(flow->dev);
	struct brcmf_pub *drvr = bus_if->drvr;
	u32 i;
	u8 flowid;

	if (flow->addr_mode[ifidx] != addr_mode) {
		for (i = 0; i < ARRAY_SIZE(flow->hash); i++) {
			if (flow->hash[i].ifidx == ifidx) {
				flowid = flow->hash[i].flowid;
				if (flow->rings[flowid]->status != RING_OPEN)
					continue;
				flow->rings[flowid]->status = RING_CLOSING;
				brcmf_msgbuf_delete_flowring(drvr, flowid);
			}
		}
		flow->addr_mode[ifidx] = addr_mode;
	}
}


void brcmf_flowring_delete_peer(struct brcmf_flowring *flow, int ifidx,
				u8 peer[ETH_ALEN])
{
	struct brcmf_bus *bus_if = dev_get_drvdata(flow->dev);
	struct brcmf_pub *drvr = bus_if->drvr;
	struct brcmf_flowring_hash *hash;
	struct brcmf_flowring_tdls_entry *prev;
	struct brcmf_flowring_tdls_entry *search;
	u32 i;
	u8 flowid;
	bool sta;

	sta = (flow->addr_mode[ifidx] == ADDR_INDIRECT);

	search = flow->tdls_entry;
	prev = NULL;
	while (search) {
		if (memcmp(search->mac, peer, ETH_ALEN) == 0) {
			sta = false;
			break;
		}
		prev = search;
		search = search->next;
	}

	hash = flow->hash;
	for (i = 0; i < BRCMF_FLOWRING_HASHSIZE; i++) {
		if ((sta || (memcmp(hash[i].mac, peer, ETH_ALEN) == 0)) &&
		    (hash[i].ifidx == ifidx)) {
			flowid = flow->hash[i].flowid;
			if (flow->rings[flowid]->status == RING_OPEN) {
				flow->rings[flowid]->status = RING_CLOSING;
				brcmf_msgbuf_delete_flowring(drvr, flowid);
			}
		}
	}

	if (search) {
		if (prev)
			prev->next = search->next;
		else
			flow->tdls_entry = search->next;
		kfree(search);
		if (flow->tdls_entry == NULL)
			flow->tdls_active = false;
	}
}


void brcmf_flowring_add_tdls_peer(struct brcmf_flowring *flow, int ifidx,
				  u8 peer[ETH_ALEN])
{
	struct brcmf_flowring_tdls_entry *tdls_entry;
	struct brcmf_flowring_tdls_entry *search;

	tdls_entry = kzalloc(sizeof(*tdls_entry), GFP_ATOMIC);
	if (tdls_entry == NULL)
		return;

	memcpy(tdls_entry->mac, peer, ETH_ALEN);
	tdls_entry->next = NULL;
	if (flow->tdls_entry == NULL) {
		flow->tdls_entry = tdls_entry;
	} else {
		search = flow->tdls_entry;
		if (memcmp(search->mac, peer, ETH_ALEN) == 0)
			return;
		while (search->next) {
			search = search->next;
			if (memcmp(search->mac, peer, ETH_ALEN) == 0)
				return;
		}
		search->next = tdls_entry;
	}

	flow->tdls_active = true;
}
+84
drivers/net/wireless/brcm80211/brcmfmac/flowring.h
···
/* Copyright (c) 2014 Broadcom Corporation
 *
 * Permission to use, copy, modify, and/or distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
 * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
 * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
 * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */
#ifndef BRCMFMAC_FLOWRING_H
#define BRCMFMAC_FLOWRING_H


#define BRCMF_FLOWRING_HASHSIZE		256
#define BRCMF_FLOWRING_INVALID_ID	0xFFFFFFFF


struct brcmf_flowring_hash {
	u8 mac[ETH_ALEN];
	u8 fifo;
	u8 ifidx;
	u8 flowid;
};

enum ring_status {
	RING_CLOSED,
	RING_CLOSING,
	RING_OPEN
};

struct brcmf_flowring_ring {
	u8 hash_id;
	bool blocked;
	enum ring_status status;
	struct sk_buff_head skblist;
};

struct brcmf_flowring_tdls_entry {
	u8 mac[ETH_ALEN];
	struct brcmf_flowring_tdls_entry *next;
};

struct brcmf_flowring {
	struct device *dev;
	struct brcmf_flowring_hash hash[BRCMF_FLOWRING_HASHSIZE];
	struct brcmf_flowring_ring **rings;
	spinlock_t block_lock;
	enum proto_addr_mode addr_mode[BRCMF_MAX_IFS];
	u16 nrofrings;
	bool tdls_active;
	struct brcmf_flowring_tdls_entry *tdls_entry;
};


u32 brcmf_flowring_lookup(struct brcmf_flowring *flow, u8 da[ETH_ALEN],
			  u8 prio, u8 ifidx);
u32 brcmf_flowring_create(struct brcmf_flowring *flow, u8 da[ETH_ALEN],
			  u8 prio, u8 ifidx);
void brcmf_flowring_delete(struct brcmf_flowring *flow, u8 flowid);
void brcmf_flowring_open(struct brcmf_flowring *flow, u8 flowid);
u8 brcmf_flowring_tid(struct brcmf_flowring *flow, u8 flowid);
void brcmf_flowring_enqueue(struct brcmf_flowring *flow, u8 flowid,
			    struct sk_buff *skb);
struct sk_buff *brcmf_flowring_dequeue(struct brcmf_flowring *flow, u8 flowid);
void brcmf_flowring_reinsert(struct brcmf_flowring *flow, u8 flowid,
			     struct sk_buff *skb);
u32 brcmf_flowring_qlen(struct brcmf_flowring *flow, u8 flowid);
u8 brcmf_flowring_ifidx_get(struct brcmf_flowring *flow, u8 flowid);
struct brcmf_flowring *brcmf_flowring_attach(struct device *dev, u16 nrofrings);
void brcmf_flowring_detach(struct brcmf_flowring *flow);
void brcmf_flowring_configure_addr_mode(struct brcmf_flowring *flow, int ifidx,
					enum proto_addr_mode addr_mode);
void brcmf_flowring_delete_peer(struct brcmf_flowring *flow, int ifidx,
				u8 peer[ETH_ALEN]);
void brcmf_flowring_add_tdls_peer(struct brcmf_flowring *flow, int ifidx,
				  u8 peer[ETH_ALEN]);


#endif /* BRCMFMAC_FLOWRING_H */
+5 -1
drivers/net/wireless/brcm80211/brcmfmac/fweh.c
···
 		goto event_free;
 	}
 
-	ifp = drvr->iflist[emsg.bsscfgidx];
+	if ((event->code == BRCMF_E_TDLS_PEER_EVENT) &&
+	    (emsg.bsscfgidx == 1))
+		ifp = drvr->iflist[0];
+	else
+		ifp = drvr->iflist[emsg.bsscfgidx];
 	err = brcmf_fweh_call_event_handler(ifp, event->code, &emsg,
 					    event->data);
 	if (err) {
+5
drivers/net/wireless/brcm80211/brcmfmac/fweh.h
···
 	BRCMF_ENUM_DEF(DCS_REQUEST, 73) \
 	BRCMF_ENUM_DEF(FIFO_CREDIT_MAP, 74) \
 	BRCMF_ENUM_DEF(ACTION_FRAME_RX, 75) \
+	BRCMF_ENUM_DEF(TDLS_PEER_EVENT, 92) \
 	BRCMF_ENUM_DEF(BCMC_CREDIT_SUPPORT, 127) \
 	BRCMF_ENUM_DEF(PSTA_PRIMARY_INTF_IND, 128)
···
 #define BRCMF_E_REASON_DIRECTED_ROAM		6
 #define BRCMF_E_REASON_TSPEC_REJECTED		7
 #define BRCMF_E_REASON_BETTER_AP		8
+
+#define BRCMF_E_REASON_TDLS_PEER_DISCOVERED	0
+#define BRCMF_E_REASON_TDLS_PEER_CONNECTED	1
+#define BRCMF_E_REASON_TDLS_PEER_DISCONNECTED	2
 
 /* action field values for brcmf_ifevent */
 #define BRCMF_E_IF_ADD				1
+1397
drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c
···
/* Copyright (c) 2014 Broadcom Corporation
 *
 * Permission to use, copy, modify, and/or distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
 * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
 * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
 * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

/*******************************************************************************
 * Communicates with the dongle by using dcmd codes.
 * For certain dcmd codes, the dongle interprets string data from the host.
 ******************************************************************************/

#include <linux/types.h>
#include <linux/netdevice.h>

#include <brcmu_utils.h>
#include <brcmu_wifi.h>

#include "dhd.h"
#include "dhd_dbg.h"
#include "proto.h"
#include "msgbuf.h"
#include "commonring.h"
#include "flowring.h"
#include "dhd_bus.h"
#include "tracepoint.h"


#define MSGBUF_IOCTL_RESP_TIMEOUT		2000

#define MSGBUF_TYPE_GEN_STATUS			0x1
#define MSGBUF_TYPE_RING_STATUS			0x2
#define MSGBUF_TYPE_FLOW_RING_CREATE		0x3
#define MSGBUF_TYPE_FLOW_RING_CREATE_CMPLT	0x4
#define MSGBUF_TYPE_FLOW_RING_DELETE		0x5
#define MSGBUF_TYPE_FLOW_RING_DELETE_CMPLT	0x6
#define MSGBUF_TYPE_FLOW_RING_FLUSH		0x7
#define MSGBUF_TYPE_FLOW_RING_FLUSH_CMPLT	0x8
#define MSGBUF_TYPE_IOCTLPTR_REQ		0x9
#define MSGBUF_TYPE_IOCTLPTR_REQ_ACK		0xA
#define MSGBUF_TYPE_IOCTLRESP_BUF_POST		0xB
#define MSGBUF_TYPE_IOCTL_CMPLT			0xC
#define MSGBUF_TYPE_EVENT_BUF_POST		0xD
#define MSGBUF_TYPE_WL_EVENT			0xE
#define MSGBUF_TYPE_TX_POST			0xF
#define MSGBUF_TYPE_TX_STATUS			0x10
#define MSGBUF_TYPE_RXBUF_POST			0x11
#define MSGBUF_TYPE_RX_CMPLT			0x12
#define MSGBUF_TYPE_LPBK_DMAXFER		0x13
#define MSGBUF_TYPE_LPBK_DMAXFER_CMPLT		0x14

#define NR_TX_PKTIDS				2048
#define NR_RX_PKTIDS				1024

#define BRCMF_IOCTL_REQ_PKTID			0xFFFE

#define BRCMF_MSGBUF_MAX_PKT_SIZE		2048
#define BRCMF_MSGBUF_RXBUFPOST_THRESHOLD	32
#define BRCMF_MSGBUF_MAX_IOCTLRESPBUF_POST	8
#define BRCMF_MSGBUF_MAX_EVENTBUF_POST		8

#define BRCMF_MSGBUF_PKT_FLAGS_FRAME_802_3	0x01
#define BRCMF_MSGBUF_PKT_FLAGS_PRIO_SHIFT	5

#define BRCMF_MSGBUF_TX_FLUSH_CNT1		32
#define BRCMF_MSGBUF_TX_FLUSH_CNT2		96


struct msgbuf_common_hdr {
	u8				msgtype;
	u8				ifidx;
	u8				flags;
	u8				rsvd0;
	__le32				request_id;
};

struct msgbuf_buf_addr {
	__le32				low_addr;
	__le32				high_addr;
};

struct msgbuf_ioctl_req_hdr {
	struct msgbuf_common_hdr	msg;
	__le32				cmd;
	__le16				trans_id;
	__le16				input_buf_len;
	__le16				output_buf_len;
	__le16				rsvd0[3];
	struct msgbuf_buf_addr		req_buf_addr;
	__le32				rsvd1[2];
};

struct msgbuf_tx_msghdr {
	struct msgbuf_common_hdr	msg;
	u8				txhdr[ETH_HLEN];
	u8				flags;
	u8				seg_cnt;
	struct msgbuf_buf_addr		metadata_buf_addr;
	struct msgbuf_buf_addr		data_buf_addr;
	__le16				metadata_buf_len;
	__le16				data_len;
	__le32				rsvd0;
};

struct msgbuf_rx_bufpost {
	struct msgbuf_common_hdr	msg;
	__le16				metadata_buf_len;
	__le16				data_buf_len;
	__le32				rsvd0;
	struct msgbuf_buf_addr		metadata_buf_addr;
	struct msgbuf_buf_addr		data_buf_addr;
};

struct msgbuf_rx_ioctl_resp_or_event {
	struct msgbuf_common_hdr	msg;
	__le16				host_buf_len;
	__le16				rsvd0[3];
	struct msgbuf_buf_addr		host_buf_addr;
	__le32				rsvd1[4];
};

struct msgbuf_completion_hdr {
	__le16				status;
	__le16				flow_ring_id;
};

struct msgbuf_rx_event {
	struct msgbuf_common_hdr	msg;
	struct msgbuf_completion_hdr	compl_hdr;
	__le16				event_data_len;
	__le16				seqnum;
	__le16				rsvd0[4];
};

struct msgbuf_ioctl_resp_hdr {
	struct msgbuf_common_hdr	msg;
	struct msgbuf_completion_hdr	compl_hdr;
	__le16				resp_len;
	__le16				trans_id;
	__le32				cmd;
	__le32				rsvd0;
};

struct msgbuf_tx_status {
	struct msgbuf_common_hdr	msg;
	struct msgbuf_completion_hdr	compl_hdr;
	__le16				metadata_len;
	__le16				tx_status;
};

struct msgbuf_rx_complete {
	struct msgbuf_common_hdr	msg;
	struct msgbuf_completion_hdr	compl_hdr;
	__le16				metadata_len;
	__le16				data_len;
	__le16				data_offset;
	__le16				flags;
	__le32				rx_status_0;
	__le32				rx_status_1;
	__le32				rsvd0;
};

struct msgbuf_tx_flowring_create_req {
	struct msgbuf_common_hdr	msg;
	u8				da[ETH_ALEN];
	u8				sa[ETH_ALEN];
	u8				tid;
	u8				if_flags;
	__le16				flow_ring_id;
	u8				tc;
	u8				priority;
	__le16				int_vector;
	__le16				max_items;
	__le16				len_item;
	struct msgbuf_buf_addr		flow_ring_addr;
};

struct msgbuf_tx_flowring_delete_req {
	struct msgbuf_common_hdr	msg;
	__le16				flow_ring_id;
	__le16				reason;
	__le32				rsvd0[7];
};

struct msgbuf_flowring_create_resp {
	struct msgbuf_common_hdr	msg;
	struct msgbuf_completion_hdr	compl_hdr;
	__le32				rsvd0[3];
};

struct msgbuf_flowring_delete_resp {
	struct msgbuf_common_hdr	msg;
	struct msgbuf_completion_hdr	compl_hdr;
	__le32				rsvd0[3];
};

struct msgbuf_flowring_flush_resp {
	struct msgbuf_common_hdr	msg;
	struct msgbuf_completion_hdr	compl_hdr;
	__le32				rsvd0[3];
};

struct brcmf_msgbuf {
	struct brcmf_pub *drvr;

	struct brcmf_commonring **commonrings;
	struct brcmf_commonring **flowrings;
	dma_addr_t *flowring_dma_handle;
	u16 nrof_flowrings;

	u16 rx_dataoffset;
	u32 max_rxbufpost;
	u16 rx_metadata_offset;
	u32 rxbufpost;

	u32 max_ioctlrespbuf;
	u32 cur_ioctlrespbuf;
	u32 max_eventbuf;
	u32 cur_eventbuf;

	void *ioctbuf;
	dma_addr_t ioctbuf_handle;
	u32 ioctbuf_phys_hi;
	u32 ioctbuf_phys_lo;
	u32 ioctl_resp_status;
	u32 ioctl_resp_ret_len;
	u32 ioctl_resp_pktid;

	u16 data_seq_no;
	u16 ioctl_seq_no;
	u32 reqid;
	wait_queue_head_t ioctl_resp_wait;
	bool ctl_completed;

	struct brcmf_msgbuf_pktids *tx_pktids;
	struct brcmf_msgbuf_pktids *rx_pktids;
	struct brcmf_flowring *flow;

	struct workqueue_struct *txflow_wq;
	struct work_struct txflow_work;
	unsigned long *flow_map;
	unsigned long *txstatus_done_map;
};

struct brcmf_msgbuf_pktid {
	atomic_t  allocated;
	u16 data_offset;
	struct sk_buff *skb;
	dma_addr_t physaddr;
};

struct brcmf_msgbuf_pktids {
	u32 array_size;
	u32 last_allocated_idx;
	enum dma_data_direction direction;
	struct brcmf_msgbuf_pktid *array;
};


/* dma flushing needs implementation for mips and arm platforms. Should
 * be put in util. Note, this is not real flushing. It is virtual non
 * cached memory. Only write buffers should have to be drained. Though
 * this may be different depending on platform......
 */
#define brcmf_dma_flush(addr, len)
#define brcmf_dma_invalidate_cache(addr, len)


static void brcmf_msgbuf_rxbuf_ioctlresp_post(struct brcmf_msgbuf *msgbuf);


static struct brcmf_msgbuf_pktids *
brcmf_msgbuf_init_pktids(u32 nr_array_entries,
			 enum dma_data_direction direction)
{
	struct brcmf_msgbuf_pktid *array;
	struct brcmf_msgbuf_pktids *pktids;

	array = kcalloc(nr_array_entries, sizeof(*array), GFP_ATOMIC);
	if (!array)
		return NULL;

	pktids = kzalloc(sizeof(*pktids), GFP_ATOMIC);
	if (!pktids) {
		kfree(array);
		return NULL;
	}
	pktids->array = array;
	pktids->array_size = nr_array_entries;

	return pktids;
}


static int
brcmf_msgbuf_alloc_pktid(struct device *dev,
			 struct brcmf_msgbuf_pktids *pktids,
			 struct sk_buff *skb, u16 data_offset,
			 dma_addr_t *physaddr, u32 *idx)
{
	struct brcmf_msgbuf_pktid *array;
	u32 count;

	array = pktids->array;

	*physaddr = dma_map_single(dev, skb->data + data_offset,
				   skb->len - data_offset, pktids->direction);

	if (dma_mapping_error(dev, *physaddr)) {
		brcmf_err("dma_map_single failed !!\n");
		return -ENOMEM;
	}

	*idx = pktids->last_allocated_idx;

	count = 0;
	do {
		(*idx)++;
		if (*idx == pktids->array_size)
			*idx = 0;
		if (array[*idx].allocated.counter == 0)
			if (atomic_cmpxchg(&array[*idx].allocated, 0, 1) == 0)
				break;
		count++;
	} while (count < pktids->array_size);

	if (count == pktids->array_size)
		return -ENOMEM;

	array[*idx].data_offset = data_offset;
	array[*idx].physaddr = *physaddr;
	array[*idx].skb = skb;

	pktids->last_allocated_idx = *idx;

	return 0;
}


static struct sk_buff *
brcmf_msgbuf_get_pktid(struct device *dev, struct brcmf_msgbuf_pktids *pktids,
		       u32 idx)
{
	struct brcmf_msgbuf_pktid *pktid;
	struct sk_buff *skb;

	if (idx >= pktids->array_size) {
		brcmf_err("Invalid packet id %d (max %d)\n", idx,
			  pktids->array_size);
		return NULL;
	}
	if (pktids->array[idx].allocated.counter) {
		pktid = &pktids->array[idx];
		dma_unmap_single(dev, pktid->physaddr,
				 pktid->skb->len - pktid->data_offset,
				 pktids->direction);
		skb = pktid->skb;
		pktid->allocated.counter = 0;
		return skb;
	} else {
		brcmf_err("Invalid packet id %d (not in use)\n", idx);
	}

	return NULL;
}


static void
brcmf_msgbuf_release_array(struct device *dev,
			   struct brcmf_msgbuf_pktids *pktids)
{
	struct brcmf_msgbuf_pktid *array;
	struct brcmf_msgbuf_pktid *pktid;
	u32 count;

	array = pktids->array;
	count = 0;
	do {
		if (array[count].allocated.counter) {
			pktid = &array[count];
			dma_unmap_single(dev, pktid->physaddr,
					 pktid->skb->len - pktid->data_offset,
					 pktids->direction);
			brcmu_pkt_buf_free_skb(pktid->skb);
		}
		count++;
	} while (count < pktids->array_size);

	kfree(array);
	kfree(pktids);
}


static void brcmf_msgbuf_release_pktids(struct brcmf_msgbuf *msgbuf)
{
	if (msgbuf->rx_pktids)
		brcmf_msgbuf_release_array(msgbuf->drvr->bus_if->dev,
					   msgbuf->rx_pktids);
	if (msgbuf->tx_pktids)
		brcmf_msgbuf_release_array(msgbuf->drvr->bus_if->dev,
					   msgbuf->tx_pktids);
}


static int brcmf_msgbuf_tx_ioctl(struct brcmf_pub *drvr, int ifidx,
				 uint cmd, void *buf, uint len)
{
	struct brcmf_msgbuf *msgbuf = (struct brcmf_msgbuf *)drvr->proto->pd;
	struct brcmf_commonring *commonring;
	struct msgbuf_ioctl_req_hdr *request;
	u16 buf_len;
	void *ret_ptr;
	int err;

	commonring = msgbuf->commonrings[BRCMF_H2D_MSGRING_CONTROL_SUBMIT];
	brcmf_commonring_lock(commonring);
	ret_ptr = brcmf_commonring_reserve_for_write(commonring);
	if (!ret_ptr) {
		brcmf_err("Failed to reserve space in commonring\n");
		brcmf_commonring_unlock(commonring);
		return -ENOMEM;
	}

	msgbuf->reqid++;

	request = (struct msgbuf_ioctl_req_hdr *)ret_ptr;
	request->msg.msgtype = MSGBUF_TYPE_IOCTLPTR_REQ;
	request->msg.ifidx = (u8)ifidx;
	request->msg.flags = 0;
	request->msg.request_id = cpu_to_le32(BRCMF_IOCTL_REQ_PKTID);
	request->cmd = cpu_to_le32(cmd);
	request->output_buf_len = cpu_to_le16(len);
	request->trans_id = cpu_to_le16(msgbuf->reqid);

	buf_len = min_t(u16, len, BRCMF_TX_IOCTL_MAX_MSG_SIZE);
	request->input_buf_len = cpu_to_le16(buf_len);
	request->req_buf_addr.high_addr = cpu_to_le32(msgbuf->ioctbuf_phys_hi);
	request->req_buf_addr.low_addr = cpu_to_le32(msgbuf->ioctbuf_phys_lo);
	if (buf)
		memcpy(msgbuf->ioctbuf, buf, buf_len);
	else
		memset(msgbuf->ioctbuf, 0, buf_len);
	brcmf_dma_flush(ioctl_buf, buf_len);

	err = brcmf_commonring_write_complete(commonring);
	brcmf_commonring_unlock(commonring);

	return err;
}


static int brcmf_msgbuf_ioctl_resp_wait(struct brcmf_msgbuf *msgbuf)
{
	return wait_event_timeout(msgbuf->ioctl_resp_wait,
				  msgbuf->ctl_completed,
				  msecs_to_jiffies(MSGBUF_IOCTL_RESP_TIMEOUT));
}


static void brcmf_msgbuf_ioctl_resp_wake(struct brcmf_msgbuf *msgbuf)
{
	if (waitqueue_active(&msgbuf->ioctl_resp_wait)) {
		msgbuf->ctl_completed = true;
		wake_up(&msgbuf->ioctl_resp_wait);
	}
}


static int brcmf_msgbuf_query_dcmd(struct brcmf_pub *drvr, int ifidx,
				   uint cmd, void *buf, uint len)
{
	struct brcmf_msgbuf *msgbuf = (struct brcmf_msgbuf *)drvr->proto->pd;
	struct sk_buff *skb = NULL;
	int timeout;
	int err;

	brcmf_dbg(MSGBUF, "ifidx=%d, cmd=%d, len=%d\n", ifidx, cmd, len);
	msgbuf->ctl_completed = false;
	err = brcmf_msgbuf_tx_ioctl(drvr, ifidx, cmd, buf, len);
	if (err)
		return err;

	timeout = brcmf_msgbuf_ioctl_resp_wait(msgbuf);
	if (!timeout) {
		brcmf_err("Timeout on response for query command\n");
		return -EIO;
	}

	skb = brcmf_msgbuf_get_pktid(msgbuf->drvr->bus_if->dev,
				     msgbuf->rx_pktids,
				     msgbuf->ioctl_resp_pktid);
	if (msgbuf->ioctl_resp_ret_len != 0) {
		if (!skb) {
			brcmf_err("Invalid packet id idx recv'd %d\n",
				  msgbuf->ioctl_resp_pktid);
			return -EBADF;
		}
		memcpy(buf, skb->data, (len < msgbuf->ioctl_resp_ret_len) ?
				       len : msgbuf->ioctl_resp_ret_len);
	}
	if (skb)
		brcmu_pkt_buf_free_skb(skb);

	return msgbuf->ioctl_resp_status;
}


static int brcmf_msgbuf_set_dcmd(struct brcmf_pub *drvr, int ifidx,
				 uint cmd, void *buf, uint len)
{
	return brcmf_msgbuf_query_dcmd(drvr, ifidx, cmd, buf, len);
}


static int brcmf_msgbuf_hdrpull(struct brcmf_pub *drvr, bool do_fws,
				u8 *ifidx, struct sk_buff *skb)
{
	return -ENODEV;
}


static void
brcmf_msgbuf_remove_flowring(struct brcmf_msgbuf *msgbuf, u16 flowid)
{
	u32 dma_sz;
	void *dma_buf;

	brcmf_dbg(MSGBUF, "Removing flowring %d\n", flowid);

	dma_sz = BRCMF_H2D_TXFLOWRING_MAX_ITEM * BRCMF_H2D_TXFLOWRING_ITEMSIZE;
	dma_buf = msgbuf->flowrings[flowid]->buf_addr;
	dma_free_coherent(msgbuf->drvr->bus_if->dev, dma_sz, dma_buf,
			  msgbuf->flowring_dma_handle[flowid]);

	brcmf_flowring_delete(msgbuf->flow, flowid);
}


static u32 brcmf_msgbuf_flowring_create(struct brcmf_msgbuf *msgbuf, int ifidx,
					struct sk_buff *skb)
{
	struct msgbuf_tx_flowring_create_req *create;
	struct ethhdr *eh = (struct ethhdr *)(skb->data);
	struct brcmf_commonring *commonring;
	void *ret_ptr;
	u32 flowid;
	void *dma_buf;
	u32 dma_sz;
	long long address;
	int err;

	flowid = brcmf_flowring_create(msgbuf->flow, eh->h_dest,
				       skb->priority, ifidx);
	if (flowid == BRCMF_FLOWRING_INVALID_ID)
		return flowid;

	dma_sz = BRCMF_H2D_TXFLOWRING_MAX_ITEM * BRCMF_H2D_TXFLOWRING_ITEMSIZE;

	dma_buf = dma_alloc_coherent(msgbuf->drvr->bus_if->dev, dma_sz,
				     &msgbuf->flowring_dma_handle[flowid],
				     GFP_ATOMIC);
	if (!dma_buf) {
		brcmf_err("dma_alloc_coherent failed\n");
		brcmf_flowring_delete(msgbuf->flow, flowid);
		return BRCMF_FLOWRING_INVALID_ID;
	}

	brcmf_commonring_config(msgbuf->flowrings[flowid],
				BRCMF_H2D_TXFLOWRING_MAX_ITEM,
				BRCMF_H2D_TXFLOWRING_ITEMSIZE, dma_buf);

	commonring = msgbuf->commonrings[BRCMF_H2D_MSGRING_CONTROL_SUBMIT];
	brcmf_commonring_lock(commonring);
	ret_ptr = brcmf_commonring_reserve_for_write(commonring);
	if (!ret_ptr) {
		brcmf_err("Failed to reserve space in commonring\n");
		brcmf_commonring_unlock(commonring);
		brcmf_msgbuf_remove_flowring(msgbuf, flowid);
		return BRCMF_FLOWRING_INVALID_ID;
	}

	create = (struct msgbuf_tx_flowring_create_req *)ret_ptr;
	create->msg.msgtype = MSGBUF_TYPE_FLOW_RING_CREATE;
	create->msg.ifidx = ifidx;
	create->msg.request_id = 0;
	create->tid = brcmf_flowring_tid(msgbuf->flow, flowid);
	create->flow_ring_id = cpu_to_le16(flowid +
					   BRCMF_NROF_H2D_COMMON_MSGRINGS);
	memcpy(create->sa, eh->h_source, ETH_ALEN);
	memcpy(create->da, eh->h_dest, ETH_ALEN);
	address = (long long)(long)msgbuf->flowring_dma_handle[flowid];
	create->flow_ring_addr.high_addr = cpu_to_le32(address >> 32);
	create->flow_ring_addr.low_addr = cpu_to_le32(address & 0xffffffff);
	create->max_items = cpu_to_le16(BRCMF_H2D_TXFLOWRING_MAX_ITEM);
	create->len_item = cpu_to_le16(BRCMF_H2D_TXFLOWRING_ITEMSIZE);

	brcmf_dbg(MSGBUF, "Send Flow Create Req flow ID %d for peer %pM prio %d ifindex %d\n",
		  flowid, eh->h_dest, create->tid, ifidx);

	err = brcmf_commonring_write_complete(commonring);
	brcmf_commonring_unlock(commonring);
	if (err) {
		brcmf_err("Failed to write commonring\n");
		brcmf_msgbuf_remove_flowring(msgbuf, flowid);
		return BRCMF_FLOWRING_INVALID_ID;
	}

	return flowid;
}


static void brcmf_msgbuf_txflow(struct brcmf_msgbuf *msgbuf, u8 flowid)
{
	struct brcmf_flowring *flow = msgbuf->flow;
	struct brcmf_commonring *commonring;
	void *ret_ptr;
	u32 count;
	struct sk_buff *skb;
	dma_addr_t physaddr;
	u32 pktid;
	struct msgbuf_tx_msghdr *tx_msghdr;
	long long address;

	commonring = msgbuf->flowrings[flowid];
	if (!brcmf_commonring_write_available(commonring))
		return;

	brcmf_commonring_lock(commonring);

	count = BRCMF_MSGBUF_TX_FLUSH_CNT2 - BRCMF_MSGBUF_TX_FLUSH_CNT1;
	while (brcmf_flowring_qlen(flow, flowid)) {
		skb = brcmf_flowring_dequeue(flow, flowid);
		if (skb == NULL) {
			brcmf_err("No SKB, but qlen %d\n",
				  brcmf_flowring_qlen(flow, flowid));
			break;
		}
		skb_orphan(skb);
		if (brcmf_msgbuf_alloc_pktid(msgbuf->drvr->bus_if->dev,
					     msgbuf->tx_pktids, skb, ETH_HLEN,
					     &physaddr, &pktid)) {
			brcmf_flowring_reinsert(flow, flowid, skb);
			brcmf_err("No PKTID available !!\n");
			break;
		}
		ret_ptr = brcmf_commonring_reserve_for_write(commonring);
		if (!ret_ptr) {
			brcmf_msgbuf_get_pktid(msgbuf->drvr->bus_if->dev,
					       msgbuf->tx_pktids, pktid);
			brcmf_flowring_reinsert(flow, flowid, skb);
			break;
		}
		count++;

		tx_msghdr = (struct msgbuf_tx_msghdr *)ret_ptr;

		tx_msghdr->msg.msgtype = MSGBUF_TYPE_TX_POST;
		tx_msghdr->msg.request_id = cpu_to_le32(pktid);
		tx_msghdr->msg.ifidx = brcmf_flowring_ifidx_get(flow, flowid);
		tx_msghdr->flags = BRCMF_MSGBUF_PKT_FLAGS_FRAME_802_3;
		tx_msghdr->flags |= (skb->priority & 0x07) <<
				    BRCMF_MSGBUF_PKT_FLAGS_PRIO_SHIFT;
		tx_msghdr->seg_cnt = 1;
		memcpy(tx_msghdr->txhdr, skb->data, ETH_HLEN);
		tx_msghdr->data_len = cpu_to_le16(skb->len - ETH_HLEN);
		address = (long long)(long)physaddr;
		tx_msghdr->data_buf_addr.high_addr = cpu_to_le32(address >> 32);
		tx_msghdr->data_buf_addr.low_addr =
			cpu_to_le32(address & 0xffffffff);
		tx_msghdr->metadata_buf_len = 0;
		tx_msghdr->metadata_buf_addr.high_addr = 0;
		tx_msghdr->metadata_buf_addr.low_addr = 0;
		if (count >= BRCMF_MSGBUF_TX_FLUSH_CNT2) {
			brcmf_commonring_write_complete(commonring);
			count = 0;
		}
	}
	if (count)
		brcmf_commonring_write_complete(commonring);
	brcmf_commonring_unlock(commonring);
}


static void brcmf_msgbuf_txflow_worker(struct work_struct *worker)
{
	struct brcmf_msgbuf *msgbuf;
	u32 flowid;

	msgbuf = container_of(worker, struct brcmf_msgbuf, txflow_work);
	for_each_set_bit(flowid, msgbuf->flow_map, msgbuf->nrof_flowrings) {
		clear_bit(flowid, msgbuf->flow_map);
		brcmf_msgbuf_txflow(msgbuf, flowid);
	}
}


static int brcmf_msgbuf_schedule_txdata(struct brcmf_msgbuf *msgbuf, u32 flowid)
{
	set_bit(flowid, msgbuf->flow_map);
	queue_work(msgbuf->txflow_wq, &msgbuf->txflow_work);

	return 0;
}


static int brcmf_msgbuf_txdata(struct brcmf_pub *drvr, int ifidx,
			       u8 offset, struct sk_buff *skb)
{
	struct brcmf_msgbuf *msgbuf = (struct brcmf_msgbuf *)drvr->proto->pd;
	struct brcmf_flowring *flow = msgbuf->flow;
	struct ethhdr *eh = (struct ethhdr *)(skb->data);
	u32 flowid;

	flowid = brcmf_flowring_lookup(flow, eh->h_dest, skb->priority, ifidx);
	if (flowid == BRCMF_FLOWRING_INVALID_ID) {
		flowid = brcmf_msgbuf_flowring_create(msgbuf, ifidx, skb);
		if (flowid == BRCMF_FLOWRING_INVALID_ID)
			return -ENOMEM;
	}
	brcmf_flowring_enqueue(flow, flowid, skb);
	brcmf_msgbuf_schedule_txdata(msgbuf, flowid);

	return 0;
}


static void
brcmf_msgbuf_configure_addr_mode(struct brcmf_pub *drvr, int ifidx,
				 enum proto_addr_mode addr_mode)
{
	struct brcmf_msgbuf *msgbuf = (struct brcmf_msgbuf *)drvr->proto->pd;

	brcmf_flowring_configure_addr_mode(msgbuf->flow, ifidx, addr_mode);
}


static void
brcmf_msgbuf_delete_peer(struct brcmf_pub *drvr, int ifidx, u8 peer[ETH_ALEN])
{
	struct brcmf_msgbuf *msgbuf = (struct brcmf_msgbuf *)drvr->proto->pd;

	brcmf_flowring_delete_peer(msgbuf->flow, ifidx, peer);
}


static void
brcmf_msgbuf_add_tdls_peer(struct brcmf_pub *drvr, int ifidx, u8 peer[ETH_ALEN])
{
	struct brcmf_msgbuf *msgbuf = (struct brcmf_msgbuf *)drvr->proto->pd;

	brcmf_flowring_add_tdls_peer(msgbuf->flow, ifidx, peer);
}


static void
brcmf_msgbuf_process_ioctl_complete(struct brcmf_msgbuf *msgbuf, void *buf)
{
	struct msgbuf_ioctl_resp_hdr *ioctl_resp;

	ioctl_resp = (struct msgbuf_ioctl_resp_hdr *)buf;

	msgbuf->ioctl_resp_status = le16_to_cpu(ioctl_resp->compl_hdr.status);
	msgbuf->ioctl_resp_ret_len = le16_to_cpu(ioctl_resp->resp_len);
	msgbuf->ioctl_resp_pktid = le32_to_cpu(ioctl_resp->msg.request_id);

	brcmf_msgbuf_ioctl_resp_wake(msgbuf);

	if (msgbuf->cur_ioctlrespbuf)
		msgbuf->cur_ioctlrespbuf--;
	brcmf_msgbuf_rxbuf_ioctlresp_post(msgbuf);
}


static void
brcmf_msgbuf_process_txstatus(struct brcmf_msgbuf *msgbuf, void *buf)
{
	struct msgbuf_tx_status *tx_status;
	u32 idx;
	struct sk_buff *skb;
	u16 flowid;

	tx_status = (struct msgbuf_tx_status *)buf;
	idx = le32_to_cpu(tx_status->msg.request_id);
	flowid = le16_to_cpu(tx_status->compl_hdr.flow_ring_id);
	flowid -= BRCMF_NROF_H2D_COMMON_MSGRINGS;
	skb = brcmf_msgbuf_get_pktid(msgbuf->drvr->bus_if->dev,
				     msgbuf->tx_pktids, idx);
	if (!skb) {
		brcmf_err("Invalid packet id idx recv'd %d\n", idx);
		return;
	}

	set_bit(flowid, msgbuf->txstatus_done_map);

	brcmf_txfinalize(msgbuf->drvr, skb, tx_status->msg.ifidx, true);
}


static u32 brcmf_msgbuf_rxbuf_data_post(struct brcmf_msgbuf *msgbuf, u32 count)
{
	struct brcmf_commonring *commonring;
	void *ret_ptr;
	struct sk_buff *skb;
	u16 alloced;
	u32 pktlen;
	dma_addr_t physaddr;
	struct msgbuf_rx_bufpost *rx_bufpost;
	long long address;
	u32 pktid;
	u32 i;

	commonring = msgbuf->commonrings[BRCMF_H2D_MSGRING_RXPOST_SUBMIT];
	ret_ptr = brcmf_commonring_reserve_for_write_multiple(commonring,
							      count,
							      &alloced);
	if (!ret_ptr) {
		brcmf_err("Failed to reserve space in commonring\n");
		return 0;
	}

	for (i = 0; i < alloced; i++) {
		rx_bufpost = (struct msgbuf_rx_bufpost *)ret_ptr;
		memset(rx_bufpost, 0, sizeof(*rx_bufpost));

		skb = brcmu_pkt_buf_get_skb(BRCMF_MSGBUF_MAX_PKT_SIZE);

		if (skb == NULL) {
			brcmf_err("Failed to alloc SKB\n");
			brcmf_commonring_write_cancel(commonring, alloced - i);
			break;
		}

		pktlen = skb->len;
		if (brcmf_msgbuf_alloc_pktid(msgbuf->drvr->bus_if->dev,
					     msgbuf->rx_pktids, skb, 0,
					     &physaddr, &pktid)) {
			dev_kfree_skb_any(skb);
			brcmf_err("No PKTID available !!\n");
			brcmf_commonring_write_cancel(commonring, alloced - i);
			break;
		}

		if (msgbuf->rx_metadata_offset) {
			address = (long long)(long)physaddr;
			rx_bufpost->metadata_buf_len =
				cpu_to_le16(msgbuf->rx_metadata_offset);
			rx_bufpost->metadata_buf_addr.high_addr =
				cpu_to_le32(address >> 32);
			rx_bufpost->metadata_buf_addr.low_addr =
				cpu_to_le32(address & 0xffffffff);

			skb_pull(skb, msgbuf->rx_metadata_offset);
			pktlen = skb->len;
			physaddr += msgbuf->rx_metadata_offset;
		}
		rx_bufpost->msg.msgtype = MSGBUF_TYPE_RXBUF_POST;
		rx_bufpost->msg.request_id = cpu_to_le32(pktid);

		address = (long long)(long)physaddr;
		rx_bufpost->data_buf_len = cpu_to_le16((u16)pktlen);
		rx_bufpost->data_buf_addr.high_addr =
			cpu_to_le32(address >> 32);
		rx_bufpost->data_buf_addr.low_addr =
			cpu_to_le32(address & 0xffffffff);

		ret_ptr += brcmf_commonring_len_item(commonring);
	}

	if (i)
		brcmf_commonring_write_complete(commonring);

	return i;
}


static void
brcmf_msgbuf_rxbuf_data_fill(struct brcmf_msgbuf *msgbuf)
{
	u32 fillbufs;
	u32 retcount;

	fillbufs = msgbuf->max_rxbufpost - msgbuf->rxbufpost;

	while (fillbufs) {
		retcount = brcmf_msgbuf_rxbuf_data_post(msgbuf, fillbufs);
		if (!retcount)
			break;
		msgbuf->rxbufpost += retcount;
		fillbufs -= retcount;
	}
}


static void
brcmf_msgbuf_update_rxbufpost_count(struct brcmf_msgbuf *msgbuf, u16 rxcnt)
{
	msgbuf->rxbufpost -= rxcnt;
	if (msgbuf->rxbufpost <= (msgbuf->max_rxbufpost -
				  BRCMF_MSGBUF_RXBUFPOST_THRESHOLD))
		brcmf_msgbuf_rxbuf_data_fill(msgbuf);
}


static u32
brcmf_msgbuf_rxbuf_ctrl_post(struct brcmf_msgbuf *msgbuf, bool event_buf,
			     u32 count)
{
	struct brcmf_commonring *commonring;
	void *ret_ptr;
	struct sk_buff *skb;
	u16 alloced;
	u32 pktlen;
	dma_addr_t physaddr;
	struct msgbuf_rx_ioctl_resp_or_event *rx_bufpost;
	long long address;
	u32 pktid;
	u32 i;

	commonring = msgbuf->commonrings[BRCMF_H2D_MSGRING_CONTROL_SUBMIT];
	brcmf_commonring_lock(commonring);
	ret_ptr = brcmf_commonring_reserve_for_write_multiple(commonring,
							      count,
							      &alloced);
	if (!ret_ptr) {
		brcmf_err("Failed to reserve space in commonring\n");
		brcmf_commonring_unlock(commonring);
		return 0;
	}

	for (i = 0; i < alloced; i++) {
		rx_bufpost = (struct msgbuf_rx_ioctl_resp_or_event *)ret_ptr;
		memset(rx_bufpost, 0, sizeof(*rx_bufpost));

		skb = brcmu_pkt_buf_get_skb(BRCMF_MSGBUF_MAX_PKT_SIZE);

		if (skb == NULL) {
			brcmf_err("Failed to alloc SKB\n");
			brcmf_commonring_write_cancel(commonring, alloced - i);
			break;
		}

		pktlen = skb->len;
		if (brcmf_msgbuf_alloc_pktid(msgbuf->drvr->bus_if->dev,
					     msgbuf->rx_pktids, skb, 0,
					     &physaddr, &pktid)) {
			dev_kfree_skb_any(skb);
			brcmf_err("No PKTID available !!\n");
			brcmf_commonring_write_cancel(commonring, alloced - i);
			break;
		}
		if (event_buf)
			rx_bufpost->msg.msgtype = MSGBUF_TYPE_EVENT_BUF_POST;
		else
			rx_bufpost->msg.msgtype =
				MSGBUF_TYPE_IOCTLRESP_BUF_POST;
		rx_bufpost->msg.request_id = cpu_to_le32(pktid);

		address = (long long)(long)physaddr;
		rx_bufpost->host_buf_len = cpu_to_le16((u16)pktlen);
		rx_bufpost->host_buf_addr.high_addr =
			cpu_to_le32(address >> 32);
		rx_bufpost->host_buf_addr.low_addr =
			cpu_to_le32(address & 0xffffffff);

		ret_ptr += brcmf_commonring_len_item(commonring);
	}

	if (i)
		brcmf_commonring_write_complete(commonring);

	brcmf_commonring_unlock(commonring);

	return i;
}


static void brcmf_msgbuf_rxbuf_ioctlresp_post(struct brcmf_msgbuf *msgbuf)
{
	u32 count;

	count = msgbuf->max_ioctlrespbuf - msgbuf->cur_ioctlrespbuf;
	count = brcmf_msgbuf_rxbuf_ctrl_post(msgbuf, false, count);
	msgbuf->cur_ioctlrespbuf += count;
}


static void brcmf_msgbuf_rxbuf_event_post(struct brcmf_msgbuf *msgbuf)
{
	u32 count;

	count = msgbuf->max_eventbuf - msgbuf->cur_eventbuf;
	count = brcmf_msgbuf_rxbuf_ctrl_post(msgbuf, true, count);
	msgbuf->cur_eventbuf += count;
}


static void
brcmf_msgbuf_rx_skb(struct brcmf_msgbuf *msgbuf, struct sk_buff *skb,
		    u8 ifidx)
{
	struct brcmf_if *ifp;

	ifp = msgbuf->drvr->iflist[ifidx];
	if (!ifp || !ifp->ndev) {
		brcmu_pkt_buf_free_skb(skb);
		return;
	}
	brcmf_netif_rx(ifp, skb);
}


static void brcmf_msgbuf_process_event(struct brcmf_msgbuf *msgbuf, void *buf)
{
	struct msgbuf_rx_event *event;
	u32 idx;
	u16 buflen;
	struct sk_buff *skb;

	event = (struct msgbuf_rx_event *)buf;
	idx = le32_to_cpu(event->msg.request_id);
	buflen = le16_to_cpu(event->event_data_len);

	if (msgbuf->cur_eventbuf)
		msgbuf->cur_eventbuf--;
	brcmf_msgbuf_rxbuf_event_post(msgbuf);

	skb = brcmf_msgbuf_get_pktid(msgbuf->drvr->bus_if->dev,
				     msgbuf->rx_pktids, idx);
	if (!skb)
		return;

	if (msgbuf->rx_dataoffset)
		skb_pull(skb, msgbuf->rx_dataoffset);

	skb_trim(skb, buflen);

	brcmf_msgbuf_rx_skb(msgbuf, skb, event->msg.ifidx);
}


static void
brcmf_msgbuf_process_rx_complete(struct brcmf_msgbuf *msgbuf, void *buf)
{
	struct msgbuf_rx_complete *rx_complete;
	struct sk_buff *skb;
	u16 data_offset;
	u16 buflen;
	u32 idx;

	brcmf_msgbuf_update_rxbufpost_count(msgbuf, 1);

	rx_complete = (struct msgbuf_rx_complete *)buf;
	data_offset = le16_to_cpu(rx_complete->data_offset);
	buflen = le16_to_cpu(rx_complete->data_len);
	idx = le32_to_cpu(rx_complete->msg.request_id);

	skb = brcmf_msgbuf_get_pktid(msgbuf->drvr->bus_if->dev,
				     msgbuf->rx_pktids, idx);

	if (data_offset)
		skb_pull(skb, data_offset);
	else if (msgbuf->rx_dataoffset)
		skb_pull(skb, msgbuf->rx_dataoffset);

	skb_trim(skb, buflen);

	brcmf_msgbuf_rx_skb(msgbuf, skb,
rx_complete->msg.ifidx); 1076 + } 1077 + 1078 + 1079 + static void 1080 + brcmf_msgbuf_process_flow_ring_create_response(struct brcmf_msgbuf *msgbuf, 1081 + void *buf) 1082 + { 1083 + struct msgbuf_flowring_create_resp *flowring_create_resp; 1084 + u16 status; 1085 + u16 flowid; 1086 + 1087 + flowring_create_resp = (struct msgbuf_flowring_create_resp *)buf; 1088 + 1089 + flowid = le16_to_cpu(flowring_create_resp->compl_hdr.flow_ring_id); 1090 + flowid -= BRCMF_NROF_H2D_COMMON_MSGRINGS; 1091 + status = le16_to_cpu(flowring_create_resp->compl_hdr.status); 1092 + 1093 + if (status) { 1094 + brcmf_err("Flowring creation failed, code %d\n", status); 1095 + brcmf_msgbuf_remove_flowring(msgbuf, flowid); 1096 + return; 1097 + } 1098 + brcmf_dbg(MSGBUF, "Flowring %d Create response status %d\n", flowid, 1099 + status); 1100 + 1101 + brcmf_flowring_open(msgbuf->flow, flowid); 1102 + 1103 + brcmf_msgbuf_schedule_txdata(msgbuf, flowid); 1104 + } 1105 + 1106 + 1107 + static void 1108 + brcmf_msgbuf_process_flow_ring_delete_response(struct brcmf_msgbuf *msgbuf, 1109 + void *buf) 1110 + { 1111 + struct msgbuf_flowring_delete_resp *flowring_delete_resp; 1112 + u16 status; 1113 + u16 flowid; 1114 + 1115 + flowring_delete_resp = (struct msgbuf_flowring_delete_resp *)buf; 1116 + 1117 + flowid = le16_to_cpu(flowring_delete_resp->compl_hdr.flow_ring_id); 1118 + flowid -= BRCMF_NROF_H2D_COMMON_MSGRINGS; 1119 + status = le16_to_cpu(flowring_delete_resp->compl_hdr.status); 1120 + 1121 + if (status) { 1122 + brcmf_err("Flowring deletion failed, code %d\n", status); 1123 + brcmf_flowring_delete(msgbuf->flow, flowid); 1124 + return; 1125 + } 1126 + brcmf_dbg(MSGBUF, "Flowring %d Delete response status %d\n", flowid, 1127 + status); 1128 + 1129 + brcmf_msgbuf_remove_flowring(msgbuf, flowid); 1130 + } 1131 + 1132 + 1133 + static void brcmf_msgbuf_process_msgtype(struct brcmf_msgbuf *msgbuf, void *buf) 1134 + { 1135 + struct msgbuf_common_hdr *msg; 1136 + 1137 + msg = (struct msgbuf_common_hdr 
*)buf; 1138 + switch (msg->msgtype) { 1139 + case MSGBUF_TYPE_FLOW_RING_CREATE_CMPLT: 1140 + brcmf_dbg(MSGBUF, "MSGBUF_TYPE_FLOW_RING_CREATE_CMPLT\n"); 1141 + brcmf_msgbuf_process_flow_ring_create_response(msgbuf, buf); 1142 + break; 1143 + case MSGBUF_TYPE_FLOW_RING_DELETE_CMPLT: 1144 + brcmf_dbg(MSGBUF, "MSGBUF_TYPE_FLOW_RING_DELETE_CMPLT\n"); 1145 + brcmf_msgbuf_process_flow_ring_delete_response(msgbuf, buf); 1146 + break; 1147 + case MSGBUF_TYPE_IOCTLPTR_REQ_ACK: 1148 + brcmf_dbg(MSGBUF, "MSGBUF_TYPE_IOCTLPTR_REQ_ACK\n"); 1149 + break; 1150 + case MSGBUF_TYPE_IOCTL_CMPLT: 1151 + brcmf_dbg(MSGBUF, "MSGBUF_TYPE_IOCTL_CMPLT\n"); 1152 + brcmf_msgbuf_process_ioctl_complete(msgbuf, buf); 1153 + break; 1154 + case MSGBUF_TYPE_WL_EVENT: 1155 + brcmf_dbg(MSGBUF, "MSGBUF_TYPE_WL_EVENT\n"); 1156 + brcmf_msgbuf_process_event(msgbuf, buf); 1157 + break; 1158 + case MSGBUF_TYPE_TX_STATUS: 1159 + brcmf_dbg(MSGBUF, "MSGBUF_TYPE_TX_STATUS\n"); 1160 + brcmf_msgbuf_process_txstatus(msgbuf, buf); 1161 + break; 1162 + case MSGBUF_TYPE_RX_CMPLT: 1163 + brcmf_dbg(MSGBUF, "MSGBUF_TYPE_RX_CMPLT\n"); 1164 + brcmf_msgbuf_process_rx_complete(msgbuf, buf); 1165 + break; 1166 + default: 1167 + brcmf_err("Unsupported msgtype %d\n", msg->msgtype); 1168 + break; 1169 + } 1170 + } 1171 + 1172 + 1173 + static void brcmf_msgbuf_process_rx(struct brcmf_msgbuf *msgbuf, 1174 + struct brcmf_commonring *commonring) 1175 + { 1176 + void *buf; 1177 + u16 count; 1178 + 1179 + again: 1180 + buf = brcmf_commonring_get_read_ptr(commonring, &count); 1181 + if (buf == NULL) 1182 + return; 1183 + 1184 + while (count) { 1185 + brcmf_msgbuf_process_msgtype(msgbuf, 1186 + buf + msgbuf->rx_dataoffset); 1187 + buf += brcmf_commonring_len_item(commonring); 1188 + count--; 1189 + } 1190 + brcmf_commonring_read_complete(commonring); 1191 + 1192 + if (commonring->r_ptr == 0) 1193 + goto again; 1194 + } 1195 + 1196 + 1197 + int brcmf_proto_msgbuf_rx_trigger(struct device *dev) 1198 + { 1199 + struct brcmf_bus *bus_if = 
dev_get_drvdata(dev); 1200 + struct brcmf_pub *drvr = bus_if->drvr; 1201 + struct brcmf_msgbuf *msgbuf = (struct brcmf_msgbuf *)drvr->proto->pd; 1202 + void *buf; 1203 + u32 flowid; 1204 + 1205 + buf = msgbuf->commonrings[BRCMF_D2H_MSGRING_RX_COMPLETE]; 1206 + brcmf_msgbuf_process_rx(msgbuf, buf); 1207 + buf = msgbuf->commonrings[BRCMF_D2H_MSGRING_TX_COMPLETE]; 1208 + brcmf_msgbuf_process_rx(msgbuf, buf); 1209 + buf = msgbuf->commonrings[BRCMF_D2H_MSGRING_CONTROL_COMPLETE]; 1210 + brcmf_msgbuf_process_rx(msgbuf, buf); 1211 + 1212 + for_each_set_bit(flowid, msgbuf->txstatus_done_map, 1213 + msgbuf->nrof_flowrings) { 1214 + clear_bit(flowid, msgbuf->txstatus_done_map); 1215 + if (brcmf_flowring_qlen(msgbuf->flow, flowid)) 1216 + brcmf_msgbuf_schedule_txdata(msgbuf, flowid); 1217 + } 1218 + 1219 + return 0; 1220 + } 1221 + 1222 + 1223 + void brcmf_msgbuf_delete_flowring(struct brcmf_pub *drvr, u8 flowid) 1224 + { 1225 + struct brcmf_msgbuf *msgbuf = (struct brcmf_msgbuf *)drvr->proto->pd; 1226 + struct msgbuf_tx_flowring_delete_req *delete; 1227 + struct brcmf_commonring *commonring; 1228 + void *ret_ptr; 1229 + u8 ifidx; 1230 + int err; 1231 + 1232 + commonring = msgbuf->commonrings[BRCMF_H2D_MSGRING_CONTROL_SUBMIT]; 1233 + brcmf_commonring_lock(commonring); 1234 + ret_ptr = brcmf_commonring_reserve_for_write(commonring); 1235 + if (!ret_ptr) { 1236 + brcmf_err("FW unaware, flowring will be removed !!\n"); 1237 + brcmf_commonring_unlock(commonring); 1238 + brcmf_msgbuf_remove_flowring(msgbuf, flowid); 1239 + return; 1240 + } 1241 + 1242 + delete = (struct msgbuf_tx_flowring_delete_req *)ret_ptr; 1243 + 1244 + ifidx = brcmf_flowring_ifidx_get(msgbuf->flow, flowid); 1245 + 1246 + delete->msg.msgtype = MSGBUF_TYPE_FLOW_RING_DELETE; 1247 + delete->msg.ifidx = ifidx; 1248 + delete->msg.request_id = 0; 1249 + 1250 + delete->flow_ring_id = cpu_to_le16(flowid + 1251 + BRCMF_NROF_H2D_COMMON_MSGRINGS); 1252 + delete->reason = 0; 1253 + 1254 + brcmf_dbg(MSGBUF, "Send Flow 
Delete Req flow ID %d, ifindex %d\n", 1255 + flowid, ifidx); 1256 + 1257 + err = brcmf_commonring_write_complete(commonring); 1258 + brcmf_commonring_unlock(commonring); 1259 + if (err) { 1260 + brcmf_err("Failed to submit RING_DELETE, flowring will be removed\n"); 1261 + brcmf_msgbuf_remove_flowring(msgbuf, flowid); 1262 + } 1263 + } 1264 + 1265 + 1266 + int brcmf_proto_msgbuf_attach(struct brcmf_pub *drvr) 1267 + { 1268 + struct brcmf_bus_msgbuf *if_msgbuf; 1269 + struct brcmf_msgbuf *msgbuf; 1270 + long long address; 1271 + u32 count; 1272 + 1273 + if_msgbuf = drvr->bus_if->msgbuf; 1274 + msgbuf = kzalloc(sizeof(*msgbuf), GFP_ATOMIC); 1275 + if (!msgbuf) 1276 + goto fail; 1277 + 1278 + msgbuf->txflow_wq = create_singlethread_workqueue("msgbuf_txflow"); 1279 + if (msgbuf->txflow_wq == NULL) { 1280 + brcmf_err("workqueue creation failed\n"); 1281 + goto fail; 1282 + } 1283 + INIT_WORK(&msgbuf->txflow_work, brcmf_msgbuf_txflow_worker); 1284 + count = BITS_TO_LONGS(if_msgbuf->nrof_flowrings); 1285 + msgbuf->flow_map = kzalloc(count, GFP_ATOMIC); 1286 + if (!msgbuf->flow_map) 1287 + goto fail; 1288 + 1289 + msgbuf->txstatus_done_map = kzalloc(count, GFP_ATOMIC); 1290 + if (!msgbuf->txstatus_done_map) 1291 + goto fail; 1292 + 1293 + msgbuf->drvr = drvr; 1294 + msgbuf->ioctbuf = dma_alloc_coherent(drvr->bus_if->dev, 1295 + BRCMF_TX_IOCTL_MAX_MSG_SIZE, 1296 + &msgbuf->ioctbuf_handle, 1297 + GFP_ATOMIC); 1298 + if (!msgbuf->ioctbuf) 1299 + goto fail; 1300 + address = (long long)(long)msgbuf->ioctbuf_handle; 1301 + msgbuf->ioctbuf_phys_hi = address >> 32; 1302 + msgbuf->ioctbuf_phys_lo = address & 0xffffffff; 1303 + 1304 + drvr->proto->hdrpull = brcmf_msgbuf_hdrpull; 1305 + drvr->proto->query_dcmd = brcmf_msgbuf_query_dcmd; 1306 + drvr->proto->set_dcmd = brcmf_msgbuf_set_dcmd; 1307 + drvr->proto->txdata = brcmf_msgbuf_txdata; 1308 + drvr->proto->configure_addr_mode = brcmf_msgbuf_configure_addr_mode; 1309 + drvr->proto->delete_peer = brcmf_msgbuf_delete_peer; 1310 + 
drvr->proto->add_tdls_peer = brcmf_msgbuf_add_tdls_peer; 1311 + drvr->proto->pd = msgbuf; 1312 + 1313 + init_waitqueue_head(&msgbuf->ioctl_resp_wait); 1314 + 1315 + msgbuf->commonrings = 1316 + (struct brcmf_commonring **)if_msgbuf->commonrings; 1317 + msgbuf->flowrings = (struct brcmf_commonring **)if_msgbuf->flowrings; 1318 + msgbuf->nrof_flowrings = if_msgbuf->nrof_flowrings; 1319 + msgbuf->flowring_dma_handle = kzalloc(msgbuf->nrof_flowrings * 1320 + sizeof(*msgbuf->flowring_dma_handle), GFP_ATOMIC); 1321 + 1322 + msgbuf->rx_dataoffset = if_msgbuf->rx_dataoffset; 1323 + msgbuf->max_rxbufpost = if_msgbuf->max_rxbufpost; 1324 + 1325 + msgbuf->max_ioctlrespbuf = BRCMF_MSGBUF_MAX_IOCTLRESPBUF_POST; 1326 + msgbuf->max_eventbuf = BRCMF_MSGBUF_MAX_EVENTBUF_POST; 1327 + 1328 + msgbuf->tx_pktids = brcmf_msgbuf_init_pktids(NR_TX_PKTIDS, 1329 + DMA_TO_DEVICE); 1330 + if (!msgbuf->tx_pktids) 1331 + goto fail; 1332 + msgbuf->rx_pktids = brcmf_msgbuf_init_pktids(NR_RX_PKTIDS, 1333 + DMA_FROM_DEVICE); 1334 + if (!msgbuf->rx_pktids) 1335 + goto fail; 1336 + 1337 + msgbuf->flow = brcmf_flowring_attach(drvr->bus_if->dev, 1338 + if_msgbuf->nrof_flowrings); 1339 + if (!msgbuf->flow) 1340 + goto fail; 1341 + 1342 + 1343 + brcmf_dbg(MSGBUF, "Feeding buffers, rx data %d, rx event %d, rx ioctl resp %d\n", 1344 + msgbuf->max_rxbufpost, msgbuf->max_eventbuf, 1345 + msgbuf->max_ioctlrespbuf); 1346 + count = 0; 1347 + do { 1348 + brcmf_msgbuf_rxbuf_data_fill(msgbuf); 1349 + if (msgbuf->max_rxbufpost != msgbuf->rxbufpost) 1350 + msleep(10); 1351 + else 1352 + break; 1353 + count++; 1354 + } while (count < 10); 1355 + brcmf_msgbuf_rxbuf_event_post(msgbuf); 1356 + brcmf_msgbuf_rxbuf_ioctlresp_post(msgbuf); 1357 + 1358 + return 0; 1359 + 1360 + fail: 1361 + if (msgbuf) { 1362 + kfree(msgbuf->flow_map); 1363 + kfree(msgbuf->txstatus_done_map); 1364 + brcmf_msgbuf_release_pktids(msgbuf); 1365 + if (msgbuf->ioctbuf) 1366 + dma_free_coherent(drvr->bus_if->dev, 1367 + BRCMF_TX_IOCTL_MAX_MSG_SIZE, 
1368 + msgbuf->ioctbuf, 1369 + msgbuf->ioctbuf_handle); 1370 + kfree(msgbuf); 1371 + } 1372 + return -ENOMEM; 1373 + } 1374 + 1375 + 1376 + void brcmf_proto_msgbuf_detach(struct brcmf_pub *drvr) 1377 + { 1378 + struct brcmf_msgbuf *msgbuf; 1379 + 1380 + brcmf_dbg(TRACE, "Enter\n"); 1381 + if (drvr->proto->pd) { 1382 + msgbuf = (struct brcmf_msgbuf *)drvr->proto->pd; 1383 + 1384 + kfree(msgbuf->flow_map); 1385 + kfree(msgbuf->txstatus_done_map); 1386 + if (msgbuf->txflow_wq) 1387 + destroy_workqueue(msgbuf->txflow_wq); 1388 + 1389 + brcmf_flowring_detach(msgbuf->flow); 1390 + dma_free_coherent(drvr->bus_if->dev, 1391 + BRCMF_TX_IOCTL_MAX_MSG_SIZE, 1392 + msgbuf->ioctbuf, msgbuf->ioctbuf_handle); 1393 + brcmf_msgbuf_release_pktids(msgbuf); 1394 + kfree(msgbuf); 1395 + drvr->proto->pd = NULL; 1396 + } 1397 + }
drivers/net/wireless/brcm80211/brcmfmac/msgbuf.h (+40 lines)

/* Copyright (c) 2014 Broadcom Corporation
 *
 * Permission to use, copy, modify, and/or distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
 * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
 * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
 * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */
#ifndef BRCMFMAC_MSGBUF_H
#define BRCMFMAC_MSGBUF_H


#define BRCMF_H2D_MSGRING_CONTROL_SUBMIT_MAX_ITEM	20
#define BRCMF_H2D_MSGRING_RXPOST_SUBMIT_MAX_ITEM	256
#define BRCMF_D2H_MSGRING_CONTROL_COMPLETE_MAX_ITEM	20
#define BRCMF_D2H_MSGRING_TX_COMPLETE_MAX_ITEM		1024
#define BRCMF_D2H_MSGRING_RX_COMPLETE_MAX_ITEM		256
#define BRCMF_H2D_TXFLOWRING_MAX_ITEM			512

#define BRCMF_H2D_MSGRING_CONTROL_SUBMIT_ITEMSIZE	40
#define BRCMF_H2D_MSGRING_RXPOST_SUBMIT_ITEMSIZE	32
#define BRCMF_D2H_MSGRING_CONTROL_COMPLETE_ITEMSIZE	24
#define BRCMF_D2H_MSGRING_TX_COMPLETE_ITEMSIZE		16
#define BRCMF_D2H_MSGRING_RX_COMPLETE_ITEMSIZE		32
#define BRCMF_H2D_TXFLOWRING_ITEMSIZE			48


int brcmf_proto_msgbuf_rx_trigger(struct device *dev);
int brcmf_proto_msgbuf_attach(struct brcmf_pub *drvr);
void brcmf_proto_msgbuf_detach(struct brcmf_pub *drvr);
void brcmf_msgbuf_delete_flowring(struct brcmf_pub *drvr, u8 flowid);


#endif /* BRCMFMAC_MSGBUF_H */
drivers/net/wireless/brcm80211/brcmfmac/of.c (+56 lines)

/*
 * Copyright (c) 2014 Broadcom Corporation
 *
 * Permission to use, copy, modify, and/or distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
 * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
 * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
 * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/mmc/card.h>
#include <linux/platform_data/brcmfmac-sdio.h>
#include <linux/mmc/sdio_func.h>

#include <defs.h>
#include "dhd_dbg.h"
#include "sdio_host.h"

void brcmf_of_probe(struct brcmf_sdio_dev *sdiodev)
{
	struct device *dev = sdiodev->dev;
	struct device_node *np = dev->of_node;
	int irq;
	u32 irqf;
	u32 val;

	if (!np || !of_device_is_compatible(np, "brcm,bcm4329-fmac"))
		return;

	sdiodev->pdata = devm_kzalloc(dev, sizeof(*sdiodev->pdata), GFP_KERNEL);
	if (!sdiodev->pdata)
		return;

	irq = irq_of_parse_and_map(np, 0);
	if (irq < 0) {
		brcmf_err("interrupt could not be mapped: err=%d\n", irq);
		devm_kfree(dev, sdiodev->pdata);
		return;
	}
	irqf = irqd_get_trigger_type(irq_get_irq_data(irq));

	sdiodev->pdata->oob_irq_supported = true;
	sdiodev->pdata->oob_irq_nr = irq;
	sdiodev->pdata->oob_irq_flags = irqf;

	if (of_property_read_u32(np, "brcm,drive-strength", &val) == 0)
		sdiodev->pdata->drive_strength = val;
}
drivers/net/wireless/brcm80211/brcmfmac/of.h (+22 lines)

/*
 * Copyright (c) 2014 Broadcom Corporation
 *
 * Permission to use, copy, modify, and/or distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
 * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
 * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
 * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */
#ifdef CONFIG_OF
void brcmf_of_probe(struct brcmf_sdio_dev *sdiodev);
#else
static void brcmf_of_probe(struct brcmf_sdio_dev *sdiodev)
{
}
#endif /* CONFIG_OF */
drivers/net/wireless/brcm80211/brcmfmac/pcie.c (+1846 lines)

/* Copyright (c) 2014 Broadcom Corporation
 *
 * Permission to use, copy, modify, and/or distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
 * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
 * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
 * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/firmware.h>
#include <linux/pci.h>
#include <linux/vmalloc.h>
#include <linux/delay.h>
#include <linux/unaligned/access_ok.h>
#include <linux/interrupt.h>
#include <linux/bcma/bcma.h>
#include <linux/sched.h>

#include <soc.h>
#include <chipcommon.h>
#include <brcmu_utils.h>
#include <brcmu_wifi.h>
#include <brcm_hw_ids.h>

#include "dhd_dbg.h"
#include "dhd_bus.h"
#include "commonring.h"
#include "msgbuf.h"
#include "pcie.h"
#include "firmware.h"
#include "chip.h"


enum brcmf_pcie_state {
	BRCMFMAC_PCIE_STATE_DOWN,
	BRCMFMAC_PCIE_STATE_UP
};


#define BRCMF_PCIE_43602_FW_NAME		"brcm/brcmfmac43602-pcie.bin"
#define BRCMF_PCIE_43602_NVRAM_NAME		"brcm/brcmfmac43602-pcie.txt"
#define BRCMF_PCIE_4354_FW_NAME			"brcm/brcmfmac4354-pcie.bin"
#define BRCMF_PCIE_4354_NVRAM_NAME		"brcm/brcmfmac4354-pcie.txt"
#define BRCMF_PCIE_4356_FW_NAME			"brcm/brcmfmac4356-pcie.bin"
#define BRCMF_PCIE_4356_NVRAM_NAME		"brcm/brcmfmac4356-pcie.txt"
#define BRCMF_PCIE_43570_FW_NAME		"brcm/brcmfmac43570-pcie.bin"
#define BRCMF_PCIE_43570_NVRAM_NAME		"brcm/brcmfmac43570-pcie.txt"

#define BRCMF_PCIE_FW_UP_TIMEOUT		2000 /* msec */

#define BRCMF_PCIE_TCM_MAP_SIZE			(4096 * 1024)
#define BRCMF_PCIE_REG_MAP_SIZE			(32 * 1024)

/* backplane addres space accessed by BAR0 */
#define BRCMF_PCIE_BAR0_WINDOW			0x80
#define BRCMF_PCIE_BAR0_REG_SIZE		0x1000
#define BRCMF_PCIE_BAR0_WRAPPERBASE		0x70

#define BRCMF_PCIE_BAR0_WRAPBASE_DMP_OFFSET	0x1000
#define BRCMF_PCIE_BARO_PCIE_ENUM_OFFSET	0x2000

#define BRCMF_PCIE_ARMCR4REG_BANKIDX		0x40
#define BRCMF_PCIE_ARMCR4REG_BANKPDA		0x4C

#define BRCMF_PCIE_REG_INTSTATUS		0x90
#define BRCMF_PCIE_REG_INTMASK			0x94
#define BRCMF_PCIE_REG_SBMBX			0x98

#define BRCMF_PCIE_PCIE2REG_INTMASK		0x24
#define BRCMF_PCIE_PCIE2REG_MAILBOXINT		0x48
#define BRCMF_PCIE_PCIE2REG_MAILBOXMASK		0x4C
#define BRCMF_PCIE_PCIE2REG_CONFIGADDR		0x120
#define BRCMF_PCIE_PCIE2REG_CONFIGDATA		0x124
#define BRCMF_PCIE_PCIE2REG_H2D_MAILBOX		0x140

#define BRCMF_PCIE_GENREV1			1
#define BRCMF_PCIE_GENREV2			2

#define BRCMF_PCIE2_INTA			0x01
#define BRCMF_PCIE2_INTB			0x02

#define BRCMF_PCIE_INT_0			0x01
#define BRCMF_PCIE_INT_1			0x02
#define BRCMF_PCIE_INT_DEF			(BRCMF_PCIE_INT_0 | \
						 BRCMF_PCIE_INT_1)

#define BRCMF_PCIE_MB_INT_FN0_0			0x0100
#define BRCMF_PCIE_MB_INT_FN0_1			0x0200
#define BRCMF_PCIE_MB_INT_D2H0_DB0		0x10000
#define BRCMF_PCIE_MB_INT_D2H0_DB1		0x20000
#define BRCMF_PCIE_MB_INT_D2H1_DB0		0x40000
#define BRCMF_PCIE_MB_INT_D2H1_DB1		0x80000
#define BRCMF_PCIE_MB_INT_D2H2_DB0		0x100000
#define BRCMF_PCIE_MB_INT_D2H2_DB1		0x200000
#define BRCMF_PCIE_MB_INT_D2H3_DB0		0x400000
#define BRCMF_PCIE_MB_INT_D2H3_DB1		0x800000

#define BRCMF_PCIE_MB_INT_D2H_DB		(BRCMF_PCIE_MB_INT_D2H0_DB0 | \
						 BRCMF_PCIE_MB_INT_D2H0_DB1 | \
						 BRCMF_PCIE_MB_INT_D2H1_DB0 | \
						 BRCMF_PCIE_MB_INT_D2H1_DB1 | \
						 BRCMF_PCIE_MB_INT_D2H2_DB0 | \
						 BRCMF_PCIE_MB_INT_D2H2_DB1 | \
						 BRCMF_PCIE_MB_INT_D2H3_DB0 | \
						 BRCMF_PCIE_MB_INT_D2H3_DB1)

#define BRCMF_PCIE_MIN_SHARED_VERSION		4
#define BRCMF_PCIE_MAX_SHARED_VERSION		5
#define BRCMF_PCIE_SHARED_VERSION_MASK		0x00FF
#define BRCMF_PCIE_SHARED_TXPUSH_SUPPORT	0x4000

#define BRCMF_PCIE_FLAGS_HTOD_SPLIT		0x4000
#define BRCMF_PCIE_FLAGS_DTOH_SPLIT		0x8000

#define BRCMF_SHARED_MAX_RXBUFPOST_OFFSET	34
#define BRCMF_SHARED_RING_BASE_OFFSET		52
#define BRCMF_SHARED_RX_DATAOFFSET_OFFSET	36
#define BRCMF_SHARED_CONSOLE_ADDR_OFFSET	20
#define BRCMF_SHARED_HTOD_MB_DATA_ADDR_OFFSET	40
#define BRCMF_SHARED_DTOH_MB_DATA_ADDR_OFFSET	44
#define BRCMF_SHARED_RING_INFO_ADDR_OFFSET	48
#define BRCMF_SHARED_DMA_SCRATCH_LEN_OFFSET	52
#define BRCMF_SHARED_DMA_SCRATCH_ADDR_OFFSET	56
#define BRCMF_SHARED_DMA_RINGUPD_LEN_OFFSET	64
#define BRCMF_SHARED_DMA_RINGUPD_ADDR_OFFSET	68

#define BRCMF_RING_H2D_RING_COUNT_OFFSET	0
#define BRCMF_RING_D2H_RING_COUNT_OFFSET	1
#define BRCMF_RING_H2D_RING_MEM_OFFSET		4
#define BRCMF_RING_H2D_RING_STATE_OFFSET	8

#define BRCMF_RING_MEM_BASE_ADDR_OFFSET		8
#define BRCMF_RING_MAX_ITEM_OFFSET		4
#define BRCMF_RING_LEN_ITEMS_OFFSET		6
#define BRCMF_RING_MEM_SZ			16
#define BRCMF_RING_STATE_SZ			8

#define BRCMF_SHARED_RING_H2D_W_IDX_PTR_OFFSET	4
#define BRCMF_SHARED_RING_H2D_R_IDX_PTR_OFFSET	8
#define BRCMF_SHARED_RING_D2H_W_IDX_PTR_OFFSET	12
#define BRCMF_SHARED_RING_D2H_R_IDX_PTR_OFFSET	16
#define BRCMF_SHARED_RING_TCM_MEMLOC_OFFSET	0
#define BRCMF_SHARED_RING_MAX_SUB_QUEUES	52

#define BRCMF_DEF_MAX_RXBUFPOST			255

#define BRCMF_CONSOLE_BUFADDR_OFFSET		8
#define BRCMF_CONSOLE_BUFSIZE_OFFSET		12
#define BRCMF_CONSOLE_WRITEIDX_OFFSET		16

#define BRCMF_DMA_D2H_SCRATCH_BUF_LEN		8
#define BRCMF_DMA_D2H_RINGUPD_BUF_LEN		1024

#define BRCMF_D2H_DEV_D3_ACK			0x00000001
#define BRCMF_D2H_DEV_DS_ENTER_REQ		0x00000002
#define BRCMF_D2H_DEV_DS_EXIT_NOTE		0x00000004

#define BRCMF_H2D_HOST_D3_INFORM		0x00000001
#define BRCMF_H2D_HOST_DS_ACK			0x00000002

#define BRCMF_PCIE_MBDATA_TIMEOUT		2000

#define BRCMF_PCIE_CFGREG_STATUS_CMD		0x4
#define BRCMF_PCIE_CFGREG_PM_CSR		0x4C
#define BRCMF_PCIE_CFGREG_MSI_CAP		0x58
#define BRCMF_PCIE_CFGREG_MSI_ADDR_L		0x5C
#define BRCMF_PCIE_CFGREG_MSI_ADDR_H		0x60
#define BRCMF_PCIE_CFGREG_MSI_DATA		0x64
#define BRCMF_PCIE_CFGREG_LINK_STATUS_CTRL	0xBC
#define BRCMF_PCIE_CFGREG_LINK_STATUS_CTRL2	0xDC
#define BRCMF_PCIE_CFGREG_RBAR_CTRL		0x228
#define BRCMF_PCIE_CFGREG_PML1_SUB_CTRL1	0x248
#define BRCMF_PCIE_CFGREG_REG_BAR2_CONFIG	0x4E0
#define BRCMF_PCIE_CFGREG_REG_BAR3_CONFIG	0x4F4
#define BRCMF_PCIE_LINK_STATUS_CTRL_ASPM_ENAB	3


MODULE_FIRMWARE(BRCMF_PCIE_43602_FW_NAME);
MODULE_FIRMWARE(BRCMF_PCIE_43602_NVRAM_NAME);
MODULE_FIRMWARE(BRCMF_PCIE_4354_FW_NAME);
MODULE_FIRMWARE(BRCMF_PCIE_4354_NVRAM_NAME);
MODULE_FIRMWARE(BRCMF_PCIE_43570_FW_NAME);
MODULE_FIRMWARE(BRCMF_PCIE_43570_NVRAM_NAME);


struct brcmf_pcie_console {
	u32 base_addr;
	u32 buf_addr;
	u32 bufsize;
	u32 read_idx;
	u8 log_str[256];
	u8 log_idx;
};

struct brcmf_pcie_shared_info {
	u32 tcm_base_address;
	u32 flags;
	struct brcmf_pcie_ringbuf *commonrings[BRCMF_NROF_COMMON_MSGRINGS];
	struct brcmf_pcie_ringbuf *flowrings;
	u16 max_rxbufpost;
	u32 nrof_flowrings;
	u32 rx_dataoffset;
	u32 htod_mb_data_addr;
	u32 dtoh_mb_data_addr;
	u32 ring_info_addr;
	struct brcmf_pcie_console console;
	void *scratch;
	dma_addr_t scratch_dmahandle;
	void *ringupd;
	dma_addr_t ringupd_dmahandle;
};

struct brcmf_pcie_core_info {
	u32 base;
	u32 wrapbase;
};

struct brcmf_pciedev_info {
	enum brcmf_pcie_state state;
	bool in_irq;
	bool irq_requested;
	struct pci_dev *pdev;
	char fw_name[BRCMF_FW_PATH_LEN + BRCMF_FW_NAME_LEN];
	char nvram_name[BRCMF_FW_PATH_LEN + BRCMF_FW_NAME_LEN];
	void __iomem *regs;
	void __iomem *tcm;
	u32 tcm_size;
	u32 ram_base;
	u32 ram_size;
	struct brcmf_chip *ci;
	u32 coreid;
	u32 generic_corerev;
	struct brcmf_pcie_shared_info shared;
	void (*ringbell)(struct brcmf_pciedev_info *devinfo);
	wait_queue_head_t mbdata_resp_wait;
	bool mbdata_completed;
	bool irq_allocated;
};

struct brcmf_pcie_ringbuf {
	struct brcmf_commonring commonring;
	dma_addr_t dma_handle;
	u32 w_idx_addr;
	u32 r_idx_addr;
	struct brcmf_pciedev_info *devinfo;
	u8 id;
};


static const u32 brcmf_ring_max_item[BRCMF_NROF_COMMON_MSGRINGS] = {
	BRCMF_H2D_MSGRING_CONTROL_SUBMIT_MAX_ITEM,
	BRCMF_H2D_MSGRING_RXPOST_SUBMIT_MAX_ITEM,
	BRCMF_D2H_MSGRING_CONTROL_COMPLETE_MAX_ITEM,
	BRCMF_D2H_MSGRING_TX_COMPLETE_MAX_ITEM,
	BRCMF_D2H_MSGRING_RX_COMPLETE_MAX_ITEM
};

static const u32 brcmf_ring_itemsize[BRCMF_NROF_COMMON_MSGRINGS] = {
	BRCMF_H2D_MSGRING_CONTROL_SUBMIT_ITEMSIZE,
	BRCMF_H2D_MSGRING_RXPOST_SUBMIT_ITEMSIZE,
	BRCMF_D2H_MSGRING_CONTROL_COMPLETE_ITEMSIZE,
	BRCMF_D2H_MSGRING_TX_COMPLETE_ITEMSIZE,
	BRCMF_D2H_MSGRING_RX_COMPLETE_ITEMSIZE
};


/* dma flushing needs implementation for mips and arm platforms. Should
 * be put in util. Note, this is not real flushing. It is virtual non
 * cached memory. Only write buffers should have to be drained. Though
 * this may be different depending on platform......
 */
#define brcmf_dma_flush(addr, len)
#define brcmf_dma_invalidate_cache(addr, len)


static u32
brcmf_pcie_read_reg32(struct brcmf_pciedev_info *devinfo, u32 reg_offset)
{
	void __iomem *address = devinfo->regs + reg_offset;

	return (ioread32(address));
}


static void
brcmf_pcie_write_reg32(struct brcmf_pciedev_info *devinfo, u32 reg_offset,
		       u32 value)
{
	void __iomem *address = devinfo->regs + reg_offset;

	iowrite32(value, address);
}


static u8
brcmf_pcie_read_tcm8(struct brcmf_pciedev_info *devinfo, u32 mem_offset)
{
	void __iomem *address = devinfo->tcm + mem_offset;

	return (ioread8(address));
}


static u16
brcmf_pcie_read_tcm16(struct brcmf_pciedev_info *devinfo, u32 mem_offset)
{
	void __iomem *address = devinfo->tcm + mem_offset;

	return (ioread16(address));
}


static void
brcmf_pcie_write_tcm16(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
		       u16 value)
{
	void __iomem *address = devinfo->tcm + mem_offset;

	iowrite16(value, address);
}


static u32
brcmf_pcie_read_tcm32(struct brcmf_pciedev_info *devinfo, u32 mem_offset)
{
	void __iomem *address = devinfo->tcm + mem_offset;

	return (ioread32(address));
}


static void
brcmf_pcie_write_tcm32(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
		       u32 value)
{
	void __iomem *address = devinfo->tcm + mem_offset;

	iowrite32(value, address);
}


static u32
brcmf_pcie_read_ram32(struct brcmf_pciedev_info *devinfo, u32 mem_offset)
{
	void __iomem *addr = devinfo->tcm + devinfo->ci->rambase + mem_offset;

	return (ioread32(addr));
}


static void
brcmf_pcie_write_ram32(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
		       u32 value)
{
	void __iomem *addr = devinfo->tcm + devinfo->ci->rambase + mem_offset;

	iowrite32(value, addr);
}


static void
brcmf_pcie_copy_mem_todev(struct brcmf_pciedev_info *devinfo, u32 mem_offset,
			  void *srcaddr, u32 len)
{
	void __iomem *address = devinfo->tcm + mem_offset;
	__le32 *src32;
	__le16 *src16;
	u8 *src8;

	if (((ulong)address & 4) || ((ulong)srcaddr & 4) || (len & 4)) {
		if (((ulong)address & 2) || ((ulong)srcaddr & 2) || (len & 2)) {
			src8 = (u8 *)srcaddr;
			while (len) {
				iowrite8(*src8, address);
				address++;
				src8++;
				len--;
			}
		} else {
			len = len / 2;
			src16 = (__le16 *)srcaddr;
			while (len) {
				iowrite16(le16_to_cpu(*src16), address);
				address += 2;
				src16++;
				len--;
			}
		}
	} else {
		len = len / 4;
		src32 = (__le32 *)srcaddr;
		while (len) {
			iowrite32(le32_to_cpu(*src32), address);
			address += 4;
			src32++;
			len--;
		}
	}
}


#define WRITECC32(devinfo, reg, value) brcmf_pcie_write_reg32(devinfo, \
						CHIPCREGOFFS(reg), value)


static void
brcmf_pcie_select_core(struct brcmf_pciedev_info *devinfo, u16 coreid)
{
	const struct pci_dev *pdev = devinfo->pdev;
	struct brcmf_core *core;
	u32 bar0_win;

	core = brcmf_chip_get_core(devinfo->ci, coreid);
	if (core) {
		bar0_win = core->base;
		pci_write_config_dword(pdev, BRCMF_PCIE_BAR0_WINDOW, bar0_win);
		if (pci_read_config_dword(pdev, BRCMF_PCIE_BAR0_WINDOW,
					  &bar0_win) == 0) {
			if (bar0_win != core->base) {
				bar0_win = core->base;
				pci_write_config_dword(pdev,
						       BRCMF_PCIE_BAR0_WINDOW,
						       bar0_win);
			}
		}
	} else {
		brcmf_err("Unsupported core selected %x\n", coreid);
	}
}


static void brcmf_pcie_reset_device(struct brcmf_pciedev_info *devinfo)
{
	u16 cfg_offset[] = { BRCMF_PCIE_CFGREG_STATUS_CMD,
			     BRCMF_PCIE_CFGREG_PM_CSR,
			     BRCMF_PCIE_CFGREG_MSI_CAP,
			     BRCMF_PCIE_CFGREG_MSI_ADDR_L,
			     BRCMF_PCIE_CFGREG_MSI_ADDR_H,
			     BRCMF_PCIE_CFGREG_MSI_DATA,
			     BRCMF_PCIE_CFGREG_LINK_STATUS_CTRL2,
			     BRCMF_PCIE_CFGREG_RBAR_CTRL,
			     BRCMF_PCIE_CFGREG_PML1_SUB_CTRL1,
			     BRCMF_PCIE_CFGREG_REG_BAR2_CONFIG,
			     BRCMF_PCIE_CFGREG_REG_BAR3_CONFIG };
	u32 i;
	u32 val;
	u32 lsc;

	if (!devinfo->ci)
		return;

	brcmf_pcie_select_core(devinfo, BCMA_CORE_PCIE2);
	brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_CONFIGADDR,
			       BRCMF_PCIE_CFGREG_LINK_STATUS_CTRL);
	lsc = brcmf_pcie_read_reg32(devinfo, BRCMF_PCIE_PCIE2REG_CONFIGDATA);
	val = lsc & (~BRCMF_PCIE_LINK_STATUS_CTRL_ASPM_ENAB);
	brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_CONFIGDATA, val);

	brcmf_pcie_select_core(devinfo, BCMA_CORE_CHIPCOMMON);
	WRITECC32(devinfo, watchdog, 4);
	msleep(100);

	brcmf_pcie_select_core(devinfo, BCMA_CORE_PCIE2);
	brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_CONFIGADDR,
			       BRCMF_PCIE_CFGREG_LINK_STATUS_CTRL);
	brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_CONFIGDATA, lsc);

	brcmf_pcie_select_core(devinfo, BCMA_CORE_PCIE2);
	for (i = 0; i < ARRAY_SIZE(cfg_offset); i++) {
		brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_CONFIGADDR,
				       cfg_offset[i]);
		val = brcmf_pcie_read_reg32(devinfo,
					    BRCMF_PCIE_PCIE2REG_CONFIGDATA);
		brcmf_dbg(PCIE, "config offset 0x%04x, value 0x%04x\n",
			  cfg_offset[i], val);
		brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_CONFIGDATA,
				       val);
	}
}


static void brcmf_pcie_attach(struct brcmf_pciedev_info *devinfo)
{
	u32 config;

	brcmf_pcie_select_core(devinfo, BCMA_CORE_PCIE2);
	if (brcmf_pcie_read_reg32(devinfo, BRCMF_PCIE_PCIE2REG_INTMASK) != 0)
		brcmf_pcie_reset_device(devinfo);
	/* BAR1 window may not be sized properly */
	brcmf_pcie_select_core(devinfo, BCMA_CORE_PCIE2);
	brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_CONFIGADDR, 0x4e0);
	config = brcmf_pcie_read_reg32(devinfo, BRCMF_PCIE_PCIE2REG_CONFIGDATA);
	brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_CONFIGDATA, config);

	device_wakeup_enable(&devinfo->pdev->dev);
}


static int brcmf_pcie_enter_download_state(struct brcmf_pciedev_info *devinfo)
{
	brcmf_chip_enter_download(devinfo->ci);

	if (devinfo->ci->chip == BRCM_CC_43602_CHIP_ID) {
		brcmf_pcie_select_core(devinfo, BCMA_CORE_ARM_CR4);
		brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_ARMCR4REG_BANKIDX,
				       5);
		brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_ARMCR4REG_BANKPDA,
				       0);
		brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_ARMCR4REG_BANKIDX,
				       7);
		brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_ARMCR4REG_BANKPDA,
				       0);
	}
	return 0;
}


static int brcmf_pcie_exit_download_state(struct brcmf_pciedev_info *devinfo,
					  u32 resetintr)
{
	struct brcmf_core *core;

	if (devinfo->ci->chip == BRCM_CC_43602_CHIP_ID) {
		core = brcmf_chip_get_core(devinfo->ci, BCMA_CORE_INTERNAL_MEM);
		brcmf_chip_resetcore(core, 0, 0, 0);
	}

	return !brcmf_chip_exit_download(devinfo->ci, resetintr);
}


static void
brcmf_pcie_send_mb_data(struct brcmf_pciedev_info *devinfo, u32 htod_mb_data)
{
	struct brcmf_pcie_shared_info *shared;
	u32 addr;
	u32 cur_htod_mb_data
546 + u32 i; 547 + 548 + shared = &devinfo->shared; 549 + addr = shared->htod_mb_data_addr; 550 + cur_htod_mb_data = brcmf_pcie_read_tcm32(devinfo, addr); 551 + 552 + if (cur_htod_mb_data != 0) 553 + brcmf_dbg(PCIE, "MB transaction is already pending 0x%04x\n", 554 + cur_htod_mb_data); 555 + 556 + i = 0; 557 + while (cur_htod_mb_data != 0) { 558 + msleep(10); 559 + i++; 560 + if (i > 100) 561 + break; 562 + cur_htod_mb_data = brcmf_pcie_read_tcm32(devinfo, addr); 563 + } 564 + 565 + brcmf_pcie_write_tcm32(devinfo, addr, htod_mb_data); 566 + pci_write_config_dword(devinfo->pdev, BRCMF_PCIE_REG_SBMBX, 1); 567 + pci_write_config_dword(devinfo->pdev, BRCMF_PCIE_REG_SBMBX, 1); 568 + } 569 + 570 + 571 + static void brcmf_pcie_handle_mb_data(struct brcmf_pciedev_info *devinfo) 572 + { 573 + struct brcmf_pcie_shared_info *shared; 574 + u32 addr; 575 + u32 dtoh_mb_data; 576 + 577 + shared = &devinfo->shared; 578 + addr = shared->dtoh_mb_data_addr; 579 + dtoh_mb_data = brcmf_pcie_read_tcm32(devinfo, addr); 580 + 581 + if (!dtoh_mb_data) 582 + return; 583 + 584 + brcmf_pcie_write_tcm32(devinfo, addr, 0); 585 + 586 + brcmf_dbg(PCIE, "D2H_MB_DATA: 0x%04x\n", dtoh_mb_data); 587 + if (dtoh_mb_data & BRCMF_D2H_DEV_DS_ENTER_REQ) { 588 + brcmf_dbg(PCIE, "D2H_MB_DATA: DEEP SLEEP REQ\n"); 589 + brcmf_pcie_send_mb_data(devinfo, BRCMF_H2D_HOST_DS_ACK); 590 + brcmf_dbg(PCIE, "D2H_MB_DATA: sent DEEP SLEEP ACK\n"); 591 + } 592 + if (dtoh_mb_data & BRCMF_D2H_DEV_DS_EXIT_NOTE) 593 + brcmf_dbg(PCIE, "D2H_MB_DATA: DEEP SLEEP EXIT\n"); 594 + if (dtoh_mb_data & BRCMF_D2H_DEV_D3_ACK) 595 + brcmf_dbg(PCIE, "D2H_MB_DATA: D3 ACK\n"); 596 + if (waitqueue_active(&devinfo->mbdata_resp_wait)) { 597 + devinfo->mbdata_completed = true; 598 + wake_up(&devinfo->mbdata_resp_wait); 599 + } 600 + } 601 + 602 + 603 + static void brcmf_pcie_bus_console_init(struct brcmf_pciedev_info *devinfo) 604 + { 605 + struct brcmf_pcie_shared_info *shared; 606 + struct brcmf_pcie_console *console; 607 + u32 addr; 608 + 609 
+ shared = &devinfo->shared; 610 + console = &shared->console; 611 + addr = shared->tcm_base_address + BRCMF_SHARED_CONSOLE_ADDR_OFFSET; 612 + console->base_addr = brcmf_pcie_read_tcm32(devinfo, addr); 613 + 614 + addr = console->base_addr + BRCMF_CONSOLE_BUFADDR_OFFSET; 615 + console->buf_addr = brcmf_pcie_read_tcm32(devinfo, addr); 616 + addr = console->base_addr + BRCMF_CONSOLE_BUFSIZE_OFFSET; 617 + console->bufsize = brcmf_pcie_read_tcm32(devinfo, addr); 618 + 619 + brcmf_dbg(PCIE, "Console: base %x, buf %x, size %d\n", 620 + console->base_addr, console->buf_addr, console->bufsize); 621 + } 622 + 623 + 624 + static void brcmf_pcie_bus_console_read(struct brcmf_pciedev_info *devinfo) 625 + { 626 + struct brcmf_pcie_console *console; 627 + u32 addr; 628 + u8 ch; 629 + u32 newidx; 630 + 631 + console = &devinfo->shared.console; 632 + addr = console->base_addr + BRCMF_CONSOLE_WRITEIDX_OFFSET; 633 + newidx = brcmf_pcie_read_tcm32(devinfo, addr); 634 + while (newidx != console->read_idx) { 635 + addr = console->buf_addr + console->read_idx; 636 + ch = brcmf_pcie_read_tcm8(devinfo, addr); 637 + console->read_idx++; 638 + if (console->read_idx == console->bufsize) 639 + console->read_idx = 0; 640 + if (ch == '\r') 641 + continue; 642 + console->log_str[console->log_idx] = ch; 643 + console->log_idx++; 644 + if ((ch != '\n') && 645 + (console->log_idx == (sizeof(console->log_str) - 2))) { 646 + ch = '\n'; 647 + console->log_str[console->log_idx] = ch; 648 + console->log_idx++; 649 + } 650 + 651 + if (ch == '\n') { 652 + console->log_str[console->log_idx] = 0; 653 + brcmf_dbg(PCIE, "CONSOLE: %s\n", console->log_str); 654 + console->log_idx = 0; 655 + } 656 + } 657 + } 658 + 659 + 660 + static __used void brcmf_pcie_ringbell_v1(struct brcmf_pciedev_info *devinfo) 661 + { 662 + u32 reg_value; 663 + 664 + brcmf_dbg(PCIE, "RING !\n"); 665 + reg_value = brcmf_pcie_read_reg32(devinfo, 666 + BRCMF_PCIE_PCIE2REG_MAILBOXINT); 667 + reg_value |= BRCMF_PCIE2_INTB; 668 + 
brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_MAILBOXINT, 669 + reg_value); 670 + } 671 + 672 + 673 + static void brcmf_pcie_ringbell_v2(struct brcmf_pciedev_info *devinfo) 674 + { 675 + brcmf_dbg(PCIE, "RING !\n"); 676 + /* Any arbitrary value will do, lets use 1 */ 677 + brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_H2D_MAILBOX, 1); 678 + } 679 + 680 + 681 + static void brcmf_pcie_intr_disable(struct brcmf_pciedev_info *devinfo) 682 + { 683 + if (devinfo->generic_corerev == BRCMF_PCIE_GENREV1) 684 + pci_write_config_dword(devinfo->pdev, BRCMF_PCIE_REG_INTMASK, 685 + 0); 686 + else 687 + brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_MAILBOXMASK, 688 + 0); 689 + } 690 + 691 + 692 + static void brcmf_pcie_intr_enable(struct brcmf_pciedev_info *devinfo) 693 + { 694 + if (devinfo->generic_corerev == BRCMF_PCIE_GENREV1) 695 + pci_write_config_dword(devinfo->pdev, BRCMF_PCIE_REG_INTMASK, 696 + BRCMF_PCIE_INT_DEF); 697 + else 698 + brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_MAILBOXMASK, 699 + BRCMF_PCIE_MB_INT_D2H_DB | 700 + BRCMF_PCIE_MB_INT_FN0_0 | 701 + BRCMF_PCIE_MB_INT_FN0_1); 702 + } 703 + 704 + 705 + static irqreturn_t brcmf_pcie_quick_check_isr_v1(int irq, void *arg) 706 + { 707 + struct brcmf_pciedev_info *devinfo = (struct brcmf_pciedev_info *)arg; 708 + u32 status; 709 + 710 + status = 0; 711 + pci_read_config_dword(devinfo->pdev, BRCMF_PCIE_REG_INTSTATUS, &status); 712 + if (status) { 713 + brcmf_pcie_intr_disable(devinfo); 714 + brcmf_dbg(PCIE, "Enter\n"); 715 + return IRQ_WAKE_THREAD; 716 + } 717 + return IRQ_NONE; 718 + } 719 + 720 + 721 + static irqreturn_t brcmf_pcie_quick_check_isr_v2(int irq, void *arg) 722 + { 723 + struct brcmf_pciedev_info *devinfo = (struct brcmf_pciedev_info *)arg; 724 + 725 + if (brcmf_pcie_read_reg32(devinfo, BRCMF_PCIE_PCIE2REG_MAILBOXINT)) { 726 + brcmf_pcie_intr_disable(devinfo); 727 + brcmf_dbg(PCIE, "Enter\n"); 728 + return IRQ_WAKE_THREAD; 729 + } 730 + return IRQ_NONE; 731 + } 732 + 733 + 734 + 
static irqreturn_t brcmf_pcie_isr_thread_v1(int irq, void *arg) 735 + { 736 + struct brcmf_pciedev_info *devinfo = (struct brcmf_pciedev_info *)arg; 737 + const struct pci_dev *pdev = devinfo->pdev; 738 + u32 status; 739 + 740 + devinfo->in_irq = true; 741 + status = 0; 742 + pci_read_config_dword(pdev, BRCMF_PCIE_REG_INTSTATUS, &status); 743 + brcmf_dbg(PCIE, "Enter %x\n", status); 744 + if (status) { 745 + pci_write_config_dword(pdev, BRCMF_PCIE_REG_INTSTATUS, status); 746 + if (devinfo->state == BRCMFMAC_PCIE_STATE_UP) 747 + brcmf_proto_msgbuf_rx_trigger(&devinfo->pdev->dev); 748 + } 749 + if (devinfo->state == BRCMFMAC_PCIE_STATE_UP) 750 + brcmf_pcie_intr_enable(devinfo); 751 + devinfo->in_irq = false; 752 + return IRQ_HANDLED; 753 + } 754 + 755 + 756 + static irqreturn_t brcmf_pcie_isr_thread_v2(int irq, void *arg) 757 + { 758 + struct brcmf_pciedev_info *devinfo = (struct brcmf_pciedev_info *)arg; 759 + u32 status; 760 + 761 + devinfo->in_irq = true; 762 + status = brcmf_pcie_read_reg32(devinfo, BRCMF_PCIE_PCIE2REG_MAILBOXINT); 763 + brcmf_dbg(PCIE, "Enter %x\n", status); 764 + if (status) { 765 + brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_MAILBOXINT, 766 + status); 767 + if (status & (BRCMF_PCIE_MB_INT_FN0_0 | 768 + BRCMF_PCIE_MB_INT_FN0_1)) 769 + brcmf_pcie_handle_mb_data(devinfo); 770 + if (status & BRCMF_PCIE_MB_INT_D2H_DB) { 771 + if (devinfo->state == BRCMFMAC_PCIE_STATE_UP) 772 + brcmf_proto_msgbuf_rx_trigger( 773 + &devinfo->pdev->dev); 774 + } 775 + } 776 + brcmf_pcie_bus_console_read(devinfo); 777 + if (devinfo->state == BRCMFMAC_PCIE_STATE_UP) 778 + brcmf_pcie_intr_enable(devinfo); 779 + devinfo->in_irq = false; 780 + return IRQ_HANDLED; 781 + } 782 + 783 + 784 + static int brcmf_pcie_request_irq(struct brcmf_pciedev_info *devinfo) 785 + { 786 + struct pci_dev *pdev; 787 + 788 + pdev = devinfo->pdev; 789 + 790 + brcmf_pcie_intr_disable(devinfo); 791 + 792 + brcmf_dbg(PCIE, "Enter\n"); 793 + /* is it a v1 or v2 implementation */ 794 + 
devinfo->irq_requested = false; 795 + if (devinfo->generic_corerev == BRCMF_PCIE_GENREV1) { 796 + if (request_threaded_irq(pdev->irq, 797 + brcmf_pcie_quick_check_isr_v1, 798 + brcmf_pcie_isr_thread_v1, 799 + IRQF_SHARED, "brcmf_pcie_intr", 800 + devinfo)) { 801 + brcmf_err("Failed to request IRQ %d\n", pdev->irq); 802 + return -EIO; 803 + } 804 + } else { 805 + if (request_threaded_irq(pdev->irq, 806 + brcmf_pcie_quick_check_isr_v2, 807 + brcmf_pcie_isr_thread_v2, 808 + IRQF_SHARED, "brcmf_pcie_intr", 809 + devinfo)) { 810 + brcmf_err("Failed to request IRQ %d\n", pdev->irq); 811 + return -EIO; 812 + } 813 + } 814 + devinfo->irq_requested = true; 815 + devinfo->irq_allocated = true; 816 + return 0; 817 + } 818 + 819 + 820 + static void brcmf_pcie_release_irq(struct brcmf_pciedev_info *devinfo) 821 + { 822 + struct pci_dev *pdev; 823 + u32 status; 824 + u32 count; 825 + 826 + if (!devinfo->irq_allocated) 827 + return; 828 + 829 + pdev = devinfo->pdev; 830 + 831 + brcmf_pcie_intr_disable(devinfo); 832 + if (!devinfo->irq_requested) 833 + return; 834 + devinfo->irq_requested = false; 835 + free_irq(pdev->irq, devinfo); 836 + 837 + msleep(50); 838 + count = 0; 839 + while ((devinfo->in_irq) && (count < 20)) { 840 + msleep(50); 841 + count++; 842 + } 843 + if (devinfo->in_irq) 844 + brcmf_err("Still in IRQ (processing) !!!\n"); 845 + 846 + if (devinfo->generic_corerev == BRCMF_PCIE_GENREV1) { 847 + status = 0; 848 + pci_read_config_dword(pdev, BRCMF_PCIE_REG_INTSTATUS, &status); 849 + pci_write_config_dword(pdev, BRCMF_PCIE_REG_INTSTATUS, status); 850 + } else { 851 + status = brcmf_pcie_read_reg32(devinfo, 852 + BRCMF_PCIE_PCIE2REG_MAILBOXINT); 853 + brcmf_pcie_write_reg32(devinfo, BRCMF_PCIE_PCIE2REG_MAILBOXINT, 854 + status); 855 + } 856 + devinfo->irq_allocated = false; 857 + } 858 + 859 + 860 + static int brcmf_pcie_ring_mb_write_rptr(void *ctx) 861 + { 862 + struct brcmf_pcie_ringbuf *ring = (struct brcmf_pcie_ringbuf *)ctx; 863 + struct brcmf_pciedev_info 
*devinfo = ring->devinfo; 864 + struct brcmf_commonring *commonring = &ring->commonring; 865 + 866 + if (devinfo->state != BRCMFMAC_PCIE_STATE_UP) 867 + return -EIO; 868 + 869 + brcmf_dbg(PCIE, "W r_ptr %d (%d), ring %d\n", commonring->r_ptr, 870 + commonring->w_ptr, ring->id); 871 + 872 + brcmf_pcie_write_tcm16(devinfo, ring->r_idx_addr, commonring->r_ptr); 873 + 874 + return 0; 875 + } 876 + 877 + 878 + static int brcmf_pcie_ring_mb_write_wptr(void *ctx) 879 + { 880 + struct brcmf_pcie_ringbuf *ring = (struct brcmf_pcie_ringbuf *)ctx; 881 + struct brcmf_pciedev_info *devinfo = ring->devinfo; 882 + struct brcmf_commonring *commonring = &ring->commonring; 883 + 884 + if (devinfo->state != BRCMFMAC_PCIE_STATE_UP) 885 + return -EIO; 886 + 887 + brcmf_dbg(PCIE, "W w_ptr %d (%d), ring %d\n", commonring->w_ptr, 888 + commonring->r_ptr, ring->id); 889 + 890 + brcmf_pcie_write_tcm16(devinfo, ring->w_idx_addr, commonring->w_ptr); 891 + 892 + return 0; 893 + } 894 + 895 + 896 + static int brcmf_pcie_ring_mb_ring_bell(void *ctx) 897 + { 898 + struct brcmf_pcie_ringbuf *ring = (struct brcmf_pcie_ringbuf *)ctx; 899 + struct brcmf_pciedev_info *devinfo = ring->devinfo; 900 + 901 + if (devinfo->state != BRCMFMAC_PCIE_STATE_UP) 902 + return -EIO; 903 + 904 + devinfo->ringbell(devinfo); 905 + 906 + return 0; 907 + } 908 + 909 + 910 + static int brcmf_pcie_ring_mb_update_rptr(void *ctx) 911 + { 912 + struct brcmf_pcie_ringbuf *ring = (struct brcmf_pcie_ringbuf *)ctx; 913 + struct brcmf_pciedev_info *devinfo = ring->devinfo; 914 + struct brcmf_commonring *commonring = &ring->commonring; 915 + 916 + if (devinfo->state != BRCMFMAC_PCIE_STATE_UP) 917 + return -EIO; 918 + 919 + commonring->r_ptr = brcmf_pcie_read_tcm16(devinfo, ring->r_idx_addr); 920 + 921 + brcmf_dbg(PCIE, "R r_ptr %d (%d), ring %d\n", commonring->r_ptr, 922 + commonring->w_ptr, ring->id); 923 + 924 + return 0; 925 + } 926 + 927 + 928 + static int brcmf_pcie_ring_mb_update_wptr(void *ctx) 929 + { 930 + struct 
brcmf_pcie_ringbuf *ring = (struct brcmf_pcie_ringbuf *)ctx; 931 + struct brcmf_pciedev_info *devinfo = ring->devinfo; 932 + struct brcmf_commonring *commonring = &ring->commonring; 933 + 934 + if (devinfo->state != BRCMFMAC_PCIE_STATE_UP) 935 + return -EIO; 936 + 937 + commonring->w_ptr = brcmf_pcie_read_tcm16(devinfo, ring->w_idx_addr); 938 + 939 + brcmf_dbg(PCIE, "R w_ptr %d (%d), ring %d\n", commonring->w_ptr, 940 + commonring->r_ptr, ring->id); 941 + 942 + return 0; 943 + } 944 + 945 + 946 + static void * 947 + brcmf_pcie_init_dmabuffer_for_device(struct brcmf_pciedev_info *devinfo, 948 + u32 size, u32 tcm_dma_phys_addr, 949 + dma_addr_t *dma_handle) 950 + { 951 + void *ring; 952 + long long address; 953 + 954 + ring = dma_alloc_coherent(&devinfo->pdev->dev, size, dma_handle, 955 + GFP_KERNEL); 956 + if (!ring) 957 + return NULL; 958 + 959 + address = (long long)(long)*dma_handle; 960 + brcmf_pcie_write_tcm32(devinfo, tcm_dma_phys_addr, 961 + address & 0xffffffff); 962 + brcmf_pcie_write_tcm32(devinfo, tcm_dma_phys_addr + 4, address >> 32); 963 + 964 + memset(ring, 0, size); 965 + 966 + return (ring); 967 + } 968 + 969 + 970 + static struct brcmf_pcie_ringbuf * 971 + brcmf_pcie_alloc_dma_and_ring(struct brcmf_pciedev_info *devinfo, u32 ring_id, 972 + u32 tcm_ring_phys_addr) 973 + { 974 + void *dma_buf; 975 + dma_addr_t dma_handle; 976 + struct brcmf_pcie_ringbuf *ring; 977 + u32 size; 978 + u32 addr; 979 + 980 + size = brcmf_ring_max_item[ring_id] * brcmf_ring_itemsize[ring_id]; 981 + dma_buf = brcmf_pcie_init_dmabuffer_for_device(devinfo, size, 982 + tcm_ring_phys_addr + BRCMF_RING_MEM_BASE_ADDR_OFFSET, 983 + &dma_handle); 984 + if (!dma_buf) 985 + return NULL; 986 + 987 + addr = tcm_ring_phys_addr + BRCMF_RING_MAX_ITEM_OFFSET; 988 + brcmf_pcie_write_tcm16(devinfo, addr, brcmf_ring_max_item[ring_id]); 989 + addr = tcm_ring_phys_addr + BRCMF_RING_LEN_ITEMS_OFFSET; 990 + brcmf_pcie_write_tcm16(devinfo, addr, brcmf_ring_itemsize[ring_id]); 991 + 992 + ring = 
kzalloc(sizeof(*ring), GFP_KERNEL); 993 + if (!ring) { 994 + dma_free_coherent(&devinfo->pdev->dev, size, dma_buf, 995 + dma_handle); 996 + return NULL; 997 + } 998 + brcmf_commonring_config(&ring->commonring, brcmf_ring_max_item[ring_id], 999 + brcmf_ring_itemsize[ring_id], dma_buf); 1000 + ring->dma_handle = dma_handle; 1001 + ring->devinfo = devinfo; 1002 + brcmf_commonring_register_cb(&ring->commonring, 1003 + brcmf_pcie_ring_mb_ring_bell, 1004 + brcmf_pcie_ring_mb_update_rptr, 1005 + brcmf_pcie_ring_mb_update_wptr, 1006 + brcmf_pcie_ring_mb_write_rptr, 1007 + brcmf_pcie_ring_mb_write_wptr, ring); 1008 + 1009 + return (ring); 1010 + } 1011 + 1012 + 1013 + static void brcmf_pcie_release_ringbuffer(struct device *dev, 1014 + struct brcmf_pcie_ringbuf *ring) 1015 + { 1016 + void *dma_buf; 1017 + u32 size; 1018 + 1019 + if (!ring) 1020 + return; 1021 + 1022 + dma_buf = ring->commonring.buf_addr; 1023 + if (dma_buf) { 1024 + size = ring->commonring.depth * ring->commonring.item_len; 1025 + dma_free_coherent(dev, size, dma_buf, ring->dma_handle); 1026 + } 1027 + kfree(ring); 1028 + } 1029 + 1030 + 1031 + static void brcmf_pcie_release_ringbuffers(struct brcmf_pciedev_info *devinfo) 1032 + { 1033 + u32 i; 1034 + 1035 + for (i = 0; i < BRCMF_NROF_COMMON_MSGRINGS; i++) { 1036 + brcmf_pcie_release_ringbuffer(&devinfo->pdev->dev, 1037 + devinfo->shared.commonrings[i]); 1038 + devinfo->shared.commonrings[i] = NULL; 1039 + } 1040 + kfree(devinfo->shared.flowrings); 1041 + devinfo->shared.flowrings = NULL; 1042 + } 1043 + 1044 + 1045 + static int brcmf_pcie_init_ringbuffers(struct brcmf_pciedev_info *devinfo) 1046 + { 1047 + struct brcmf_pcie_ringbuf *ring; 1048 + struct brcmf_pcie_ringbuf *rings; 1049 + u32 ring_addr; 1050 + u32 d2h_w_idx_ptr; 1051 + u32 d2h_r_idx_ptr; 1052 + u32 h2d_w_idx_ptr; 1053 + u32 h2d_r_idx_ptr; 1054 + u32 addr; 1055 + u32 ring_mem_ptr; 1056 + u32 i; 1057 + u16 max_sub_queues; 1058 + 1059 + ring_addr = devinfo->shared.ring_info_addr; 1060 + 
brcmf_dbg(PCIE, "Base ring addr = 0x%08x\n", ring_addr); 1061 + 1062 + addr = ring_addr + BRCMF_SHARED_RING_D2H_W_IDX_PTR_OFFSET; 1063 + d2h_w_idx_ptr = brcmf_pcie_read_tcm32(devinfo, addr); 1064 + addr = ring_addr + BRCMF_SHARED_RING_D2H_R_IDX_PTR_OFFSET; 1065 + d2h_r_idx_ptr = brcmf_pcie_read_tcm32(devinfo, addr); 1066 + addr = ring_addr + BRCMF_SHARED_RING_H2D_W_IDX_PTR_OFFSET; 1067 + h2d_w_idx_ptr = brcmf_pcie_read_tcm32(devinfo, addr); 1068 + addr = ring_addr + BRCMF_SHARED_RING_H2D_R_IDX_PTR_OFFSET; 1069 + h2d_r_idx_ptr = brcmf_pcie_read_tcm32(devinfo, addr); 1070 + 1071 + addr = ring_addr + BRCMF_SHARED_RING_TCM_MEMLOC_OFFSET; 1072 + ring_mem_ptr = brcmf_pcie_read_tcm32(devinfo, addr); 1073 + 1074 + for (i = 0; i < BRCMF_NROF_H2D_COMMON_MSGRINGS; i++) { 1075 + ring = brcmf_pcie_alloc_dma_and_ring(devinfo, i, ring_mem_ptr); 1076 + if (!ring) 1077 + goto fail; 1078 + ring->w_idx_addr = h2d_w_idx_ptr; 1079 + ring->r_idx_addr = h2d_r_idx_ptr; 1080 + ring->id = i; 1081 + devinfo->shared.commonrings[i] = ring; 1082 + 1083 + h2d_w_idx_ptr += sizeof(u32); 1084 + h2d_r_idx_ptr += sizeof(u32); 1085 + ring_mem_ptr += BRCMF_RING_MEM_SZ; 1086 + } 1087 + 1088 + for (i = BRCMF_NROF_H2D_COMMON_MSGRINGS; 1089 + i < BRCMF_NROF_COMMON_MSGRINGS; i++) { 1090 + ring = brcmf_pcie_alloc_dma_and_ring(devinfo, i, ring_mem_ptr); 1091 + if (!ring) 1092 + goto fail; 1093 + ring->w_idx_addr = d2h_w_idx_ptr; 1094 + ring->r_idx_addr = d2h_r_idx_ptr; 1095 + ring->id = i; 1096 + devinfo->shared.commonrings[i] = ring; 1097 + 1098 + d2h_w_idx_ptr += sizeof(u32); 1099 + d2h_r_idx_ptr += sizeof(u32); 1100 + ring_mem_ptr += BRCMF_RING_MEM_SZ; 1101 + } 1102 + 1103 + addr = ring_addr + BRCMF_SHARED_RING_MAX_SUB_QUEUES; 1104 + max_sub_queues = brcmf_pcie_read_tcm16(devinfo, addr); 1105 + devinfo->shared.nrof_flowrings = 1106 + max_sub_queues - BRCMF_NROF_H2D_COMMON_MSGRINGS; 1107 + rings = kcalloc(devinfo->shared.nrof_flowrings, sizeof(*ring), 1108 + GFP_KERNEL); 1109 + if (!rings) 1110 + goto fail; 
1111 + 1112 + brcmf_dbg(PCIE, "Nr of flowrings is %d\n", 1113 + devinfo->shared.nrof_flowrings); 1114 + 1115 + for (i = 0; i < devinfo->shared.nrof_flowrings; i++) { 1116 + ring = &rings[i]; 1117 + ring->devinfo = devinfo; 1118 + ring->id = i + BRCMF_NROF_COMMON_MSGRINGS; 1119 + brcmf_commonring_register_cb(&ring->commonring, 1120 + brcmf_pcie_ring_mb_ring_bell, 1121 + brcmf_pcie_ring_mb_update_rptr, 1122 + brcmf_pcie_ring_mb_update_wptr, 1123 + brcmf_pcie_ring_mb_write_rptr, 1124 + brcmf_pcie_ring_mb_write_wptr, 1125 + ring); 1126 + ring->w_idx_addr = h2d_w_idx_ptr; 1127 + ring->r_idx_addr = h2d_r_idx_ptr; 1128 + h2d_w_idx_ptr += sizeof(u32); 1129 + h2d_r_idx_ptr += sizeof(u32); 1130 + } 1131 + devinfo->shared.flowrings = rings; 1132 + 1133 + return 0; 1134 + 1135 + fail: 1136 + brcmf_err("Allocating commonring buffers failed\n"); 1137 + brcmf_pcie_release_ringbuffers(devinfo); 1138 + return -ENOMEM; 1139 + } 1140 + 1141 + 1142 + static void 1143 + brcmf_pcie_release_scratchbuffers(struct brcmf_pciedev_info *devinfo) 1144 + { 1145 + if (devinfo->shared.scratch) 1146 + dma_free_coherent(&devinfo->pdev->dev, 1147 + BRCMF_DMA_D2H_SCRATCH_BUF_LEN, 1148 + devinfo->shared.scratch, 1149 + devinfo->shared.scratch_dmahandle); 1150 + if (devinfo->shared.ringupd) 1151 + dma_free_coherent(&devinfo->pdev->dev, 1152 + BRCMF_DMA_D2H_RINGUPD_BUF_LEN, 1153 + devinfo->shared.ringupd, 1154 + devinfo->shared.ringupd_dmahandle); 1155 + } 1156 + 1157 + static int brcmf_pcie_init_scratchbuffers(struct brcmf_pciedev_info *devinfo) 1158 + { 1159 + long long address; 1160 + u32 addr; 1161 + 1162 + devinfo->shared.scratch = dma_alloc_coherent(&devinfo->pdev->dev, 1163 + BRCMF_DMA_D2H_SCRATCH_BUF_LEN, 1164 + &devinfo->shared.scratch_dmahandle, GFP_KERNEL); 1165 + if (!devinfo->shared.scratch) 1166 + goto fail; 1167 + 1168 + memset(devinfo->shared.scratch, 0, BRCMF_DMA_D2H_SCRATCH_BUF_LEN); 1169 + brcmf_dma_flush(devinfo->shared.scratch, BRCMF_DMA_D2H_SCRATCH_BUF_LEN); 1170 + 1171 + addr = 
devinfo->shared.tcm_base_address + 1172 + BRCMF_SHARED_DMA_SCRATCH_ADDR_OFFSET; 1173 + address = (long long)(long)devinfo->shared.scratch_dmahandle; 1174 + brcmf_pcie_write_tcm32(devinfo, addr, address & 0xffffffff); 1175 + brcmf_pcie_write_tcm32(devinfo, addr + 4, address >> 32); 1176 + addr = devinfo->shared.tcm_base_address + 1177 + BRCMF_SHARED_DMA_SCRATCH_LEN_OFFSET; 1178 + brcmf_pcie_write_tcm32(devinfo, addr, BRCMF_DMA_D2H_SCRATCH_BUF_LEN); 1179 + 1180 + devinfo->shared.ringupd = dma_alloc_coherent(&devinfo->pdev->dev, 1181 + BRCMF_DMA_D2H_RINGUPD_BUF_LEN, 1182 + &devinfo->shared.ringupd_dmahandle, GFP_KERNEL); 1183 + if (!devinfo->shared.ringupd) 1184 + goto fail; 1185 + 1186 + memset(devinfo->shared.ringupd, 0, BRCMF_DMA_D2H_RINGUPD_BUF_LEN); 1187 + brcmf_dma_flush(devinfo->shared.ringupd, BRCMF_DMA_D2H_RINGUPD_BUF_LEN); 1188 + 1189 + addr = devinfo->shared.tcm_base_address + 1190 + BRCMF_SHARED_DMA_RINGUPD_ADDR_OFFSET; 1191 + address = (long long)(long)devinfo->shared.ringupd_dmahandle; 1192 + brcmf_pcie_write_tcm32(devinfo, addr, address & 0xffffffff); 1193 + brcmf_pcie_write_tcm32(devinfo, addr + 4, address >> 32); 1194 + addr = devinfo->shared.tcm_base_address + 1195 + BRCMF_SHARED_DMA_RINGUPD_LEN_OFFSET; 1196 + brcmf_pcie_write_tcm32(devinfo, addr, BRCMF_DMA_D2H_RINGUPD_BUF_LEN); 1197 + return 0; 1198 + 1199 + fail: 1200 + brcmf_err("Allocating scratch buffers failed\n"); 1201 + brcmf_pcie_release_scratchbuffers(devinfo); 1202 + return -ENOMEM; 1203 + } 1204 + 1205 + 1206 + static void brcmf_pcie_down(struct device *dev) 1207 + { 1208 + } 1209 + 1210 + 1211 + static int brcmf_pcie_tx(struct device *dev, struct sk_buff *skb) 1212 + { 1213 + return 0; 1214 + } 1215 + 1216 + 1217 + static int brcmf_pcie_tx_ctlpkt(struct device *dev, unsigned char *msg, 1218 + uint len) 1219 + { 1220 + return 0; 1221 + } 1222 + 1223 + 1224 + static int brcmf_pcie_rx_ctlpkt(struct device *dev, unsigned char *msg, 1225 + uint len) 1226 + { 1227 + return 0; 1228 + } 1229 + 
1230 + 1231 + static struct brcmf_bus_ops brcmf_pcie_bus_ops = { 1232 + .txdata = brcmf_pcie_tx, 1233 + .stop = brcmf_pcie_down, 1234 + .txctl = brcmf_pcie_tx_ctlpkt, 1235 + .rxctl = brcmf_pcie_rx_ctlpkt, 1236 + }; 1237 + 1238 + 1239 + static int 1240 + brcmf_pcie_init_share_ram_info(struct brcmf_pciedev_info *devinfo, 1241 + u32 sharedram_addr) 1242 + { 1243 + struct brcmf_pcie_shared_info *shared; 1244 + u32 addr; 1245 + u32 version; 1246 + 1247 + shared = &devinfo->shared; 1248 + shared->tcm_base_address = sharedram_addr; 1249 + 1250 + shared->flags = brcmf_pcie_read_tcm32(devinfo, sharedram_addr); 1251 + version = shared->flags & BRCMF_PCIE_SHARED_VERSION_MASK; 1252 + brcmf_dbg(PCIE, "PCIe protocol version %d\n", version); 1253 + if ((version > BRCMF_PCIE_MAX_SHARED_VERSION) || 1254 + (version < BRCMF_PCIE_MIN_SHARED_VERSION)) { 1255 + brcmf_err("Unsupported PCIE version %d\n", version); 1256 + return -EINVAL; 1257 + } 1258 + if (shared->flags & BRCMF_PCIE_SHARED_TXPUSH_SUPPORT) { 1259 + brcmf_err("Unsupported legacy TX mode 0x%x\n", 1260 + shared->flags & BRCMF_PCIE_SHARED_TXPUSH_SUPPORT); 1261 + return -EINVAL; 1262 + } 1263 + 1264 + addr = sharedram_addr + BRCMF_SHARED_MAX_RXBUFPOST_OFFSET; 1265 + shared->max_rxbufpost = brcmf_pcie_read_tcm16(devinfo, addr); 1266 + if (shared->max_rxbufpost == 0) 1267 + shared->max_rxbufpost = BRCMF_DEF_MAX_RXBUFPOST; 1268 + 1269 + addr = sharedram_addr + BRCMF_SHARED_RX_DATAOFFSET_OFFSET; 1270 + shared->rx_dataoffset = brcmf_pcie_read_tcm32(devinfo, addr); 1271 + 1272 + addr = sharedram_addr + BRCMF_SHARED_HTOD_MB_DATA_ADDR_OFFSET; 1273 + shared->htod_mb_data_addr = brcmf_pcie_read_tcm32(devinfo, addr); 1274 + 1275 + addr = sharedram_addr + BRCMF_SHARED_DTOH_MB_DATA_ADDR_OFFSET; 1276 + shared->dtoh_mb_data_addr = brcmf_pcie_read_tcm32(devinfo, addr); 1277 + 1278 + addr = sharedram_addr + BRCMF_SHARED_RING_INFO_ADDR_OFFSET; 1279 + shared->ring_info_addr = brcmf_pcie_read_tcm32(devinfo, addr); 1280 + 1281 + brcmf_dbg(PCIE, 
"max rx buf post %d, rx dataoffset %d\n", 1282 + shared->max_rxbufpost, shared->rx_dataoffset); 1283 + 1284 + brcmf_pcie_bus_console_init(devinfo); 1285 + 1286 + return 0; 1287 + } 1288 + 1289 + 1290 + static int brcmf_pcie_get_fwnames(struct brcmf_pciedev_info *devinfo) 1291 + { 1292 + char *fw_name; 1293 + char *nvram_name; 1294 + uint fw_len, nv_len; 1295 + char end; 1296 + 1297 + brcmf_dbg(PCIE, "Enter, chip 0x%04x chiprev %d\n", devinfo->ci->chip, 1298 + devinfo->ci->chiprev); 1299 + 1300 + switch (devinfo->ci->chip) { 1301 + case BRCM_CC_43602_CHIP_ID: 1302 + fw_name = BRCMF_PCIE_43602_FW_NAME; 1303 + nvram_name = BRCMF_PCIE_43602_NVRAM_NAME; 1304 + break; 1305 + case BRCM_CC_4354_CHIP_ID: 1306 + fw_name = BRCMF_PCIE_4354_FW_NAME; 1307 + nvram_name = BRCMF_PCIE_4354_NVRAM_NAME; 1308 + break; 1309 + case BRCM_CC_4356_CHIP_ID: 1310 + fw_name = BRCMF_PCIE_4356_FW_NAME; 1311 + nvram_name = BRCMF_PCIE_4356_NVRAM_NAME; 1312 + break; 1313 + case BRCM_CC_43567_CHIP_ID: 1314 + case BRCM_CC_43569_CHIP_ID: 1315 + case BRCM_CC_43570_CHIP_ID: 1316 + fw_name = BRCMF_PCIE_43570_FW_NAME; 1317 + nvram_name = BRCMF_PCIE_43570_NVRAM_NAME; 1318 + break; 1319 + default: 1320 + brcmf_err("Unsupported chip 0x%04x\n", devinfo->ci->chip); 1321 + return -ENODEV; 1322 + } 1323 + 1324 + fw_len = sizeof(devinfo->fw_name) - 1; 1325 + nv_len = sizeof(devinfo->nvram_name) - 1; 1326 + /* check if firmware path is provided by module parameter */ 1327 + if (brcmf_firmware_path[0] != '\0') { 1328 + strncpy(devinfo->fw_name, brcmf_firmware_path, fw_len); 1329 + strncpy(devinfo->nvram_name, brcmf_firmware_path, nv_len); 1330 + fw_len -= strlen(devinfo->fw_name); 1331 + nv_len -= strlen(devinfo->nvram_name); 1332 + 1333 + end = brcmf_firmware_path[strlen(brcmf_firmware_path) - 1]; 1334 + if (end != '/') { 1335 + strncat(devinfo->fw_name, "/", fw_len); 1336 + strncat(devinfo->nvram_name, "/", nv_len); 1337 + fw_len--; 1338 + nv_len--; 1339 + } 1340 + } 1341 + strncat(devinfo->fw_name, fw_name, 
fw_len); 1342 + strncat(devinfo->nvram_name, nvram_name, nv_len); 1343 + 1344 + return 0; 1345 + } 1346 + 1347 + 1348 + static int brcmf_pcie_download_fw_nvram(struct brcmf_pciedev_info *devinfo, 1349 + const struct firmware *fw, void *nvram, 1350 + u32 nvram_len) 1351 + { 1352 + u32 sharedram_addr; 1353 + u32 sharedram_addr_written; 1354 + u32 loop_counter; 1355 + int err; 1356 + u32 address; 1357 + u32 resetintr; 1358 + 1359 + devinfo->ringbell = brcmf_pcie_ringbell_v2; 1360 + devinfo->generic_corerev = BRCMF_PCIE_GENREV2; 1361 + 1362 + brcmf_dbg(PCIE, "Halt ARM.\n"); 1363 + err = brcmf_pcie_enter_download_state(devinfo); 1364 + if (err) 1365 + return err; 1366 + 1367 + brcmf_dbg(PCIE, "Download FW %s\n", devinfo->fw_name); 1368 + brcmf_pcie_copy_mem_todev(devinfo, devinfo->ci->rambase, 1369 + (void *)fw->data, fw->size); 1370 + 1371 + resetintr = get_unaligned_le32(fw->data); 1372 + release_firmware(fw); 1373 + 1374 + /* reset last 4 bytes of RAM address. to be used for shared 1375 + * area. 
This identifies when FW is running 1376 + */ 1377 + brcmf_pcie_write_ram32(devinfo, devinfo->ci->ramsize - 4, 0); 1378 + 1379 + if (nvram) { 1380 + brcmf_dbg(PCIE, "Download NVRAM %s\n", devinfo->nvram_name); 1381 + address = devinfo->ci->rambase + devinfo->ci->ramsize - 1382 + nvram_len; 1383 + brcmf_pcie_copy_mem_todev(devinfo, address, nvram, nvram_len); 1384 + brcmf_fw_nvram_free(nvram); 1385 + } else { 1386 + brcmf_dbg(PCIE, "No matching NVRAM file found %s\n", 1387 + devinfo->nvram_name); 1388 + } 1389 + 1390 + sharedram_addr_written = brcmf_pcie_read_ram32(devinfo, 1391 + devinfo->ci->ramsize - 1392 + 4); 1393 + brcmf_dbg(PCIE, "Bring ARM in running state\n"); 1394 + err = brcmf_pcie_exit_download_state(devinfo, resetintr); 1395 + if (err) 1396 + return err; 1397 + 1398 + brcmf_dbg(PCIE, "Wait for FW init\n"); 1399 + sharedram_addr = sharedram_addr_written; 1400 + loop_counter = BRCMF_PCIE_FW_UP_TIMEOUT / 50; 1401 + while ((sharedram_addr == sharedram_addr_written) && (loop_counter)) { 1402 + msleep(50); 1403 + sharedram_addr = brcmf_pcie_read_ram32(devinfo, 1404 + devinfo->ci->ramsize - 1405 + 4); 1406 + loop_counter--; 1407 + } 1408 + if (sharedram_addr == sharedram_addr_written) { 1409 + brcmf_err("FW failed to initialize\n"); 1410 + return -ENODEV; 1411 + } 1412 + brcmf_dbg(PCIE, "Shared RAM addr: 0x%08x\n", sharedram_addr); 1413 + 1414 + return (brcmf_pcie_init_share_ram_info(devinfo, sharedram_addr)); 1415 + } 1416 + 1417 + 1418 + static int brcmf_pcie_get_resource(struct brcmf_pciedev_info *devinfo) 1419 + { 1420 + struct pci_dev *pdev; 1421 + int err; 1422 + phys_addr_t bar0_addr, bar1_addr; 1423 + ulong bar1_size; 1424 + 1425 + pdev = devinfo->pdev; 1426 + 1427 + err = pci_enable_device(pdev); 1428 + if (err) { 1429 + brcmf_err("pci_enable_device failed err=%d\n", err); 1430 + return err; 1431 + } 1432 + 1433 + pci_set_master(pdev); 1434 + 1435 + /* Bar-0 mapped address */ 1436 + bar0_addr = pci_resource_start(pdev, 0); 1437 + /* Bar-1 mapped 
address */ 1438 + bar1_addr = pci_resource_start(pdev, 2); 1439 + /* read Bar-1 mapped memory range */ 1440 + bar1_size = pci_resource_len(pdev, 2); 1441 + if ((bar1_size == 0) || (bar1_addr == 0)) { 1442 + brcmf_err("BAR1 Not enabled, device size=%ld, addr=%#016llx\n", 1443 + bar1_size, (unsigned long long)bar1_addr); 1444 + return -EINVAL; 1445 + } 1446 + 1447 + devinfo->regs = ioremap_nocache(bar0_addr, BRCMF_PCIE_REG_MAP_SIZE); 1448 + devinfo->tcm = ioremap_nocache(bar1_addr, BRCMF_PCIE_TCM_MAP_SIZE); 1449 + devinfo->tcm_size = BRCMF_PCIE_TCM_MAP_SIZE; 1450 + 1451 + if (!devinfo->regs || !devinfo->tcm) { 1452 + brcmf_err("ioremap() failed (%p,%p)\n", devinfo->regs, 1453 + devinfo->tcm); 1454 + return -EINVAL; 1455 + } 1456 + brcmf_dbg(PCIE, "Phys addr : reg space = %p base addr %#016llx\n", 1457 + devinfo->regs, (unsigned long long)bar0_addr); 1458 + brcmf_dbg(PCIE, "Phys addr : mem space = %p base addr %#016llx\n", 1459 + devinfo->tcm, (unsigned long long)bar1_addr); 1460 + 1461 + return 0; 1462 + } 1463 + 1464 + 1465 + static void brcmf_pcie_release_resource(struct brcmf_pciedev_info *devinfo) 1466 + { 1467 + if (devinfo->tcm) 1468 + iounmap(devinfo->tcm); 1469 + if (devinfo->regs) 1470 + iounmap(devinfo->regs); 1471 + 1472 + pci_disable_device(devinfo->pdev); 1473 + } 1474 + 1475 + 1476 + static int brcmf_pcie_attach_bus(struct device *dev) 1477 + { 1478 + int ret; 1479 + 1480 + /* Attach to the common driver interface */ 1481 + ret = brcmf_attach(dev); 1482 + if (ret) { 1483 + brcmf_err("brcmf_attach failed\n"); 1484 + } else { 1485 + ret = brcmf_bus_start(dev); 1486 + if (ret) 1487 + brcmf_err("dongle is not responding\n"); 1488 + } 1489 + 1490 + return ret; 1491 + } 1492 + 1493 + 1494 + static u32 brcmf_pcie_buscore_prep_addr(const struct pci_dev *pdev, u32 addr) 1495 + { 1496 + u32 ret_addr; 1497 + 1498 + ret_addr = addr & (BRCMF_PCIE_BAR0_REG_SIZE - 1); 1499 + addr &= ~(BRCMF_PCIE_BAR0_REG_SIZE - 1); 1500 + pci_write_config_dword(pdev, 
BRCMF_PCIE_BAR0_WINDOW, addr); 1501 + 1502 + return ret_addr; 1503 + } 1504 + 1505 + 1506 + static u32 brcmf_pcie_buscore_read32(void *ctx, u32 addr) 1507 + { 1508 + struct brcmf_pciedev_info *devinfo = (struct brcmf_pciedev_info *)ctx; 1509 + 1510 + addr = brcmf_pcie_buscore_prep_addr(devinfo->pdev, addr); 1511 + return brcmf_pcie_read_reg32(devinfo, addr); 1512 + } 1513 + 1514 + 1515 + static void brcmf_pcie_buscore_write32(void *ctx, u32 addr, u32 value) 1516 + { 1517 + struct brcmf_pciedev_info *devinfo = (struct brcmf_pciedev_info *)ctx; 1518 + 1519 + addr = brcmf_pcie_buscore_prep_addr(devinfo->pdev, addr); 1520 + brcmf_pcie_write_reg32(devinfo, addr, value); 1521 + } 1522 + 1523 + 1524 + static int brcmf_pcie_buscoreprep(void *ctx) 1525 + { 1526 + struct brcmf_pciedev_info *devinfo = (struct brcmf_pciedev_info *)ctx; 1527 + int err; 1528 + 1529 + err = brcmf_pcie_get_resource(devinfo); 1530 + if (err == 0) { 1531 + /* Set CC watchdog to reset all the cores on the chip to bring 1532 + * back dongle to a sane state. 
1533 + */ 1534 + brcmf_pcie_buscore_write32(ctx, CORE_CC_REG(SI_ENUM_BASE, 1535 + watchdog), 4); 1536 + msleep(100); 1537 + } 1538 + 1539 + return err; 1540 + } 1541 + 1542 + 1543 + static void brcmf_pcie_buscore_exitdl(void *ctx, struct brcmf_chip *chip, 1544 + u32 rstvec) 1545 + { 1546 + struct brcmf_pciedev_info *devinfo = (struct brcmf_pciedev_info *)ctx; 1547 + 1548 + brcmf_pcie_write_tcm32(devinfo, 0, rstvec); 1549 + } 1550 + 1551 + 1552 + static const struct brcmf_buscore_ops brcmf_pcie_buscore_ops = { 1553 + .prepare = brcmf_pcie_buscoreprep, 1554 + .exit_dl = brcmf_pcie_buscore_exitdl, 1555 + .read32 = brcmf_pcie_buscore_read32, 1556 + .write32 = brcmf_pcie_buscore_write32, 1557 + }; 1558 + 1559 + static void brcmf_pcie_setup(struct device *dev, const struct firmware *fw, 1560 + void *nvram, u32 nvram_len) 1561 + { 1562 + struct brcmf_bus *bus = dev_get_drvdata(dev); 1563 + struct brcmf_pciedev *pcie_bus_dev = bus->bus_priv.pcie; 1564 + struct brcmf_pciedev_info *devinfo = pcie_bus_dev->devinfo; 1565 + struct brcmf_commonring **flowrings; 1566 + int ret; 1567 + u32 i; 1568 + 1569 + brcmf_pcie_attach(devinfo); 1570 + 1571 + ret = brcmf_pcie_download_fw_nvram(devinfo, fw, nvram, nvram_len); 1572 + if (ret) 1573 + goto fail; 1574 + 1575 + devinfo->state = BRCMFMAC_PCIE_STATE_UP; 1576 + 1577 + ret = brcmf_pcie_init_ringbuffers(devinfo); 1578 + if (ret) 1579 + goto fail; 1580 + 1581 + ret = brcmf_pcie_init_scratchbuffers(devinfo); 1582 + if (ret) 1583 + goto fail; 1584 + 1585 + brcmf_pcie_select_core(devinfo, BCMA_CORE_PCIE2); 1586 + ret = brcmf_pcie_request_irq(devinfo); 1587 + if (ret) 1588 + goto fail; 1589 + 1590 + /* hook the commonrings in the bus structure. 
*/ 1591 + for (i = 0; i < BRCMF_NROF_COMMON_MSGRINGS; i++) 1592 + bus->msgbuf->commonrings[i] = 1593 + &devinfo->shared.commonrings[i]->commonring; 1594 + 1595 + flowrings = kcalloc(devinfo->shared.nrof_flowrings, sizeof(flowrings), 1596 + GFP_KERNEL); 1597 + if (!flowrings) 1598 + goto fail; 1599 + 1600 + for (i = 0; i < devinfo->shared.nrof_flowrings; i++) 1601 + flowrings[i] = &devinfo->shared.flowrings[i].commonring; 1602 + bus->msgbuf->flowrings = flowrings; 1603 + 1604 + bus->msgbuf->rx_dataoffset = devinfo->shared.rx_dataoffset; 1605 + bus->msgbuf->max_rxbufpost = devinfo->shared.max_rxbufpost; 1606 + bus->msgbuf->nrof_flowrings = devinfo->shared.nrof_flowrings; 1607 + 1608 + init_waitqueue_head(&devinfo->mbdata_resp_wait); 1609 + 1610 + brcmf_pcie_intr_enable(devinfo); 1611 + if (brcmf_pcie_attach_bus(bus->dev) == 0) 1612 + return; 1613 + 1614 + brcmf_pcie_bus_console_read(devinfo); 1615 + 1616 + fail: 1617 + device_release_driver(dev); 1618 + } 1619 + 1620 + static int 1621 + brcmf_pcie_probe(struct pci_dev *pdev, const struct pci_device_id *id) 1622 + { 1623 + int ret; 1624 + struct brcmf_pciedev_info *devinfo; 1625 + struct brcmf_pciedev *pcie_bus_dev; 1626 + struct brcmf_bus *bus; 1627 + 1628 + brcmf_dbg(PCIE, "Enter %x:%x\n", pdev->vendor, pdev->device); 1629 + 1630 + ret = -ENOMEM; 1631 + devinfo = kzalloc(sizeof(*devinfo), GFP_KERNEL); 1632 + if (devinfo == NULL) 1633 + return ret; 1634 + 1635 + devinfo->pdev = pdev; 1636 + pcie_bus_dev = NULL; 1637 + devinfo->ci = brcmf_chip_attach(devinfo, &brcmf_pcie_buscore_ops); 1638 + if (IS_ERR(devinfo->ci)) { 1639 + ret = PTR_ERR(devinfo->ci); 1640 + devinfo->ci = NULL; 1641 + goto fail; 1642 + } 1643 + 1644 + pcie_bus_dev = kzalloc(sizeof(*pcie_bus_dev), GFP_KERNEL); 1645 + if (pcie_bus_dev == NULL) { 1646 + ret = -ENOMEM; 1647 + goto fail; 1648 + } 1649 + 1650 + bus = kzalloc(sizeof(*bus), GFP_KERNEL); 1651 + if (!bus) { 1652 + ret = -ENOMEM; 1653 + goto fail; 1654 + } 1655 + bus->msgbuf = 
kzalloc(sizeof(*bus->msgbuf), GFP_KERNEL); 1656 + if (!bus->msgbuf) { 1657 + ret = -ENOMEM; 1658 + kfree(bus); 1659 + goto fail; 1660 + } 1661 + 1662 + /* hook it all together. */ 1663 + pcie_bus_dev->devinfo = devinfo; 1664 + pcie_bus_dev->bus = bus; 1665 + bus->dev = &pdev->dev; 1666 + bus->bus_priv.pcie = pcie_bus_dev; 1667 + bus->ops = &brcmf_pcie_bus_ops; 1668 + bus->proto_type = BRCMF_PROTO_MSGBUF; 1669 + bus->chip = devinfo->coreid; 1670 + dev_set_drvdata(&pdev->dev, bus); 1671 + 1672 + ret = brcmf_pcie_get_fwnames(devinfo); 1673 + if (ret) 1674 + goto fail_bus; 1675 + 1676 + ret = brcmf_fw_get_firmwares(bus->dev, BRCMF_FW_REQUEST_NVRAM | 1677 + BRCMF_FW_REQ_NV_OPTIONAL, 1678 + devinfo->fw_name, devinfo->nvram_name, 1679 + brcmf_pcie_setup); 1680 + if (ret == 0) 1681 + return 0; 1682 + fail_bus: 1683 + kfree(bus->msgbuf); 1684 + kfree(bus); 1685 + fail: 1686 + brcmf_err("failed %x:%x\n", pdev->vendor, pdev->device); 1687 + brcmf_pcie_release_resource(devinfo); 1688 + if (devinfo->ci) 1689 + brcmf_chip_detach(devinfo->ci); 1690 + kfree(pcie_bus_dev); 1691 + kfree(devinfo); 1692 + return ret; 1693 + } 1694 + 1695 + 1696 + static void 1697 + brcmf_pcie_remove(struct pci_dev *pdev) 1698 + { 1699 + struct brcmf_pciedev_info *devinfo; 1700 + struct brcmf_bus *bus; 1701 + 1702 + brcmf_dbg(PCIE, "Enter\n"); 1703 + 1704 + bus = dev_get_drvdata(&pdev->dev); 1705 + if (bus == NULL) 1706 + return; 1707 + 1708 + devinfo = bus->bus_priv.pcie->devinfo; 1709 + 1710 + devinfo->state = BRCMFMAC_PCIE_STATE_DOWN; 1711 + if (devinfo->ci) 1712 + brcmf_pcie_intr_disable(devinfo); 1713 + 1714 + brcmf_detach(&pdev->dev); 1715 + 1716 + kfree(bus->bus_priv.pcie); 1717 + kfree(bus->msgbuf->flowrings); 1718 + kfree(bus->msgbuf); 1719 + kfree(bus); 1720 + 1721 + brcmf_pcie_release_irq(devinfo); 1722 + brcmf_pcie_release_scratchbuffers(devinfo); 1723 + brcmf_pcie_release_ringbuffers(devinfo); 1724 + brcmf_pcie_reset_device(devinfo); 1725 + brcmf_pcie_release_resource(devinfo); 1726 + 1727 
+ if (devinfo->ci) 1728 + brcmf_chip_detach(devinfo->ci); 1729 + 1730 + kfree(devinfo); 1731 + dev_set_drvdata(&pdev->dev, NULL); 1732 + } 1733 + 1734 + 1735 + #ifdef CONFIG_PM 1736 + 1737 + 1738 + static int brcmf_pcie_suspend(struct pci_dev *pdev, pm_message_t state) 1739 + { 1740 + struct brcmf_pciedev_info *devinfo; 1741 + struct brcmf_bus *bus; 1742 + int err; 1743 + 1744 + brcmf_dbg(PCIE, "Enter, state=%d, pdev=%p\n", state.event, pdev); 1745 + 1746 + bus = dev_get_drvdata(&pdev->dev); 1747 + devinfo = bus->bus_priv.pcie->devinfo; 1748 + 1749 + brcmf_bus_change_state(bus, BRCMF_BUS_DOWN); 1750 + 1751 + devinfo->mbdata_completed = false; 1752 + brcmf_pcie_send_mb_data(devinfo, BRCMF_H2D_HOST_D3_INFORM); 1753 + 1754 + wait_event_timeout(devinfo->mbdata_resp_wait, 1755 + devinfo->mbdata_completed, 1756 + msecs_to_jiffies(BRCMF_PCIE_MBDATA_TIMEOUT)); 1757 + if (!devinfo->mbdata_completed) { 1758 + brcmf_err("Timeout on response for entering D3 substate\n"); 1759 + return -EIO; 1760 + } 1761 + brcmf_pcie_release_irq(devinfo); 1762 + 1763 + err = pci_save_state(pdev); 1764 + if (err) { 1765 + brcmf_err("pci_save_state failed, err=%d\n", err); 1766 + return err; 1767 + } 1768 + 1769 + brcmf_chip_detach(devinfo->ci); 1770 + devinfo->ci = NULL; 1771 + 1772 + brcmf_pcie_remove(pdev); 1773 + 1774 + return pci_prepare_to_sleep(pdev); 1775 + } 1776 + 1777 + 1778 + static int brcmf_pcie_resume(struct pci_dev *pdev) 1779 + { 1780 + int err; 1781 + 1782 + brcmf_dbg(PCIE, "Enter, pdev=%p\n", pdev); 1783 + 1784 + err = pci_set_power_state(pdev, PCI_D0); 1785 + if (err) { 1786 + brcmf_err("pci_set_power_state failed, err=%d\n", err); 1787 + return err; 1788 + } 1789 + pci_restore_state(pdev); 1790 + 1791 + err = brcmf_pcie_probe(pdev, NULL); 1792 + if (err) 1793 + brcmf_err("probe after resume failed, err=%d\n", err); 1794 + 1795 + return err; 1796 + } 1797 + 1798 + 1799 + #endif /* CONFIG_PM */ 1800 + 1801 + 1802 + #define BRCMF_PCIE_DEVICE(dev_id) { 
BRCM_PCIE_VENDOR_ID_BROADCOM, dev_id,\ 1803 + PCI_ANY_ID, PCI_ANY_ID, PCI_CLASS_NETWORK_OTHER << 8, 0xffff00, 0 } 1804 + 1805 + static struct pci_device_id brcmf_pcie_devid_table[] = { 1806 + BRCMF_PCIE_DEVICE(BRCM_PCIE_4354_DEVICE_ID), 1807 + BRCMF_PCIE_DEVICE(BRCM_PCIE_4356_DEVICE_ID), 1808 + BRCMF_PCIE_DEVICE(BRCM_PCIE_43567_DEVICE_ID), 1809 + BRCMF_PCIE_DEVICE(BRCM_PCIE_43570_DEVICE_ID), 1810 + BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_DEVICE_ID), 1811 + { /* end: all zeroes */ } 1812 + }; 1813 + 1814 + 1815 + MODULE_DEVICE_TABLE(pci, brcmf_pcie_devid_table); 1816 + 1817 + 1818 + static struct pci_driver brcmf_pciedrvr = { 1819 + .node = {}, 1820 + .name = KBUILD_MODNAME, 1821 + .id_table = brcmf_pcie_devid_table, 1822 + .probe = brcmf_pcie_probe, 1823 + .remove = brcmf_pcie_remove, 1824 + #ifdef CONFIG_PM 1825 + .suspend = brcmf_pcie_suspend, 1826 + .resume = brcmf_pcie_resume 1827 + #endif /* CONFIG_PM */ 1828 + }; 1829 + 1830 + 1831 + void brcmf_pcie_register(void) 1832 + { 1833 + int err; 1834 + 1835 + brcmf_dbg(PCIE, "Enter\n"); 1836 + err = pci_register_driver(&brcmf_pciedrvr); 1837 + if (err) 1838 + brcmf_err("PCIE driver registration failed, err=%d\n", err); 1839 + } 1840 + 1841 + 1842 + void brcmf_pcie_exit(void) 1843 + { 1844 + brcmf_dbg(PCIE, "Enter\n"); 1845 + pci_unregister_driver(&brcmf_pciedrvr); 1846 + }
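The firmware-download path above uses a simple handshake: the host zeroes the last word of device RAM, starts the ARM core, then polls that word until the firmware overwrites it with the shared-area address or the poll budget runs out. A minimal sketch of that pattern, with illustrative names (`poll_shared_addr()`, `toy_ram_read32()` and the 0x00180000 address are stand-ins, not brcmfmac symbols):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the firmware-ready handshake in brcmf_pcie_download_fw_nvram():
 * poll a sentinel word until the firmware replaces it, or give up. */
typedef uint32_t (*ram_read32_fn)(void *ctx, uint32_t addr);

static int poll_shared_addr(void *ctx, ram_read32_fn read32,
			    uint32_t addr, uint32_t sentinel,
			    int max_polls, uint32_t *out)
{
	uint32_t val = sentinel;

	while (val == sentinel && max_polls-- > 0)
		val = read32(ctx, addr);	/* the driver msleep(50)s here */

	if (val == sentinel)
		return -1;			/* firmware never came up */
	*out = val;
	return 0;
}

/* toy backend standing in for device RAM: reports the sentinel for a
 * few polls, then a made-up shared-area address */
static uint32_t toy_ram_read32(void *ctx, uint32_t addr)
{
	int *polls_left = ctx;

	(void)addr;
	return --(*polls_left) > 0 ? 0 : 0x00180000;
}
```

The real code caps the loop at `BRCMF_PCIE_FW_UP_TIMEOUT / 50` iterations with a 50 ms sleep per poll, so the total wall-clock timeout is fixed regardless of how fast reads complete.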
+29
drivers/net/wireless/brcm80211/brcmfmac/pcie.h
··· 1 + /* Copyright (c) 2014 Broadcom Corporation 2 + * 3 + * Permission to use, copy, modify, and/or distribute this software for any 4 + * purpose with or without fee is hereby granted, provided that the above 5 + * copyright notice and this permission notice appear in all copies. 6 + * 7 + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 8 + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 9 + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY 10 + * SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 11 + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION 12 + * OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN 13 + * CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 14 + */ 15 + #ifndef BRCMFMAC_PCIE_H 16 + #define BRCMFMAC_PCIE_H 17 + 18 + 19 + struct brcmf_pciedev { 20 + struct brcmf_bus *bus; 21 + struct brcmf_pciedev_info *devinfo; 22 + }; 23 + 24 + 25 + void brcmf_pcie_exit(void); 26 + void brcmf_pcie_register(void); 27 + 28 + 29 + #endif /* BRCMFMAC_PCIE_H */
+24 -5
drivers/net/wireless/brcm80211/brcmfmac/proto.c
··· 21 21 22 22 #include <brcmu_wifi.h> 23 23 #include "dhd.h" 24 + #include "dhd_bus.h" 24 25 #include "dhd_dbg.h" 25 26 #include "proto.h" 26 27 #include "bcdc.h" 28 + #include "msgbuf.h" 27 29 28 30 29 31 int brcmf_proto_attach(struct brcmf_pub *drvr) 30 32 { 31 33 struct brcmf_proto *proto; 32 34 35 + brcmf_dbg(TRACE, "Enter\n"); 36 + 33 37 proto = kzalloc(sizeof(*proto), GFP_ATOMIC); 34 38 if (!proto) 35 39 goto fail; 36 40 37 41 drvr->proto = proto; 38 - /* BCDC protocol is only protocol supported for the moment */ 39 - if (brcmf_proto_bcdc_attach(drvr)) 40 - goto fail; 41 42 43 + if (drvr->bus_if->proto_type == BRCMF_PROTO_BCDC) { 44 + if (brcmf_proto_bcdc_attach(drvr)) 45 + goto fail; 46 + } else if (drvr->bus_if->proto_type == BRCMF_PROTO_MSGBUF) { 47 + if (brcmf_proto_msgbuf_attach(drvr)) 48 + goto fail; 49 + } else { 50 + brcmf_err("Unsupported proto type %d\n", 51 + drvr->bus_if->proto_type); 52 + goto fail; 53 + } 42 54 if ((proto->txdata == NULL) || (proto->hdrpull == NULL) || 43 - (proto->query_dcmd == NULL) || (proto->set_dcmd == NULL)) { 55 + (proto->query_dcmd == NULL) || (proto->set_dcmd == NULL) || 56 + (proto->configure_addr_mode == NULL) || 57 + (proto->delete_peer == NULL) || (proto->add_tdls_peer == NULL)) { 44 58 brcmf_err("Not all proto handlers have been installed\n"); 45 59 goto fail; 46 60 } ··· 68 54 69 55 void brcmf_proto_detach(struct brcmf_pub *drvr) 70 56 { 57 + brcmf_dbg(TRACE, "Enter\n"); 58 + 71 59 if (drvr->proto) { 72 - brcmf_proto_bcdc_detach(drvr); 60 + if (drvr->bus_if->proto_type == BRCMF_PROTO_BCDC) 61 + brcmf_proto_bcdc_detach(drvr); 62 + else if (drvr->bus_if->proto_type == BRCMF_PROTO_MSGBUF) 63 + brcmf_proto_msgbuf_detach(drvr); 73 64 kfree(drvr->proto); 74 65 drvr->proto = NULL; 75 66 }
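The proto.c change dispatches on `bus_if->proto_type` to attach either BCDC or MSGBUF, then rejects the attach unless every mandatory handler was installed by the chosen backend. A cut-down sketch of that completeness check (`struct proto_ops` stands in for `struct brcmf_proto`; the dummies exist only for the example):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the handler-completeness check brcmf_proto_attach() performs
 * after the per-protocol attach: every mandatory callback, including the
 * new configure_addr_mode/delete_peer/add_tdls_peer hooks, must be set. */
struct proto_ops {
	int  (*hdrpull)(void);
	int  (*txdata)(void);
	void (*configure_addr_mode)(void);
	void (*delete_peer)(void);
	void (*add_tdls_peer)(void);
};

static int proto_ops_complete(const struct proto_ops *ops)
{
	return ops->hdrpull && ops->txdata &&
	       ops->configure_addr_mode &&
	       ops->delete_peer && ops->add_tdls_peer;
}

/* dummies for the usage example only */
static int  dummy_i(void) { return 0; }
static void dummy_v(void) { }
```

Failing fast here means a protocol backend that forgot to wire up one of the new peer-management hooks is caught at attach time rather than as a NULL dereference on the data path.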
+30 -1
drivers/net/wireless/brcm80211/brcmfmac/proto.h
··· 16 16 #ifndef BRCMFMAC_PROTO_H 17 17 #define BRCMFMAC_PROTO_H 18 18 19 + 20 + enum proto_addr_mode { 21 + ADDR_INDIRECT = 0, 22 + ADDR_DIRECT 23 + }; 24 + 25 + 19 26 struct brcmf_proto { 20 27 int (*hdrpull)(struct brcmf_pub *drvr, bool do_fws, u8 *ifidx, 21 28 struct sk_buff *skb); ··· 32 25 uint len); 33 26 int (*txdata)(struct brcmf_pub *drvr, int ifidx, u8 offset, 34 27 struct sk_buff *skb); 28 + void (*configure_addr_mode)(struct brcmf_pub *drvr, int ifidx, 29 + enum proto_addr_mode addr_mode); 30 + void (*delete_peer)(struct brcmf_pub *drvr, int ifidx, 31 + u8 peer[ETH_ALEN]); 32 + void (*add_tdls_peer)(struct brcmf_pub *drvr, int ifidx, 33 + u8 peer[ETH_ALEN]); 35 34 void *pd; 36 35 }; 37 36 ··· 61 48 return drvr->proto->set_dcmd(drvr, ifidx, cmd, buf, len); 62 49 } 63 50 static inline int brcmf_proto_txdata(struct brcmf_pub *drvr, int ifidx, 64 - u8 offset, struct sk_buff *skb) 51 + u8 offset, struct sk_buff *skb) 65 52 { 66 53 return drvr->proto->txdata(drvr, ifidx, offset, skb); 54 + } 55 + static inline void 56 + brcmf_proto_configure_addr_mode(struct brcmf_pub *drvr, int ifidx, 57 + enum proto_addr_mode addr_mode) 58 + { 59 + drvr->proto->configure_addr_mode(drvr, ifidx, addr_mode); 60 + } 61 + static inline void 62 + brcmf_proto_delete_peer(struct brcmf_pub *drvr, int ifidx, u8 peer[ETH_ALEN]) 63 + { 64 + drvr->proto->delete_peer(drvr, ifidx, peer); 65 + } 66 + static inline void 67 + brcmf_proto_add_tdls_peer(struct brcmf_pub *drvr, int ifidx, u8 peer[ETH_ALEN]) 68 + { 69 + drvr->proto->add_tdls_peer(drvr, ifidx, peer); 67 70 } 68 71 69 72
+6 -6
drivers/net/wireless/brcm80211/brcmfmac/sdio_host.h
··· 74 74 #define SBSDIO_SPROM_DATA_HIGH 0x10003 75 75 /* sprom indirect access addr byte 0 */ 76 76 #define SBSDIO_SPROM_ADDR_LOW 0x10004 77 - /* sprom indirect access addr byte 0 */ 78 - #define SBSDIO_SPROM_ADDR_HIGH 0x10005 79 - /* xtal_pu (gpio) output */ 80 - #define SBSDIO_CHIP_CTRL_DATA 0x10006 81 - /* xtal_pu (gpio) enable */ 82 - #define SBSDIO_CHIP_CTRL_EN 0x10007 77 + /* gpio select */ 78 + #define SBSDIO_GPIO_SELECT 0x10005 79 + /* gpio output */ 80 + #define SBSDIO_GPIO_OUT 0x10006 81 + /* gpio enable */ 82 + #define SBSDIO_GPIO_EN 0x10007 83 83 /* rev < 7, watermark for sdio device */ 84 84 #define SBSDIO_WATERMARK 0x10008 85 85 /* control busy signal generation */
+56 -1
drivers/net/wireless/brcm80211/brcmfmac/wl_cfg80211.c
··· 35 35 #include "wl_cfg80211.h" 36 36 #include "feature.h" 37 37 #include "fwil.h" 38 + #include "proto.h" 38 39 #include "vendor.h" 39 40 40 41 #define BRCMF_SCAN_IE_LEN_MAX 2048 ··· 494 493 return err; 495 494 } 496 495 496 + static void 497 + brcmf_cfg80211_update_proto_addr_mode(struct wireless_dev *wdev) 498 + { 499 + struct net_device *ndev = wdev->netdev; 500 + struct brcmf_if *ifp = netdev_priv(ndev); 501 + 502 + if ((wdev->iftype == NL80211_IFTYPE_ADHOC) || 503 + (wdev->iftype == NL80211_IFTYPE_AP) || 504 + (wdev->iftype == NL80211_IFTYPE_P2P_GO)) 505 + brcmf_proto_configure_addr_mode(ifp->drvr, ifp->ifidx, 506 + ADDR_DIRECT); 507 + else 508 + brcmf_proto_configure_addr_mode(ifp->drvr, ifp->ifidx, 509 + ADDR_INDIRECT); 510 + } 511 + 497 512 static bool brcmf_is_apmode(struct brcmf_cfg80211_vif *vif) 498 513 { 499 514 enum nl80211_iftype iftype; ··· 529 512 u32 *flags, 530 513 struct vif_params *params) 531 514 { 515 + struct wireless_dev *wdev; 516 + 532 517 brcmf_dbg(TRACE, "enter: %s type %d\n", name, type); 533 518 switch (type) { 534 519 case NL80211_IFTYPE_ADHOC: ··· 544 525 case NL80211_IFTYPE_P2P_CLIENT: 545 526 case NL80211_IFTYPE_P2P_GO: 546 527 case NL80211_IFTYPE_P2P_DEVICE: 547 - return brcmf_p2p_add_vif(wiphy, name, type, flags, params); 528 + wdev = brcmf_p2p_add_vif(wiphy, name, type, flags, params); 529 + if (!IS_ERR(wdev)) 530 + brcmf_cfg80211_update_proto_addr_mode(wdev); 531 + return wdev; 548 532 case NL80211_IFTYPE_UNSPECIFIED: 549 533 default: 550 534 return ERR_PTR(-EINVAL); ··· 741 719 "Adhoc" : "Infra"); 742 720 } 743 721 ndev->ieee80211_ptr->iftype = type; 722 + 723 + brcmf_cfg80211_update_proto_addr_mode(&vif->wdev); 744 724 745 725 done: 746 726 brcmf_dbg(TRACE, "Exit\n"); ··· 4155 4131 clear_bit(BRCMF_SCAN_STATUS_SUPPRESS, &cfg->scan_status); 4156 4132 } 4157 4133 4134 + static s32 4135 + brcmf_notify_tdls_peer_event(struct brcmf_if *ifp, 4136 + const struct brcmf_event_msg *e, void *data) 4137 + { 4138 + switch (e->reason) 
{ 4139 + case BRCMF_E_REASON_TDLS_PEER_DISCOVERED: 4140 + brcmf_dbg(TRACE, "TDLS Peer Discovered\n"); 4141 + break; 4142 + case BRCMF_E_REASON_TDLS_PEER_CONNECTED: 4143 + brcmf_dbg(TRACE, "TDLS Peer Connected\n"); 4144 + brcmf_proto_add_tdls_peer(ifp->drvr, ifp->ifidx, (u8 *)e->addr); 4145 + break; 4146 + case BRCMF_E_REASON_TDLS_PEER_DISCONNECTED: 4147 + brcmf_dbg(TRACE, "TDLS Peer Disconnected\n"); 4148 + brcmf_proto_delete_peer(ifp->drvr, ifp->ifidx, (u8 *)e->addr); 4149 + break; 4150 + } 4151 + 4152 + return 0; 4153 + } 4154 + 4158 4155 static int brcmf_convert_nl80211_tdls_oper(enum nl80211_tdls_operation oper) 4159 4156 { 4160 4157 int ret; ··· 4569 4524 struct brcmf_cfg80211_profile *profile = &ifp->vif->profile; 4570 4525 struct ieee80211_channel *chan; 4571 4526 s32 err = 0; 4527 + 4528 + if ((e->event_code == BRCMF_E_DEAUTH) || 4529 + (e->event_code == BRCMF_E_DEAUTH_IND) || 4530 + (e->event_code == BRCMF_E_DISASSOC_IND) || 4531 + ((e->event_code == BRCMF_E_LINK) && (!e->flags))) { 4532 + brcmf_proto_delete_peer(ifp->drvr, ifp->ifidx, (u8 *)e->addr); 4533 + } 4572 4534 4573 4535 if (brcmf_is_apmode(ifp->vif)) { 4574 4536 err = brcmf_notify_connect_status_ap(cfg, ndev, e, data); ··· 5712 5660 if (err) { 5713 5661 brcmf_dbg(INFO, "TDLS not enabled (%d)\n", err); 5714 5662 wiphy->flags &= ~WIPHY_FLAG_SUPPORTS_TDLS; 5663 + } else { 5664 + brcmf_fweh_register(cfg->pub, BRCMF_E_TDLS_PEER_EVENT, 5665 + brcmf_notify_tdls_peer_event); 5715 5666 } 5716 5667 5717 5668 return cfg;
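The new `brcmf_cfg80211_update_proto_addr_mode()` encodes one rule: AP-like roles (AP, P2P GO, ad-hoc) address peers directly, while client roles send through the AP. A minimal sketch of that mapping (the `IF_*` constants are stand-ins for the `NL80211_IFTYPE_*` values; the enum mirrors proto.h):

```c
#include <assert.h>

/* Sketch of the iftype-to-addressing-mode rule used by
 * brcmf_cfg80211_update_proto_addr_mode(). */
enum proto_addr_mode { ADDR_INDIRECT = 0, ADDR_DIRECT };
enum iftype { IF_STATION, IF_ADHOC, IF_AP, IF_P2P_CLIENT, IF_P2P_GO };

static enum proto_addr_mode addr_mode_for_iftype(enum iftype t)
{
	switch (t) {
	case IF_ADHOC:
	case IF_AP:
	case IF_P2P_GO:
		return ADDR_DIRECT;	/* frames go straight to the peer */
	default:
		return ADDR_INDIRECT;	/* frames are relayed via the AP */
	}
}
```

The driver calls this both when a vif is created and when an interface changes type, so the msgbuf flow-ring layer always sees the current role.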
+11
drivers/net/wireless/brcm80211/include/brcm_hw_ids.h
··· 38 38 #define BRCM_CC_4335_CHIP_ID 0x4335 39 39 #define BRCM_CC_4339_CHIP_ID 0x4339 40 40 #define BRCM_CC_4354_CHIP_ID 0x4354 41 + #define BRCM_CC_4356_CHIP_ID 0x4356 41 42 #define BRCM_CC_43566_CHIP_ID 43566 43 + #define BRCM_CC_43567_CHIP_ID 43567 42 44 #define BRCM_CC_43569_CHIP_ID 43569 45 + #define BRCM_CC_43570_CHIP_ID 43570 46 + #define BRCM_CC_43602_CHIP_ID 43602 43 47 44 48 /* SDIO Device IDs */ 45 49 #define BRCM_SDIO_43143_DEVICE_ID BRCM_CC_43143_CHIP_ID ··· 61 57 #define BRCM_USB_43242_DEVICE_ID 0xbd1f 62 58 #define BRCM_USB_43569_DEVICE_ID 0xbd27 63 59 #define BRCM_USB_BCMFW_DEVICE_ID 0x0bdc 60 + 61 + /* PCIE Device IDs */ 62 + #define BRCM_PCIE_4354_DEVICE_ID 0x43df 63 + #define BRCM_PCIE_4356_DEVICE_ID 0x43ec 64 + #define BRCM_PCIE_43567_DEVICE_ID 0x43d3 65 + #define BRCM_PCIE_43570_DEVICE_ID 0x43d9 66 + #define BRCM_PCIE_43602_DEVICE_ID 0x43ba 64 67 65 68 /* brcmsmac IDs */ 66 69 #define BCM4313_D11N2G_ID 0x4727 /* 4313 802.11n 2.4G device */
+2 -1
drivers/net/wireless/iwlegacy/common.c
··· 2980 2980 /* Driver ilate data, only for Tx (not command) queues, 2981 2981 * not shared with device. */ 2982 2982 if (id != il->cmd_queue) { 2983 - txq->skbs = kcalloc(TFD_QUEUE_SIZE_MAX, sizeof(struct skb *), 2983 + txq->skbs = kcalloc(TFD_QUEUE_SIZE_MAX, 2984 + sizeof(struct sk_buff *), 2984 2985 GFP_KERNEL); 2985 2986 if (!txq->skbs) { 2986 2987 IL_ERR("Fail to alloc skbs\n");
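The iwlegacy fix replaces `sizeof(struct skb *)` with `sizeof(struct sk_buff *)`. The buggy spelling compiled because C treats `struct skb` as an implicitly introduced incomplete type, and a pointer to an incomplete type still has a well-defined size; on common ABIs all object pointers are the same size, so the allocation happened to be correct anyway. A toy illustration (local stand-in types, not the kernel's):

```c
#include <assert.h>
#include <stdlib.h>

/* Why the pre-fix code worked by accident: sizeof a pointer to a
 * never-declared struct is legal C, and typically equals any other
 * object pointer's size. */
struct sk_buff { int len; };

static void **alloc_skb_array(size_t n)
{
	/* the pre-fix spelling: struct skb was never declared anywhere */
	return calloc(n, sizeof(struct skb *));
}
```

So the patch is a correctness/readability fix rather than a behavioral one, which is why it could land this late without risk.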
+1 -1
drivers/nfc/Kconfig
··· 72 72 source "drivers/nfc/microread/Kconfig" 73 73 source "drivers/nfc/nfcmrvl/Kconfig" 74 74 source "drivers/nfc/st21nfca/Kconfig" 75 - 75 + source "drivers/nfc/st21nfcb/Kconfig" 76 76 endmenu
+2 -1
drivers/nfc/Makefile
··· 11 11 obj-$(CONFIG_NFC_PORT100) += port100.o 12 12 obj-$(CONFIG_NFC_MRVL) += nfcmrvl/ 13 13 obj-$(CONFIG_NFC_TRF7970A) += trf7970a.o 14 - obj-$(CONFIG_NFC_ST21NFCA) += st21nfca/ 14 + obj-$(CONFIG_NFC_ST21NFCA) += st21nfca/ 15 + obj-$(CONFIG_NFC_ST21NFCB) += st21nfcb/ 15 16 16 17 ccflags-$(CONFIG_NFC_DEBUG) := -DDEBUG
+1 -1
drivers/nfc/st21nfca/Makefile
··· 4 4 5 5 st21nfca_i2c-objs = i2c.o 6 6 7 - obj-$(CONFIG_NFC_ST21NFCA) += st21nfca.o 7 + obj-$(CONFIG_NFC_ST21NFCA) += st21nfca.o st21nfca_dep.o 8 8 obj-$(CONFIG_NFC_ST21NFCA_I2C) += st21nfca_i2c.o
+5 -4
drivers/nfc/st21nfca/i2c.c
··· 93 93 int hard_fault; 94 94 struct mutex phy_lock; 95 95 }; 96 - static u8 len_seq[] = { 13, 24, 15, 29 }; 96 + static u8 len_seq[] = { 16, 24, 12, 29 }; 97 97 static u16 wait_tab[] = { 2, 3, 5, 15, 20, 40}; 98 98 99 99 #define I2C_DUMP_SKB(info, skb) \ ··· 397 397 * The first read sequence does not start with SOF. 398 398 * Data is corrupted so we drop it. 399 399 */ 400 - if (!phy->current_read_len && buf[0] != ST21NFCA_SOF_EOF) { 400 + if (!phy->current_read_len && !IS_START_OF_FRAME(buf)) { 401 401 skb_trim(skb, 0); 402 402 phy->current_read_len = 0; 403 403 return -EIO; 404 - } else if (phy->current_read_len && 405 - IS_START_OF_FRAME(buf)) { 404 + } else if (phy->current_read_len && IS_START_OF_FRAME(buf)) { 406 405 /* 407 406 * Previous frame transmission was interrupted and 408 407 * the frame got repeated. ··· 486 487 */ 487 488 nfc_hci_recv_frame(phy->hdev, phy->pending_skb); 488 489 phy->crc_trials = 0; 490 + } else { 491 + kfree_skb(phy->pending_skb); 489 492 } 490 493 491 494 phy->pending_skb = alloc_skb(ST21NFCA_HCI_LLC_MAX_SIZE * 2, GFP_KERNEL);
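The i2c.c hunk tightens the framing rule on the read path: the first chunk of a frame must open with the SOF flag byte, otherwise the partially assembled skb is dropped; chunks read mid-frame carry no SOF. A sketch of that check (the 0x7e flag value is an assumption for this example, taken from HDLC-style framing conventions, not from the driver):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SOF_EOF 0x7e	/* assumed flag value, for illustration only */

/* returns 1 if the chunk is acceptable, 0 if the frame must be dropped;
 * mirrors the !phy->current_read_len && !IS_START_OF_FRAME(buf) test */
static int chunk_ok(const uint8_t *buf, size_t bytes_already_read)
{
	if (bytes_already_read == 0 && buf[0] != SOF_EOF)
		return 0;	/* corrupted start of frame */
	return 1;
}
```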
+271 -1
drivers/nfc/st21nfca/st21nfca.c
··· 22 22 #include <net/nfc/llc.h> 23 23 24 24 #include "st21nfca.h" 25 + #include "st21nfca_dep.h" 25 26 26 27 #define DRIVER_DESC "HCI NFC driver for ST21NFCA" 27 28 ··· 54 53 #define ST21NFCA_DM_PIPE_CREATED 0x02 55 54 #define ST21NFCA_DM_PIPE_OPEN 0x04 56 55 #define ST21NFCA_DM_RF_ACTIVE 0x80 56 + #define ST21NFCA_DM_DISCONNECT 0x30 57 57 58 58 #define ST21NFCA_DM_IS_PIPE_OPEN(p) \ 59 59 ((p & 0x0f) == (ST21NFCA_DM_PIPE_CREATED | ST21NFCA_DM_PIPE_OPEN)) ··· 74 72 {ST21NFCA_RF_READER_F_GATE, NFC_HCI_INVALID_PIPE}, 75 73 {ST21NFCA_RF_READER_14443_3_A_GATE, NFC_HCI_INVALID_PIPE}, 76 74 {ST21NFCA_RF_READER_ISO15693_GATE, NFC_HCI_INVALID_PIPE}, 75 + {ST21NFCA_RF_CARD_F_GATE, NFC_HCI_INVALID_PIPE}, 77 76 }; 78 77 79 78 struct st21nfca_pipe_info { ··· 302 299 u32 im_protocols, u32 tm_protocols) 303 300 { 304 301 int r; 302 + u32 pol_req; 303 + u8 param[19]; 304 + struct sk_buff *datarate_skb; 305 305 306 306 pr_info(DRIVER_DESC ": %s protocols 0x%x 0x%x\n", 307 307 __func__, im_protocols, tm_protocols); ··· 337 331 ST21NFCA_RF_READER_F_GATE); 338 332 if (r < 0) 339 333 return r; 334 + } else { 335 + hdev->gb = nfc_get_local_general_bytes(hdev->ndev, 336 + &hdev->gb_len); 337 + 338 + if (hdev->gb == NULL || hdev->gb_len == 0) { 339 + im_protocols &= ~NFC_PROTO_NFC_DEP_MASK; 340 + tm_protocols &= ~NFC_PROTO_NFC_DEP_MASK; 341 + } 342 + 343 + param[0] = ST21NFCA_RF_READER_F_DATARATE_106 | 344 + ST21NFCA_RF_READER_F_DATARATE_212 | 345 + ST21NFCA_RF_READER_F_DATARATE_424; 346 + r = nfc_hci_set_param(hdev, ST21NFCA_RF_READER_F_GATE, 347 + ST21NFCA_RF_READER_F_DATARATE, 348 + param, 1); 349 + if (r < 0) 350 + return r; 351 + 352 + pol_req = 353 + be32_to_cpu(ST21NFCA_RF_READER_F_POL_REQ_DEFAULT); 354 + r = nfc_hci_set_param(hdev, ST21NFCA_RF_READER_F_GATE, 355 + ST21NFCA_RF_READER_F_POL_REQ, 356 + (u8 *) &pol_req, 4); 357 + if (r < 0) 358 + return r; 340 359 } 341 360 342 361 if ((ST21NFCA_RF_READER_14443_3_A_GATE & im_protocols) == 0) { ··· 384 353 nfc_hci_send_event(hdev, 
NFC_HCI_RF_READER_A_GATE, 385 354 NFC_HCI_EVT_END_OPERATION, NULL, 0); 386 355 } 356 + 357 + if (tm_protocols & NFC_PROTO_NFC_DEP_MASK) { 358 + r = nfc_hci_get_param(hdev, ST21NFCA_RF_CARD_F_GATE, 359 + ST21NFCA_RF_CARD_F_DATARATE, 360 + &datarate_skb); 361 + if (r < 0) 362 + return r; 363 + 364 + /* Configure the maximum supported datarate to 424Kbps */ 365 + if (datarate_skb->len > 0 && 366 + datarate_skb->data[0] != 367 + ST21NFCA_RF_CARD_F_DATARATE_212_424) { 368 + param[0] = ST21NFCA_RF_CARD_F_DATARATE_212_424; 369 + r = nfc_hci_set_param(hdev, ST21NFCA_RF_CARD_F_GATE, 370 + ST21NFCA_RF_CARD_F_DATARATE, 371 + param, 1); 372 + if (r < 0) 373 + return r; 374 + } 375 + 376 + /* 377 + * Configure sens_res 378 + * 379 + * NFC Forum Digital Spec Table 7: 380 + * NFCID1 size: triple (10 bytes) 381 + */ 382 + param[0] = 0x00; 383 + param[1] = 0x08; 384 + r = nfc_hci_set_param(hdev, ST21NFCA_RF_CARD_F_GATE, 385 + ST21NFCA_RF_CARD_F_SENS_RES, param, 2); 386 + if (r < 0) 387 + return r; 388 + 389 + /* 390 + * Configure sel_res 391 + * 392 + * NFC Forum Digistal Spec Table 17: 393 + * b3 set to 0b (value b7-b6): 394 + * - 10b: Configured for NFC-DEP Protocol 395 + */ 396 + param[0] = 0x40; 397 + r = nfc_hci_set_param(hdev, ST21NFCA_RF_CARD_F_GATE, 398 + ST21NFCA_RF_CARD_F_SEL_RES, param, 1); 399 + if (r < 0) 400 + return r; 401 + 402 + /* Configure NFCID1 Random uid */ 403 + r = nfc_hci_set_param(hdev, ST21NFCA_RF_CARD_F_GATE, 404 + ST21NFCA_RF_CARD_F_NFCID1, NULL, 0); 405 + if (r < 0) 406 + return r; 407 + 408 + /* Configure NFCID2_LIST */ 409 + /* System Code */ 410 + param[0] = 0x00; 411 + param[1] = 0x00; 412 + /* NFCID2 */ 413 + param[2] = 0x01; 414 + param[3] = 0xfe; 415 + param[4] = 'S'; 416 + param[5] = 'T'; 417 + param[6] = 'M'; 418 + param[7] = 'i'; 419 + param[8] = 'c'; 420 + param[9] = 'r'; 421 + /* 8 byte Pad bytes used for polling respone frame */ 422 + 423 + /* 424 + * Configuration byte: 425 + * - bit 0: define the default NFCID2 entry used when the 426 + 
* system code is equal to 'FFFF' 427 + * - bit 1: use a random value for lowest 6 bytes of 428 + * NFCID2 value 429 + * - bit 2: ignore polling request frame if request code 430 + * is equal to '01' 431 + * - Other bits are RFU 432 + */ 433 + param[18] = 0x01; 434 + r = nfc_hci_set_param(hdev, ST21NFCA_RF_CARD_F_GATE, 435 + ST21NFCA_RF_CARD_F_NFCID2_LIST, param, 436 + 19); 437 + if (r < 0) 438 + return r; 439 + 440 + param[0] = 0x02; 441 + r = nfc_hci_set_param(hdev, ST21NFCA_RF_CARD_F_GATE, 442 + ST21NFCA_RF_CARD_F_MODE, param, 1); 443 + } 444 + 387 445 return r; 446 + } 447 + 448 + static void st21nfca_hci_stop_poll(struct nfc_hci_dev *hdev) 449 + { 450 + nfc_hci_send_cmd(hdev, ST21NFCA_DEVICE_MGNT_GATE, 451 + ST21NFCA_DM_DISCONNECT, NULL, 0, NULL); 388 452 } 389 453 390 454 static int st21nfca_get_iso14443_3_atqa(struct nfc_hci_dev *hdev, u16 *atqa) ··· 577 451 return r; 578 452 } 579 453 454 + static int st21nfca_hci_dep_link_up(struct nfc_hci_dev *hdev, 455 + struct nfc_target *target, u8 comm_mode, 456 + u8 *gb, size_t gb_len) 457 + { 458 + struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 459 + 460 + info->dep_info.idx = target->idx; 461 + return st21nfca_im_send_atr_req(hdev, gb, gb_len); 462 + } 463 + 464 + static int st21nfca_hci_dep_link_down(struct nfc_hci_dev *hdev) 465 + { 466 + struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 467 + 468 + info->state = ST21NFCA_ST_READY; 469 + 470 + return nfc_hci_send_cmd(hdev, ST21NFCA_DEVICE_MGNT_GATE, 471 + ST21NFCA_DM_DISCONNECT, NULL, 0, NULL); 472 + } 473 + 580 474 static int st21nfca_hci_target_from_gate(struct nfc_hci_dev *hdev, u8 gate, 581 475 struct nfc_target *target) 582 476 { ··· 651 505 return 0; 652 506 } 653 507 508 + static int st21nfca_hci_complete_target_discovered(struct nfc_hci_dev *hdev, 509 + u8 gate, 510 + struct nfc_target *target) 511 + { 512 + int r; 513 + struct sk_buff *nfcid2_skb = NULL, *nfcid1_skb; 514 + 515 + if (gate == ST21NFCA_RF_READER_F_GATE) { 516 + r 
= nfc_hci_get_param(hdev, ST21NFCA_RF_READER_F_GATE, 517 + ST21NFCA_RF_READER_F_NFCID2, &nfcid2_skb); 518 + if (r < 0) 519 + goto exit; 520 + 521 + if (nfcid2_skb->len > NFC_SENSF_RES_MAXSIZE) { 522 + r = -EPROTO; 523 + goto exit; 524 + } 525 + 526 + /* 527 + * - After the recepton of polling response for type F frame 528 + * at 212 or 424 Kbit/s, NFCID2 registry parameters will be 529 + * updated. 530 + * - After the reception of SEL_RES with NFCIP-1 compliant bit 531 + * set for type A frame NFCID1 will be updated 532 + */ 533 + if (nfcid2_skb->len > 0) { 534 + /* P2P in type F */ 535 + memcpy(target->sensf_res, nfcid2_skb->data, 536 + nfcid2_skb->len); 537 + target->sensf_res_len = nfcid2_skb->len; 538 + /* NFC Forum Digital Protocol Table 44 */ 539 + if (target->sensf_res[0] == 0x01 && 540 + target->sensf_res[1] == 0xfe) 541 + target->supported_protocols = 542 + NFC_PROTO_NFC_DEP_MASK; 543 + else 544 + target->supported_protocols = 545 + NFC_PROTO_FELICA_MASK; 546 + } else { 547 + /* P2P in type A */ 548 + r = nfc_hci_get_param(hdev, ST21NFCA_RF_READER_F_GATE, 549 + ST21NFCA_RF_READER_F_NFCID1, 550 + &nfcid1_skb); 551 + if (r < 0) 552 + goto exit; 553 + 554 + if (nfcid1_skb->len > NFC_NFCID1_MAXSIZE) { 555 + r = -EPROTO; 556 + goto exit; 557 + } 558 + memcpy(target->sensf_res, nfcid1_skb->data, 559 + nfcid1_skb->len); 560 + target->sensf_res_len = nfcid1_skb->len; 561 + target->supported_protocols = NFC_PROTO_NFC_DEP_MASK; 562 + } 563 + target->hci_reader_gate = ST21NFCA_RF_READER_F_GATE; 564 + } 565 + r = 1; 566 + exit: 567 + kfree_skb(nfcid2_skb); 568 + return r; 569 + } 570 + 654 571 #define ST21NFCA_CB_TYPE_READER_ISO15693 1 655 572 static void st21nfca_hci_data_exchange_cb(void *context, struct sk_buff *skb, 656 573 int err) ··· 750 541 751 542 switch (target->hci_reader_gate) { 752 543 case ST21NFCA_RF_READER_F_GATE: 544 + if (target->supported_protocols == NFC_PROTO_NFC_DEP_MASK) 545 + return st21nfca_im_send_dep_req(hdev, skb); 546 + 753 547 
*skb_push(skb, 1) = 0x1a; 754 548 return nfc_hci_send_cmd_async(hdev, target->hci_reader_gate, 755 549 ST21NFCA_WR_XCHG_DATA, skb->data, ··· 781 569 } 782 570 } 783 571 572 + static int st21nfca_hci_tm_send(struct nfc_hci_dev *hdev, struct sk_buff *skb) 573 + { 574 + return st21nfca_tm_send_dep_res(hdev, skb); 575 + } 576 + 784 577 static int st21nfca_hci_check_presence(struct nfc_hci_dev *hdev, 785 578 struct nfc_target *target) 786 579 { ··· 811 594 } 812 595 } 813 596 597 + /* 598 + * Returns: 599 + * <= 0: driver handled the event, skb consumed 600 + * 1: driver does not handle the event, please do standard processing 601 + */ 602 + static int st21nfca_hci_event_received(struct nfc_hci_dev *hdev, u8 gate, 603 + u8 event, struct sk_buff *skb) 604 + { 605 + int r; 606 + struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 607 + 608 + pr_debug("hci event: %d\n", event); 609 + 610 + switch (event) { 611 + case ST21NFCA_EVT_CARD_ACTIVATED: 612 + if (gate == ST21NFCA_RF_CARD_F_GATE) 613 + info->dep_info.curr_nfc_dep_pni = 0; 614 + break; 615 + case ST21NFCA_EVT_CARD_DEACTIVATED: 616 + break; 617 + case ST21NFCA_EVT_FIELD_ON: 618 + break; 619 + case ST21NFCA_EVT_FIELD_OFF: 620 + break; 621 + case ST21NFCA_EVT_SEND_DATA: 622 + if (gate == ST21NFCA_RF_CARD_F_GATE) { 623 + r = st21nfca_tm_event_send_data(hdev, skb, gate); 624 + if (r < 0) 625 + goto exit; 626 + return 0; 627 + } else { 628 + info->dep_info.curr_nfc_dep_pni = 0; 629 + return 1; 630 + } 631 + break; 632 + default: 633 + return 1; 634 + } 635 + kfree_skb(skb); 636 + return 0; 637 + exit: 638 + return r; 639 + } 640 + 814 641 static struct nfc_hci_ops st21nfca_hci_ops = { 815 642 .open = st21nfca_hci_open, 816 643 .close = st21nfca_hci_close, ··· 862 601 .hci_ready = st21nfca_hci_ready, 863 602 .xmit = st21nfca_hci_xmit, 864 603 .start_poll = st21nfca_hci_start_poll, 604 + .stop_poll = st21nfca_hci_stop_poll, 605 + .dep_link_up = st21nfca_hci_dep_link_up, 606 + .dep_link_down = 
st21nfca_hci_dep_link_down, 865 607 .target_from_gate = st21nfca_hci_target_from_gate, 608 + .complete_target_discovered = st21nfca_hci_complete_target_discovered, 866 609 .im_transceive = st21nfca_hci_im_transceive, 610 + .tm_send = st21nfca_hci_tm_send, 867 611 .check_presence = st21nfca_hci_check_presence, 612 + .event_received = st21nfca_hci_event_received, 868 613 }; 869 614 870 615 int st21nfca_hci_probe(void *phy_id, struct nfc_phy_ops *phy_ops, ··· 915 648 NFC_PROTO_FELICA_MASK | 916 649 NFC_PROTO_ISO14443_MASK | 917 650 NFC_PROTO_ISO14443_B_MASK | 918 - NFC_PROTO_ISO15693_MASK; 651 + NFC_PROTO_ISO15693_MASK | 652 + NFC_PROTO_NFC_DEP_MASK; 919 653 920 654 set_bit(NFC_HCI_QUIRK_SHORT_CLEAR, &quirks); 921 655 ··· 939 671 goto err_regdev; 940 672 941 673 *hdev = info->hdev; 674 + st21nfca_dep_init(info->hdev); 942 675 943 676 return 0; 944 677 ··· 957 688 { 958 689 struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 959 690 691 + st21nfca_dep_deinit(hdev); 960 692 nfc_hci_unregister_device(hdev); 961 693 nfc_hci_free_device(hdev); 962 694 kfree(info);
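The "NFC Forum Digital Protocol Table 44" check in `st21nfca_hci_complete_target_discovered()` above is the crux of P2P detection on the type F gate: an NFCID2 beginning with `0x01 0xfe` marks an NFC-DEP peer, anything else is treated as plain FeliCa. A minimal standalone sketch of that decision (the two mask values are stand-ins for illustration, not the kernel's actual bit positions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kernel's protocol mask bits. */
#define NFC_PROTO_FELICA_MASK  (1u << 4)
#define NFC_PROTO_NFC_DEP_MASK (1u << 6)

/* Mirror of the Table 44 test in st21nfca_hci_complete_target_discovered():
 * a SENSF_RES whose NFCID2 starts with 0x01 0xfe identifies an NFC-DEP
 * (P2P) peer; any other value on the type F gate is a FeliCa tag. */
static uint32_t classify_sensf_res(const uint8_t *sensf_res, size_t len)
{
	if (len >= 2 && sensf_res[0] == 0x01 && sensf_res[1] == 0xfe)
		return NFC_PROTO_NFC_DEP_MASK;
	return NFC_PROTO_FELICA_MASK;
}
```

This is why the poll-side setup earlier in the patch writes `0x01, 0xfe` into the first two NFCID2 bytes when NFC-DEP is requested: the remote initiator applies the same test.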
+25 -1
drivers/nfc/st21nfca/st21nfca.h
···
 
 #include <net/nfc/hci.h>
 
+#include "st21nfca_dep.h"
+
 #define HCI_MODE 0
 
 /* framing in HCI mode */
···
 	data_exchange_cb_t async_cb;
 	void *async_cb_context;
 
-} __packed;
+	struct st21nfca_dep_info dep_info;
+};
 
 /* Reader RF commands */
 #define ST21NFCA_WR_XCHG_DATA 0x10
···
 #define ST21NFCA_RF_READER_F_DATARATE_106 0x01
 #define ST21NFCA_RF_READER_F_DATARATE_212 0x02
 #define ST21NFCA_RF_READER_F_DATARATE_424 0x04
+#define ST21NFCA_RF_READER_F_POL_REQ 0x02
+#define ST21NFCA_RF_READER_F_POL_REQ_DEFAULT 0xffff0000
+#define ST21NFCA_RF_READER_F_NFCID2 0x03
+#define ST21NFCA_RF_READER_F_NFCID1 0x04
+#define ST21NFCA_RF_READER_F_SENS_RES 0x05
+
+#define ST21NFCA_RF_CARD_F_GATE 0x24
+#define ST21NFCA_RF_CARD_F_MODE 0x01
+#define ST21NFCA_RF_CARD_F_NFCID2_LIST 0x04
+#define ST21NFCA_RF_CARD_F_NFCID1 0x05
+#define ST21NFCA_RF_CARD_F_SENS_RES 0x06
+#define ST21NFCA_RF_CARD_F_SEL_RES 0x07
+#define ST21NFCA_RF_CARD_F_DATARATE 0x08
+#define ST21NFCA_RF_CARD_F_DATARATE_106 0x00
+#define ST21NFCA_RF_CARD_F_DATARATE_212_424 0x01
+
+#define ST21NFCA_EVT_SEND_DATA 0x10
+#define ST21NFCA_EVT_FIELD_ON 0x11
+#define ST21NFCA_EVT_CARD_DEACTIVATED 0x12
+#define ST21NFCA_EVT_CARD_ACTIVATED 0x13
+#define ST21NFCA_EVT_FIELD_OFF 0x14
 
 #endif /* __LOCAL_ST21NFCA_H_ */
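The `ST21NFCA_EVT_*` codes added to st21nfca.h are what `st21nfca_hci_event_received()` switches on in the hci.c hunk above. As a sketch only, a hypothetical debug helper (not part of the patch) mapping those IDs to names could look like this, with the constants copied from the header:

```c
#include <assert.h>
#include <string.h>

/* Event IDs copied from the st21nfca.h hunk above. */
#define ST21NFCA_EVT_SEND_DATA		0x10
#define ST21NFCA_EVT_FIELD_ON		0x11
#define ST21NFCA_EVT_CARD_DEACTIVATED	0x12
#define ST21NFCA_EVT_CARD_ACTIVATED	0x13
#define ST21NFCA_EVT_FIELD_OFF		0x14

/* Hypothetical helper: name an HCI card-gate event for trace output. */
static const char *st21nfca_evt_name(unsigned char event)
{
	switch (event) {
	case ST21NFCA_EVT_SEND_DATA:		return "SEND_DATA";
	case ST21NFCA_EVT_FIELD_ON:		return "FIELD_ON";
	case ST21NFCA_EVT_CARD_DEACTIVATED:	return "CARD_DEACTIVATED";
	case ST21NFCA_EVT_CARD_ACTIVATED:	return "CARD_ACTIVATED";
	case ST21NFCA_EVT_FIELD_OFF:		return "FIELD_OFF";
	default:				return "UNKNOWN";
	}
}
```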
+661
drivers/nfc/st21nfca/st21nfca_dep.c
··· 1 + /* 2 + * Copyright (C) 2014 STMicroelectronics SAS. All rights reserved. 3 + * 4 + * This program is free software; you can redistribute it and/or modify it 5 + * under the terms and conditions of the GNU General Public License, 6 + * version 2, as published by the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, 9 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 + * GNU General Public License for more details. 12 + * 13 + * You should have received a copy of the GNU General Public License 14 + * along with this program; if not, see <http://www.gnu.org/licenses/>. 15 + */ 16 + 17 + #include <net/nfc/hci.h> 18 + 19 + #include "st21nfca.h" 20 + #include "st21nfca_dep.h" 21 + 22 + #define ST21NFCA_NFCIP1_INITIATOR 0x00 23 + #define ST21NFCA_NFCIP1_REQ 0xd4 24 + #define ST21NFCA_NFCIP1_RES 0xd5 25 + #define ST21NFCA_NFCIP1_ATR_REQ 0x00 26 + #define ST21NFCA_NFCIP1_ATR_RES 0x01 27 + #define ST21NFCA_NFCIP1_PSL_REQ 0x04 28 + #define ST21NFCA_NFCIP1_PSL_RES 0x05 29 + #define ST21NFCA_NFCIP1_DEP_REQ 0x06 30 + #define ST21NFCA_NFCIP1_DEP_RES 0x07 31 + 32 + #define ST21NFCA_NFC_DEP_PFB_PNI(pfb) ((pfb) & 0x03) 33 + #define ST21NFCA_NFC_DEP_PFB_TYPE(pfb) ((pfb) & 0xE0) 34 + #define ST21NFCA_NFC_DEP_PFB_IS_TIMEOUT(pfb) \ 35 + ((pfb) & ST21NFCA_NFC_DEP_PFB_TIMEOUT_BIT) 36 + #define ST21NFCA_NFC_DEP_DID_BIT_SET(pfb) ((pfb) & 0x04) 37 + #define ST21NFCA_NFC_DEP_NAD_BIT_SET(pfb) ((pfb) & 0x08) 38 + #define ST21NFCA_NFC_DEP_PFB_TIMEOUT_BIT 0x10 39 + 40 + #define ST21NFCA_NFC_DEP_PFB_IS_TIMEOUT(pfb) \ 41 + ((pfb) & ST21NFCA_NFC_DEP_PFB_TIMEOUT_BIT) 42 + 43 + #define ST21NFCA_NFC_DEP_PFB_I_PDU 0x00 44 + #define ST21NFCA_NFC_DEP_PFB_ACK_NACK_PDU 0x40 45 + #define ST21NFCA_NFC_DEP_PFB_SUPERVISOR_PDU 0x80 46 + 47 + #define ST21NFCA_ATR_REQ_MIN_SIZE 17 48 + #define ST21NFCA_ATR_REQ_MAX_SIZE 65 49 + #define ST21NFCA_LR_BITS_PAYLOAD_SIZE_254B 
0x30 50 + #define ST21NFCA_GB_BIT 0x02 51 + 52 + #define ST21NFCA_EVT_CARD_F_BITRATE 0x16 53 + #define ST21NFCA_EVT_READER_F_BITRATE 0x13 54 + #define ST21NFCA_PSL_REQ_SEND_SPEED(brs) (brs & 0x38) 55 + #define ST21NFCA_PSL_REQ_RECV_SPEED(brs) (brs & 0x07) 56 + #define ST21NFCA_PP2LRI(pp) ((pp & 0x30) >> 4) 57 + #define ST21NFCA_CARD_BITRATE_212 0x01 58 + #define ST21NFCA_CARD_BITRATE_424 0x02 59 + 60 + #define ST21NFCA_DEFAULT_TIMEOUT 0x0a 61 + 62 + 63 + #define PROTOCOL_ERR(req) pr_err("%d: ST21NFCA Protocol error: %s\n", \ 64 + __LINE__, req) 65 + 66 + struct st21nfca_atr_req { 67 + u8 length; 68 + u8 cmd0; 69 + u8 cmd1; 70 + u8 nfcid3[NFC_NFCID3_MAXSIZE]; 71 + u8 did; 72 + u8 bsi; 73 + u8 bri; 74 + u8 ppi; 75 + u8 gbi[0]; 76 + } __packed; 77 + 78 + struct st21nfca_atr_res { 79 + u8 length; 80 + u8 cmd0; 81 + u8 cmd1; 82 + u8 nfcid3[NFC_NFCID3_MAXSIZE]; 83 + u8 did; 84 + u8 bsi; 85 + u8 bri; 86 + u8 to; 87 + u8 ppi; 88 + u8 gbi[0]; 89 + } __packed; 90 + 91 + struct st21nfca_psl_req { 92 + u8 length; 93 + u8 cmd0; 94 + u8 cmd1; 95 + u8 did; 96 + u8 brs; 97 + u8 fsl; 98 + } __packed; 99 + 100 + struct st21nfca_psl_res { 101 + u8 length; 102 + u8 cmd0; 103 + u8 cmd1; 104 + u8 did; 105 + } __packed; 106 + 107 + struct st21nfca_dep_req_res { 108 + u8 length; 109 + u8 cmd0; 110 + u8 cmd1; 111 + u8 pfb; 112 + u8 did; 113 + u8 nad; 114 + } __packed; 115 + 116 + static void st21nfca_tx_work(struct work_struct *work) 117 + { 118 + struct st21nfca_hci_info *info = container_of(work, 119 + struct st21nfca_hci_info, 120 + dep_info.tx_work); 121 + 122 + struct nfc_dev *dev; 123 + struct sk_buff *skb; 124 + if (info) { 125 + dev = info->hdev->ndev; 126 + skb = info->dep_info.tx_pending; 127 + 128 + device_lock(&dev->dev); 129 + 130 + nfc_hci_send_cmd_async(info->hdev, ST21NFCA_RF_READER_F_GATE, 131 + ST21NFCA_WR_XCHG_DATA, 132 + skb->data, skb->len, 133 + info->async_cb, info); 134 + device_unlock(&dev->dev); 135 + kfree_skb(skb); 136 + } 137 + } 138 + 139 + static void 
st21nfca_im_send_pdu(struct st21nfca_hci_info *info, 140 + struct sk_buff *skb) 141 + { 142 + info->dep_info.tx_pending = skb; 143 + schedule_work(&info->dep_info.tx_work); 144 + } 145 + 146 + static int st21nfca_tm_send_atr_res(struct nfc_hci_dev *hdev, 147 + struct st21nfca_atr_req *atr_req) 148 + { 149 + struct st21nfca_atr_res *atr_res; 150 + struct sk_buff *skb; 151 + size_t gb_len; 152 + int r; 153 + struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 154 + 155 + gb_len = atr_req->length - sizeof(struct st21nfca_atr_req); 156 + skb = alloc_skb(atr_req->length + 1, GFP_KERNEL); 157 + if (!skb) 158 + return -ENOMEM; 159 + 160 + skb_put(skb, sizeof(struct st21nfca_atr_res)); 161 + 162 + atr_res = (struct st21nfca_atr_res *)skb->data; 163 + memset(atr_res, 0, sizeof(struct st21nfca_atr_res)); 164 + 165 + atr_res->length = atr_req->length + 1; 166 + atr_res->cmd0 = ST21NFCA_NFCIP1_RES; 167 + atr_res->cmd1 = ST21NFCA_NFCIP1_ATR_RES; 168 + 169 + memcpy(atr_res->nfcid3, atr_req->nfcid3, 6); 170 + atr_res->bsi = 0x00; 171 + atr_res->bri = 0x00; 172 + atr_res->to = ST21NFCA_DEFAULT_TIMEOUT; 173 + atr_res->ppi = ST21NFCA_LR_BITS_PAYLOAD_SIZE_254B; 174 + 175 + if (gb_len) { 176 + skb_put(skb, gb_len); 177 + 178 + atr_res->ppi |= ST21NFCA_GB_BIT; 179 + memcpy(atr_res->gbi, atr_req->gbi, gb_len); 180 + r = nfc_set_remote_general_bytes(hdev->ndev, atr_res->gbi, 181 + gb_len); 182 + if (r < 0) 183 + return r; 184 + } 185 + 186 + info->dep_info.curr_nfc_dep_pni = 0; 187 + 188 + return nfc_hci_send_event(hdev, ST21NFCA_RF_CARD_F_GATE, 189 + ST21NFCA_EVT_SEND_DATA, skb->data, skb->len); 190 + } 191 + 192 + static int st21nfca_tm_recv_atr_req(struct nfc_hci_dev *hdev, 193 + struct sk_buff *skb) 194 + { 195 + struct st21nfca_atr_req *atr_req; 196 + size_t gb_len; 197 + int r; 198 + 199 + skb_trim(skb, skb->len - 1); 200 + if (IS_ERR(skb)) { 201 + r = PTR_ERR(skb); 202 + goto exit; 203 + } 204 + 205 + if (!skb->len) { 206 + r = -EIO; 207 + goto exit; 208 + } 209 + 210 + 
if (skb->len < ST21NFCA_ATR_REQ_MIN_SIZE) { 211 + r = -EPROTO; 212 + goto exit; 213 + } 214 + 215 + atr_req = (struct st21nfca_atr_req *)skb->data; 216 + 217 + r = st21nfca_tm_send_atr_res(hdev, atr_req); 218 + if (r) 219 + goto exit; 220 + 221 + gb_len = skb->len - sizeof(struct st21nfca_atr_req); 222 + 223 + r = nfc_tm_activated(hdev->ndev, NFC_PROTO_NFC_DEP_MASK, 224 + NFC_COMM_PASSIVE, atr_req->gbi, gb_len); 225 + if (r) 226 + goto exit; 227 + 228 + r = 0; 229 + 230 + exit: 231 + return r; 232 + } 233 + 234 + static int st21nfca_tm_send_psl_res(struct nfc_hci_dev *hdev, 235 + struct st21nfca_psl_req *psl_req) 236 + { 237 + struct st21nfca_psl_res *psl_res; 238 + struct sk_buff *skb; 239 + u8 bitrate[2] = {0, 0}; 240 + 241 + int r; 242 + 243 + skb = alloc_skb(sizeof(struct st21nfca_psl_res), GFP_KERNEL); 244 + if (!skb) 245 + return -ENOMEM; 246 + skb_put(skb, sizeof(struct st21nfca_psl_res)); 247 + 248 + psl_res = (struct st21nfca_psl_res *)skb->data; 249 + 250 + psl_res->length = sizeof(struct st21nfca_psl_res); 251 + psl_res->cmd0 = ST21NFCA_NFCIP1_RES; 252 + psl_res->cmd1 = ST21NFCA_NFCIP1_PSL_RES; 253 + psl_res->did = psl_req->did; 254 + 255 + r = nfc_hci_send_event(hdev, ST21NFCA_RF_CARD_F_GATE, 256 + ST21NFCA_EVT_SEND_DATA, skb->data, skb->len); 257 + 258 + /* 259 + * ST21NFCA only support P2P passive. 260 + * PSL_REQ BRS value != 0 has only a meaning to 261 + * change technology to type F. 262 + * We change to BITRATE 424Kbits. 263 + * In other case switch to BITRATE 106Kbits. 
264 + */ 265 + if (ST21NFCA_PSL_REQ_SEND_SPEED(psl_req->brs) && 266 + ST21NFCA_PSL_REQ_RECV_SPEED(psl_req->brs)) { 267 + bitrate[0] = ST21NFCA_CARD_BITRATE_424; 268 + bitrate[1] = ST21NFCA_CARD_BITRATE_424; 269 + } 270 + 271 + /* Send an event to change bitrate change event to card f */ 272 + return nfc_hci_send_event(hdev, ST21NFCA_RF_CARD_F_GATE, 273 + ST21NFCA_EVT_CARD_F_BITRATE, bitrate, 2); 274 + } 275 + 276 + static int st21nfca_tm_recv_psl_req(struct nfc_hci_dev *hdev, 277 + struct sk_buff *skb) 278 + { 279 + struct st21nfca_psl_req *psl_req; 280 + int r; 281 + 282 + skb_trim(skb, skb->len - 1); 283 + if (IS_ERR(skb)) { 284 + r = PTR_ERR(skb); 285 + skb = NULL; 286 + goto exit; 287 + } 288 + 289 + if (!skb->len) { 290 + r = -EIO; 291 + goto exit; 292 + } 293 + 294 + psl_req = (struct st21nfca_psl_req *)skb->data; 295 + 296 + if (skb->len < sizeof(struct st21nfca_psl_req)) { 297 + r = -EIO; 298 + goto exit; 299 + } 300 + 301 + r = st21nfca_tm_send_psl_res(hdev, psl_req); 302 + exit: 303 + return r; 304 + } 305 + 306 + int st21nfca_tm_send_dep_res(struct nfc_hci_dev *hdev, struct sk_buff *skb) 307 + { 308 + int r; 309 + struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 310 + 311 + *skb_push(skb, 1) = info->dep_info.curr_nfc_dep_pni; 312 + *skb_push(skb, 1) = ST21NFCA_NFCIP1_DEP_RES; 313 + *skb_push(skb, 1) = ST21NFCA_NFCIP1_RES; 314 + *skb_push(skb, 1) = skb->len; 315 + 316 + r = nfc_hci_send_event(hdev, ST21NFCA_RF_CARD_F_GATE, 317 + ST21NFCA_EVT_SEND_DATA, skb->data, skb->len); 318 + kfree_skb(skb); 319 + 320 + return r; 321 + } 322 + EXPORT_SYMBOL(st21nfca_tm_send_dep_res); 323 + 324 + static int st21nfca_tm_recv_dep_req(struct nfc_hci_dev *hdev, 325 + struct sk_buff *skb) 326 + { 327 + struct st21nfca_dep_req_res *dep_req; 328 + u8 size; 329 + int r; 330 + struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 331 + 332 + skb_trim(skb, skb->len - 1); 333 + if (IS_ERR(skb)) { 334 + r = PTR_ERR(skb); 335 + skb = NULL; 336 + goto exit; 
337 + } 338 + 339 + size = 4; 340 + 341 + dep_req = (struct st21nfca_dep_req_res *)skb->data; 342 + if (skb->len < size) { 343 + r = -EIO; 344 + goto exit; 345 + } 346 + 347 + if (ST21NFCA_NFC_DEP_DID_BIT_SET(dep_req->pfb)) 348 + size++; 349 + if (ST21NFCA_NFC_DEP_NAD_BIT_SET(dep_req->pfb)) 350 + size++; 351 + 352 + if (skb->len < size) { 353 + r = -EIO; 354 + goto exit; 355 + } 356 + 357 + /* Receiving DEP_REQ - Decoding */ 358 + switch (ST21NFCA_NFC_DEP_PFB_TYPE(dep_req->pfb)) { 359 + case ST21NFCA_NFC_DEP_PFB_I_PDU: 360 + info->dep_info.curr_nfc_dep_pni = 361 + ST21NFCA_NFC_DEP_PFB_PNI(dep_req->pfb); 362 + break; 363 + case ST21NFCA_NFC_DEP_PFB_ACK_NACK_PDU: 364 + pr_err("Received a ACK/NACK PDU\n"); 365 + break; 366 + case ST21NFCA_NFC_DEP_PFB_SUPERVISOR_PDU: 367 + pr_err("Received a SUPERVISOR PDU\n"); 368 + break; 369 + } 370 + 371 + if (IS_ERR(skb)) { 372 + r = PTR_ERR(skb); 373 + skb = NULL; 374 + goto exit; 375 + } 376 + 377 + skb_pull(skb, size); 378 + 379 + return nfc_tm_data_received(hdev->ndev, skb); 380 + exit: 381 + return r; 382 + } 383 + 384 + int st21nfca_tm_event_send_data(struct nfc_hci_dev *hdev, struct sk_buff *skb, 385 + u8 gate) 386 + { 387 + u8 cmd0, cmd1; 388 + int r; 389 + 390 + cmd0 = skb->data[1]; 391 + switch (cmd0) { 392 + case ST21NFCA_NFCIP1_REQ: 393 + cmd1 = skb->data[2]; 394 + switch (cmd1) { 395 + case ST21NFCA_NFCIP1_ATR_REQ: 396 + r = st21nfca_tm_recv_atr_req(hdev, skb); 397 + break; 398 + case ST21NFCA_NFCIP1_PSL_REQ: 399 + r = st21nfca_tm_recv_psl_req(hdev, skb); 400 + break; 401 + case ST21NFCA_NFCIP1_DEP_REQ: 402 + r = st21nfca_tm_recv_dep_req(hdev, skb); 403 + break; 404 + default: 405 + return 1; 406 + } 407 + default: 408 + return 1; 409 + } 410 + return r; 411 + } 412 + EXPORT_SYMBOL(st21nfca_tm_event_send_data); 413 + 414 + static void st21nfca_im_send_psl_req(struct nfc_hci_dev *hdev, u8 did, u8 bsi, 415 + u8 bri, u8 lri) 416 + { 417 + struct sk_buff *skb; 418 + struct st21nfca_psl_req *psl_req; 419 + struct 
st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 420 + 421 + skb = 422 + alloc_skb(sizeof(struct st21nfca_psl_req) + 1, GFP_KERNEL); 423 + if (!skb) 424 + return; 425 + skb_reserve(skb, 1); 426 + 427 + skb_put(skb, sizeof(struct st21nfca_psl_req)); 428 + psl_req = (struct st21nfca_psl_req *) skb->data; 429 + 430 + psl_req->length = sizeof(struct st21nfca_psl_req); 431 + psl_req->cmd0 = ST21NFCA_NFCIP1_REQ; 432 + psl_req->cmd1 = ST21NFCA_NFCIP1_PSL_REQ; 433 + psl_req->did = did; 434 + psl_req->brs = (0x30 & bsi << 4) | (bri & 0x03); 435 + psl_req->fsl = lri; 436 + 437 + *skb_push(skb, 1) = info->dep_info.to | 0x10; 438 + 439 + st21nfca_im_send_pdu(info, skb); 440 + 441 + kfree_skb(skb); 442 + } 443 + 444 + #define ST21NFCA_CB_TYPE_READER_F 1 445 + static void st21nfca_im_recv_atr_res_cb(void *context, struct sk_buff *skb, 446 + int err) 447 + { 448 + struct st21nfca_hci_info *info = context; 449 + struct st21nfca_atr_res *atr_res; 450 + int r; 451 + 452 + if (err != 0) 453 + return; 454 + 455 + if (IS_ERR(skb)) 456 + return; 457 + 458 + switch (info->async_cb_type) { 459 + case ST21NFCA_CB_TYPE_READER_F: 460 + skb_trim(skb, skb->len - 1); 461 + atr_res = (struct st21nfca_atr_res *)skb->data; 462 + r = nfc_set_remote_general_bytes(info->hdev->ndev, 463 + atr_res->gbi, 464 + skb->len - sizeof(struct st21nfca_atr_res)); 465 + if (r < 0) 466 + return; 467 + 468 + if (atr_res->to >= 0x0e) 469 + info->dep_info.to = 0x0e; 470 + else 471 + info->dep_info.to = atr_res->to + 1; 472 + 473 + info->dep_info.to |= 0x10; 474 + 475 + r = nfc_dep_link_is_up(info->hdev->ndev, info->dep_info.idx, 476 + NFC_COMM_PASSIVE, NFC_RF_INITIATOR); 477 + if (r < 0) 478 + return; 479 + 480 + info->dep_info.curr_nfc_dep_pni = 0; 481 + if (ST21NFCA_PP2LRI(atr_res->ppi) != info->dep_info.lri) 482 + st21nfca_im_send_psl_req(info->hdev, atr_res->did, 483 + atr_res->bsi, atr_res->bri, 484 + ST21NFCA_PP2LRI(atr_res->ppi)); 485 + break; 486 + default: 487 + if (err == 0) 488 + kfree_skb(skb); 489 
+ break; 490 + } 491 + } 492 + 493 + int st21nfca_im_send_atr_req(struct nfc_hci_dev *hdev, u8 *gb, size_t gb_len) 494 + { 495 + struct sk_buff *skb; 496 + struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 497 + struct st21nfca_atr_req *atr_req; 498 + struct nfc_target *target; 499 + uint size; 500 + 501 + info->dep_info.to = ST21NFCA_DEFAULT_TIMEOUT; 502 + size = ST21NFCA_ATR_REQ_MIN_SIZE + gb_len; 503 + if (size > ST21NFCA_ATR_REQ_MAX_SIZE) { 504 + PROTOCOL_ERR("14.6.1.1"); 505 + return -EINVAL; 506 + } 507 + 508 + skb = 509 + alloc_skb(sizeof(struct st21nfca_atr_req) + gb_len + 1, GFP_KERNEL); 510 + if (!skb) 511 + return -ENOMEM; 512 + 513 + skb_reserve(skb, 1); 514 + 515 + skb_put(skb, sizeof(struct st21nfca_atr_req)); 516 + 517 + atr_req = (struct st21nfca_atr_req *)skb->data; 518 + memset(atr_req, 0, sizeof(struct st21nfca_atr_req)); 519 + 520 + atr_req->cmd0 = ST21NFCA_NFCIP1_REQ; 521 + atr_req->cmd1 = ST21NFCA_NFCIP1_ATR_REQ; 522 + memset(atr_req->nfcid3, 0, NFC_NFCID3_MAXSIZE); 523 + target = hdev->ndev->targets; 524 + 525 + if (target->sensf_res) 526 + memcpy(atr_req->nfcid3, target->sensf_res, 527 + target->sensf_res_len); 528 + else 529 + get_random_bytes(atr_req->nfcid3, NFC_NFCID3_MAXSIZE); 530 + 531 + atr_req->did = 0x0; 532 + 533 + atr_req->bsi = 0x00; 534 + atr_req->bri = 0x00; 535 + atr_req->ppi = ST21NFCA_LR_BITS_PAYLOAD_SIZE_254B; 536 + if (gb_len) { 537 + atr_req->ppi |= ST21NFCA_GB_BIT; 538 + memcpy(skb_put(skb, gb_len), gb, gb_len); 539 + } 540 + atr_req->length = sizeof(struct st21nfca_atr_req) + hdev->gb_len; 541 + 542 + *skb_push(skb, 1) = info->dep_info.to | 0x10; /* timeout */ 543 + 544 + info->async_cb_type = ST21NFCA_CB_TYPE_READER_F; 545 + info->async_cb_context = info; 546 + info->async_cb = st21nfca_im_recv_atr_res_cb; 547 + info->dep_info.bri = atr_req->bri; 548 + info->dep_info.bsi = atr_req->bsi; 549 + info->dep_info.lri = ST21NFCA_PP2LRI(atr_req->ppi); 550 + 551 + return nfc_hci_send_cmd_async(hdev, 
ST21NFCA_RF_READER_F_GATE, 552 + ST21NFCA_WR_XCHG_DATA, skb->data, 553 + skb->len, info->async_cb, info); 554 + } 555 + EXPORT_SYMBOL(st21nfca_im_send_atr_req); 556 + 557 + static void st21nfca_im_recv_dep_res_cb(void *context, struct sk_buff *skb, 558 + int err) 559 + { 560 + struct st21nfca_hci_info *info = context; 561 + struct st21nfca_dep_req_res *dep_res; 562 + 563 + int size; 564 + 565 + if (err != 0) 566 + return; 567 + 568 + if (IS_ERR(skb)) 569 + return; 570 + 571 + switch (info->async_cb_type) { 572 + case ST21NFCA_CB_TYPE_READER_F: 573 + dep_res = (struct st21nfca_dep_req_res *)skb->data; 574 + 575 + size = 3; 576 + if (skb->len < size) 577 + goto exit; 578 + 579 + if (ST21NFCA_NFC_DEP_DID_BIT_SET(dep_res->pfb)) 580 + size++; 581 + if (ST21NFCA_NFC_DEP_NAD_BIT_SET(dep_res->pfb)) 582 + size++; 583 + 584 + if (skb->len < size) 585 + goto exit; 586 + 587 + skb_trim(skb, skb->len - 1); 588 + 589 + /* Receiving DEP_REQ - Decoding */ 590 + switch (ST21NFCA_NFC_DEP_PFB_TYPE(dep_res->pfb)) { 591 + case ST21NFCA_NFC_DEP_PFB_ACK_NACK_PDU: 592 + pr_err("Received a ACK/NACK PDU\n"); 593 + case ST21NFCA_NFC_DEP_PFB_I_PDU: 594 + info->dep_info.curr_nfc_dep_pni = 595 + ST21NFCA_NFC_DEP_PFB_PNI(dep_res->pfb + 1); 596 + size++; 597 + skb_pull(skb, size); 598 + nfc_tm_data_received(info->hdev->ndev, skb); 599 + break; 600 + case ST21NFCA_NFC_DEP_PFB_SUPERVISOR_PDU: 601 + pr_err("Received a SUPERVISOR PDU\n"); 602 + skb_pull(skb, size); 603 + *skb_push(skb, 1) = ST21NFCA_NFCIP1_DEP_REQ; 604 + *skb_push(skb, 1) = ST21NFCA_NFCIP1_REQ; 605 + *skb_push(skb, 1) = skb->len; 606 + *skb_push(skb, 1) = info->dep_info.to | 0x10; 607 + 608 + st21nfca_im_send_pdu(info, skb); 609 + break; 610 + } 611 + 612 + return; 613 + default: 614 + break; 615 + } 616 + 617 + exit: 618 + if (err == 0) 619 + kfree_skb(skb); 620 + } 621 + 622 + int st21nfca_im_send_dep_req(struct nfc_hci_dev *hdev, struct sk_buff *skb) 623 + { 624 + struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 625 
+ 626 + info->async_cb_type = ST21NFCA_CB_TYPE_READER_F; 627 + info->async_cb_context = info; 628 + info->async_cb = st21nfca_im_recv_dep_res_cb; 629 + 630 + *skb_push(skb, 1) = info->dep_info.curr_nfc_dep_pni; 631 + *skb_push(skb, 1) = ST21NFCA_NFCIP1_DEP_REQ; 632 + *skb_push(skb, 1) = ST21NFCA_NFCIP1_REQ; 633 + *skb_push(skb, 1) = skb->len; 634 + 635 + *skb_push(skb, 1) = info->dep_info.to | 0x10; 636 + 637 + return nfc_hci_send_cmd_async(hdev, ST21NFCA_RF_READER_F_GATE, 638 + ST21NFCA_WR_XCHG_DATA, 639 + skb->data, skb->len, 640 + info->async_cb, info); 641 + } 642 + EXPORT_SYMBOL(st21nfca_im_send_dep_req); 643 + 644 + void st21nfca_dep_init(struct nfc_hci_dev *hdev) 645 + { 646 + struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 647 + 648 + INIT_WORK(&info->dep_info.tx_work, st21nfca_tx_work); 649 + info->dep_info.curr_nfc_dep_pni = 0; 650 + info->dep_info.idx = 0; 651 + info->dep_info.to = ST21NFCA_DEFAULT_TIMEOUT; 652 + } 653 + EXPORT_SYMBOL(st21nfca_dep_init); 654 + 655 + void st21nfca_dep_deinit(struct nfc_hci_dev *hdev) 656 + { 657 + struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); 658 + 659 + cancel_work_sync(&info->dep_info.tx_work); 660 + } 661 + EXPORT_SYMBOL(st21nfca_dep_deinit);
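The PFB byte decoded throughout st21nfca_dep.c packs the PDU type in its top bits, the DID/NAD presence flags in bits 2 and 3, and the packet number (PNI) in the low two bits; `st21nfca_tm_recv_dep_req()` grows the expected header size from a base of 4 bytes (length, cmd0, cmd1, pfb) when DID or NAD are present. A standalone sketch of that size computation, reusing the macros from the file above:

```c
#include <assert.h>
#include <stdint.h>

/* PFB bit-field macros as defined in st21nfca_dep.c. */
#define ST21NFCA_NFC_DEP_PFB_PNI(pfb)		((pfb) & 0x03)
#define ST21NFCA_NFC_DEP_PFB_TYPE(pfb)		((pfb) & 0xE0)
#define ST21NFCA_NFC_DEP_DID_BIT_SET(pfb)	((pfb) & 0x04)
#define ST21NFCA_NFC_DEP_NAD_BIT_SET(pfb)	((pfb) & 0x08)
#define ST21NFCA_NFC_DEP_PFB_I_PDU		0x00
#define ST21NFCA_NFC_DEP_PFB_ACK_NACK_PDU	0x40
#define ST21NFCA_NFC_DEP_PFB_SUPERVISOR_PDU	0x80

/* Sketch of the header-size logic in st21nfca_tm_recv_dep_req():
 * the base DEP_REQ header is 4 bytes; the optional DID and NAD bytes
 * are only present when their PFB bits are set. */
static int dep_req_header_size(uint8_t pfb)
{
	int size = 4;

	if (ST21NFCA_NFC_DEP_DID_BIT_SET(pfb))
		size++;
	if (ST21NFCA_NFC_DEP_NAD_BIT_SET(pfb))
		size++;
	return size;
}
```

The same DID/NAD accounting appears again on the initiator side in `st21nfca_im_recv_dep_res_cb()`, starting from a 3-byte base there because the length byte is handled separately.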
+43
drivers/nfc/st21nfca/st21nfca_dep.h
···
+/*
+ * Copyright (C) 2014 STMicroelectronics SAS. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ST21NFCA_DEP_H
+#define __ST21NFCA_DEP_H
+
+#include <linux/skbuff.h>
+#include <linux/workqueue.h>
+
+struct st21nfca_dep_info {
+	struct sk_buff *tx_pending;
+	struct work_struct tx_work;
+	u8 curr_nfc_dep_pni;
+	u32 idx;
+	u8 to;
+	u8 did;
+	u8 bsi;
+	u8 bri;
+	u8 lri;
+} __packed;
+
+int st21nfca_tm_event_send_data(struct nfc_hci_dev *hdev, struct sk_buff *skb,
+				u8 gate);
+int st21nfca_tm_send_dep_res(struct nfc_hci_dev *hdev, struct sk_buff *skb);
+
+int st21nfca_im_send_atr_req(struct nfc_hci_dev *hdev, u8 *gb, size_t gb_len);
+int st21nfca_im_send_dep_req(struct nfc_hci_dev *hdev, struct sk_buff *skb);
+void st21nfca_dep_init(struct nfc_hci_dev *hdev);
+void st21nfca_dep_deinit(struct nfc_hci_dev *hdev);
+#endif /* __ST21NFCA_DEP_H */
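`st21nfca_im_send_atr_req()` in st21nfca_dep.c frames the outgoing ATR_REQ with a leading timeout byte (`to | 0x10`) reserved via `skb_reserve()`/`skb_push()`, and sets the ATR_REQ length byte to the 17-byte fixed header plus the general bytes, rejecting anything over 65 bytes (the `PROTOCOL_ERR("14.6.1.1")` case). A hypothetical flat-buffer sketch of that layout (the helper name is invented; field offsets follow `struct st21nfca_atr_req`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ST21NFCA_ATR_REQ_MIN_SIZE 17	/* fixed part: length..ppi */
#define ST21NFCA_ATR_REQ_MAX_SIZE 65

/* Invented helper: lay out the timeout prefix plus ATR_REQ in a flat
 * buffer. Returns the total byte count, or 0 if the frame would exceed
 * the NFCIP-1 maximum. nfcid3/did/bsi/bri/ppi are left zeroed here. */
static size_t frame_atr_req(uint8_t *out, uint8_t to,
			    const uint8_t *gb, size_t gb_len)
{
	if (ST21NFCA_ATR_REQ_MIN_SIZE + gb_len > ST21NFCA_ATR_REQ_MAX_SIZE)
		return 0;

	out[0] = (uint8_t)(to | 0x10);	/* RF timeout prefix byte */
	out[1] = (uint8_t)(ST21NFCA_ATR_REQ_MIN_SIZE + gb_len);	/* length */
	out[2] = 0xd4;			/* ST21NFCA_NFCIP1_REQ */
	out[3] = 0x00;			/* ST21NFCA_NFCIP1_ATR_REQ */
	memset(out + 4, 0, ST21NFCA_ATR_REQ_MIN_SIZE - 3);	/* nfcid3..ppi */
	memcpy(out + 1 + ST21NFCA_ATR_REQ_MIN_SIZE, gb, gb_len);
	return 1 + ST21NFCA_ATR_REQ_MIN_SIZE + gb_len;
}
```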
+22
drivers/nfc/st21nfcb/Kconfig
···
+config NFC_ST21NFCB
+	tristate "STMicroelectronics ST21NFCB NFC driver"
+	depends on NFC_NCI
+	default n
+	---help---
+	  STMicroelectronics ST21NFCB core driver. It implements the chipset
+	  NCI logic and hooks into the NFC kernel APIs. Physical layers will
+	  register against it.
+
+	  To compile this driver as a module, choose m here. The module will
+	  be called st21nfcb.
+	  Say N if unsure.
+
+config NFC_ST21NFCB_I2C
+	tristate "NFC ST21NFCB i2c support"
+	depends on NFC_ST21NFCB && I2C
+	---help---
+	  This module adds support for the STMicroelectronics st21nfcb i2c
+	  interface. Select this if your platform is using the i2c bus.
+
+	  If you choose to build a module, it'll be called st21nfcb_i2c.
+	  Say N if unsure.
+8
drivers/nfc/st21nfcb/Makefile
···
+#
+# Makefile for ST21NFCB NCI based NFC driver
+#
+
+st21nfcb_i2c-objs = i2c.o
+
+obj-$(CONFIG_NFC_ST21NFCB) += st21nfcb.o ndlc.o
+obj-$(CONFIG_NFC_ST21NFCB_I2C) += st21nfcb_i2c.o
+462
drivers/nfc/st21nfcb/i2c.c
··· 1 + /* 2 + * I2C Link Layer for ST21NFCB NCI based Driver 3 + * Copyright (C) 2014 STMicroelectronics SAS. All rights reserved. 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms and conditions of the GNU General Public License, 7 + * version 2, as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program; if not, see <http://www.gnu.org/licenses/>. 16 + */ 17 + 18 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 19 + 20 + #include <linux/crc-ccitt.h> 21 + #include <linux/module.h> 22 + #include <linux/i2c.h> 23 + #include <linux/gpio.h> 24 + #include <linux/of_irq.h> 25 + #include <linux/of_gpio.h> 26 + #include <linux/miscdevice.h> 27 + #include <linux/interrupt.h> 28 + #include <linux/delay.h> 29 + #include <linux/nfc.h> 30 + #include <linux/firmware.h> 31 + #include <linux/unaligned/access_ok.h> 32 + #include <linux/platform_data/st21nfcb.h> 33 + 34 + #include <net/nfc/nci.h> 35 + #include <net/nfc/llc.h> 36 + #include <net/nfc/nfc.h> 37 + 38 + #include "ndlc.h" 39 + 40 + #define DRIVER_DESC "NCI NFC driver for ST21NFCB" 41 + 42 + /* ndlc header */ 43 + #define ST21NFCB_FRAME_HEADROOM 1 44 + #define ST21NFCB_FRAME_TAILROOM 0 45 + 46 + #define ST21NFCB_NCI_I2C_MIN_SIZE 4 /* PCB(1) + NCI Packet header(3) */ 47 + #define ST21NFCB_NCI_I2C_MAX_SIZE 250 /* req 4.2.1 */ 48 + 49 + #define ST21NFCB_NCI_I2C_DRIVER_NAME "st21nfcb_nci_i2c" 50 + 51 + static struct i2c_device_id st21nfcb_nci_i2c_id_table[] = { 52 + {ST21NFCB_NCI_DRIVER_NAME, 0}, 53 + {} 54 + }; 55 + MODULE_DEVICE_TABLE(i2c, st21nfcb_nci_i2c_id_table); 56 + 57 + struct st21nfcb_i2c_phy { 58 + struct 
i2c_client *i2c_dev;
+	struct llt_ndlc *ndlc;
+
+	unsigned int gpio_irq;
+	unsigned int gpio_reset;
+	unsigned int irq_polarity;
+
+	int powered;
+
+	/*
+	 * < 0 if hardware error occurred (e.g. i2c err)
+	 * and prevents normal operation.
+	 */
+	int hard_fault;
+};
+
+#define I2C_DUMP_SKB(info, skb)					\
+do {								\
+	pr_debug("%s:\n", info);				\
+	print_hex_dump(KERN_DEBUG, "i2c: ", DUMP_PREFIX_OFFSET,	\
+		       16, 1, (skb)->data, (skb)->len, 0);	\
+} while (0)
+
+static int st21nfcb_nci_i2c_enable(void *phy_id)
+{
+	struct st21nfcb_i2c_phy *phy = phy_id;
+
+	gpio_set_value(phy->gpio_reset, 0);
+	usleep_range(10000, 15000);
+	gpio_set_value(phy->gpio_reset, 1);
+	phy->powered = 1;
+	usleep_range(80000, 85000);
+
+	return 0;
+}
+
+static void st21nfcb_nci_i2c_disable(void *phy_id)
+{
+	struct st21nfcb_i2c_phy *phy = phy_id;
+
+	pr_info("\n");
+
+	phy->powered = 0;
+	/* reset chip in order to flush clf */
+	gpio_set_value(phy->gpio_reset, 0);
+	usleep_range(10000, 15000);
+	gpio_set_value(phy->gpio_reset, 1);
+}
+
+static void st21nfcb_nci_remove_header(struct sk_buff *skb)
+{
+	skb_pull(skb, ST21NFCB_FRAME_HEADROOM);
+}
+
+/*
+ * Writing a frame must not return the number of written bytes.
+ * It must return either zero for success, or <0 for error.
115 + * In addition, it must not alter the skb 116 + */ 117 + static int st21nfcb_nci_i2c_write(void *phy_id, struct sk_buff *skb) 118 + { 119 + int r = -1; 120 + struct st21nfcb_i2c_phy *phy = phy_id; 121 + struct i2c_client *client = phy->i2c_dev; 122 + 123 + I2C_DUMP_SKB("st21nfcb_nci_i2c_write", skb); 124 + 125 + if (phy->hard_fault != 0) 126 + return phy->hard_fault; 127 + 128 + r = i2c_master_send(client, skb->data, skb->len); 129 + if (r == -EREMOTEIO) { /* Retry, chip was in standby */ 130 + usleep_range(1000, 4000); 131 + r = i2c_master_send(client, skb->data, skb->len); 132 + } 133 + 134 + if (r >= 0) { 135 + if (r != skb->len) 136 + r = -EREMOTEIO; 137 + else 138 + r = 0; 139 + } 140 + 141 + st21nfcb_nci_remove_header(skb); 142 + 143 + return r; 144 + } 145 + 146 + /* 147 + * Reads an ndlc frame and returns it in a newly allocated sk_buff. 148 + * returns: 149 + * frame size : if received frame is complete (find ST21NFCB_SOF_EOF at 150 + * end of read) 151 + * -EAGAIN : if received frame is incomplete (not find ST21NFCB_SOF_EOF 152 + * at end of read) 153 + * -EREMOTEIO : i2c read error (fatal) 154 + * -EBADMSG : frame was incorrect and discarded 155 + * (value returned from st21nfcb_nci_i2c_repack) 156 + * -EIO : if no ST21NFCB_SOF_EOF is found after reaching 157 + * the read length end sequence 158 + */ 159 + static int st21nfcb_nci_i2c_read(struct st21nfcb_i2c_phy *phy, 160 + struct sk_buff **skb) 161 + { 162 + int r; 163 + u8 len; 164 + u8 buf[ST21NFCB_NCI_I2C_MAX_SIZE]; 165 + struct i2c_client *client = phy->i2c_dev; 166 + 167 + r = i2c_master_recv(client, buf, ST21NFCB_NCI_I2C_MIN_SIZE); 168 + if (r == -EREMOTEIO) { /* Retry, chip was in standby */ 169 + usleep_range(1000, 4000); 170 + r = i2c_master_recv(client, buf, ST21NFCB_NCI_I2C_MIN_SIZE); 171 + } else if (r != ST21NFCB_NCI_I2C_MIN_SIZE) { 172 + nfc_err(&client->dev, "cannot read ndlc & nci header\n"); 173 + return -EREMOTEIO; 174 + } 175 + 176 + len = be16_to_cpu(*(__be16 *) (buf + 2)); 177 
+ if (len > ST21NFCB_NCI_I2C_MAX_SIZE) { 178 + nfc_err(&client->dev, "invalid frame len\n"); 179 + return -EBADMSG; 180 + } 181 + 182 + *skb = alloc_skb(ST21NFCB_NCI_I2C_MIN_SIZE + len, GFP_KERNEL); 183 + if (*skb == NULL) 184 + return -ENOMEM; 185 + 186 + skb_reserve(*skb, ST21NFCB_NCI_I2C_MIN_SIZE); 187 + skb_put(*skb, ST21NFCB_NCI_I2C_MIN_SIZE); 188 + memcpy((*skb)->data, buf, ST21NFCB_NCI_I2C_MIN_SIZE); 189 + 190 + if (!len) 191 + return 0; 192 + 193 + r = i2c_master_recv(client, buf, len); 194 + if (r != len) { 195 + kfree_skb(*skb); 196 + return -EREMOTEIO; 197 + } 198 + 199 + skb_put(*skb, len); 200 + memcpy((*skb)->data + ST21NFCB_NCI_I2C_MIN_SIZE, buf, len); 201 + 202 + I2C_DUMP_SKB("i2c frame read", *skb); 203 + 204 + return 0; 205 + } 206 + 207 + /* 208 + * Reads an ndlc frame from the chip. 209 + * 210 + * On ST21NFCB, IRQ goes in idle state when read starts. 211 + */ 212 + static irqreturn_t st21nfcb_nci_irq_thread_fn(int irq, void *phy_id) 213 + { 214 + struct st21nfcb_i2c_phy *phy = phy_id; 215 + struct i2c_client *client; 216 + struct sk_buff *skb = NULL; 217 + int r; 218 + 219 + if (!phy || irq != phy->i2c_dev->irq) { 220 + WARN_ON_ONCE(1); 221 + return IRQ_NONE; 222 + } 223 + 224 + client = phy->i2c_dev; 225 + dev_dbg(&client->dev, "IRQ\n"); 226 + 227 + if (phy->hard_fault) 228 + return IRQ_HANDLED; 229 + 230 + if (!phy->powered) { 231 + st21nfcb_nci_i2c_disable(phy); 232 + return IRQ_HANDLED; 233 + } 234 + 235 + r = st21nfcb_nci_i2c_read(phy, &skb); 236 + if (r == -EREMOTEIO) { 237 + phy->hard_fault = r; 238 + ndlc_recv(phy->ndlc, NULL); 239 + return IRQ_HANDLED; 240 + } else if (r == -ENOMEM || r == -EBADMSG) { 241 + return IRQ_HANDLED; 242 + } 243 + 244 + ndlc_recv(phy->ndlc, skb); 245 + 246 + return IRQ_HANDLED; 247 + } 248 + 249 + static struct nfc_phy_ops i2c_phy_ops = { 250 + .write = st21nfcb_nci_i2c_write, 251 + .enable = st21nfcb_nci_i2c_enable, 252 + .disable = st21nfcb_nci_i2c_disable, 253 + }; 254 + 255 + #ifdef CONFIG_OF 256 + static 
int st21nfcb_nci_i2c_of_request_resources(struct i2c_client *client) 257 + { 258 + struct st21nfcb_i2c_phy *phy = i2c_get_clientdata(client); 259 + struct device_node *pp; 260 + int gpio; 261 + int r; 262 + 263 + pp = client->dev.of_node; 264 + if (!pp) 265 + return -ENODEV; 266 + 267 + /* Get GPIO from device tree */ 268 + gpio = of_get_named_gpio(pp, "reset-gpios", 0); 269 + if (gpio < 0) { 270 + nfc_err(&client->dev, 271 + "Failed to retrieve reset-gpios from device tree\n"); 272 + return gpio; 273 + } 274 + 275 + /* GPIO request and configuration */ 276 + r = devm_gpio_request(&client->dev, gpio, "clf_reset"); 277 + if (r) { 278 + nfc_err(&client->dev, "Failed to request reset pin\n"); 279 + return -ENODEV; 280 + } 281 + 282 + r = gpio_direction_output(gpio, 1); 283 + if (r) { 284 + nfc_err(&client->dev, 285 + "Failed to set reset pin direction as output\n"); 286 + return -ENODEV; 287 + } 288 + phy->gpio_reset = gpio; 289 + 290 + /* IRQ */ 291 + r = irq_of_parse_and_map(pp, 0); 292 + if (r < 0) { 293 + nfc_err(&client->dev, 294 + "Unable to get irq, error: %d\n", r); 295 + return r; 296 + } 297 + 298 + phy->irq_polarity = irq_get_trigger_type(r); 299 + client->irq = r; 300 + 301 + return 0; 302 + } 303 + #else 304 + static int st21nfcb_nci_i2c_of_request_resources(struct i2c_client *client) 305 + { 306 + return -ENODEV; 307 + } 308 + #endif 309 + 310 + static int st21nfcb_nci_i2c_request_resources(struct i2c_client *client) 311 + { 312 + struct st21nfcb_nfc_platform_data *pdata; 313 + struct st21nfcb_i2c_phy *phy = i2c_get_clientdata(client); 314 + int r; 315 + int irq; 316 + 317 + pdata = client->dev.platform_data; 318 + if (pdata == NULL) { 319 + nfc_err(&client->dev, "No platform data\n"); 320 + return -EINVAL; 321 + } 322 + 323 + /* store for later use */ 324 + phy->gpio_irq = pdata->gpio_irq; 325 + phy->gpio_reset = pdata->gpio_reset; 326 + phy->irq_polarity = pdata->irq_polarity; 327 + 328 + r = devm_gpio_request(&client->dev, phy->gpio_irq, "wake_up"); 
329 + if (r) { 330 + pr_err("%s : gpio_request failed\n", __FILE__); 331 + return -ENODEV; 332 + } 333 + 334 + r = gpio_direction_input(phy->gpio_irq); 335 + if (r) { 336 + pr_err("%s : gpio_direction_input failed\n", __FILE__); 337 + return -ENODEV; 338 + } 339 + 340 + r = devm_gpio_request(&client->dev, 341 + phy->gpio_reset, "clf_reset"); 342 + if (r) { 343 + pr_err("%s : reset gpio_request failed\n", __FILE__); 344 + return -ENODEV; 345 + } 346 + 347 + r = gpio_direction_output(phy->gpio_reset, 1); 348 + if (r) { 349 + pr_err("%s : reset gpio_direction_output failed\n", 350 + __FILE__); 351 + return -ENODEV; 352 + } 353 + 354 + /* IRQ */ 355 + irq = gpio_to_irq(phy->gpio_irq); 356 + if (irq < 0) { 357 + nfc_err(&client->dev, 358 + "Unable to get irq number for GPIO %d error %d\n", 359 + phy->gpio_irq, r); 360 + return -ENODEV; 361 + } 362 + client->irq = irq; 363 + 364 + return 0; 365 + } 366 + 367 + static int st21nfcb_nci_i2c_probe(struct i2c_client *client, 368 + const struct i2c_device_id *id) 369 + { 370 + struct st21nfcb_i2c_phy *phy; 371 + struct st21nfcb_nfc_platform_data *pdata; 372 + int r; 373 + 374 + dev_dbg(&client->dev, "%s\n", __func__); 375 + dev_dbg(&client->dev, "IRQ: %d\n", client->irq); 376 + 377 + if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) { 378 + nfc_err(&client->dev, "Need I2C_FUNC_I2C\n"); 379 + return -ENODEV; 380 + } 381 + 382 + phy = devm_kzalloc(&client->dev, sizeof(struct st21nfcb_i2c_phy), 383 + GFP_KERNEL); 384 + if (!phy) { 385 + nfc_err(&client->dev, 386 + "Cannot allocate memory for st21nfcb i2c phy.\n"); 387 + return -ENOMEM; 388 + } 389 + 390 + phy->i2c_dev = client; 391 + 392 + i2c_set_clientdata(client, phy); 393 + 394 + pdata = client->dev.platform_data; 395 + if (!pdata && client->dev.of_node) { 396 + r = st21nfcb_nci_i2c_of_request_resources(client); 397 + if (r) { 398 + nfc_err(&client->dev, "No platform data\n"); 399 + return r; 400 + } 401 + } else if (pdata) { 402 + r = 
st21nfcb_nci_i2c_request_resources(client); 403 + if (r) { 404 + nfc_err(&client->dev, 405 + "Cannot get platform resources\n"); 406 + return r; 407 + } 408 + } else { 409 + nfc_err(&client->dev, 410 + "st21nfcb platform resources not available\n"); 411 + return -ENODEV; 412 + } 413 + 414 + r = devm_request_threaded_irq(&client->dev, client->irq, NULL, 415 + st21nfcb_nci_irq_thread_fn, 416 + phy->irq_polarity | IRQF_ONESHOT, 417 + ST21NFCB_NCI_DRIVER_NAME, phy); 418 + if (r < 0) { 419 + nfc_err(&client->dev, "Unable to register IRQ handler\n"); 420 + return r; 421 + } 422 + 423 + return ndlc_probe(phy, &i2c_phy_ops, &client->dev, 424 + ST21NFCB_FRAME_HEADROOM, ST21NFCB_FRAME_TAILROOM, 425 + &phy->ndlc); 426 + } 427 + 428 + static int st21nfcb_nci_i2c_remove(struct i2c_client *client) 429 + { 430 + struct st21nfcb_i2c_phy *phy = i2c_get_clientdata(client); 431 + 432 + dev_dbg(&client->dev, "%s\n", __func__); 433 + 434 + ndlc_remove(phy->ndlc); 435 + 436 + if (phy->powered) 437 + st21nfcb_nci_i2c_disable(phy); 438 + 439 + return 0; 440 + } 441 + 442 + static const struct of_device_id of_st21nfcb_i2c_match[] = { 443 + { .compatible = "st,st21nfcb_i2c", }, 444 + {} 445 + }; 446 + 447 + static struct i2c_driver st21nfcb_nci_i2c_driver = { 448 + .driver = { 449 + .owner = THIS_MODULE, 450 + .name = ST21NFCB_NCI_I2C_DRIVER_NAME, 451 + .owner = THIS_MODULE, 452 + .of_match_table = of_match_ptr(of_st21nfcb_i2c_match), 453 + }, 454 + .probe = st21nfcb_nci_i2c_probe, 455 + .id_table = st21nfcb_nci_i2c_id_table, 456 + .remove = st21nfcb_nci_i2c_remove, 457 + }; 458 + 459 + module_i2c_driver(st21nfcb_nci_i2c_driver); 460 + 461 + MODULE_LICENSE("GPL"); 462 + MODULE_DESCRIPTION(DRIVER_DESC);
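The IRQ thread above encodes a small error-handling contract for the read path: `-EREMOTEIO` latches `hard_fault` and reports a dead link to the ndlc layer via `ndlc_recv(NULL)`, while `-ENOMEM` and `-EBADMSG` simply drop the frame; anything else hands the skb up. A userspace C sketch of that dispatch (the enum and function names below are illustrative, not from the patch):

```c
#include <assert.h>
#include <errno.h>

/* Possible outcomes of one IRQ-thread pass, mirroring the decisions
 * in st21nfcb_nci_irq_thread_fn() above. */
enum irq_outcome {
	IRQ_PASS_FRAME,		/* r == 0: hand the skb to ndlc_recv() */
	IRQ_LINK_DEAD,		/* -EREMOTEIO: latch hard_fault, ndlc_recv(NULL) */
	IRQ_DROP_FRAME,		/* -ENOMEM / -EBADMSG: silently discard */
};

static enum irq_outcome dispatch_read_result(int r)
{
	if (r == -EREMOTEIO)
		return IRQ_LINK_DEAD;
	if (r == -ENOMEM || r == -EBADMSG)
		return IRQ_DROP_FRAME;
	return IRQ_PASS_FRAME;
}
```

Note that only the bus-level failure is fatal: allocation failures and malformed frame lengths leave the link up, on the assumption that a later frame may still be readable.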
+298
drivers/nfc/st21nfcb/ndlc.c
··· 1 + /* 2 + * Low Level Transport (NDLC) Driver for STMicroelectronics NFC Chip 3 + * 4 + * Copyright (C) 2014 STMicroelectronics SAS. All rights reserved. 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms and conditions of the GNU General Public License, 8 + * version 2, as published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program; if not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + 19 + #include <linux/sched.h> 20 + #include <net/nfc/nci_core.h> 21 + 22 + #include "ndlc.h" 23 + #include "st21nfcb.h" 24 + 25 + #define NDLC_TIMER_T1 100 26 + #define NDLC_TIMER_T1_WAIT 400 27 + #define NDLC_TIMER_T2 1200 28 + 29 + #define PCB_TYPE_DATAFRAME 0x80 30 + #define PCB_TYPE_SUPERVISOR 0xc0 31 + #define PCB_TYPE_MASK PCB_TYPE_SUPERVISOR 32 + 33 + #define PCB_SYNC_ACK 0x20 34 + #define PCB_SYNC_NACK 0x10 35 + #define PCB_SYNC_WAIT 0x30 36 + #define PCB_SYNC_NOINFO 0x00 37 + #define PCB_SYNC_MASK PCB_SYNC_WAIT 38 + 39 + #define PCB_DATAFRAME_RETRANSMIT_YES 0x00 40 + #define PCB_DATAFRAME_RETRANSMIT_NO 0x04 41 + #define PCB_DATAFRAME_RETRANSMIT_MASK PCB_DATAFRAME_RETRANSMIT_NO 42 + 43 + #define PCB_SUPERVISOR_RETRANSMIT_YES 0x00 44 + #define PCB_SUPERVISOR_RETRANSMIT_NO 0x02 45 + #define PCB_SUPERVISOR_RETRANSMIT_MASK PCB_SUPERVISOR_RETRANSMIT_NO 46 + 47 + #define PCB_FRAME_CRC_INFO_PRESENT 0x08 48 + #define PCB_FRAME_CRC_INFO_NOTPRESENT 0x00 49 + #define PCB_FRAME_CRC_INFO_MASK PCB_FRAME_CRC_INFO_PRESENT 50 + 51 + #define NDLC_DUMP_SKB(info, skb) \ 52 + do { \ 53 + pr_debug("%s:\n", info); \ 54 + print_hex_dump(KERN_DEBUG, "ndlc: ", DUMP_PREFIX_OFFSET, \ 55 
+ 16, 1, skb->data, skb->len, 0); \ 56 + } while (0) 57 + 58 + int ndlc_open(struct llt_ndlc *ndlc) 59 + { 60 + /* toggle reset pin */ 61 + ndlc->ops->enable(ndlc->phy_id); 62 + return 0; 63 + } 64 + EXPORT_SYMBOL(ndlc_open); 65 + 66 + void ndlc_close(struct llt_ndlc *ndlc) 67 + { 68 + /* toggle reset pin */ 69 + ndlc->ops->disable(ndlc->phy_id); 70 + } 71 + EXPORT_SYMBOL(ndlc_close); 72 + 73 + int ndlc_send(struct llt_ndlc *ndlc, struct sk_buff *skb) 74 + { 75 + /* add ndlc header */ 76 + u8 pcb = PCB_TYPE_DATAFRAME | PCB_DATAFRAME_RETRANSMIT_NO | 77 + PCB_FRAME_CRC_INFO_NOTPRESENT; 78 + 79 + *skb_push(skb, 1) = pcb; 80 + skb_queue_tail(&ndlc->send_q, skb); 81 + 82 + schedule_work(&ndlc->sm_work); 83 + 84 + return 0; 85 + } 86 + EXPORT_SYMBOL(ndlc_send); 87 + 88 + static void llt_ndlc_send_queue(struct llt_ndlc *ndlc) 89 + { 90 + struct sk_buff *skb; 91 + int r; 92 + unsigned long time_sent; 93 + 94 + if (ndlc->send_q.qlen) 95 + pr_debug("sendQlen=%d unackQlen=%d\n", 96 + ndlc->send_q.qlen, ndlc->ack_pending_q.qlen); 97 + 98 + while (ndlc->send_q.qlen) { 99 + skb = skb_dequeue(&ndlc->send_q); 100 + NDLC_DUMP_SKB("ndlc frame written", skb); 101 + r = ndlc->ops->write(ndlc->phy_id, skb); 102 + if (r < 0) { 103 + ndlc->hard_fault = r; 104 + break; 105 + } 106 + time_sent = jiffies; 107 + *(unsigned long *)skb->cb = time_sent; 108 + 109 + skb_queue_tail(&ndlc->ack_pending_q, skb); 110 + 111 + /* start timer t1 for ndlc aknowledge */ 112 + ndlc->t1_active = true; 113 + mod_timer(&ndlc->t1_timer, time_sent + 114 + msecs_to_jiffies(NDLC_TIMER_T1)); 115 + } 116 + } 117 + 118 + static void llt_ndlc_requeue_data_pending(struct llt_ndlc *ndlc) 119 + { 120 + struct sk_buff *skb; 121 + u8 pcb; 122 + 123 + while ((skb = skb_dequeue_tail(&ndlc->ack_pending_q))) { 124 + pcb = skb->data[0]; 125 + switch (pcb & PCB_TYPE_MASK) { 126 + case PCB_TYPE_SUPERVISOR: 127 + skb->data[0] = (pcb & ~PCB_SUPERVISOR_RETRANSMIT_MASK) | 128 + PCB_SUPERVISOR_RETRANSMIT_YES; 129 + break; 130 + case 
PCB_TYPE_DATAFRAME: 131 + skb->data[0] = (pcb & ~PCB_DATAFRAME_RETRANSMIT_MASK) | 132 + PCB_DATAFRAME_RETRANSMIT_YES; 133 + break; 134 + default: 135 + pr_err("UNKNOWN Packet Control Byte=%d\n", pcb); 136 + kfree_skb(skb); 137 + break; 138 + } 139 + skb_queue_head(&ndlc->send_q, skb); 140 + } 141 + } 142 + 143 + static void llt_ndlc_rcv_queue(struct llt_ndlc *ndlc) 144 + { 145 + struct sk_buff *skb; 146 + u8 pcb; 147 + unsigned long time_sent; 148 + 149 + if (ndlc->rcv_q.qlen) 150 + pr_debug("rcvQlen=%d\n", ndlc->rcv_q.qlen); 151 + 152 + while ((skb = skb_dequeue(&ndlc->rcv_q)) != NULL) { 153 + pcb = skb->data[0]; 154 + skb_pull(skb, 1); 155 + if ((pcb & PCB_TYPE_MASK) == PCB_TYPE_SUPERVISOR) { 156 + switch (pcb & PCB_SYNC_MASK) { 157 + case PCB_SYNC_ACK: 158 + del_timer_sync(&ndlc->t1_timer); 159 + del_timer_sync(&ndlc->t2_timer); 160 + ndlc->t2_active = false; 161 + ndlc->t1_active = false; 162 + break; 163 + case PCB_SYNC_NACK: 164 + llt_ndlc_requeue_data_pending(ndlc); 165 + llt_ndlc_send_queue(ndlc); 166 + /* start timer t1 for ndlc aknowledge */ 167 + time_sent = jiffies; 168 + ndlc->t1_active = true; 169 + mod_timer(&ndlc->t1_timer, time_sent + 170 + msecs_to_jiffies(NDLC_TIMER_T1)); 171 + break; 172 + case PCB_SYNC_WAIT: 173 + time_sent = jiffies; 174 + ndlc->t1_active = true; 175 + mod_timer(&ndlc->t1_timer, time_sent + 176 + msecs_to_jiffies(NDLC_TIMER_T1_WAIT)); 177 + break; 178 + default: 179 + pr_err("UNKNOWN Packet Control Byte=%d\n", pcb); 180 + kfree_skb(skb); 181 + break; 182 + } 183 + } else { 184 + nci_recv_frame(ndlc->ndev, skb); 185 + } 186 + } 187 + } 188 + 189 + static void llt_ndlc_sm_work(struct work_struct *work) 190 + { 191 + struct llt_ndlc *ndlc = container_of(work, struct llt_ndlc, sm_work); 192 + 193 + llt_ndlc_send_queue(ndlc); 194 + llt_ndlc_rcv_queue(ndlc); 195 + 196 + if (ndlc->t1_active && timer_pending(&ndlc->t1_timer) == 0) { 197 + pr_debug 198 + ("Handle T1(recv SUPERVISOR) elapsed (T1 now inactive)\n"); 199 + ndlc->t1_active 
= false; 200 + 201 + llt_ndlc_requeue_data_pending(ndlc); 202 + llt_ndlc_send_queue(ndlc); 203 + } 204 + 205 + if (ndlc->t2_active && timer_pending(&ndlc->t2_timer) == 0) { 206 + pr_debug("Handle T2(recv DATA) elapsed (T2 now inactive)\n"); 207 + ndlc->t2_active = false; 208 + ndlc->t1_active = false; 209 + del_timer_sync(&ndlc->t1_timer); 210 + 211 + ndlc_close(ndlc); 212 + ndlc->hard_fault = -EREMOTEIO; 213 + } 214 + } 215 + 216 + void ndlc_recv(struct llt_ndlc *ndlc, struct sk_buff *skb) 217 + { 218 + if (skb == NULL) { 219 + pr_err("NULL Frame -> link is dead\n"); 220 + ndlc->hard_fault = -EREMOTEIO; 221 + ndlc_close(ndlc); 222 + } else { 223 + NDLC_DUMP_SKB("incoming frame", skb); 224 + skb_queue_tail(&ndlc->rcv_q, skb); 225 + } 226 + 227 + schedule_work(&ndlc->sm_work); 228 + } 229 + EXPORT_SYMBOL(ndlc_recv); 230 + 231 + static void ndlc_t1_timeout(unsigned long data) 232 + { 233 + struct llt_ndlc *ndlc = (struct llt_ndlc *)data; 234 + 235 + pr_debug("\n"); 236 + 237 + schedule_work(&ndlc->sm_work); 238 + } 239 + 240 + static void ndlc_t2_timeout(unsigned long data) 241 + { 242 + struct llt_ndlc *ndlc = (struct llt_ndlc *)data; 243 + 244 + pr_debug("\n"); 245 + 246 + schedule_work(&ndlc->sm_work); 247 + } 248 + 249 + int ndlc_probe(void *phy_id, struct nfc_phy_ops *phy_ops, struct device *dev, 250 + int phy_headroom, int phy_tailroom, struct llt_ndlc **ndlc_id) 251 + { 252 + struct llt_ndlc *ndlc; 253 + 254 + ndlc = devm_kzalloc(dev, sizeof(struct llt_ndlc), GFP_KERNEL); 255 + if (!ndlc) { 256 + nfc_err(dev, "Cannot allocate memory for ndlc.\n"); 257 + return -ENOMEM; 258 + } 259 + ndlc->ops = phy_ops; 260 + ndlc->phy_id = phy_id; 261 + ndlc->dev = dev; 262 + 263 + *ndlc_id = ndlc; 264 + 265 + /* start timers */ 266 + init_timer(&ndlc->t1_timer); 267 + ndlc->t1_timer.data = (unsigned long)ndlc; 268 + ndlc->t1_timer.function = ndlc_t1_timeout; 269 + 270 + init_timer(&ndlc->t2_timer); 271 + ndlc->t2_timer.data = (unsigned long)ndlc; 272 + 
ndlc->t2_timer.function = ndlc_t2_timeout; 273 + 274 + skb_queue_head_init(&ndlc->rcv_q); 275 + skb_queue_head_init(&ndlc->send_q); 276 + skb_queue_head_init(&ndlc->ack_pending_q); 277 + 278 + INIT_WORK(&ndlc->sm_work, llt_ndlc_sm_work); 279 + 280 + return st21nfcb_nci_probe(ndlc, phy_headroom, phy_tailroom); 281 + } 282 + EXPORT_SYMBOL(ndlc_probe); 283 + 284 + void ndlc_remove(struct llt_ndlc *ndlc) 285 + { 286 + /* cancel timers */ 287 + del_timer_sync(&ndlc->t1_timer); 288 + del_timer_sync(&ndlc->t2_timer); 289 + ndlc->t2_active = false; 290 + ndlc->t1_active = false; 291 + 292 + skb_queue_purge(&ndlc->rcv_q); 293 + skb_queue_purge(&ndlc->send_q); 294 + 295 + st21nfcb_nci_remove(ndlc->ndev); 296 + kfree(ndlc); 297 + } 298 + EXPORT_SYMBOL(ndlc_remove);
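The PCB (Packet Control Byte) constants at the top of ndlc.c fully determine how the layer classifies frames, marks retransmissions, and recognizes supervisor acknowledgements. A plain-C userspace sketch using the same `#define` values quoted from the patch (the helper function names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* PCB constants copied from drivers/nfc/st21nfcb/ndlc.c above */
#define PCB_TYPE_DATAFRAME		0x80
#define PCB_TYPE_SUPERVISOR		0xc0
#define PCB_TYPE_MASK			PCB_TYPE_SUPERVISOR
#define PCB_SYNC_ACK			0x20
#define PCB_SYNC_WAIT			0x30
#define PCB_SYNC_MASK			PCB_SYNC_WAIT
#define PCB_DATAFRAME_RETRANSMIT_YES	0x00
#define PCB_DATAFRAME_RETRANSMIT_NO	0x04
#define PCB_DATAFRAME_RETRANSMIT_MASK	PCB_DATAFRAME_RETRANSMIT_NO
#define PCB_FRAME_CRC_INFO_NOTPRESENT	0x00

/* The header byte ndlc_send() pushes in front of every outgoing frame. */
static uint8_t ndlc_data_pcb(void)
{
	return PCB_TYPE_DATAFRAME | PCB_DATAFRAME_RETRANSMIT_NO |
	       PCB_FRAME_CRC_INFO_NOTPRESENT;
}

/* Mirror llt_ndlc_requeue_data_pending(): flag a data frame as a
 * retransmission before putting it back on send_q. */
static uint8_t ndlc_mark_retransmit(uint8_t pcb)
{
	return (pcb & ~PCB_DATAFRAME_RETRANSMIT_MASK) |
	       PCB_DATAFRAME_RETRANSMIT_YES;
}

/* Mirror the supervisor-frame test in llt_ndlc_rcv_queue(). */
static int ndlc_is_ack(uint8_t pcb)
{
	return (pcb & PCB_TYPE_MASK) == PCB_TYPE_SUPERVISOR &&
	       (pcb & PCB_SYNC_MASK) == PCB_SYNC_ACK;
}
```

With these values, an outgoing data frame carries 0x84; after a NACK, the requeue path clears bit 2, giving 0x80; and a byte such as 0xe0 parses as a supervisor ACK, which stops timers T1 and T2.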
+55
drivers/nfc/st21nfcb/ndlc.h
···
 1 + /*
 2 +  * NCI based Driver for STMicroelectronics NFC Chip
 3 +  *
 4 +  * Copyright (C) 2014 STMicroelectronics SAS. All rights reserved.
 5 +  *
 6 +  * This program is free software; you can redistribute it and/or modify it
 7 +  * under the terms and conditions of the GNU General Public License,
 8 +  * version 2, as published by the Free Software Foundation.
 9 +  *
 10 +  * This program is distributed in the hope that it will be useful,
 11 +  * but WITHOUT ANY WARRANTY; without even the implied warranty of
 12 +  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 13 +  * GNU General Public License for more details.
 14 +  *
 15 +  * You should have received a copy of the GNU General Public License
 16 +  * along with this program; if not, see <http://www.gnu.org/licenses/>.
 17 +  */
 18 +
 19 + #ifndef __LOCAL_NDLC_H_
 20 + #define __LOCAL_NDLC_H_
 21 +
 22 + #include <linux/skbuff.h>
 23 + #include <net/nfc/nfc.h>
 24 +
 25 + /* Low Level Transport description */
 26 + struct llt_ndlc {
 27 +	struct nci_dev *ndev;
 28 +	struct nfc_phy_ops *ops;
 29 +	void *phy_id;
 30 +
 31 +	struct timer_list t1_timer;
 32 +	bool t1_active;
 33 +
 34 +	struct timer_list t2_timer;
 35 +	bool t2_active;
 36 +
 37 +	struct sk_buff_head rcv_q;
 38 +	struct sk_buff_head send_q;
 39 +	struct sk_buff_head ack_pending_q;
 40 +
 41 +	struct work_struct sm_work;
 42 +
 43 +	struct device *dev;
 44 +
 45 +	int hard_fault;
 46 + };
 47 +
 48 + int ndlc_open(struct llt_ndlc *ndlc);
 49 + void ndlc_close(struct llt_ndlc *ndlc);
 50 + int ndlc_send(struct llt_ndlc *ndlc, struct sk_buff *skb);
 51 + void ndlc_recv(struct llt_ndlc *ndlc, struct sk_buff *skb);
 52 + int ndlc_probe(void *phy_id, struct nfc_phy_ops *phy_ops, struct device *dev,
 53 +	       int phy_headroom, int phy_tailroom, struct llt_ndlc **ndlc_id);
 54 + void ndlc_remove(struct llt_ndlc *ndlc);
 55 + #endif /* __LOCAL_NDLC_H__ */
+129
drivers/nfc/st21nfcb/st21nfcb.c
··· 1 + /* 2 + * NCI based Driver for STMicroelectronics NFC Chip 3 + * 4 + * Copyright (C) 2014 STMicroelectronics SAS. All rights reserved. 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms and conditions of the GNU General Public License, 8 + * version 2, as published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program; if not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + 19 + #include <linux/module.h> 20 + #include <linux/nfc.h> 21 + #include <net/nfc/nci.h> 22 + #include <net/nfc/nci_core.h> 23 + 24 + #include "st21nfcb.h" 25 + #include "ndlc.h" 26 + 27 + #define DRIVER_DESC "NCI NFC driver for ST21NFCB" 28 + 29 + static int st21nfcb_nci_open(struct nci_dev *ndev) 30 + { 31 + struct st21nfcb_nci_info *info = nci_get_drvdata(ndev); 32 + int r; 33 + 34 + if (test_and_set_bit(ST21NFCB_NCI_RUNNING, &info->flags)) 35 + return 0; 36 + 37 + r = ndlc_open(info->ndlc); 38 + if (r) 39 + clear_bit(ST21NFCB_NCI_RUNNING, &info->flags); 40 + 41 + return r; 42 + } 43 + 44 + static int st21nfcb_nci_close(struct nci_dev *ndev) 45 + { 46 + struct st21nfcb_nci_info *info = nci_get_drvdata(ndev); 47 + 48 + if (!test_and_clear_bit(ST21NFCB_NCI_RUNNING, &info->flags)) 49 + return 0; 50 + 51 + ndlc_close(info->ndlc); 52 + 53 + return 0; 54 + } 55 + 56 + static int st21nfcb_nci_send(struct nci_dev *ndev, struct sk_buff *skb) 57 + { 58 + struct st21nfcb_nci_info *info = nci_get_drvdata(ndev); 59 + 60 + skb->dev = (void *)ndev; 61 + 62 + if (!test_bit(ST21NFCB_NCI_RUNNING, &info->flags)) 63 + return -EBUSY; 64 + 65 + return ndlc_send(info->ndlc, skb); 66 + } 67 + 68 + static 
struct nci_ops st21nfcb_nci_ops = { 69 + .open = st21nfcb_nci_open, 70 + .close = st21nfcb_nci_close, 71 + .send = st21nfcb_nci_send, 72 + }; 73 + 74 + int st21nfcb_nci_probe(struct llt_ndlc *ndlc, int phy_headroom, 75 + int phy_tailroom) 76 + { 77 + struct st21nfcb_nci_info *info; 78 + int r; 79 + u32 protocols; 80 + 81 + info = devm_kzalloc(ndlc->dev, 82 + sizeof(struct st21nfcb_nci_info), GFP_KERNEL); 83 + if (!info) 84 + return -ENOMEM; 85 + 86 + protocols = NFC_PROTO_JEWEL_MASK 87 + | NFC_PROTO_MIFARE_MASK 88 + | NFC_PROTO_FELICA_MASK 89 + | NFC_PROTO_ISO14443_MASK 90 + | NFC_PROTO_ISO14443_B_MASK 91 + | NFC_PROTO_NFC_DEP_MASK; 92 + 93 + ndlc->ndev = nci_allocate_device(&st21nfcb_nci_ops, protocols, 94 + phy_headroom, phy_tailroom); 95 + if (!ndlc->ndev) { 96 + pr_err("Cannot allocate nfc ndev\n"); 97 + r = -ENOMEM; 98 + goto err_alloc_ndev; 99 + } 100 + info->ndlc = ndlc; 101 + 102 + nci_set_drvdata(ndlc->ndev, info); 103 + 104 + r = nci_register_device(ndlc->ndev); 105 + if (r) 106 + goto err_regdev; 107 + 108 + return r; 109 + err_regdev: 110 + nci_free_device(ndlc->ndev); 111 + 112 + err_alloc_ndev: 113 + kfree(info); 114 + return r; 115 + } 116 + EXPORT_SYMBOL_GPL(st21nfcb_nci_probe); 117 + 118 + void st21nfcb_nci_remove(struct nci_dev *ndev) 119 + { 120 + struct st21nfcb_nci_info *info = nci_get_drvdata(ndev); 121 + 122 + nci_unregister_device(ndev); 123 + nci_free_device(ndev); 124 + kfree(info); 125 + } 126 + EXPORT_SYMBOL_GPL(st21nfcb_nci_remove); 127 + 128 + MODULE_LICENSE("GPL"); 129 + MODULE_DESCRIPTION(DRIVER_DESC);
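st21nfcb_nci_open()/close() above rely on test_and_set_bit()/test_and_clear_bit() on ST21NFCB_NCI_RUNNING so that redundant opens and closes are no-ops and the ndlc layer is only toggled on real state changes. A single-threaded userspace sketch of that idempotence pattern (the kernel bit helpers are atomic; this sketch is not, and all names are illustrative):

```c
#include <assert.h>

#define NCI_RUNNING	(1UL << 1)	/* mirrors ST21NFCB_NCI_RUNNING == 1 */

static unsigned long flags;
static int enable_calls, disable_calls;

static int mock_open(void)
{
	if (flags & NCI_RUNNING)	/* already running: no-op */
		return 0;
	flags |= NCI_RUNNING;
	enable_calls++;			/* stands in for ndlc_open() */
	return 0;
}

static int mock_close(void)
{
	if (!(flags & NCI_RUNNING))	/* not running: no-op */
		return 0;
	flags &= ~NCI_RUNNING;
	disable_calls++;		/* stands in for ndlc_close() */
	return 0;
}
```

Calling open (or close) twice in a row reaches the phy exactly once, which is why the driver can let the NCI core call these hooks freely.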
+38
drivers/nfc/st21nfcb/st21nfcb.h
···
 1 + /*
 2 +  * NCI based Driver for STMicroelectronics NFC Chip
 3 +  *
 4 +  * Copyright (C) 2014 STMicroelectronics SAS. All rights reserved.
 5 +  *
 6 +  * This program is free software; you can redistribute it and/or modify it
 7 +  * under the terms and conditions of the GNU General Public License,
 8 +  * version 2, as published by the Free Software Foundation.
 9 +  *
 10 +  * This program is distributed in the hope that it will be useful,
 11 +  * but WITHOUT ANY WARRANTY; without even the implied warranty of
 12 +  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 13 +  * GNU General Public License for more details.
 14 +  *
 15 +  * You should have received a copy of the GNU General Public License
 16 +  * along with this program; if not, see <http://www.gnu.org/licenses/>.
 17 +  */
 18 +
 19 + #ifndef __LOCAL_ST21NFCB_H_
 20 + #define __LOCAL_ST21NFCB_H_
 21 +
 22 + #include <net/nfc/nci_core.h>
 23 +
 24 + #include "ndlc.h"
 25 +
 26 + /* Define private flags: */
 27 + #define ST21NFCB_NCI_RUNNING 1
 28 +
 29 + struct st21nfcb_nci_info {
 30 +	struct llt_ndlc *ndlc;
 31 +	unsigned long flags;
 32 + };
 33 +
 34 + void st21nfcb_nci_remove(struct nci_dev *ndev);
 35 + int st21nfcb_nci_probe(struct llt_ndlc *ndlc, int phy_headroom,
 36 +		       int phy_tailroom);
 37 +
 38 + #endif /* __LOCAL_ST21NFCB_H_ */
+12 -11
include/linux/bcma/bcma.h
···
 73 73 /* Core-ID values. */
 74 74 #define BCMA_CORE_OOB_ROUTER 0x367 /* Out of band */
 75 75 #define BCMA_CORE_4706_CHIPCOMMON 0x500
 76 - #define BCMA_CORE_PCIEG2 0x501
 77 - #define BCMA_CORE_DMA 0x502
 78 - #define BCMA_CORE_SDIO3 0x503
 79 - #define BCMA_CORE_USB20 0x504
 80 - #define BCMA_CORE_USB30 0x505
 81 - #define BCMA_CORE_A9JTAG 0x506
 82 - #define BCMA_CORE_DDR23 0x507
 83 - #define BCMA_CORE_ROM 0x508
 84 - #define BCMA_CORE_NAND 0x509
 85 - #define BCMA_CORE_QSPI 0x50A
 86 - #define BCMA_CORE_CHIPCOMMON_B 0x50B
 76 + #define BCMA_CORE_NS_PCIEG2 0x501
 77 + #define BCMA_CORE_NS_DMA 0x502
 78 + #define BCMA_CORE_NS_SDIO3 0x503
 79 + #define BCMA_CORE_NS_USB20 0x504
 80 + #define BCMA_CORE_NS_USB30 0x505
 81 + #define BCMA_CORE_NS_A9JTAG 0x506
 82 + #define BCMA_CORE_NS_DDR23 0x507
 83 + #define BCMA_CORE_NS_ROM 0x508
 84 + #define BCMA_CORE_NS_NAND 0x509
 85 + #define BCMA_CORE_NS_QSPI 0x50A
 86 + #define BCMA_CORE_NS_CHIPCOMMON_B 0x50B
 87 87 #define BCMA_CORE_4706_SOC_RAM 0x50E
 88 88 #define BCMA_CORE_ARMCA9 0x510
 89 89 #define BCMA_CORE_4706_MAC_GBIT 0x52D
···
 158 158 /* Chip IDs of PCIe devices */
 159 159 #define BCMA_CHIP_ID_BCM4313 0x4313
 160 160 #define BCMA_CHIP_ID_BCM43142 43142
 161 + #define BCMA_CHIP_ID_BCM43131 43131
 161 162 #define BCMA_CHIP_ID_BCM43217 43217
 162 163 #define BCMA_CHIP_ID_BCM43222 43222
 163 164 #define BCMA_CHIP_ID_BCM43224 43224
+32
include/linux/platform_data/st21nfcb.h
···
 1 + /*
 2 +  * Driver include for the ST21NFCB NFC chip.
 3 +  *
 4 +  * Copyright (C) 2014 STMicroelectronics SAS. All rights reserved.
 5 +  *
 6 +  * This program is free software; you can redistribute it and/or modify it
 7 +  * under the terms and conditions of the GNU General Public License,
 8 +  * version 2, as published by the Free Software Foundation.
 9 +  *
 10 +  * This program is distributed in the hope that it will be useful,
 11 +  * but WITHOUT ANY WARRANTY; without even the implied warranty of
 12 +  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 13 +  * GNU General Public License for more details.
 14 +  *
 15 +  * You should have received a copy of the GNU General Public License
 16 +  * along with this program; if not, see <http://www.gnu.org/licenses/>.
 17 +  */
 18 +
 19 + #ifndef _ST21NFCB_NCI_H_
 20 + #define _ST21NFCB_NCI_H_
 21 +
 22 + #include <linux/i2c.h>
 23 +
 24 + #define ST21NFCB_NCI_DRIVER_NAME "st21nfcb_nci"
 25 +
 26 + struct st21nfcb_nfc_platform_data {
 27 +	unsigned int gpio_irq;
 28 +	unsigned int gpio_reset;
 29 +	unsigned int irq_polarity;
 30 + };
 31 +
 32 + #endif /* _ST21NFCA_HCI_H_ */
-50
include/net/6lowpan.h
···
 75 75 (((a)->s6_addr[14]) == (m)[6]) && \
 76 76 (((a)->s6_addr[15]) == (m)[7]))
 77 77
 78 - /* ipv6 address is unspecified */
 79 - #define is_addr_unspecified(a) \
 80 - ((((a)->s6_addr32[0]) == 0) && \
 81 - (((a)->s6_addr32[1]) == 0) && \
 82 - (((a)->s6_addr32[2]) == 0) && \
 83 - (((a)->s6_addr32[3]) == 0))
 84 -
 85 - /* compare ipv6 addresses prefixes */
 86 - #define ipaddr_prefixcmp(addr1, addr2, length) \
 87 - (memcmp(addr1, addr2, length >> 3) == 0)
 88 -
 89 - /* local link, i.e. FE80::/10 */
 90 - #define is_addr_link_local(a) (((a)->s6_addr16[0]) == htons(0xFE80))
 91 -
 92 78 /*
 93 79 * check whether we can compress the IID to 16 bits,
 94 80 * it's possible for unicast adresses with first 49 bits are zero only.
···
 86 100 (((a)->s6_addr[12]) == 0xfe) && \
 87 101 (((a)->s6_addr[13]) == 0))
 88 102
 89 - /* multicast address */
 90 - #define is_addr_mcast(a) (((a)->s6_addr[0]) == 0xFF)
 91 -
 92 103 /* check whether the 112-bit gid of the multicast address is mappable to: */
 93 -
 94 - /* 9 bits, for FF02::1 (all nodes) and FF02::2 (all routers) addresses only. */
 95 - #define lowpan_is_mcast_addr_compressable(a) \
 96 - ((((a)->s6_addr16[1]) == 0) && \
 97 - (((a)->s6_addr16[2]) == 0) && \
 98 - (((a)->s6_addr16[3]) == 0) && \
 99 - (((a)->s6_addr16[4]) == 0) && \
 100 - (((a)->s6_addr16[5]) == 0) && \
 101 - (((a)->s6_addr16[6]) == 0) && \
 102 - (((a)->s6_addr[14]) == 0) && \
 103 - ((((a)->s6_addr[15]) == 1) || (((a)->s6_addr[15]) == 2)))
 104 104
 105 105 /* 48 bits, FFXX::00XX:XXXX:XXXX */
 106 106 #define lowpan_is_mcast_addr_compressable48(a) \
···
 138 166
 139 167 #define LOWPAN_FRAG1_HEAD_SIZE 0x4
 140 168 #define LOWPAN_FRAGN_HEAD_SIZE 0x5
 141 -
 142 - /*
 143 - * According IEEE802.15.4 standard:
 144 - * - MTU is 127 octets
 145 - * - maximum MHR size is 37 octets
 146 - * - MFR size is 2 octets
 147 - *
 148 - * so minimal payload size that we may guarantee is:
 149 - * MTU - MHR - MFR = 88 octets
 150 - */
 151 - #define LOWPAN_FRAG_SIZE 88
 152 169
 153 170 /*
 154 171 * Values of fields within the IPHC encoding first byte
···
 236 275
 237 276 *val = skb->data[0];
 238 277 skb_pull(skb, 1);
 239 -
 240 - return 0;
 241 - }
 242 -
 243 - static inline int lowpan_fetch_skb_u16(struct sk_buff *skb, u16 *val)
 244 - {
 245 - if (unlikely(!pskb_may_pull(skb, 2)))
 246 - return -EINVAL;
 247 -
 248 - *val = (skb->data[0] << 8) | skb->data[1];
 249 - skb_pull(skb, 2);
 250 278
 251 279 return 0;
 252 280 }
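One of the helpers dropped here, lowpan_fetch_skb_u16(), assembled a host-order u16 by shifting the wire bytes; callers in iphc.c now use the generic lowpan_fetch_skb() to copy the raw bytes instead, which preserves network byte order and so can fill `__be16` header fields like `udphdr.source` directly. A userspace sketch of why the two are interchangeable (the helper names below are illustrative):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* What the removed lowpan_fetch_skb_u16() computed: a host-order value
 * built by shifting the two big-endian wire bytes. */
static uint16_t fetch_u16_shift(const uint8_t *p)
{
	return (uint16_t)((p[0] << 8) | p[1]);
}

/* What a raw two-byte lowpan_fetch_skb(skb, &val, sizeof(val)) amounts
 * to: a memcpy that leaves the value in network byte order. */
static uint16_t fetch_be16_raw(const uint8_t *p)
{
	uint16_t be;

	memcpy(&be, p, sizeof(be));
	return be;
}

/* The two agree once the raw value is byte-swapped to host order. */
static int fetches_agree(uint8_t hi, uint8_t lo)
{
	uint8_t wire[2] = { hi, lo };

	return fetch_u16_shift(wire) == ntohs(fetch_be16_raw(wire));
}
```

This holds regardless of host endianness, since ntohs() performs the swap exactly where the old helper did it by hand.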
+5 -1
include/net/bluetooth/hci.h
···
 167 167 HCI_AUTO_OFF,
 168 168 HCI_RFKILLED,
 169 169 HCI_MGMT,
 170 - HCI_PAIRABLE,
 170 + HCI_BONDABLE,
 171 171 HCI_SERVICE_CACHE,
 172 172 HCI_KEEP_DEBUG_KEYS,
 173 173 HCI_USE_DEBUG_KEYS,
···
 1074 1074 __le16 num_blocks;
 1075 1075 } __packed;
 1076 1076
 1077 + #define HCI_OP_READ_LOCAL_CODECS 0x100b
 1078 +
 1077 1079 #define HCI_OP_READ_PAGE_SCAN_ACTIVITY 0x0c1b
 1078 1080 struct hci_rp_read_page_scan_activity {
 1079 1081 __u8 status;
···
 1171 1169 __u8 status;
 1172 1170 __u8 phy_handle;
 1173 1171 } __packed;
 1172 +
 1173 + #define HCI_OP_GET_MWS_TRANSPORT_CONFIG 0x140c
 1174 1174
 1175 1175 #define HCI_OP_ENABLE_DUT_MODE 0x1803
 1176 1176
+3
include/net/bluetooth/hci_core.h
···
 203 203 __u16 page_scan_window;
 204 204 __u8 page_scan_type;
 205 205 __u8 le_adv_channel_map;
 206 + __u16 le_adv_min_interval;
 207 + __u16 le_adv_max_interval;
 206 208 __u8 le_scan_type;
 207 209 __u16 le_scan_interval;
 208 210 __u16 le_scan_window;
···
 460 458 enum {
 461 459 HCI_AUTO_CONN_DISABLED,
 462 460 HCI_AUTO_CONN_REPORT,
 461 + HCI_AUTO_CONN_DIRECT,
 463 462 HCI_AUTO_CONN_ALWAYS,
 464 463 HCI_AUTO_CONN_LINK_LOSS,
 465 464 } auto_connect;
+2 -2
include/net/bluetooth/mgmt.h
···
 87 87 #define MGMT_SETTING_CONNECTABLE 0x00000002
 88 88 #define MGMT_SETTING_FAST_CONNECTABLE 0x00000004
 89 89 #define MGMT_SETTING_DISCOVERABLE 0x00000008
 90 - #define MGMT_SETTING_PAIRABLE 0x00000010
 90 + #define MGMT_SETTING_BONDABLE 0x00000010
 91 91 #define MGMT_SETTING_LINK_SECURITY 0x00000020
 92 92 #define MGMT_SETTING_SSP 0x00000040
 93 93 #define MGMT_SETTING_BREDR 0x00000080
···
 131 131
 132 132 #define MGMT_OP_SET_FAST_CONNECTABLE 0x0008
 133 133
 134 - #define MGMT_OP_SET_PAIRABLE 0x0009
 134 + #define MGMT_OP_SET_BONDABLE 0x0009
 135 135
 136 136 #define MGMT_OP_SET_LINK_SECURITY 0x000A
 137 137
+13
include/net/nfc/digital.h
···
 49 49 NFC_DIGITAL_FRAMING_NFCA_SHORT = 0,
 50 50 NFC_DIGITAL_FRAMING_NFCA_STANDARD,
 51 51 NFC_DIGITAL_FRAMING_NFCA_STANDARD_WITH_CRC_A,
 52 + NFC_DIGITAL_FRAMING_NFCA_ANTICOL_COMPLETE,
 52 53
 53 54 NFC_DIGITAL_FRAMING_NFCA_T1T,
 54 55 NFC_DIGITAL_FRAMING_NFCA_T2T,
···
 127 126 * the NFC-DEP ATR_REQ command through cb. The digital stack deducts the RF
 128 127 * tech by analyzing the SoD of the frame containing the ATR_REQ command.
 129 128 * This is an asynchronous function.
 129 + * @tg_listen_md: If supported, put the device in automatic listen mode with
 130 + * mode detection but without automatic anti-collision. In this mode, the
 131 + * device automatically detects the RF technology. What the actual
 132 + * RF technology is can be retrieved by calling @tg_get_rf_tech.
 133 + * The digital stack will then perform the appropriate anti-collision
 134 + * sequence. This is an asynchronous function.
 135 + * @tg_get_rf_tech: Required when @tg_listen_md is supported, unused otherwise.
 136 + * Return the RF Technology that was detected by the @tg_listen_md call.
 137 + * This is a synchronous function.
 130 138 *
 131 139 * @switch_rf: Turns device radio on or off. The stack does not call explicitly
 132 140 * switch_rf to turn the radio on. A call to in|tg_configure_hw must turn
···
 170 160 struct digital_tg_mdaa_params *mdaa_params,
 171 161 u16 timeout, nfc_digital_cmd_complete_t cb,
 172 162 void *arg);
 163 + int (*tg_listen_md)(struct nfc_digital_dev *ddev, u16 timeout,
 164 + nfc_digital_cmd_complete_t cb, void *arg);
 165 + int (*tg_get_rf_tech)(struct nfc_digital_dev *ddev, u8 *rf_tech);
 173 166
 174 167 int (*switch_rf)(struct nfc_digital_dev *ddev, bool on);
 175 168 void (*abort_cmd)(struct nfc_digital_dev *ddev);
+1
include/net/nfc/hci.h
···
 37 37 int (*xmit) (struct nfc_hci_dev *hdev, struct sk_buff *skb);
 38 38 int (*start_poll) (struct nfc_hci_dev *hdev,
 39 39 u32 im_protocols, u32 tm_protocols);
 40 + void (*stop_poll) (struct nfc_hci_dev *hdev);
 40 41 int (*dep_link_up)(struct nfc_hci_dev *hdev, struct nfc_target *target,
 41 42 u8 comm_mode, u8 *gb, size_t gb_len);
 42 43 int (*dep_link_down)(struct nfc_hci_dev *hdev);
+105 -107
net/6lowpan/iphc.c
··· 177 177 struct sk_buff *new; 178 178 int stat; 179 179 180 - new = skb_copy_expand(skb, sizeof(struct ipv6hdr), 181 - skb_tailroom(skb), GFP_ATOMIC); 180 + new = skb_copy_expand(skb, sizeof(struct ipv6hdr), skb_tailroom(skb), 181 + GFP_ATOMIC); 182 182 kfree_skb(skb); 183 183 184 184 if (!new) ··· 205 205 /* Uncompress function for multicast destination address, 206 206 * when M bit is set. 207 207 */ 208 - static int 209 - lowpan_uncompress_multicast_daddr(struct sk_buff *skb, 210 - struct in6_addr *ipaddr, 211 - const u8 dam) 208 + static int lowpan_uncompress_multicast_daddr(struct sk_buff *skb, 209 + struct in6_addr *ipaddr, 210 + const u8 dam) 212 211 { 213 212 bool fail; 214 213 ··· 253 254 } 254 255 255 256 raw_dump_inline(NULL, "Reconstructed ipv6 multicast addr is", 256 - ipaddr->s6_addr, 16); 257 + ipaddr->s6_addr, 16); 257 258 258 259 return 0; 259 260 } 260 261 261 - static int 262 - uncompress_udp_header(struct sk_buff *skb, struct udphdr *uh) 262 + static int uncompress_udp_header(struct sk_buff *skb, struct udphdr *uh) 263 263 { 264 264 bool fail; 265 265 u8 tmp = 0, val = 0; 266 266 267 - if (!uh) 268 - goto err; 269 - 270 - fail = lowpan_fetch_skb(skb, &tmp, 1); 267 + fail = lowpan_fetch_skb(skb, &tmp, sizeof(tmp)); 271 268 272 269 if ((tmp & LOWPAN_NHC_UDP_MASK) == LOWPAN_NHC_UDP_ID) { 273 270 pr_debug("UDP header uncompression\n"); 274 271 switch (tmp & LOWPAN_NHC_UDP_CS_P_11) { 275 272 case LOWPAN_NHC_UDP_CS_P_00: 276 - fail |= lowpan_fetch_skb(skb, &uh->source, 2); 277 - fail |= lowpan_fetch_skb(skb, &uh->dest, 2); 273 + fail |= lowpan_fetch_skb(skb, &uh->source, 274 + sizeof(uh->source)); 275 + fail |= lowpan_fetch_skb(skb, &uh->dest, 276 + sizeof(uh->dest)); 278 277 break; 279 278 case LOWPAN_NHC_UDP_CS_P_01: 280 - fail |= lowpan_fetch_skb(skb, &uh->source, 2); 281 - fail |= lowpan_fetch_skb(skb, &val, 1); 279 + fail |= lowpan_fetch_skb(skb, &uh->source, 280 + sizeof(uh->source)); 281 + fail |= lowpan_fetch_skb(skb, &val, sizeof(val)); 
282 282 uh->dest = htons(val + LOWPAN_NHC_UDP_8BIT_PORT); 283 283 break; 284 284 case LOWPAN_NHC_UDP_CS_P_10: 285 - fail |= lowpan_fetch_skb(skb, &val, 1); 285 + fail |= lowpan_fetch_skb(skb, &val, sizeof(val)); 286 286 uh->source = htons(val + LOWPAN_NHC_UDP_8BIT_PORT); 287 - fail |= lowpan_fetch_skb(skb, &uh->dest, 2); 287 + fail |= lowpan_fetch_skb(skb, &uh->dest, 288 + sizeof(uh->dest)); 288 289 break; 289 290 case LOWPAN_NHC_UDP_CS_P_11: 290 - fail |= lowpan_fetch_skb(skb, &val, 1); 291 + fail |= lowpan_fetch_skb(skb, &val, sizeof(val)); 291 292 uh->source = htons(LOWPAN_NHC_UDP_4BIT_PORT + 292 293 (val >> 4)); 293 294 uh->dest = htons(LOWPAN_NHC_UDP_4BIT_PORT + ··· 306 307 pr_debug_ratelimited("checksum elided currently not supported\n"); 307 308 goto err; 308 309 } else { 309 - fail |= lowpan_fetch_skb(skb, &uh->check, 2); 310 + fail |= lowpan_fetch_skb(skb, &uh->check, 311 + sizeof(uh->check)); 310 312 } 311 313 312 - /* UDP lenght needs to be infered from the lower layers 314 + /* UDP length needs to be infered from the lower layers 313 315 * here, we obtain the hint from the remaining size of the 314 316 * frame 315 317 */ ··· 333 333 static const u8 lowpan_ttl_values[] = { 0, 1, 64, 255 }; 334 334 335 335 int lowpan_process_data(struct sk_buff *skb, struct net_device *dev, 336 - const u8 *saddr, const u8 saddr_type, 337 - const u8 saddr_len, const u8 *daddr, 338 - const u8 daddr_type, const u8 daddr_len, 336 + const u8 *saddr, const u8 saddr_type, const u8 saddr_len, 337 + const u8 *daddr, const u8 daddr_type, const u8 daddr_len, 339 338 u8 iphc0, u8 iphc1, skb_delivery_cb deliver_skb) 340 339 { 341 340 struct ipv6hdr hdr = {}; ··· 347 348 /* another if the CID flag is set */ 348 349 if (iphc1 & LOWPAN_IPHC_CID) { 349 350 pr_debug("CID flag is set, increase header with one\n"); 350 - if (lowpan_fetch_skb_u8(skb, &num_context)) 351 + if (lowpan_fetch_skb(skb, &num_context, sizeof(num_context))) 351 352 goto drop; 352 353 } 353 354 ··· 359 360 * ECN + DSCP 
+ 4-bit Pad + Flow Label (4 bytes) 360 361 */ 361 362 case 0: /* 00b */ 362 - if (lowpan_fetch_skb_u8(skb, &tmp)) 363 + if (lowpan_fetch_skb(skb, &tmp, sizeof(tmp))) 363 364 goto drop; 364 365 365 366 memcpy(&hdr.flow_lbl, &skb->data[0], 3); ··· 372 373 * ECN + DSCP (1 byte), Flow Label is elided 373 374 */ 374 375 case 2: /* 10b */ 375 - if (lowpan_fetch_skb_u8(skb, &tmp)) 376 + if (lowpan_fetch_skb(skb, &tmp, sizeof(tmp))) 376 377 goto drop; 377 378 378 379 hdr.priority = ((tmp >> 2) & 0x0f); ··· 382 383 * ECN + 2-bit Pad + Flow Label (3 bytes), DSCP is elided 383 384 */ 384 385 case 1: /* 01b */ 385 - if (lowpan_fetch_skb_u8(skb, &tmp)) 386 + if (lowpan_fetch_skb(skb, &tmp, sizeof(tmp))) 386 387 goto drop; 387 388 388 389 hdr.flow_lbl[0] = (skb->data[0] & 0x0F) | ((tmp >> 2) & 0x30); ··· 399 400 /* Next Header */ 400 401 if ((iphc0 & LOWPAN_IPHC_NH_C) == 0) { 401 402 /* Next header is carried inline */ 402 - if (lowpan_fetch_skb_u8(skb, &(hdr.nexthdr))) 403 + if (lowpan_fetch_skb(skb, &hdr.nexthdr, sizeof(hdr.nexthdr))) 403 404 goto drop; 404 405 405 406 pr_debug("NH flag is set, next header carried inline: %02x\n", ··· 410 411 if ((iphc0 & 0x03) != LOWPAN_IPHC_TTL_I) { 411 412 hdr.hop_limit = lowpan_ttl_values[iphc0 & 0x03]; 412 413 } else { 413 - if (lowpan_fetch_skb_u8(skb, &(hdr.hop_limit))) 414 + if (lowpan_fetch_skb(skb, &hdr.hop_limit, 415 + sizeof(hdr.hop_limit))) 414 416 goto drop; 415 417 } 416 418 ··· 421 421 if (iphc1 & LOWPAN_IPHC_SAC) { 422 422 /* Source address context based uncompression */ 423 423 pr_debug("SAC bit is set. 
Handle context based source address.\n"); 424 - err = uncompress_context_based_src_addr( 425 - skb, &hdr.saddr, tmp); 424 + err = uncompress_context_based_src_addr(skb, &hdr.saddr, tmp); 426 425 } else { 427 426 /* Source address uncompression */ 428 427 pr_debug("source address stateless compression\n"); ··· 442 443 pr_debug("dest: context-based mcast compression\n"); 443 444 /* TODO: implement this */ 444 445 } else { 445 - err = lowpan_uncompress_multicast_daddr( 446 - skb, &hdr.daddr, tmp); 446 + err = lowpan_uncompress_multicast_daddr(skb, &hdr.daddr, 447 + tmp); 448 + 447 449 if (err) 448 450 goto drop; 449 451 } ··· 497 497 hdr.version, ntohs(hdr.payload_len), hdr.nexthdr, 498 498 hdr.hop_limit, &hdr.daddr); 499 499 500 - raw_dump_table(__func__, "raw header dump", 501 - (u8 *)&hdr, sizeof(hdr)); 500 + raw_dump_table(__func__, "raw header dump", (u8 *)&hdr, sizeof(hdr)); 502 501 503 502 return skb_deliver(skb, &hdr, dev, deliver_skb); 504 503 ··· 507 508 } 508 509 EXPORT_SYMBOL_GPL(lowpan_process_data); 509 510 510 - static u8 lowpan_compress_addr_64(u8 **hc06_ptr, u8 shift, 511 + static u8 lowpan_compress_addr_64(u8 **hc_ptr, u8 shift, 511 512 const struct in6_addr *ipaddr, 512 513 const unsigned char *lladdr) 513 514 { ··· 518 519 pr_debug("address compression 0 bits\n"); 519 520 } else if (lowpan_is_iid_16_bit_compressable(ipaddr)) { 520 521 /* compress IID to 16 bits xxxx::XXXX */ 521 - memcpy(*hc06_ptr, &ipaddr->s6_addr16[7], 2); 522 - *hc06_ptr += 2; 522 + lowpan_push_hc_data(hc_ptr, &ipaddr->s6_addr16[7], 2); 523 523 val = 2; /* 16-bits */ 524 524 raw_dump_inline(NULL, "Compressed ipv6 addr is (16 bits)", 525 - *hc06_ptr - 2, 2); 525 + *hc_ptr - 2, 2); 526 526 } else { 527 527 /* do not compress IID => xxxx::IID */ 528 - memcpy(*hc06_ptr, &ipaddr->s6_addr16[4], 8); 529 - *hc06_ptr += 8; 528 + lowpan_push_hc_data(hc_ptr, &ipaddr->s6_addr16[4], 8); 530 529 val = 1; /* 64-bits */ 531 530 raw_dump_inline(NULL, "Compressed ipv6 addr is (64 bits)", 532 - 
*hc06_ptr - 8, 8); 531 + *hc_ptr - 8, 8); 533 532 } 534 533 535 534 return rol8(val, shift); 536 535 } 537 536 538 - static void compress_udp_header(u8 **hc06_ptr, struct sk_buff *skb) 537 + static void compress_udp_header(u8 **hc_ptr, struct sk_buff *skb) 539 538 { 540 539 struct udphdr *uh = udp_hdr(skb); 541 540 u8 tmp; ··· 545 548 pr_debug("UDP header: both ports compression to 4 bits\n"); 546 549 /* compression value */ 547 550 tmp = LOWPAN_NHC_UDP_CS_P_11; 548 - lowpan_push_hc_data(hc06_ptr, &tmp, sizeof(tmp)); 551 + lowpan_push_hc_data(hc_ptr, &tmp, sizeof(tmp)); 549 552 /* source and destination port */ 550 553 tmp = ntohs(uh->dest) - LOWPAN_NHC_UDP_4BIT_PORT + 551 554 ((ntohs(uh->source) - LOWPAN_NHC_UDP_4BIT_PORT) << 4); 552 - lowpan_push_hc_data(hc06_ptr, &tmp, sizeof(tmp)); 555 + lowpan_push_hc_data(hc_ptr, &tmp, sizeof(tmp)); 553 556 } else if ((ntohs(uh->dest) & LOWPAN_NHC_UDP_8BIT_MASK) == 554 557 LOWPAN_NHC_UDP_8BIT_PORT) { 555 558 pr_debug("UDP header: remove 8 bits of dest\n"); 556 559 /* compression value */ 557 560 tmp = LOWPAN_NHC_UDP_CS_P_01; 558 - lowpan_push_hc_data(hc06_ptr, &tmp, sizeof(tmp)); 561 + lowpan_push_hc_data(hc_ptr, &tmp, sizeof(tmp)); 559 562 /* source port */ 560 - lowpan_push_hc_data(hc06_ptr, &uh->source, sizeof(uh->source)); 563 + lowpan_push_hc_data(hc_ptr, &uh->source, sizeof(uh->source)); 561 564 /* destination port */ 562 565 tmp = ntohs(uh->dest) - LOWPAN_NHC_UDP_8BIT_PORT; 563 - lowpan_push_hc_data(hc06_ptr, &tmp, sizeof(tmp)); 566 + lowpan_push_hc_data(hc_ptr, &tmp, sizeof(tmp)); 564 567 } else if ((ntohs(uh->source) & LOWPAN_NHC_UDP_8BIT_MASK) == 565 568 LOWPAN_NHC_UDP_8BIT_PORT) { 566 569 pr_debug("UDP header: remove 8 bits of source\n"); 567 570 /* compression value */ 568 571 tmp = LOWPAN_NHC_UDP_CS_P_10; 569 - lowpan_push_hc_data(hc06_ptr, &tmp, sizeof(tmp)); 572 + lowpan_push_hc_data(hc_ptr, &tmp, sizeof(tmp)); 570 573 /* source port */ 571 574 tmp = ntohs(uh->source) - LOWPAN_NHC_UDP_8BIT_PORT; 572 - 
lowpan_push_hc_data(hc06_ptr, &tmp, sizeof(tmp)); 575 + lowpan_push_hc_data(hc_ptr, &tmp, sizeof(tmp)); 573 576 /* destination port */ 574 - lowpan_push_hc_data(hc06_ptr, &uh->dest, sizeof(uh->dest)); 577 + lowpan_push_hc_data(hc_ptr, &uh->dest, sizeof(uh->dest)); 575 578 } else { 576 579 pr_debug("UDP header: can't compress\n"); 577 580 /* compression value */ 578 581 tmp = LOWPAN_NHC_UDP_CS_P_00; 579 - lowpan_push_hc_data(hc06_ptr, &tmp, sizeof(tmp)); 582 + lowpan_push_hc_data(hc_ptr, &tmp, sizeof(tmp)); 580 583 /* source port */ 581 - lowpan_push_hc_data(hc06_ptr, &uh->source, sizeof(uh->source)); 584 + lowpan_push_hc_data(hc_ptr, &uh->source, sizeof(uh->source)); 582 585 /* destination port */ 583 - lowpan_push_hc_data(hc06_ptr, &uh->dest, sizeof(uh->dest)); 586 + lowpan_push_hc_data(hc_ptr, &uh->dest, sizeof(uh->dest)); 584 587 } 585 588 586 589 /* checksum is always inline */ 587 - lowpan_push_hc_data(hc06_ptr, &uh->check, sizeof(uh->check)); 590 + lowpan_push_hc_data(hc_ptr, &uh->check, sizeof(uh->check)); 588 591 589 592 /* skip the UDP header */ 590 593 skb_pull(skb, sizeof(struct udphdr)); ··· 594 597 unsigned short type, const void *_daddr, 595 598 const void *_saddr, unsigned int len) 596 599 { 597 - u8 tmp, iphc0, iphc1, *hc06_ptr; 600 + u8 tmp, iphc0, iphc1, *hc_ptr; 598 601 struct ipv6hdr *hdr; 599 602 u8 head[100] = {}; 603 + int addr_type; 600 604 601 605 if (type != ETH_P_IPV6) 602 606 return -EINVAL; 603 607 604 608 hdr = ipv6_hdr(skb); 605 - hc06_ptr = head + 2; 609 + hc_ptr = head + 2; 606 610 607 611 pr_debug("IPv6 header dump:\n\tversion = %d\n\tlength = %d\n" 608 612 "\tnexthdr = 0x%02x\n\thop_lim = %d\n\tdest = %pI6c\n", ··· 628 630 raw_dump_inline(__func__, "daddr", 629 631 (unsigned char *)_daddr, IEEE802154_ADDR_LEN); 630 632 631 - raw_dump_table(__func__, 632 - "sending raw skb network uncompressed packet", 633 + raw_dump_table(__func__, "sending raw skb network uncompressed packet", 633 634 skb->data, skb->len); 634 635 635 636 /* 
Traffic class, flow label ··· 637 640 * class depends on the presence of version and flow label 638 641 */ 639 642 640 - /* hc06 format of TC is ECN | DSCP , original one is DSCP | ECN */ 643 + /* hc format of TC is ECN | DSCP , original one is DSCP | ECN */ 641 644 tmp = (hdr->priority << 4) | (hdr->flow_lbl[0] >> 4); 642 645 tmp = ((tmp & 0x03) << 6) | (tmp >> 2); 643 646 ··· 651 654 iphc0 |= LOWPAN_IPHC_TC_C; 652 655 } else { 653 656 /* compress only the flow label */ 654 - *hc06_ptr = tmp; 655 - hc06_ptr += 1; 657 + *hc_ptr = tmp; 658 + hc_ptr += 1; 656 659 } 657 660 } else { 658 661 /* Flow label cannot be compressed */ ··· 660 663 ((hdr->flow_lbl[0] & 0xF0) == 0)) { 661 664 /* compress only traffic class */ 662 665 iphc0 |= LOWPAN_IPHC_TC_C; 663 - *hc06_ptr = (tmp & 0xc0) | (hdr->flow_lbl[0] & 0x0F); 664 - memcpy(hc06_ptr + 1, &hdr->flow_lbl[1], 2); 665 - hc06_ptr += 3; 666 + *hc_ptr = (tmp & 0xc0) | (hdr->flow_lbl[0] & 0x0F); 667 + memcpy(hc_ptr + 1, &hdr->flow_lbl[1], 2); 668 + hc_ptr += 3; 666 669 } else { 667 670 /* compress nothing */ 668 - memcpy(hc06_ptr, hdr, 4); 671 + memcpy(hc_ptr, hdr, 4); 669 672 /* replace the top byte with new ECN | DSCP format */ 670 - *hc06_ptr = tmp; 671 - hc06_ptr += 4; 673 + *hc_ptr = tmp; 674 + hc_ptr += 4; 672 675 } 673 676 } 674 677 ··· 678 681 if (hdr->nexthdr == UIP_PROTO_UDP) 679 682 iphc0 |= LOWPAN_IPHC_NH_C; 680 683 681 - if ((iphc0 & LOWPAN_IPHC_NH_C) == 0) { 682 - *hc06_ptr = hdr->nexthdr; 683 - hc06_ptr += 1; 684 - } 684 + if ((iphc0 & LOWPAN_IPHC_NH_C) == 0) 685 + lowpan_push_hc_data(&hc_ptr, &hdr->nexthdr, 686 + sizeof(hdr->nexthdr)); 685 687 686 688 /* Hop limit 687 689 * if 1: compress, encoding is 01 ··· 699 703 iphc0 |= LOWPAN_IPHC_TTL_255; 700 704 break; 701 705 default: 702 - *hc06_ptr = hdr->hop_limit; 703 - hc06_ptr += 1; 704 - break; 706 + lowpan_push_hc_data(&hc_ptr, &hdr->hop_limit, 707 + sizeof(hdr->hop_limit)); 705 708 } 706 709 710 + addr_type = ipv6_addr_type(&hdr->saddr); 707 711 /* source 
address compression */ 708 - if (is_addr_unspecified(&hdr->saddr)) { 712 + if (addr_type == IPV6_ADDR_ANY) { 709 713 pr_debug("source address is unspecified, setting SAC\n"); 710 714 iphc1 |= LOWPAN_IPHC_SAC; 711 - /* TODO: context lookup */ 712 - } else if (is_addr_link_local(&hdr->saddr)) { 713 - iphc1 |= lowpan_compress_addr_64(&hc06_ptr, 714 - LOWPAN_IPHC_SAM_BIT, &hdr->saddr, _saddr); 715 - pr_debug("source address unicast link-local %pI6c " 716 - "iphc1 0x%02x\n", &hdr->saddr, iphc1); 717 715 } else { 718 - pr_debug("send the full source address\n"); 719 - memcpy(hc06_ptr, &hdr->saddr.s6_addr16[0], 16); 720 - hc06_ptr += 16; 716 + if (addr_type & IPV6_ADDR_LINKLOCAL) { 717 + iphc1 |= lowpan_compress_addr_64(&hc_ptr, 718 + LOWPAN_IPHC_SAM_BIT, 719 + &hdr->saddr, _saddr); 720 + pr_debug("source address unicast link-local %pI6c iphc1 0x%02x\n", 721 + &hdr->saddr, iphc1); 722 + } else { 723 + pr_debug("send the full source address\n"); 724 + lowpan_push_hc_data(&hc_ptr, hdr->saddr.s6_addr, 16); 725 + } 721 726 } 722 727 728 + addr_type = ipv6_addr_type(&hdr->daddr); 723 729 /* destination address compression */ 724 - if (is_addr_mcast(&hdr->daddr)) { 730 + if (addr_type & IPV6_ADDR_MULTICAST) { 725 731 pr_debug("destination address is multicast: "); 726 732 iphc1 |= LOWPAN_IPHC_M; 727 733 if (lowpan_is_mcast_addr_compressable8(&hdr->daddr)) { 728 734 pr_debug("compressed to 1 octet\n"); 729 735 iphc1 |= LOWPAN_IPHC_DAM_11; 730 736 /* use last byte */ 731 - *hc06_ptr = hdr->daddr.s6_addr[15]; 732 - hc06_ptr += 1; 737 + lowpan_push_hc_data(&hc_ptr, 738 + &hdr->daddr.s6_addr[15], 1); 733 739 } else if (lowpan_is_mcast_addr_compressable32(&hdr->daddr)) { 734 740 pr_debug("compressed to 4 octets\n"); 735 741 iphc1 |= LOWPAN_IPHC_DAM_10; 736 742 /* second byte + the last three */ 737 - *hc06_ptr = hdr->daddr.s6_addr[1]; 738 - memcpy(hc06_ptr + 1, &hdr->daddr.s6_addr[13], 3); 739 - hc06_ptr += 4; 743 + lowpan_push_hc_data(&hc_ptr, 744 + &hdr->daddr.s6_addr[1], 1); 745 + 
lowpan_push_hc_data(&hc_ptr, 746 + &hdr->daddr.s6_addr[13], 3); 740 747 } else if (lowpan_is_mcast_addr_compressable48(&hdr->daddr)) { 741 748 pr_debug("compressed to 6 octets\n"); 742 749 iphc1 |= LOWPAN_IPHC_DAM_01; 743 750 /* second byte + the last five */ 744 - *hc06_ptr = hdr->daddr.s6_addr[1]; 745 - memcpy(hc06_ptr + 1, &hdr->daddr.s6_addr[11], 5); 746 - hc06_ptr += 6; 751 + lowpan_push_hc_data(&hc_ptr, 752 + &hdr->daddr.s6_addr[1], 1); 753 + lowpan_push_hc_data(&hc_ptr, 754 + &hdr->daddr.s6_addr[11], 5); 747 755 } else { 748 756 pr_debug("using full address\n"); 749 757 iphc1 |= LOWPAN_IPHC_DAM_00; 750 - memcpy(hc06_ptr, &hdr->daddr.s6_addr[0], 16); 751 - hc06_ptr += 16; 758 + lowpan_push_hc_data(&hc_ptr, hdr->daddr.s6_addr, 16); 752 759 } 753 760 } else { 754 - /* TODO: context lookup */ 755 - if (is_addr_link_local(&hdr->daddr)) { 756 - iphc1 |= lowpan_compress_addr_64(&hc06_ptr, 761 + if (addr_type & IPV6_ADDR_LINKLOCAL) { 762 + /* TODO: context lookup */ 763 + iphc1 |= lowpan_compress_addr_64(&hc_ptr, 757 764 LOWPAN_IPHC_DAM_BIT, &hdr->daddr, _daddr); 758 765 pr_debug("dest address unicast link-local %pI6c " 759 - "iphc1 0x%02x\n", &hdr->daddr, iphc1); 766 + "iphc1 0x%02x\n", &hdr->daddr, iphc1); 760 767 } else { 761 768 pr_debug("dest address unicast %pI6c\n", &hdr->daddr); 762 - memcpy(hc06_ptr, &hdr->daddr.s6_addr16[0], 16); 763 - hc06_ptr += 16; 769 + lowpan_push_hc_data(&hc_ptr, hdr->daddr.s6_addr, 16); 764 770 } 765 771 } 766 772 767 773 /* UDP header compression */ 768 774 if (hdr->nexthdr == UIP_PROTO_UDP) 769 - compress_udp_header(&hc06_ptr, skb); 775 + compress_udp_header(&hc_ptr, skb); 770 776 771 777 head[0] = iphc0; 772 778 head[1] = iphc1; 773 779 774 780 skb_pull(skb, sizeof(struct ipv6hdr)); 775 781 skb_reset_transport_header(skb); 776 - memcpy(skb_push(skb, hc06_ptr - head), head, hc06_ptr - head); 782 + memcpy(skb_push(skb, hc_ptr - head), head, hc_ptr - head); 777 783 skb_reset_network_header(skb); 778 784 779 - pr_debug("header len %d 
skb %u\n", (int)(hc06_ptr - head), skb->len); 785 + pr_debug("header len %d skb %u\n", (int)(hc_ptr - head), skb->len); 780 786 781 787 raw_dump_table(__func__, "raw skb data dump compressed", 782 788 skb->data, skb->len);
+182 -3
net/bluetooth/hci_core.c
··· 970 970 DEFINE_SIMPLE_ATTRIBUTE(adv_channel_map_fops, adv_channel_map_get, 971 971 adv_channel_map_set, "%llu\n"); 972 972 973 + static int adv_min_interval_set(void *data, u64 val) 974 + { 975 + struct hci_dev *hdev = data; 976 + 977 + if (val < 0x0020 || val > 0x4000 || val > hdev->le_adv_max_interval) 978 + return -EINVAL; 979 + 980 + hci_dev_lock(hdev); 981 + hdev->le_adv_min_interval = val; 982 + hci_dev_unlock(hdev); 983 + 984 + return 0; 985 + } 986 + 987 + static int adv_min_interval_get(void *data, u64 *val) 988 + { 989 + struct hci_dev *hdev = data; 990 + 991 + hci_dev_lock(hdev); 992 + *val = hdev->le_adv_min_interval; 993 + hci_dev_unlock(hdev); 994 + 995 + return 0; 996 + } 997 + 998 + DEFINE_SIMPLE_ATTRIBUTE(adv_min_interval_fops, adv_min_interval_get, 999 + adv_min_interval_set, "%llu\n"); 1000 + 1001 + static int adv_max_interval_set(void *data, u64 val) 1002 + { 1003 + struct hci_dev *hdev = data; 1004 + 1005 + if (val < 0x0020 || val > 0x4000 || val < hdev->le_adv_min_interval) 1006 + return -EINVAL; 1007 + 1008 + hci_dev_lock(hdev); 1009 + hdev->le_adv_max_interval = val; 1010 + hci_dev_unlock(hdev); 1011 + 1012 + return 0; 1013 + } 1014 + 1015 + static int adv_max_interval_get(void *data, u64 *val) 1016 + { 1017 + struct hci_dev *hdev = data; 1018 + 1019 + hci_dev_lock(hdev); 1020 + *val = hdev->le_adv_max_interval; 1021 + hci_dev_unlock(hdev); 1022 + 1023 + return 0; 1024 + } 1025 + 1026 + DEFINE_SIMPLE_ATTRIBUTE(adv_max_interval_fops, adv_max_interval_get, 1027 + adv_max_interval_set, "%llu\n"); 1028 + 973 1029 static int device_list_show(struct seq_file *f, void *ptr) 974 1030 { 975 1031 struct hci_dev *hdev = f->private; ··· 1623 1567 1624 1568 if (test_bit(HCI_LE_ENABLED, &hdev->dev_flags)) { 1625 1569 cp.le = 0x01; 1626 - cp.simul = lmp_le_br_capable(hdev); 1570 + cp.simul = 0x00; 1627 1571 } 1628 1572 1629 1573 if (cp.le != lmp_host_le_capable(hdev)) ··· 1741 1685 /* Set event mask page 2 if the HCI command for it is supported */ 1742 
1686 if (hdev->commands[22] & 0x04) 1743 1687 hci_set_event_mask_page_2(req); 1688 + 1689 + /* Read local codec list if the HCI command is supported */ 1690 + if (hdev->commands[29] & 0x20) 1691 + hci_req_add(req, HCI_OP_READ_LOCAL_CODECS, 0, NULL); 1692 + 1693 + /* Get MWS transport configuration if the HCI command is supported */ 1694 + if (hdev->commands[30] & 0x08) 1695 + hci_req_add(req, HCI_OP_GET_MWS_TRANSPORT_CONFIG, 0, NULL); 1744 1696 1745 1697 /* Check for Synchronization Train support */ 1746 1698 if (lmp_sync_train_capable(hdev)) ··· 1889 1825 hdev, &supervision_timeout_fops); 1890 1826 debugfs_create_file("adv_channel_map", 0644, hdev->debugfs, 1891 1827 hdev, &adv_channel_map_fops); 1828 + debugfs_create_file("adv_min_interval", 0644, hdev->debugfs, 1829 + hdev, &adv_min_interval_fops); 1830 + debugfs_create_file("adv_max_interval", 0644, hdev->debugfs, 1831 + hdev, &adv_max_interval_fops); 1892 1832 debugfs_create_file("device_list", 0444, hdev->debugfs, hdev, 1893 1833 &device_list_fops); 1894 1834 debugfs_create_u16("discov_interleaved_timeout", 0644, ··· 2521 2453 flush_workqueue(hdev->req_workqueue); 2522 2454 2523 2455 /* For controllers not using the management interface and that 2524 - * are brought up using legacy ioctl, set the HCI_PAIRABLE bit 2456 + * are brought up using legacy ioctl, set the HCI_BONDABLE bit 2525 2457 * so that pairing works for them. Once the management interface 2526 2458 * is in use this bit will be cleared again and userspace has 2527 2459 * to explicitly enable it. 
2528 2460 */ 2529 2461 if (!test_bit(HCI_USER_CHANNEL, &hdev->dev_flags) && 2530 2462 !test_bit(HCI_MGMT, &hdev->dev_flags)) 2531 - set_bit(HCI_PAIRABLE, &hdev->dev_flags); 2463 + set_bit(HCI_BONDABLE, &hdev->dev_flags); 2532 2464 2533 2465 err = hci_dev_do_open(hdev); 2534 2466 ··· 3707 3639 list_add(&params->action, &hdev->pend_le_reports); 3708 3640 hci_update_background_scan(hdev); 3709 3641 break; 3642 + case HCI_AUTO_CONN_DIRECT: 3710 3643 case HCI_AUTO_CONN_ALWAYS: 3711 3644 if (!is_connected(hdev, addr, addr_type)) { 3712 3645 list_add(&params->action, &hdev->pend_le_conns); ··· 3983 3914 hdev->sniff_min_interval = 80; 3984 3915 3985 3916 hdev->le_adv_channel_map = 0x07; 3917 + hdev->le_adv_min_interval = 0x0800; 3918 + hdev->le_adv_max_interval = 0x0800; 3986 3919 hdev->le_scan_interval = 0x0060; 3987 3920 hdev->le_scan_window = 0x0030; 3988 3921 hdev->le_conn_min_interval = 0x0028; ··· 5468 5397 hci_req_add(req, HCI_OP_LE_SET_SCAN_ENABLE, sizeof(cp), &cp); 5469 5398 } 5470 5399 5400 + static void add_to_white_list(struct hci_request *req, 5401 + struct hci_conn_params *params) 5402 + { 5403 + struct hci_cp_le_add_to_white_list cp; 5404 + 5405 + cp.bdaddr_type = params->addr_type; 5406 + bacpy(&cp.bdaddr, &params->addr); 5407 + 5408 + hci_req_add(req, HCI_OP_LE_ADD_TO_WHITE_LIST, sizeof(cp), &cp); 5409 + } 5410 + 5411 + static u8 update_white_list(struct hci_request *req) 5412 + { 5413 + struct hci_dev *hdev = req->hdev; 5414 + struct hci_conn_params *params; 5415 + struct bdaddr_list *b; 5416 + uint8_t white_list_entries = 0; 5417 + 5418 + /* Go through the current white list programmed into the 5419 + * controller one by one and check if that address is still 5420 + * in the list of pending connections or list of devices to 5421 + * report. If not present in either list, then queue the 5422 + * command to remove it from the controller. 
5423 + */ 5424 + list_for_each_entry(b, &hdev->le_white_list, list) { 5425 + struct hci_cp_le_del_from_white_list cp; 5426 + 5427 + if (hci_pend_le_action_lookup(&hdev->pend_le_conns, 5428 + &b->bdaddr, b->bdaddr_type) || 5429 + hci_pend_le_action_lookup(&hdev->pend_le_reports, 5430 + &b->bdaddr, b->bdaddr_type)) { 5431 + white_list_entries++; 5432 + continue; 5433 + } 5434 + 5435 + cp.bdaddr_type = b->bdaddr_type; 5436 + bacpy(&cp.bdaddr, &b->bdaddr); 5437 + 5438 + hci_req_add(req, HCI_OP_LE_DEL_FROM_WHITE_LIST, 5439 + sizeof(cp), &cp); 5440 + } 5441 + 5442 + /* Since all no longer valid white list entries have been 5443 + * removed, walk through the list of pending connections 5444 + * and ensure that any new device gets programmed into 5445 + * the controller. 5446 + * 5447 + * If the list of the devices is larger than the list of 5448 + * available white list entries in the controller, then 5449 + * just abort and return filer policy value to not use the 5450 + * white list. 5451 + */ 5452 + list_for_each_entry(params, &hdev->pend_le_conns, action) { 5453 + if (hci_bdaddr_list_lookup(&hdev->le_white_list, 5454 + &params->addr, params->addr_type)) 5455 + continue; 5456 + 5457 + if (white_list_entries >= hdev->le_white_list_size) { 5458 + /* Select filter policy to accept all advertising */ 5459 + return 0x00; 5460 + } 5461 + 5462 + if (hci_find_irk_by_addr(hdev, &params->addr, 5463 + params->addr_type)) { 5464 + /* White list can not be used with RPAs */ 5465 + return 0x00; 5466 + } 5467 + 5468 + white_list_entries++; 5469 + add_to_white_list(req, params); 5470 + } 5471 + 5472 + /* After adding all new pending connections, walk through 5473 + * the list of pending reports and also add these to the 5474 + * white list if there is still space. 
5475 + */ 5476 + list_for_each_entry(params, &hdev->pend_le_reports, action) { 5477 + if (hci_bdaddr_list_lookup(&hdev->le_white_list, 5478 + &params->addr, params->addr_type)) 5479 + continue; 5480 + 5481 + if (white_list_entries >= hdev->le_white_list_size) { 5482 + /* Select filter policy to accept all advertising */ 5483 + return 0x00; 5484 + } 5485 + 5486 + if (hci_find_irk_by_addr(hdev, &params->addr, 5487 + params->addr_type)) { 5488 + /* White list can not be used with RPAs */ 5489 + return 0x00; 5490 + } 5491 + 5492 + white_list_entries++; 5493 + add_to_white_list(req, params); 5494 + } 5495 + 5496 + /* Select filter policy to use white list */ 5497 + return 0x01; 5498 + } 5499 + 5471 5500 void hci_req_add_le_passive_scan(struct hci_request *req) 5472 5501 { 5473 5502 struct hci_cp_le_set_scan_param param_cp; 5474 5503 struct hci_cp_le_set_scan_enable enable_cp; 5475 5504 struct hci_dev *hdev = req->hdev; 5476 5505 u8 own_addr_type; 5506 + u8 filter_policy; 5477 5507 5478 5508 /* Set require_privacy to false since no SCAN_REQ are send 5479 5509 * during passive scanning. Not using an unresolvable address ··· 5585 5413 if (hci_update_random_address(req, false, &own_addr_type)) 5586 5414 return; 5587 5415 5416 + /* Adding or removing entries from the white list must 5417 + * happen before enabling scanning. The controller does 5418 + * not allow white list modification while scanning. 5419 + */ 5420 + filter_policy = update_white_list(req); 5421 + 5588 5422 memset(&param_cp, 0, sizeof(param_cp)); 5589 5423 param_cp.type = LE_SCAN_PASSIVE; 5590 5424 param_cp.interval = cpu_to_le16(hdev->le_scan_interval); 5591 5425 param_cp.window = cpu_to_le16(hdev->le_scan_window); 5592 5426 param_cp.own_address_type = own_addr_type; 5427 + param_cp.filter_policy = filter_policy; 5593 5428 hci_req_add(req, HCI_OP_LE_SET_SCAN_PARAM, sizeof(param_cp), 5594 5429 &param_cp); 5595 5430
+37 -13
net/bluetooth/hci_event.c
··· 317 317 if (param & SCAN_PAGE) 318 318 set_bit(HCI_PSCAN, &hdev->flags); 319 319 else 320 - clear_bit(HCI_ISCAN, &hdev->flags); 320 + clear_bit(HCI_PSCAN, &hdev->flags); 321 321 322 322 done: 323 323 hci_dev_unlock(hdev); ··· 2259 2259 break; 2260 2260 /* Fall through */ 2261 2261 2262 + case HCI_AUTO_CONN_DIRECT: 2262 2263 case HCI_AUTO_CONN_ALWAYS: 2263 2264 list_del_init(&params->action); 2264 2265 list_add(&params->action, &hdev->pend_le_conns); ··· 3119 3118 hci_conn_drop(conn); 3120 3119 } 3121 3120 3122 - if (!test_bit(HCI_PAIRABLE, &hdev->dev_flags) && 3121 + if (!test_bit(HCI_BONDABLE, &hdev->dev_flags) && 3123 3122 !test_bit(HCI_CONN_AUTH_INITIATOR, &conn->flags)) { 3124 3123 hci_send_cmd(hdev, HCI_OP_PIN_CODE_NEG_REPLY, 3125 3124 sizeof(ev->bdaddr), &ev->bdaddr); ··· 3652 3651 /* Allow pairing if we're pairable, the initiators of the 3653 3652 * pairing or if the remote is not requesting bonding. 3654 3653 */ 3655 - if (test_bit(HCI_PAIRABLE, &hdev->dev_flags) || 3654 + if (test_bit(HCI_BONDABLE, &hdev->dev_flags) || 3656 3655 test_bit(HCI_CONN_AUTH_INITIATOR, &conn->flags) || 3657 3656 (conn->remote_auth & ~0x01) == HCI_AT_NO_BONDING) { 3658 3657 struct hci_cp_io_capability_reply cp; ··· 3671 3670 if (conn->io_capability != HCI_IO_NO_INPUT_OUTPUT && 3672 3671 conn->auth_type != HCI_AT_NO_BONDING) 3673 3672 conn->auth_type |= 0x01; 3674 - 3675 - cp.authentication = conn->auth_type; 3676 3673 } else { 3677 3674 conn->auth_type = hci_get_auth_req(conn); 3678 - cp.authentication = conn->auth_type; 3679 3675 } 3676 + 3677 + /* If we're not bondable, force one of the non-bondable 3678 + * authentication requirement values. 
3679 + */ 3680 + if (!test_bit(HCI_BONDABLE, &hdev->dev_flags)) 3681 + conn->auth_type &= HCI_AT_NO_BONDING_MITM; 3682 + 3683 + cp.authentication = conn->auth_type; 3680 3684 3681 3685 if (hci_find_remote_oob_data(hdev, &conn->dst) && 3682 3686 (conn->out || test_bit(HCI_CONN_REMOTE_OOB, &conn->flags))) ··· 4257 4251 u8 addr_type, u8 adv_type) 4258 4252 { 4259 4253 struct hci_conn *conn; 4254 + struct hci_conn_params *params; 4260 4255 4261 4256 /* If the event is not connectable don't proceed further */ 4262 4257 if (adv_type != LE_ADV_IND && adv_type != LE_ADV_DIRECT_IND) ··· 4273 4266 if (hdev->conn_hash.le_num_slave > 0) 4274 4267 return; 4275 4268 4276 - /* If we're connectable, always connect any ADV_DIRECT_IND event */ 4277 - if (test_bit(HCI_CONNECTABLE, &hdev->dev_flags) && 4278 - adv_type == LE_ADV_DIRECT_IND) 4279 - goto connect; 4280 - 4281 4269 /* If we're not connectable only connect devices that we have in 4282 4270 * our pend_le_conns list. 4283 4271 */ 4284 - if (!hci_pend_le_action_lookup(&hdev->pend_le_conns, addr, addr_type)) 4272 + params = hci_pend_le_action_lookup(&hdev->pend_le_conns, 4273 + addr, addr_type); 4274 + if (!params) 4285 4275 return; 4286 4276 4287 - connect: 4277 + switch (params->auto_connect) { 4278 + case HCI_AUTO_CONN_DIRECT: 4279 + /* Only devices advertising with ADV_DIRECT_IND are 4280 + * triggering a connection attempt. This is allowing 4281 + * incoming connections from slave devices. 4282 + */ 4283 + if (adv_type != LE_ADV_DIRECT_IND) 4284 + return; 4285 + break; 4286 + case HCI_AUTO_CONN_ALWAYS: 4287 + /* Devices advertising with ADV_IND or ADV_DIRECT_IND 4288 + * are triggering a connection attempt. This means 4289 + * that incoming connectioms from slave device are 4290 + * accepted and also outgoing connections to slave 4291 + * devices are established when found. 
4292 + */ 4293 + break; 4294 + default: 4295 + return; 4296 + } 4297 + 4288 4298 conn = hci_connect_le(hdev, addr, addr_type, BT_SECURITY_LOW, 4289 4299 HCI_LE_AUTOCONN_TIMEOUT, HCI_ROLE_MASTER); 4290 4300 if (!IS_ERR(conn))
+1 -1
net/bluetooth/hidp/core.c
··· 154 154 (!!test_bit(LED_COMPOSE, dev->led) << 3) | 155 155 (!!test_bit(LED_SCROLLL, dev->led) << 2) | 156 156 (!!test_bit(LED_CAPSL, dev->led) << 1) | 157 - (!!test_bit(LED_NUML, dev->led)); 157 + (!!test_bit(LED_NUML, dev->led) << 0); 158 158 159 159 if (session->leds == newleds) 160 160 return 0;
+36 -21
net/bluetooth/mgmt.c
net/bluetooth/mgmt.c

···
 	MGMT_OP_SET_DISCOVERABLE,
 	MGMT_OP_SET_CONNECTABLE,
 	MGMT_OP_SET_FAST_CONNECTABLE,
-	MGMT_OP_SET_PAIRABLE,
+	MGMT_OP_SET_BONDABLE,
 	MGMT_OP_SET_LINK_SECURITY,
 	MGMT_OP_SET_SSP,
 	MGMT_OP_SET_HS,
···
 	u32 settings = 0;

 	settings |= MGMT_SETTING_POWERED;
-	settings |= MGMT_SETTING_PAIRABLE;
+	settings |= MGMT_SETTING_BONDABLE;
 	settings |= MGMT_SETTING_DEBUG_KEYS;
 	settings |= MGMT_SETTING_CONNECTABLE;
 	settings |= MGMT_SETTING_DISCOVERABLE;
···
 	if (test_bit(HCI_DISCOVERABLE, &hdev->dev_flags))
 		settings |= MGMT_SETTING_DISCOVERABLE;

-	if (test_bit(HCI_PAIRABLE, &hdev->dev_flags))
-		settings |= MGMT_SETTING_PAIRABLE;
+	if (test_bit(HCI_BONDABLE, &hdev->dev_flags))
+		settings |= MGMT_SETTING_BONDABLE;

 	if (test_bit(HCI_BREDR_ENABLED, &hdev->dev_flags))
 		settings |= MGMT_SETTING_BREDR;
···
 		return;

 	memset(&cp, 0, sizeof(cp));
-	cp.min_interval = cpu_to_le16(0x0800);
-	cp.max_interval = cpu_to_le16(0x0800);
+	cp.min_interval = cpu_to_le16(hdev->le_adv_min_interval);
+	cp.max_interval = cpu_to_le16(hdev->le_adv_max_interval);
 	cp.type = connectable ? LE_ADV_IND : LE_ADV_NONCONN_IND;
 	cp.own_address_type = own_addr_type;
 	cp.channel_map = hdev->le_adv_channel_map;
···
 	 * for mgmt we require user-space to explicitly enable
 	 * it
 	 */
-	clear_bit(HCI_PAIRABLE, &hdev->dev_flags);
+	clear_bit(HCI_BONDABLE, &hdev->dev_flags);
 }

 static int read_controller_info(struct sock *sk, struct hci_dev *hdev,
···
 	if (cp->val) {
 		scan = SCAN_PAGE;
 	} else {
-		scan = 0;
+		/* If we don't have any whitelist entries just
+		 * disable all scanning. If there are entries
+		 * and we had both page and inquiry scanning
+		 * enabled then fall back to only page scanning.
+		 * Otherwise no changes are needed.
+		 */
+		if (list_empty(&hdev->whitelist))
+			scan = SCAN_DISABLED;
+		else if (test_bit(HCI_ISCAN, &hdev->flags))
+			scan = SCAN_PAGE;
+		else
+			goto no_scan_update;

 		if (test_bit(HCI_ISCAN, &hdev->flags) &&
 		    hdev->discov_timeout > 0)
···
 		hci_req_add(&req, HCI_OP_WRITE_SCAN_ENABLE, 1, &scan);
 	}

+no_scan_update:
 	/* If we're going from non-connectable to connectable or
 	 * vice-versa when fast connectable is enabled ensure that fast
 	 * connectable gets disabled. write_fast_connectable won't do
···
 	return err;
 }

-static int set_pairable(struct sock *sk, struct hci_dev *hdev, void *data,
+static int set_bondable(struct sock *sk, struct hci_dev *hdev, void *data,
 			u16 len)
 {
 	struct mgmt_mode *cp = data;
···
 	BT_DBG("request for %s", hdev->name);

 	if (cp->val != 0x00 && cp->val != 0x01)
-		return cmd_status(sk, hdev->id, MGMT_OP_SET_PAIRABLE,
+		return cmd_status(sk, hdev->id, MGMT_OP_SET_BONDABLE,
 				  MGMT_STATUS_INVALID_PARAMS);

 	hci_dev_lock(hdev);

 	if (cp->val)
-		changed = !test_and_set_bit(HCI_PAIRABLE, &hdev->dev_flags);
+		changed = !test_and_set_bit(HCI_BONDABLE, &hdev->dev_flags);
 	else
-		changed = test_and_clear_bit(HCI_PAIRABLE, &hdev->dev_flags);
+		changed = test_and_clear_bit(HCI_BONDABLE, &hdev->dev_flags);

-	err = send_settings_rsp(sk, MGMT_OP_SET_PAIRABLE, hdev);
+	err = send_settings_rsp(sk, MGMT_OP_SET_BONDABLE, hdev);
 	if (err < 0)
 		goto unlock;
···
 	if (val) {
 		hci_cp.le = val;
-		hci_cp.simul = lmp_le_br_capable(hdev);
+		hci_cp.simul = 0x00;
 	} else {
 		if (test_bit(HCI_LE_ADV, &hdev->dev_flags))
 			disable_advertising(&req);
···
 	conn->io_capability = cp->io_cap;
 	cmd->user_data = conn;

-	if (conn->state == BT_CONNECTED &&
+	if ((conn->state == BT_CONNECTED || conn->state == BT_CONFIG) &&
 	    hci_conn_security(conn, sec_level, auth_type, true))
 		pairing_complete(cmd, 0);
···
 				    MGMT_STATUS_INVALID_PARAMS,
 				    &cp->addr, sizeof(cp->addr));

-	if (cp->action != 0x00 && cp->action != 0x01)
+	if (cp->action != 0x00 && cp->action != 0x01 && cp->action != 0x02)
 		return cmd_complete(sk, hdev->id, MGMT_OP_ADD_DEVICE,
 				    MGMT_STATUS_INVALID_PARAMS,
 				    &cp->addr, sizeof(cp->addr));
···
 	if (cp->addr.type == BDADDR_BREDR) {
 		bool update_scan;

-		/* Only "connect" action supported for now */
+		/* Only incoming connections action is supported for now */
 		if (cp->action != 0x01) {
 			err = cmd_complete(sk, hdev->id, MGMT_OP_ADD_DEVICE,
 					   MGMT_STATUS_INVALID_PARAMS,
···
 	else
 		addr_type = ADDR_LE_DEV_RANDOM;

-	if (cp->action)
+	if (cp->action == 0x02)
 		auto_conn = HCI_AUTO_CONN_ALWAYS;
+	else if (cp->action == 0x01)
+		auto_conn = HCI_AUTO_CONN_DIRECT;
 	else
 		auto_conn = HCI_AUTO_CONN_REPORT;
···
 	{ set_discoverable, false, MGMT_SET_DISCOVERABLE_SIZE },
 	{ set_connectable, false, MGMT_SETTING_SIZE },
 	{ set_fast_connectable, false, MGMT_SETTING_SIZE },
-	{ set_pairable, false, MGMT_SETTING_SIZE },
+	{ set_bondable, false, MGMT_SETTING_SIZE },
 	{ set_link_security, false, MGMT_SETTING_SIZE },
 	{ set_ssp, false, MGMT_SETTING_SIZE },
 	{ set_hs, false, MGMT_SETTING_SIZE },
···
 		list_del_init(&p->action);

 		switch (p->auto_connect) {
+		case HCI_AUTO_CONN_DIRECT:
 		case HCI_AUTO_CONN_ALWAYS:
 			list_add(&p->action, &hdev->pend_le_conns);
 			break;
···
 	    lmp_bredr_capable(hdev)) {
 		struct hci_cp_write_le_host_supported cp;

-		cp.le = 1;
-		cp.simul = lmp_le_br_capable(hdev);
+		cp.le = 0x01;
+		cp.simul = 0x00;

 		/* Check first if we already have the right
 		 * host state (host features set)
+5 -2
net/bluetooth/rfcomm/core.c
···
 	/* Get data directly from socket receive queue without copying it. */
 	while ((skb = skb_dequeue(&sk->sk_receive_queue))) {
 		skb_orphan(skb);
-		if (!skb_linearize(skb))
+		if (!skb_linearize(skb)) {
 			s = rfcomm_recv_frame(s, skb);
-		else
+			if (!s)
+				break;
+		} else {
 			kfree_skb(skb);
+		}
 	}

 	if (s && (sk->sk_state == BT_CLOSED))
+26 -7
net/bluetooth/smp.c
···
 	struct hci_dev *hdev = hcon->hdev;
 	u8 local_dist = 0, remote_dist = 0;

-	if (test_bit(HCI_PAIRABLE, &conn->hcon->hdev->dev_flags)) {
+	if (test_bit(HCI_BONDABLE, &conn->hcon->hdev->dev_flags)) {
 		local_dist = SMP_DIST_ENC_KEY | SMP_DIST_SIGN;
 		remote_dist = SMP_DIST_ENC_KEY | SMP_DIST_SIGN;
 		authreq |= SMP_AUTH_BONDING;
···
 	struct smp_chan *smp;

 	smp = kzalloc(sizeof(*smp), GFP_ATOMIC);
-	if (!smp)
+	if (!smp) {
+		clear_bit(HCI_CONN_LE_SMP_PEND, &conn->hcon->flags);
 		return NULL;
+	}

 	smp->tfm_aes = crypto_alloc_blkcipher("ecb(aes)", 0, CRYPTO_ALG_ASYNC);
 	if (IS_ERR(smp->tfm_aes)) {
 		BT_ERR("Unable to create ECB crypto context");
 		kfree(smp);
+		clear_bit(HCI_CONN_LE_SMP_PEND, &conn->hcon->flags);
 		return NULL;
 	}
···
 	if (!smp)
 		return SMP_UNSPECIFIED;

-	if (!test_bit(HCI_PAIRABLE, &hdev->dev_flags) &&
+	if (!test_bit(HCI_BONDABLE, &hdev->dev_flags) &&
 	    (req->auth_req & SMP_AUTH_BONDING))
 		return SMP_PAIRING_NOTSUPP;
···
 	if (test_and_set_bit(HCI_CONN_LE_SMP_PEND, &hcon->flags))
 		return 0;

-	if (!test_bit(HCI_PAIRABLE, &hcon->hdev->dev_flags) &&
-	    (rp->auth_req & SMP_AUTH_BONDING))
-		return SMP_PAIRING_NOTSUPP;
-
 	smp = smp_chan_create(conn);
 	if (!smp)
 		return SMP_UNSPECIFIED;
+
+	if (!test_bit(HCI_BONDABLE, &hcon->hdev->dev_flags) &&
+	    (rp->auth_req & SMP_AUTH_BONDING))
+		return SMP_PAIRING_NOTSUPP;

 	skb_pull(skb, sizeof(*rp));
···
 		bacpy(&hcon->dst, &smp->remote_irk->bdaddr);
 		hcon->dst_type = smp->remote_irk->addr_type;
 		l2cap_conn_update_id_addr(hcon);
+
+		/* When receiving an identity resolving key for
+		 * a remote device that does not use a resolvable
+		 * private address, just remove the key so that
+		 * it is possible to use the controller white
+		 * list for scanning.
+		 *
+		 * Userspace will have been told to not store
+		 * this key at this point. So it is safe to
+		 * just remove it.
+		 */
+		if (!bacmp(&smp->remote_irk->rpa, BDADDR_ANY)) {
+			list_del(&smp->remote_irk->list);
+			kfree(smp->remote_irk);
+			smp->remote_irk = NULL;
+		}
 	}

 	/* The LTKs and CSRKs should be persistent only if both sides
+3
net/nfc/digital.h
···
 #define DIGITAL_CMD_TG_SEND 1
 #define DIGITAL_CMD_TG_LISTEN 2
 #define DIGITAL_CMD_TG_LISTEN_MDAA 3
+#define DIGITAL_CMD_TG_LISTEN_MD 4

 #define DIGITAL_MAX_HEADER_LEN 7
 #define DIGITAL_CRC_LEN 2
···
 int digital_tg_listen_nfca(struct nfc_digital_dev *ddev, u8 rf_tech);
 int digital_tg_listen_nfcf(struct nfc_digital_dev *ddev, u8 rf_tech);
+void digital_tg_recv_md_req(struct nfc_digital_dev *ddev, void *arg,
+			    struct sk_buff *resp);

 typedef u16 (*crc_func_t)(u16, const u8 *, size_t);
+23 -4
net/nfc/digital_core.c
···
 				    digital_send_cmd_complete, cmd);
 		break;

+	case DIGITAL_CMD_TG_LISTEN_MD:
+		rc = ddev->ops->tg_listen_md(ddev, cmd->timeout,
+					     digital_send_cmd_complete, cmd);
+		break;
+
 	default:
 		pr_err("Unknown cmd type %d\n", cmd->type);
 		return;
···
 				500, digital_tg_recv_atr_req, NULL);
 }

+static int digital_tg_listen_md(struct nfc_digital_dev *ddev, u8 rf_tech)
+{
+	return digital_send_cmd(ddev, DIGITAL_CMD_TG_LISTEN_MD, NULL, NULL, 500,
+				digital_tg_recv_md_req, NULL);
+}
+
 int digital_target_found(struct nfc_digital_dev *ddev,
 			 struct nfc_target *target, u8 protocol)
 {
 	int rc;
 	u8 framing;
 	u8 rf_tech;
+	u8 poll_tech_count;
 	int (*check_crc)(struct sk_buff *skb);
 	void (*add_crc)(struct sk_buff *skb);
···
 		return rc;

 	target->supported_protocols = (1 << protocol);
-	rc = nfc_targets_found(ddev->nfc_dev, target, 1);
-	if (rc)
-		return rc;

+	poll_tech_count = ddev->poll_tech_count;
 	ddev->poll_tech_count = 0;
+
+	rc = nfc_targets_found(ddev->nfc_dev, target, 1);
+	if (rc) {
+		ddev->poll_tech_count = poll_tech_count;
+		return rc;
+	}

 	return 0;
 }
···
 	if (ddev->ops->tg_listen_mdaa) {
 		digital_add_poll_tech(ddev, 0,
 				      digital_tg_listen_mdaa);
+	} else if (ddev->ops->tg_listen_md) {
+		digital_add_poll_tech(ddev, 0,
+				      digital_tg_listen_md);
 	} else {
 		digital_add_poll_tech(ddev, NFC_DIGITAL_RF_TECH_106A,
 				      digital_tg_listen_nfca);
···
 	if (!ops->in_configure_hw || !ops->in_send_cmd || !ops->tg_listen ||
 	    !ops->tg_configure_hw || !ops->tg_send_cmd || !ops->abort_cmd ||
-	    !ops->switch_rf)
+	    !ops->switch_rf || (ops->tg_listen_md && !ops->tg_get_rf_tech))
 		return NULL;

 	ddev = kzalloc(sizeof(struct nfc_digital_dev), GFP_KERNEL);
+8 -3
net/nfc/digital_dep.c
···
 	int rc;
 	struct digital_atr_req *atr_req;
 	size_t gb_len, min_size;
+	u8 poll_tech_count;

 	if (IS_ERR(resp)) {
 		rc = PTR_ERR(resp);
···
 		goto exit;

 	gb_len = resp->len - sizeof(struct digital_atr_req);
+
+	poll_tech_count = ddev->poll_tech_count;
+	ddev->poll_tech_count = 0;
+
 	rc = nfc_tm_activated(ddev->nfc_dev, NFC_PROTO_NFC_DEP_MASK,
 			      NFC_COMM_PASSIVE, atr_req->gb, gb_len);
-	if (rc)
+	if (rc) {
+		ddev->poll_tech_count = poll_tech_count;
 		goto exit;
-
-	ddev->poll_tech_count = 0;
+	}

 	rc = 0;
 exit:
+90 -14
net/nfc/digital_technology.c
···

 	if (DIGITAL_SEL_RES_IS_T2T(sel_res)) {
 		nfc_proto = NFC_PROTO_MIFARE;
+	} else if (DIGITAL_SEL_RES_IS_NFC_DEP(sel_res)) {
+		nfc_proto = NFC_PROTO_NFC_DEP;
 	} else if (DIGITAL_SEL_RES_IS_T4T(sel_res)) {
 		rc = digital_in_send_rats(ddev, target);
 		if (rc)
···
 		 * done when receiving the ATS
 		 */
 		goto exit_free_skb;
-	} else if (DIGITAL_SEL_RES_IS_NFC_DEP(sel_res)) {
-		nfc_proto = NFC_PROTO_NFC_DEP;
 	} else {
 		rc = -EOPNOTSUPP;
 		goto exit;
···
 	if (!DIGITAL_DRV_CAPS_TG_CRC(ddev))
 		digital_skb_add_crc_a(skb);

+	rc = digital_tg_configure_hw(ddev, NFC_DIGITAL_CONFIG_FRAMING,
+				     NFC_DIGITAL_FRAMING_NFCA_ANTICOL_COMPLETE);
+	if (rc) {
+		kfree_skb(skb);
+		return rc;
+	}
+
 	rc = digital_tg_send_cmd(ddev, skb, 300, digital_tg_recv_atr_req,
 				 NULL);
 	if (rc)
···
 	for (i = 0; i < 4; i++)
 		sdd_res->bcc ^= sdd_res->nfcid1[i];

+	rc = digital_tg_configure_hw(ddev, NFC_DIGITAL_CONFIG_FRAMING,
+				     NFC_DIGITAL_FRAMING_NFCA_STANDARD_WITH_CRC_A);
+	if (rc) {
+		kfree_skb(skb);
+		return rc;
+	}
+
 	rc = digital_tg_send_cmd(ddev, skb, 300, digital_tg_recv_sel_req,
 				 NULL);
 	if (rc)
···

 	sens_res[0] = (DIGITAL_SENS_RES_NFC_DEP >> 8) & 0xFF;
 	sens_res[1] = DIGITAL_SENS_RES_NFC_DEP & 0xFF;
+
+	rc = digital_tg_configure_hw(ddev, NFC_DIGITAL_CONFIG_FRAMING,
+				     NFC_DIGITAL_FRAMING_NFCA_STANDARD);
+	if (rc) {
+		kfree_skb(skb);
+		return rc;
+	}

 	rc = digital_tg_send_cmd(ddev, skb, 300, digital_tg_recv_sdd_req,
 				 NULL);
···
 	dev_kfree_skb(resp);
 }

+static int digital_tg_config_nfca(struct nfc_digital_dev *ddev)
+{
+	int rc;
+
+	rc = digital_tg_configure_hw(ddev, NFC_DIGITAL_CONFIG_RF_TECH,
+				     NFC_DIGITAL_RF_TECH_106A);
+	if (rc)
+		return rc;
+
+	return digital_tg_configure_hw(ddev, NFC_DIGITAL_CONFIG_FRAMING,
+				       NFC_DIGITAL_FRAMING_NFCA_NFC_DEP);
+}
+
 int digital_tg_listen_nfca(struct nfc_digital_dev *ddev, u8 rf_tech)
+{
+	int rc;
+
+	rc = digital_tg_config_nfca(ddev);
+	if (rc)
+		return rc;
+
+	return digital_tg_listen(ddev, 300, digital_tg_recv_sens_req, NULL);
+}
+
+static int digital_tg_config_nfcf(struct nfc_digital_dev *ddev, u8 rf_tech)
 {
 	int rc;

···
 	if (rc)
 		return rc;

-	rc = digital_tg_configure_hw(ddev, NFC_DIGITAL_CONFIG_FRAMING,
-				     NFC_DIGITAL_FRAMING_NFCA_NFC_DEP);
-	if (rc)
-		return rc;
-
-	return digital_tg_listen(ddev, 300, digital_tg_recv_sens_req, NULL);
+	return digital_tg_configure_hw(ddev, NFC_DIGITAL_CONFIG_FRAMING,
+				       NFC_DIGITAL_FRAMING_NFCF_NFC_DEP);
 }

 int digital_tg_listen_nfcf(struct nfc_digital_dev *ddev, u8 rf_tech)
···
 	int rc;
 	u8 *nfcid2;

-	rc = digital_tg_configure_hw(ddev, NFC_DIGITAL_CONFIG_RF_TECH, rf_tech);
-	if (rc)
-		return rc;
-
-	rc = digital_tg_configure_hw(ddev, NFC_DIGITAL_CONFIG_FRAMING,
-				     NFC_DIGITAL_FRAMING_NFCF_NFC_DEP);
+	rc = digital_tg_config_nfcf(ddev, rf_tech);
 	if (rc)
 		return rc;
···
 	get_random_bytes(nfcid2 + 2, NFC_NFCID2_MAXSIZE - 2);

 	return digital_tg_listen(ddev, 300, digital_tg_recv_sensf_req, nfcid2);
+}
+
+void digital_tg_recv_md_req(struct nfc_digital_dev *ddev, void *arg,
+			    struct sk_buff *resp)
+{
+	u8 rf_tech;
+	int rc;
+
+	if (IS_ERR(resp)) {
+		resp = NULL;
+		goto exit_free_skb;
+	}
+
+	rc = ddev->ops->tg_get_rf_tech(ddev, &rf_tech);
+	if (rc)
+		goto exit_free_skb;
+
+	switch (rf_tech) {
+	case NFC_DIGITAL_RF_TECH_106A:
+		rc = digital_tg_config_nfca(ddev);
+		if (rc)
+			goto exit_free_skb;
+		digital_tg_recv_sens_req(ddev, arg, resp);
+		break;
+	case NFC_DIGITAL_RF_TECH_212F:
+	case NFC_DIGITAL_RF_TECH_424F:
+		rc = digital_tg_config_nfcf(ddev, rf_tech);
+		if (rc)
+			goto exit_free_skb;
+		digital_tg_recv_sensf_req(ddev, arg, resp);
+		break;
+	default:
+		goto exit_free_skb;
+	}
+
+	return;
+
+exit_free_skb:
+	digital_poll_next_tech(ddev);
+	dev_kfree_skb(resp);
 }
+5 -2
net/nfc/hci/core.c
···
 {
 	struct nfc_hci_dev *hdev = nfc_get_drvdata(nfc_dev);

-	nfc_hci_send_event(hdev, NFC_HCI_RF_READER_A_GATE,
-			   NFC_HCI_EVT_END_OPERATION, NULL, 0);
+	if (hdev->ops->stop_poll)
+		hdev->ops->stop_poll(hdev);
+	else
+		nfc_hci_send_event(hdev, NFC_HCI_RF_READER_A_GATE,
+				   NFC_HCI_EVT_END_OPERATION, NULL, 0);
 }

 static int hci_dep_link_up(struct nfc_dev *nfc_dev, struct nfc_target *target,
+3 -1
net/nfc/nci/ntf.c
···
 	struct rf_tech_specific_params_nfcf_poll *nfcf_poll;
 	__u32 protocol;

-	if (rf_protocol == NCI_RF_PROTOCOL_T2T)
+	if (rf_protocol == NCI_RF_PROTOCOL_T1T)
+		protocol = NFC_PROTO_JEWEL_MASK;
+	else if (rf_protocol == NCI_RF_PROTOCOL_T2T)
 		protocol = NFC_PROTO_MIFARE_MASK;
 	else if (rf_protocol == NCI_RF_PROTOCOL_ISO_DEP)
 		if (rf_tech_and_mode == NCI_NFC_A_PASSIVE_POLL_MODE)