···
 class/input/event*/device/capabilities/, and the properties of a device are
 provided in class/input/event*/device/properties.

-Types:
-==========
-Types are groupings of codes under a logical input construct. Each type has a
-set of applicable codes to be used in generating events. See the Codes section
-for details on valid codes for each type.
+Event types:
+===========
+Event types are groupings of codes under a logical input construct. Each
+type has a set of applicable codes to be used in generating events. See the
+Codes section for details on valid codes for each type.

 * EV_SYN:
   - Used as markers to separate events. Events may be separated in time or in
···
 * EV_FF_STATUS:
   - Used to receive force feedback device status.

-Codes:
-==========
-Codes define the precise type of event.
+Event codes:
+===========
+Event codes define the precise type of event.

 EV_SYN:
 ----------
···
 EV_PWR events are a special type of event used specifically for power
 mangement. Its usage is not well defined. To be addressed later.

+Device properties:
+=================
+Normally, userspace sets up an input device based on the data it emits,
+i.e., the event types. In the case of two devices emitting the same event
+types, additional information can be provided in the form of device
+properties.
+
+INPUT_PROP_DIRECT + INPUT_PROP_POINTER:
+--------------------------------------
+The INPUT_PROP_DIRECT property indicates that device coordinates should be
+directly mapped to screen coordinates (not taking into account trivial
+transformations, such as scaling, flipping and rotating). Non-direct input
+devices require non-trivial transformation, such as absolute to relative
+transformation for touchpads. Typical direct input devices: touchscreens,
+drawing tablets; non-direct devices: touchpads, mice.
+
+The INPUT_PROP_POINTER property indicates that the device is not transposed
+on the screen and thus requires use of an on-screen pointer to trace user's
+movements. Typical pointer devices: touchpads, tablets, mice; non-pointer
+device: touchscreen.
+
+If neither INPUT_PROP_DIRECT or INPUT_PROP_POINTER are set, the property is
+considered undefined and the device type should be deduced in the
+traditional way, using emitted event types.
+
+INPUT_PROP_BUTTONPAD:
+--------------------
+For touchpads where the button is placed beneath the surface, such that
+pressing down on the pad causes a button click, this property should be
+set. Common in clickpad notebooks and macbooks from 2009 and onwards.
+
+Originally, the buttonpad property was coded into the bcm5974 driver
+version field under the name integrated button. For backwards
+compatibility, both methods need to be checked in userspace.
+
+INPUT_PROP_SEMI_MT:
+------------------
+Some touchpads, most common between 2008 and 2011, can detect the presence
+of multiple contacts without resolving the individual positions; only the
+number of contacts and a rectangular shape is known. For such
+touchpads, the semi-mt property should be set.
+
+Depending on the device, the rectangle may enclose all touches, like a
+bounding box, or just some of them, for instance the two most recent
+touches. The diversity makes the rectangle of limited use, but some
+gestures can normally be extracted from it.
+
+If INPUT_PROP_SEMI_MT is not set, the device is assumed to be a true MT
+device.
+
 Guidelines:
 ==========
 The guidelines below ensure proper single-touch and multi-finger functionality.
···
 BTN_{MOUSE,LEFT,MIDDLE,RIGHT} must not be reported as the result of touch
 contact. BTN_TOOL_<name> events should be reported where possible.

+For new hardware, INPUT_PROP_DIRECT should be set.
+
 Trackpads:
 ----------
 Legacy trackpads that only provide relative position information must report
···
 location of the touch. BTN_TOUCH should be used to report when a touch is active
 on the trackpad. Where multi-finger support is available, BTN_TOOL_<name> should
 be used to report the number of touches active on the trackpad.
+
+For new hardware, INPUT_PROP_POINTER should be set.

 Tablets:
 ----------
···
 BTN_{0,1,2,etc} are good generic codes for unlabeled buttons. Do not use
 meaningful buttons, like BTN_FORWARD, unless the button is labeled for that
 purpose on the device.
+
+For new hardware, both INPUT_PROP_DIRECT and INPUT_PROP_POINTER should be set.
Documentation/sysctl/kernel.txt (+2)
···
        instead of using the one provided by the hardware.
  512 - A kernel warning has occurred.
 1024 - A module from drivers/staging was loaded.
+2048 - The system is working around a severe firmware bug.
+4096 - An out-of-tree module has been loaded.

 ==============================================================
···
 				  unsigned long addr)
 {
 	pgtable_page_dtor(pte);
-	tlb_add_flush(tlb, addr);
+
+	/*
+	 * With the classic ARM MMU, a pte page has two corresponding pmd
+	 * entries, each covering 1MB.
+	 */
+	addr &= PMD_MASK;
+	tlb_add_flush(tlb, addr + SZ_1M - PAGE_SIZE);
+	tlb_add_flush(tlb, addr + SZ_1M);
+
 	tlb_remove_page(tlb, pte);
 }
arch/arm/kernel/entry-armv.S (+1, -1)
···
 	smp_dmb	arm
 	rsbs	r0, r3, #0		@ set returned val and C flag
 	ldmfd	sp!, {r4, r5, r6, r7}
-	bx	lr
+	usr_ret	lr

 #elif !defined(CONFIG_SMP)
···
 	if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE)
 		return -EINVAL;

+	vfp_flush_hwstate(thread);
+
 	/*
 	 * Copy the floating point registers. There can be unused
 	 * registers see asm/hwcap.h for details.
···

 	__get_user_error(h->fpinst, &frame->ufp_exc.fpinst, err);
 	__get_user_error(h->fpinst2, &frame->ufp_exc.fpinst2, err);
-
-	if (!err)
-		vfp_flush_hwstate(thread);

 	return err ? -EFAULT : 0;
 }
···

 #include <mach/timer.h>

-#include <linux/mm.h>
 #include <linux/pfn.h>
 #include <linux/atomic.h>
 #include <linux/sched.h>
 #include <mach/dma.h>
-
-/* I don't quite understand why dc4 fails when this is set to 1 and DMA is enabled */
-/* especially since dc4 doesn't use kmalloc'd memory. */
-
-#define ALLOW_MAP_OF_KMALLOC_MEMORY 0

 /* ---- Public Variables ------------------------------------------------- */
···
 #define CONTROLLER_FROM_HANDLE(handle) (((handle) >> 4) & 0x0f)
 #define CHANNEL_FROM_HANDLE(handle) ((handle) & 0x0f)

-#define DMA_MAP_DEBUG 0
-
-#if DMA_MAP_DEBUG
-#  define DMA_MAP_PRINT(fmt, args...) printk("%s: " fmt, __func__, ## args)
-#else
-#  define DMA_MAP_PRINT(fmt, args...)
-#endif

 /* ---- Private Variables ------------------------------------------------ */

 static DMA_Global_t gDMA;
 static struct proc_dir_entry *gDmaDir;

-static atomic_t gDmaStatMemTypeKmalloc = ATOMIC_INIT(0);
-static atomic_t gDmaStatMemTypeVmalloc = ATOMIC_INIT(0);
-static atomic_t gDmaStatMemTypeUser = ATOMIC_INIT(0);
-static atomic_t gDmaStatMemTypeCoherent = ATOMIC_INIT(0);
-
 #include "dma_device.c"

 /* ---- Private Function Prototypes -------------------------------------- */

 /* ---- Functions ------------------------------------------------------- */
-
-/****************************************************************************/
-/**
-*   Displays information for /proc/dma/mem-type
-*/
-/****************************************************************************/
-
-static int dma_proc_read_mem_type(char *buf, char **start, off_t offset,
-				  int count, int *eof, void *data)
-{
-	int len = 0;
-
-	len += sprintf(buf + len, "dma_map_mem statistics\n");
-	len +=
-	    sprintf(buf + len, "coherent: %d\n",
-		    atomic_read(&gDmaStatMemTypeCoherent));
-	len +=
-	    sprintf(buf + len, "kmalloc: %d\n",
-		    atomic_read(&gDmaStatMemTypeKmalloc));
-	len +=
-	    sprintf(buf + len, "vmalloc: %d\n",
-		    atomic_read(&gDmaStatMemTypeVmalloc));
-	len +=
-	    sprintf(buf + len, "user: %d\n",
-		    atomic_read(&gDmaStatMemTypeUser));
-
-	return len;
-}

 /****************************************************************************/
 /**
···
 					       dma_proc_read_channels, NULL);
 		create_proc_read_entry("devices", 0, gDmaDir,
 				       dma_proc_read_devices, NULL);
-		create_proc_read_entry("mem-type", 0, gDmaDir,
-				       dma_proc_read_mem_type, NULL);
 	}

 out:
···
 }

 EXPORT_SYMBOL(dma_set_device_handler);
-
-/****************************************************************************/
-/**
-*   Initializes a memory mapping structure
-*/
-/****************************************************************************/
-
-int dma_init_mem_map(DMA_MemMap_t *memMap)
-{
-	memset(memMap, 0, sizeof(*memMap));
-
-	sema_init(&memMap->lock, 1);
-
-	return 0;
-}
-
-EXPORT_SYMBOL(dma_init_mem_map);
-
-/****************************************************************************/
-/**
-*   Releases any memory currently being held by a memory mapping structure.
-*/
-/****************************************************************************/
-
-int dma_term_mem_map(DMA_MemMap_t *memMap)
-{
-	down(&memMap->lock);	/* Just being paranoid */
-
-	/* Free up any allocated memory */
-
-	up(&memMap->lock);
-	memset(memMap, 0, sizeof(*memMap));
-
-	return 0;
-}
-
-EXPORT_SYMBOL(dma_term_mem_map);
-
-/****************************************************************************/
-/**
-*   Looks at a memory address and categorizes it.
-*
-*   @return One of the values from the DMA_MemType_t enumeration.
-*/
-/****************************************************************************/
-
-DMA_MemType_t dma_mem_type(void *addr)
-{
-	unsigned long addrVal = (unsigned long)addr;
-
-	if (addrVal >= CONSISTENT_BASE) {
-		/* NOTE: DMA virtual memory space starts at 0xFFxxxxxx */
-
-		/* dma_alloc_xxx pages are physically and virtually contiguous */
-
-		return DMA_MEM_TYPE_DMA;
-	}
-
-	/* Technically, we could add one more classification. Addresses between VMALLOC_END */
-	/* and the beginning of the DMA virtual address could be considered to be I/O space. */
-	/* Right now, nobody cares about this particular classification, so we ignore it. */
-
-	if (is_vmalloc_addr(addr)) {
-		/* Address comes from the vmalloc'd region. Pages are virtually */
-		/* contiguous but NOT physically contiguous */
-
-		return DMA_MEM_TYPE_VMALLOC;
-	}
-
-	if (addrVal >= PAGE_OFFSET) {
-		/* PAGE_OFFSET is typically 0xC0000000 */
-
-		/* kmalloc'd pages are physically contiguous */
-
-		return DMA_MEM_TYPE_KMALLOC;
-	}
-
-	return DMA_MEM_TYPE_USER;
-}
-
-EXPORT_SYMBOL(dma_mem_type);
-
-/****************************************************************************/
-/**
-*   Looks at a memory address and determines if we support DMA'ing to/from
-*   that type of memory.
-*
-*   @return boolean -
-*               return value != 0 means dma supported
-*               return value == 0 means dma not supported
-*/
-/****************************************************************************/
-
-int dma_mem_supports_dma(void *addr)
-{
-	DMA_MemType_t memType = dma_mem_type(addr);
-
-	return (memType == DMA_MEM_TYPE_DMA)
-#if ALLOW_MAP_OF_KMALLOC_MEMORY
-	    || (memType == DMA_MEM_TYPE_KMALLOC)
-#endif
-	    || (memType == DMA_MEM_TYPE_USER);
-}
-
-EXPORT_SYMBOL(dma_mem_supports_dma);
-
-/****************************************************************************/
-/**
-*   Maps in a memory region such that it can be used for performing a DMA.
-*
-*   @return
-*/
-/****************************************************************************/
-
-int dma_map_start(DMA_MemMap_t *memMap,	/* Stores state information about the map */
-		  enum dma_data_direction dir	/* Direction that the mapping will be going */
-    ) {
-	int rc;
-
-	down(&memMap->lock);
-
-	DMA_MAP_PRINT("memMap: %p\n", memMap);
-
-	if (memMap->inUse) {
-		printk(KERN_ERR "%s: memory map %p is already being used\n",
-		       __func__, memMap);
-		rc = -EBUSY;
-		goto out;
-	}
-
-	memMap->inUse = 1;
-	memMap->dir = dir;
-	memMap->numRegionsUsed = 0;
-
-	rc = 0;
-
-out:
-
-	DMA_MAP_PRINT("returning %d", rc);
-
-	up(&memMap->lock);
-
-	return rc;
-}
-
-EXPORT_SYMBOL(dma_map_start);
-
-/****************************************************************************/
-/**
-*   Adds a segment of memory to a memory map. Each segment is both
-*   physically and virtually contiguous.
-*
-*   @return 0 on success, error code otherwise.
-*/
-/****************************************************************************/
-
-static int dma_map_add_segment(DMA_MemMap_t *memMap,	/* Stores state information about the map */
-			       DMA_Region_t *region,	/* Region that the segment belongs to */
-			       void *virtAddr,	/* Virtual address of the segment being added */
-			       dma_addr_t physAddr,	/* Physical address of the segment being added */
-			       size_t numBytes	/* Number of bytes of the segment being added */
-    ) {
-	DMA_Segment_t *segment;
-
-	DMA_MAP_PRINT("memMap:%p va:%p pa:0x%x #:%d\n", memMap, virtAddr,
-		      physAddr, numBytes);
-
-	/* Sanity check */
-
-	if (((unsigned long)virtAddr < (unsigned long)region->virtAddr)
-	    || (((unsigned long)virtAddr + numBytes)) >
-	    ((unsigned long)region->virtAddr + region->numBytes)) {
-		printk(KERN_ERR
-		       "%s: virtAddr %p is outside region @ %p len: %d\n",
-		       __func__, virtAddr, region->virtAddr, region->numBytes);
-		return -EINVAL;
-	}
-
-	if (region->numSegmentsUsed > 0) {
-		/* Check to see if this segment is physically contiguous with the previous one */
-
-		segment = &region->segment[region->numSegmentsUsed - 1];
-
-		if ((segment->physAddr + segment->numBytes) == physAddr) {
-			/* It is - just add on to the end */
-
-			DMA_MAP_PRINT("appending %d bytes to last segment\n",
-				      numBytes);
-
-			segment->numBytes += numBytes;
-
-			return 0;
-		}
-	}
-
-	/* Reallocate to hold more segments, if required. */
-
-	if (region->numSegmentsUsed >= region->numSegmentsAllocated) {
-		DMA_Segment_t *newSegment;
-		size_t oldSize =
-		    region->numSegmentsAllocated * sizeof(*newSegment);
-		int newAlloc = region->numSegmentsAllocated + 4;
-		size_t newSize = newAlloc * sizeof(*newSegment);
-
-		newSegment = kmalloc(newSize, GFP_KERNEL);
-		if (newSegment == NULL) {
-			return -ENOMEM;
-		}
-		memcpy(newSegment, region->segment, oldSize);
-		memset(&((uint8_t *) newSegment)[oldSize], 0,
-		       newSize - oldSize);
-		kfree(region->segment);
-
-		region->numSegmentsAllocated = newAlloc;
-		region->segment = newSegment;
-	}
-
-	segment = &region->segment[region->numSegmentsUsed];
-	region->numSegmentsUsed++;
-
-	segment->virtAddr = virtAddr;
-	segment->physAddr = physAddr;
-	segment->numBytes = numBytes;
-
-	DMA_MAP_PRINT("returning success\n");
-
-	return 0;
-}
-
-/****************************************************************************/
-/**
-*   Adds a region of memory to a memory map. Each region is virtually
-*   contiguous, but not necessarily physically contiguous.
-*
-*   @return 0 on success, error code otherwise.
-*/
-/****************************************************************************/
-
-int dma_map_add_region(DMA_MemMap_t *memMap,	/* Stores state information about the map */
-		       void *mem,	/* Virtual address that we want to get a map of */
-		       size_t numBytes	/* Number of bytes being mapped */
-    ) {
-	unsigned long addr = (unsigned long)mem;
-	unsigned int offset;
-	int rc = 0;
-	DMA_Region_t *region;
-	dma_addr_t physAddr;
-
-	down(&memMap->lock);
-
-	DMA_MAP_PRINT("memMap:%p va:%p #:%d\n", memMap, mem, numBytes);
-
-	if (!memMap->inUse) {
-		printk(KERN_ERR "%s: Make sure you call dma_map_start first\n",
-		       __func__);
-		rc = -EINVAL;
-		goto out;
-	}
-
-	/* Reallocate to hold more regions. */
-
-	if (memMap->numRegionsUsed >= memMap->numRegionsAllocated) {
-		DMA_Region_t *newRegion;
-		size_t oldSize =
-		    memMap->numRegionsAllocated * sizeof(*newRegion);
-		int newAlloc = memMap->numRegionsAllocated + 4;
-		size_t newSize = newAlloc * sizeof(*newRegion);
-
-		newRegion = kmalloc(newSize, GFP_KERNEL);
-		if (newRegion == NULL) {
-			rc = -ENOMEM;
-			goto out;
-		}
-		memcpy(newRegion, memMap->region, oldSize);
-		memset(&((uint8_t *) newRegion)[oldSize], 0, newSize - oldSize);
-
-		kfree(memMap->region);
-
-		memMap->numRegionsAllocated = newAlloc;
-		memMap->region = newRegion;
-	}
-
-	region = &memMap->region[memMap->numRegionsUsed];
-	memMap->numRegionsUsed++;
-
-	offset = addr & ~PAGE_MASK;
-
-	region->memType = dma_mem_type(mem);
-	region->virtAddr = mem;
-	region->numBytes = numBytes;
-	region->numSegmentsUsed = 0;
-	region->numLockedPages = 0;
-	region->lockedPages = NULL;
-
-	switch (region->memType) {
-	case DMA_MEM_TYPE_VMALLOC:
-		{
-			atomic_inc(&gDmaStatMemTypeVmalloc);
-
-			/* printk(KERN_ERR "%s: vmalloc'd pages are not supported\n", __func__); */
-
-			/* vmalloc'd pages are not physically contiguous */
-
-			rc = -EINVAL;
-			break;
-		}
-
-	case DMA_MEM_TYPE_KMALLOC:
-		{
-			atomic_inc(&gDmaStatMemTypeKmalloc);
-
-			/* kmalloc'd pages are physically contiguous, so they'll have exactly */
-			/* one segment */
-
-#if ALLOW_MAP_OF_KMALLOC_MEMORY
-			physAddr =
-			    dma_map_single(NULL, mem, numBytes, memMap->dir);
-			rc = dma_map_add_segment(memMap, region, mem, physAddr,
-						 numBytes);
-#else
-			rc = -EINVAL;
-#endif
-			break;
-		}
-
-	case DMA_MEM_TYPE_DMA:
-		{
-			/* dma_alloc_xxx pages are physically contiguous */
-
-			atomic_inc(&gDmaStatMemTypeCoherent);
-
-			physAddr = (vmalloc_to_pfn(mem) << PAGE_SHIFT) + offset;
-
-			dma_sync_single_for_cpu(NULL, physAddr, numBytes,
-						memMap->dir);
-			rc = dma_map_add_segment(memMap, region, mem, physAddr,
-						 numBytes);
-			break;
-		}
-
-	case DMA_MEM_TYPE_USER:
-		{
-			size_t firstPageOffset;
-			size_t firstPageSize;
-			struct page **pages;
-			struct task_struct *userTask;
-
-			atomic_inc(&gDmaStatMemTypeUser);
-
-#if 1
-			/* If the pages are user pages, then the dma_mem_map_set_user_task function */
-			/* must have been previously called. */
-
-			if (memMap->userTask == NULL) {
-				printk(KERN_ERR
-				       "%s: must call dma_mem_map_set_user_task when using user-mode memory\n",
-				       __func__);
-				return -EINVAL;
-			}
-
-			/* User pages need to be locked. */
-
-			firstPageOffset =
-			    (unsigned long)region->virtAddr & (PAGE_SIZE - 1);
-			firstPageSize = PAGE_SIZE - firstPageOffset;
-
-			region->numLockedPages = (firstPageOffset
-						  + region->numBytes +
-						  PAGE_SIZE - 1) / PAGE_SIZE;
-			pages =
-			    kmalloc(region->numLockedPages *
-				    sizeof(struct page *), GFP_KERNEL);
-
-			if (pages == NULL) {
-				region->numLockedPages = 0;
-				return -ENOMEM;
-			}
-
-			userTask = memMap->userTask;
-
-			down_read(&userTask->mm->mmap_sem);
-			rc = get_user_pages(userTask,	/* task */
-					    userTask->mm,	/* mm */
-					    (unsigned long)region->virtAddr,	/* start */
-					    region->numLockedPages,	/* len */
-					    memMap->dir == DMA_FROM_DEVICE,	/* write */
-					    0,	/* force */
-					    pages,	/* pages (array of pointers to page) */
-					    NULL);	/* vmas */
-			up_read(&userTask->mm->mmap_sem);
-
-			if (rc != region->numLockedPages) {
-				kfree(pages);
-				region->numLockedPages = 0;
-
-				if (rc >= 0) {
-					rc = -EINVAL;
-				}
-			} else {
-				uint8_t *virtAddr = region->virtAddr;
-				size_t bytesRemaining;
-				int pageIdx;
-
-				rc = 0;	/* Since get_user_pages returns +ve number */
-
-				region->lockedPages = pages;
-
-				/* We've locked the user pages. Now we need to walk them and figure */
-				/* out the physical addresses. */
-
-				/* The first page may be partial */
-
-				dma_map_add_segment(memMap,
-						    region,
-						    virtAddr,
-						    PFN_PHYS(page_to_pfn
-							     (pages[0])) +
-						    firstPageOffset,
-						    firstPageSize);
-
-				virtAddr += firstPageSize;
-				bytesRemaining =
-				    region->numBytes - firstPageSize;
-
-				for (pageIdx = 1;
-				     pageIdx < region->numLockedPages;
-				     pageIdx++) {
-					size_t bytesThisPage =
-					    (bytesRemaining >
-					     PAGE_SIZE ? PAGE_SIZE :
-					     bytesRemaining);
-
-					DMA_MAP_PRINT
-					    ("pageIdx:%d pages[pageIdx]=%p pfn=%u phys=%u\n",
-					     pageIdx, pages[pageIdx],
-					     page_to_pfn(pages[pageIdx]),
-					     PFN_PHYS(page_to_pfn
-						      (pages[pageIdx])));
-
-					dma_map_add_segment(memMap,
-							    region,
-							    virtAddr,
-							    PFN_PHYS(page_to_pfn
-								     (pages
-								      [pageIdx])),
-							    bytesThisPage);
-
-					virtAddr += bytesThisPage;
-					bytesRemaining -= bytesThisPage;
-				}
-			}
-#else
-			printk(KERN_ERR
-			       "%s: User mode pages are not yet supported\n",
-			       __func__);
-
-			/* user pages are not physically contiguous */
-
-			rc = -EINVAL;
-#endif
-			break;
-		}
-
-	default:
-		{
-			printk(KERN_ERR "%s: Unsupported memory type: %d\n",
-			       __func__, region->memType);
-
-			rc = -EINVAL;
-			break;
-		}
-	}
-
-	if (rc != 0) {
-		memMap->numRegionsUsed--;
-	}
-
-out:
-
-	DMA_MAP_PRINT("returning %d\n", rc);
-
-	up(&memMap->lock);
-
-	return rc;
-}
-
-EXPORT_SYMBOL(dma_map_add_segment);
-
-/****************************************************************************/
-/**
-*   Maps in a memory region such that it can be used for performing a DMA.
-*
-*   @return 0 on success, error code otherwise.
-*/
-/****************************************************************************/
-
-int dma_map_mem(DMA_MemMap_t *memMap,	/* Stores state information about the map */
-		void *mem,	/* Virtual address that we want to get a map of */
-		size_t numBytes,	/* Number of bytes being mapped */
-		enum dma_data_direction dir	/* Direction that the mapping will be going */
-    ) {
-	int rc;
-
-	rc = dma_map_start(memMap, dir);
-	if (rc == 0) {
-		rc = dma_map_add_region(memMap, mem, numBytes);
-		if (rc < 0) {
-			/* Since the add fails, this function will fail, and the caller won't */
-			/* call unmap, so we need to do it here. */
-
-			dma_unmap(memMap, 0);
-		}
-	}
-
-	return rc;
-}
-
-EXPORT_SYMBOL(dma_map_mem);
-
-/****************************************************************************/
-/**
-*   Setup a descriptor ring for a given memory map.
-*
-*   It is assumed that the descriptor ring has already been initialized, and
-*   this routine will only reallocate a new descriptor ring if the existing
-*   one is too small.
-*
-*   @return 0 on success, error code otherwise.
-*/
-/****************************************************************************/
-
-int dma_map_create_descriptor_ring(DMA_Device_t dev,	/* DMA device (where the ring is stored) */
-				   DMA_MemMap_t *memMap,	/* Memory map that will be used */
-				   dma_addr_t devPhysAddr	/* Physical address of device */
-    ) {
-	int rc;
-	int numDescriptors;
-	DMA_DeviceAttribute_t *devAttr;
-	DMA_Region_t *region;
-	DMA_Segment_t *segment;
-	dma_addr_t srcPhysAddr;
-	dma_addr_t dstPhysAddr;
-	int regionIdx;
-	int segmentIdx;
-
-	devAttr = &DMA_gDeviceAttribute[dev];
-
-	down(&memMap->lock);
-
-	/* Figure out how many descriptors we need */
-
-	numDescriptors = 0;
-	for (regionIdx = 0; regionIdx < memMap->numRegionsUsed; regionIdx++) {
-		region = &memMap->region[regionIdx];
-
-		for (segmentIdx = 0; segmentIdx < region->numSegmentsUsed;
-		     segmentIdx++) {
-			segment = &region->segment[segmentIdx];
-
-			if (memMap->dir == DMA_TO_DEVICE) {
-				srcPhysAddr = segment->physAddr;
-				dstPhysAddr = devPhysAddr;
-			} else {
-				srcPhysAddr = devPhysAddr;
-				dstPhysAddr = segment->physAddr;
-			}
-
-			rc =
-			    dma_calculate_descriptor_count(dev, srcPhysAddr,
-							   dstPhysAddr,
-							   segment->
-							   numBytes);
-			if (rc < 0) {
-				printk(KERN_ERR
-				       "%s: dma_calculate_descriptor_count failed: %d\n",
-				       __func__, rc);
-				goto out;
-			}
-			numDescriptors += rc;
-		}
-	}
-
-	/* Adjust the size of the ring, if it isn't big enough */
-
-	if (numDescriptors > devAttr->ring.descriptorsAllocated) {
-		dma_free_descriptor_ring(&devAttr->ring);
-		rc =
-		    dma_alloc_descriptor_ring(&devAttr->ring,
-					      numDescriptors);
-		if (rc < 0) {
-			printk(KERN_ERR
-			       "%s: dma_alloc_descriptor_ring failed: %d\n",
-			       __func__, rc);
-			goto out;
-		}
-	} else {
-		rc =
-		    dma_init_descriptor_ring(&devAttr->ring,
-					     numDescriptors);
-		if (rc < 0) {
-			printk(KERN_ERR
-			       "%s: dma_init_descriptor_ring failed: %d\n",
-			       __func__, rc);
-			goto out;
-		}
-	}
-
-	/* Populate the descriptors */
-
-	for (regionIdx = 0; regionIdx < memMap->numRegionsUsed; regionIdx++) {
-		region = &memMap->region[regionIdx];
-
-		for (segmentIdx = 0; segmentIdx < region->numSegmentsUsed;
-		     segmentIdx++) {
-			segment = &region->segment[segmentIdx];
-
-			if (memMap->dir == DMA_TO_DEVICE) {
-				srcPhysAddr = segment->physAddr;
-				dstPhysAddr = devPhysAddr;
-			} else {
-				srcPhysAddr = devPhysAddr;
-				dstPhysAddr = segment->physAddr;
-			}
-
-			rc =
-			    dma_add_descriptors(&devAttr->ring, dev,
-						srcPhysAddr, dstPhysAddr,
-						segment->numBytes);
-			if (rc < 0) {
-				printk(KERN_ERR
-				       "%s: dma_add_descriptors failed: %d\n",
-				       __func__, rc);
-				goto out;
-			}
-		}
-	}
-
-	rc = 0;
-
-out:
-
-	up(&memMap->lock);
-	return rc;
-}
-
-EXPORT_SYMBOL(dma_map_create_descriptor_ring);
-
-/****************************************************************************/
-/**
-*   Maps in a memory region such that it can be used for performing a DMA.
-*
-*   @return
-*/
-/****************************************************************************/
-
-int dma_unmap(DMA_MemMap_t *memMap,	/* Stores state information about the map */
-	      int dirtied	/* non-zero if any of the pages were modified */
-    ) {
-
-	int rc = 0;
-	int regionIdx;
-	int segmentIdx;
-	DMA_Region_t *region;
-	DMA_Segment_t *segment;
-
-	down(&memMap->lock);
-
-	for (regionIdx = 0; regionIdx < memMap->numRegionsUsed; regionIdx++) {
-		region = &memMap->region[regionIdx];
-
-		for (segmentIdx = 0; segmentIdx < region->numSegmentsUsed;
-		     segmentIdx++) {
-			segment = &region->segment[segmentIdx];
-
-			switch (region->memType) {
-			case DMA_MEM_TYPE_VMALLOC:
-				{
-					printk(KERN_ERR
-					       "%s: vmalloc'd pages are not yet supported\n",
-					       __func__);
-					rc = -EINVAL;
-					goto out;
-				}
-
-			case DMA_MEM_TYPE_KMALLOC:
-				{
-#if ALLOW_MAP_OF_KMALLOC_MEMORY
-					dma_unmap_single(NULL,
-							 segment->physAddr,
-							 segment->numBytes,
-							 memMap->dir);
-#endif
-					break;
-				}
-
-			case DMA_MEM_TYPE_DMA:
-				{
-					dma_sync_single_for_cpu(NULL,
-								segment->
-								physAddr,
-								segment->
-								numBytes,
-								memMap->dir);
-					break;
-				}
-
-			case DMA_MEM_TYPE_USER:
-				{
-					/* Nothing to do here. */
-
-					break;
-				}
-
-			default:
-				{
-					printk(KERN_ERR
-					       "%s: Unsupported memory type: %d\n",
-					       __func__, region->memType);
-					rc = -EINVAL;
-					goto out;
-				}
-			}
-
-			segment->virtAddr = NULL;
-			segment->physAddr = 0;
-			segment->numBytes = 0;
-		}
-
-		if (region->numLockedPages > 0) {
-			int pageIdx;
-
-			/* Some user pages were locked. We need to go and unlock them now. */
-
-			for (pageIdx = 0; pageIdx < region->numLockedPages;
-			     pageIdx++) {
-				struct page *page =
-				    region->lockedPages[pageIdx];
-
-				if (memMap->dir == DMA_FROM_DEVICE) {
-					SetPageDirty(page);
-				}
-				page_cache_release(page);
-			}
-			kfree(region->lockedPages);
-			region->numLockedPages = 0;
-			region->lockedPages = NULL;
-		}
-
-		region->memType = DMA_MEM_TYPE_NONE;
-		region->virtAddr = NULL;
-		region->numBytes = 0;
-		region->numSegmentsUsed = 0;
-	}
-	memMap->userTask = NULL;
-	memMap->numRegionsUsed = 0;
-	memMap->inUse = 0;
-
-out:
-	up(&memMap->lock);
-
-	return rc;
-}
-
-EXPORT_SYMBOL(dma_unmap);
-196
arch/arm/mach-bcmring/include/mach/dma.h
···
 /* ---- Include Files ---------------------------------------------------- */

 #include <linux/kernel.h>
-#include <linux/wait.h>
 #include <linux/semaphore.h>
 #include <csp/dmacHw.h>
 #include <mach/timer.h>
-#include <linux/scatterlist.h>
-#include <linux/dma-mapping.h>
-#include <linux/mm.h>
-#include <linux/vmalloc.h>
-#include <linux/pagemap.h>

 /* ---- Constants and Types ---------------------------------------------- */
···
        size_t bytesAllocated;  /* Number of bytes allocated in the descriptor ring */

 } DMA_DescriptorRing_t;
-
-/****************************************************************************
-*
-*   The DMA_MemType_t and DMA_MemMap_t are helper structures used to setup
-*   DMA chains from a variety of memory sources.
-*
-*****************************************************************************/
-
-#define DMA_MEM_MAP_MIN_SIZE    4096   /* Pages less than this size are better */
-                                       /* off not being DMA'd. */
-
-typedef enum {
-       DMA_MEM_TYPE_NONE,      /* Not a valid setting */
-       DMA_MEM_TYPE_VMALLOC,   /* Memory came from vmalloc call */
-       DMA_MEM_TYPE_KMALLOC,   /* Memory came from kmalloc call */
-       DMA_MEM_TYPE_DMA,       /* Memory came from dma_alloc_xxx call */
-       DMA_MEM_TYPE_USER,      /* Memory came from user space. */
-
-} DMA_MemType_t;
-
-/* A segment represents a physically and virtually contiguous chunk of memory. */
-/* i.e. each segment can be DMA'd */
-/* A user of the DMA code will add memory regions. Each region may need to be */
-/* represented by one or more segments. */
-
-typedef struct {
-       void *virtAddr;         /* Virtual address used for this segment */
-       dma_addr_t physAddr;    /* Physical address this segment maps to */
-       size_t numBytes;        /* Size of the segment, in bytes */
-
-} DMA_Segment_t;
-
-/* A region represents a virtually contiguous chunk of memory, which may be */
-/* made up of multiple segments. */
-
-typedef struct {
-       DMA_MemType_t memType;
-       void *virtAddr;
-       size_t numBytes;
-
-       /* Each region (virtually contiguous) consists of one or more segments. Each */
-       /* segment is virtually and physically contiguous. */
-
-       int numSegmentsUsed;
-       int numSegmentsAllocated;
-       DMA_Segment_t *segment;
-
-       /* When a region corresponds to user memory, we need to lock all of the pages */
-       /* down before we can figure out the physical addresses. The lockedPage array contains */
-       /* the pages that were locked, and which subsequently need to be unlocked once the */
-       /* memory is unmapped. */
-
-       unsigned numLockedPages;
-       struct page **lockedPages;
-
-} DMA_Region_t;
-
-typedef struct {
-       int inUse;              /* Is this mapping currently being used? */
-       struct semaphore lock;  /* Acquired when using this structure */
-       enum dma_data_direction dir;    /* Direction this transfer is intended for */
-
-       /* In the event that we're mapping user memory, we need to know which task */
-       /* the memory is for, so that we can obtain the correct mm locks. */
-
-       struct task_struct *userTask;
-
-       int numRegionsUsed;
-       int numRegionsAllocated;
-       DMA_Region_t *region;
-
-} DMA_MemMap_t;

 /****************************************************************************
 *
···
                      dma_addr_t dstData1,     /* Physical address of first destination buffer */
                      dma_addr_t dstData2,     /* Physical address of second destination buffer */
                      size_t numBytes  /* Number of bytes in each destination buffer */
-    );
-
-/****************************************************************************/
-/**
-*   Initializes a DMA_MemMap_t data structure
-*/
-/****************************************************************************/
-
-int dma_init_mem_map(DMA_MemMap_t *memMap      /* Stores state information about the map */
-    );
-
-/****************************************************************************/
-/**
-*   Releases any memory currently being held by a memory mapping structure.
-*/
-/****************************************************************************/
-
-int dma_term_mem_map(DMA_MemMap_t *memMap      /* Stores state information about the map */
-    );
-
-/****************************************************************************/
-/**
-*   Looks at a memory address and categorizes it.
-*
-*   @return One of the values from the DMA_MemType_t enumeration.
-*/
-/****************************************************************************/
-
-DMA_MemType_t dma_mem_type(void *addr);
-
-/****************************************************************************/
-/**
-*   Sets the process (aka userTask) associated with a mem map. This is
-*   required if user-mode segments will be added to the mapping.
-*/
-/****************************************************************************/
-
-static inline void dma_mem_map_set_user_task(DMA_MemMap_t *memMap,
-                                            struct task_struct *task)
-{
-       memMap->userTask = task;
-}
-
-/****************************************************************************/
-/**
-*   Looks at a memory address and determines if we support DMA'ing to/from
-*   that type of memory.
-*
-*   @return boolean -
-*               return value != 0 means dma supported
-*               return value == 0 means dma not supported
-*/
-/****************************************************************************/
-
-int dma_mem_supports_dma(void *addr);
-
-/****************************************************************************/
-/**
-*   Initializes a memory map for use. Since this function acquires a
-*   sempaphore within the memory map, it is VERY important that dma_unmap
-*   be called when you're finished using the map.
-*/
-/****************************************************************************/
-
-int dma_map_start(DMA_MemMap_t *memMap,        /* Stores state information about the map */
-                 enum dma_data_direction dir   /* Direction that the mapping will be going */
-    );
-
-/****************************************************************************/
-/**
-*   Adds a segment of memory to a memory map.
-*
-*   @return 0 on success, error code otherwise.
-*/
-/****************************************************************************/
-
-int dma_map_add_region(DMA_MemMap_t *memMap,   /* Stores state information about the map */
-                      void *mem,       /* Virtual address that we want to get a map of */
-                      size_t numBytes  /* Number of bytes being mapped */
-    );
-
-/****************************************************************************/
-/**
-*   Creates a descriptor ring from a memory mapping.
-*
-*   @return 0 on success, error code otherwise.
-*/
-/****************************************************************************/
-
-int dma_map_create_descriptor_ring(DMA_Device_t dev,   /* DMA device (where the ring is stored) */
-                                  DMA_MemMap_t *memMap,        /* Memory map that will be used */
-                                  dma_addr_t devPhysAddr       /* Physical address of device */
-    );
-
-/****************************************************************************/
-/**
-*   Maps in a memory region such that it can be used for performing a DMA.
-*
-*   @return
-*/
-/****************************************************************************/
-
-int dma_map_mem(DMA_MemMap_t *memMap,  /* Stores state information about the map */
-               void *addr,     /* Virtual address that we want to get a map of */
-               size_t count,   /* Number of bytes being mapped */
-               enum dma_data_direction dir     /* Direction that the mapping will be going */
-    );
-
-/****************************************************************************/
-/**
-*   Maps in a memory region such that it can be used for performing a DMA.
-*
-*   @return
-*/
-/****************************************************************************/
-
-int dma_unmap(DMA_MemMap_t *memMap,    /* Stores state information about the map */
-             int dirtied      /* non-zero if any of the pages were modified */
     );

 /****************************************************************************/
···
        return 0;
 }

-#define DM365_EVM_PHY_ID               "0:01"
+#define DM365_EVM_PHY_ID               "davinci_mdio-0:01"
 /*
  * A MAX-II CPLD is used for various board control functions.
  */
···
        .enabled_uarts = (1 << 0),
 };

-#define DM646X_EVM_PHY_ID              "0:01"
+#define DM646X_EVM_PHY_ID              "davinci_mdio-0:01"
 /*
  * The following EDMA channels/slots are not being used by drivers (for
  * example: Timer, GPIO, UART events etc) on dm646x, hence they are being
···
                ret = sr_late_init(sr_info);
                if (ret) {
                        pr_warning("%s: Error in SR late init\n", __func__);
-                       return ret;
+                       goto err_iounmap;
                }
        }
···
                if ((area->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype))
                        continue;
                if (__phys_to_pfn(area->phys_addr) > pfn ||
-                   __pfn_to_phys(pfn) + offset + size-1 >
-                   area->phys_addr + area->size-1)
+                   __pfn_to_phys(pfn) + size-1 > area->phys_addr + area->size-1)
                        continue;
                /* we can drop the lock here as we know *area is static */
                read_unlock(&vmlist_lock);
···
 #include <linux/cache.h>
 #include <linux/of_platform.h>
 #include <linux/dma-mapping.h>
-#include <linux/cpu.h>
 #include <asm/cacheflush.h>
 #include <asm/entry.h>
 #include <asm/cpuinfo.h>
···
        return 0;
 }
+
 arch_initcall(setup_bus_notifier);
-
-static DEFINE_PER_CPU(struct cpu, cpu_devices);
-
-static int __init topology_init(void)
-{
-       int i, ret;
-
-       for_each_present_cpu(i) {
-               struct cpu *c = &per_cpu(cpu_devices, i);
-
-               ret = register_cpu(c, i);
-               if (ret)
-                       printk(KERN_WARNING "topology_init: register_cpu %d "
-                              "failed (%d)\n", i, ret);
-       }
-
-       return 0;
-}
-subsys_initcall(topology_init);
+1
arch/mips/Kconfig
···
        depends on HW_HAS_PCI
        select PCI_DOMAINS
        select GENERIC_PCI_IOMAP
+       select NO_GENERIC_PCI_IOPORT_MAP
        help
          Find out whether you have a PCI motherboard. PCI is the name of a
          bus system, i.e. the way the CPU talks to the other stuff inside
+2-2
arch/mips/lib/iomap-pci.c
···
 #include <linux/module.h>
 #include <asm/io.h>

-static void __iomem *ioport_map_pci(struct pci_dev *dev,
-                                   unsigned long port, unsigned int nr)
+void __iomem *__pci_ioport_map(struct pci_dev *dev,
+                              unsigned long port, unsigned int nr)
 {
        struct pci_controller *ctrl = dev->bus->sysdata;
        unsigned long base = ctrl->io_map_base;
+1
arch/sh/Kconfig
···
        depends on SYS_SUPPORTS_PCI
        select PCI_DOMAINS
        select GENERIC_PCI_IOMAP
+       select NO_GENERIC_PCI_IOPORT_MAP
        help
          Find out whether you have a PCI motherboard. PCI is the name of a
          bus system, i.e. the way the CPU talks to the other stuff inside
+2-2
arch/sh/drivers/pci/pci.c
···

 #ifndef CONFIG_GENERIC_IOMAP

-static void __iomem *ioport_map_pci(struct pci_dev *dev,
-                                   unsigned long port, unsigned int nr)
+void __iomem *__pci_ioport_map(struct pci_dev *dev,
+                              unsigned long port, unsigned int nr)
 {
        struct pci_channel *chan = dev->sysdata;
···
        if (!stack) {
                if (regs)
                        stack = (unsigned long *)regs->sp;
-               else if (task && task != current)
+               else if (task != current)
                        stack = (unsigned long *)task->thread.sp;
                else
                        stack = &dummy;
···
        unsigned char c;
        u8 *ip;

-       printk(KERN_EMERG "Stack:\n");
+       printk(KERN_DEFAULT "Stack:\n");
        show_stack_log_lvl(NULL, regs, (unsigned long *)sp,
-                          0, KERN_EMERG);
+                          0, KERN_DEFAULT);

-       printk(KERN_EMERG "Code: ");
+       printk(KERN_DEFAULT "Code: ");

        ip = (u8 *)regs->ip - code_prologue;
        if (ip < (u8 *)PAGE_OFFSET || probe_kernel_address(ip, c)) {
+26-10
arch/x86/kernel/reboot.c
···
 enum reboot_type reboot_type = BOOT_ACPI;
 int reboot_force;

+/* This variable is used privately to keep track of whether or not
+ * reboot_type is still set to its default value (i.e., reboot= hasn't
+ * been set on the command line).  This is needed so that we can
+ * suppress DMI scanning for reboot quirks.  Without it, it's
+ * impossible to override a faulty reboot quirk without recompiling.
+ */
+static int reboot_default = 1;
+
 #if defined(CONFIG_X86_32) && defined(CONFIG_SMP)
 static int reboot_cpu = -1;
 #endif
···
 static int __init reboot_setup(char *str)
 {
        for (;;) {
+               /* Having anything passed on the command line via
+                * reboot= will cause us to disable DMI checking
+                * below.
+                */
+               reboot_default = 0;
+
                switch (*str) {
                case 'w':
                        reboot_mode = 0x1234;
···
                        DMI_MATCH(DMI_BOARD_NAME, "P4S800"),
                },
        },
-       {       /* Handle problems with rebooting on VersaLogic Menlow boards */
-               .callback = set_bios_reboot,
-               .ident = "VersaLogic Menlow based board",
-               .matches = {
-                       DMI_MATCH(DMI_BOARD_VENDOR, "VersaLogic Corporation"),
-                       DMI_MATCH(DMI_BOARD_NAME, "VersaLogic Menlow board"),
-               },
-       },
        {       /* Handle reboot issue on Acer Aspire one */
                .callback = set_kbd_reboot,
                .ident = "Acer Aspire One A110",
···

 static int __init reboot_init(void)
 {
-       dmi_check_system(reboot_dmi_table);
+       /* Only do the DMI check if reboot_type hasn't been overridden
+        * on the command line
+        */
+       if (reboot_default) {
+               dmi_check_system(reboot_dmi_table);
+       }
        return 0;
 }
 core_initcall(reboot_init);
···

 static int __init pci_reboot_init(void)
 {
-       dmi_check_system(pci_reboot_dmi_table);
+       /* Only do the DMI check if reboot_type hasn't been overridden
+        * on the command line
+        */
+       if (reboot_default) {
+               dmi_check_system(pci_reboot_dmi_table);
+       }
        return 0;
 }
 core_initcall(pci_reboot_init);
+51
arch/x86/kvm/emulate.c
···
        ss->p = 1;
 }

+static bool em_syscall_is_enabled(struct x86_emulate_ctxt *ctxt)
+{
+       struct x86_emulate_ops *ops = ctxt->ops;
+       u32 eax, ebx, ecx, edx;
+
+       /*
+        * syscall should always be enabled in longmode - so only become
+        * vendor specific (cpuid) if other modes are active...
+        */
+       if (ctxt->mode == X86EMUL_MODE_PROT64)
+               return true;
+
+       eax = 0x00000000;
+       ecx = 0x00000000;
+       if (ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx)) {
+               /*
+                * Intel ("GenuineIntel")
+                * remark: Intel CPUs only support "syscall" in 64bit
+                * longmode. Also an 64bit guest with a
+                * 32bit compat-app running will #UD !! While this
+                * behaviour can be fixed (by emulating) into AMD
+                * response - CPUs of AMD can't behave like Intel.
+                */
+               if (ebx == X86EMUL_CPUID_VENDOR_GenuineIntel_ebx &&
+                   ecx == X86EMUL_CPUID_VENDOR_GenuineIntel_ecx &&
+                   edx == X86EMUL_CPUID_VENDOR_GenuineIntel_edx)
+                       return false;
+
+               /* AMD ("AuthenticAMD") */
+               if (ebx == X86EMUL_CPUID_VENDOR_AuthenticAMD_ebx &&
+                   ecx == X86EMUL_CPUID_VENDOR_AuthenticAMD_ecx &&
+                   edx == X86EMUL_CPUID_VENDOR_AuthenticAMD_edx)
+                       return true;
+
+               /* AMD ("AMDisbetter!") */
+               if (ebx == X86EMUL_CPUID_VENDOR_AMDisbetterI_ebx &&
+                   ecx == X86EMUL_CPUID_VENDOR_AMDisbetterI_ecx &&
+                   edx == X86EMUL_CPUID_VENDOR_AMDisbetterI_edx)
+                       return true;
+       }
+
+       /* default: (not Intel, not AMD), apply Intel's stricter rules... */
+       return false;
+}
+
 static int em_syscall(struct x86_emulate_ctxt *ctxt)
 {
        struct x86_emulate_ops *ops = ctxt->ops;
···
            ctxt->mode == X86EMUL_MODE_VM86)
                return emulate_ud(ctxt);

+       if (!(em_syscall_is_enabled(ctxt)))
+               return emulate_ud(ctxt);
+
        ops->get_msr(ctxt, MSR_EFER, &efer);
        setup_syscalls_segments(ctxt, &cs, &ss);
+
+       if (!(efer & EFER_SCE))
+               return emulate_ud(ctxt);

        ops->get_msr(ctxt, MSR_STAR, &msr_data);
        msr_data >>= 32;
+45
arch/x86/kvm/x86.c
···

 int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 {
+       bool pr = false;
+
        switch (msr) {
        case MSR_EFER:
                return set_efer(vcpu, data);
···
        case MSR_K7_PERFCTR3:
                pr_unimpl(vcpu, "unimplemented perfctr wrmsr: "
                        "0x%x data 0x%llx\n", msr, data);
+               break;
+       case MSR_P6_PERFCTR0:
+       case MSR_P6_PERFCTR1:
+               pr = true;
+       case MSR_P6_EVNTSEL0:
+       case MSR_P6_EVNTSEL1:
+               if (kvm_pmu_msr(vcpu, msr))
+                       return kvm_pmu_set_msr(vcpu, msr, data);
+
+               if (pr || data != 0)
+                       pr_unimpl(vcpu, "disabled perfctr wrmsr: "
+                               "0x%x data 0x%llx\n", msr, data);
                break;
        case MSR_K7_CLK_CTL:
                /*
···
        case MSR_K8_INT_PENDING_MSG:
        case MSR_AMD64_NB_CFG:
        case MSR_FAM10H_MMIO_CONF_BASE:
+               data = 0;
+               break;
+       case MSR_P6_PERFCTR0:
+       case MSR_P6_PERFCTR1:
+       case MSR_P6_EVNTSEL0:
+       case MSR_P6_EVNTSEL1:
+               if (kvm_pmu_msr(vcpu, msr))
+                       return kvm_pmu_get_msr(vcpu, msr, pdata);
                data = 0;
                break;
        case MSR_IA32_UCODE_REV:
···
        return kvm_x86_ops->check_intercept(emul_to_vcpu(ctxt), info, stage);
 }

+static bool emulator_get_cpuid(struct x86_emulate_ctxt *ctxt,
+                              u32 *eax, u32 *ebx, u32 *ecx, u32 *edx)
+{
+       struct kvm_cpuid_entry2 *cpuid = NULL;
+
+       if (eax && ecx)
+               cpuid = kvm_find_cpuid_entry(emul_to_vcpu(ctxt),
+                                            *eax, *ecx);
+
+       if (cpuid) {
+               *eax = cpuid->eax;
+               *ecx = cpuid->ecx;
+               if (ebx)
+                       *ebx = cpuid->ebx;
+               if (edx)
+                       *edx = cpuid->edx;
+               return true;
+       }
+
+       return false;
+}
+
 static struct x86_emulate_ops emulate_ops = {
        .read_std            = kvm_read_guest_virt_system,
        .write_std           = kvm_write_guest_virt_system,
···
        .get_fpu             = emulator_get_fpu,
        .put_fpu             = emulator_put_fpu,
        .intercept           = emulator_intercept,
+       .get_cpuid           = emulator_get_cpuid,
 };

 static void cache_all_regs(struct kvm_vcpu *vcpu)
+2-2
arch/x86/mm/fault.c
···

        stackend = end_of_stack(tsk);
        if (tsk != &init_task && *stackend != STACK_END_MAGIC)
-               printk(KERN_ALERT "Thread overran stack, or stack corrupted\n");
+               printk(KERN_EMERG "Thread overran stack, or stack corrupted\n");

        tsk->thread.cr2 = address;
        tsk->thread.trap_no = 14;
···
        sig = 0;

        /* Executive summary in case the body of the oops scrolled away */
-       printk(KERN_EMERG "CR2: %016lx\n", address);
+       printk(KERN_DEFAULT "CR2: %016lx\n", address);

        oops_end(flags, regs, sig);
 }
-3
arch/xtensa/include/asm/string.h
···
 /* Don't build bcopy at all ...  */
 #define __HAVE_ARCH_BCOPY

-#define __HAVE_ARCH_MEMSCAN
-#define memscan memchr
-
 #endif /* _XTENSA_STRING_H */
-7
drivers/acpi/processor_driver.c
···
        if (pr->flags.need_hotplug_init)
                return 0;

-       /*
-        * Do not start hotplugged CPUs now, but when they
-        * are onlined the first time
-        */
-       if (pr->flags.need_hotplug_init)
-               return 0;
-
        result = acpi_processor_start(pr);
        if (result)
                goto err_remove_sysfs;
···
        if (ret)
                return ret;

+       /* power on internal panel if it's not already.  the init tables of
+        * some vbios default this to off for some reason, causing the
+        * panel to not work after resume
+        */
+       if (nouveau_gpio_func_get(dev, DCB_GPIO_PANEL_POWER) == 0) {
+               nouveau_gpio_func_set(dev, DCB_GPIO_PANEL_POWER, true);
+               msleep(300);
+       }
+
+       /* enable polling for external displays */
        drm_kms_helper_poll_enable(dev);

        /* enable hotplug interrupts */
+1-1
drivers/gpu/drm/nouveau/nouveau_drv.c
···
 int nouveau_ctxfw;
 module_param_named(ctxfw, nouveau_ctxfw, int, 0400);

-MODULE_PARM_DESC(ctxfw, "Santise DCB table according to MXM-SIS\n");
+MODULE_PARM_DESC(mxmdcb, "Santise DCB table according to MXM-SIS\n");
 int nouveau_mxmdcb = 1;
 module_param_named(mxmdcb, nouveau_mxmdcb, int, 0400);
+21-2
drivers/gpu/drm/nouveau/nouveau_gem.c
···
 }

 static int
+validate_sync(struct nouveau_channel *chan, struct nouveau_bo *nvbo)
+{
+       struct nouveau_fence *fence = NULL;
+       int ret = 0;
+
+       spin_lock(&nvbo->bo.bdev->fence_lock);
+       if (nvbo->bo.sync_obj)
+               fence = nouveau_fence_ref(nvbo->bo.sync_obj);
+       spin_unlock(&nvbo->bo.bdev->fence_lock);
+
+       if (fence) {
+               ret = nouveau_fence_sync(fence, chan);
+               nouveau_fence_unref(&fence);
+       }
+
+       return ret;
+}
+
+static int
 validate_list(struct nouveau_channel *chan, struct list_head *list,
              struct drm_nouveau_gem_pushbuf_bo *pbbo, uint64_t user_pbbo_ptr)
 {
···
        list_for_each_entry(nvbo, list, entry) {
                struct drm_nouveau_gem_pushbuf_bo *b = &pbbo[nvbo->pbbo_index];

-               ret = nouveau_fence_sync(nvbo->bo.sync_obj, chan);
+               ret = validate_sync(chan, nvbo);
                if (unlikely(ret)) {
                        NV_ERROR(dev, "fail pre-validate sync\n");
                        return ret;
···
                        return ret;
                }

-               ret = nouveau_fence_sync(nvbo->bo.sync_obj, chan);
+               ret = validate_sync(chan, nvbo);
                if (unlikely(ret)) {
                        NV_ERROR(dev, "fail post-validate sync\n");
                        return ret;
+9
drivers/gpu/drm/nouveau/nouveau_mxm.c
···

        if (mxm_shadow(dev, mxm[0])) {
                MXM_MSG(dev, "failed to locate valid SIS\n");
+#if 0
+               /* we should, perhaps, fall back to some kind of limited
+                * mode here if the x86 vbios hasn't already done the
+                * work for us (so we prevent loading with completely
+                * whacked vbios tables).
+                */
                return -EINVAL;
+#else
+               return 0;
+#endif
        }

        MXM_MSG(dev, "MXMS Version %d.%d\n",
+2-2
drivers/gpu/drm/nouveau/nv50_pm.c
···
        struct drm_nouveau_private *dev_priv = dev->dev_private;
        struct nv50_pm_state *info;
        struct pll_lims pll;
-       int ret = -EINVAL;
+       int clk, ret = -EINVAL;
        int N, M, P1, P2;
-       u32 clk, out;
+       u32 out;

        if (dev_priv->chipset == 0xaa ||
            dev_priv->chipset == 0xac)
+2-2
drivers/gpu/drm/radeon/atombios_crtc.c
···
        WREG32(EVERGREEN_GRPH_ENABLE + radeon_crtc->crtc_offset, 1);

        WREG32(EVERGREEN_DESKTOP_HEIGHT + radeon_crtc->crtc_offset,
-              crtc->mode.vdisplay);
+              target_fb->height);
        x &= ~3;
        y &= ~1;
        WREG32(EVERGREEN_VIEWPORT_START + radeon_crtc->crtc_offset,
···
        WREG32(AVIVO_D1GRPH_ENABLE + radeon_crtc->crtc_offset, 1);

        WREG32(AVIVO_D1MODE_DESKTOP_HEIGHT + radeon_crtc->crtc_offset,
-              crtc->mode.vdisplay);
+              target_fb->height);
        x &= ~3;
        y &= ~1;
        WREG32(AVIVO_D1MODE_VIEWPORT_START + radeon_crtc->crtc_offset,
+15-3
drivers/gpu/drm/radeon/atombios_dp.c
···
                    ENCODER_OBJECT_ID_NUTMEG)
                        panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE;
                else if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) ==
-                        ENCODER_OBJECT_ID_TRAVIS)
-                       panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE;
-               else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
+                        ENCODER_OBJECT_ID_TRAVIS) {
+                       u8 id[6];
+                       int i;
+                       for (i = 0; i < 6; i++)
+                               id[i] = radeon_read_dpcd_reg(radeon_connector, 0x503 + i);
+                       if (id[0] == 0x73 &&
+                           id[1] == 0x69 &&
+                           id[2] == 0x76 &&
+                           id[3] == 0x61 &&
+                           id[4] == 0x72 &&
+                           id[5] == 0x54)
+                               panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE;
+                       else
+                               panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE;
+               } else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
                        u8 tmp = radeon_read_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_CAP);
                        if (tmp & 1)
                                panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE;
+25-10
drivers/gpu/drm/radeon/r600_blit_kms.c
···
        radeon_ring_write(ring, sq_stack_resource_mgmt_2);
 }

+#define I2F_MAX_BITS 15
+#define I2F_MAX_INPUT  ((1 << I2F_MAX_BITS) - 1)
+#define I2F_SHIFT (24 - I2F_MAX_BITS)
+
+/*
+ * Converts unsigned integer into 32-bit IEEE floating point representation.
+ * Conversion is not universal and only works for the range from 0
+ * to 2^I2F_MAX_BITS-1. Currently we only use it with inputs between
+ * 0 and 16384 (inclusive), so I2F_MAX_BITS=15 is enough. If necessary,
+ * I2F_MAX_BITS can be increased, but that will add to the loop iterations
+ * and slow us down. Conversion is done by shifting the input and counting
+ * down until the first 1 reaches bit position 23. The resulting counter
+ * and the shifted input are, respectively, the exponent and the fraction.
+ * The sign is always zero.
+ */
 static uint32_t i2f(uint32_t input)
 {
        u32 result, i, exponent, fraction;

-       if ((input & 0x3fff) == 0)
-               result = 0; /* 0 is a special case */
+       WARN_ON_ONCE(input > I2F_MAX_INPUT);
+
+       if ((input & I2F_MAX_INPUT) == 0)
+               result = 0;
        else {
-               exponent = 140; /* exponent biased by 127; */
-               fraction = (input & 0x3fff) << 10; /* cheat and only
-                                                     handle numbers below 2^^15 */
-               for (i = 0; i < 14; i++) {
+               exponent = 126 + I2F_MAX_BITS;
+               fraction = (input & I2F_MAX_INPUT) << I2F_SHIFT;
+
+               for (i = 0; i < I2F_MAX_BITS; i++) {
                        if (fraction & 0x800000)
                                break;
                        else {
-                               fraction = fraction << 1; /* keep
-                                                            shifting left until top bit = 1 */
+                               fraction = fraction << 1;
                                exponent = exponent - 1;
                        }
                }
-               result = exponent << 23 | (fraction & 0x7fffff); /* mask
-                                                                   off top bit; assumed 1 */
+               result = exponent << 23 | (fraction & 0x7fffff);
        }
        return result;
 }
···
                        return IB_MAD_RESULT_SUCCESS;

                /*
-                * Don't process SMInfo queries or vendor-specific
-                * MADs -- the SMA can't handle them.
+                * Don't process SMInfo queries -- the SMA can't handle them.
                 */
-               if (in_mad->mad_hdr.attr_id == IB_SMP_ATTR_SM_INFO ||
-                   ((in_mad->mad_hdr.attr_id & IB_SMP_ATTR_VENDOR_MASK) ==
-                    IB_SMP_ATTR_VENDOR_MASK))
+               if (in_mad->mad_hdr.attr_id == IB_SMP_ATTR_SM_INFO)
                        return IB_MAD_RESULT_SUCCESS;
        } else if (in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT ||
                   in_mad->mad_hdr.mgmt_class == MLX4_IB_VENDOR_CLASS1 ||
+1-1
drivers/infiniband/hw/nes/nes.c
···
 /*
- * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
+1-1
drivers/infiniband/hw/nes/nes.h
···
 /*
- * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
+7-3
drivers/infiniband/hw/nes/nes_cm.c
···
 /*
- * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
···
        u8 *start_ptr = &start_addr;
        u8 **start_buff = &start_ptr;
        u16 buff_len = 0;
+       struct ietf_mpa_v1 *mpa_frame;

        skb = dev_alloc_skb(MAX_CM_BUFFER);
        if (!skb) {
···

        /* send an MPA reject frame */
        cm_build_mpa_frame(cm_node, start_buff, &buff_len, NULL, MPA_KEY_REPLY);
+       mpa_frame = (struct ietf_mpa_v1 *)*start_buff;
+       mpa_frame->flags |= IETF_MPA_FLAGS_REJECT;
        form_cm_frame(skb, cm_node, NULL, 0, *start_buff, buff_len, SET_ACK | SET_FIN);

        cm_node->state = NES_CM_STATE_FIN_WAIT1;
···
                if (!memcmp(nesadapter->arp_table[arpindex].mac_addr,
                            neigh->ha, ETH_ALEN)) {
                        /* Mac address same as in nes_arp_table */
-                       ip_rt_put(rt);
-                       return rc;
+                       goto out;
                }

                nes_manage_arp_cache(nesvnic->netdev,
···
                        neigh_event_send(neigh, NULL);
                }
        }
+
+out:
        rcu_read_unlock();
        ip_rt_put(rt);
        return rc;
+1-1
drivers/infiniband/hw/nes/nes_cm.h
···
 /*
- * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
+1-1
drivers/infiniband/hw/nes/nes_context.h
···
 /*
- * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
+1-1
drivers/infiniband/hw/nes/nes_hw.c
···
 /*
- * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
+1-1
drivers/infiniband/hw/nes/nes_hw.h
···
 /*
-* Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+* Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
 *
 * This software is available to you under a choice of one of two
 * licenses. You may choose to be licensed under the terms of the GNU
+1-1
drivers/infiniband/hw/nes/nes_mgt.c
···
 /*
- * Copyright (c) 2006 - 2009 Intel-NE, Inc. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel-NE, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
+1-1
drivers/infiniband/hw/nes/nes_mgt.h
···
 /*
-* Copyright (c) 2010 Intel-NE, Inc. All rights reserved.
+* Copyright (c) 2006 - 2011 Intel-NE, Inc. All rights reserved.
 *
 * This software is available to you under a choice of one of two
 * licenses. You may choose to be licensed under the terms of the GNU
+1-1
drivers/infiniband/hw/nes/nes_nic.c
···
 /*
- * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
drivers/infiniband/hw/nes/nes_user.h (+1 -1)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  * Copyright (c) 2005 Topspin Communications. All rights reserved.
  * Copyright (c) 2005 Cisco Systems. All rights reserved.
  * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
drivers/infiniband/hw/nes/nes_utils.c (+1 -1)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
drivers/infiniband/hw/nes/nes_verbs.c (+4 -2)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
@@ -3428,6 +3428,8 @@
             NES_IWARP_SQ_FMR_WQE_LENGTH_LOW_IDX,
             ib_wr->wr.fast_reg.length);
         set_wqe_32bit_value(wqe->wqe_words,
+            NES_IWARP_SQ_FMR_WQE_LENGTH_HIGH_IDX, 0);
+        set_wqe_32bit_value(wqe->wqe_words,
             NES_IWARP_SQ_FMR_WQE_MR_STAG_IDX,
             ib_wr->wr.fast_reg.rkey);
         /* Set page size: */
@@ -3726,7 +3724,7 @@
         entry->opcode = IB_WC_SEND;
         break;
     case NES_IWARP_SQ_OP_LOCINV:
-        entry->opcode = IB_WR_LOCAL_INV;
+        entry->opcode = IB_WC_LOCAL_INV;
        break;
     case NES_IWARP_SQ_OP_FAST_REG:
        entry->opcode = IB_WC_FAST_REG_MR;
drivers/infiniband/hw/nes/nes_verbs.h (+1 -1)
@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2006 - 2009 Intel Corporation. All rights reserved.
+ * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.
  * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
drivers/infiniband/hw/qib/qib_iba6120.c (+1 -1)
@@ -2105,7 +2105,7 @@
     dd->cspec->dummy_hdrq = dma_alloc_coherent(&dd->pcidev->dev,
         dd->rcd[0]->rcvhdrq_size,
         &dd->cspec->dummy_hdrq_phys,
-        GFP_KERNEL | __GFP_COMP);
+        GFP_ATOMIC | __GFP_COMP);
     if (!dd->cspec->dummy_hdrq) {
         qib_devinfo(dd->pcidev, "Couldn't allocate dummy hdrq\n");
         /* fallback to just 0'ing */
drivers/infiniband/hw/qib/qib_pcie.c (+1 -1)
@@ -560,7 +560,7 @@
  * BIOS may not set PCIe bus-utilization parameters for best performance.
  * Check and optionally adjust them to maximize our throughput.
  */
-static int qib_pcie_caps = 0x51;
+static int qib_pcie_caps;
 module_param_named(pcie_caps, qib_pcie_caps, int, S_IRUGO);
 MODULE_PARM_DESC(pcie_caps, "Max PCIe tuning: Payload (0..3), ReadReq (4..7)");
drivers/infiniband/ulp/srpt/ib_srpt.c (+7 -10)
@@ -69,8 +69,8 @@
  */

 static u64 srpt_service_guid;
-static spinlock_t srpt_dev_lock; /* Protects srpt_dev_list. */
-static struct list_head srpt_dev_list; /* List of srpt_device structures. */
+static DEFINE_SPINLOCK(srpt_dev_lock); /* Protects srpt_dev_list. */
+static LIST_HEAD(srpt_dev_list); /* List of srpt_device structures. */

 static unsigned srp_max_req_size = DEFAULT_MAX_REQ_SIZE;
 module_param(srp_max_req_size, int, 0444);
@@ -687,6 +687,7 @@
     while (--i >= 0)
         srpt_free_ioctx(sdev, ring[i], dma_size, dir);
     kfree(ring);
+    ring = NULL;
 out:
     return ring;
 }
@@ -2596,7 +2595,7 @@
     }

     ch->sess = transport_init_session();
-    if (!ch->sess) {
+    if (IS_ERR(ch->sess)) {
         rej->reason = __constant_cpu_to_be32(
                 SRP_LOGIN_REJ_INSUFFICIENT_RESOURCES);
         pr_debug("Failed to create session\n");
@@ -3265,7 +3264,6 @@
     for (i = 0; i < sdev->srq_size; ++i)
         srpt_post_recv(sdev, sdev->ioctx_ring[i]);

-    WARN_ON(sdev->device->phys_port_cnt
-        > sizeof(sdev->port)/sizeof(sdev->port[0]));
+    WARN_ON(sdev->device->phys_port_cnt > ARRAY_SIZE(sdev->port));

     for (i = 1; i <= sdev->device->phys_port_cnt; i++) {
         sport = &sdev->port[i - 1];
@@ -4010,13 +4010,10 @@
         goto out;
     }

-    spin_lock_init(&srpt_dev_lock);
-    INIT_LIST_HEAD(&srpt_dev_list);
-
-    ret = -ENODEV;
     srpt_target = target_fabric_configfs_init(THIS_MODULE, "srpt");
-    if (!srpt_target) {
+    if (IS_ERR(srpt_target)) {
         printk(KERN_ERR "couldn't register\n");
+        ret = PTR_ERR(srpt_target);
         goto out;
     }
···
@@ -386,7 +386,7 @@
     struct evdev_client *client = file->private_data;
     struct evdev *evdev = client->evdev;
     struct input_event event;
-    int retval;
+    int retval = 0;

     if (count < input_event_size())
         return -EINVAL;
drivers/input/keyboard/twl4030_keypad.c (+1 -3)
@@ -34,7 +34,6 @@
 #include <linux/i2c/twl.h>
 #include <linux/slab.h>

-
 /*
  * The TWL4030 family chips include a keypad controller that supports
  * up to an 8x8 switch matrix. The controller can issue system wakeup
@@ -301,7 +302,7 @@
     if (twl4030_kpwrite_u8(kp, i, KEYP_DEB) < 0)
         return -EIO;

-    /* Set timeout period to 100 ms */
+    /* Set timeout period to 200 ms */
     i = KEYP_PERIOD_US(200000, PTV_PRESCALER);
     if (twl4030_kpwrite_u8(kp, (i & 0xFF), KEYP_TIMEOUT_L) < 0)
         return -EIO;
@@ -465,4 +466,3 @@
 MODULE_DESCRIPTION("TWL4030 Keypad Driver");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS("platform:twl4030_keypad");
-
···
@@ -161,6 +161,37 @@
         !!host->board->rdy_pin_active_low;
 }

+/*
+ * Minimal-overhead PIO for data access.
+ */
+static void atmel_read_buf8(struct mtd_info *mtd, u8 *buf, int len)
+{
+    struct nand_chip *nand_chip = mtd->priv;
+
+    __raw_readsb(nand_chip->IO_ADDR_R, buf, len);
+}
+
+static void atmel_read_buf16(struct mtd_info *mtd, u8 *buf, int len)
+{
+    struct nand_chip *nand_chip = mtd->priv;
+
+    __raw_readsw(nand_chip->IO_ADDR_R, buf, len / 2);
+}
+
+static void atmel_write_buf8(struct mtd_info *mtd, const u8 *buf, int len)
+{
+    struct nand_chip *nand_chip = mtd->priv;
+
+    __raw_writesb(nand_chip->IO_ADDR_W, buf, len);
+}
+
+static void atmel_write_buf16(struct mtd_info *mtd, const u8 *buf, int len)
+{
+    struct nand_chip *nand_chip = mtd->priv;
+
+    __raw_writesw(nand_chip->IO_ADDR_W, buf, len / 2);
+}
+
 static void dma_complete_func(void *completion)
 {
     complete(completion);
@@ -266,27 +235,33 @@
 static void atmel_read_buf(struct mtd_info *mtd, u8 *buf, int len)
 {
     struct nand_chip *chip = mtd->priv;
+    struct atmel_nand_host *host = chip->priv;

     if (use_dma && len > mtd->oobsize)
         /* only use DMA for bigger than oob size: better performances */
         if (atmel_nand_dma_op(mtd, buf, len, 1) == 0)
             return;

-    /* if no DMA operation possible, use PIO */
-    memcpy_fromio(buf, chip->IO_ADDR_R, len);
+    if (host->board->bus_width_16)
+        atmel_read_buf16(mtd, buf, len);
+    else
+        atmel_read_buf8(mtd, buf, len);
 }

 static void atmel_write_buf(struct mtd_info *mtd, const u8 *buf, int len)
 {
     struct nand_chip *chip = mtd->priv;
+    struct atmel_nand_host *host = chip->priv;

     if (use_dma && len > mtd->oobsize)
         /* only use DMA for bigger than oob size: better performances */
         if (atmel_nand_dma_op(mtd, (void *)buf, len, 0) == 0)
             return;

-    /* if no DMA operation possible, use PIO */
-    memcpy_toio(chip->IO_ADDR_W, buf, len);
+    if (host->board->bus_width_16)
+        atmel_write_buf16(mtd, buf, len);
+    else
+        atmel_write_buf8(mtd, buf, len);
 }

 /*
drivers/mtd/nand/gpmi-nand/gpmi-lib.c (+14 -4)
@@ -69,17 +69,19 @@
  * [1] enable the module.
  * [2] reset the module.
  *
- * In most of the cases, it's ok. But there is a hardware bug in the BCH block.
+ * In most of the cases, it's ok.
+ * But in MX23, there is a hardware bug in the BCH block (see erratum #2847).
  * If you try to soft reset the BCH block, it becomes unusable until
  * the next hard reset. This case occurs in the NAND boot mode. When the board
  * boots by NAND, the ROM of the chip will initialize the BCH blocks itself.
  * So If the driver tries to reset the BCH again, the BCH will not work anymore.
- * You will see a DMA timeout in this case.
+ * You will see a DMA timeout in this case. The bug has been fixed
+ * in the following chips, such as MX28.
  *
  * To avoid this bug, just add a new parameter `just_enable` for
  * the mxs_reset_block(), and rewrite it here.
  */
-int gpmi_reset_block(void __iomem *reset_addr, bool just_enable)
+static int gpmi_reset_block(void __iomem *reset_addr, bool just_enable)
 {
     int ret;
     int timeout = 0x400;
@@ -208,7 +206,15 @@
     if (ret)
         goto err_out;

-    ret = gpmi_reset_block(r->bch_regs, true);
+    /*
+     * Due to erratum #2847 of the MX23, the BCH cannot be soft reset on this
+     * chip, otherwise it will lock up. So we skip resetting BCH on the MX23.
+     * On the other hand, the MX28 needs the reset, because one case has been
+     * seen where the BCH produced ECC errors constantly after 10000
+     * consecutive reboots. The latter case has not been seen on the MX23 yet,
+     * still we don't know if it could happen there as well.
+     */
+    ret = gpmi_reset_block(r->bch_regs, GPMI_IS_MX23(this));
     if (ret)
         goto err_out;
drivers/mtd/nand/nand_base.c (+1 -1)
@@ -2588,7 +2588,7 @@
     instr->state = MTD_ERASING;

     while (len) {
-        /* Heck if we have a bad block, we do not erase bad blocks! */
+        /* Check if we have a bad block, we do not erase bad blocks! */
         if (nand_block_checkbad(mtd, ((loff_t) page) <<
                     chip->page_shift, 0, allowbbt)) {
             pr_warn("%s: attempt to erase a bad block at page 0x%08x\n",
drivers/pcmcia/ds.c (+1 -3)
@@ -1269,10 +1269,8 @@

 static int pcmcia_bus_early_resume(struct pcmcia_socket *skt)
 {
-    if (!verify_cis_cache(skt)) {
-        pcmcia_put_socket(skt);
+    if (!verify_cis_cache(skt))
         return 0;
-    }

     dev_dbg(&skt->dev, "cis mismatch - different card\n");
drivers/spi/Kconfig (+1 -1)
@@ -299,7 +299,7 @@

 config SPI_S3C64XX
     tristate "Samsung S3C64XX series type SPI"
-    depends on (ARCH_S3C64XX || ARCH_S5P64X0)
+    depends on (ARCH_S3C64XX || ARCH_S5P64X0 || ARCH_EXYNOS)
     select S3C64XX_DMA if ARCH_S3C64XX
     help
       SPI driver for Samsung S3C64XX and newer SoCs.
drivers/spi/spi-topcliff-pch.c (+3 -3)
@@ -1720,7 +1720,7 @@

 #endif

-static struct pci_driver pch_spi_pcidev = {
+static struct pci_driver pch_spi_pcidev_driver = {
     .name = "pch_spi",
     .id_table = pch_spi_pcidev_id,
     .probe = pch_spi_probe,
@@ -1736,7 +1736,7 @@
     if (ret)
         return ret;

-    ret = pci_register_driver(&pch_spi_pcidev);
+    ret = pci_register_driver(&pch_spi_pcidev_driver);
     if (ret)
         return ret;
@@ -1746,7 +1746,7 @@

 static void __exit pch_spi_exit(void)
 {
-    pci_unregister_driver(&pch_spi_pcidev);
+    pci_unregister_driver(&pch_spi_pcidev_driver);
     platform_driver_unregister(&pch_spi_pd_driver);
 }
 module_exit(pch_spi_exit);
···
@@ -849,6 +849,17 @@
     case ISCSI_OP_SCSI_TMFUNC:
         transport_generic_free_cmd(&cmd->se_cmd, 1);
         break;
+    case ISCSI_OP_REJECT:
+        /*
+         * Handle special case for REJECT when iscsi_add_reject*() has
+         * overwritten the original iscsi_opcode assignment, and the
+         * associated cmd->se_cmd needs to be released.
+         */
+        if (cmd->se_cmd.se_tfo != NULL) {
+            transport_generic_free_cmd(&cmd->se_cmd, 1);
+            break;
+        }
+        /* Fall-through */
     default:
         iscsit_release_cmd(cmd);
         break;
···
@@ -320,11 +320,12 @@
 void core_dec_lacl_count(struct se_node_acl *se_nacl, struct se_cmd *se_cmd)
 {
     struct se_dev_entry *deve;
+    unsigned long flags;

-    spin_lock_irq(&se_nacl->device_list_lock);
+    spin_lock_irqsave(&se_nacl->device_list_lock, flags);
     deve = &se_nacl->device_list[se_cmd->orig_fe_lun];
     deve->deve_cmds--;
-    spin_unlock_irq(&se_nacl->device_list_lock);
+    spin_unlock_irqrestore(&se_nacl->device_list_lock, flags);
 }

 void core_update_device_list_access(
@@ -657,7 +656,7 @@
     unsigned char *buf;
     u32 cdb_offset = 0, lun_count = 0, offset = 8, i;

-    buf = transport_kmap_first_data_page(se_cmd);
+    buf = (unsigned char *) transport_kmap_data_sg(se_cmd);

     /*
      * If no struct se_session pointer is present, this struct se_cmd is
@@ -695,7 +694,7 @@
      * See SPC3 r07, page 159.
      */
 done:
-    transport_kunmap_first_data_page(se_cmd);
+    transport_kunmap_data_sg(se_cmd);
     lun_count *= 8;
     buf[0] = ((lun_count >> 24) & 0xff);
     buf[1] = ((lun_count >> 16) & 0xff);
@@ -1295,24 +1294,26 @@
 {
     struct se_lun *lun_p;
     u32 lun_access = 0;
+    int rc;

     if (atomic_read(&dev->dev_access_obj.obj_access_count) != 0) {
         pr_err("Unable to export struct se_device while dev_access_obj: %d\n",
             atomic_read(&dev->dev_access_obj.obj_access_count));
-        return NULL;
+        return ERR_PTR(-EACCES);
     }

     lun_p = core_tpg_pre_addlun(tpg, lun);
-    if ((IS_ERR(lun_p)) || !lun_p)
-        return NULL;
+    if (IS_ERR(lun_p))
+        return lun_p;

     if (dev->dev_flags & DF_READ_ONLY)
         lun_access = TRANSPORT_LUNFLAGS_READ_ONLY;
     else
         lun_access = TRANSPORT_LUNFLAGS_READ_WRITE;

-    if (core_tpg_post_addlun(tpg, lun_p, lun_access, dev) < 0)
-        return NULL;
+    rc = core_tpg_post_addlun(tpg, lun_p, lun_access, dev);
+    if (rc < 0)
+        return ERR_PTR(rc);

     pr_debug("%s_TPG[%u]_LUN[%u] - Activated %s Logical Unit from"
         " CORE HBA: %u\n", tpg->se_tpg_tfo->get_fabric_name(),
@@ -1351,11 +1348,10 @@
     u32 unpacked_lun)
 {
     struct se_lun *lun;
-    int ret = 0;

-    lun = core_tpg_pre_dellun(tpg, unpacked_lun, &ret);
-    if (!lun)
-        return ret;
+    lun = core_tpg_pre_dellun(tpg, unpacked_lun);
+    if (IS_ERR(lun))
+        return PTR_ERR(lun);

     core_tpg_post_dellun(tpg, lun);
drivers/target/target_core_fabric_configfs.c (+2 -2)
@@ -766,9 +766,9 @@

     lun_p = core_dev_add_lun(se_tpg, dev->se_hba, dev,
                 lun->unpacked_lun);
-    if (IS_ERR(lun_p) || !lun_p) {
+    if (IS_ERR(lun_p)) {
         pr_err("core_dev_add_lun() failed\n");
-        ret = -EINVAL;
+        ret = PTR_ERR(lun_p);
         goto out;
     }
drivers/target/target_core_iblock.c (+9 -2)
@@ -129,7 +129,7 @@
     /*
      * These settings need to be made tunable..
      */
-    ib_dev->ibd_bio_set = bioset_create(32, 64);
+    ib_dev->ibd_bio_set = bioset_create(32, 0);
     if (!ib_dev->ibd_bio_set) {
         pr_err("IBLOCK: Unable to create bioset()\n");
         return ERR_PTR(-ENOMEM);
@@ -181,7 +181,7 @@
      */
     dev->se_sub_dev->se_dev_attrib.max_unmap_block_desc_count = 1;
     dev->se_sub_dev->se_dev_attrib.unmap_granularity =
-        q->limits.discard_granularity;
+        q->limits.discard_granularity >> 9;
     dev->se_sub_dev->se_dev_attrib.unmap_granularity_alignment =
         q->limits.discard_alignment;
@@ -487,6 +487,13 @@
     struct iblock_dev *ib_dev = task->task_se_cmd->se_dev->dev_ptr;
     struct iblock_req *ib_req = IBLOCK_REQ(task);
     struct bio *bio;
+
+    /*
+     * Only allocate as many vector entries as the bio code allows us to,
+     * we'll loop later on until we have handled the whole request.
+     */
+    if (sg_num > BIO_MAX_PAGES)
+        sg_num = BIO_MAX_PAGES;

     bio = bio_alloc_bioset(GFP_NOIO, sg_num, ib_dev->ibd_bio_set);
     if (!bio) {
···
@@ -720,7 +720,7 @@

     DSSDBG("dss_runtime_put\n");

-    r = pm_runtime_put(&dss.pdev->dev);
+    r = pm_runtime_put_sync(&dss.pdev->dev);
     WARN_ON(r < 0);
 }
drivers/video/omap2/dss/hdmi.c (+4 -1)
@@ -176,7 +176,7 @@

     DSSDBG("hdmi_runtime_put\n");

-    r = pm_runtime_put(&hdmi.pdev->dev);
+    r = pm_runtime_put_sync(&hdmi.pdev->dev);
     WARN_ON(r < 0);
 }
@@ -497,6 +497,7 @@

 int omapdss_hdmi_display_enable(struct omap_dss_device *dssdev)
 {
+    struct omap_dss_hdmi_data *priv = dssdev->data;
     int r = 0;

     DSSDBG("ENTER hdmi_display_enable\n");
@@ -509,6 +508,8 @@
         r = -ENODEV;
         goto err0;
     }
+
+    hdmi.ip_data.hpd_gpio = priv->hpd_gpio;

     r = omap_dss_start_device(dssdev);
     if (r) {
drivers/video/omap2/dss/rfbi.c (+1 -1)
@@ -140,7 +140,7 @@

     DSSDBG("rfbi_runtime_put\n");

-    r = pm_runtime_put(&rfbi.pdev->dev);
+    r = pm_runtime_put_sync(&rfbi.pdev->dev);
     WARN_ON(r < 0);
 }
drivers/video/omap2/dss/ti_hdmi.h (+4)
@@ -126,6 +126,10 @@
     const struct ti_hdmi_ip_ops *ops;
     struct hdmi_config cfg;
     struct hdmi_pll_info pll_data;
+
+    /* ti_hdmi_4xxx_ip private data. These should be in a separate struct */
+    int hpd_gpio;
+    bool phy_tx_enabled;
 };
 int ti_hdmi_4xxx_phy_enable(struct hdmi_ip_data *ip_data);
 void ti_hdmi_4xxx_phy_disable(struct hdmi_ip_data *ip_data);
drivers/video/omap2/dss/ti_hdmi_4xxx_ip.c (+64 -4)
@@ -28,6 +28,7 @@
 #include <linux/delay.h>
 #include <linux/string.h>
 #include <linux/seq_file.h>
+#include <linux/gpio.h>

 #include "ti_hdmi_4xxx_ip.h"
 #include "dss.h"
@@ -224,16 +223,55 @@
     hdmi_set_pll_pwr(ip_data, HDMI_PLLPWRCMD_ALLOFF);
 }

+static int hdmi_check_hpd_state(struct hdmi_ip_data *ip_data)
+{
+    unsigned long flags;
+    bool hpd;
+    int r;
+    /* this should be in ti_hdmi_4xxx_ip private data */
+    static DEFINE_SPINLOCK(phy_tx_lock);
+
+    spin_lock_irqsave(&phy_tx_lock, flags);
+
+    hpd = gpio_get_value(ip_data->hpd_gpio);
+
+    if (hpd == ip_data->phy_tx_enabled) {
+        spin_unlock_irqrestore(&phy_tx_lock, flags);
+        return 0;
+    }
+
+    if (hpd)
+        r = hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_TXON);
+    else
+        r = hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_LDOON);
+
+    if (r) {
+        DSSERR("Failed to %s PHY TX power\n",
+                hpd ? "enable" : "disable");
+        goto err;
+    }
+
+    ip_data->phy_tx_enabled = hpd;
+err:
+    spin_unlock_irqrestore(&phy_tx_lock, flags);
+    return r;
+}
+
+static irqreturn_t hpd_irq_handler(int irq, void *data)
+{
+    struct hdmi_ip_data *ip_data = data;
+
+    hdmi_check_hpd_state(ip_data);
+
+    return IRQ_HANDLED;
+}
+
 int ti_hdmi_4xxx_phy_enable(struct hdmi_ip_data *ip_data)
 {
     u16 r = 0;
     void __iomem *phy_base = hdmi_phy_base(ip_data);

     r = hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_LDOON);
-    if (r)
-        return r;
-
-    r = hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_TXON);
     if (r)
         return r;
@@ -297,12 +257,32 @@
     /* Write to phy address 3 to change the polarity control */
     REG_FLD_MOD(phy_base, HDMI_TXPHY_PAD_CFG_CTRL, 0x1, 27, 27);

+    r = request_threaded_irq(gpio_to_irq(ip_data->hpd_gpio),
+            NULL, hpd_irq_handler,
+            IRQF_DISABLED | IRQF_TRIGGER_RISING |
+            IRQF_TRIGGER_FALLING, "hpd", ip_data);
+    if (r) {
+        DSSERR("HPD IRQ request failed\n");
+        hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_OFF);
+        return r;
+    }
+
+    r = hdmi_check_hpd_state(ip_data);
+    if (r) {
+        free_irq(gpio_to_irq(ip_data->hpd_gpio), ip_data);
+        hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_OFF);
+        return r;
+    }
+
     return 0;
 }

 void ti_hdmi_4xxx_phy_disable(struct hdmi_ip_data *ip_data)
 {
+    free_irq(gpio_to_irq(ip_data->hpd_gpio), ip_data);
+
     hdmi_set_phy_pwr(ip_data, HDMI_PHYPWRCMD_OFF);
+    ip_data->phy_tx_enabled = false;
 }

 static int hdmi_core_ddc_init(struct hdmi_ip_data *ip_data)
drivers/video/omap2/dss/venc.c (+1 -1)
@@ -401,7 +401,7 @@

     DSSDBG("venc_runtime_put\n");

-    r = pm_runtime_put(&venc.pdev->dev);
+    r = pm_runtime_put_sync(&venc.pdev->dev);
     WARN_ON(r < 0);
 }
fs/ceph/caps.c (+2 -2)
@@ -641,10 +641,10 @@
     unsigned long ttl;
     u32 gen;

-    spin_lock(&cap->session->s_cap_lock);
+    spin_lock(&cap->session->s_gen_ttl_lock);
     gen = cap->session->s_cap_gen;
     ttl = cap->session->s_cap_ttl;
-    spin_unlock(&cap->session->s_cap_lock);
+    spin_unlock(&cap->session->s_gen_ttl_lock);

     if (cap->cap_gen < gen || time_after_eq(jiffies, ttl)) {
         dout("__cap_is_valid %p cap %p issued %s "
fs/ceph/dir.c (+2 -2)
@@ -975,10 +975,10 @@
     di = ceph_dentry(dentry);
     if (di->lease_session) {
         s = di->lease_session;
-        spin_lock(&s->s_cap_lock);
+        spin_lock(&s->s_gen_ttl_lock);
         gen = s->s_cap_gen;
         ttl = s->s_cap_ttl;
-        spin_unlock(&s->s_cap_lock);
+        spin_unlock(&s->s_gen_ttl_lock);

         if (di->lease_gen == gen &&
             time_before(jiffies, dentry->d_time) &&
···
@@ -117,10 +117,13 @@
     void *s_authorizer_buf, *s_authorizer_reply_buf;
     size_t s_authorizer_buf_len, s_authorizer_reply_buf_len;

-    /* protected by s_cap_lock */
-    spinlock_t s_cap_lock;
+    /* protected by s_gen_ttl_lock */
+    spinlock_t s_gen_ttl_lock;
     u32 s_cap_gen;  /* inc each time we get mds stale msg */
     unsigned long s_cap_ttl;  /* when session caps expire */
+
+    /* protected by s_cap_lock */
+    spinlock_t s_cap_lock;
     struct list_head s_caps;  /* all caps issued by this session */
     int s_nr_caps, s_trim_caps;
     int s_num_cap_releases;
fs/ceph/xattr.c (+3 -1)
@@ -111,8 +111,10 @@
 }

 static struct ceph_vxattr_cb ceph_file_vxattrs[] = {
+    { true, "ceph.file.layout", ceph_vxattrcb_layout},
+    /* The following extended attribute name is deprecated */
     { true, "ceph.layout", ceph_vxattrcb_layout},
-    { NULL, NULL }
+    { true, NULL, NULL }
 };

 static struct ceph_vxattr_cb *ceph_inode_vxattrs(struct inode *inode)
fs/cifs/Kconfig (+2 -2)
@@ -139,7 +139,7 @@
       points. If unsure, say N.

 config CIFS_FSCACHE
-    bool "Provide CIFS client caching support (EXPERIMENTAL)"
+    bool "Provide CIFS client caching support"
     depends on CIFS=m && FSCACHE || CIFS=y && FSCACHE=y
     help
       Makes CIFS FS-Cache capable. Say Y here if you want your CIFS data
@@ -147,7 +147,7 @@
       manager. If unsure, say N.

 config CIFS_ACL
-    bool "Provide CIFS ACL support (EXPERIMENTAL)"
+    bool "Provide CIFS ACL support"
     depends on CIFS_XATTR && KEYS
     help
       Allows to fetch CIFS/NTFS ACL from the server. The DACL blob
fs/cifs/connect.c (+6 -8)
@@ -2142,14 +2142,14 @@

     len = delim - payload;
     if (len > MAX_USERNAME_SIZE || len <= 0) {
-        cFYI(1, "Bad value from username search (len=%ld)", len);
+        cFYI(1, "Bad value from username search (len=%zd)", len);
         rc = -EINVAL;
         goto out_key_put;
     }

     vol->username = kstrndup(payload, len, GFP_KERNEL);
     if (!vol->username) {
-        cFYI(1, "Unable to allocate %ld bytes for username", len);
+        cFYI(1, "Unable to allocate %zd bytes for username", len);
         rc = -ENOMEM;
         goto out_key_put;
     }
@@ -2157,7 +2157,7 @@

     len = key->datalen - (len + 1);
     if (len > MAX_PASSWORD_SIZE || len <= 0) {
-        cFYI(1, "Bad len for password search (len=%ld)", len);
+        cFYI(1, "Bad len for password search (len=%zd)", len);
         rc = -EINVAL;
         kfree(vol->username);
         vol->username = NULL;
@@ -2167,7 +2167,7 @@
     ++delim;
     vol->password = kstrndup(delim, len, GFP_KERNEL);
     if (!vol->password) {
-        cFYI(1, "Unable to allocate %ld bytes for password", len);
+        cFYI(1, "Unable to allocate %zd bytes for password", len);
         rc = -ENOMEM;
         kfree(vol->username);
         vol->username = NULL;
@@ -3857,10 +3857,8 @@
     struct smb_vol *vol_info;

     vol_info = kzalloc(sizeof(*vol_info), GFP_KERNEL);
-    if (vol_info == NULL) {
-        tcon = ERR_PTR(-ENOMEM);
-        goto out;
-    }
+    if (vol_info == NULL)
+        return ERR_PTR(-ENOMEM);

     vol_info->local_nls = cifs_sb->local_nls;
     vol_info->linux_uid = fsuid;
fs/cifs/sess.c (+7 -4)
@@ -246,16 +246,15 @@
     /* copy user */
     /* BB what about null user mounts - check that we do this BB */
     /* copy user */
-    if (ses->user_name != NULL)
+    if (ses->user_name != NULL) {
         strncpy(bcc_ptr, ses->user_name, MAX_USERNAME_SIZE);
+        bcc_ptr += strnlen(ses->user_name, MAX_USERNAME_SIZE);
+    }
     /* else null user mount */
-
-    bcc_ptr += strnlen(ses->user_name, MAX_USERNAME_SIZE);
     *bcc_ptr = 0;
     bcc_ptr++; /* account for null termination */

     /* copy domain */
-
     if (ses->domainName != NULL) {
         strncpy(bcc_ptr, ses->domainName, 256);
         bcc_ptr += strnlen(ses->domainName, 256);
@@ -394,6 +395,10 @@
     ses->ntlmssp->server_flags = le32_to_cpu(pblob->NegotiateFlags);
     tioffset = le32_to_cpu(pblob->TargetInfoArray.BufferOffset);
     tilen = le16_to_cpu(pblob->TargetInfoArray.Length);
+    if (tioffset > blob_len || tioffset + tilen > blob_len) {
+        cERROR(1, "tioffset + tilen too high %u + %u", tioffset, tilen);
+        return -EINVAL;
+    }
     if (tilen) {
         ses->auth_key.response = kmalloc(tilen, GFP_KERNEL);
         if (!ses->auth_key.response) {
fs/exec.c (+17 -16)
@@ -1071,6 +1071,21 @@
     perf_event_comm(tsk);
 }

+static void filename_to_taskname(char *tcomm, const char *fn, unsigned int len)
+{
+    int i, ch;
+
+    /* Copies the binary name from after last slash */
+    for (i = 0; (ch = *(fn++)) != '\0';) {
+        if (ch == '/')
+            i = 0; /* overwrite what we wrote */
+        else
+            if (i < len - 1)
+                tcomm[i++] = ch;
+    }
+    tcomm[i] = '\0';
+}
+
 int flush_old_exec(struct linux_binprm * bprm)
 {
     int retval;
@@ -1100,6 +1085,7 @@

     set_mm_exe_file(bprm->mm, bprm->file);

+    filename_to_taskname(bprm->tcomm, bprm->filename, sizeof(bprm->tcomm));
     /*
      * Release all of the old mmap stuff
      */
@@ -1132,10 +1116,6 @@

 void setup_new_exec(struct linux_binprm * bprm)
 {
-    int i, ch;
-    const char *name;
-    char tcomm[sizeof(current->comm)];
-
     arch_pick_mmap_layout(current->mm);

     /* This is the point of no return */
@@ -1142,18 +1130,7 @@
     else
         set_dumpable(current->mm, suid_dumpable);

-    name = bprm->filename;
-
-    /* Copies the binary name from after last slash */
-    for (i=0; (ch = *(name++)) != '\0';) {
-        if (ch == '/')
-            i = 0; /* overwrite what we wrote */
-        else
-            if (i < (sizeof(tcomm) - 1))
-                tcomm[i++] = ch;
-    }
-    tcomm[i] = '\0';
-    set_task_comm(current, tcomm);
+    set_task_comm(current, bprm->tcomm);

     /* Set the new mm task size. We have to do that late because it may
      * depend on TIF_32BIT which is only updated in flush_thread() on
fs/jffs2/erase.c (+1 -1)
@@ -335,7 +335,7 @@
     void *ebuf;
     uint32_t ofs;
     size_t retlen;
-    int ret = -EIO;
+    int ret;
     unsigned long *wordebuf;

     ret = mtd_point(c->mtd, jeb->offset, c->sector_size, &retlen,
···
@@ -603,6 +603,8 @@
     nsegs = argv[4].v_nmembs;
     if (argv[4].v_size != argsz[4])
         goto out;
+    if (nsegs > UINT_MAX / sizeof(__u64))
+        goto out;

     /*
      * argv[4] points to segment numbers this ioctl cleans. We
fs/proc/base.c (+48 -82)
@@ -198,26 +198,6 @@
     return result;
 }

-static struct mm_struct *mm_access(struct task_struct *task, unsigned int mode)
-{
-    struct mm_struct *mm;
-    int err;
-
-    err = mutex_lock_killable(&task->signal->cred_guard_mutex);
-    if (err)
-        return ERR_PTR(err);
-
-    mm = get_task_mm(task);
-    if (mm && mm != current->mm &&
-            !ptrace_may_access(task, mode)) {
-        mmput(mm);
-        mm = ERR_PTR(-EACCES);
-    }
-    mutex_unlock(&task->signal->cred_guard_mutex);
-
-    return mm;
-}
-
 struct mm_struct *mm_for_maps(struct task_struct *task)
 {
     return mm_access(task, PTRACE_MODE_READ);
@@ -691,6 +711,13 @@
     if (IS_ERR(mm))
         return PTR_ERR(mm);

+    if (mm) {
+        /* ensure this mm_struct can't be freed */
+        atomic_inc(&mm->mm_count);
+        /* but do not pin its memory */
+        mmput(mm);
+    }
+
     /* OK to pass negative loff_t, we can catch out-of-range */
     file->f_mode |= FMODE_UNSIGNED_OFFSET;
     file->private_data = mm;
@@ -705,57 +718,13 @@
     return 0;
 }

-static ssize_t mem_read(struct file * file, char __user * buf,
-            size_t count, loff_t *ppos)
+static ssize_t mem_rw(struct file *file, char __user *buf,
+            size_t count, loff_t *ppos, int write)
 {
-    int ret;
-    char *page;
-    unsigned long src = *ppos;
     struct mm_struct *mm = file->private_data;
-
-    if (!mm)
-        return 0;
-
-    page = (char *)__get_free_page(GFP_TEMPORARY);
-    if (!page)
-        return -ENOMEM;
-
-    ret = 0;
-
-    while (count > 0) {
-        int this_len, retval;
-
-        this_len = (count > PAGE_SIZE) ? PAGE_SIZE : count;
-        retval = access_remote_vm(mm, src, page, this_len, 0);
-        if (!retval) {
-            if (!ret)
-                ret = -EIO;
-            break;
-        }
-
-        if (copy_to_user(buf, page, retval)) {
-            ret = -EFAULT;
-            break;
-        }
-
-        ret += retval;
-        src += retval;
-        buf += retval;
-        count -= retval;
-    }
-    *ppos = src;
-
-    free_page((unsigned long) page);
-    return ret;
-}
-
-static ssize_t mem_write(struct file * file, const char __user *buf,
-             size_t count, loff_t *ppos)
-{
-    int copied;
+    unsigned long addr = *ppos;
+    ssize_t copied;
     char *page;
-    unsigned long dst = *ppos;
-    struct mm_struct *mm = file->private_data;

     if (!mm)
         return 0;
@@ -721,29 +778,52 @@
         return -ENOMEM;

     copied = 0;
-    while (count > 0) {
-        int this_len, retval;
+    if (!atomic_inc_not_zero(&mm->mm_users))
+        goto free;

-        this_len = (count > PAGE_SIZE) ? PAGE_SIZE : count;
-        if (copy_from_user(page, buf, this_len)) {
+    while (count > 0) {
+        int this_len = min_t(int, count, PAGE_SIZE);
+
+        if (write && copy_from_user(page, buf, this_len)) {
             copied = -EFAULT;
             break;
         }
-        retval = access_remote_vm(mm, dst, page, this_len, 1);
-        if (!retval) {
+
+        this_len = access_remote_vm(mm, addr, page, this_len, write);
+        if (!this_len) {
             if (!copied)
                 copied = -EIO;
             break;
         }
-        copied += retval;
-        buf += retval;
-        dst += retval;
-        count -= retval;
-    }
-    *ppos = dst;

+        if (!write && copy_to_user(buf, page, this_len)) {
+            copied = -EFAULT;
+            break;
+        }
+
+        buf += this_len;
+        addr += this_len;
+        copied += this_len;
+        count -= this_len;
+    }
+    *ppos = addr;
+
+    mmput(mm);
+free:
     free_page((unsigned long) page);
     return copied;
+}
+
+static ssize_t mem_read(struct file *file, char __user *buf,
+            size_t count, loff_t *ppos)
+{
+    return mem_rw(file, buf, count, ppos, 0);
+}
+
+static ssize_t mem_write(struct file *file, const char __user *buf,
+             size_t count, loff_t *ppos)
+{
+    return mem_rw(file, (char __user*)buf, count, ppos, 1);
 }

 loff_t mem_lseek(struct file *file, loff_t offset, int orig)
@@ -788,8 +822,8 @@
 static int mem_release(struct inode *inode, struct file *file)
 {
     struct mm_struct *mm = file->private_data;
-
-    mmput(mm);
+    if (mm)
+        mmdrop(mm);
     return 0;
 }
include/asm-generic/pci_iomap.h (+10)
@@
 #ifdef CONFIG_PCI
 /* Create a virtual mapping cookie for a PCI BAR (memory or IO) */
 extern void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max);
+/* Create a virtual mapping cookie for a port on a given PCI device.
+ * Do not call this directly, it exists to make it easier for architectures
+ * to override */
+#ifdef CONFIG_NO_GENERIC_PCI_IOPORT_MAP
+extern void __iomem *__pci_ioport_map(struct pci_dev *dev, unsigned long port,
+                      unsigned int nr);
+#else
+#define __pci_ioport_map(dev, port, nr) ioport_map((port), (nr))
+#endif
+
 #else
 static inline void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max)
 {
@@
     { return; }
 
 static inline int pm_qos_request(int pm_qos_class)
-    { return 0; }
+{
+    switch (pm_qos_class) {
+    case PM_QOS_CPU_DMA_LATENCY:
+        return PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+    case PM_QOS_NETWORK_LATENCY:
+        return PM_QOS_NETWORK_LAT_DEFAULT_VALUE;
+    case PM_QOS_NETWORK_THROUGHPUT:
+        return PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE;
+    default:
+        return PM_QOS_DEFAULT_VALUE;
+    }
+}
+
 static inline int pm_qos_add_notifier(int pm_qos_class,
                       struct notifier_block *notifier)
     { return 0; }
include/linux/sched.h (+6)
@@
 extern void mmput(struct mm_struct *);
 /* Grab a reference to a task's mm, if it is not already going away */
 extern struct mm_struct *get_task_mm(struct task_struct *task);
+/*
+ * Grab a reference to a task's mm, if it is not already going away
+ * and ptrace_may_access with the mode parameter passed to it
+ * succeeds.
+ */
+extern struct mm_struct *mm_access(struct task_struct *task, unsigned int mode);
 /* Remove the current tasks stale references to the old mm_struct */
 extern void mm_release(struct task_struct *, struct mm_struct *);
 /* Allocate a new mm structure and copy contents from tsk->mm */
include/linux/sh_dma.h (+1)
@@
     unsigned int needs_tend_set:1;
     unsigned int no_dmars:1;
     unsigned int chclr_present:1;
+    unsigned int slave_only:1;
 };
 
 /* DMA register */
@@
     int (*get_backlight)(struct omap_dss_device *dssdev);
 };
 
+struct omap_dss_hdmi_data
+{
+    int hpd_gpio;
+};
+
 struct omap_dss_driver {
     struct device_driver driver;
 
kernel/events/core.c (+66-38)
@@
     return div64_u64(dividend, divisor);
 }
 
+static DEFINE_PER_CPU(int, perf_throttled_count);
+static DEFINE_PER_CPU(u64, perf_throttled_seq);
+
 static void perf_adjust_period(struct perf_event *event, u64 nsec, u64 count)
 {
     struct hw_perf_event *hwc = &event->hw;
@@
     }
 }
 
-static void perf_ctx_adjust_freq(struct perf_event_context *ctx, u64 period)
+/*
+ * combine freq adjustment with unthrottling to avoid two passes over the
+ * events. At the same time, make sure, having freq events does not change
+ * the rate of unthrottling as that would introduce bias.
+ */
+static void perf_adjust_freq_unthr_context(struct perf_event_context *ctx,
+                       int needs_unthr)
 {
     struct perf_event *event;
     struct hw_perf_event *hwc;
-    u64 interrupts, now;
+    u64 now, period = TICK_NSEC;
     s64 delta;
 
-    if (!ctx->nr_freq)
+    /*
+     * only need to iterate over all events iff:
+     * - context have events in frequency mode (needs freq adjust)
+     * - there are events to unthrottle on this cpu
+     */
+    if (!(ctx->nr_freq || needs_unthr))
         return;
+
+    raw_spin_lock(&ctx->lock);
 
     list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
         if (event->state != PERF_EVENT_STATE_ACTIVE)
@@
         hwc = &event->hw;
 
-        interrupts = hwc->interrupts;
-        hwc->interrupts = 0;
-
-        /*
-         * unthrottle events on the tick
-         */
-        if (interrupts == MAX_INTERRUPTS) {
+        if (needs_unthr && hwc->interrupts == MAX_INTERRUPTS) {
+            hwc->interrupts = 0;
             perf_log_throttle(event, 1);
             event->pmu->start(event, 0);
         }
@@
         if (!event->attr.freq || !event->attr.sample_freq)
             continue;
 
-        event->pmu->read(event);
+        /*
+         * stop the event and update event->count
+         */
+        event->pmu->stop(event, PERF_EF_UPDATE);
+
         now = local64_read(&event->count);
         delta = now - hwc->freq_count_stamp;
         hwc->freq_count_stamp = now;
 
+        /*
+         * restart the event
+         * reload only if value has changed
+         */
         if (delta > 0)
             perf_adjust_period(event, period, delta);
+
+        event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0);
     }
+
+    raw_spin_unlock(&ctx->lock);
 }
 
 /*
@@
  */
 static void perf_rotate_context(struct perf_cpu_context *cpuctx)
 {
-    u64 interval = (u64)cpuctx->jiffies_interval * TICK_NSEC;
     struct perf_event_context *ctx = NULL;
-    int rotate = 0, remove = 1, freq = 0;
+    int rotate = 0, remove = 1;
 
     if (cpuctx->ctx.nr_events) {
         remove = 0;
         if (cpuctx->ctx.nr_events != cpuctx->ctx.nr_active)
             rotate = 1;
-        if (cpuctx->ctx.nr_freq)
-            freq = 1;
     }
 
     ctx = cpuctx->task_ctx;
@@
         remove = 0;
         if (ctx->nr_events != ctx->nr_active)
             rotate = 1;
-        if (ctx->nr_freq)
-            freq = 1;
     }
 
-    if (!rotate && !freq)
+    if (!rotate)
         goto done;
 
     perf_ctx_lock(cpuctx, cpuctx->task_ctx);
     perf_pmu_disable(cpuctx->ctx.pmu);
 
-    if (freq) {
-        perf_ctx_adjust_freq(&cpuctx->ctx, interval);
-        if (ctx)
-            perf_ctx_adjust_freq(ctx, interval);
-    }
+    cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
+    if (ctx)
+        ctx_sched_out(ctx, cpuctx, EVENT_FLEXIBLE);
 
-    if (rotate) {
-        cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
-        if (ctx)
-            ctx_sched_out(ctx, cpuctx, EVENT_FLEXIBLE);
+    rotate_ctx(&cpuctx->ctx);
+    if (ctx)
+        rotate_ctx(ctx);
 
-        rotate_ctx(&cpuctx->ctx);
-        if (ctx)
-            rotate_ctx(ctx);
-
-        perf_event_sched_in(cpuctx, ctx, current);
-    }
+    perf_event_sched_in(cpuctx, ctx, current);
 
     perf_pmu_enable(cpuctx->ctx.pmu);
     perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
-
 done:
     if (remove)
         list_del_init(&cpuctx->rotation_list);
@@
 {
     struct list_head *head = &__get_cpu_var(rotation_list);
     struct perf_cpu_context *cpuctx, *tmp;
+    struct perf_event_context *ctx;
+    int throttled;
 
     WARN_ON(!irqs_disabled());
 
+    __this_cpu_inc(perf_throttled_seq);
+    throttled = __this_cpu_xchg(perf_throttled_count, 0);
+
     list_for_each_entry_safe(cpuctx, tmp, head, rotation_list) {
+        ctx = &cpuctx->ctx;
+        perf_adjust_freq_unthr_context(ctx, throttled);
+
+        ctx = cpuctx->task_ctx;
+        if (ctx)
+            perf_adjust_freq_unthr_context(ctx, throttled);
+
         if (cpuctx->jiffies_interval == 1 ||
             !(jiffies % cpuctx->jiffies_interval))
             perf_rotate_context(cpuctx);
@@
 {
     int events = atomic_read(&event->event_limit);
     struct hw_perf_event *hwc = &event->hw;
+    u64 seq;
     int ret = 0;
 
     /*
@@
     if (unlikely(!is_sampling_event(event)))
         return 0;
 
-    if (unlikely(hwc->interrupts >= max_samples_per_tick)) {
-        if (throttle) {
+    seq = __this_cpu_read(perf_throttled_seq);
+    if (seq != hwc->interrupts_seq) {
+        hwc->interrupts_seq = seq;
+        hwc->interrupts = 1;
+    } else {
+        hwc->interrupts++;
+        if (unlikely(throttle
+                 && hwc->interrupts >= max_samples_per_tick)) {
+            __this_cpu_inc(perf_throttled_count);
             hwc->interrupts = MAX_INTERRUPTS;
             perf_log_throttle(event, 0);
             ret = 1;
         }
-    } else
-        hwc->interrupts++;
+    }
 
     if (event->attr.freq) {
         u64 now = perf_clock();
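The overflow path above stops zeroing every event's interrupt counter on each tick; instead a per-cpu sequence number is bumped once per tick and a stale counter is lazily reset the next time the event fires. A userspace sketch of that seq-stamp idiom (illustrative names, no per-cpu or locking concerns):

```c
#include <assert.h>

/* Sketch of the perf_throttled_seq idiom: instead of resetting
 * every counter on each tick, bump one global sequence number and
 * reset a counter lazily when its stamp is found to be stale. */
struct counter {
    unsigned long seq;  /* stamp of the tick this counter last saw */
    int interrupts;     /* events counted within that tick */
};

static unsigned long throttle_seq; /* bumped once per "tick" */

static void tick(void)
{
    throttle_seq++;
}

static int count_event(struct counter *c)
{
    if (c->seq != throttle_seq) {
        /* first event since the last tick: restart the count */
        c->seq = throttle_seq;
        c->interrupts = 1;
    } else {
        c->interrupts++;
    }
    return c->interrupts;
}
```

In the kernel the counter would additionally saturate at MAX_INTERRUPTS once max_samples_per_tick is exceeded, flagging the event for unthrottling on the next tick.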
kernel/exit.c (+16)
@@
     if (tsk->nr_dirtied)
         __this_cpu_add(dirty_throttle_leaks, tsk->nr_dirtied);
     exit_rcu();
+
+    /*
+     * The setting of TASK_RUNNING by try_to_wake_up() may be delayed
+     * when the following two conditions become true.
+     *   - There is race condition of mmap_sem (It is acquired by
+     *     exit_mm()), and
+     *   - SMI occurs before setting TASK_RUNINNG.
+     *     (or hypervisor of virtual machine switches to other guest)
+     *  As a result, we may become TASK_RUNNING after becoming TASK_DEAD
+     *
+     * To avoid it, we have to wait for releasing tsk->pi_lock which
+     * is held by try_to_wake_up()
+     */
+    smp_mb();
+    raw_spin_unlock_wait(&tsk->pi_lock);
+
     /* causes final put_task_struct in finish_task_switch(). */
     tsk->state = TASK_DEAD;
     tsk->flags |= PF_NOFREEZE;    /* tell freezer to ignore us */
kernel/fork.c (+20)
@@
 }
 EXPORT_SYMBOL_GPL(get_task_mm);
 
+struct mm_struct *mm_access(struct task_struct *task, unsigned int mode)
+{
+    struct mm_struct *mm;
+    int err;
+
+    err = mutex_lock_killable(&task->signal->cred_guard_mutex);
+    if (err)
+        return ERR_PTR(err);
+
+    mm = get_task_mm(task);
+    if (mm && mm != current->mm &&
+            !ptrace_may_access(task, mode)) {
+        mmput(mm);
+        mm = ERR_PTR(-EACCES);
+    }
+    mutex_unlock(&task->signal->cred_guard_mutex);
+
+    return mm;
+}
+
 /* Please note the differences between mmput and mm_release.
  * mmput is called whenever we stop holding onto a mm_struct,
  * error success whatever.
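mm_access() above reports failure through ERR_PTR(-EACCES) rather than a NULL return, so callers can distinguish "no mm" from "access denied". A self-contained userspace rendition of the kernel's ERR_PTR/IS_ERR convention (the real definitions live in include/linux/err.h; MAX_ERRNO shown with its kernel value):

```c
#include <assert.h>
#include <errno.h>

/* Errno values are small, so the kernel encodes them as pointers
 * in the last (never-mappable) 4095 bytes of the address space. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
    return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
    return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

This is why process_vm_access.c (further down) can write `rc = IS_ERR(mm) ? PTR_ERR(mm) : -ESRCH;` and recover the exact errno from the returned pointer.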
@@
 #ifdef CONFIG_SUSPEND_FREEZER
 static inline int suspend_freeze_processes(void)
 {
-    int error = freeze_processes();
-    return error ? : freeze_kernel_threads();
+    int error;
+
+    error = freeze_processes();
+
+    /*
+     * freeze_processes() automatically thaws every task if freezing
+     * fails. So we need not do anything extra upon error.
+     */
+    if (error)
+        goto Finish;
+
+    error = freeze_kernel_threads();
+
+    /*
+     * freeze_kernel_threads() thaws only kernel threads upon freezing
+     * failure. So we have to thaw the userspace tasks ourselves.
+     */
+    if (error)
+        thaw_processes();
+
+ Finish:
+    return error;
 }
 
 static inline void suspend_thaw_processes(void)
kernel/power/process.c (+5-2)
@@
 /**
  * freeze_kernel_threads - Make freezable kernel threads go to the refrigerator.
  *
- * On success, returns 0.  On failure, -errno and system is fully thawed.
+ * On success, returns 0.  On failure, -errno and only the kernel threads are
+ * thawed, so as to give a chance to the caller to do additional cleanups
+ * (if any) before thawing the userspace tasks. So, it is the responsibility
+ * of the caller to thaw the userspace tasks, when the time is right.
  */
 int freeze_kernel_threads(void)
 {
@@
     BUG_ON(in_atomic());
 
     if (error)
-        thaw_processes();
+        thaw_kernel_threads();
     return error;
 }
kernel/power/user.c (+4-2)
@@
         }
         pm_restore_gfp_mask();
         error = hibernation_snapshot(data->platform_support);
-        if (!error) {
+        if (error) {
+            thaw_kernel_threads();
+        } else {
             error = put_user(in_suspend, (int __user *)arg);
             if (!error && !freezer_test_done)
                 data->ready = 1;
             if (freezer_test_done) {
                 freezer_test_done = false;
-                thaw_processes();
+                thaw_kernel_threads();
             }
         }
         break;
kernel/sched/core.c (+7-12)
@@
 
 #include <asm/tlb.h>
 #include <asm/irq_regs.h>
+#include <asm/mutex.h>
 #ifdef CONFIG_PARAVIRT
 #include <asm/paravirt.h>
 #endif
@@
     p->sched_class->dequeue_task(rq, p, flags);
 }
 
-/*
- * activate_task - move a task to the runqueue.
- */
 void activate_task(struct rq *rq, struct task_struct *p, int flags)
 {
     if (task_contributes_to_load(p))
@@
     enqueue_task(rq, p, flags);
 }
 
-/*
- * deactivate_task - remove a task from the runqueue.
- */
 void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
 {
     if (task_contributes_to_load(p))
@@
     on_rq = p->on_rq;
     running = task_current(rq, p);
     if (on_rq)
-        deactivate_task(rq, p, 0);
+        dequeue_task(rq, p, 0);
     if (running)
         p->sched_class->put_prev_task(rq, p);
 
@@
     if (running)
         p->sched_class->set_curr_task(rq);
     if (on_rq)
-        activate_task(rq, p, 0);
+        enqueue_task(rq, p, 0);
 
     check_class_changed(rq, p, prev_class, oldprio);
     task_rq_unlock(rq, p, &flags);
@@
      * placed properly.
      */
     if (p->on_rq) {
-        deactivate_task(rq_src, p, 0);
+        dequeue_task(rq_src, p, 0);
         set_task_cpu(p, dest_cpu);
-        activate_task(rq_dest, p, 0);
+        enqueue_task(rq_dest, p, 0);
         check_preempt_curr(rq_dest, p, 0);
     }
 done:
@@
 
     on_rq = p->on_rq;
     if (on_rq)
-        deactivate_task(rq, p, 0);
+        dequeue_task(rq, p, 0);
     __setscheduler(rq, p, SCHED_NORMAL, 0);
     if (on_rq) {
-        activate_task(rq, p, 0);
+        enqueue_task(rq, p, 0);
         resched_task(rq->curr);
     }
kernel/sched/fair.c (+29-5)
@@
     return;
 }
 
+static inline void clear_nohz_tick_stopped(int cpu)
+{
+    if (unlikely(test_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)))) {
+        cpumask_clear_cpu(cpu, nohz.idle_cpus_mask);
+        atomic_dec(&nohz.nr_cpus);
+        clear_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu));
+    }
+}
+
 static inline void set_cpu_sd_state_busy(void)
 {
     struct sched_domain *sd;
@@
 {
     int cpu = smp_processor_id();
 
+    /*
+     * If this cpu is going down, then nothing needs to be done.
+     */
+    if (!cpu_active(cpu))
+        return;
+
     if (stop_tick) {
         if (test_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)))
             return;
@@
         set_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu));
     }
     return;
+}
+
+static int __cpuinit sched_ilb_notifier(struct notifier_block *nfb,
+                    unsigned long action, void *hcpu)
+{
+    switch (action & ~CPU_TASKS_FROZEN) {
+    case CPU_DYING:
+        clear_nohz_tick_stopped(smp_processor_id());
+        return NOTIFY_OK;
+    default:
+        return NOTIFY_DONE;
+    }
 }
 #endif
 
@@
      * busy tick after returning from idle, we will update the busy stats.
      */
     set_cpu_sd_state_busy();
-    if (unlikely(test_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)))) {
-        clear_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu));
-        cpumask_clear_cpu(cpu, nohz.idle_cpus_mask);
-        atomic_dec(&nohz.nr_cpus);
-    }
+    clear_nohz_tick_stopped(cpu);
 
     /*
      * None are in tickless mode and hence no need for NOHZ idle load
@@
 
 #ifdef CONFIG_NO_HZ
     zalloc_cpumask_var(&nohz.idle_cpus_mask, GFP_NOWAIT);
+    cpu_notifier(sched_ilb_notifier, 0);
 #endif
 #endif /* SMP */
 
kernel/sched/rt.c (+5)
@@
     if (!next_task)
         return 0;
 
+#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
+    if (unlikely(task_running(rq, next_task)))
+        return 0;
+#endif
+
 retry:
     if (unlikely(next_task == rq->curr)) {
         WARN_ON(1);
@@
 config GENERIC_FIND_FIRST_BIT
     bool
 
+config NO_GENERIC_PCI_IOPORT_MAP
+    bool
+
 config GENERIC_PCI_IOMAP
     bool
@@
 
       If unsure, say N.
 
+config CLZ_TAB
+    bool
+
 config CORDIC
     tristate "CORDIC algorithm"
     help
@@
 config MPILIB
     tristate
+    select CLZ_TAB
     help
       Multiprecision maths library from GnuPG.
       It is used to implement RSA digital signature verification,
@@
     mpi_ptr_t marker[5];
     int markidx = 0;
 
+    if (!dsize)
+        return -EINVAL;
+
     memset(marker, 0, sizeof(marker));
 
     /* Ensure space is enough for quotient and remainder.
@@
      * numerator would be gradually overwritten by the quotient limbs. */
     if (qp == np) { /* Copy NP object to temporary space. */
         np = marker[markidx++] = mpi_alloc_limb_space(nsize);
+        if (!np)
+            goto nomem;
         MPN_COPY(np, qp, nsize);
     }
     } else /* Put quotient at top of remainder. */
lib/mpi/mpi-pow.c (+1-1)
@@
     ep = exp->d;
 
     if (!msize)
-        msize = 1 / msize;    /* provoke a signal */
+        return -EINVAL;
 
     if (!esize) {
         /* Exponent is zero, result is 1 mod MOD, i.e., 1 or 0
lib/mpi/mpicoder.c (+2-89)
@@
 
 #include "mpi-internal.h"
 
-#define DIM(v) (sizeof(v)/sizeof((v)[0]))
 #define MAX_EXTERN_MPI_BITS 16384
-
-static uint8_t asn[15] =    /* Object ID is 1.3.14.3.2.26 */
-{ 0x30, 0x21, 0x30, 0x09, 0x06, 0x05, 0x2b, 0x0e, 0x03,
-  0x02, 0x1a, 0x05, 0x00, 0x04, 0x14
-};
-
-MPI do_encode_md(const void *sha_buffer, unsigned nbits)
-{
-    int nframe = (nbits + 7) / 8;
-    uint8_t *frame, *fr_pt;
-    int i = 0, n;
-    size_t asnlen = DIM(asn);
-    MPI a = MPI_NULL;
-
-    if (SHA1_DIGEST_LENGTH + asnlen + 4 > nframe)
-        pr_info("MPI: can't encode a %d bit MD into a %d bits frame\n",
-            (int)(SHA1_DIGEST_LENGTH * 8), (int)nbits);
-
-    /* We encode the MD in this way:
-     *
-     *     0  A PAD(n bytes)   0  ASN(asnlen bytes)  MD(len bytes)
-     *
-     * PAD consists of FF bytes.
-     */
-    frame = kmalloc(nframe, GFP_KERNEL);
-    if (!frame)
-        return MPI_NULL;
-    n = 0;
-    frame[n++] = 0;
-    frame[n++] = 1;    /* block type */
-    i = nframe - SHA1_DIGEST_LENGTH - asnlen - 3;
-
-    if (i <= 1) {
-        pr_info("MPI: message digest encoding failed\n");
-        kfree(frame);
-        return a;
-    }
-
-    memset(frame + n, 0xff, i);
-    n += i;
-    frame[n++] = 0;
-    memcpy(frame + n, &asn, asnlen);
-    n += asnlen;
-    memcpy(frame + n, sha_buffer, SHA1_DIGEST_LENGTH);
-    n += SHA1_DIGEST_LENGTH;
-
-    i = nframe;
-    fr_pt = frame;
-
-    if (n != nframe) {
-        printk
-            ("MPI: message digest encoding failed, frame length is wrong\n");
-        kfree(frame);
-        return a;
-    }
-
-    a = mpi_alloc((nframe + BYTES_PER_MPI_LIMB - 1) / BYTES_PER_MPI_LIMB);
-    mpi_set_buffer(a, frame, nframe, 0);
-    kfree(frame);
-
-    return a;
-}
 
 MPI mpi_read_from_buffer(const void *xbuffer, unsigned *ret_nread)
 {
@@
     int i, j;
     unsigned nbits, nbytes, nlimbs, nread = 0;
     mpi_limb_t a;
-    MPI val = MPI_NULL;
+    MPI val = NULL;
 
     if (*ret_nread < 2)
         goto leave;
@@
     nlimbs = (nbytes + BYTES_PER_MPI_LIMB - 1) / BYTES_PER_MPI_LIMB;
     val = mpi_alloc(nlimbs);
     if (!val)
-        return MPI_NULL;
+        return NULL;
     i = BYTES_PER_MPI_LIMB - nbytes % BYTES_PER_MPI_LIMB;
     i %= BYTES_PER_MPI_LIMB;
     val->nbits = nbits;
@@
     return 0;
 }
 EXPORT_SYMBOL_GPL(mpi_fromstr);
-
-/****************
- * Special function to get the low 8 bytes from an mpi.
- * This can be used as a keyid; KEYID is an 2 element array.
- * Return the low 4 bytes.
- */
-u32 mpi_get_keyid(const MPI a, u32 *keyid)
-{
-#if BYTES_PER_MPI_LIMB == 4
-    if (keyid) {
-        keyid[0] = a->nlimbs >= 2 ? a->d[1] : 0;
-        keyid[1] = a->nlimbs >= 1 ? a->d[0] : 0;
-    }
-    return a->nlimbs >= 1 ? a->d[0] : 0;
-#elif BYTES_PER_MPI_LIMB == 8
-    if (keyid) {
-        keyid[0] = a->nlimbs ? (u32) (a->d[0] >> 32) : 0;
-        keyid[1] = a->nlimbs ? (u32) (a->d[0] & 0xffffffff) : 0;
-    }
-    return a->nlimbs ? (u32) (a->d[0] & 0xffffffff) : 0;
-#else
-#error Make this function work with other LIMB sizes
-#endif
-}
 
 /****************
  * Return an allocated buffer with the MPI (msb first).
lib/mpi/mpih-div.c (+4)
@@
     case 0:
         /* We are asked to divide by zero, so go ahead and do it!  (To make
            the compiler not remove this statement, return the value.)  */
+        /*
+         * existing clients of this function have been modified
+         * not to call it with dsize == 0, so this should not happen
+         */
         return 1 / dsize;
 
     case 1:
@@
     if (maxlen && len > maxlen)
         len = maxlen;
     if (flags & IORESOURCE_IO)
-        return ioport_map(start, len);
+        return __pci_ioport_map(dev, start, len);
     if (flags & IORESOURCE_MEM) {
         if (flags & IORESOURCE_CACHEABLE)
             return ioremap(start, len);
mm/compaction.c (+23-1)
@@
         } else if (!locked)
             spin_lock_irq(&zone->lru_lock);
 
+        /*
+         * migrate_pfn does not necessarily start aligned to a
+         * pageblock. Ensure that pfn_valid is called when moving
+         * into a new MAX_ORDER_NR_PAGES range in case of large
+         * memory holes within the zone
+         */
+        if ((low_pfn & (MAX_ORDER_NR_PAGES - 1)) == 0) {
+            if (!pfn_valid(low_pfn)) {
+                low_pfn += MAX_ORDER_NR_PAGES - 1;
+                continue;
+            }
+        }
+
         if (!pfn_valid_within(low_pfn))
             continue;
         nr_scanned++;
 
-        /* Get the page and skip if free */
+        /*
+         * Get the page and ensure the page is within the same zone.
+         * See the comment in isolate_freepages about overlapping
+         * nodes. It is deliberate that the new zone lock is not taken
+         * as memory compaction should not move pages between nodes.
+         */
         page = pfn_to_page(low_pfn);
+        if (page_zone(page) != zone)
+            continue;
+
+        /* Skip if free */
         if (PageBuddy(page))
             continue;
 
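The compaction fix above only revalidates the pfn when the scan crosses into a new MAX_ORDER_NR_PAGES-aligned block, since pfn_valid() is only guaranteed meaningful at that granularity. The alignment test is a plain power-of-two mask check, sketched here with an illustrative block size:

```c
#include <assert.h>

/* MAX_ORDER_NR_PAGES is always a power of two, so "did the scan
 * just enter a new block?" reduces to masking off the low bits.
 * 1024 is an illustrative value, not the kernel's. */
#define MAX_ORDER_NR_PAGES 1024UL

static int entered_new_block(unsigned long pfn)
{
    return (pfn & (MAX_ORDER_NR_PAGES - 1)) == 0;
}
```

Only at these boundaries does the scanner pay for a pfn_valid() lookup; within a block, the cheaper pfn_valid_within() (a no-op on most configs) suffices.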
mm/filemap.c (+4-4)
@@
     unsigned long seg = 0;
     size_t count;
     loff_t *ppos = &iocb->ki_pos;
-    struct blk_plug plug;
 
     count = 0;
     retval = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
     if (retval)
         return retval;
-
-    blk_start_plug(&plug);
 
     /* coalesce the iovecs and go direct-to-BIO for O_DIRECT */
     if (filp->f_flags & O_DIRECT) {
@@
         retval = filemap_write_and_wait_range(mapping, pos,
                     pos + iov_length(iov, nr_segs) - 1);
         if (!retval) {
+            struct blk_plug plug;
+
+            blk_start_plug(&plug);
             retval = mapping->a_ops->direct_IO(READ, iocb,
                             iov, pos, nr_segs);
+            blk_finish_plug(&plug);
         }
         if (retval > 0) {
             *ppos = pos + retval;
@@
             break;
         }
 out:
-    blk_finish_plug(&plug);
     return retval;
 }
 EXPORT_SYMBOL(generic_file_aio_read);
mm/filemap_xip.c (+6-1)
@@
                             xip_pfn);
         if (err == -ENOMEM)
             return VM_FAULT_OOM;
-        BUG_ON(err);
+        /*
+         * err == -EBUSY is fine, we've raced against another thread
+         * that faulted-in the same page
+         */
+        if (err != -EBUSY)
+            BUG_ON(err);
         return VM_FAULT_NOPAGE;
     } else {
         int err, ret = VM_FAULT_OOM;
@@
 {
     pr_debug("%s(0x%p)\n", __func__, ptr);
 
-    if (atomic_read(&kmemleak_enabled) && ptr && !IS_ERR(ptr))
+    if (atomic_read(&kmemleak_enabled) && ptr && size && !IS_ERR(ptr))
         add_scan_area((unsigned long)ptr, size, gfp);
     else if (atomic_read(&kmemleak_early_log))
         log_early(KMEMLEAK_SCAN_AREA, ptr, size, 0);
@@
 
 #ifdef CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF
     if (!kmemleak_skip_disable) {
+        atomic_set(&kmemleak_early_log, 0);
         kmemleak_disable();
         return;
     }
mm/memcontrol.c (+2-1)
@@
     /* threshold event is triggered in finer grain than soft limit */
     if (unlikely(mem_cgroup_event_ratelimit(memcg,
                         MEM_CGROUP_TARGET_THRESH))) {
-        bool do_softlimit, do_numainfo;
+        bool do_softlimit;
+        bool do_numainfo __maybe_unused;
 
         do_softlimit = mem_cgroup_event_ratelimit(memcg,
                         MEM_CGROUP_TARGET_SOFTLIMIT);
mm/migrate.c (+1-1)
@@
     ClearPageSwapCache(page);
     ClearPagePrivate(page);
     set_page_private(page, 0);
-    page->mapping = NULL;
 
     /*
      * If any waiters have accumulated on the new page then
@@
     } else {
         if (remap_swapcache)
             remove_migration_ptes(page, newpage);
+        page->mapping = NULL;
     }
 
     unlock_page(newpage);
mm/process_vm_access.c (+9-14)
@@
         goto free_proc_pages;
     }
 
-    task_lock(task);
-    if (__ptrace_may_access(task, PTRACE_MODE_ATTACH)) {
-        task_unlock(task);
-        rc = -EPERM;
+    mm = mm_access(task, PTRACE_MODE_ATTACH);
+    if (!mm || IS_ERR(mm)) {
+        rc = IS_ERR(mm) ? PTR_ERR(mm) : -ESRCH;
+        /*
+         * Explicitly map EACCES to EPERM as EPERM is a more a
+         * appropriate error code for process_vw_readv/writev
+         */
+        if (rc == -EACCES)
+            rc = -EPERM;
         goto put_task_struct;
     }
-    mm = task->mm;
-
-    if (!mm || (task->flags & PF_KTHREAD)) {
-        task_unlock(task);
-        rc = -EINVAL;
-        goto put_task_struct;
-    }
-
-    atomic_inc(&mm->mm_users);
-    task_unlock(task);
 
     for (i = 0; i < riovcnt && iov_l_curr_idx < liovcnt; i++) {
         rc = process_vm_rw_single_vec(
@@
 
 /* GTA02 specific routes and controls */
 
-#ifdef CONFIG_MACH_NEO1973_GTA02
-
 static int gta02_speaker_enabled;
 
 static int lm4853_set_spk(struct snd_kcontrol *kcontrol,
@@
     return 0;
 }
 
-#else
-static int neo1973_gta02_wm8753_init(struct snd_soc_code *codec) { return 0; }
-#endif
-
 static int neo1973_wm8753_init(struct snd_soc_pcm_runtime *rtd)
 {
     struct snd_soc_codec *codec = rtd->codec;
@@
     int ret;
 
     /* set up NC codec pins */
-    if (machine_is_neo1973_gta01()) {
-        snd_soc_dapm_nc_pin(dapm, "LOUT2");
-        snd_soc_dapm_nc_pin(dapm, "ROUT2");
-    }
     snd_soc_dapm_nc_pin(dapm, "OUT3");
     snd_soc_dapm_nc_pin(dapm, "OUT4");
     snd_soc_dapm_nc_pin(dapm, "LINE1");
@@
     return 0;
 }
 
-/* GTA01 specific controls */
-
-#ifdef CONFIG_MACH_NEO1973_GTA01
-
-static const struct snd_soc_dapm_route neo1973_lm4857_routes[] = {
-    {"Amp IN", NULL, "ROUT1"},
-    {"Amp IN", NULL, "LOUT1"},
-
-    {"Handset Spk", NULL, "Amp EP"},
-    {"Stereo Out", NULL, "Amp LS"},
-    {"Headphone", NULL, "Amp HP"},
-};
-
-static const struct snd_soc_dapm_widget neo1973_lm4857_dapm_widgets[] = {
-    SND_SOC_DAPM_SPK("Handset Spk", NULL),
-    SND_SOC_DAPM_SPK("Stereo Out", NULL),
-    SND_SOC_DAPM_HP("Headphone", NULL),
-};
-
-static int neo1973_lm4857_init(struct snd_soc_dapm_context *dapm)
-{
-    int ret;
-
-    ret = snd_soc_dapm_new_controls(dapm, neo1973_lm4857_dapm_widgets,
-            ARRAY_SIZE(neo1973_lm4857_dapm_widgets));
-    if (ret)
-        return ret;
-
-    ret = snd_soc_dapm_add_routes(dapm, neo1973_lm4857_routes,
-            ARRAY_SIZE(neo1973_lm4857_routes));
-    if (ret)
-        return ret;
-
-    snd_soc_dapm_ignore_suspend(dapm, "Stereo Out");
-    snd_soc_dapm_ignore_suspend(dapm, "Handset Spk");
-    snd_soc_dapm_ignore_suspend(dapm, "Headphone");
-
-    return 0;
-}
-
-#else
-static int neo1973_lm4857_init(struct snd_soc_dapm_context *dapm) { return 0; };
-#endif
-
 static struct snd_soc_dai_link neo1973_dai[] = {
 { /* Hifi Playback - for similatious use with voice below */
     .name = "WM8753",
@@
     .name = "dfbmcs320",
     .codec_name = "dfbmcs320.0",
 },
-{
-    .name = "lm4857",
-    .codec_name = "lm4857.0-007c",
-    .init = neo1973_lm4857_init,
-},
 };
 
 static struct snd_soc_codec_conf neo1973_codec_conf[] = {
@@
     },
 };
 
-#ifdef CONFIG_MACH_NEO1973_GTA02
 static const struct gpio neo1973_gta02_gpios[] = {
     { GTA02_GPIO_HP_IN, GPIOF_OUT_INIT_HIGH, "GTA02_HP_IN" },
     { GTA02_GPIO_AMP_SHUT, GPIOF_OUT_INIT_HIGH, "GTA02_AMP_SHUT" },
 };
-#else
-static const struct gpio neo1973_gta02_gpios[] = {};
-#endif
 
 static struct snd_soc_card neo1973 = {
     .name = "neo1973",
@@
 {
     int ret;
 
-    if (!machine_is_neo1973_gta01() && !machine_is_neo1973_gta02())
+    if (!machine_is_neo1973_gta02())
         return -ENODEV;
 
     if (machine_is_neo1973_gta02()) {
sound/soc/soc-core.c (+11)
@@
         if (!codec->suspended && codec->driver->suspend) {
             switch (codec->dapm.bias_level) {
             case SND_SOC_BIAS_STANDBY:
+                /*
+                 * If the CODEC is capable of idle
+                 * bias off then being in STANDBY
+                 * means it's doing something,
+                 * otherwise fall through.
+                 */
+                if (codec->dapm.idle_bias_off) {
+                    dev_dbg(codec->dev,
+                        "idle_bias_off CODEC on over suspend\n");
+                    break;
+                }
             case SND_SOC_BIAS_OFF:
                 codec->driver->suspend(codec);
                 codec->suspended = 1;
@@
 
 CFLAGS = -fno-omit-frame-pointer -ggdb3 -Wall -Wextra -std=gnu99 $(CFLAGS_WERROR) $(CFLAGS_OPTIMIZE) -D_FORTIFY_SOURCE=2 $(EXTRA_WARNINGS) $(EXTRA_CFLAGS)
 EXTLIBS = -lpthread -lrt -lelf -lm
-ALL_CFLAGS = $(CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
+ALL_CFLAGS = $(CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE
 ALL_LDFLAGS = $(LDFLAGS)
 STRIP ?= strip
@@
 
 ### --- END CONFIGURATION SECTION ---
 
-# Those must not be GNU-specific; they are shared with perl/ which may
-# be built by a different compiler. (Note that this is an artifact now
-# but it still might be nice to keep that distinction.)
-BASIC_CFLAGS = -Iutil/include -Iarch/$(ARCH)/include
+BASIC_CFLAGS = -Iutil/include -Iarch/$(ARCH)/include -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE
 BASIC_LDFLAGS =
 
 # Guard against environment variables
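Moving -D_GNU_SOURCE into the Makefile's CFLAGS is what lets the source files below drop their per-file #define/#undef dances. A small sketch of why the macro matters: GNU extensions such as asprintf() are only declared when it is defined before the first libc header (here it is defined in-file, standing in for the -D flag):

```c
/* _GNU_SOURCE must be in effect before <stdio.h> for asprintf()
 * to be declared; the Makefile change provides it via -D_GNU_SOURCE
 * so files need no #define of their own. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: format an int into a freshly allocated string. */
static char *describe(int n)
{
    char *s = NULL;

    if (asprintf(&s, "value=%d", n) < 0) /* GNU extension */
        return NULL;
    return s;
}
```

The caller owns the returned buffer and must free() it.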
tools/perf/builtin-probe.c (-2)
@@
  * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
  *
  */
-#define _GNU_SOURCE
 #include <sys/utsname.h>
 #include <sys/types.h>
 #include <sys/stat.h>
@@
 #include <stdlib.h>
 #include <string.h>
 
-#undef _GNU_SOURCE
 #include "perf.h"
 #include "builtin.h"
 #include "util/util.h"
@@
  * The parts for function graph printing was taken and modified from the
  * Linux Kernel that were written by Frederic Weisbecker.
  */
-#define _GNU_SOURCE
+
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <ctype.h>
 #include <errno.h>
 
-#undef _GNU_SOURCE
 #include "../perf.h"
 #include "util.h"
 #include "trace-event.h"