···
 normal scheduling policy and absolute bandwidth allocation model for
 realtime scheduling policy.

+WARNING: cgroup2 doesn't yet support control of realtime processes and
+the cpu controller can only be enabled when all RT processes are in
+the root cgroup.  Be aware that system management software may already
+have placed RT processes into nonroot cgroups during the system boot
+process, and these processes may need to be moved to the root cgroup
+before the cpu controller can be enabled.
+

 CPU Interface Files
 ~~~~~~~~~~~~~~~~~~~
···
 root of the overlay. Finally the directory is moved to the new
 location.

+There are several ways to tune the "redirect_dir" feature.
+
+Kernel config options:
+
+- OVERLAY_FS_REDIRECT_DIR:
+    If this is enabled, then redirect_dir is turned on by default.
+- OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW:
+    If this is enabled, then redirects are always followed by default.
+    Enabling this results in a less secure configuration. Enable this
+    option only when worried about backward compatibility with kernels
+    that have the redirect_dir feature and follow redirects even if
+    turned off.
+
+Module options (can also be changed through
+/sys/module/overlay/parameters/*):
+
+- "redirect_dir=BOOL":
+    See OVERLAY_FS_REDIRECT_DIR kernel config option above.
+- "redirect_always_follow=BOOL":
+    See OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW kernel config option above.
+- "redirect_max=NUM":
+    The maximum number of bytes in an absolute redirect (default is 256).
+
+Mount options:
+
+- "redirect_dir=on":
+    Redirects are enabled.
+- "redirect_dir=follow":
+    Redirects are not created, but followed.
+- "redirect_dir=off":
+    Redirects are not created and only followed if the
+    "redirect_always_follow" feature is enabled in the kernel/module
+    config.
+- "redirect_dir=nofollow":
+    Redirects are not created and not followed (equivalent to
+    "redirect_dir=off" if the "redirect_always_follow" feature is not
+    enabled).
+

 Non-directories
 ---------------
-874
Documentation/locking/crossrelease.txt
···
Crossrelease
============

Started by Byungchul Park <byungchul.park@lge.com>

Contents:

 (*) Background

     - What causes deadlock
     - How lockdep works

 (*) Limitation

     - Limit lockdep
     - Pros from the limitation
     - Cons from the limitation
     - Relax the limitation

 (*) Crossrelease

     - Introduce crossrelease
     - Introduce commit

 (*) Implementation

     - Data structures
     - How crossrelease works

 (*) Optimizations

     - Avoid duplication
     - Lockless for hot paths

 (*) APPENDIX A: What lockdep does to work aggressively

 (*) APPENDIX B: How to avoid adding false dependencies


==========
Background
==========

What causes deadlock
--------------------

A deadlock occurs when a context is waiting for an event to happen,
which is impossible because another (or the same) context that can
trigger the event is also waiting for another (or the same) event to
happen, which is likewise impossible for the same reason.

For example:

   A context going to trigger event C is waiting for event A to happen.
   A context going to trigger event A is waiting for event B to happen.
   A context going to trigger event B is waiting for event C to happen.

A deadlock occurs when these three wait operations run at the same time,
because event C cannot be triggered if event A does not happen, which in
turn cannot be triggered if event B does not happen, which in turn
cannot be triggered if event C does not happen. In the end, no event can
be triggered since none of them ever meets its condition to wake up.

A dependency might exist between two waiters and a deadlock might happen
due to an incorrect relationship between dependencies. Thus, we must
first define what a dependency is. A dependency exists between them if:

   1. There are two waiters waiting for each event at a given time.
   2. The only way to wake up each waiter is to trigger its event.
   3. Whether one can be woken up depends on whether the other can.

Each wait in the example creates its dependency like:

   Event C depends on event A.
   Event A depends on event B.
   Event B depends on event C.

   NOTE: Precisely speaking, a dependency is one between whether a
   waiter for an event can be woken up and whether another waiter for
   another event can be woken up. However from now on, we will describe
   a dependency as if it's one between an event and another event for
   simplicity.

And they form circular dependencies like:

    -> C -> A -> B -
   /                \
   \                /
    ----------------

   where 'A -> B' means that event A depends on event B.

Such circular dependencies lead to a deadlock since no waiter can meet
its condition to wake up as described.

CONCLUSION

Circular dependencies cause a deadlock.


How lockdep works
-----------------

Lockdep tries to detect a deadlock by checking dependencies created by
lock operations, acquire and release. Waiting for a lock corresponds to
waiting for an event, and releasing a lock corresponds to triggering an
event in the previous section.

In short, lockdep does:

   1. Detect a new dependency.
   2. Add the dependency into a global graph.
   3. Check if that makes dependencies circular.
   4. Report a deadlock or its possibility if so.

For example, consider a graph built by lockdep that looks like:

   A -> B -
           \
            -> E
           /
   C -> D -

   where A, B,..., E are different lock classes.

Lockdep will add a dependency into the graph on detection of a new
dependency. For example, it will add a dependency 'E -> C' when a new
dependency between lock E and lock C is detected. Then the graph will
be:

   A -> B -
           \
            -> E -
           /      \
    -> C -> D -    \
   /           \   /
   \            \ /
    ---------------

   where A, B,..., E are different lock classes.

This graph contains a subgraph which demonstrates circular dependencies:

            -> E -
           /      \
    -> C -> D -    \
   /           \   /
   \            \ /
    ---------------

   where C, D and E are different lock classes.

This is the condition under which a deadlock might occur. Lockdep
reports it on detection after adding a new dependency. This is how
lockdep works.

CONCLUSION

Lockdep detects a deadlock or its possibility by checking if circular
dependencies were created after adding each new dependency.


==========
Limitation
==========

Limit lockdep
-------------

Limiting lockdep to work on only typical locks e.g. spin locks and
mutexes, which are released within the acquire context, the
implementation becomes simple but its capacity for detection becomes
limited. Let's check the pros and cons in the next sections.


Pros from the limitation
------------------------

Given the limitation, when acquiring a lock, the locks in held_locks
cannot be released if the context cannot acquire it and so has to wait,
which means all waiters for the locks in held_locks are stuck. This is
exactly the case that creates dependencies between each lock in
held_locks and the lock to acquire.

For example:

   CONTEXT X
   ---------
   acquire A
   acquire B /* Add a dependency 'A -> B' */
   release B
   release A

   where A and B are different lock classes.

When acquiring lock A, the held_locks of CONTEXT X is empty thus no
dependency is added. But when acquiring lock B, lockdep detects and adds
a new dependency 'A -> B' between lock A in held_locks and lock B.
Dependencies can be simply added whenever acquiring each lock.

And the data required by lockdep exists in a local structure,
held_locks, embedded in task_struct. By forcing access to the data to
happen within the context, lockdep can avoid races without explicit
locks while handling the local data.

Lastly, lockdep only needs to keep locks currently being held to build
a dependency graph. However, relaxing the limitation, it needs to keep
even locks already released, because a decision whether they created
dependencies might be long-deferred.

To sum up, we can expect several advantages from the limitation:

   1. Lockdep can easily identify a dependency when acquiring a lock.
   2. Races are avoidable while accessing the local held_locks.
   3. Lockdep only needs to keep locks currently being held.

CONCLUSION

Given the limitation, the implementation becomes simple and efficient.


Cons from the limitation
------------------------

Given the limitation, lockdep is applicable only to typical locks. For
example, page locks for page access or completions for synchronization
cannot work with lockdep.

Can we detect the deadlocks below, under the limitation?

Example 1:

   CONTEXT X          CONTEXT Y          CONTEXT Z
   ---------          ---------          ----------
                      mutex_lock A
   lock_page B
                      lock_page B
                                         mutex_lock A /* DEADLOCK */
                                         unlock_page B held by X
                      unlock_page B
                      mutex_unlock A
                                         mutex_unlock A

   where A and B are different lock classes.

No, we cannot.

Example 2:

   CONTEXT X          CONTEXT Y
   ---------          ---------
                      mutex_lock A
   mutex_lock A
                      wait_for_complete B /* DEADLOCK */
   complete B
                      mutex_unlock A
   mutex_unlock A

   where A is a lock class and B is a completion variable.

No, we cannot.

CONCLUSION

Given the limitation, lockdep cannot detect a deadlock or its
possibility caused by page locks or completions.


Relax the limitation
--------------------

Under the limitation, the things that create dependencies are limited
to typical locks. However, synchronization primitives like page locks
and completions, which are allowed to be released in any context, also
create dependencies and can cause a deadlock. So lockdep should track
these locks to do a better job. We have to relax the limitation for
these locks to work with lockdep.

Detecting dependencies is very important for lockdep to work because
adding a dependency means adding an opportunity to check whether it
causes a deadlock. The more dependencies lockdep adds, the more
thoroughly it works. Thus lockdep has to do its best to detect and add
as many true dependencies into a graph as possible.

For example, considering only typical locks, lockdep builds a graph like:

   A -> B -
           \
            -> E
           /
   C -> D -

   where A, B,..., E are different lock classes.

On the other hand, under the relaxation, additional dependencies might
be created and added. Assuming additional 'FX -> C' and 'E -> GX' are
added thanks to the relaxation, the graph will be:

   A -> B -
           \
            -> E -> GX
           /
  FX -> C -> D -

   where A, B,..., E, FX and GX are different lock classes, and a
   suffix 'X' is added on non-typical locks.

The latter graph gives us more chances to check circular dependencies
than the former. However, it might suffer performance degradation since
relaxing the limitation, with which the design and implementation of
lockdep can be efficient, might inevitably introduce inefficiency. So
lockdep should provide two options, strong detection and efficient
detection.

Choosing efficient detection:

   Lockdep works with only locks restricted to be released within the
   acquire context. However, lockdep works efficiently.

Choosing strong detection:

   Lockdep works with all synchronization primitives. However, lockdep
   suffers performance degradation.

CONCLUSION

Relaxing the limitation, lockdep can add additional dependencies,
giving additional opportunities to check circular dependencies.


============
Crossrelease
============

Introduce crossrelease
----------------------

In order to allow lockdep to handle additional dependencies created by
what might be released in any context, namely a 'crosslock', we have to
be able to identify the dependencies created by crosslocks. The
proposed 'crossrelease' feature provides a way to do that.

The crossrelease feature has to:

   1. Identify dependencies created by crosslocks.
   2. Add the dependencies into a dependency graph.

That's all. Once a meaningful dependency is added into the graph,
lockdep works with the graph as it did before. The most important thing
the crossrelease feature has to do is to correctly identify and add
true dependencies into the global graph.

A dependency e.g. 'A -> B' can be identified only in A's release
context because the decision required to identify the dependency can be
made only in the release context. That is, to decide whether A can be
released so that a waiter for A can be woken up. It cannot be made
anywhere other than A's release context.

This doesn't matter for typical locks because each acquire context is
the same as its release context, thus lockdep can decide whether a lock
can be released in the acquire context. However for crosslocks, lockdep
cannot make the decision in the acquire context but has to wait until
the release context is identified.

Therefore, deadlocks caused by crosslocks cannot be detected just when
they happen, because they cannot be identified until the crosslocks are
released. However, deadlock possibilities can be detected, and that is
very worthwhile. See the 'APPENDIX A' section to check why.

CONCLUSION

Using the crossrelease feature, lockdep can work with what might be
released in any context, namely a crosslock.


Introduce commit
----------------

Since crossrelease defers the work of adding true dependencies of
crosslocks until they are actually released, crossrelease has to queue
all acquisitions which might create dependencies with the crosslocks.
Then it identifies dependencies using the queued data in batches at a
proper time. We call it 'commit'.

There are four types of dependencies:

1. TT type: 'typical lock A -> typical lock B'

   Just when acquiring B, lockdep can see it's in A's release
   context. So the dependency between A and B can be identified
   immediately. Commit is unnecessary.

2. TC type: 'typical lock A -> crosslock BX'

   Just when acquiring BX, lockdep can see it's in A's release
   context. So the dependency between A and BX can be identified
   immediately. Commit is unnecessary, too.

3. CT type: 'crosslock AX -> typical lock B'

   When acquiring B, lockdep cannot identify the dependency because
   there's no way to know if it's in AX's release context. It has
   to wait until the decision can be made. Commit is necessary.

4. CC type: 'crosslock AX -> crosslock BX'

   When acquiring BX, lockdep cannot identify the dependency because
   there's no way to know if it's in AX's release context. It has
   to wait until the decision can be made. Commit is necessary.
   But handling the CC type is not implemented yet. It's future work.

Lockdep can work without commit for typical locks, but the commit step
is necessary once crosslocks are involved. Introducing commit, lockdep
performs three steps. What lockdep does in each step is:

1. Acquisition: For typical locks, lockdep does what it originally did
   and queues the lock so that CT type dependencies can be checked
   using it at the commit step. For crosslocks, it saves data which
   will be used at the commit step and increases a reference count for
   it.

2. Commit: No action is required for typical locks. For crosslocks,
   lockdep adds CT type dependencies using the data saved at the
   acquisition step.

3. Release: No changes are required for typical locks. When a crosslock
   is released, it decreases a reference count for it.

CONCLUSION

Crossrelease introduces a commit step to handle dependencies of
crosslocks in batches at a proper time.


==============
Implementation
==============

Data structures
---------------

Crossrelease introduces two main data structures.

1. hist_lock

   This is an array embedded in task_struct, for keeping lock history
   so that dependencies can be added using it at the commit step. Since
   it's local data, it can be accessed locklessly in the owner context.
   The array is filled at the acquisition step and consumed at the
   commit step. And it's managed in a circular manner.

2. cross_lock

   One per lockdep_map exists. This is for keeping data of crosslocks
   and is used at the commit step.


How crossrelease works
----------------------

The key to how crossrelease works is to defer the necessary work to an
appropriate point in time and perform it at once at the commit step.
Let's take a look with examples step by step, starting from how lockdep
works without crossrelease for typical locks.

   acquire A /* Push A onto held_locks */
   acquire B /* Push B onto held_locks and add 'A -> B' */
   acquire C /* Push C onto held_locks and add 'B -> C' */
   release C /* Pop C from held_locks */
   release B /* Pop B from held_locks */
   release A /* Pop A from held_locks */

   where A, B and C are different lock classes.

   NOTE: This document assumes that readers already understand how
   lockdep works without crossrelease and thus omits details. But
   there's one thing to note. Lockdep pretends to pop a lock from
   held_locks when releasing it. But it's subtly different from the
   original pop operation because lockdep allows entries other than
   the top to be popped.

In this case, lockdep adds a 'the top of held_locks -> the lock to
acquire' dependency every time a lock is acquired.

After adding 'A -> B', the dependency graph will be:

   A -> B

   where A and B are different lock classes.

And after adding 'B -> C', the graph will be:

   A -> B -> C

   where A, B and C are different lock classes.

Let's perform the commit step even for typical locks to add
dependencies. Of course, the commit step is not necessary for them;
however, it would work well because this is a more general way.

   acquire A
   /*
    * Queue A into hist_locks
    *
    * In hist_locks: A
    * In graph: Empty
    */

   acquire B
   /*
    * Queue B into hist_locks
    *
    * In hist_locks: A, B
    * In graph: Empty
    */

   acquire C
   /*
    * Queue C into hist_locks
    *
    * In hist_locks: A, B, C
    * In graph: Empty
    */

   commit C
   /*
    * Add 'C -> ?'
    * Answer the following to decide '?'
    * What has been queued since acquire C: Nothing
    *
    * In hist_locks: A, B, C
    * In graph: Empty
    */

   release C

   commit B
   /*
    * Add 'B -> ?'
    * Answer the following to decide '?'
    * What has been queued since acquire B: C
    *
    * In hist_locks: A, B, C
    * In graph: 'B -> C'
    */

   release B

   commit A
   /*
    * Add 'A -> ?'
    * Answer the following to decide '?'
    * What has been queued since acquire A: B, C
    *
    * In hist_locks: A, B, C
    * In graph: 'B -> C', 'A -> B', 'A -> C'
    */

   release A

   where A, B and C are different lock classes.

In this case, dependencies are added at the commit step as described.

After commits for A, B and C, the graph will be:

   A -> B -> C

   where A, B and C are different lock classes.

   NOTE: A dependency 'A -> C' is optimized out.

We can see the former graph built without the commit step is the same
as the latter graph built using commit steps. Of course the former way
leads to earlier completion of the graph, which means we can detect a
deadlock or its possibility sooner. So the former way would be
preferred when possible. But we cannot avoid using the latter way for
crosslocks.

Let's look at how commit steps work for crosslocks. In this case, the
commit step is performed only on crosslock BX as real. And it assumes
that the BX release context is different from the BX acquire context.

   BX RELEASE CONTEXT                 BX ACQUIRE CONTEXT
   ------------------                 ------------------
                                      acquire A
                                      /*
                                       * Push A onto held_locks
                                       * Queue A into hist_locks
                                       *
                                       * In held_locks: A
                                       * In hist_locks: A
                                       * In graph: Empty
                                       */

                                      acquire BX
                                      /*
                                       * Add 'the top of held_locks -> BX'
                                       *
                                       * In held_locks: A
                                       * In hist_locks: A
                                       * In graph: 'A -> BX'
                                       */

   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   It must be guaranteed that the following operations are seen after
   acquiring BX globally. It can be done by things like barrier.
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

   acquire C
   /*
    * Push C onto held_locks
    * Queue C into hist_locks
    *
    * In held_locks: C
    * In hist_locks: C
    * In graph: 'A -> BX'
    */

   release C
   /*
    * Pop C from held_locks
    *
    * In held_locks: Empty
    * In hist_locks: C
    * In graph: 'A -> BX'
    */
                                      acquire D
                                      /*
                                       * Push D onto held_locks
                                       * Queue D into hist_locks
                                       * Add 'the top of held_locks -> D'
                                       *
                                       * In held_locks: A, D
                                       * In hist_locks: A, D
                                       * In graph: 'A -> BX', 'A -> D'
                                       */
   acquire E
   /*
    * Push E onto held_locks
    * Queue E into hist_locks
    *
    * In held_locks: E
    * In hist_locks: C, E
    * In graph: 'A -> BX', 'A -> D'
    */

   release E
   /*
    * Pop E from held_locks
    *
    * In held_locks: Empty
    * In hist_locks: D, E
    * In graph: 'A -> BX', 'A -> D'
    */
                                      release D
                                      /*
                                       * Pop D from held_locks
                                       *
                                       * In held_locks: A
                                       * In hist_locks: A, D
                                       * In graph: 'A -> BX', 'A -> D'
                                       */
   commit BX
   /*
    * Add 'BX -> ?'
    * What has been queued since acquire BX: C, E
    *
    * In held_locks: Empty
    * In hist_locks: D, E
    * In graph: 'A -> BX', 'A -> D',
    *           'BX -> C', 'BX -> E'
    */

   release BX
   /*
    * In held_locks: Empty
    * In hist_locks: D, E
    * In graph: 'A -> BX', 'A -> D',
    *           'BX -> C', 'BX -> E'
    */
                                      release A
                                      /*
                                       * Pop A from held_locks
                                       *
                                       * In held_locks: Empty
                                       * In hist_locks: A, D
                                       * In graph: 'A -> BX', 'A -> D',
                                       *           'BX -> C', 'BX -> E'
                                       */

   where A, BX, C,..., E are different lock classes, and a suffix 'X'
   is added on crosslocks.

Crossrelease considers all acquisitions after acquiring BX as
candidates which might create dependencies with BX. True dependencies
will be determined when identifying the release context of BX.
Meanwhile, all typical locks are queued so that they can be used at the
commit step. And then the two dependencies 'BX -> C' and 'BX -> E' are
added at the commit step when identifying the release context.

The final graph will be, with crossrelease:

          -> C
         /
    -> BX -
   /       \
   A -      -> E
    \
     -> D

   where A, BX, C,..., E are different lock classes, and a suffix 'X'
   is added on crosslocks.

However, the final graph will be, without crossrelease:

   A -> D

   where A and D are different lock classes.

The former graph has three more dependencies, 'A -> BX', 'BX -> C' and
'BX -> E', giving additional opportunities to check if they cause
deadlocks. This way lockdep can detect a deadlock or its possibility
caused by crosslocks.

CONCLUSION

We checked how crossrelease works with several examples.


=============
Optimizations
=============

Avoid duplication
-----------------

The crossrelease feature uses a cache like what lockdep already uses
for dependency chains, but this time it's for caching CT type
dependencies. Once such a dependency is cached, the same one will never
be added again.


Lockless for hot paths
----------------------

To keep all locks for later use at the commit step, crossrelease adopts
a local array embedded in task_struct, which makes access to the data
lockless by forcing it to happen only within the owner context. It's
like how lockdep handles held_locks. A lockless implementation is
important since typical locks are very frequently acquired and
released.


==================================================
APPENDIX A: What lockdep does to work aggressively
==================================================

A deadlock actually occurs when all wait operations creating circular
dependencies run at the same time. Even though they don't, a potential
deadlock exists if the problematic dependencies exist. Thus it's
meaningful to detect not only an actual deadlock but also its potential
possibility. The latter is rather valuable. When a deadlock actually
occurs, we can identify what happens in the system by some means or
other even without lockdep. However, there's no way to detect the
possibility without lockdep unless the whole code is parsed in one's
head. It's terrible. Lockdep does both, and crossrelease only focuses
on the latter.

Whether or not a deadlock actually occurs depends on several factors.
For example, what order contexts are switched in is a factor. Assuming
circular dependencies exist, a deadlock would occur when contexts are
switched so that all wait operations creating the dependencies run
simultaneously. Thus to detect a deadlock possibility even in the case
that it has not occurred yet, lockdep should consider all possible
combinations of dependencies, trying to:

1. Use a global dependency graph.

   Lockdep combines all dependencies into one global graph and uses
   them, regardless of which context generates them or what order
   contexts are switched in. Aggregated dependencies are only
   considered, so they are prone to be circular if a problem exists.

2. Check dependencies between classes instead of instances.

   What actually causes a deadlock are instances of locks. However,
   lockdep checks dependencies between classes instead of instances.
   This way lockdep can detect a deadlock which has not happened but
   might happen in the future by others of the same class.

3. Assume all acquisitions lead to waiting.

   Although locks might be acquired without waiting, which is essential
   to create dependencies, lockdep assumes all acquisitions lead to
   waiting since it might be true some time or another.

CONCLUSION

Lockdep detects not only an actual deadlock but also its possibility,
and the latter is more valuable.


==================================================
APPENDIX B: How to avoid adding false dependencies
==================================================

Remind what a dependency is. A dependency exists if:

   1. There are two waiters waiting for each event at a given time.
   2. The only way to wake up each waiter is to trigger its event.
   3. Whether one can be woken up depends on whether the other can.

For example:

   acquire A
   acquire B /* A dependency 'A -> B' exists */
   release B
   release A

   where A and B are different lock classes.

A dependency 'A -> B' exists since:

   1. A waiter for A and a waiter for B might exist when acquiring B.
   2. The only way to wake up each is to release what it waits for.
   3. Whether the waiter for A can be woken up depends on whether the
      other can. IOW, TASK X cannot release A if it fails to acquire B.

For another example:

   TASK X                             TASK Y
   ------                             ------
                                      acquire AX
   acquire B /* A dependency 'AX -> B' exists */
   release B
   release AX held by Y

   where AX and B are different lock classes, and a suffix 'X' is added
   on crosslocks.

Even in this case involving crosslocks, the same rule can be applied. A
dependency 'AX -> B' exists since:

   1. A waiter for AX and a waiter for B might exist when acquiring B.
   2. The only way to wake up each is to release what it waits for.
   3. Whether the waiter for AX can be woken up depends on whether the
      other can. IOW, TASK X cannot release AX if it fails to acquire
      B.

Let's take a look at a more complicated example:

   TASK X                             TASK Y
   ------                             ------
   acquire B
   release B
   fork Y
                                      acquire AX
   acquire C /* A dependency 'AX -> C' exists */
   release C
   release AX held by Y

   where AX, B and C are different lock classes, and a suffix 'X' is
   added on crosslocks.

Does a dependency 'AX -> B' exist? Nope.

Two waiters are essential to create a dependency. However, waiters for
AX and B to create 'AX -> B' cannot exist at the same time in this
example. Thus the dependency 'AX -> B' cannot be created.

It would be ideal if the full set of true dependencies could be
considered. But we can ensure nothing but what actually happened.
Relying on what actually happens at runtime, we can anyway add only
true ones, though they might be a subset of all true ones. It's similar
to how lockdep works for typical locks. There might be more true
dependencies than what lockdep has detected at runtime. Lockdep has no
choice but to rely on what actually happens. Crossrelease also relies
on it.

CONCLUSION

Relying on what actually happens, lockdep can avoid adding false
dependencies.
+12-3
Documentation/virtual/kvm/api.txt
···
struct kvm_s390_irq_state {
	__u64 buf;
-	__u32 flags;
+	__u32 flags;        /* will stay unused for compatibility reasons */
	__u32 len;
-	__u32 reserved[4];
+	__u32 reserved[4];  /* will stay unused for compatibility reasons */
};

Userspace passes in the above struct and for each pending interrupt a
struct kvm_s390_irq is copied to the provided buffer.
+
+The structure contains a flags and a reserved field for future extensions. As
+the kernel never checked for flags == 0 and QEMU never pre-zeroed flags and
+reserved, these fields cannot be used in the future without breaking
+compatibility.

If -ENOBUFS is returned the buffer provided was too small and userspace
may retry with a bigger buffer.
···
struct kvm_s390_irq_state {
	__u64 buf;
+	__u32 flags;        /* will stay unused for compatibility reasons */
	__u32 len;
-	__u32 pad;
+	__u32 reserved[4];  /* will stay unused for compatibility reasons */
};
+
+The restrictions for flags and reserved apply as well.
+(see KVM_S390_GET_IRQ_STATE)

The userspace memory referenced by buf contains a struct kvm_s390_irq
for each interrupt to be injected into the guest.
+21-1
Documentation/vm/zswap.txt
···
original compressor. Once all pages are removed from an old zpool, the zpool
and its compressor are freed.

+Some of the pages in zswap are same-value filled pages (i.e. the contents of
+the page are a single repeated value or pattern). These pages include
+zero-filled pages and they are handled differently. During a store operation,
+a page is first checked to see whether it is a same-value filled page. If it
+is, the compressed length of the page is set to zero and only the pattern or
+same-filled value is stored.
+
+The same-value filled pages identification feature is enabled by default and
+can be disabled at boot time by setting the "same_filled_pages_enabled"
+attribute to 0, e.g. zswap.same_filled_pages_enabled=0. It can also be
+enabled and disabled at runtime using the sysfs "same_filled_pages_enabled"
+attribute, e.g.
+
+echo 1 > /sys/module/zswap/parameters/same_filled_pages_enabled
+
+When zswap same-filled page identification is disabled at runtime, it will
+stop checking for same-value filled pages during store operations. However,
+the existing pages which are marked as same-value filled remain stored
+unchanged in zswap until they are either loaded or invalidated.
+
-A debugfs interface is provided for various statistic about pool size, number
-of pages stored, and various counters for the reasons pages are rejected.
+A debugfs interface is provided for various statistics about pool size,
+number of pages stored, same-value filled pages and various counters for the
+reasons pages are rejected.
+3-2
MAINTAINERS
···20472047F: arch/arm/include/asm/hardware/cache-uniphier.h20482048F: arch/arm/mach-uniphier/20492049F: arch/arm/mm/cache-uniphier.c20502050-F: arch/arm64/boot/dts/socionext/20502050+F: arch/arm64/boot/dts/socionext/uniphier*20512051F: drivers/bus/uniphier-system-bus.c20522052F: drivers/clk/uniphier/20532053F: drivers/gpio/gpio-uniphier.c···5435543554365436FCOE SUBSYSTEM (libfc, libfcoe, fcoe)54375437M: Johannes Thumshirn <jth@kernel.org>54385438-L: fcoe-devel@open-fcoe.org54385438+L: linux-scsi@vger.kernel.org54395439W: www.Open-FCoE.org54405440S: Supported54415441F: drivers/scsi/libfc/···13133131331313413134SYNOPSYS DESIGNWARE ENTERPRISE ETHERNET DRIVER1313513135M: Jie Deng <jiedeng@synopsys.com>1313613136+M: Jose Abreu <Jose.Abreu@synopsys.com>1313613137L: netdev@vger.kernel.org1313713138S: Supported1313813139F: drivers/net/ethernet/synopsys/
···8181/* ... and its pointer from SRAM after copy */8282extern void (*omap3_do_wfi_sram)(void);83838484-/* save_secure_ram_context function pointer and size, for copy to SRAM */8585-extern int save_secure_ram_context(u32 *addr);8686-extern unsigned int save_secure_ram_context_sz;8787-8884extern void omap3_save_scratchpad_contents(void);89859086#define PM_RTA_ERRATUM_i608 (1 << 0)
+4-9
arch/arm/mach-omap2/pm34xx.c
···4848#include "prm3xxx.h"4949#include "pm.h"5050#include "sdrc.h"5151+#include "omap-secure.h"5152#include "sram.h"5253#include "control.h"5354#include "vc.h"···67666867static LIST_HEAD(pwrst_list);69687070-static int (*_omap_save_secure_sram)(u32 *addr);7169void (*omap3_do_wfi_sram)(void);72707371static struct powerdomain *mpu_pwrdm, *neon_pwrdm;···121121 * will hang the system.122122 */123123 pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);124124- ret = _omap_save_secure_sram((u32 *)(unsigned long)125125- __pa(omap3_secure_ram_storage));124124+ ret = omap3_save_secure_ram(omap3_secure_ram_storage,125125+ OMAP3_SAVE_SECURE_RAM_SZ);126126 pwrdm_set_next_pwrst(mpu_pwrdm, mpu_next_state);127127 /* Following is for error tracking, it should not happen */128128 if (ret) {···434434 *435435 * The minimum set of functions is pushed to SRAM for execution:436436 * - omap3_do_wfi for erratum i581 WA,437437- * - save_secure_ram_context for security extensions.438437 */439438void omap_push_sram_idle(void)440439{441440 omap3_do_wfi_sram = omap_sram_push(omap3_do_wfi, omap3_do_wfi_sz);442442-443443- if (omap_type() != OMAP2_DEVICE_TYPE_GP)444444- _omap_save_secure_sram = omap_sram_push(save_secure_ram_context,445445- save_secure_ram_context_sz);446441}447442448443static void __init pm_errata_configure(void)···548553 clkdm_add_wkdep(neon_clkdm, mpu_clkdm);549554 if (omap_type() != OMAP2_DEVICE_TYPE_GP) {550555 omap3_secure_ram_storage =551551- kmalloc(0x803F, GFP_KERNEL);556556+ kmalloc(OMAP3_SAVE_SECURE_RAM_SZ, GFP_KERNEL);552557 if (!omap3_secure_ram_storage)553558 pr_err("Memory allocation failed when allocating for secure sram context\n");554559
···176176 return v;177177}178178179179-static int am33xx_pwrdm_read_prev_pwrst(struct powerdomain *pwrdm)180180-{181181- u32 v;182182-183183- v = am33xx_prm_read_reg(pwrdm->prcm_offs, pwrdm->pwrstst_offs);184184- v &= AM33XX_LASTPOWERSTATEENTERED_MASK;185185- v >>= AM33XX_LASTPOWERSTATEENTERED_SHIFT;186186-187187- return v;188188-}189189-190179static int am33xx_pwrdm_set_lowpwrstchange(struct powerdomain *pwrdm)191180{192181 am33xx_prm_rmw_reg_bits(AM33XX_LOWPOWERSTATECHANGE_MASK,···346357 .pwrdm_set_next_pwrst = am33xx_pwrdm_set_next_pwrst,347358 .pwrdm_read_next_pwrst = am33xx_pwrdm_read_next_pwrst,348359 .pwrdm_read_pwrst = am33xx_pwrdm_read_pwrst,349349- .pwrdm_read_prev_pwrst = am33xx_pwrdm_read_prev_pwrst,350360 .pwrdm_set_logic_retst = am33xx_pwrdm_set_logic_retst,351361 .pwrdm_read_logic_pwrst = am33xx_pwrdm_read_logic_pwrst,352362 .pwrdm_read_logic_retst = am33xx_pwrdm_read_logic_retst,
+4-22
arch/arm/mach-omap2/sleep34xx.S
···9393ENDPROC(enable_omap3630_toggle_l2_on_restore)94949595/*9696- * Function to call rom code to save secure ram context. This gets9797- * relocated to SRAM, so it can be all in .data section. Otherwise9898- * we need to initialize api_params separately.9696+ * Function to call rom code to save secure ram context.9797+ *9898+ * r0 = physical address of the parameters9999 */100100- .data101101- .align 3102100ENTRY(save_secure_ram_context)103101 stmfd sp!, {r4 - r11, lr} @ save registers on stack104104- adr r3, api_params @ r3 points to parameters105105- str r0, [r3,#0x4] @ r0 has sdram address106106- ldr r12, high_mask107107- and r3, r3, r12108108- ldr r12, sram_phy_addr_mask109109- orr r3, r3, r12102102+ mov r3, r0 @ physical address of parameters110103 mov r0, #25 @ set service ID for PPA111104 mov r12, r0 @ copy secure service ID in r12112105 mov r1, #0 @ set task id for ROM code in r1···113120 nop114121 nop115122 ldmfd sp!, {r4 - r11, pc}116116- .align117117-sram_phy_addr_mask:118118- .word SRAM_BASE_P119119-high_mask:120120- .word 0xffff121121-api_params:122122- .word 0x4, 0x0, 0x0, 0x1, 0x1123123ENDPROC(save_secure_ram_context)124124-ENTRY(save_secure_ram_context_sz)125125- .word . - save_secure_ram_context126126-127127- .text128124129125/*130126 * ======================
+11-1
arch/arm64/Kconfig
···557557558558 If unsure, say Y.559559560560-561560config SOCIONEXT_SYNQUACER_PREITS562561 bool "Socionext Synquacer: Workaround for GICv3 pre-ITS"563562 default y···575576 a 128kB offset to be applied to the target address in this commands.576577577578 If unsure, say Y.579579+580580+config QCOM_FALKOR_ERRATUM_E1041581581+ bool "Falkor E1041: Speculative instruction fetches might cause errant memory access"582582+ default y583583+ help584584+ Falkor CPU may speculatively fetch instructions from an improper585585+ memory location when MMU translation is changed from SCTLR_ELn[M]=1586586+ to SCTLR_ELn[M]=0. Prefix an ISB instruction to fix the problem.587587+588588+ If unsure, say Y.589589+578590endmenu579591580592
···
#endif
	.endm

+/*
+ * Errata workaround prior to disabling the MMU. Insert an ISB immediately
+ * prior to executing the MSR that will change SCTLR_ELn[M] from a value of
+ * 1 to 0.
+ */
+	.macro pre_disable_mmu_workaround
+#ifdef CONFIG_QCOM_FALKOR_ERRATUM_E1041
+	isb
+#endif
+	.endm
+
#endif /* __ASM_ASSEMBLER_H */
+3
arch/arm64/include/asm/cpufeature.h
···6060#define FTR_VISIBLE true /* Feature visible to the user space */6161#define FTR_HIDDEN false /* Feature is hidden from the user */62626363+#define FTR_VISIBLE_IF_IS_ENABLED(config) \6464+ (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)6565+6366struct arm64_ftr_bits {6467 bool sign; /* Value is signed ? */6568 bool visible;
···750750 * to take into account by discarding the current kernel mapping and751751 * creating a new one.752752 */753753+ pre_disable_mmu_workaround753754 msr sctlr_el1, x20 // disable the MMU754755 isb755756 bl __create_page_tables // recreate kernel mapping
+1-1
arch/arm64/kernel/hw_breakpoint.c
···2828#include <linux/perf_event.h>2929#include <linux/ptrace.h>3030#include <linux/smp.h>3131+#include <linux/uaccess.h>31323233#include <asm/compat.h>3334#include <asm/current.h>···3736#include <asm/traps.h>3837#include <asm/cputype.h>3938#include <asm/system_misc.h>4040-#include <asm/uaccess.h>41394240/* Breakpoint currently in use for each BRP. */4341static DEFINE_PER_CPU(struct perf_event *, bp_on_reg[ARM_MAX_BRP]);
···221221 }222222 }223223}224224+225225+226226+/*227227+ * After successfully emulating an instruction, we might want to228228+ * return to user space with a KVM_EXIT_DEBUG. We can only do this229229+ * once the emulation is complete, though, so for userspace emulations230230+ * we have to wait until we have re-entered KVM before calling this231231+ * helper.232232+ *233233+ * Return true (and set exit_reason) to return to userspace or false234234+ * if no further action is required.235235+ */236236+bool kvm_arm_handle_step_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)237237+{238238+ if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {239239+ run->exit_reason = KVM_EXIT_DEBUG;240240+ run->debug.arch.hsr = ESR_ELx_EC_SOFTSTP_LOW << ESR_ELx_EC_SHIFT;241241+ return true;242242+ }243243+ return false;244244+}
+42-15
arch/arm64/kvm/handle_exit.c
···2828#include <asm/kvm_emulate.h>2929#include <asm/kvm_mmu.h>3030#include <asm/kvm_psci.h>3131+#include <asm/debug-monitors.h>31323233#define CREATE_TRACE_POINTS3334#include "trace.h"···188187}189188190189/*190190+ * We may be single-stepping an emulated instruction. If the emulation191191+ * has been completed in the kernel, we can return to userspace with a192192+ * KVM_EXIT_DEBUG, otherwise userspace needs to complete its193193+ * emulation first.194194+ */195195+static int handle_trap_exceptions(struct kvm_vcpu *vcpu, struct kvm_run *run)196196+{197197+ int handled;198198+199199+ /*200200+ * See ARM ARM B1.14.1: "Hyp traps on instructions201201+ * that fail their condition code check"202202+ */203203+ if (!kvm_condition_valid(vcpu)) {204204+ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));205205+ handled = 1;206206+ } else {207207+ exit_handle_fn exit_handler;208208+209209+ exit_handler = kvm_get_exit_handler(vcpu);210210+ handled = exit_handler(vcpu, run);211211+ }212212+213213+ /*214214+ * kvm_arm_handle_step_debug() sets the exit_reason on the kvm_run215215+ * structure if we need to return to userspace.216216+ */217217+ if (handled > 0 && kvm_arm_handle_step_debug(vcpu, run))218218+ handled = 0;219219+220220+ return handled;221221+}222222+223223+/*191224 * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on192225 * proper exit to userspace.193226 */194227int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,195228 int exception_index)196229{197197- exit_handle_fn exit_handler;198198-199230 if (ARM_SERROR_PENDING(exception_index)) {200231 u8 hsr_ec = ESR_ELx_EC(kvm_vcpu_get_hsr(vcpu));201232···253220 return 1;254221 case ARM_EXCEPTION_EL1_SERROR:255222 kvm_inject_vabt(vcpu);256256- return 1;257257- case ARM_EXCEPTION_TRAP:258258- /*259259- * See ARM ARM B1.14.1: "Hyp traps on instructions260260- * that fail their condition code check"261261- */262262- if (!kvm_condition_valid(vcpu)) {263263- kvm_skip_instr(vcpu, 
kvm_vcpu_trap_il_is32bit(vcpu));223223+ /* We may still need to return for single-step */224224+ if (!(*vcpu_cpsr(vcpu) & DBG_SPSR_SS)225225+ && kvm_arm_handle_step_debug(vcpu, run))226226+ return 0;227227+ else264228 return 1;265265- }266266-267267- exit_handler = kvm_get_exit_handler(vcpu);268268-269269- return exit_handler(vcpu, run);229229+ case ARM_EXCEPTION_TRAP:230230+ return handle_trap_exceptions(vcpu, run);270231 case ARM_EXCEPTION_HYP_GONE:271232 /*272233 * EL2 has been reset to the hyp-stub. This happens when a guest
···2222#include <asm/kvm_emulate.h>2323#include <asm/kvm_hyp.h>2424#include <asm/fpsimd.h>2525+#include <asm/debug-monitors.h>25262627static bool __hyp_text __fpsimd_enabled_nvhe(void)2728{···270269 return true;271270}272271273273-static void __hyp_text __skip_instr(struct kvm_vcpu *vcpu)272272+/* Skip an instruction which has been emulated. Returns true if273273+ * execution can continue or false if we need to exit hyp mode because274274+ * single-step was in effect.275275+ */276276+static bool __hyp_text __skip_instr(struct kvm_vcpu *vcpu)274277{275278 *vcpu_pc(vcpu) = read_sysreg_el2(elr);276279···287282 }288283289284 write_sysreg_el2(*vcpu_pc(vcpu), elr);285285+286286+ if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {287287+ vcpu->arch.fault.esr_el2 =288288+ (ESR_ELx_EC_SOFTSTP_LOW << ESR_ELx_EC_SHIFT) | 0x22;289289+ return false;290290+ } else {291291+ return true;292292+ }290293}291294292295int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)···355342 int ret = __vgic_v2_perform_cpuif_access(vcpu);356343357344 if (ret == 1) {358358- __skip_instr(vcpu);359359- goto again;345345+ if (__skip_instr(vcpu))346346+ goto again;347347+ else348348+ exit_code = ARM_EXCEPTION_TRAP;360349 }361350362351 if (ret == -1) {363363- /* Promote an illegal access to an SError */364364- __skip_instr(vcpu);352352+ /* Promote an illegal access to an353353+ * SError. If we would be returning354354+ * due to single-step clear the SS355355+ * bit so handle_exit knows what to356356+ * do after dealing with the error.357357+ */358358+ if (!__skip_instr(vcpu))359359+ *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;365360 exit_code = ARM_EXCEPTION_EL1_SERROR;366361 }367362···384363 int ret = __vgic_v3_perform_cpuif_access(vcpu);385364386365 if (ret == 1) {387387- __skip_instr(vcpu);388388- goto again;366366+ if (__skip_instr(vcpu))367367+ goto again;368368+ else369369+ exit_code = ARM_EXCEPTION_TRAP;389370 }390371391372 /* 0 falls through to be handled out of EL2 */
···3838#define smp_rmb() RISCV_FENCE(r,r)3939#define smp_wmb() RISCV_FENCE(w,w)40404141+/*4242+ * This is a very specific barrier: it's currently only used in two places in4343+ * the kernel, both in the scheduler. See include/linux/spinlock.h for the two4444+ * orderings it guarantees, but the "critical section is RCsc" guarantee4545+ * mandates a barrier on RISC-V. The sequence looks like:4646+ *4747+ * lr.aq lock4848+ * sc lock <= LOCKED4949+ * smp_mb__after_spinlock()5050+ * // critical section5151+ * lr lock5252+ * sc.rl lock <= UNLOCKED5353+ *5454+ * The AQ/RL pair provides a RCpc critical section, but there's not really any5555+ * way we can take advantage of that here because the ordering is only enforced5656+ * on that one lock. Thus, we're just doing a full fence.5757+ */5858+#define smp_mb__after_spinlock() RISCV_FENCE(rw,rw)5959+4160#include <asm-generic/barrier.h>42614362#endif /* __ASSEMBLY__ */
···11+# SPDX-License-Identifier: GPL-2.012# Makefile for kernel virtual machines on s39023#34# Copyright IBM Corp. 200844-#55-# This program is free software; you can redistribute it and/or modify66-# it under the terms of the GNU General Public License (version 2 only)77-# as published by the Free Software Foundation.8596KVM := ../../../virt/kvm107common-objs = $(KVM)/kvm_main.o $(KVM)/eventfd.o $(KVM)/async_pf.o $(KVM)/irqchip.o $(KVM)/vfio.o
+1-4
arch/s390/kvm/diag.c
···11+// SPDX-License-Identifier: GPL-2.012/*23 * handling diagnose instructions34 *45 * Copyright IBM Corp. 2008, 201155- *66- * This program is free software; you can redistribute it and/or modify77- * it under the terms of the GNU General Public License (version 2 only)88- * as published by the Free Software Foundation.96 *107 * Author(s): Carsten Otte <cotte@de.ibm.com>118 * Christian Borntraeger <borntraeger@de.ibm.com>
+1-4
arch/s390/kvm/gaccess.h
···11+/* SPDX-License-Identifier: GPL-2.0 */12/*23 * access guest memory34 *45 * Copyright IBM Corp. 2008, 201455- *66- * This program is free software; you can redistribute it and/or modify77- * it under the terms of the GNU General Public License (version 2 only)88- * as published by the Free Software Foundation.96 *107 * Author(s): Carsten Otte <cotte@de.ibm.com>118 */
+1-4
arch/s390/kvm/guestdbg.c
···11+// SPDX-License-Identifier: GPL-2.012/*23 * kvm guest debug support34 *45 * Copyright IBM Corp. 201455- *66- * This program is free software; you can redistribute it and/or modify77- * it under the terms of the GNU General Public License (version 2 only)88- * as published by the Free Software Foundation.96 *107 * Author(s): David Hildenbrand <dahi@linux.vnet.ibm.com>118 */
+1-4
arch/s390/kvm/intercept.c
···11+// SPDX-License-Identifier: GPL-2.012/*23 * in-kernel handling for sie intercepts34 *45 * Copyright IBM Corp. 2008, 201455- *66- * This program is free software; you can redistribute it and/or modify77- * it under the terms of the GNU General Public License (version 2 only)88- * as published by the Free Software Foundation.96 *107 * Author(s): Carsten Otte <cotte@de.ibm.com>118 * Christian Borntraeger <borntraeger@de.ibm.com>
+1-4
arch/s390/kvm/interrupt.c
···11+// SPDX-License-Identifier: GPL-2.012/*23 * handling kvm guest interrupts34 *45 * Copyright IBM Corp. 2008, 201555- *66- * This program is free software; you can redistribute it and/or modify77- * it under the terms of the GNU General Public License (version 2 only)88- * as published by the Free Software Foundation.96 *107 * Author(s): Carsten Otte <cotte@de.ibm.com>118 */
+1-4
arch/s390/kvm/irq.h
···11+/* SPDX-License-Identifier: GPL-2.0 */12/*23 * s390 irqchip routines34 *45 * Copyright IBM Corp. 201455- *66- * This program is free software; you can redistribute it and/or modify77- * it under the terms of the GNU General Public License (version 2 only)88- * as published by the Free Software Foundation.96 *107 * Author(s): Cornelia Huck <cornelia.huck@de.ibm.com>118 */
+5-6
arch/s390/kvm/kvm-s390.c
···11+// SPDX-License-Identifier: GPL-2.012/*22- * hosting zSeries kernel virtual machines33+ * hosting IBM Z kernel virtual machines (s390x)34 *44- * Copyright IBM Corp. 2008, 200955- *66- * This program is free software; you can redistribute it and/or modify77- * it under the terms of the GNU General Public License (version 2 only)88- * as published by the Free Software Foundation.55+ * Copyright IBM Corp. 2008, 201796 *107 * Author(s): Carsten Otte <cotte@de.ibm.com>118 * Christian Borntraeger <borntraeger@de.ibm.com>···38053808 r = -EINVAL;38063809 break;38073810 }38113811+ /* do not use irq_state.flags, it will break old QEMUs */38083812 r = kvm_s390_set_irq_state(vcpu,38093813 (void __user *) irq_state.buf,38103814 irq_state.len);···38213823 r = -EINVAL;38223824 break;38233825 }38263826+ /* do not use irq_state.flags, it will break old QEMUs */38243827 r = kvm_s390_get_irq_state(vcpu,38253828 (__u8 __user *) irq_state.buf,38263829 irq_state.len);
+1-4
arch/s390/kvm/kvm-s390.h
···11+/* SPDX-License-Identifier: GPL-2.0 */12/*23 * definition for kvm on s39034 *45 * Copyright IBM Corp. 2008, 200955- *66- * This program is free software; you can redistribute it and/or modify77- * it under the terms of the GNU General Public License (version 2 only)88- * as published by the Free Software Foundation.96 *107 * Author(s): Carsten Otte <cotte@de.ibm.com>118 * Christian Borntraeger <borntraeger@de.ibm.com>
+10-6
arch/s390/kvm/priv.c
···11+// SPDX-License-Identifier: GPL-2.012/*23 * handling privileged instructions34 *45 * Copyright IBM Corp. 2008, 201355- *66- * This program is free software; you can redistribute it and/or modify77- * it under the terms of the GNU General Public License (version 2 only)88- * as published by the Free Software Foundation.96 *107 * Author(s): Carsten Otte <cotte@de.ibm.com>118 * Christian Borntraeger <borntraeger@de.ibm.com>···232235 VCPU_EVENT(vcpu, 4, "%s", "retrying storage key operation");233236 return -EAGAIN;234237 }235235- if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)236236- return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);237238 return 0;238239}239240···241246 unsigned char key;242247 int reg1, reg2;243248 int rc;249249+250250+ if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)251251+ return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);244252245253 rc = try_handle_skey(vcpu);246254 if (rc)···273275 unsigned long addr;274276 int reg1, reg2;275277 int rc;278278+279279+ if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)280280+ return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);276281277282 rc = try_handle_skey(vcpu);278283 if (rc)···311310 unsigned char key, oldkey;312311 int reg1, reg2;313312 int rc;313313+314314+ if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)315315+ return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);314316315317 rc = try_handle_skey(vcpu);316318 if (rc)
+1-4
arch/s390/kvm/sigp.c
···11+// SPDX-License-Identifier: GPL-2.012/*23 * handling interprocessor communication34 *45 * Copyright IBM Corp. 2008, 201355- *66- * This program is free software; you can redistribute it and/or modify77- * it under the terms of the GNU General Public License (version 2 only)88- * as published by the Free Software Foundation.96 *107 * Author(s): Carsten Otte <cotte@de.ibm.com>118 * Christian Borntraeger <borntraeger@de.ibm.com>
+1-4
arch/s390/kvm/vsie.c
···11+// SPDX-License-Identifier: GPL-2.012/*23 * kvm nested virtualization support for s390x34 *45 * Copyright IBM Corp. 201655- *66- * This program is free software; you can redistribute it and/or modify77- * it under the terms of the GNU General Public License (version 2 only)88- * as published by the Free Software Foundation.96 *107 * Author(s): David Hildenbrand <dahi@linux.vnet.ibm.com>118 */
+2-2
arch/sparc/mm/gup.c
···7575 if (!(pmd_val(pmd) & _PAGE_VALID))7676 return 0;77777878- if (!pmd_access_permitted(pmd, write))7878+ if (write && !pmd_write(pmd))7979 return 0;80808181 refs = 0;···114114 if (!(pud_val(pud) & _PAGE_VALID))115115 return 0;116116117117- if (!pud_access_permitted(pud, write))117117+ if (write && !pud_write(pud))118118 return 0;119119120120 refs = 0;
···400400config UNWINDER_GUESS401401 bool "Guess unwinder"402402 depends on EXPERT403403+ depends on !STACKDEPOT403404 ---help---404405 This option enables the "guess" unwinder for unwinding kernel stack405406 traces. It scans the stack and reports every kernel text address it
···
	leaq	boot_stack_end(%rbx), %rsp

#ifdef CONFIG_X86_5LEVEL
-	/* Check if 5-level paging has already been enabled */
-	movq	%cr4, %rax
-	testl	$X86_CR4_LA57, %eax
-	jnz	lvl5
+	/*
+	 * Check if we need to enable 5-level paging.
+	 * RSI holds real mode data and needs to be preserved across
+	 * a function call.
+	 */
+	pushq	%rsi
+	call	l5_paging_required
+	popq	%rsi
+
+	/* If l5_paging_required() returned zero, we're done here. */
+	cmpq	$0, %rax
+	je	lvl5

	/*
	 * At this point we are in long mode with 4-level paging enabled,
+16
arch/x86/boot/compressed/misc.c
···169169 }170170}171171172172+static bool l5_supported(void)173173+{174174+ /* Check if leaf 7 is supported. */175175+ if (native_cpuid_eax(0) < 7)176176+ return 0;177177+178178+ /* Check if la57 is supported. */179179+ return native_cpuid_ecx(7) & (1 << (X86_FEATURE_LA57 & 31));180180+}181181+172182#if CONFIG_X86_NEED_RELOCS173183static void handle_relocations(void *output, unsigned long output_len,174184 unsigned long virt_addr)···371361372362 console_init();373363 debug_putstr("early console in extract_kernel\n");364364+365365+ if (IS_ENABLED(CONFIG_X86_5LEVEL) && !l5_supported()) {366366+ error("This linux kernel as configured requires 5-level paging\n"367367+ "This CPU does not support the required 'cr4.la57' feature\n"368368+ "Unable to boot - please use a kernel appropriate for your CPU\n");369369+ }374370375371 free_mem_ptr = heap; /* Heap */376372 free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
+28
arch/x86/boot/compressed/pgtable_64.c
···11+#include <asm/processor.h>22+33+/*44+ * __force_order is used by special_insns.h asm code to force instruction55+ * serialization.66+ *77+ * It is not referenced from the code, but GCC < 5 with -fPIE would fail88+ * due to an undefined symbol. Define it to make these ancient GCCs work.99+ */1010+unsigned long __force_order;1111+1212+int l5_paging_required(void)1313+{1414+ /* Check if leaf 7 is supported. */1515+1616+ if (native_cpuid_eax(0) < 7)1717+ return 0;1818+1919+ /* Check if la57 is supported. */2020+ if (!(native_cpuid_ecx(7) & (1 << (X86_FEATURE_LA57 & 31))))2121+ return 0;2222+2323+ /* Check if 5-level paging has already been enabled. */2424+ if (native_read_cr4() & X86_CR4_LA57)2525+ return 0;2626+2727+ return 1;2828+}
···536536 struct kvm_mmu_memory_cache mmu_page_cache;537537 struct kvm_mmu_memory_cache mmu_page_header_cache;538538539539+ /*540540+ * QEMU userspace and the guest each have their own FPU state.541541+ * In vcpu_run, we switch between the user and guest FPU contexts.542542+ * While running a VCPU, the VCPU thread will have the guest FPU543543+ * context.544544+ *545545+ * Note that while the PKRU state lives inside the fpu registers,546546+ * it is switched out separately at VMENTER and VMEXIT time. The547547+ * "guest_fpu" state here contains the guest FPU context, with the548548+ * host PRKU bits.549549+ */550550+ struct fpu user_fpu;539551 struct fpu guest_fpu;552552+540553 u64 xcr0;541554 u64 guest_supported_xcr0;542555 u32 guest_xstate_size;···1447143414481435#define put_smstate(type, buf, offset, val) \14491436 *(type *)((buf) + (offset) - 0x7e00) = val14371437+14381438+void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,14391439+ unsigned long start, unsigned long end);1450144014511441#endif /* _ASM_X86_KVM_HOST_H */
+7-1
arch/x86/include/asm/suspend_32.h
···12121313/* image of the saved processor state */1414struct saved_context {1515- u16 es, fs, gs, ss;1515+ /*1616+ * On x86_32, all segment registers, with the possible exception of1717+ * gs, are saved at kernel entry in pt_regs.1818+ */1919+#ifdef CONFIG_X86_32_LAZY_GS2020+ u16 gs;2121+#endif1622 unsigned long cr0, cr2, cr3, cr4;1723 u64 misc_enable;1824 bool misc_enable_saved;
+15-4
arch/x86/include/asm/suspend_64.h
···2020 */2121struct saved_context {2222 struct pt_regs regs;2323- u16 ds, es, fs, gs, ss;2424- unsigned long gs_base, gs_kernel_base, fs_base;2323+2424+ /*2525+ * User CS and SS are saved in current_pt_regs(). The rest of the2626+ * segment selectors need to be saved and restored here.2727+ */2828+ u16 ds, es, fs, gs;2929+3030+ /*3131+ * Usermode FSBASE and GSBASE may not match the fs and gs selectors,3232+ * so we save them separately. We save the kernelmode GSBASE to3333+ * restore percpu access after resume.3434+ */3535+ unsigned long kernelmode_gs_base, usermode_gs_base, fs_base;3636+2537 unsigned long cr0, cr2, cr3, cr4, cr8;2638 u64 misc_enable;2739 bool misc_enable_saved;···4230 u16 gdt_pad; /* Unused */4331 struct desc_ptr gdt_desc;4432 u16 idt_pad;4545- u16 idt_limit;4646- unsigned long idt_base;3333+ struct desc_ptr idt;4734 u16 ldt;4835 u16 tss;4936 unsigned long tr;
+2-2
arch/x86/kernel/smpboot.c
···106106static unsigned int logical_packages __read_mostly;107107108108/* Maximum number of SMT threads on any online core */109109-int __max_smt_threads __read_mostly;109109+int __read_mostly __max_smt_threads = 1;110110111111/* Flag to indicate if a complete sched domain rebuild is required */112112bool x86_topology_update;···13041304 * Today neither Intel nor AMD support heterogenous systems so13051305 * extrapolate the boot cpu's data to all packages.13061306 */13071307- ncpus = cpu_data(0).booted_cores * smp_num_siblings;13071307+ ncpus = cpu_data(0).booted_cores * topology_max_smt_threads();13081308 __max_logical_packages = DIV_ROUND_UP(nr_cpu_ids, ncpus);13091309 pr_info("Max logical packages: %u\n", __max_logical_packages);13101310
···67516751 goto out;67526752 }6753675367546754- vmx_io_bitmap_b = (unsigned long *)__get_free_page(GFP_KERNEL);67556754 memset(vmx_vmread_bitmap, 0xff, PAGE_SIZE);67566755 memset(vmx_vmwrite_bitmap, 0xff, PAGE_SIZE);6757675667586758- /*67596759- * Allow direct access to the PC debug port (it is often used for I/O67606760- * delays, but the vmexits simply slow things down).67616761- */67626757 memset(vmx_io_bitmap_a, 0xff, PAGE_SIZE);67636763- clear_bit(0x80, vmx_io_bitmap_a);6764675867656759 memset(vmx_io_bitmap_b, 0xff, PAGE_SIZE);67666760
+31 -32
arch/x86/kvm/x86.c
···
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 	pagefault_enable();
 	kvm_x86_ops->vcpu_put(vcpu);
-	kvm_put_guest_fpu(vcpu);
 	vcpu->arch.last_host_tsc = rdtsc();
 }
···
 	emul_to_vcpu(ctxt)->arch.halt_request = 1;
 }
 
-static void emulator_get_fpu(struct x86_emulate_ctxt *ctxt)
-{
-	preempt_disable();
-	kvm_load_guest_fpu(emul_to_vcpu(ctxt));
-}
-
-static void emulator_put_fpu(struct x86_emulate_ctxt *ctxt)
-{
-	preempt_enable();
-}
-
 static int emulator_intercept(struct x86_emulate_ctxt *ctxt,
 			      struct x86_instruction_info *info,
 			      enum x86_intercept_stage stage)
···
 	.halt = emulator_halt,
 	.wbinvd = emulator_wbinvd,
 	.fix_hypercall = emulator_fix_hypercall,
-	.get_fpu = emulator_get_fpu,
-	.put_fpu = emulator_put_fpu,
 	.intercept = emulator_intercept,
 	.get_cpuid = emulator_get_cpuid,
 	.set_nmi_mask = emulator_set_nmi_mask,
···
 	kvm_x86_ops->tlb_flush(vcpu);
 }
 
+void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+		unsigned long start, unsigned long end)
+{
+	unsigned long apic_address;
+
+	/*
+	 * The physical address of apic access page is stored in the VMCS.
+	 * Update it when it becomes invalid.
+	 */
+	apic_address = gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
+	if (start <= apic_address && apic_address < end)
+		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
+}
+
 void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
 {
 	struct page *page = NULL;
···
 	preempt_disable();
 
 	kvm_x86_ops->prepare_guest_switch(vcpu);
-	kvm_load_guest_fpu(vcpu);
 
 	/*
 	 * Disable IRQs before setting IN_GUEST_MODE.  Posted interrupt
···
 		}
 	}
 
+	kvm_load_guest_fpu(vcpu);
+
 	if (unlikely(vcpu->arch.complete_userspace_io)) {
 		int (*cui)(struct kvm_vcpu *) = vcpu->arch.complete_userspace_io;
 		vcpu->arch.complete_userspace_io = NULL;
 		r = cui(vcpu);
 		if (r <= 0)
-			goto out;
+			goto out_fpu;
 	} else
 		WARN_ON(vcpu->arch.pio.count || vcpu->mmio_needed);
···
 	else
 		r = vcpu_run(vcpu);
 
+out_fpu:
+	kvm_put_guest_fpu(vcpu);
 out:
 	post_kvm_run_save(vcpu);
 	kvm_sigset_deactivate(vcpu);
···
 	vcpu->arch.cr0 |= X86_CR0_ET;
 }
 
+/* Swap (qemu) user FPU context for the guest FPU context. */
 void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
 {
-	if (vcpu->guest_fpu_loaded)
-		return;
-
-	/*
-	 * Restore all possible states in the guest,
-	 * and assume host would use all available bits.
-	 * Guest xcr0 would be loaded later.
-	 */
-	vcpu->guest_fpu_loaded = 1;
-	__kernel_fpu_begin();
+	preempt_disable();
+	copy_fpregs_to_fpstate(&vcpu->arch.user_fpu);
 	/* PKRU is separately restored in kvm_x86_ops->run. */
 	__copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state,
 				~XFEATURE_MASK_PKRU);
+	preempt_enable();
 	trace_kvm_fpu(1);
 }
 
+/* When vcpu_run ends, restore user space FPU context. */
 void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
 {
-	if (!vcpu->guest_fpu_loaded)
-		return;
-
-	vcpu->guest_fpu_loaded = 0;
+	preempt_disable();
 	copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu);
-	__kernel_fpu_end();
+	copy_kernel_to_fpregs(&vcpu->arch.user_fpu.state);
+	preempt_enable();
 	++vcpu->stat.fpu_reload;
 	trace_kvm_fpu(0);
 }
···
 	 * To avoid have the INIT path from kvm_apic_has_events() that be
 	 * called with loaded FPU and does not let userspace fix the state.
 	 */
-	kvm_put_guest_fpu(vcpu);
+	if (init_event)
+		kvm_put_guest_fpu(vcpu);
 	mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu.state.xsave,
 			XFEATURE_MASK_BNDREGS);
 	if (mpx_state_buffer)
···
 			XFEATURE_MASK_BNDCSR);
 	if (mpx_state_buffer)
 		memset(mpx_state_buffer, 0, sizeof(struct mpx_bndcsr));
+	if (init_event)
+		kvm_load_guest_fpu(vcpu);
 	}
 
 	if (!init_event) {
···
 		return;
 	}
 
+	mmiotrace_iounmap(addr);
+
 	addr = (volatile void __iomem *)
 		(PAGE_MASK & (unsigned long __force)addr);
-
-	mmiotrace_iounmap(addr);
 
 	/* Use the vm area unlocked, assuming the caller
 	   ensures there isn't another iounmap for the same address
+7 -5
arch/x86/mm/kmmio.c
···
 	unsigned long flags;
 	int ret = 0;
 	unsigned long size = 0;
+	unsigned long addr = p->addr & PAGE_MASK;
 	const unsigned long size_lim = p->len + (p->addr & ~PAGE_MASK);
 	unsigned int l;
 	pte_t *pte;
 
 	spin_lock_irqsave(&kmmio_lock, flags);
-	if (get_kmmio_probe(p->addr)) {
+	if (get_kmmio_probe(addr)) {
 		ret = -EEXIST;
 		goto out;
 	}
 
-	pte = lookup_address(p->addr, &l);
+	pte = lookup_address(addr, &l);
 	if (!pte) {
 		ret = -EINVAL;
 		goto out;
···
 	kmmio_count++;
 	list_add_rcu(&p->list, &kmmio_probes);
 	while (size < size_lim) {
-		if (add_kmmio_fault_page(p->addr + size))
+		if (add_kmmio_fault_page(addr + size))
 			pr_err("Unable to set page fault.\n");
 		size += page_level_size(l);
 	}
···
 {
 	unsigned long flags;
 	unsigned long size = 0;
+	unsigned long addr = p->addr & PAGE_MASK;
 	const unsigned long size_lim = p->len + (p->addr & ~PAGE_MASK);
 	struct kmmio_fault_page *release_list = NULL;
 	struct kmmio_delayed_release *drelease;
 	unsigned int l;
 	pte_t *pte;
 
-	pte = lookup_address(p->addr, &l);
+	pte = lookup_address(addr, &l);
 	if (!pte)
 		return;
 
 	spin_lock_irqsave(&kmmio_lock, flags);
 	while (size < size_lim) {
-		release_kmmio_fault_page(p->addr + size, &release_list);
+		release_kmmio_fault_page(addr + size, &release_list);
 		size += page_level_size(l);
 	}
 	list_del_rcu(&p->list);
+21 -6
arch/x86/pci/fixup.c
···
 	unsigned i;
 	u32 base, limit, high;
 	struct resource *res, *conflict;
+	struct pci_dev *other;
+
+	/* Check that we are the only device of that type */
+	other = pci_get_device(dev->vendor, dev->device, NULL);
+	if (other != dev ||
+	    (other = pci_get_device(dev->vendor, dev->device, other))) {
+		/* This is a multi-socket system, don't touch it for now */
+		pci_dev_put(other);
+		return;
+	}
 
 	for (i = 0; i < 8; i++) {
 		pci_read_config_dword(dev, AMD_141b_MMIO_BASE(i), &base);
···
 	res->end = 0xfd00000000ull - 1;
 
 	/* Just grab the free area behind system memory for this */
-	while ((conflict = request_resource_conflict(&iomem_resource, res)))
+	while ((conflict = request_resource_conflict(&iomem_resource, res))) {
+		if (conflict->end >= res->end) {
+			kfree(res);
+			return;
+		}
 		res->start = conflict->end + 1;
+	}
 
 	dev_info(&dev->dev, "adding root bus resource %pR\n", res);
···
 
 	pci_bus_add_resource(dev->bus, res, 0);
 }
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1401, pci_amd_enable_64bit_bar);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x141b, pci_amd_enable_64bit_bar);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1571, pci_amd_enable_64bit_bar);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x15b1, pci_amd_enable_64bit_bar);
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1601, pci_amd_enable_64bit_bar);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1401, pci_amd_enable_64bit_bar);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x141b, pci_amd_enable_64bit_bar);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1571, pci_amd_enable_64bit_bar);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x15b1, pci_amd_enable_64bit_bar);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1601, pci_amd_enable_64bit_bar);
 
 #endif
+46 -55
arch/x86/power/cpu.c
···
 	/*
 	 * descriptor tables
 	 */
-#ifdef CONFIG_X86_32
 	store_idt(&ctxt->idt);
-#else
-/* CONFIG_X86_64 */
-	store_idt((struct desc_ptr *)&ctxt->idt_limit);
-#endif
+
 	/*
 	 * We save it here, but restore it only in the hibernate case.
 	 * For ACPI S3 resume, this is loaded via 'early_gdt_desc' in 64-bit
···
 	/*
 	 * segment registers
 	 */
-#ifdef CONFIG_X86_32
-	savesegment(es, ctxt->es);
-	savesegment(fs, ctxt->fs);
+#ifdef CONFIG_X86_32_LAZY_GS
 	savesegment(gs, ctxt->gs);
-	savesegment(ss, ctxt->ss);
-#else
-/* CONFIG_X86_64 */
-	asm volatile ("movw %%ds, %0" : "=m" (ctxt->ds));
-	asm volatile ("movw %%es, %0" : "=m" (ctxt->es));
-	asm volatile ("movw %%fs, %0" : "=m" (ctxt->fs));
-	asm volatile ("movw %%gs, %0" : "=m" (ctxt->gs));
-	asm volatile ("movw %%ss, %0" : "=m" (ctxt->ss));
+#endif
+#ifdef CONFIG_X86_64
+	savesegment(gs, ctxt->gs);
+	savesegment(fs, ctxt->fs);
+	savesegment(ds, ctxt->ds);
+	savesegment(es, ctxt->es);
 
 	rdmsrl(MSR_FS_BASE, ctxt->fs_base);
-	rdmsrl(MSR_GS_BASE, ctxt->gs_base);
-	rdmsrl(MSR_KERNEL_GS_BASE, ctxt->gs_kernel_base);
+	rdmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base);
+	rdmsrl(MSR_KERNEL_GS_BASE, ctxt->usermode_gs_base);
 	mtrr_save_fixed_ranges(NULL);
 
 	rdmsrl(MSR_EFER, ctxt->efer);
···
 	write_gdt_entry(desc, GDT_ENTRY_TSS, &tss, DESC_TSS);
 
 	syscall_init();		/* This sets MSR_*STAR and related */
+#else
+	if (boot_cpu_has(X86_FEATURE_SEP))
+		enable_sep_cpu();
 #endif
 	load_TR_desc();		/* This does ltr */
 	load_mm_ldt(current->active_mm);	/* This does lldt */
···
 }
 
 /**
- *	__restore_processor_state - restore the contents of CPU registers saved
- *		by __save_processor_state()
- *	@ctxt - structure to load the registers contents from
+ *	__restore_processor_state - restore the contents of CPU registers saved
+ *		by __save_processor_state()
+ *	@ctxt - structure to load the registers contents from
+ *
+ *	The asm code that gets us here will have restored a usable GDT, although
+ *	it will be pointing to the wrong alias.
 */
 static void notrace __restore_processor_state(struct saved_context *ctxt)
 {
···
 	write_cr2(ctxt->cr2);
 	write_cr0(ctxt->cr0);
 
-	/*
-	 * now restore the descriptor tables to their proper values
-	 * ltr is done i fix_processor_context().
-	 */
-#ifdef CONFIG_X86_32
+	/* Restore the IDT. */
 	load_idt(&ctxt->idt);
-#else
-/* CONFIG_X86_64 */
-	load_idt((const struct desc_ptr *)&ctxt->idt_limit);
-#endif
 
-#ifdef CONFIG_X86_64
 	/*
-	 * We need GSBASE restored before percpu access can work.
-	 * percpu access can happen in exception handlers or in complicated
-	 * helpers like load_gs_index().
+	 * Just in case the asm code got us here with the SS, DS, or ES
+	 * out of sync with the GDT, update them.
 	 */
-	wrmsrl(MSR_GS_BASE, ctxt->gs_base);
+	loadsegment(ss, __KERNEL_DS);
+	loadsegment(ds, __USER_DS);
+	loadsegment(es, __USER_DS);
+
+	/*
+	 * Restore percpu access.  Percpu access can happen in exception
+	 * handlers or in complicated helpers like load_gs_index().
+	 */
+#ifdef CONFIG_X86_64
+	wrmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base);
+#else
+	loadsegment(fs, __KERNEL_PERCPU);
+	loadsegment(gs, __KERNEL_STACK_CANARY);
 #endif
 
+	/* Restore the TSS, RO GDT, LDT, and usermode-relevant MSRs. */
 	fix_processor_context();
 
 	/*
-	 * Restore segment registers.  This happens after restoring the GDT
-	 * and LDT, which happen in fix_processor_context().
+	 * Now that we have descriptor tables fully restored and working
+	 * exception handling, restore the usermode segments.
 	 */
-#ifdef CONFIG_X86_32
+#ifdef CONFIG_X86_64
+	loadsegment(ds, ctxt->es);
 	loadsegment(es, ctxt->es);
 	loadsegment(fs, ctxt->fs);
-	loadsegment(gs, ctxt->gs);
-	loadsegment(ss, ctxt->ss);
-
-	/*
-	 * sysenter MSRs
-	 */
-	if (boot_cpu_has(X86_FEATURE_SEP))
-		enable_sep_cpu();
-#else
-/* CONFIG_X86_64 */
-	asm volatile ("movw %0, %%ds" :: "r" (ctxt->ds));
-	asm volatile ("movw %0, %%es" :: "r" (ctxt->es));
-	asm volatile ("movw %0, %%fs" :: "r" (ctxt->fs));
 	load_gs_index(ctxt->gs);
-	asm volatile ("movw %0, %%ss" :: "r" (ctxt->ss));
 
 	/*
-	 * Restore FSBASE and user GSBASE after reloading the respective
-	 * segment selectors.
+	 * Restore FSBASE and GSBASE after restoring the selectors, since
+	 * restoring the selectors clobbers the bases.  Keep in mind
+	 * that MSR_KERNEL_GS_BASE is horribly misnamed.
 	 */
 	wrmsrl(MSR_FS_BASE, ctxt->fs_base);
-	wrmsrl(MSR_KERNEL_GS_BASE, ctxt->gs_kernel_base);
+	wrmsrl(MSR_KERNEL_GS_BASE, ctxt->usermode_gs_base);
+#elif defined(CONFIG_X86_32_LAZY_GS)
+	loadsegment(gs, ctxt->gs);
 #endif
 
 	do_fpu_end();
+1 -1
arch/x86/xen/apic.c
···
 		return 0;
 
 	if (reg == APIC_LVR)
-		return 0x10;
+		return 0x14;
 #ifdef CONFIG_X86_32
 	if (reg == APIC_LDR)
 		return SET_APIC_LOGICAL_ID(1UL << smp_processor_id());
+7 -6
crypto/af_alg.c
···
 	}
 
 	tsgl = areq->tsgl;
-	for_each_sg(tsgl, sg, areq->tsgl_entries, i) {
-		if (!sg_page(sg))
-			continue;
-		put_page(sg_page(sg));
-	}
+	if (tsgl) {
+		for_each_sg(tsgl, sg, areq->tsgl_entries, i) {
+			if (!sg_page(sg))
+				continue;
+			put_page(sg_page(sg));
+		}
 
-	if (areq->tsgl && areq->tsgl_entries)
 		sock_kfree_s(sk, tsgl, areq->tsgl_entries * sizeof(*tsgl));
+	}
 }
 EXPORT_SYMBOL_GPL(af_alg_free_areq_sgls);
···
 	/* Self-signed certificates form roots of their own, and if we
 	 * don't know them, then we can't accept them.
 	 */
-	if (x509->next == x509) {
+	if (x509->signer == x509) {
 		kleave(" = -ENOKEY [unknown self-signed]");
 		return -ENOKEY;
 	}
+3 -6
crypto/asymmetric_keys/pkcs7_verify.c
···
 	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
 
 	/* Digest the message [RFC2315 9.3] */
-	ret = crypto_shash_init(desc);
-	if (ret < 0)
-		goto error;
-	ret = crypto_shash_finup(desc, pkcs7->data, pkcs7->data_len,
-				 sig->digest);
+	ret = crypto_shash_digest(desc, pkcs7->data, pkcs7->data_len,
+				  sig->digest);
 	if (ret < 0)
 		goto error;
 	pr_devel("MsgDigest = [%*ph]\n", 8, sig->digest);
···
 		pr_devel("Sig %u: Found cert serial match X.509[%u]\n",
 			 sinfo->index, certix);
 
-		if (x509->pub->pkey_algo != sinfo->sig->pkey_algo) {
+		if (strcmp(x509->pub->pkey_algo, sinfo->sig->pkey_algo) != 0) {
 			pr_warn("Sig %u: X.509 algo and PKCS#7 sig algo don't match\n",
 				sinfo->index);
 			continue;
+5 -2
crypto/asymmetric_keys/public_key.c
···
 	char alg_name_buf[CRYPTO_MAX_ALG_NAME];
 	void *output;
 	unsigned int outlen;
-	int ret = -ENOMEM;
+	int ret;
 
 	pr_devel("==>%s()\n", __func__);
···
 	if (IS_ERR(tfm))
 		return PTR_ERR(tfm);
 
+	ret = -ENOMEM;
 	req = akcipher_request_alloc(tfm, GFP_KERNEL);
 	if (!req)
 		goto error_free_tfm;
···
 	 * signature and returns that to us.
 	 */
 	ret = crypto_wait_req(crypto_akcipher_verify(req), &cwait);
-	if (ret < 0)
+	if (ret)
 		goto out_free_output;
 
 	/* Do the actual verification step. */
···
 error_free_tfm:
 	crypto_free_akcipher(tfm);
 	pr_devel("<==%s() = %d\n", __func__, ret);
+	if (WARN_ON_ONCE(ret > 0))
+		ret = -EINVAL;
 	return ret;
 }
 EXPORT_SYMBOL_GPL(public_key_verify_signature);
+2
crypto/asymmetric_keys/x509_cert_parser.c
···
 	ctx->cert->pub->pkey_algo = "rsa";
 
 	/* Discard the BIT STRING metadata */
+	if (vlen < 1 || *(const u8 *)value != 0)
+		return -EBADMSG;
 	ctx->key = value + 1;
 	ctx->key_size = vlen - 1;
 	return 0;
+2 -6
crypto/asymmetric_keys/x509_public_key.c
···
 	desc->tfm = tfm;
 	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
 
-	ret = crypto_shash_init(desc);
-	if (ret < 0)
-		goto error_2;
-	might_sleep();
-	ret = crypto_shash_finup(desc, cert->tbs, cert->tbs_size, sig->digest);
+	ret = crypto_shash_digest(desc, cert->tbs, cert->tbs_size, sig->digest);
 	if (ret < 0)
 		goto error_2;
 
···
 	}
 
 	ret = -EKEYREJECTED;
-	if (cert->pub->pkey_algo != cert->sig->pkey_algo)
+	if (strcmp(cert->pub->pkey_algo, cert->sig->pkey_algo) != 0)
 		goto out;
 
 	ret = public_key_verify_signature(cert->pub, cert->sig);
+5 -1
crypto/hmac.c
···
 	salg = shash_attr_alg(tb[1], 0, 0);
 	if (IS_ERR(salg))
 		return PTR_ERR(salg);
+	alg = &salg->base;
 
+	/* The underlying hash algorithm must be unkeyed */
 	err = -EINVAL;
+	if (crypto_shash_alg_has_setkey(salg))
+		goto out_put_alg;
+
 	ds = salg->digestsize;
 	ss = salg->statesize;
-	alg = &salg->base;
 	if (ds > alg->cra_blocksize ||
 	    ss < alg->cra_blocksize)
 		goto out_put_alg;
+1 -1
crypto/rsa_helper.c
···
 		return -EINVAL;
 
 	if (fips_enabled) {
-		while (!*ptr && n_sz) {
+		while (n_sz && !*ptr) {
 			ptr++;
 			n_sz--;
 		}
···
 
 static const struct crypto_type crypto_shash_type;
 
-static int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
-			   unsigned int keylen)
+int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
+		    unsigned int keylen)
 {
 	return -ENOSYS;
 }
+EXPORT_SYMBOL_GPL(shash_no_setkey);
 
 static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
 				  unsigned int keylen)
+1 -1
drivers/acpi/device_pm.c
···
 	 * skip all of the subsequent "thaw" callbacks for the device.
 	 */
 	if (dev_pm_smart_suspend_and_suspended(dev)) {
-		dev->power.direct_complete = true;
+		dev_pm_skip_next_resume_phases(dev);
 		return 0;
 	}
 
+3 -3
drivers/ata/ahci_mtk.c
···
 /*
- * MeidaTek AHCI SATA driver
+ * MediaTek AHCI SATA driver
 *
 * Copyright (c) 2017 MediaTek Inc.
 * Author: Ryder Lee <ryder.lee@mediatek.com>
···
 #include <linux/reset.h>
 #include "ahci.h"
 
-#define DRV_NAME		"ahci"
+#define DRV_NAME		"ahci-mtk"
 
 #define SYS_CFG			0x14
 #define SYS_CFG_SATA_MSK	GENMASK(31, 30)
···
 };
 module_platform_driver(mtk_ahci_driver);
 
-MODULE_DESCRIPTION("MeidaTek SATA AHCI Driver");
+MODULE_DESCRIPTION("MediaTek SATA AHCI Driver");
 MODULE_LICENSE("GPL v2");
···
 	bit = fls(mask) - 1;
 	mask &= ~(1 << bit);
 
-	/* Mask off all speeds higher than or equal to the current
-	 * one.  Force 1.5Gbps if current SPD is not available.
+	/*
+	 * Mask off all speeds higher than or equal to the current one.  At
+	 * this point, if current SPD is not available and we previously
+	 * recorded the link speed from SStatus, the driver has already
+	 * masked off the highest bit so mask should already be 1 or 0.
+	 * Otherwise, we should not force 1.5Gbps on a link where we have
+	 * not previously recorded speed from SStatus.  Just return in this
+	 * case.
 	 */
 	if (spd > 1)
 		mask &= (1 << (spd - 1)) - 1;
 	else
-		mask &= 1;
+		return -EINVAL;
 
 	/* were we already at the bottom? */
 	if (!mask)
+6 -10
drivers/ata/pata_pdc2027x.c
···
 * is issued to the device. However, if the controller clock is 133MHz,
 * the following tables must be used.
 */
-static struct pdc2027x_pio_timing {
+static const struct pdc2027x_pio_timing {
 	u8 value0, value1, value2;
 } pdc2027x_pio_timing_tbl[] = {
 	{ 0xfb, 0x2b, 0xac }, /* PIO mode 0 */
···
 	{ 0x23, 0x09, 0x25 }, /* PIO mode 4, IORDY on, Prefetch off */
 };
 
-static struct pdc2027x_mdma_timing {
+static const struct pdc2027x_mdma_timing {
 	u8 value0, value1;
 } pdc2027x_mdma_timing_tbl[] = {
 	{ 0xdf, 0x5f }, /* MDMA mode 0 */
···
 	{ 0x69, 0x25 }, /* MDMA mode 2 */
 };
 
-static struct pdc2027x_udma_timing {
+static const struct pdc2027x_udma_timing {
 	u8 value0, value1, value2;
 } pdc2027x_udma_timing_tbl[] = {
 	{ 0x4a, 0x0f, 0xd5 }, /* UDMA mode 0 */
···
 * @host: target ATA host
 * @board_idx: board identifier
 */
-static int pdc_hardware_init(struct ata_host *host, unsigned int board_idx)
+static void pdc_hardware_init(struct ata_host *host, unsigned int board_idx)
 {
 	long pll_clock;
 
···
 
 	/* Adjust PLL control register */
 	pdc_adjust_pll(host, pll_clock, board_idx);
-
-	return 0;
 }
 
 /**
···
 	//pci_enable_intx(pdev);
 
 	/* initialize adapter */
-	if (pdc_hardware_init(host, board_idx) != 0)
-		return -EIO;
+	pdc_hardware_init(host, board_idx);
 
 	pci_set_master(pdev);
 	return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,
···
 	else
 		board_idx = PDC_UDMA_133;
 
-	if (pdc_hardware_init(host, board_idx))
-		return -EIO;
+	pdc_hardware_init(host, board_idx);
 
 	ata_host_resume(host);
 	return 0;
+15
drivers/base/power/main.c
···
 /*------------------------- Resume routines -------------------------*/
 
 /**
+ * dev_pm_skip_next_resume_phases - Skip next system resume phases for device.
+ * @dev: Target device.
+ *
+ * Make the core skip the "early resume" and "resume" phases for @dev.
+ *
+ * This function can be called by middle-layer code during the "noirq" phase of
+ * system resume if necessary, but not by device drivers.
+ */
+void dev_pm_skip_next_resume_phases(struct device *dev)
+{
+	dev->power.is_late_suspended = false;
+	dev->power.is_suspended = false;
+}
+
+/**
 * device_resume_noirq - Execute a "noirq resume" callback for given device.
 * @dev: Device to handle.
 * @state: PM transition of the system being carried out.
···
 	ida_init(&dev->mode_config.connector_ida);
 	spin_lock_init(&dev->mode_config.connector_list_lock);
 
+	init_llist_head(&dev->mode_config.connector_free_list);
+	INIT_WORK(&dev->mode_config.connector_free_work, drm_connector_free_work_fn);
+
 	drm_mode_create_standard_properties(dev);
 
 	/* Just to be sure */
···
 	}
 	drm_connector_list_iter_end(&conn_iter);
 	/* connector_iter drops references in a work item. */
-	flush_scheduled_work();
+	flush_work(&dev->mode_config.connector_free_work);
 	if (WARN_ON(!list_empty(&dev->mode_config.connector_list))) {
 		drm_connector_list_iter_begin(dev, &conn_iter);
 		drm_for_each_connector_iter(connector, &conn_iter)
+3 -1
drivers/gpu/drm/vc4/vc4_gem.c
···
 	/* If we got force-completed because of GPU reset rather than
 	 * through our IRQ handler, signal the fence now.
 	 */
-	if (exec->fence)
+	if (exec->fence) {
 		dma_fence_signal(exec->fence);
+		dma_fence_put(exec->fence);
+	}
 
 	if (exec->bo) {
 		for (i = 0; i < exec->bo_count; i++) {
···
 
 static int cqe_completes_wr(struct t4_cqe *cqe, struct t4_wq *wq)
 {
+	if (CQE_OPCODE(cqe) == C4IW_DRAIN_OPCODE) {
+		WARN_ONCE(1, "Unexpected DRAIN CQE qp id %u!\n", wq->sq.qid);
+		return 0;
+	}
+
 	if (CQE_OPCODE(cqe) == FW_RI_TERMINATE)
 		return 0;
 
+16 -6
drivers/infiniband/hw/cxgb4/qp.c
···
 
 	qhp = to_c4iw_qp(ibqp);
 	spin_lock_irqsave(&qhp->lock, flag);
-	if (t4_wq_in_error(&qhp->wq)) {
+
+	/*
+	 * If the qp has been flushed, then just insert a special
+	 * drain cqe.
+	 */
+	if (qhp->wq.flushed) {
 		spin_unlock_irqrestore(&qhp->lock, flag);
 		complete_sq_drain_wr(qhp, wr);
 		return err;
···
 
 	qhp = to_c4iw_qp(ibqp);
 	spin_lock_irqsave(&qhp->lock, flag);
-	if (t4_wq_in_error(&qhp->wq)) {
+
+	/*
+	 * If the qp has been flushed, then just insert a special
+	 * drain cqe.
+	 */
+	if (qhp->wq.flushed) {
 		spin_unlock_irqrestore(&qhp->lock, flag);
 		complete_rq_drain_wr(qhp, wr);
 		return err;
···
 	spin_unlock_irqrestore(&rchp->lock, flag);
 
 	if (schp == rchp) {
-		if (t4_clear_cq_armed(&rchp->cq) &&
-		    (rq_flushed || sq_flushed)) {
+		if ((rq_flushed || sq_flushed) &&
+		    t4_clear_cq_armed(&rchp->cq)) {
 			spin_lock_irqsave(&rchp->comp_handler_lock, flag);
 			(*rchp->ibcq.comp_handler)(&rchp->ibcq,
 						   rchp->ibcq.cq_context);
 			spin_unlock_irqrestore(&rchp->comp_handler_lock, flag);
 		}
 	} else {
-		if (t4_clear_cq_armed(&rchp->cq) && rq_flushed) {
+		if (rq_flushed && t4_clear_cq_armed(&rchp->cq)) {
 			spin_lock_irqsave(&rchp->comp_handler_lock, flag);
 			(*rchp->ibcq.comp_handler)(&rchp->ibcq,
 						   rchp->ibcq.cq_context);
 			spin_unlock_irqrestore(&rchp->comp_handler_lock, flag);
 		}
-		if (t4_clear_cq_armed(&schp->cq) && sq_flushed) {
+		if (sq_flushed && t4_clear_cq_armed(&schp->cq)) {
 			spin_lock_irqsave(&schp->comp_handler_lock, flag);
 			(*schp->ibcq.comp_handler)(&schp->ibcq,
 						   schp->ibcq.cq_context);
+19 -7
drivers/infiniband/hw/mlx4/qp.c
···
 		return (-EOPNOTSUPP);
 	}
 
+	if (ucmd->rx_hash_fields_mask & ~(MLX4_IB_RX_HASH_SRC_IPV4	|
+					  MLX4_IB_RX_HASH_DST_IPV4	|
+					  MLX4_IB_RX_HASH_SRC_IPV6	|
+					  MLX4_IB_RX_HASH_DST_IPV6	|
+					  MLX4_IB_RX_HASH_SRC_PORT_TCP	|
+					  MLX4_IB_RX_HASH_DST_PORT_TCP	|
+					  MLX4_IB_RX_HASH_SRC_PORT_UDP	|
+					  MLX4_IB_RX_HASH_DST_PORT_UDP)) {
+		pr_debug("RX Hash fields_mask has unsupported mask (0x%llx)\n",
+			 ucmd->rx_hash_fields_mask);
+		return (-EOPNOTSUPP);
+	}
+
 	if ((ucmd->rx_hash_fields_mask & MLX4_IB_RX_HASH_SRC_IPV4) &&
 	    (ucmd->rx_hash_fields_mask & MLX4_IB_RX_HASH_DST_IPV4)) {
 		rss_ctx->flags = MLX4_RSS_IPV4;
···
 		return (-EOPNOTSUPP);
 	}
 
-	if (rss_ctx->flags & MLX4_RSS_IPV4) {
+	if (rss_ctx->flags & MLX4_RSS_IPV4)
 		rss_ctx->flags |= MLX4_RSS_UDP_IPV4;
-	} else if (rss_ctx->flags & MLX4_RSS_IPV6) {
+	if (rss_ctx->flags & MLX4_RSS_IPV6)
 		rss_ctx->flags |= MLX4_RSS_UDP_IPV6;
-	} else {
+	if (!(rss_ctx->flags & (MLX4_RSS_IPV6 | MLX4_RSS_IPV4))) {
 		pr_debug("RX Hash fields_mask is not supported - UDP must be set with IPv4 or IPv6\n");
 		return (-EOPNOTSUPP);
 	}
···
 
 	if ((ucmd->rx_hash_fields_mask & MLX4_IB_RX_HASH_SRC_PORT_TCP) &&
 	    (ucmd->rx_hash_fields_mask & MLX4_IB_RX_HASH_DST_PORT_TCP)) {
-		if (rss_ctx->flags & MLX4_RSS_IPV4) {
+		if (rss_ctx->flags & MLX4_RSS_IPV4)
 			rss_ctx->flags |= MLX4_RSS_TCP_IPV4;
-		} else if (rss_ctx->flags & MLX4_RSS_IPV6) {
+		if (rss_ctx->flags & MLX4_RSS_IPV6)
 			rss_ctx->flags |= MLX4_RSS_TCP_IPV6;
-		} else {
+		if (!(rss_ctx->flags & (MLX4_RSS_IPV6 | MLX4_RSS_IPV4))) {
 			pr_debug("RX Hash fields_mask is not supported - TCP must be set with IPv4 or IPv6\n");
 			return (-EOPNOTSUPP);
 		}
-
 	} else if ((ucmd->rx_hash_fields_mask & MLX4_IB_RX_HASH_SRC_PORT_TCP) ||
 		   (ucmd->rx_hash_fields_mask & MLX4_IB_RX_HASH_DST_PORT_TCP)) {
 		pr_debug("RX Hash fields_mask is not supported - both TCP SRC and DST must be set\n");
+1
drivers/infiniband/ulp/ipoib/ipoib_cm.c
···
 	noio_flag = memalloc_noio_save();
 	p->tx_ring = vzalloc(ipoib_sendq_size * sizeof(*p->tx_ring));
 	if (!p->tx_ring) {
+		memalloc_noio_restore(noio_flag);
 		ret = -ENOMEM;
 		goto err_tx;
 	}
+6 -2
drivers/md/dm-bufio.c
···
 	int l;
 	struct dm_buffer *b, *tmp;
 	unsigned long freed = 0;
-	unsigned long count = nr_to_scan;
+	unsigned long count = c->n_buffers[LIST_CLEAN] +
+			      c->n_buffers[LIST_DIRTY];
 	unsigned long retain_target = get_retain_buffers(c);
 
 	for (l = 0; l < LIST_SIZE; l++) {
···
 dm_bufio_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 {
 	struct dm_bufio_client *c = container_of(shrink, struct dm_bufio_client, shrinker);
+	unsigned long count = READ_ONCE(c->n_buffers[LIST_CLEAN]) +
+			      READ_ONCE(c->n_buffers[LIST_DIRTY]);
+	unsigned long retain_target = get_retain_buffers(c);
 
-	return READ_ONCE(c->n_buffers[LIST_CLEAN]) + READ_ONCE(c->n_buffers[LIST_DIRTY]);
+	return (count < retain_target) ? 0 : (count - retain_target);
 }
 
 /*
+6 -6
drivers/md/dm-cache-target.c
···
 {
 	int r;
 
-	r = dm_register_target(&cache_target);
-	if (r) {
-		DMERR("cache target registration failed: %d", r);
-		return r;
-	}
-
 	migration_cache = KMEM_CACHE(dm_cache_migration, 0);
 	if (!migration_cache) {
 		dm_unregister_target(&cache_target);
 		return -ENOMEM;
+	}
+
+	r = dm_register_target(&cache_target);
+	if (r) {
+		DMERR("cache target registration failed: %d", r);
+		return r;
 	}
 
 	return 0;
+51 -16
drivers/md/dm-mpath.c
···
 } while (0)
 
 /*
+ * Check whether bios must be queued in the device-mapper core rather
+ * than here in the target.
+ *
+ * If MPATHF_QUEUE_IF_NO_PATH and MPATHF_SAVED_QUEUE_IF_NO_PATH hold
+ * the same value then we are not between multipath_presuspend()
+ * and multipath_resume() calls and we have no need to check
+ * for the DMF_NOFLUSH_SUSPENDING flag.
+ */
+static bool __must_push_back(struct multipath *m, unsigned long flags)
+{
+	return ((test_bit(MPATHF_QUEUE_IF_NO_PATH, &flags) !=
+		 test_bit(MPATHF_SAVED_QUEUE_IF_NO_PATH, &flags)) &&
+		dm_noflush_suspending(m->ti));
+}
+
+/*
+ * Following functions use READ_ONCE to get atomic access to
+ * all m->flags to avoid taking spinlock
+ */
+static bool must_push_back_rq(struct multipath *m)
+{
+	unsigned long flags = READ_ONCE(m->flags);
+	return test_bit(MPATHF_QUEUE_IF_NO_PATH, &flags) || __must_push_back(m, flags);
+}
+
+static bool must_push_back_bio(struct multipath *m)
+{
+	unsigned long flags = READ_ONCE(m->flags);
+	return __must_push_back(m, flags);
+}
+
+/*
 * Map cloned requests (request-based multipath)
 */
 static int multipath_clone_and_map(struct dm_target *ti, struct request *rq,
···
 		pgpath = choose_pgpath(m, nr_bytes);
 
 	if (!pgpath) {
-		if (test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags))
+		if (must_push_back_rq(m))
 			return DM_MAPIO_DELAY_REQUEUE;
 		dm_report_EIO(m);	/* Failed */
 		return DM_MAPIO_KILL;
···
 	}
 
 	if (!pgpath) {
-		if (test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags))
+		if (must_push_back_bio(m))
 			return DM_MAPIO_REQUEUE;
 		dm_report_EIO(m);
 		return DM_MAPIO_KILL;
···
 	assign_bit(MPATHF_SAVED_QUEUE_IF_NO_PATH, &m->flags,
 		   (save_old_value && test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)) ||
 		   (!save_old_value && queue_if_no_path));
-	assign_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags,
-		   queue_if_no_path || dm_noflush_suspending(m->ti));
+	assign_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags, queue_if_no_path);
 	spin_unlock_irqrestore(&m->lock, flags);
 
 	if (!queue_if_no_path) {
···
 		fail_path(pgpath);
 
 	if (atomic_read(&m->nr_valid_paths) == 0 &&
-	    !test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)) {
+	    !must_push_back_rq(m)) {
 		if (error == BLK_STS_IOERR)
 			dm_report_EIO(m);
 		/* complete with the original error */
···
 
 	if (atomic_read(&m->nr_valid_paths) == 0 &&
 	    !test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)) {
-		dm_report_EIO(m);
-		*error = BLK_STS_IOERR;
+		if (must_push_back_bio(m)) {
+			r = DM_ENDIO_REQUEUE;
+		} else {
+			dm_report_EIO(m);
+			*error = BLK_STS_IOERR;
+		}
 		goto done;
 	}
···
 {
 	int r;
 
-	r = dm_register_target(&multipath_target);
-	if (r < 0) {
-		DMERR("request-based register failed %d", r);
-		r = -EINVAL;
-		goto bad_register_target;
-	}
-
 	kmultipathd = alloc_workqueue("kmpathd", WQ_MEM_RECLAIM, 0);
 	if (!kmultipathd) {
 		DMERR("failed to create workqueue kmpathd");
···
 		goto bad_alloc_kmpath_handlerd;
 	}
 
+	r = dm_register_target(&multipath_target);
+	if (r < 0) {
+		DMERR("request-based register failed %d", r);
+		r = -EINVAL;
+		goto bad_register_target;
+	}
+
 	return 0;
 
+bad_register_target:
+	destroy_workqueue(kmpath_handlerd);
 bad_alloc_kmpath_handlerd:
 	destroy_workqueue(kmultipathd);
 bad_alloc_kmultipathd:
-	dm_unregister_target(&multipath_target);
-bad_register_target:
 	return r;
 }
+24-24
drivers/md/dm-snap.c
@@ -2411 +2411 @@
                 return r;
         }
 
-        r = dm_register_target(&snapshot_target);
-        if (r < 0) {
-                DMERR("snapshot target register failed %d", r);
-                goto bad_register_snapshot_target;
-        }
-
-        r = dm_register_target(&origin_target);
-        if (r < 0) {
-                DMERR("Origin target register failed %d", r);
-                goto bad_register_origin_target;
-        }
-
-        r = dm_register_target(&merge_target);
-        if (r < 0) {
-                DMERR("Merge target register failed %d", r);
-                goto bad_register_merge_target;
-        }
-
         r = init_origin_hash();
         if (r) {
                 DMERR("init_origin_hash failed.");
@@ -2449 +2431 @@
                 goto bad_pending_cache;
         }
 
+        r = dm_register_target(&snapshot_target);
+        if (r < 0) {
+                DMERR("snapshot target register failed %d", r);
+                goto bad_register_snapshot_target;
+        }
+
+        r = dm_register_target(&origin_target);
+        if (r < 0) {
+                DMERR("Origin target register failed %d", r);
+                goto bad_register_origin_target;
+        }
+
+        r = dm_register_target(&merge_target);
+        if (r < 0) {
+                DMERR("Merge target register failed %d", r);
+                goto bad_register_merge_target;
+        }
+
         return 0;
 
-bad_pending_cache:
-        kmem_cache_destroy(exception_cache);
-bad_exception_cache:
-        exit_origin_hash();
-bad_origin_hash:
-        dm_unregister_target(&merge_target);
 bad_register_merge_target:
         dm_unregister_target(&origin_target);
 bad_register_origin_target:
         dm_unregister_target(&snapshot_target);
 bad_register_snapshot_target:
+        kmem_cache_destroy(pending_cache);
+bad_pending_cache:
+        kmem_cache_destroy(exception_cache);
+bad_exception_cache:
+        exit_origin_hash();
+bad_origin_hash:
         dm_exception_store_exit();
 
         return r;
drivers/md/dm-thin.c
@@ -4355 +4355 @@
 
 static int __init dm_thin_init(void)
 {
-        int r;
+        int r = -ENOMEM;
 
         pool_table_init();
 
+        _new_mapping_cache = KMEM_CACHE(dm_thin_new_mapping, 0);
+        if (!_new_mapping_cache)
+                return r;
+
         r = dm_register_target(&thin_target);
         if (r)
-                return r;
+                goto bad_new_mapping_cache;
 
         r = dm_register_target(&pool_target);
         if (r)
-                goto bad_pool_target;
-
-        r = -ENOMEM;
-
-        _new_mapping_cache = KMEM_CACHE(dm_thin_new_mapping, 0);
-        if (!_new_mapping_cache)
-                goto bad_new_mapping_cache;
+                goto bad_thin_target;
 
         return 0;
 
-bad_new_mapping_cache:
-        dm_unregister_target(&pool_target);
-bad_pool_target:
+bad_thin_target:
         dm_unregister_target(&thin_target);
+bad_new_mapping_cache:
+        kmem_cache_destroy(_new_mapping_cache);
 
         return r;
 }
drivers/mmc/core/mmc.c
@@ -1290 +1290 @@
 
 static void mmc_select_driver_type(struct mmc_card *card)
 {
-        int card_drv_type, drive_strength, drv_type;
+        int card_drv_type, drive_strength, drv_type = 0;
         int fixed_drv_type = card->host->fixed_drv_type;
 
         card_drv_type = card->ext_csd.raw_driver_strength |
+8
drivers/mmc/core/quirks.h
@@ -53 +53 @@
                   MMC_QUIRK_BLK_NO_CMD23),
 
         /*
+         * Some SD cards lockup while using CMD23 multiblock transfers.
+         */
+        MMC_FIXUP("AF SD", CID_MANFID_ATP, CID_OEMID_ANY, add_quirk_sd,
+                  MMC_QUIRK_BLK_NO_CMD23),
+        MMC_FIXUP("APUSD", CID_MANFID_APACER, 0x5048, add_quirk_sd,
+                  MMC_QUIRK_BLK_NO_CMD23),
+
+        /*
          * Some MMC cards need longer data read timeout than indicated in CSD.
          */
         MMC_FIXUP(CID_NAME_ANY, CID_MANFID_MICRON, 0x200, add_quirk_mmc,
+1
drivers/net/dsa/mv88e6xxx/port.c
@@ -338 +338 @@
                 cmode = MV88E6XXX_PORT_STS_CMODE_2500BASEX;
                 break;
         case PHY_INTERFACE_MODE_XGMII:
+        case PHY_INTERFACE_MODE_XAUI:
                 cmode = MV88E6XXX_PORT_STS_CMODE_XAUI;
                 break;
         case PHY_INTERFACE_MODE_RXAUI:
drivers/net/ethernet/renesas/ravb_main.c
@@ -2308 +2308 @@
         struct ravb_private *priv = netdev_priv(ndev);
         int ret = 0;
 
-        if (priv->wol_enabled) {
-                /* Reduce the usecount of the clock to zero and then
-                 * restore it to its original value. This is done to force
-                 * the clock to be re-enabled which is a workaround
-                 * for renesas-cpg-mssr driver which do not enable clocks
-                 * when resuming from PSCI suspend/resume.
-                 *
-                 * Without this workaround the driver fails to communicate
-                 * with the hardware if WoL was enabled when the system
-                 * entered PSCI suspend. This is due to that if WoL is enabled
-                 * we explicitly keep the clock from being turned off when
-                 * suspending, but in PSCI sleep power is cut so the clock
-                 * is disabled anyhow, the clock driver is not aware of this
-                 * so the clock is not turned back on when resuming.
-                 *
-                 * TODO: once the renesas-cpg-mssr suspend/resume is working
-                 * this clock dance should be removed.
-                 */
-                clk_disable(priv->clk);
-                clk_disable(priv->clk);
-                clk_enable(priv->clk);
-                clk_enable(priv->clk);
-
-                /* Set reset mode to rearm the WoL logic */
+        /* If WoL is enabled set reset mode to rearm the WoL logic */
+        if (priv->wol_enabled)
                 ravb_write(ndev, CCC_OPC_RESET, CCC);
-        }
 
         /* All register have been reset to default values.
          * Restore all registers which where setup at probe time and
+10
drivers/net/ethernet/renesas/sh_eth.c
@@ -1892 +1892 @@
                 return PTR_ERR(phydev);
         }
 
+        /* mask with MAC supported features */
+        if (mdp->cd->register_type != SH_ETH_REG_GIGABIT) {
+                int err = phy_set_max_speed(phydev, SPEED_100);
+                if (err) {
+                        netdev_err(ndev, "failed to limit PHY to 100 Mbit/s\n");
+                        phy_disconnect(phydev);
+                        return err;
+                }
+        }
+
         phy_attached_info(phydev);
 
         return 0;
drivers/net/phy/at803x.c
@@ -238 +238 @@
 {
         int value;
 
-        mutex_lock(&phydev->lock);
-
         value = phy_read(phydev, MII_BMCR);
         value &= ~(BMCR_PDOWN | BMCR_ISOLATE);
         phy_write(phydev, MII_BMCR, value);
-
-        mutex_unlock(&phydev->lock);
 
         return 0;
 }
+4
drivers/net/phy/marvell.c
@@ -637 +637 @@
         if (err < 0)
                 goto error;
 
+        /* Do not touch the fiber page if we're in copper->sgmii mode */
+        if (phydev->interface == PHY_INTERFACE_MODE_SGMII)
+                return 0;
+
         /* Then the fiber link */
         err = marvell_set_page(phydev, MII_MARVELL_FIBER_PAGE);
         if (err < 0)
drivers/net/phy/meson-gxl.c
@@ -22 +22 @@
 #include <linux/ethtool.h>
 #include <linux/phy.h>
 #include <linux/netdevice.h>
+#include <linux/bitfield.h>
 
 static int meson_gxl_config_init(struct phy_device *phydev)
 {
@@ -50 +51 @@
         return 0;
 }
 
+/* This function is provided to cope with the possible failures of this phy
+ * during aneg process. When aneg fails, the PHY reports that aneg is done
+ * but the value found in MII_LPA is wrong:
+ *  - Early failures: MII_LPA is just 0x0001. if MII_EXPANSION reports that
+ *    the link partner (LP) supports aneg but the LP never acked our base
+ *    code word, it is likely that we never sent it to begin with.
+ *  - Late failures: MII_LPA is filled with a value which seems to make sense
+ *    but it actually is not what the LP is advertising. It seems that we
+ *    can detect this using a magic bit in the WOL bank (reg 12 - bit 12).
+ *    If this particular bit is not set when aneg is reported being done,
+ *    it means MII_LPA is likely to be wrong.
+ *
+ * In both case, forcing a restart of the aneg process solve the problem.
+ * When this failure happens, the first retry is usually successful but,
+ * in some cases, it may take up to 6 retries to get a decent result
+ */
+static int meson_gxl_read_status(struct phy_device *phydev)
+{
+        int ret, wol, lpa, exp;
+
+        if (phydev->autoneg == AUTONEG_ENABLE) {
+                ret = genphy_aneg_done(phydev);
+                if (ret < 0)
+                        return ret;
+                else if (!ret)
+                        goto read_status_continue;
+
+                /* Need to access WOL bank, make sure the access is open */
+                ret = phy_write(phydev, 0x14, 0x0000);
+                if (ret)
+                        return ret;
+                ret = phy_write(phydev, 0x14, 0x0400);
+                if (ret)
+                        return ret;
+                ret = phy_write(phydev, 0x14, 0x0000);
+                if (ret)
+                        return ret;
+                ret = phy_write(phydev, 0x14, 0x0400);
+                if (ret)
+                        return ret;
+
+                /* Request LPI_STATUS WOL register */
+                ret = phy_write(phydev, 0x14, 0x8D80);
+                if (ret)
+                        return ret;
+
+                /* Read LPI_STATUS value */
+                wol = phy_read(phydev, 0x15);
+                if (wol < 0)
+                        return wol;
+
+                lpa = phy_read(phydev, MII_LPA);
+                if (lpa < 0)
+                        return lpa;
+
+                exp = phy_read(phydev, MII_EXPANSION);
+                if (exp < 0)
+                        return exp;
+
+                if (!(wol & BIT(12)) ||
+                    ((exp & EXPANSION_NWAY) && !(lpa & LPA_LPACK))) {
+                        /* Looks like aneg failed after all */
+                        phydev_dbg(phydev, "LPA corruption - aneg restart\n");
+                        return genphy_restart_aneg(phydev);
+                }
+        }
+
+read_status_continue:
+        return genphy_read_status(phydev);
+}
+
 static struct phy_driver meson_gxl_phy[] = {
         {
                 .phy_id         = 0x01814400,
@@ -59 +131 @@
                 .flags          = PHY_IS_INTERNAL,
                 .config_init    = meson_gxl_config_init,
                 .aneg_done      = genphy_aneg_done,
+                .read_status    = meson_gxl_read_status,
                 .suspend        = genphy_suspend,
                 .resume         = genphy_resume,
         },
+3-6
drivers/net/phy/phy.c
@@ -806 +806 @@
  */
 void phy_start(struct phy_device *phydev)
 {
-        bool do_resume = false;
         int err = 0;
 
         mutex_lock(&phydev->lock);
@@ -819 +818 @@
                 phydev->state = PHY_UP;
                 break;
         case PHY_HALTED:
+                /* if phy was suspended, bring the physical link up again */
+                phy_resume(phydev);
+
                 /* make sure interrupts are re-enabled for the PHY */
                 if (phydev->irq != PHY_POLL) {
                         err = phy_enable_interrupts(phydev);
@@ -827 +829 @@
                 }
 
                 phydev->state = PHY_RESUMING;
-                do_resume = true;
                 break;
         default:
                 break;
         }
         mutex_unlock(&phydev->lock);
-
-        /* if phy was suspended, bring the physical link up again */
-        if (do_resume)
-                phy_resume(phydev);
 
         phy_trigger_machine(phydev, true);
 }
drivers/of/of_mdio.c
@@ -85 +85 @@
          * can be looked up later */
         of_node_get(child);
         phy->mdio.dev.of_node = child;
+        phy->mdio.dev.fwnode = of_fwnode_handle(child);
 
         /* All data is now stored in the phy struct;
          * register it */
@@ -115 +116 @@
          */
         of_node_get(child);
         mdiodev->dev.of_node = child;
+        mdiodev->dev.fwnode = of_fwnode_handle(child);
 
         /* All data is now stored in the mdiodev struct; register it. */
         rc = mdio_device_register(mdiodev);
@@ -210 +212 @@
         mdio->phy_mask = ~0;
 
         mdio->dev.of_node = np;
+        mdio->dev.fwnode = of_fwnode_handle(np);
 
         /* Get bus level PHY reset GPIO details */
         mdio->reset_delay_us = DEFAULT_GPIO_RESET_DELAY;
drivers/pci/pci-driver.c
@@ -999 +999 @@
          * the subsequent "thaw" callbacks for the device.
          */
         if (dev_pm_smart_suspend_and_suspended(dev)) {
-                dev->power.direct_complete = true;
+                dev_pm_skip_next_resume_phases(dev);
                 return 0;
         }
 
drivers/scsi/libfc/fc_lport.c
@@ -904 +904 @@
         case ELS_FLOGI:
                 if (!lport->point_to_multipoint)
                         fc_lport_recv_flogi_req(lport, fp);
+                else
+                        fc_rport_recv_req(lport, fp);
                 break;
         case ELS_LOGO:
                 if (fc_frame_sid(fp) == FC_FID_FLOGI)
                         fc_lport_recv_logo_req(lport, fp);
+                else
+                        fc_rport_recv_req(lport, fp);
                 break;
         case ELS_RSCN:
                 lport->tt.disc_recv_req(lport, fp);
+5-5
drivers/scsi/libsas/sas_expander.c
@@ -2145 +2145 @@
                         struct sas_rphy *rphy)
 {
         struct domain_device *dev;
-        unsigned int reslen = 0;
+        unsigned int rcvlen = 0;
         int ret = -EINVAL;
 
         /* no rphy means no smp target support (ie aic94xx host) */
@@ -2179 +2179 @@
 
         ret = smp_execute_task_sg(dev, job->request_payload.sg_list,
                         job->reply_payload.sg_list);
-        if (ret > 0) {
-                /* positive number is the untransferred residual */
-                reslen = ret;
+        if (ret >= 0) {
+                /* bsg_job_done() requires the length received  */
+                rcvlen = job->reply_payload.payload_len - ret;
                 ret = 0;
         }
 
 out:
-        bsg_job_done(job, ret, reslen);
+        bsg_job_done(job, ret, rcvlen);
 }
drivers/scsi/scsi_devinfo.c
@@ -34 +34 @@
 };
 
 
-static const char spaces[] = "                "; /* 16 of them */
 static blist_flags_t scsi_default_dev_flags;
 static LIST_HEAD(scsi_dev_info_list);
 static char scsi_dev_flags[256];
@@ -298 +297 @@
         size_t from_length;
 
         from_length = strlen(from);
-        strncpy(to, from, min(to_length, from_length));
-        if (from_length < to_length) {
-                if (compatible) {
-                        /*
-                         * NUL terminate the string if it is short.
-                         */
-                        to[from_length] = '\0';
-                } else {
-                        /*
-                         * space pad the string if it is short.
-                         */
-                        strncpy(&to[from_length], spaces,
-                                to_length - from_length);
-                }
+        /* This zero-pads the destination */
+        strncpy(to, from, to_length);
+        if (from_length < to_length && !compatible) {
+                /*
+                 * space pad the string if it is short.
+                 */
+                memset(&to[from_length], ' ', to_length - from_length);
         }
         if (from_length > to_length)
                 printk(KERN_WARNING "%s: %s string '%s' is too long\n",
@@ -458 +450 @@
                 /*
                  * vendor strings must be an exact match
                  */
-                if (vmax != strlen(devinfo->vendor) ||
+                if (vmax != strnlen(devinfo->vendor,
+                                    sizeof(devinfo->vendor)) ||
                     memcmp(devinfo->vendor, vskip, vmax))
                         continue;
 
@@ -466 +459 @@
                  * @model specifies the full string, and
                  * must be larger or equal to devinfo->model
                  */
-                mlen = strlen(devinfo->model);
+                mlen = strnlen(devinfo->model, sizeof(devinfo->model));
                 if (mmax < mlen || memcmp(devinfo->model, mskip, mlen))
                         continue;
                 return devinfo;
drivers/usb/dwc2/core.h
@@ -537 +537 @@
  *                      2 - Internal DMA
  * @power_optimized     Are power optimizations enabled?
  * @num_dev_ep          Number of device endpoints available
+ * @num_dev_in_eps      Number of device IN endpoints available
  * @num_dev_perio_in_ep Number of device periodic IN endpoints
  *                      available
  * @dev_token_q_depth   Device Mode IN Token Sequence Learning Queue
@@ -565 +566 @@
  *                      2 - 8 or 16 bits
  * @snpsid:             Value from SNPSID register
  * @dev_ep_dirs:        Direction of device endpoints (GHWCFG1)
+ * @g_tx_fifo_size[]    Power-on values of TxFIFO sizes
  */
 struct dwc2_hw_params {
         unsigned op_mode:3;
@@ -586 +588 @@
         unsigned fs_phy_type:2;
         unsigned i2c_enable:1;
         unsigned num_dev_ep:4;
+        unsigned num_dev_in_eps : 4;
         unsigned num_dev_perio_in_ep:4;
         unsigned total_fifo_size:16;
         unsigned power_optimized:1;
         unsigned utmi_phy_data_width:2;
         u32 snpsid;
         u32 dev_ep_dirs;
+        u32 g_tx_fifo_size[MAX_EPS_CHANNELS];
 };
 
 /* Size of control and EP0 buffers */
+2-40
drivers/usb/dwc2/gadget.c
@@ -195 +195 @@
 {
         if (hsotg->hw_params.en_multiple_tx_fifo)
                 /* In dedicated FIFO mode we need count of IN EPs */
-                return (dwc2_readl(hsotg->regs + GHWCFG4) &
-                        GHWCFG4_NUM_IN_EPS_MASK) >> GHWCFG4_NUM_IN_EPS_SHIFT;
+                return hsotg->hw_params.num_dev_in_eps;
         else
                 /* In shared FIFO mode we need count of Periodic IN EPs */
                 return hsotg->hw_params.num_dev_perio_in_ep;
-}
-
-/**
- * dwc2_hsotg_ep_info_size - return Endpoint Info Control block size in DWORDs
- */
-static int dwc2_hsotg_ep_info_size(struct dwc2_hsotg *hsotg)
-{
-        int val = 0;
-        int i;
-        u32 ep_dirs;
-
-        /*
-         * Don't need additional space for ep info control registers in
-         * slave mode.
-         */
-        if (!using_dma(hsotg)) {
-                dev_dbg(hsotg->dev, "Buffer DMA ep info size 0\n");
-                return 0;
-        }
-
-        /*
-         * Buffer DMA mode - 1 location per endpoit
-         * Descriptor DMA mode - 4 locations per endpoint
-         */
-        ep_dirs = hsotg->hw_params.dev_ep_dirs;
-
-        for (i = 0; i <= hsotg->hw_params.num_dev_ep; i++) {
-                val += ep_dirs & 3 ? 1 : 2;
-                ep_dirs >>= 2;
-        }
-
-        if (using_desc_dma(hsotg))
-                val = val * 4;
-
-        return val;
 }
 
 /**
@@ -243 +207 @@
  */
 int dwc2_hsotg_tx_fifo_total_depth(struct dwc2_hsotg *hsotg)
 {
-        int ep_info_size;
         int addr;
         int tx_addr_max;
         u32 np_tx_fifo_size;
@@ -252 +215 @@
                         hsotg->params.g_np_tx_fifo_size);
 
         /* Get Endpoint Info Control block size in DWORDs. */
-        ep_info_size = dwc2_hsotg_ep_info_size(hsotg);
-        tx_addr_max = hsotg->hw_params.total_fifo_size - ep_info_size;
+        tx_addr_max = hsotg->hw_params.total_fifo_size;
 
         addr = hsotg->params.g_rx_fifo_size + np_tx_fifo_size;
         if (tx_addr_max <= addr)
+19-10
drivers/usb/dwc2/params.c
@@ -484 +484 @@
         }
 
         for (fifo = 1; fifo <= fifo_count; fifo++) {
-                dptxfszn = (dwc2_readl(hsotg->regs + DPTXFSIZN(fifo)) &
-                        FIFOSIZE_DEPTH_MASK) >> FIFOSIZE_DEPTH_SHIFT;
+                dptxfszn = hsotg->hw_params.g_tx_fifo_size[fifo];
 
                 if (hsotg->params.g_tx_fifo_size[fifo] < min ||
                     hsotg->params.g_tx_fifo_size[fifo] > dptxfszn) {
@@ -609 +608 @@
         struct dwc2_hw_params *hw = &hsotg->hw_params;
         bool forced;
         u32 gnptxfsiz;
+        int fifo, fifo_count;
 
         if (hsotg->dr_mode == USB_DR_MODE_HOST)
                 return;
@@ -616 +616 @@
         forced = dwc2_force_mode_if_needed(hsotg, false);
 
         gnptxfsiz = dwc2_readl(hsotg->regs + GNPTXFSIZ);
+
+        fifo_count = dwc2_hsotg_tx_fifo_count(hsotg);
+
+        for (fifo = 1; fifo <= fifo_count; fifo++) {
+                hw->g_tx_fifo_size[fifo] =
+                        (dwc2_readl(hsotg->regs + DPTXFSIZN(fifo)) &
+                         FIFOSIZE_DEPTH_MASK) >> FIFOSIZE_DEPTH_SHIFT;
+        }
 
         if (forced)
                 dwc2_clear_force_mode(hsotg);
@@ -661 +669 @@
         hwcfg4 = dwc2_readl(hsotg->regs + GHWCFG4);
         grxfsiz = dwc2_readl(hsotg->regs + GRXFSIZ);
 
-        /*
-         * Host specific hardware parameters. Reading these parameters
-         * requires the controller to be in host mode. The mode will
-         * be forced, if necessary, to read these values.
-         */
-        dwc2_get_host_hwparams(hsotg);
-        dwc2_get_dev_hwparams(hsotg);
-
         /* hwcfg1 */
         hw->dev_ep_dirs = hwcfg1;
 
@@ -711 +711 @@
         hw->en_multiple_tx_fifo = !!(hwcfg4 & GHWCFG4_DED_FIFO_EN);
         hw->num_dev_perio_in_ep = (hwcfg4 & GHWCFG4_NUM_DEV_PERIO_IN_EP_MASK) >>
                                   GHWCFG4_NUM_DEV_PERIO_IN_EP_SHIFT;
+        hw->num_dev_in_eps = (hwcfg4 & GHWCFG4_NUM_IN_EPS_MASK) >>
+                             GHWCFG4_NUM_IN_EPS_SHIFT;
         hw->dma_desc_enable = !!(hwcfg4 & GHWCFG4_DESC_DMA);
         hw->power_optimized = !!(hwcfg4 & GHWCFG4_POWER_OPTIMIZ);
         hw->utmi_phy_data_width = (hwcfg4 & GHWCFG4_UTMI_PHY_DATA_WIDTH_MASK) >>
@@ -719 +721 @@
         /* fifo sizes */
         hw->rx_fifo_size = (grxfsiz & GRXFSIZ_DEPTH_MASK) >>
                            GRXFSIZ_DEPTH_SHIFT;
+        /*
+         * Host specific hardware parameters. Reading these parameters
+         * requires the controller to be in host mode. The mode will
+         * be forced, if necessary, to read these values.
+         */
+        dwc2_get_host_hwparams(hsotg);
+        dwc2_get_dev_hwparams(hsotg);
 
         return 0;
 }
drivers/usb/dwc3/gadget.c
@@ -259 +259 @@
 {
         const struct usb_endpoint_descriptor *desc = dep->endpoint.desc;
         struct dwc3 *dwc = dep->dwc;
-        u32 timeout = 500;
+        u32 timeout = 1000;
         u32 reg;
 
         int cmd_status = 0;
@@ -912 +912 @@
          */
         if (speed == USB_SPEED_HIGH) {
                 struct usb_ep *ep = &dep->endpoint;
-                unsigned int mult = ep->mult - 1;
+                unsigned int mult = 2;
                 unsigned int maxp = usb_endpoint_maxp(ep->desc);
 
                 if (length <= (2 * maxp))
+2-2
drivers/usb/gadget/Kconfig
@@ -508 +508 @@
           controller, and the relevant drivers for each function declared
           by the device.
 
-endchoice
-
 source "drivers/usb/gadget/legacy/Kconfig"
+
+endchoice
 
 endif # USB_GADGET
+1-11
drivers/usb/gadget/legacy/Kconfig
@@ -13 +13 @@
 # both kinds of controller can also support "USB On-the-Go" (CONFIG_USB_OTG).
 #
 
-menuconfig USB_GADGET_LEGACY
-        bool "Legacy USB Gadget Support"
-        help
-           Legacy USB gadgets are USB gadgets that do not use the USB gadget
-           configfs interface.
-
-if USB_GADGET_LEGACY
-
 config USB_ZERO
         tristate "Gadget Zero (DEVELOPMENT)"
         select USB_LIBCOMPOSITE
@@ -487 +479 @@
 # or video class gadget drivers), or specific hardware, here.
 config USB_G_WEBCAM
         tristate "USB Webcam Gadget"
-        depends on VIDEO_DEV
+        depends on VIDEO_V4L2
         select USB_LIBCOMPOSITE
         select VIDEOBUF2_VMALLOC
         select USB_F_UVC
@@ -498 +490 @@
 
           Say "y" to link the driver statically, or "m" to build a
           dynamically linked module called "g_webcam".
-
-endif
+11-4
drivers/usb/host/xhci-mem.c
@@ -971 +971 @@
                 return 0;
         }
 
-        xhci->devs[slot_id] = kzalloc(sizeof(*xhci->devs[slot_id]), flags);
-        if (!xhci->devs[slot_id])
+        dev = kzalloc(sizeof(*dev), flags);
+        if (!dev)
                 return 0;
-        dev = xhci->devs[slot_id];
 
         /* Allocate the (output) device context that will be used in the HC. */
         dev->out_ctx = xhci_alloc_container_ctx(xhci, XHCI_CTX_TYPE_DEVICE, flags);
@@ -1015 +1014 @@
 
         trace_xhci_alloc_virt_device(dev);
 
+        xhci->devs[slot_id] = dev;
+
         return 1;
 fail:
-        xhci_free_virt_device(xhci, slot_id);
+
+        if (dev->in_ctx)
+                xhci_free_container_ctx(xhci, dev->in_ctx);
+        if (dev->out_ctx)
+                xhci_free_container_ctx(xhci, dev->out_ctx);
+        kfree(dev);
+
         return 0;
 }
 
+3-3
drivers/usb/host/xhci-ring.c
@@ -3112 +3112 @@
 {
         u32 maxp, total_packet_count;
 
-        /* MTK xHCI is mostly 0.97 but contains some features from 1.0 */
+        /* MTK xHCI 0.96 contains some features from 1.0 */
         if (xhci->hci_version < 0x100 && !(xhci->quirks & XHCI_MTK_HOST))
                 return ((td_total_len - transferred) >> 10);
 
@@ -3121 +3121 @@
             trb_buff_len == td_total_len)
                 return 0;
 
-        /* for MTK xHCI, TD size doesn't include this TRB */
-        if (xhci->quirks & XHCI_MTK_HOST)
+        /* for MTK xHCI 0.96, TD size include this TRB, but not in 1.x */
+        if ((xhci->quirks & XHCI_MTK_HOST) && (xhci->hci_version < 0x100))
                 trb_buff_len = 0;
 
         maxp = usb_endpoint_maxp(&urb->ep->desc);
+9-1
drivers/usb/musb/da8xx.c
@@ -284 +284 @@
                 musb->xceiv->otg->state = OTG_STATE_A_WAIT_VRISE;
                 portstate(musb->port1_status |= USB_PORT_STAT_POWER);
                 del_timer(&musb->dev_timer);
-        } else {
+        } else if (!(musb->int_usb & MUSB_INTR_BABBLE)) {
+                /*
+                 * When babble condition happens, drvvbus interrupt
+                 * is also generated. Ignore this drvvbus interrupt
+                 * and let babble interrupt handler recovers the
+                 * controller; otherwise, the host-mode flag is lost
+                 * due to the MUSB_DEV_MODE() call below and babble
+                 * recovery logic will not be called.
+                 */
                 musb->is_active = 0;
                 MUSB_DEV_MODE(musb);
                 otg->default_a = 0;
+7
drivers/usb/storage/unusual_devs.h
@@ -2100 +2100 @@
                 USB_SC_DEVICE, USB_PR_DEVICE, NULL,
                 US_FL_BROKEN_FUA ),
 
+/* Reported by David Kozub <zub@linux.fjfi.cvut.cz> */
+UNUSUAL_DEV(0x152d, 0x0578, 0x0000, 0x9999,
+                "JMicron",
+                "JMS567",
+                USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+                US_FL_BROKEN_FUA),
+
 /*
  * Reported by Alexandre Oliva <oliva@lsd.ic.unicamp.br>
  * JMicron responds to USN and several other SCSI ioctls with a
drivers/usb/usbip/vhci_sysfs.c
@@ -17 +17 @@
 
 /*
  * output example:
- * hub port sta spd dev       socket            local_busid
- * hs  0000 004 000 00000000  c5a7bb80          1-2.3
+ * hub port sta spd dev      sockfd local_busid
+ * hs  0000 004 000 00000000 3      1-2.3
  * ................................................
- * ss  0008 004 000 00000000  d8cee980          2-3.4
+ * ss  0008 004 000 00000000 4      2-3.4
  * ................................................
  *
- * IP address can be retrieved from a socket pointer address by looking
- * up /proc/net/{tcp,tcp6}. Also, a userland program may remember a
- * port number and its peer IP address.
+ * Output includes socket fd instead of socket pointer address to avoid
+ * leaking kernel memory address in:
+ *      /sys/devices/platform/vhci_hcd.0/status and in debug output.
+ * The socket pointer address is not used at the moment and it was made
+ * visible as a convenient way to find IP address from socket pointer
+ * address by looking up /proc/net/{tcp,tcp6}. As this opens a security
+ * hole, the change is made to use sockfd instead.
+ *
  */
 static void port_show_vhci(char **out, int hub, int port, struct vhci_device *vdev)
 {
@@ -39 +44 @@
         if (vdev->ud.status == VDEV_ST_USED) {
                 *out += sprintf(*out, "%03u %08x ",
                                 vdev->speed, vdev->devid);
-                *out += sprintf(*out, "%16p %s",
-                                vdev->ud.tcp_socket,
+                *out += sprintf(*out, "%u %s",
+                                vdev->ud.sockfd,
                                 dev_name(&vdev->udev->dev));
 
         } else {
@@ -160 +165 @@
         char *s = out;
 
         /*
-         * Half the ports are for SPEED_HIGH and half for SPEED_SUPER, thus the * 2.
+         * Half the ports are for SPEED_HIGH and half for SPEED_SUPER,
+         * thus the * 2.
          */
         out += sprintf(out, "%d\n", VHCI_PORTS * vhci_num_controllers);
         return out - s;
@@ -366 +372 @@
 
         vdev->devid         = devid;
         vdev->speed         = speed;
+        vdev->ud.sockfd     = sockfd;
         vdev->ud.tcp_socket = socket;
         vdev->ud.status     = VDEV_ST_NOTASSIGNED;
 
+9-34
drivers/virtio/virtio_mmio.c
@@ -522 +522 @@
                 return -EBUSY;
 
         vm_dev = devm_kzalloc(&pdev->dev, sizeof(*vm_dev), GFP_KERNEL);
-        if (!vm_dev) {
-                rc = -ENOMEM;
-                goto free_mem;
-        }
+        if (!vm_dev)
+                return -ENOMEM;
 
         vm_dev->vdev.dev.parent = &pdev->dev;
         vm_dev->vdev.dev.release = virtio_mmio_release_dev;
@@ -535 +533 @@
         spin_lock_init(&vm_dev->lock);
 
         vm_dev->base = devm_ioremap(&pdev->dev, mem->start, resource_size(mem));
-        if (vm_dev->base == NULL) {
-                rc = -EFAULT;
-                goto free_vmdev;
-        }
+        if (vm_dev->base == NULL)
+                return -EFAULT;
 
         /* Check magic value */
         magic = readl(vm_dev->base + VIRTIO_MMIO_MAGIC_VALUE);
         if (magic != ('v' | 'i' << 8 | 'r' << 16 | 't' << 24)) {
                 dev_warn(&pdev->dev, "Wrong magic value 0x%08lx!\n", magic);
-                rc = -ENODEV;
-                goto unmap;
+                return -ENODEV;
         }
 
         /* Check device version */
@@ -553 +548 @@
         if (vm_dev->version < 1 || vm_dev->version > 2) {
                 dev_err(&pdev->dev, "Version %ld not supported!\n",
                                 vm_dev->version);
-                rc = -ENXIO;
-                goto unmap;
+                return -ENXIO;
         }
 
         vm_dev->vdev.id.device = readl(vm_dev->base + VIRTIO_MMIO_DEVICE_ID);
@@ -563 +557 @@
                  * virtio-mmio device with an ID 0 is a (dummy) placeholder
                  * with no function. End probing now with no error reported.
                  */
-                rc = -ENODEV;
-                goto unmap;
+                return -ENODEV;
         }
         vm_dev->vdev.id.vendor = readl(vm_dev->base + VIRTIO_MMIO_VENDOR_ID);
 
@@ -590 +583 @@
         platform_set_drvdata(pdev, vm_dev);
 
         rc = register_virtio_device(&vm_dev->vdev);
-        if (rc) {
-                iounmap(vm_dev->base);
-                devm_release_mem_region(&pdev->dev, mem->start,
-                                resource_size(mem));
+        if (rc)
                 put_device(&vm_dev->vdev.dev);
-        }
-        return rc;
-unmap:
-        iounmap(vm_dev->base);
-free_mem:
-        devm_release_mem_region(&pdev->dev, mem->start,
-                        resource_size(mem));
-free_vmdev:
-        devm_kfree(&pdev->dev, vm_dev);
+
         return rc;
 }
 
 static int virtio_mmio_remove(struct platform_device *pdev)
 {
         struct virtio_mmio_device *vm_dev = platform_get_drvdata(pdev);
-        struct resource *mem;
-
-        iounmap(vm_dev->base);
-        mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-        if (mem)
-                devm_release_mem_region(&pdev->dev, mem->start,
-                        resource_size(mem));
         unregister_virtio_device(&vm_dev->vdev);
 
         return 0;
+1-1
drivers/xen/Kconfig
@@ -269 +269 @@
 
 config XEN_ACPI_PROCESSOR
         tristate "Xen ACPI processor"
-        depends on XEN && X86 && ACPI_PROCESSOR && CPU_FREQ
+        depends on XEN && XEN_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ
         default m
         help
           This ACPI processor uploads Power Management information to the Xen
-1
fs/autofs4/waitq.c
@@ -170 +170 @@
 
         mutex_unlock(&sbi->wq_mutex);
 
-        if (autofs4_write(sbi, pipe, &pkt, pktsz))
         switch (ret = autofs4_write(sbi, pipe, &pkt, pktsz)) {
         case 0:
                 break;
+12-6
fs/btrfs/ctree.c
@@ -1032 +1032 @@
             root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) &&
             !(flags & BTRFS_BLOCK_FLAG_FULL_BACKREF)) {
                 ret = btrfs_inc_ref(trans, root, buf, 1);
-                BUG_ON(ret); /* -ENOMEM */
+                if (ret)
+                        return ret;
 
                 if (root->root_key.objectid ==
                     BTRFS_TREE_RELOC_OBJECTID) {
                         ret = btrfs_dec_ref(trans, root, buf, 0);
-                        BUG_ON(ret); /* -ENOMEM */
+                        if (ret)
+                                return ret;
                         ret = btrfs_inc_ref(trans, root, cow, 1);
-                        BUG_ON(ret); /* -ENOMEM */
+                        if (ret)
+                                return ret;
                 }
                 new_flags |= BTRFS_BLOCK_FLAG_FULL_BACKREF;
         } else {
@@ -1049 +1052 @@
                         ret = btrfs_inc_ref(trans, root, cow, 1);
                 else
                         ret = btrfs_inc_ref(trans, root, cow, 0);
-                BUG_ON(ret); /* -ENOMEM */
+                if (ret)
+                        return ret;
         }
         if (new_flags != 0) {
                 int level = btrfs_header_level(buf);
@@ -1068 +1072 @@
                         ret = btrfs_inc_ref(trans, root, cow, 1);
                 else
                         ret = btrfs_inc_ref(trans, root, cow, 0);
-                BUG_ON(ret); /* -ENOMEM */
+                if (ret)
+                        return ret;
                 ret = btrfs_dec_ref(trans, root, buf, 1);
-                BUG_ON(ret); /* -ENOMEM */
+                if (ret)
+                        return ret;
         }
         clean_tree_block(fs_info, buf);
         *last_ref = 1;
+5-7
fs/btrfs/disk-io.c
@@ -3231 +3231 @@
         int errors = 0;
         u32 crc;
         u64 bytenr;
+        int op_flags;
 
         if (max_mirrors == 0)
                 max_mirrors = BTRFS_SUPER_MIRROR_MAX;
@@ -3273 +3274 @@
                  * we fua the first super. The others we allow
                  * to go down lazy.
                  */
-                if (i == 0) {
-                        ret = btrfsic_submit_bh(REQ_OP_WRITE,
-                                REQ_SYNC | REQ_FUA | REQ_META | REQ_PRIO, bh);
-                } else {
-                        ret = btrfsic_submit_bh(REQ_OP_WRITE,
-                                REQ_SYNC | REQ_META | REQ_PRIO, bh);
-                }
+                op_flags = REQ_SYNC | REQ_META | REQ_PRIO;
+                if (i == 0 && !btrfs_test_opt(device->fs_info, NOBARRIER))
+                        op_flags |= REQ_FUA;
+                ret = btrfsic_submit_bh(REQ_OP_WRITE, op_flags, bh);
                 if (ret)
                         errors++;
         }
fs/dax.c
@@ -627 +627 @@
 
                 if (pfn != pmd_pfn(*pmdp))
                         goto unlock_pmd;
-                if (!pmd_dirty(*pmdp)
-                                && !pmd_access_permitted(*pmdp, WRITE))
+                if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
                         goto unlock_pmd;
 
                 flush_cache_page(vma, address, pfn);
+3-4
fs/exec.c
@@ -1216 +1216 @@
         return -EAGAIN;
 }
 
-char *get_task_comm(char *buf, struct task_struct *tsk)
+char *__get_task_comm(char *buf, size_t buf_size, struct task_struct *tsk)
 {
-        /* buf must be at least sizeof(tsk->comm) in size */
         task_lock(tsk);
-        strncpy(buf, tsk->comm, sizeof(tsk->comm));
+        strncpy(buf, tsk->comm, buf_size);
         task_unlock(tsk);
         return buf;
 }
-EXPORT_SYMBOL_GPL(get_task_comm);
+EXPORT_SYMBOL_GPL(__get_task_comm);
 
 /*
  * These functions flushes out all traces of the currently running executable
···235235 ei = kmem_cache_alloc(hpfs_inode_cachep, GFP_NOFS);236236 if (!ei)237237 return NULL;238238- ei->vfs_inode.i_version = 1;239238 return &ei->vfs_inode;240239}241240
+11
fs/nfs/client.c
···291291 const struct sockaddr *sap = data->addr;292292 struct nfs_net *nn = net_generic(data->net, nfs_net_id);293293294294+again:294295 list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {295296 const struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr;296297 /* Don't match clients that failed to initialise properly */297298 if (clp->cl_cons_state < 0)298299 continue;300300+301301+ /* If a client is still initializing then we need to wait */302302+ if (clp->cl_cons_state > NFS_CS_READY) {303303+ refcount_inc(&clp->cl_count);304304+ spin_unlock(&nn->nfs_client_lock);305305+ nfs_wait_client_init_complete(clp);306306+ nfs_put_client(clp);307307+ spin_lock(&nn->nfs_client_lock);308308+ goto again;309309+ }299310300311 /* Different NFS versions cannot share the same nfs_client */301312 if (clp->rpc_ops != data->nfs_mod->rpc_ops)
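The new `goto again` path is a drop-lock, wait, rescan loop: a client that is still initializing is pinned, the list lock is dropped while waiting for initialization to complete, and the scan restarts from the head because the list may have changed in the meantime. A control-flow-only sketch, with a simplified struct and a `wait_init_complete()` stand-in that completes instantly (the real code also takes and drops `nfs_client_lock` around the wait):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified states mirroring cl_cons_state: negative means init
 * failed, 0 means ready, greater than 0 means still initializing. */
enum { CS_READY = 0, CS_INITING = 1 };

struct clnt {
	int state;
	int waits;	/* how many times we waited on this client */
};

/* Stand-in for nfs_wait_client_init_complete(); the kernel sleeps
 * here, this sketch just finishes the initialization. */
static void wait_init_complete(struct clnt *c)
{
	c->state = CS_READY;
	c->waits++;
}

static struct clnt *match_client(struct clnt *list, size_t n)
{
again:
	for (size_t i = 0; i < n; i++) {
		struct clnt *c = &list[i];

		if (c->state < 0)	/* failed to initialise properly */
			continue;
		if (c->state > CS_READY) {
			/* Lock dropped (elided), wait, then rescan. */
			wait_init_complete(c);
			goto again;
		}
		return c;
	}
	return NULL;
}
```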
+13-4
fs/nfs/nfs4client.c
···404404 if (error < 0)405405 goto error;406406407407- if (!nfs4_has_session(clp))408408- nfs_mark_client_ready(clp, NFS_CS_READY);409409-410407 error = nfs4_discover_server_trunking(clp, &old);411408 if (error < 0)412409 goto error;413410414414- if (clp != old)411411+ if (clp != old) {415412 clp->cl_preserve_clid = true;413413+ /*414414+ * Mark the client as having failed initialization so other415415+ * processes walking the nfs_client_list in nfs_match_client()416416+ * won't try to use it.417417+ */418418+ nfs_mark_client_ready(clp, -EPERM);419419+ }416420 nfs_put_client(clp);417421 clear_bit(NFS_CS_TSM_POSSIBLE, &clp->cl_flags);418422 return old;···543539 spin_lock(&nn->nfs_client_lock);544540 list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) {545541542542+ if (pos == new)543543+ goto found;544544+546545 status = nfs4_match_client(pos, new, &prev, nn);547546 if (status < 0)548547 goto out_unlock;···566559 * way that a SETCLIENTID_CONFIRM to pos can succeed is567560 * if new and pos point to the same server:568561 */562562+found:569563 refcount_inc(&pos->cl_count);570564 spin_unlock(&nn->nfs_client_lock);571565···580572 case 0:581573 nfs4_swap_callback_idents(pos, new);582574 pos->cl_confirm = new->cl_confirm;575575+ nfs_mark_client_ready(pos, NFS_CS_READY);583576584577 prev = NULL;585578 *result = pos;
+2
fs/nfs/write.c
···18901890 if (res)18911891 error = nfs_generic_commit_list(inode, &head, how, &cinfo);18921892 nfs_commit_end(cinfo.mds);18931893+ if (res == 0)18941894+ return res;18931895 if (error < 0)18941896 goto out_error;18951897 if (!may_wait)
+3
fs/nfsd/auth.c
···6060 gi->gid[i] = exp->ex_anon_gid;6161 else6262 gi->gid[i] = rqgi->gid[i];6363+6464+ /* Each thread allocates its own gi, no race */6565+ groups_sort(gi);6366 }6467 } else {6568 gi = get_group_info(rqgi);
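`groups_sort()` has to run before the gid array is first used because group-membership checks (`groups_search()`) binary-search a sorted array. A user-space sketch of that invariant with `qsort()`/`bsearch()`; the helper names are invented:

```c
#include <assert.h>
#include <stdlib.h>

static int gid_cmp(const void *a, const void *b)
{
	unsigned int ga = *(const unsigned int *)a;
	unsigned int gb = *(const unsigned int *)b;

	return (ga > gb) - (ga < gb);
}

/* What groups_sort() establishes: ascending gid order. */
static void groups_sort_sketch(unsigned int *gids, size_t n)
{
	qsort(gids, n, sizeof(*gids), gid_cmp);
}

/* What groups_search() relies on: binary search over sorted gids.
 * Searching an unsorted array here would silently miss entries. */
static int in_group_sketch(const unsigned int *gids, size_t n,
			   unsigned int gid)
{
	return bsearch(&gid, gids, n, sizeof(*gids), gid_cmp) != NULL;
}
```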
+10
fs/overlayfs/Kconfig
···2424 an overlay which has redirects on a kernel that doesn't support this2525 feature will have unexpected results.26262727+config OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW2828+ bool "Overlayfs: follow redirects even if redirects are turned off"2929+ default y3030+ depends on OVERLAY_FS3131+ help3232+ Disable this to get a possibly more secure configuration, but that3333+ might not be backward compatible with previous kernels.3434+3535+ For more information, see Documentation/filesystems/overlayfs.txt3636+2737config OVERLAY_FS_INDEX2838 bool "Overlayfs: turn on inodes index feature by default"2939 depends on OVERLAY_FS
+2-1
fs/overlayfs/dir.c
···887887 spin_unlock(&dentry->d_lock);888888 } else {889889 kfree(redirect);890890- pr_warn_ratelimited("overlay: failed to set redirect (%i)\n", err);890890+ pr_warn_ratelimited("overlayfs: failed to set redirect (%i)\n",891891+ err);891892 /* Fall back to userspace copy-up */892893 err = -EXDEV;893894 }
+17-1
fs/overlayfs/namei.c
···435435436436 /* Check if index is orphan and don't warn before cleaning it */437437 if (d_inode(index)->i_nlink == 1 &&438438- ovl_get_nlink(index, origin.dentry, 0) == 0)438438+ ovl_get_nlink(origin.dentry, index, 0) == 0)439439 err = -ENOENT;440440441441 dput(origin.dentry);···680680681681 if (d.stop)682682 break;683683+684684+ /*685685+ * Following redirects can have security consequences: it's like686686+ * a symlink into the lower layer without the permission checks.687687+ * This is only a problem if the upper layer is untrusted (e.g688688+ * comes from an USB drive). This can allow a non-readable file689689+ * or directory to become readable.690690+ *691691+ * Only following redirects when redirects are enabled disables692692+ * this attack vector when not necessary.693693+ */694694+ err = -EPERM;695695+ if (d.redirect && !ofs->config.redirect_follow) {696696+ pr_warn_ratelimited("overlay: refusing to follow redirect for (%pd2)\n", dentry);697697+ goto out_put;698698+ }683699684700 if (d.redirect && d.redirect[0] == '/' && poe != roe) {685701 poe = roe;
···499499 return err;500500501501fail:502502- pr_warn_ratelimited("overlay: failed to look up (%s) for ino (%i)\n",502502+ pr_warn_ratelimited("overlayfs: failed to look up (%s) for ino (%i)\n",503503 p->name, err);504504 goto out;505505}···663663 return PTR_ERR(rdt.cache);664664 }665665666666- return iterate_dir(od->realfile, &rdt.ctx);666666+ err = iterate_dir(od->realfile, &rdt.ctx);667667+ ctx->pos = rdt.ctx.pos;668668+669669+ return err;667670}668671669672
+66-21
fs/overlayfs/super.c
···3333MODULE_PARM_DESC(ovl_redirect_dir_def,3434 "Default to on or off for the redirect_dir feature");35353636+static bool ovl_redirect_always_follow =3737+ IS_ENABLED(CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW);3838+module_param_named(redirect_always_follow, ovl_redirect_always_follow,3939+ bool, 0644);4040+MODULE_PARM_DESC(ovl_redirect_always_follow,4141+ "Follow redirects even if redirect_dir feature is turned off");4242+3643static bool ovl_index_def = IS_ENABLED(CONFIG_OVERLAY_FS_INDEX);3744module_param_named(index, ovl_index_def, bool, 0644);3845MODULE_PARM_DESC(ovl_index_def,···239232 kfree(ofs->config.lowerdir);240233 kfree(ofs->config.upperdir);241234 kfree(ofs->config.workdir);235235+ kfree(ofs->config.redirect_mode);242236 if (ofs->creator_cred)243237 put_cred(ofs->creator_cred);244238 kfree(ofs);···252244 ovl_free_fs(ofs);253245}254246247247+/* Sync real dirty inodes in upper filesystem (if it exists) */255248static int ovl_sync_fs(struct super_block *sb, int wait)256249{257250 struct ovl_fs *ofs = sb->s_fs_info;···261252262253 if (!ofs->upper_mnt)263254 return 0;264264- upper_sb = ofs->upper_mnt->mnt_sb;265265- if (!upper_sb->s_op->sync_fs)255255+256256+ /*257257+ * If this is a sync(2) call or an emergency sync, all the super blocks258258+ * will be iterated, including upper_sb, so no need to do anything.259259+ *260260+ * If this is a syncfs(2) call, then we do need to call261261+ * sync_filesystem() on upper_sb, but enough if we do it when being262262+ * called with wait == 1.263263+ */264264+ if (!wait)266265 return 0;267266268268- /* real inodes have already been synced by sync_filesystem(ovl_sb) */267267+ upper_sb = ofs->upper_mnt->mnt_sb;268268+269269 down_read(&upper_sb->s_umount);270270- ret = upper_sb->s_op->sync_fs(upper_sb, wait);270270+ ret = sync_filesystem(upper_sb);271271 up_read(&upper_sb->s_umount);272272+272273 return ret;273274}274275···314295 return (!ofs->upper_mnt || !ofs->workdir);315296}316297298298+static const char 
*ovl_redirect_mode_def(void)299299+{300300+ return ovl_redirect_dir_def ? "on" : "off";301301+}302302+317303/**318304 * ovl_show_options319305 *···337313 }338314 if (ofs->config.default_permissions)339315 seq_puts(m, ",default_permissions");340340- if (ofs->config.redirect_dir != ovl_redirect_dir_def)341341- seq_printf(m, ",redirect_dir=%s",342342- ofs->config.redirect_dir ? "on" : "off");316316+ if (strcmp(ofs->config.redirect_mode, ovl_redirect_mode_def()) != 0)317317+ seq_printf(m, ",redirect_dir=%s", ofs->config.redirect_mode);343318 if (ofs->config.index != ovl_index_def)344344- seq_printf(m, ",index=%s",345345- ofs->config.index ? "on" : "off");319319+ seq_printf(m, ",index=%s", ofs->config.index ? "on" : "off");346320 return 0;347321}348322···370348 OPT_UPPERDIR,371349 OPT_WORKDIR,372350 OPT_DEFAULT_PERMISSIONS,373373- OPT_REDIRECT_DIR_ON,374374- OPT_REDIRECT_DIR_OFF,351351+ OPT_REDIRECT_DIR,375352 OPT_INDEX_ON,376353 OPT_INDEX_OFF,377354 OPT_ERR,···381360 {OPT_UPPERDIR, "upperdir=%s"},382361 {OPT_WORKDIR, "workdir=%s"},383362 {OPT_DEFAULT_PERMISSIONS, "default_permissions"},384384- {OPT_REDIRECT_DIR_ON, "redirect_dir=on"},385385- {OPT_REDIRECT_DIR_OFF, "redirect_dir=off"},363363+ {OPT_REDIRECT_DIR, "redirect_dir=%s"},386364 {OPT_INDEX_ON, "index=on"},387365 {OPT_INDEX_OFF, "index=off"},388366 {OPT_ERR, NULL}···410390 return sbegin;411391}412392393393+static int ovl_parse_redirect_mode(struct ovl_config *config, const char *mode)394394+{395395+ if (strcmp(mode, "on") == 0) {396396+ config->redirect_dir = true;397397+ /*398398+ * Does not make sense to have redirect creation without399399+ * redirect following.400400+ */401401+ config->redirect_follow = true;402402+ } else if (strcmp(mode, "follow") == 0) {403403+ config->redirect_follow = true;404404+ } else if (strcmp(mode, "off") == 0) {405405+ if (ovl_redirect_always_follow)406406+ config->redirect_follow = true;407407+ } else if (strcmp(mode, "nofollow") != 0) {408408+ pr_err("overlayfs: bad mount option 
\"redirect_dir=%s\"\n",409409+ mode);410410+ return -EINVAL;411411+ }412412+413413+ return 0;414414+}415415+413416static int ovl_parse_opt(char *opt, struct ovl_config *config)414417{415418 char *p;419419+420420+ config->redirect_mode = kstrdup(ovl_redirect_mode_def(), GFP_KERNEL);421421+ if (!config->redirect_mode)422422+ return -ENOMEM;416423417424 while ((p = ovl_next_opt(&opt)) != NULL) {418425 int token;···475428 config->default_permissions = true;476429 break;477430478478- case OPT_REDIRECT_DIR_ON:479479- config->redirect_dir = true;480480- break;481481-482482- case OPT_REDIRECT_DIR_OFF:483483- config->redirect_dir = false;431431+ case OPT_REDIRECT_DIR:432432+ kfree(config->redirect_mode);433433+ config->redirect_mode = match_strdup(&args[0]);434434+ if (!config->redirect_mode)435435+ return -ENOMEM;484436 break;485437486438 case OPT_INDEX_ON:···504458 config->workdir = NULL;505459 }506460507507- return 0;461461+ return ovl_parse_redirect_mode(config, config->redirect_mode);508462}509463510464#define OVL_WORKDIR_NAME "work"···12061160 if (!cred)12071161 goto out_err;1208116212091209- ofs->config.redirect_dir = ovl_redirect_dir_def;12101163 ofs->config.index = ovl_index_def;12111164 err = ovl_parse_opt((char *) data, &ofs->config);12121165 if (err)
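The four `redirect_dir` mount modes map onto two internal booleans, redirect creation (`redirect_dir`) and redirect following (`redirect_follow`). A stand-alone sketch of the mapping done by `ovl_parse_redirect_mode()`, with `always_follow` standing in for the `OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW` config/module default:

```c
#include <assert.h>
#include <string.h>

struct redirect_cfg {
	int create;	/* create new redirects (redirect_dir) */
	int follow;	/* follow existing redirects (redirect_follow) */
};

static int parse_redirect_mode(struct redirect_cfg *cfg, const char *mode,
			       int always_follow)
{
	cfg->create = 0;
	cfg->follow = 0;

	if (strcmp(mode, "on") == 0) {
		/* Creating redirects without following them makes no sense. */
		cfg->create = 1;
		cfg->follow = 1;
	} else if (strcmp(mode, "follow") == 0) {
		cfg->follow = 1;
	} else if (strcmp(mode, "off") == 0) {
		cfg->follow = always_follow;
	} else if (strcmp(mode, "nofollow") != 0) {
		return -1;	/* -EINVAL in the kernel */
	}
	return 0;
}
```

Note that "off" is only equivalent to "nofollow" when the always-follow default is disabled, which is exactly the backward-compatibility wrinkle the new Kconfig option documents.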
+3-7
fs/xfs/libxfs/xfs_ialloc.c
···920920xfs_ialloc_ag_select(921921 xfs_trans_t *tp, /* transaction pointer */922922 xfs_ino_t parent, /* parent directory inode number */923923- umode_t mode, /* bits set to indicate file type */924924- int okalloc) /* ok to allocate more space */923923+ umode_t mode) /* bits set to indicate file type */925924{926925 xfs_agnumber_t agcount; /* number of ag's in the filesystem */927926 xfs_agnumber_t agno; /* current ag number */···976977 xfs_perag_put(pag);977978 return agno;978979 }979979-980980- if (!okalloc)981981- goto nextag;982980983981 if (!pag->pagf_init) {984982 error = xfs_alloc_pagf_init(mp, tp, agno, flags);···16761680 struct xfs_trans *tp,16771681 xfs_ino_t parent,16781682 umode_t mode,16791679- int okalloc,16801683 struct xfs_buf **IO_agbp,16811684 xfs_ino_t *inop)16821685{···16871692 int noroom = 0;16881693 xfs_agnumber_t start_agno;16891694 struct xfs_perag *pag;16951695+ int okalloc = 1;1690169616911697 if (*IO_agbp) {16921698 /*···17031707 * We do not have an agbp, so select an initial allocation17041708 * group for inode allocation.17051709 */17061706- start_agno = xfs_ialloc_ag_select(tp, parent, mode, okalloc);17101710+ start_agno = xfs_ialloc_ag_select(tp, parent, mode);17071711 if (start_agno == NULLAGNUMBER) {17081712 *inop = NULLFSINO;17091713 return 0;
-1
fs/xfs/libxfs/xfs_ialloc.h
···8181 struct xfs_trans *tp, /* transaction pointer */8282 xfs_ino_t parent, /* parent inode (directory) */8383 umode_t mode, /* mode bits for new inode */8484- int okalloc, /* ok to allocate more space */8584 struct xfs_buf **agbp, /* buf for a.g. inode header */8685 xfs_ino_t *inop); /* inode number allocated */8786
···749749 xfs_nlink_t nlink,750750 dev_t rdev,751751 prid_t prid,752752- int okalloc,753752 xfs_buf_t **ialloc_context,754753 xfs_inode_t **ipp)755754{···764765 * Call the space management code to pick765766 * the on-disk inode to be allocated.766767 */767767- error = xfs_dialloc(tp, pip ? pip->i_ino : 0, mode, okalloc,768768+ error = xfs_dialloc(tp, pip ? pip->i_ino : 0, mode,768769 ialloc_context, &ino);769770 if (error)770771 return error;···956957 xfs_nlink_t nlink,957958 dev_t rdev,958959 prid_t prid, /* project id */959959- int okalloc, /* ok to allocate new space */960960 xfs_inode_t **ipp, /* pointer to inode; it will be961961 locked. */962962 int *committed)···986988 * transaction commit so that no other process can steal987989 * the inode(s) that we've just allocated.988990 */989989- code = xfs_ialloc(tp, dp, mode, nlink, rdev, prid, okalloc,990990- &ialloc_context, &ip);991991+ code = xfs_ialloc(tp, dp, mode, nlink, rdev, prid, &ialloc_context,992992+ &ip);991993992994 /*993995 * Return an error if we were unable to allocate a new inode.···10591061 * this call should always succeed.10601062 */10611063 code = xfs_ialloc(tp, dp, mode, nlink, rdev, prid,10621062- okalloc, &ialloc_context, &ip);10641064+ &ialloc_context, &ip);1063106510641066 /*10651067 * If we get an error at this point, return to the caller···11801182 xfs_flush_inodes(mp);11811183 error = xfs_trans_alloc(mp, tres, resblks, 0, 0, &tp);11821184 }11831183- if (error == -ENOSPC) {11841184- /* No space at all so try a "no-allocation" reservation */11851185- resblks = 0;11861186- error = xfs_trans_alloc(mp, tres, 0, 0, 0, &tp);11871187- }11881185 if (error)11891186 goto out_release_inode;11901187···11961203 if (error)11971204 goto out_trans_cancel;1198120511991199- if (!resblks) {12001200- error = xfs_dir_canenter(tp, dp, name);12011201- if (error)12021202- goto out_trans_cancel;12031203- }12041204-12051206 /*12061207 * A newly created regular or special file just has one directory12071208 * 
entry pointing to them, but a directory also the "." entry12081209 * pointing to itself.12091210 */12101210- error = xfs_dir_ialloc(&tp, dp, mode, is_dir ? 2 : 1, rdev,12111211- prid, resblks > 0, &ip, NULL);12111211+ error = xfs_dir_ialloc(&tp, dp, mode, is_dir ? 2 : 1, rdev, prid, &ip,12121212+ NULL);12121213 if (error)12131214 goto out_trans_cancel;12141215···13271340 tres = &M_RES(mp)->tr_create_tmpfile;1328134113291342 error = xfs_trans_alloc(mp, tres, resblks, 0, 0, &tp);13301330- if (error == -ENOSPC) {13311331- /* No space at all so try a "no-allocation" reservation */13321332- resblks = 0;13331333- error = xfs_trans_alloc(mp, tres, 0, 0, 0, &tp);13341334- }13351343 if (error)13361344 goto out_release_inode;13371345···13351353 if (error)13361354 goto out_trans_cancel;1337135513381338- error = xfs_dir_ialloc(&tp, dp, mode, 1, 0,13391339- prid, resblks > 0, &ip, NULL);13561356+ error = xfs_dir_ialloc(&tp, dp, mode, 1, 0, prid, &ip, NULL);13401357 if (error)13411358 goto out_trans_cancel;13421359
···2424#define __DRM_CONNECTOR_H__25252626#include <linux/list.h>2727+#include <linux/llist.h>2728#include <linux/ctype.h>2829#include <linux/hdmi.h>2930#include <drm/drm_mode_object.h>···919918 uint16_t tile_h_size, tile_v_size;920919921920 /**922922- * @free_work:921921+ * @free_node:923922 *924924- * Work used only by &drm_connector_iter to be able to clean up a925925- * connector from any context.923923+ * List used only by &drm_connector_iter to be able to clean up a924924+ * connector from any context, in conjunction with925925+ * &drm_mode_config.connector_free_work.926926 */927927- struct work_struct free_work;927927+ struct llist_node free_node;928928};929929930930#define obj_to_connector(x) container_of(x, struct drm_connector, base)
···220220/*221221 * Prevent the compiler from merging or refetching reads or writes. The222222 * compiler is also forbidden from reordering successive instances of223223- * READ_ONCE, WRITE_ONCE and ACCESS_ONCE (see below), but only when the224224- * compiler is aware of some particular ordering. One way to make the225225- * compiler aware of ordering is to put the two invocations of READ_ONCE,226226- * WRITE_ONCE or ACCESS_ONCE() in different C statements.223223+ * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some224224+ * particular ordering. One way to make the compiler aware of ordering is to225225+ * put the two invocations of READ_ONCE or WRITE_ONCE in different C226226+ * statements.227227 *228228- * In contrast to ACCESS_ONCE these two macros will also work on aggregate229229- * data types like structs or unions. If the size of the accessed data230230- * type exceeds the word size of the machine (e.g., 32 bits or 64 bits)231231- * READ_ONCE() and WRITE_ONCE() will fall back to memcpy(). There's at232232- * least two memcpy()s: one for the __builtin_memcpy() and then one for233233- * the macro doing the copy of variable - '__u' allocated on the stack.228228+ * These two macros will also work on aggregate data types like structs or229229+ * unions. If the size of the accessed data type exceeds the word size of230230+ * the machine (e.g., 32 bits or 64 bits) READ_ONCE() and WRITE_ONCE() will231231+ * fall back to memcpy(). 
There's at least two memcpy()s: one for the232232+ * __builtin_memcpy() and then one for the macro doing the copy of variable233233+ * - '__u' allocated on the stack.234234 *235235 * Their two major use cases are: (1) Mediating communication between236236 * process-level code and irq/NMI handlers, all running on the same CPU,237237- * and (2) Ensuring that the compiler does not fold, spindle, or otherwise237237+ * and (2) Ensuring that the compiler does not fold, spindle, or otherwise238238 * mutilate accesses that either do not require ordering or that interact239239 * with an explicit memory barrier or atomic instruction that provides the240240 * required ordering.···326326#define compiletime_assert_atomic_type(t) \327327 compiletime_assert(__native_word(t), \328328 "Need native word sized stores/loads for atomicity.")329329-330330-/*331331- * Prevent the compiler from merging or refetching accesses. The compiler332332- * is also forbidden from reordering successive instances of ACCESS_ONCE(),333333- * but only when the compiler is aware of some particular ordering. One way334334- * to make the compiler aware of ordering is to put the two invocations of335335- * ACCESS_ONCE() in different C statements.336336- *337337- * ACCESS_ONCE will only work on scalar types. 
For union types, ACCESS_ONCE338338- * on a union member will work as long as the size of the member matches the339339- * size of the union and the size is smaller than word size.340340- *341341- * The major use cases of ACCESS_ONCE used to be (1) Mediating communication342342- * between process-level code and irq/NMI handlers, all running on the same CPU,343343- * and (2) Ensuring that the compiler does not fold, spindle, or otherwise344344- * mutilate accesses that either do not require ordering or that interact345345- * with an explicit memory barrier or atomic instruction that provides the346346- * required ordering.347347- *348348- * If possible use READ_ONCE()/WRITE_ONCE() instead.349349- */350350-#define __ACCESS_ONCE(x) ({ \351351- __maybe_unused typeof(x) __var = (__force typeof(x)) 0; \352352- (volatile typeof(x) *)&(x); })353353-#define ACCESS_ONCE(x) (*__ACCESS_ONCE(x))354329355330#endif /* __LINUX_COMPILER_H */
···232232 struct mutex mutex;233233 struct kvm_run *run;234234235235- int guest_fpu_loaded, guest_xcr0_loaded;235235+ int guest_xcr0_loaded;236236 struct swait_queue_head wq;237237 struct pid __rcu *pid;238238 int sigset_active;
-125
include/linux/lockdep.h
···158158 int cpu;159159 unsigned long ip;160160#endif161161-#ifdef CONFIG_LOCKDEP_CROSSRELEASE162162- /*163163- * Whether it's a crosslock.164164- */165165- int cross;166166-#endif167161};168162169163static inline void lockdep_copy_map(struct lockdep_map *to,···261267 unsigned int hardirqs_off:1;262268 unsigned int references:12; /* 32 bits */263269 unsigned int pin_count;264264-#ifdef CONFIG_LOCKDEP_CROSSRELEASE265265- /*266266- * Generation id.267267- *268268- * A value of cross_gen_id will be stored when holding this,269269- * which is globally increased whenever each crosslock is held.270270- */271271- unsigned int gen_id;272272-#endif273270};274274-275275-#ifdef CONFIG_LOCKDEP_CROSSRELEASE276276-#define MAX_XHLOCK_TRACE_ENTRIES 5277277-278278-/*279279- * This is for keeping locks waiting for commit so that true dependencies280280- * can be added at commit step.281281- */282282-struct hist_lock {283283- /*284284- * Id for each entry in the ring buffer. This is used to285285- * decide whether the ring buffer was overwritten or not.286286- *287287- * For example,288288- *289289- * |<----------- hist_lock ring buffer size ------->|290290- * pppppppppppppppppppppiiiiiiiiiiiiiiiiiiiiiiiiiiiii291291- * wrapped > iiiiiiiiiiiiiiiiiiiiiiiiiii.......................292292- *293293- * where 'p' represents an acquisition in process294294- * context, 'i' represents an acquisition in irq295295- * context.296296- *297297- * In this example, the ring buffer was overwritten by298298- * acquisitions in irq context, that should be detected on299299- * rollback or commit.300300- */301301- unsigned int hist_id;302302-303303- /*304304- * Seperate stack_trace data. This will be used at commit step.305305- */306306- struct stack_trace trace;307307- unsigned long trace_entries[MAX_XHLOCK_TRACE_ENTRIES];308308-309309- /*310310- * Seperate hlock instance. This will be used at commit step.311311- *312312- * TODO: Use a smaller data structure containing only necessary313313- * data. 
However, we should make lockdep code able to handle the314314- * smaller one first.315315- */316316- struct held_lock hlock;317317-};318318-319319-/*320320- * To initialize a lock as crosslock, lockdep_init_map_crosslock() should321321- * be called instead of lockdep_init_map().322322- */323323-struct cross_lock {324324- /*325325- * When more than one acquisition of crosslocks are overlapped,326326- * we have to perform commit for them based on cross_gen_id of327327- * the first acquisition, which allows us to add more true328328- * dependencies.329329- *330330- * Moreover, when no acquisition of a crosslock is in progress,331331- * we should not perform commit because the lock might not exist332332- * any more, which might cause incorrect memory access. So we333333- * have to track the number of acquisitions of a crosslock.334334- */335335- int nr_acquire;336336-337337- /*338338- * Seperate hlock instance. This will be used at commit step.339339- *340340- * TODO: Use a smaller data structure containing only necessary341341- * data. However, we should make lockdep code able to handle the342342- * smaller one first.343343- */344344- struct held_lock hlock;345345-};346346-347347-struct lockdep_map_cross {348348- struct lockdep_map map;349349- struct cross_lock xlock;350350-};351351-#endif352271353272/*354273 * Initialization, self-test and debugging-output methods:···467560 XHLOCK_CTX_NR,468561};469562470470-#ifdef CONFIG_LOCKDEP_CROSSRELEASE471471-extern void lockdep_init_map_crosslock(struct lockdep_map *lock,472472- const char *name,473473- struct lock_class_key *key,474474- int subclass);475475-extern void lock_commit_crosslock(struct lockdep_map *lock);476476-477477-/*478478- * What we essencially have to initialize is 'nr_acquire'. 
Other members479479- * will be initialized in add_xlock().480480- */481481-#define STATIC_CROSS_LOCK_INIT() \482482- { .nr_acquire = 0,}483483-484484-#define STATIC_CROSS_LOCKDEP_MAP_INIT(_name, _key) \485485- { .map.name = (_name), .map.key = (void *)(_key), \486486- .map.cross = 1, .xlock = STATIC_CROSS_LOCK_INIT(), }487487-488488-/*489489- * To initialize a lockdep_map statically use this macro.490490- * Note that _name must not be NULL.491491- */492492-#define STATIC_LOCKDEP_MAP_INIT(_name, _key) \493493- { .name = (_name), .key = (void *)(_key), .cross = 0, }494494-495495-extern void crossrelease_hist_start(enum xhlock_context_t c);496496-extern void crossrelease_hist_end(enum xhlock_context_t c);497497-extern void lockdep_invariant_state(bool force);498498-extern void lockdep_init_task(struct task_struct *task);499499-extern void lockdep_free_task(struct task_struct *task);500500-#else /* !CROSSRELEASE */501563#define lockdep_init_map_crosslock(m, n, k, s) do {} while (0)502564/*503565 * To initialize a lockdep_map statically use this macro.···480604static inline void lockdep_invariant_state(bool force) {}481605static inline void lockdep_init_task(struct task_struct *task) {}482606static inline void lockdep_free_task(struct task_struct *task) {}483483-#endif /* CROSSRELEASE */484607485608#ifdef CONFIG_LOCK_STAT486609
+9
include/linux/oom.h
···6767}68686969/*7070+ * Use this helper if tsk->mm != mm and the victim mm needs a special7171+ * handling. This is guaranteed to stay true after once set.7272+ */7373+static inline bool mm_is_oom_victim(struct mm_struct *mm)7474+{7575+ return test_bit(MMF_OOM_VICTIM, &mm->flags);7676+}7777+7878+/*7079 * Checks whether a page fault on the given mm is still reliable.7180 * This is no longer true if the oom reaper started to reap the7281 * address space which is reflected by MMF_UNSTABLE flag set in
+3
include/linux/pci.h
···16751675static inline struct pci_dev *pci_get_bus_and_slot(unsigned int bus,16761676 unsigned int devfn)16771677{ return NULL; }16781678+static inline struct pci_dev *pci_get_domain_bus_and_slot(int domain,16791679+ unsigned int bus, unsigned int devfn)16801680+{ return NULL; }1678168116791682static inline int pci_domain_nr(struct pci_bus *bus) { return 0; }16801683static inline struct pci_dev *pci_dev_get(struct pci_dev *dev) { return NULL; }
···101101102102/* Note: callers invoking this in a loop must use a compiler barrier,103103 * for example cpu_relax(). Callers must hold producer_lock.104104+ * Callers are responsible for making sure pointer that is being queued105105+ * points to a valid data.104106 */105107static inline int __ptr_ring_produce(struct ptr_ring *r, void *ptr)106108{107109 if (unlikely(!r->size) || r->queue[r->producer])108110 return -ENOSPC;111111+112112+ /* Make sure the pointer we are storing points to a valid data. */113113+ /* Pairs with smp_read_barrier_depends in __ptr_ring_consume. */114114+ smp_wmb();109115110116 r->queue[r->producer++] = ptr;111117 if (unlikely(r->producer >= r->size))···281275 if (ptr)282276 __ptr_ring_discard_one(r);283277278278+ /* Make sure anyone accessing data through the pointer is up to date. */279279+ /* Pairs with smp_wmb in __ptr_ring_produce. */280280+ smp_read_barrier_depends();284281 return ptr;285282}286283
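The barriers added here form the usual publish/consume pairing: the producer must make the pointed-to data globally visible before publishing the slot pointer, and the consumer must order its data reads behind the pointer load. A one-slot sketch using C11 release/acquire in place of the kernel's `smp_wmb()`/`smp_read_barrier_depends()`:

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>
#include <stddef.h>

static _Atomic(void *) slot;

static int produce(void *ptr)
{
	if (atomic_load_explicit(&slot, memory_order_relaxed))
		return -ENOSPC;		/* slot still full */
	/* Release ordering plays the role of the smp_wmb() in
	 * __ptr_ring_produce(): the payload is visible before ptr. */
	atomic_store_explicit(&slot, ptr, memory_order_release);
	return 0;
}

static void *consume(void)
{
	/* Acquire ordering plays the role of the dependency barrier in
	 * __ptr_ring_consume(): reads through ptr see current data. */
	void *ptr = atomic_load_explicit(&slot, memory_order_acquire);

	if (ptr)
		atomic_store_explicit(&slot, NULL, memory_order_relaxed);
	return ptr;
}
```

The kernel can use the much cheaper address-dependency barrier on the consumer side because the data is only reached through the loaded pointer; acquire is the closest portable C11 equivalent.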
···1010 */1111typedef struct {1212 arch_rwlock_t raw_lock;1313-#ifdef CONFIG_GENERIC_LOCKBREAK1414- unsigned int break_lock;1515-#endif1613#ifdef CONFIG_DEBUG_SPINLOCK1714 unsigned int magic, owner_cpu;1815 void *owner;
+5-12
include/linux/sched.h
···849849 struct held_lock held_locks[MAX_LOCK_DEPTH];850850#endif851851852852-#ifdef CONFIG_LOCKDEP_CROSSRELEASE853853-#define MAX_XHLOCKS_NR 64UL854854- struct hist_lock *xhlocks; /* Crossrelease history locks */855855- unsigned int xhlock_idx;856856- /* For restoring at history boundaries */857857- unsigned int xhlock_idx_hist[XHLOCK_CTX_NR];858858- unsigned int hist_id;859859- /* For overwrite check at each context exit */860860- unsigned int hist_id_save[XHLOCK_CTX_NR];861861-#endif862862-863852#ifdef CONFIG_UBSAN864853 unsigned int in_ubsan;865854#endif···14921503 __set_task_comm(tsk, from, false);14931504}1494150514951495-extern char *get_task_comm(char *to, struct task_struct *tsk);15061506+extern char *__get_task_comm(char *to, size_t len, struct task_struct *tsk);15071507+#define get_task_comm(buf, tsk) ({ \15081508+ BUILD_BUG_ON(sizeof(buf) != TASK_COMM_LEN); \15091509+ __get_task_comm(buf, sizeof(buf), tsk); \15101510+})1496151114971512#ifdef CONFIG_SMP14981513void scheduler_ipi(void);
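The `get_task_comm()` macro works because `sizeof(buf)` on a true `char[TASK_COMM_LEN]` array yields 16, while a plain `char *` argument yields the pointer size and trips the `BUILD_BUG_ON()` at compile time. A user-space sketch of the same trick using a C11 static assertion; the `struct task` and helper names are simplified stand-ins:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define TASK_COMM_LEN 16	/* matches the kernel's definition */

struct task {
	char comm[TASK_COMM_LEN];
};

static char *__get_comm_sketch(char *buf, size_t buf_size,
			       const struct task *t)
{
	strncpy(buf, t->comm, buf_size);
	buf[buf_size - 1] = '\0';	/* always NUL-terminated */
	return buf;
}

/* Mimics the kernel's BUILD_BUG_ON(sizeof(buf) != TASK_COMM_LEN):
 * only a char[TASK_COMM_LEN] array compiles, a char * does not. */
#define get_comm_sketch(buf, t) ({					\
	_Static_assert(sizeof(buf) == TASK_COMM_LEN,			\
		       "buf must be char[TASK_COMM_LEN]");		\
	__get_comm_sketch(buf, sizeof(buf), t); })
```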
+1
include/linux/sched/coredump.h
···7070#define MMF_UNSTABLE 22 /* mm is unstable for copy_from_user */7171#define MMF_HUGE_ZERO_PAGE 23 /* mm has ever used the global huge zero page */7272#define MMF_DISABLE_THP 24 /* disable THP for all VMAs */7373+#define MMF_OOM_VICTIM 25 /* mm is the oom victim */7374#define MMF_DISABLE_THP_MASK (1 << MMF_DISABLE_THP)74757576#define MMF_INIT_MASK (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
-5
include/linux/spinlock.h
···107107108108#define raw_spin_is_locked(lock) arch_spin_is_locked(&(lock)->raw_lock)109109110110-#ifdef CONFIG_GENERIC_LOCKBREAK111111-#define raw_spin_is_contended(lock) ((lock)->break_lock)112112-#else113113-114110#ifdef arch_spin_is_contended115111#define raw_spin_is_contended(lock) arch_spin_is_contended(&(lock)->raw_lock)116112#else117113#define raw_spin_is_contended(lock) (((void)(lock), 0))118114#endif /*arch_spin_is_contended*/119119-#endif120115121116/*122117 * This barrier must provide two things:
-3
include/linux/spinlock_types.h
···19192020typedef struct raw_spinlock {2121 arch_spinlock_t raw_lock;2222-#ifdef CONFIG_GENERIC_LOCKBREAK2323- unsigned int break_lock;2424-#endif2522#ifdef CONFIG_DEBUG_SPINLOCK2623 unsigned int magic, owner_cpu;2724 void *owner;
+4-1
include/linux/string.h
···259259{260260 __kernel_size_t ret;261261 size_t p_size = __builtin_object_size(p, 0);262262- if (p_size == (size_t)-1)262262+263263+ /* Work around gcc excess stack consumption issue */264264+ if (p_size == (size_t)-1 ||265265+ (__builtin_constant_p(p[p_size - 1]) && p[p_size - 1] == '\0'))263266 return __builtin_strlen(p);264267 ret = strnlen(p, p_size);265268 if (p_size <= ret)
···4444#else4545#error "Please fix <asm/byteorder.h>"4646#endif4747- __u8 proto_ctype;4848- __u16 flags;4747+ __u8 proto_ctype;4848+ __be16 flags;4949 };5050- __u32 word;5050+ __be32 word;5151 };5252};5353···8484 * if there is an unknown standard or private flags, or the options length for8585 * the flags exceeds the options length specific in hlen of the GUE header.8686 */8787-static inline int validate_gue_flags(struct guehdr *guehdr,8888- size_t optlen)8787+static inline int validate_gue_flags(struct guehdr *guehdr, size_t optlen)8988{8989+ __be16 flags = guehdr->flags;9090 size_t len;9191- __be32 flags = guehdr->flags;92919392 if (flags & ~GUE_FLAGS_ALL)9493 return 1;···100101 /* Private flags are last four bytes accounted in101102 * guehdr_flags_len102103 */103103- flags = *(__be32 *)((void *)&guehdr[1] + len - GUE_LEN_PRIV);104104+ __be32 pflags = *(__be32 *)((void *)&guehdr[1] +105105+ len - GUE_LEN_PRIV);104106105105- if (flags & ~GUE_PFLAGS_ALL)107107+ if (pflags & ~GUE_PFLAGS_ALL)106108 return 1;107109108108- len += guehdr_priv_flags_len(flags);110110+ len += guehdr_priv_flags_len(pflags);109111 if (len > optlen)110112 return 1;111113 }
···589589 radix_tree_init();590590591591 /*592592+ * Set up housekeeping before setting up workqueues to allow the unbound593593+ * workqueue to take non-housekeeping into account.594594+ */595595+ housekeeping_init();596596+597597+ /*592598 * Allow workqueue creation and work item queueing/cancelling593599 * early. Work item execution depends on kthreads and starts after594600 * workqueue_init().···611605 early_irq_init();612606 init_IRQ();613607 tick_init();614614- housekeeping_init();615608 rcu_init_nohz();616609 init_timers();617610 hrtimers_init();
···17551755 return -EFAULT;17561756}17571757#endif17581758+17591759+__weak void abort(void)17601760+{17611761+ BUG();17621762+17631763+ /* if that doesn't kill us, halt */17641764+ panic("Oops failed to kill thread");17651765+}
kernel/futex.c | +2 -2
···
 {
	unsigned int op =	  (encoded_op & 0x70000000) >> 28;
	unsigned int cmp =	  (encoded_op & 0x0f000000) >> 24;
-	int oparg = sign_extend32((encoded_op & 0x00fff000) >> 12, 12);
-	int cmparg = sign_extend32(encoded_op & 0x00000fff, 12);
+	int oparg = sign_extend32((encoded_op & 0x00fff000) >> 12, 11);
+	int cmparg = sign_extend32(encoded_op & 0x00000fff, 11);
	int oldval, ret;

	if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) {
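The off-by-one fixed above is subtle: FUTEX_OP packs `oparg` and `cmparg` into 12-bit fields, and sign_extend32() takes the *bit index* of the sign bit (11 for a 12-bit field), not the field width. A minimal userspace model, with the kernel helper reimplemented here since it isn't available outside the tree:

```c
#include <stdint.h>

/* Userspace re-implementation of the kernel's sign_extend32();
 * @index is the bit position of the sign bit, NOT the field width. */
static int32_t sign_extend32(uint32_t value, int index)
{
	uint8_t shift = 31 - index;

	return (int32_t)(value << shift) >> shift;
}

/* Extract the 12-bit oparg field the way the fixed code does. */
static int32_t futex_oparg(uint32_t encoded_op)
{
	return sign_extend32((encoded_op & 0x00fff000) >> 12, 11);
}
```

With the old `index = 12`, an all-ones field 0xfff decoded to +4095 instead of -1, so negative futex operands were silently corrupted.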
···
 #define CREATE_TRACE_POINTS
 #include <trace/events/lock.h>

-#ifdef CONFIG_LOCKDEP_CROSSRELEASE
-#include <linux/slab.h>
-#endif
-
 #ifdef CONFIG_PROVE_LOCKING
 int prove_locking = 1;
 module_param(prove_locking, int, 0644);
···
 #else
 #define lock_stat 0
 #endif
-
-#ifdef CONFIG_BOOTPARAM_LOCKDEP_CROSSRELEASE_FULLSTACK
-static int crossrelease_fullstack = 1;
-#else
-static int crossrelease_fullstack;
-#endif
-static int __init allow_crossrelease_fullstack(char *str)
-{
-	crossrelease_fullstack = 1;
-	return 0;
-}
-
-early_param("crossrelease_fullstack", allow_crossrelease_fullstack);

 /*
  * lockdep_lock: protects the lockdep graph, the hashes and the
···
	return is_static || static_obj(lock->key) ? NULL : ERR_PTR(-EINVAL);
 }

-#ifdef CONFIG_LOCKDEP_CROSSRELEASE
-static void cross_init(struct lockdep_map *lock, int cross);
-static int cross_lock(struct lockdep_map *lock);
-static int lock_acquire_crosslock(struct held_lock *hlock);
-static int lock_release_crosslock(struct lockdep_map *lock);
-#else
-static inline void cross_init(struct lockdep_map *lock, int cross) {}
-static inline int cross_lock(struct lockdep_map *lock) { return 0; }
-static inline int lock_acquire_crosslock(struct held_lock *hlock) { return 2; }
-static inline int lock_release_crosslock(struct lockdep_map *lock) { return 2; }
-#endif
-
 /*
  * Register a lock's class in the hash-table, if the class is not present
  * yet. Otherwise we look it up. We cache the result in the lock object
···
		printk(KERN_CONT "\n\n");
	}

-	if (cross_lock(tgt->instance)) {
-		printk(" Possible unsafe locking scenario by crosslock:\n\n");
-		printk("       CPU0                    CPU1\n");
-		printk("       ----                    ----\n");
-		printk("  lock(");
-		__print_lock_name(parent);
-		printk(KERN_CONT ");\n");
-		printk("  lock(");
-		__print_lock_name(target);
-		printk(KERN_CONT ");\n");
-		printk("                               lock(");
-		__print_lock_name(source);
-		printk(KERN_CONT ");\n");
-		printk("                               unlock(");
-		__print_lock_name(target);
-		printk(KERN_CONT ");\n");
-		printk("\n *** DEADLOCK ***\n\n");
-	} else {
-		printk(" Possible unsafe locking scenario:\n\n");
-		printk("       CPU0                    CPU1\n");
-		printk("       ----                    ----\n");
-		printk("  lock(");
-		__print_lock_name(target);
-		printk(KERN_CONT ");\n");
-		printk("                               lock(");
-		__print_lock_name(parent);
-		printk(KERN_CONT ");\n");
-		printk("                               lock(");
-		__print_lock_name(target);
-		printk(KERN_CONT ");\n");
-		printk("  lock(");
-		__print_lock_name(source);
-		printk(KERN_CONT ");\n");
-		printk("\n *** DEADLOCK ***\n\n");
-	}
+	printk(" Possible unsafe locking scenario:\n\n");
+	printk("       CPU0                    CPU1\n");
+	printk("       ----                    ----\n");
+	printk("  lock(");
+	__print_lock_name(target);
+	printk(KERN_CONT ");\n");
+	printk("                               lock(");
+	__print_lock_name(parent);
+	printk(KERN_CONT ");\n");
+	printk("                               lock(");
+	__print_lock_name(target);
+	printk(KERN_CONT ");\n");
+	printk("  lock(");
+	__print_lock_name(source);
+	printk(KERN_CONT ");\n");
+	printk("\n *** DEADLOCK ***\n\n");
 }

 /*
···
		curr->comm, task_pid_nr(curr));
	print_lock(check_src);

-	if (cross_lock(check_tgt->instance))
-		pr_warn("\nbut now in release context of a crosslock acquired at the following:\n");
-	else
-		pr_warn("\nbut task is already holding lock:\n");
+	pr_warn("\nbut task is already holding lock:\n");

	print_lock(check_tgt);
	pr_warn("\nwhich lock already depends on the new lock.\n\n");
···
	if (!debug_locks_off_graph_unlock() || debug_locks_silent)
		return 0;

-	if (cross_lock(check_tgt->instance))
-		this->trace = *trace;
-	else if (!save_trace(&this->trace))
+	if (!save_trace(&this->trace))
		return 0;

	depth = get_lock_depth(target);
···
		if (nest)
			return 2;

-		if (cross_lock(prev->instance))
-			continue;
-
		return print_deadlock_bug(curr, prev, next);
	}
	return 1;
···
	for (;;) {
		int distance = curr->lockdep_depth - depth + 1;
		hlock = curr->held_locks + depth - 1;
-		/*
-		 * Only non-crosslock entries get new dependencies added.
-		 * Crosslock entries will be added by commit later:
-		 */
-		if (!cross_lock(hlock->instance)) {
-			/*
-			 * Only non-recursive-read entries get new dependencies
-			 * added:
-			 */
-			if (hlock->read != 2 && hlock->check) {
-				int ret = check_prev_add(curr, hlock, next,
-							 distance, &trace, save_trace);
-				if (!ret)
-					return 0;

-				/*
-				 * Stop after the first non-trylock entry,
-				 * as non-trylock entries have added their
-				 * own direct dependencies already, so this
-				 * lock is connected to them indirectly:
-				 */
-				if (!hlock->trylock)
-					break;
-			}
+		/*
+		 * Only non-recursive-read entries get new dependencies
+		 * added:
+		 */
+		if (hlock->read != 2 && hlock->check) {
+			int ret = check_prev_add(curr, hlock, next, distance, &trace, save_trace);
+			if (!ret)
+				return 0;
+
+			/*
+			 * Stop after the first non-trylock entry,
+			 * as non-trylock entries have added their
+			 * own direct dependencies already, so this
+			 * lock is connected to them indirectly:
+			 */
+			if (!hlock->trylock)
+				break;
		}
+
		depth--;
		/*
		 * End of lock-stack?
···
 void lockdep_init_map(struct lockdep_map *lock, const char *name,
		      struct lock_class_key *key, int subclass)
 {
-	cross_init(lock, 0);
	__lockdep_init_map(lock, name, key, subclass);
 }
 EXPORT_SYMBOL_GPL(lockdep_init_map);
-
-#ifdef CONFIG_LOCKDEP_CROSSRELEASE
-void lockdep_init_map_crosslock(struct lockdep_map *lock, const char *name,
-				struct lock_class_key *key, int subclass)
-{
-	cross_init(lock, 1);
-	__lockdep_init_map(lock, name, key, subclass);
-}
-EXPORT_SYMBOL_GPL(lockdep_init_map_crosslock);
-#endif

 struct lock_class_key __lockdep_no_validate__;
 EXPORT_SYMBOL_GPL(__lockdep_no_validate__);
···
	int chain_head = 0;
	int class_idx;
	u64 chain_key;
-	int ret;

	if (unlikely(!debug_locks))
		return 0;
···

	class_idx = class - lock_classes + 1;

-	/* TODO: nest_lock is not implemented for crosslock yet. */
-	if (depth && !cross_lock(lock)) {
+	if (depth) {
		hlock = curr->held_locks + depth - 1;
		if (hlock->class_idx == class_idx && nest_lock) {
			if (hlock->references) {
···

	if (!validate_chain(curr, lock, hlock, chain_head, chain_key))
		return 0;
-
-	ret = lock_acquire_crosslock(hlock);
-	/*
-	 * 2 means normal acquire operations are needed. Otherwise, it's
-	 * ok just to return with '0:fail, 1:success'.
-	 */
-	if (ret != 2)
-		return ret;

	curr->curr_chain_key = chain_key;
	curr->lockdep_depth++;
···
	struct task_struct *curr = current;
	struct held_lock *hlock;
	unsigned int depth;
-	int ret, i;
+	int i;

	if (unlikely(!debug_locks))
		return 0;
-
-	ret = lock_release_crosslock(lock);
-	/*
-	 * 2 means normal release operations are needed. Otherwise, it's
-	 * ok just to return with '0:fail, 1:success'.
-	 */
-	if (ret != 2)
-		return ret;

	depth = curr->lockdep_depth;
	/*
···
	dump_stack();
 }
 EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious);
-
-#ifdef CONFIG_LOCKDEP_CROSSRELEASE
-
-/*
- * Crossrelease works by recording a lock history for each thread and
- * connecting those historic locks that were taken after the
- * wait_for_completion() in the complete() context.
- *
- * Task-A				Task-B
- *
- * mutex_lock(&A);
- * mutex_unlock(&A);
- *
- * wait_for_completion(&C);
- *   lock_acquire_crosslock();
- *     atomic_inc_return(&cross_gen_id);
- *				  |
- *				  |	mutex_lock(&B);
- *				  |	mutex_unlock(&B);
- *				  |
- *				  |	complete(&C);
- *				  `--	  lock_commit_crosslock();
- *
- * Which will then add a dependency between B and C.
- */
-
-#define xhlock(i)	(current->xhlocks[(i) % MAX_XHLOCKS_NR])
-
-/*
- * Whenever a crosslock is held, cross_gen_id will be increased.
- */
-static atomic_t cross_gen_id; /* Can be wrapped */
-
-/*
- * Make an entry of the ring buffer invalid.
- */
-static inline void invalidate_xhlock(struct hist_lock *xhlock)
-{
-	/*
-	 * Normally, xhlock->hlock.instance must be !NULL.
-	 */
-	xhlock->hlock.instance = NULL;
-}
-
-/*
- * Lock history stacks; we have 2 nested lock history stacks:
- *
- *   HARD(IRQ)
- *   SOFT(IRQ)
- *
- * The thing is that once we complete a HARD/SOFT IRQ the future task locks
- * should not depend on any of the locks observed while running the IRQ. So
- * what we do is rewind the history buffer and erase all our knowledge of that
- * temporal event.
- */
-
-void crossrelease_hist_start(enum xhlock_context_t c)
-{
-	struct task_struct *cur = current;
-
-	if (!cur->xhlocks)
-		return;
-
-	cur->xhlock_idx_hist[c] = cur->xhlock_idx;
-	cur->hist_id_save[c]    = cur->hist_id;
-}
-
-void crossrelease_hist_end(enum xhlock_context_t c)
-{
-	struct task_struct *cur = current;
-
-	if (cur->xhlocks) {
-		unsigned int idx = cur->xhlock_idx_hist[c];
-		struct hist_lock *h = &xhlock(idx);
-
-		cur->xhlock_idx = idx;
-
-		/* Check if the ring was overwritten. */
-		if (h->hist_id != cur->hist_id_save[c])
-			invalidate_xhlock(h);
-	}
-}
-
-/*
- * lockdep_invariant_state() is used to annotate independence inside a task, to
- * make one task look like multiple independent 'tasks'.
- *
- * Take for instance workqueues; each work is independent of the last. The
- * completion of a future work does not depend on the completion of a past work
- * (in general). Therefore we must not carry that (lock) dependency across
- * works.
- *
- * This is true for many things; pretty much all kthreads fall into this
- * pattern, where they have an invariant state and future completions do not
- * depend on past completions. Its just that since they all have the 'same'
- * form -- the kthread does the same over and over -- it doesn't typically
- * matter.
- *
- * The same is true for system-calls, once a system call is completed (we've
- * returned to userspace) the next system call does not depend on the lock
- * history of the previous system call.
- *
- * They key property for independence, this invariant state, is that it must be
- * a point where we hold no locks and have no history. Because if we were to
- * hold locks, the restore at _end() would not necessarily recover it's history
- * entry. Similarly, independence per-definition means it does not depend on
- * prior state.
- */
-void lockdep_invariant_state(bool force)
-{
-	/*
-	 * We call this at an invariant point, no current state, no history.
-	 * Verify the former, enforce the latter.
-	 */
-	WARN_ON_ONCE(!force && current->lockdep_depth);
-	if (current->xhlocks)
-		invalidate_xhlock(&xhlock(current->xhlock_idx));
-}
-
-static int cross_lock(struct lockdep_map *lock)
-{
-	return lock ? lock->cross : 0;
-}
-
-/*
- * This is needed to decide the relationship between wrapable variables.
- */
-static inline int before(unsigned int a, unsigned int b)
-{
-	return (int)(a - b) < 0;
-}
-
-static inline struct lock_class *xhlock_class(struct hist_lock *xhlock)
-{
-	return hlock_class(&xhlock->hlock);
-}
-
-static inline struct lock_class *xlock_class(struct cross_lock *xlock)
-{
-	return hlock_class(&xlock->hlock);
-}
-
-/*
- * Should we check a dependency with previous one?
- */
-static inline int depend_before(struct held_lock *hlock)
-{
-	return hlock->read != 2 && hlock->check && !hlock->trylock;
-}
-
-/*
- * Should we check a dependency with next one?
- */
-static inline int depend_after(struct held_lock *hlock)
-{
-	return hlock->read != 2 && hlock->check;
-}
-
-/*
- * Check if the xhlock is valid, which would be false if,
- *
- * 1. Has not used after initializaion yet.
- * 2. Got invalidated.
- *
- * Remind hist_lock is implemented as a ring buffer.
- */
-static inline int xhlock_valid(struct hist_lock *xhlock)
-{
-	/*
-	 * xhlock->hlock.instance must be !NULL.
-	 */
-	return !!xhlock->hlock.instance;
-}
-
-/*
- * Record a hist_lock entry.
- *
- * Irq disable is only required.
- */
-static void add_xhlock(struct held_lock *hlock)
-{
-	unsigned int idx = ++current->xhlock_idx;
-	struct hist_lock *xhlock = &xhlock(idx);
-
-#ifdef CONFIG_DEBUG_LOCKDEP
-	/*
-	 * This can be done locklessly because they are all task-local
-	 * state, we must however ensure IRQs are disabled.
-	 */
-	WARN_ON_ONCE(!irqs_disabled());
-#endif
-
-	/* Initialize hist_lock's members */
-	xhlock->hlock = *hlock;
-	xhlock->hist_id = ++current->hist_id;
-
-	xhlock->trace.nr_entries = 0;
-	xhlock->trace.max_entries = MAX_XHLOCK_TRACE_ENTRIES;
-	xhlock->trace.entries = xhlock->trace_entries;
-
-	if (crossrelease_fullstack) {
-		xhlock->trace.skip = 3;
-		save_stack_trace(&xhlock->trace);
-	} else {
-		xhlock->trace.nr_entries = 1;
-		xhlock->trace.entries[0] = hlock->acquire_ip;
-	}
-}
-
-static inline int same_context_xhlock(struct hist_lock *xhlock)
-{
-	return xhlock->hlock.irq_context == task_irq_context(current);
-}
-
-/*
- * This should be lockless as far as possible because this would be
- * called very frequently.
- */
-static void check_add_xhlock(struct held_lock *hlock)
-{
-	/*
-	 * Record a hist_lock, only in case that acquisitions ahead
-	 * could depend on the held_lock. For example, if the held_lock
-	 * is trylock then acquisitions ahead never depends on that.
-	 * In that case, we don't need to record it. Just return.
-	 */
-	if (!current->xhlocks || !depend_before(hlock))
-		return;
-
-	add_xhlock(hlock);
-}
-
-/*
- * For crosslock.
- */
-static int add_xlock(struct held_lock *hlock)
-{
-	struct cross_lock *xlock;
-	unsigned int gen_id;
-
-	if (!graph_lock())
-		return 0;
-
-	xlock = &((struct lockdep_map_cross *)hlock->instance)->xlock;
-
-	/*
-	 * When acquisitions for a crosslock are overlapped, we use
-	 * nr_acquire to perform commit for them, based on cross_gen_id
-	 * of the first acquisition, which allows to add additional
-	 * dependencies.
-	 *
-	 * Moreover, when no acquisition of a crosslock is in progress,
-	 * we should not perform commit because the lock might not exist
-	 * any more, which might cause incorrect memory access. So we
-	 * have to track the number of acquisitions of a crosslock.
-	 *
-	 * depend_after() is necessary to initialize only the first
-	 * valid xlock so that the xlock can be used on its commit.
-	 */
-	if (xlock->nr_acquire++ && depend_after(&xlock->hlock))
-		goto unlock;
-
-	gen_id = (unsigned int)atomic_inc_return(&cross_gen_id);
-	xlock->hlock = *hlock;
-	xlock->hlock.gen_id = gen_id;
-unlock:
-	graph_unlock();
-	return 1;
-}
-
-/*
- * Called for both normal and crosslock acquires. Normal locks will be
- * pushed on the hist_lock queue. Cross locks will record state and
- * stop regular lock_acquire() to avoid being placed on the held_lock
- * stack.
- *
- * Return: 0 - failure;
- *         1 - crosslock, done;
- *         2 - normal lock, continue to held_lock[] ops.
- */
-static int lock_acquire_crosslock(struct held_lock *hlock)
-{
-	/*
-	 *	CONTEXT 1		CONTEXT 2
-	 *	---------		---------
-	 *	lock A (cross)
-	 *	X = atomic_inc_return(&cross_gen_id)
-	 *	~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-	 *				Y = atomic_read_acquire(&cross_gen_id)
-	 *				lock B
-	 *
-	 * atomic_read_acquire() is for ordering between A and B,
-	 * IOW, A happens before B, when CONTEXT 2 see Y >= X.
-	 *
-	 * Pairs with atomic_inc_return() in add_xlock().
-	 */
-	hlock->gen_id = (unsigned int)atomic_read_acquire(&cross_gen_id);
-
-	if (cross_lock(hlock->instance))
-		return add_xlock(hlock);
-
-	check_add_xhlock(hlock);
-	return 2;
-}
-
-static int copy_trace(struct stack_trace *trace)
-{
-	unsigned long *buf = stack_trace + nr_stack_trace_entries;
-	unsigned int max_nr = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries;
-	unsigned int nr = min(max_nr, trace->nr_entries);
-
-	trace->nr_entries = nr;
-	memcpy(buf, trace->entries, nr * sizeof(trace->entries[0]));
-	trace->entries = buf;
-	nr_stack_trace_entries += nr;
-
-	if (nr_stack_trace_entries >= MAX_STACK_TRACE_ENTRIES-1) {
-		if (!debug_locks_off_graph_unlock())
-			return 0;
-
-		print_lockdep_off("BUG: MAX_STACK_TRACE_ENTRIES too low!");
-		dump_stack();
-
-		return 0;
-	}
-
-	return 1;
-}
-
-static int commit_xhlock(struct cross_lock *xlock, struct hist_lock *xhlock)
-{
-	unsigned int xid, pid;
-	u64 chain_key;
-
-	xid = xlock_class(xlock) - lock_classes;
-	chain_key = iterate_chain_key((u64)0, xid);
-	pid = xhlock_class(xhlock) - lock_classes;
-	chain_key = iterate_chain_key(chain_key, pid);
-
-	if (lookup_chain_cache(chain_key))
-		return 1;
-
-	if (!add_chain_cache_classes(xid, pid, xhlock->hlock.irq_context,
-				chain_key))
-		return 0;
-
-	if (!check_prev_add(current, &xlock->hlock, &xhlock->hlock, 1,
-			    &xhlock->trace, copy_trace))
-		return 0;
-
-	return 1;
-}
-
-static void commit_xhlocks(struct cross_lock *xlock)
-{
-	unsigned int cur = current->xhlock_idx;
-	unsigned int prev_hist_id = xhlock(cur).hist_id;
-	unsigned int i;
-
-	if (!graph_lock())
-		return;
-
-	if (xlock->nr_acquire) {
-		for (i = 0; i < MAX_XHLOCKS_NR; i++) {
-			struct hist_lock *xhlock = &xhlock(cur - i);
-
-			if (!xhlock_valid(xhlock))
-				break;
-
-			if (before(xhlock->hlock.gen_id, xlock->hlock.gen_id))
-				break;
-
-			if (!same_context_xhlock(xhlock))
-				break;
-
-			/*
-			 * Filter out the cases where the ring buffer was
-			 * overwritten and the current entry has a bigger
-			 * hist_id than the previous one, which is impossible
-			 * otherwise:
-			 */
-			if (unlikely(before(prev_hist_id, xhlock->hist_id)))
-				break;
-
-			prev_hist_id = xhlock->hist_id;
-
-			/*
-			 * commit_xhlock() returns 0 with graph_lock already
-			 * released if fail.
-			 */
-			if (!commit_xhlock(xlock, xhlock))
-				return;
-		}
-	}
-
-	graph_unlock();
-}
-
-void lock_commit_crosslock(struct lockdep_map *lock)
-{
-	struct cross_lock *xlock;
-	unsigned long flags;
-
-	if (unlikely(!debug_locks || current->lockdep_recursion))
-		return;
-
-	if (!current->xhlocks)
-		return;
-
-	/*
-	 * Do commit hist_locks with the cross_lock, only in case that
-	 * the cross_lock could depend on acquisitions after that.
-	 *
-	 * For example, if the cross_lock does not have the 'check' flag
-	 * then we don't need to check dependencies and commit for that.
-	 * Just skip it. In that case, of course, the cross_lock does
-	 * not depend on acquisitions ahead, either.
-	 *
-	 * WARNING: Don't do that in add_xlock() in advance. When an
-	 * acquisition context is different from the commit context,
-	 * invalid(skipped) cross_lock might be accessed.
-	 */
-	if (!depend_after(&((struct lockdep_map_cross *)lock)->xlock.hlock))
-		return;
-
-	raw_local_irq_save(flags);
-	check_flags(flags);
-	current->lockdep_recursion = 1;
-	xlock = &((struct lockdep_map_cross *)lock)->xlock;
-	commit_xhlocks(xlock);
-	current->lockdep_recursion = 0;
-	raw_local_irq_restore(flags);
-}
-EXPORT_SYMBOL_GPL(lock_commit_crosslock);
-
-/*
- * Return: 0 - failure;
- *         1 - crosslock, done;
- *         2 - normal lock, continue to held_lock[] ops.
- */
-static int lock_release_crosslock(struct lockdep_map *lock)
-{
-	if (cross_lock(lock)) {
-		if (!graph_lock())
-			return 0;
-		((struct lockdep_map_cross *)lock)->xlock.nr_acquire--;
-		graph_unlock();
-		return 1;
-	}
-	return 2;
-}
-
-static void cross_init(struct lockdep_map *lock, int cross)
-{
-	if (cross)
-		((struct lockdep_map_cross *)lock)->xlock.nr_acquire = 0;
-
-	lock->cross = cross;
-
-	/*
-	 * Crossrelease assumes that the ring buffer size of xhlocks
-	 * is aligned with power of 2. So force it on build.
-	 */
-	BUILD_BUG_ON(MAX_XHLOCKS_NR & (MAX_XHLOCKS_NR - 1));
-}
-
-void lockdep_init_task(struct task_struct *task)
-{
-	int i;
-
-	task->xhlock_idx = UINT_MAX;
-	task->hist_id = 0;
-
-	for (i = 0; i < XHLOCK_CTX_NR; i++) {
-		task->xhlock_idx_hist[i] = UINT_MAX;
-		task->hist_id_save[i] = 0;
-	}
-
-	task->xhlocks = kzalloc(sizeof(struct hist_lock) * MAX_XHLOCKS_NR,
-				GFP_KERNEL);
-}
-
-void lockdep_free_task(struct task_struct *task)
-{
-	if (task->xhlocks) {
-		void *tmp = task->xhlocks;
-		/* Diable crossrelease for current */
-		task->xhlocks = NULL;
-		kfree(tmp);
-	}
-}
-#endif
···
	return ret;
 }

-/**
- * sys_sched_rr_get_interval - return the default timeslice of a process.
- * @pid: pid of the process.
- * @interval: userspace pointer to the timeslice value.
- *
- * this syscall writes the default timeslice value of a given process
- * into the user-space timespec buffer. A value of '0' means infinity.
- *
- * Return: On success, 0 and the timeslice is in @interval. Otherwise,
- * an error code.
- */
 static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
 {
	struct task_struct *p;
···
	return retval;
 }

+/**
+ * sys_sched_rr_get_interval - return the default timeslice of a process.
+ * @pid: pid of the process.
+ * @interval: userspace pointer to the timeslice value.
+ *
+ * this syscall writes the default timeslice value of a given process
+ * into the user-space timespec buffer. A value of '0' means infinity.
+ *
+ * Return: On success, 0 and the timeslice is in @interval. Otherwise,
+ * an error code.
+ */
 SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
		struct timespec __user *, interval)
 {
kernel/sched/rt.c | +7 -1
···
	bool resched = false;
	struct task_struct *p;
	struct rq *src_rq;
+	int rt_overload_count = rt_overloaded(this_rq);

-	if (likely(!rt_overloaded(this_rq)))
+	if (likely(!rt_overload_count))
		return;

	/*
···
	 * see overloaded we must also see the rto_mask bit.
	 */
	smp_rmb();
+
+	/* If we are the only overloaded CPU do nothing */
+	if (rt_overload_count == 1 &&
+	    cpumask_test_cpu(this_rq->cpu, this_rq->rd->rto_mask))
+		return;

 #ifdef HAVE_RT_PUSH_IPI
	if (sched_feat(RT_PUSH_IPI)) {
kernel/trace/Kconfig | +1
···
	bool "Enable trace events for preempt and irq disable/enable"
	select TRACE_IRQFLAGS
	depends on DEBUG_PREEMPT || !PROVE_LOCKING
+	depends on TRACING
	default n
	help
	  Enable tracing of disable and enable events for preemption and irqs.
···
 }

 /**
- * trace_pid_filter_add_remove - Add or remove a task from a pid_list
+ * trace_pid_filter_add_remove_task - Add or remove a task from a pid_list
  * @pid_list: The list to modify
  * @self: The current task for fork or NULL for exit
  * @task: The task to add or remove
···
 }

 /**
- * trace_snapshot - take a snapshot of the current buffer.
+ * tracing_snapshot - take a snapshot of the current buffer.
  *
  * This causes a swap between the snapshot buffer and the current live
  * tracing buffer. You can use this to take snapshots of the live
···
 EXPORT_SYMBOL_GPL(tracing_alloc_snapshot);

 /**
- * trace_snapshot_alloc - allocate and take a snapshot of the current buffer.
+ * tracing_snapshot_alloc - allocate and take a snapshot of the current buffer.
  *
- * This is similar to trace_snapshot(), but it will allocate the
+ * This is similar to tracing_snapshot(), but it will allocate the
  * snapshot buffer if it isn't already allocated. Use this only
  * where it is safe to sleep, as the allocation may sleep.
  *
···
 /*
  * Copy the new maximum trace into the separate maximum-trace
  * structure. (this way the maximum trace is permanently saved,
- * for later retrieval via /sys/kernel/debug/tracing/latency_trace)
+ * for later retrieval via /sys/kernel/tracing/tracing_max_latency)
  */
 static void
 __update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
···

	entry = ring_buffer_event_data(event);
	size = ring_buffer_event_length(event);
-	export->write(entry, size);
+	export->write(export, entry, size);
 }

 static DEFINE_MUTEX(ftrace_export_lock);
···
	.llseek		= seq_lseek,
 };

-/*
- * The tracer itself will not take this lock, but still we want
- * to provide a consistent cpumask to user-space:
- */
-static DEFINE_MUTEX(tracing_cpumask_update_lock);
-
-/*
- * Temporary storage for the character representation of the
- * CPU bitmask (and one more byte for the newline):
- */
-static char mask_str[NR_CPUS + 1];
-
 static ssize_t
 tracing_cpumask_read(struct file *filp, char __user *ubuf,
		     size_t count, loff_t *ppos)
 {
	struct trace_array *tr = file_inode(filp)->i_private;
+	char *mask_str;
	int len;

-	mutex_lock(&tracing_cpumask_update_lock);
+	len = snprintf(NULL, 0, "%*pb\n",
+		       cpumask_pr_args(tr->tracing_cpumask)) + 1;
+	mask_str = kmalloc(len, GFP_KERNEL);
+	if (!mask_str)
+		return -ENOMEM;

-	len = snprintf(mask_str, count, "%*pb\n",
+	len = snprintf(mask_str, len, "%*pb\n",
		       cpumask_pr_args(tr->tracing_cpumask));
	if (len >= count) {
		count = -EINVAL;
		goto out_err;
	}
-	count = simple_read_from_buffer(ubuf, count, ppos, mask_str, NR_CPUS+1);
+	count = simple_read_from_buffer(ubuf, count, ppos, mask_str, len);

 out_err:
-	mutex_unlock(&tracing_cpumask_update_lock);
+	kfree(mask_str);

	return count;
 }
···
	err = cpumask_parse_user(ubuf, count, tracing_cpumask_new);
	if (err)
		goto err_unlock;
-
-	mutex_lock(&tracing_cpumask_update_lock);

	local_irq_disable();
	arch_spin_lock(&tr->max_lock);
···
	local_irq_enable();

	cpumask_copy(tr->tracing_cpumask, tracing_cpumask_new);
-
-	mutex_unlock(&tracing_cpumask_update_lock);
	free_cpumask_var(tracing_cpumask_new);

	return count;
kernel/trace/trace_stack.c | +4
···
	if (__this_cpu_read(disable_stack_tracer) != 1)
		goto out;

+	/* If rcu is not watching, then save stack trace can fail */
+	if (!rcu_is_watching())
+		goto out;
+
	ip += MCOUNT_INSN_SIZE;

	check_stack(ip, &stack);
···
 #include <linux/hardirq.h>
 #include <linux/mempolicy.h>
 #include <linux/freezer.h>
-#include <linux/kallsyms.h>
 #include <linux/debug_locks.h>
 #include <linux/lockdep.h>
 #include <linux/idr.h>
···
 #include <linux/nodemask.h>
 #include <linux/moduleparam.h>
 #include <linux/uaccess.h>
+#include <linux/sched/isolation.h>

 #include "workqueue_internal.h"

···
	mod_timer(&pool->idle_timer, jiffies + IDLE_WORKER_TIMEOUT);

	/*
-	 * Sanity check nr_running.  Because wq_unbind_fn() releases
+	 * Sanity check nr_running.  Because unbind_workers() releases
	 * pool->lock between setting %WORKER_UNBOUND and zapping
	 * nr_running, the warning may trigger spuriously.  Check iff
	 * unbind is not in progress.
···
  * cpu comes back online.
  */

-static void wq_unbind_fn(struct work_struct *work)
+static void unbind_workers(int cpu)
 {
-	int cpu = smp_processor_id();
	struct worker_pool *pool;
	struct worker *worker;

···
						  pool->attrs->cpumask) < 0);

		spin_lock_irq(&pool->lock);
-
-		/*
-		 * XXX: CPU hotplug notifiers are weird and can call DOWN_FAILED
-		 * w/o preceding DOWN_PREPARE.  Work around it.  CPU hotplug is
-		 * being reworked and this can go away in time.
-		 */
-		if (!(pool->flags & POOL_DISASSOCIATED)) {
-			spin_unlock_irq(&pool->lock);
-			return;
-		}

		pool->flags &= ~POOL_DISASSOCIATED;
···

 int workqueue_offline_cpu(unsigned int cpu)
 {
-	struct work_struct unbind_work;
	struct workqueue_struct *wq;

	/* unbinding per-cpu workers should happen on the local CPU */
-	INIT_WORK_ONSTACK(&unbind_work, wq_unbind_fn);
-	queue_work_on(cpu, system_highpri_wq, &unbind_work);
+	if (WARN_ON(cpu != smp_processor_id()))
+		return -1;
+
+	unbind_workers(cpu);

	/* update NUMA affinity of unbound workqueues */
	mutex_lock(&wq_pool_mutex);
···
		wq_update_unbound_numa(wq, cpu, false);
	mutex_unlock(&wq_pool_mutex);

-	/* wait for per-cpu unbinding to finish */
-	flush_work(&unbind_work);
-	destroy_work_on_stack(&unbind_work);
	return 0;
 }
···
	if (!zalloc_cpumask_var(&saved_cpumask, GFP_KERNEL))
		return -ENOMEM;

+	/*
+	 * Not excluding isolated cpus on purpose.
+	 * If the user wishes to include them, we allow that.
+	 */
	cpumask_and(cpumask, cpumask, cpu_possible_mask);
	if (!cpumask_empty(cpumask)) {
		apply_wqattrs_lock();
···
	WARN_ON(__alignof__(struct pool_workqueue) < __alignof__(long long));

	BUG_ON(!alloc_cpumask_var(&wq_unbound_cpumask, GFP_KERNEL));
-	cpumask_copy(wq_unbound_cpumask, cpu_possible_mask);
+	cpumask_copy(wq_unbound_cpumask, housekeeping_cpumask(HK_FLAG_DOMAIN));

	pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC);
lib/Kconfig.debug (-33)

 	select DEBUG_MUTEXES
 	select DEBUG_RT_MUTEXES if RT_MUTEXES
 	select DEBUG_LOCK_ALLOC
-	select LOCKDEP_CROSSRELEASE
-	select LOCKDEP_COMPLETIONS
 	select TRACE_IRQFLAGS
 	default n
 	help
···
 	  CONFIG_LOCK_STAT defines "contended" and "acquired" lock events.
 	  (CONFIG_LOCKDEP defines "acquire" and "release" events.)
-
-config LOCKDEP_CROSSRELEASE
-	bool
-	help
-	 This makes lockdep work for crosslock which is a lock allowed to
-	 be released in a different context from the acquisition context.
-	 Normally a lock must be released in the context acquiring the lock.
-	 However, relexing this constraint helps synchronization primitives
-	 such as page locks or completions can use the lock correctness
-	 detector, lockdep.
-
-config LOCKDEP_COMPLETIONS
-	bool
-	help
-	 A deadlock caused by wait_for_completion() and complete() can be
-	 detected by lockdep using crossrelease feature.
-
-config BOOTPARAM_LOCKDEP_CROSSRELEASE_FULLSTACK
-	bool "Enable the boot parameter, crossrelease_fullstack"
-	depends on LOCKDEP_CROSSRELEASE
-	default n
-	help
-	 The lockdep "cross-release" feature needs to record stack traces
-	 (of calling functions) for all acquisitions, for eventual later
-	 use during analysis.  By default only a single caller is recorded,
-	 because the unwind operation can be very expensive with deeper
-	 stack chains.
-
-	 However a boot parameter, crossrelease_fullstack, was
-	 introduced since sometimes deeper traces are required for full
-	 analysis.  This option turns on the boot parameter.
 
 config DEBUG_LOCKDEP
 	bool "Lock dependency engine debugging"
lib/asn1_decoder.c (+28-21)

 
 	/* Decide how to handle the operation */
 	switch (op) {
-	case ASN1_OP_MATCH_ANY_ACT:
-	case ASN1_OP_MATCH_ANY_ACT_OR_SKIP:
-	case ASN1_OP_COND_MATCH_ANY_ACT:
-	case ASN1_OP_COND_MATCH_ANY_ACT_OR_SKIP:
-		ret = actions[machine[pc + 1]](context, hdr, tag, data + dp, len);
-		if (ret < 0)
-			return ret;
-		goto skip_data;
-
-	case ASN1_OP_MATCH_ACT:
-	case ASN1_OP_MATCH_ACT_OR_SKIP:
-	case ASN1_OP_COND_MATCH_ACT_OR_SKIP:
-		ret = actions[machine[pc + 2]](context, hdr, tag, data + dp, len);
-		if (ret < 0)
-			return ret;
-		goto skip_data;
-
 	case ASN1_OP_MATCH:
 	case ASN1_OP_MATCH_OR_SKIP:
+	case ASN1_OP_MATCH_ACT:
+	case ASN1_OP_MATCH_ACT_OR_SKIP:
 	case ASN1_OP_MATCH_ANY:
 	case ASN1_OP_MATCH_ANY_OR_SKIP:
+	case ASN1_OP_MATCH_ANY_ACT:
+	case ASN1_OP_MATCH_ANY_ACT_OR_SKIP:
 	case ASN1_OP_COND_MATCH_OR_SKIP:
+	case ASN1_OP_COND_MATCH_ACT_OR_SKIP:
 	case ASN1_OP_COND_MATCH_ANY:
 	case ASN1_OP_COND_MATCH_ANY_OR_SKIP:
-	skip_data:
+	case ASN1_OP_COND_MATCH_ANY_ACT:
+	case ASN1_OP_COND_MATCH_ANY_ACT_OR_SKIP:
+
 		if (!(flags & FLAG_CONS)) {
 			if (flags & FLAG_INDEFINITE_LENGTH) {
+				size_t tmp = dp;
+
 				ret = asn1_find_indefinite_length(
-					data, datalen, &dp, &len, &errmsg);
+					data, datalen, &tmp, &len, &errmsg);
 				if (ret < 0)
 					goto error;
-			} else {
-				dp += len;
 			}
 			pr_debug("- LEAF: %zu\n", len);
 		}
+
+		if (op & ASN1_OP_MATCH__ACT) {
+			unsigned char act;
+
+			if (op & ASN1_OP_MATCH__ANY)
+				act = machine[pc + 1];
+			else
+				act = machine[pc + 2];
+			ret = actions[act](context, hdr, tag, data + dp, len);
+			if (ret < 0)
+				return ret;
+		}
+
+		if (!(flags & FLAG_CONS))
+			dp += len;
 		pc += asn1_op_lengths[op];
 		goto next_op;
···
 		else
 			act = machine[pc + 1];
 		ret = actions[act](context, hdr, 0, data + tdp, len);
+		if (ret < 0)
+			return ret;
 	}
 	pc += asn1_op_lengths[op];
 	goto next_op;
lib/oid_registry.c (+10-6)

 	int count;
 
 	if (v >= end)
-		return -EBADMSG;
+		goto bad;
 
 	n = *v++;
 	ret = count = snprintf(buffer, bufsize, "%u.%u", n / 40, n % 40);
+	if (count >= bufsize)
+		return -ENOBUFS;
 	buffer += count;
 	bufsize -= count;
-	if (bufsize == 0)
-		return -ENOBUFS;
 
 	while (v < end) {
 		num = 0;
···
 			num = n & 0x7f;
 			do {
 				if (v >= end)
-					return -EBADMSG;
+					goto bad;
 				n = *v++;
 				num <<= 7;
 				num |= n & 0x7f;
 			} while (n & 0x80);
 		}
 		ret += count = snprintf(buffer, bufsize, ".%lu", num);
-		buffer += count;
-		if (bufsize <= count)
+		if (count >= bufsize)
 			return -ENOBUFS;
+		buffer += count;
 		bufsize -= count;
 	}
 
 	return ret;
+
+bad:
+	snprintf(buffer, bufsize, "(bad)");
+	return -EBADMSG;
 }
 EXPORT_SYMBOL_GPL(sprint_oid);
 	enum fixed_addresses idx;
 	int i, slot;
 
-	WARN_ON(system_state != SYSTEM_BOOTING);
+	WARN_ON(system_state >= SYSTEM_RUNNING);
 
 	slot = -1;
 	for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
mm/frame_vector.c (+4-2)

 	 * get_user_pages_longterm() and disallow it for filesystem-dax
 	 * mappings.
 	 */
-	if (vma_is_fsdax(vma))
-		return -EOPNOTSUPP;
+	if (vma_is_fsdax(vma)) {
+		ret = -EOPNOTSUPP;
+		goto out;
+	}
 
 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP))) {
 		vec->got_ref = true;
 	return VM_FAULT_FALLBACK;
 }
 
-static int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
+/* `inline' is required to avoid gcc 4.1.2 build error */
+static inline int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
 {
 	if (vma_is_anonymous(vmf->vma))
 		return do_huge_pmd_wp_page(vmf, orig_pmd);
···
 	if (unlikely(!pte_same(*vmf->pte, entry)))
 		goto unlock;
 	if (vmf->flags & FAULT_FLAG_WRITE) {
-		if (!pte_access_permitted(entry, WRITE))
+		if (!pte_write(entry))
 			return do_wp_page(vmf);
 		entry = pte_mkdirty(entry);
 	}
···
 	/* NUMA case for anonymous PUDs would go here */
 
-	if (dirty && !pud_access_permitted(orig_pud, WRITE)) {
+	if (dirty && !pud_write(orig_pud)) {
 		ret = wp_huge_pud(&vmf, orig_pud);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
···
 	if (pmd_protnone(orig_pmd) && vma_is_accessible(vma))
 		return do_huge_pmd_numa_page(&vmf, orig_pmd);
 
-	if (dirty && !pmd_access_permitted(orig_pmd, WRITE)) {
+	if (dirty && !pmd_write(orig_pmd)) {
 		ret = wp_huge_pmd(&vmf, orig_pmd);
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
···
 		goto out;
 	pte = *ptep;
 
-	if (!pte_access_permitted(pte, flags & FOLL_WRITE))
+	if ((flags & FOLL_WRITE) && !pte_write(pte))
 		goto unlock;
 
 	*prot = pgprot_val(pte_pgprot(pte));
mm/mmap.c (+5-5)

 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	unmap_vmas(&tlb, vma, 0, -1);
 
-	set_bit(MMF_OOM_SKIP, &mm->flags);
-	if (unlikely(tsk_is_oom_victim(current))) {
+	if (unlikely(mm_is_oom_victim(mm))) {
 		/*
 		 * Wait for oom_reap_task() to stop working on this
 		 * mm. Because MMF_OOM_SKIP is already set before
 		 * calling down_read(), oom_reap_task() will not run
 		 * on this "mm" post up_write().
 		 *
-		 * tsk_is_oom_victim() cannot be set from under us
-		 * either because current->mm is already set to NULL
+		 * mm_is_oom_victim() cannot be set from under us
+		 * either because victim->mm is already set to NULL
 		 * under task_lock before calling mmput and oom_mm is
-		 * set not NULL by the OOM killer only if current->mm
+		 * set not NULL by the OOM killer only if victim->mm
 		 * is found not NULL while holding the task_lock.
 		 */
+		set_bit(MMF_OOM_SKIP, &mm->flags);
 		down_write(&mm->mmap_sem);
 		up_write(&mm->mmap_sem);
 	}
mm/oom_kill.c (+3-1)

 		return;
 
 	/* oom_mm is bound to the signal struct life time. */
-	if (!cmpxchg(&tsk->signal->oom_mm, NULL, mm))
+	if (!cmpxchg(&tsk->signal->oom_mm, NULL, mm)) {
 		mmgrab(tsk->signal->oom_mm);
+		set_bit(MMF_OOM_VICTIM, &mm->flags);
+	}
 
 	/*
 	 * Make sure that the task is woken up from uninterruptible sleep
mm/page_alloc.c (+11)

 {
 	struct page *page, *next;
 	unsigned long flags, pfn;
+	int batch_count = 0;
 
 	/* Prepare pages for freeing */
 	list_for_each_entry_safe(page, next, list, lru) {
···
 		set_page_private(page, 0);
 		trace_mm_page_free_batched(page);
 		free_unref_page_commit(page, pfn);
+
+		/*
+		 * Guard against excessive IRQ disabled times when we get
+		 * a large list of pages to free.
+		 */
+		if (++batch_count == SWAP_CLUSTER_MAX) {
+			local_irq_restore(flags);
+			batch_count = 0;
+			local_irq_save(flags);
+		}
 	}
 	local_irq_restore(flags);
 }
mm/percpu.c (+4)

 
 	if (pcpu_setup_first_chunk(ai, fc) < 0)
 		panic("Failed to initialize percpu areas.");
+#ifdef CONFIG_CRIS
+#warning "the CRIS architecture has physical and virtual addresses confused"
+#else
 	pcpu_free_alloc_info(ai);
+#endif
 }
 
 #endif	/* CONFIG_SMP */
 
 /**
  * batadv_tp_sender_timeout - timer that fires in case of packet loss
- * @arg: address of the related tp_vars
+ * @t: address to timer_list inside tp_vars
  *
  * If fired it means that there was packet loss.
  * Switch to Slow Start, set the ss_threshold to half of the current cwnd and
···
 /**
  * batadv_tp_receiver_shutdown - stop a tp meter receiver when timeout is
  * reached without received ack
- * @arg: address of the related tp_vars
+ * @t: address to timer_list inside tp_vars
  */
 static void batadv_tp_receiver_shutdown(struct timer_list *t)
 {
 	struct sock *sk = skb->sk;
 
 	if (!skb_may_tx_timestamp(sk, false))
-		return;
+		goto err;
 
 	/* Take a reference to prevent skb_orphan() from freeing the socket,
 	 * but only if the socket refcount is not zero.
···
 		*skb_hwtstamps(skb) = *hwtstamps;
 		__skb_complete_tx_timestamp(skb, sk, SCM_TSTAMP_SND, false);
 		sock_put(sk);
+		return;
 	}
+
+err:
+	kfree_skb(skb);
 }
 EXPORT_SYMBOL_GPL(skb_complete_tx_timestamp);
 	int err;
 	struct ip_options_data opt_copy;
 	struct raw_frag_vec rfv;
+	int hdrincl;
 
 	err = -EMSGSIZE;
 	if (len > 0xFFFF)
 		goto out;
 
+	/* hdrincl should be READ_ONCE(inet->hdrincl)
+	 * but READ_ONCE() doesn't work with bit fields
+	 */
+	hdrincl = inet->hdrincl;
 	/*
 	 * Check the flags.
 	 */
···
 		/* Linux does not mangle headers on raw sockets,
 		 * so that IP options + IP_HDRINCL is non-sense.
 		 */
-		if (inet->hdrincl)
+		if (hdrincl)
 			goto done;
 		if (ipc.opt->opt.srr) {
 			if (!daddr)
···
 
 	flowi4_init_output(&fl4, ipc.oif, sk->sk_mark, tos,
 			   RT_SCOPE_UNIVERSE,
-			   inet->hdrincl ? IPPROTO_RAW : sk->sk_protocol,
+			   hdrincl ? IPPROTO_RAW : sk->sk_protocol,
 			   inet_sk_flowi_flags(sk) |
-			    (inet->hdrincl ? FLOWI_FLAG_KNOWN_NH : 0),
+			    (hdrincl ? FLOWI_FLAG_KNOWN_NH : 0),
 			   daddr, saddr, 0, 0, sk->sk_uid);
 
-	if (!inet->hdrincl) {
+	if (!hdrincl) {
 		rfv.msg = msg;
 		rfv.hlen = 0;
···
 		goto do_confirm;
 back_from_confirm:
 
-	if (inet->hdrincl)
+	if (hdrincl)
 		err = raw_send_hdrinc(sk, &fl4, msg, len,
 				      &rt, msg->msg_flags, &ipc.sockc);
net/ipv4/tcp_input.c (+6-4)

 	u32 new_sample = tp->rcv_rtt_est.rtt_us;
 	long m = sample;
 
-	if (m == 0)
-		m = 1;
-
 	if (new_sample != 0) {
 		/* If we sample in larger samples in the non-timestamp
 		 * case, we could grossly overestimate the RTT especially
···
 	if (before(tp->rcv_nxt, tp->rcv_rtt_est.seq))
 		return;
 	delta_us = tcp_stamp_us_delta(tp->tcp_mstamp, tp->rcv_rtt_est.time);
+	if (!delta_us)
+		delta_us = 1;
 	tcp_rcv_rtt_update(tp, delta_us, 1);
 
 new_measure:
···
 	    (TCP_SKB_CB(skb)->end_seq -
 	     TCP_SKB_CB(skb)->seq >= inet_csk(sk)->icsk_ack.rcv_mss)) {
 		u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr;
-		u32 delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
+		u32 delta_us;
 
+		if (!delta)
+			delta = 1;
+		delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
 		tcp_rcv_rtt_update(tp, delta_us, 0);
 	}
 }
 	int i;
 
 	mutex_lock(&sta->ampdu_mlme.mtx);
-	for (i = 0; i < IEEE80211_NUM_TIDS; i++) {
+	for (i = 0; i < IEEE80211_NUM_TIDS; i++)
 		___ieee80211_stop_rx_ba_session(sta, i, WLAN_BACK_RECIPIENT,
 						WLAN_REASON_QSTA_LEAVE_QBSS,
 						reason != AGG_STOP_DESTROY_STA &&
 						reason != AGG_STOP_PEER_REQUEST);
-	}
-	mutex_unlock(&sta->ampdu_mlme.mtx);
 
 	for (i = 0; i < IEEE80211_NUM_TIDS; i++)
 		___ieee80211_stop_tx_ba_session(sta, i, reason);
+	mutex_unlock(&sta->ampdu_mlme.mtx);
 
 	/* stopping might queue the work again - so cancel only afterwards */
 	cancel_work_sync(&sta->ampdu_mlme.work);
net/netfilter/nf_conntrack_h323_asn1.c (+98-30)

 #define INC_BIT(bs) if((++(bs)->bit)>7){(bs)->cur++;(bs)->bit=0;}
 #define INC_BITS(bs,b) if(((bs)->bit+=(b))>7){(bs)->cur+=(bs)->bit>>3;(bs)->bit&=7;}
 #define BYTE_ALIGN(bs) if((bs)->bit){(bs)->cur++;(bs)->bit=0;}
-#define CHECK_BOUND(bs,n) if((bs)->cur+(n)>(bs)->end)return(H323_ERROR_BOUND)
 static unsigned int get_len(struct bitstr *bs);
 static unsigned int get_bit(struct bitstr *bs);
 static unsigned int get_bits(struct bitstr *bs, unsigned int b);
···
 	}
 
 	return v;
+}
+
+static int nf_h323_error_boundary(struct bitstr *bs, size_t bytes, size_t bits)
+{
+	bits += bs->bit;
+	bytes += bits / BITS_PER_BYTE;
+	if (bits % BITS_PER_BYTE > 0)
+		bytes++;
+
+	if (bs->cur + bytes > bs->end)
+		return 1;
+
+	return 0;
 }
 
 /****************************************************************************/
···
 	PRINT("%*.s%s\n", level * TAB_SIZE, " ", f->name);
 
 	INC_BIT(bs);
-
-	CHECK_BOUND(bs, 0);
+	if (nf_h323_error_boundary(bs, 0, 0))
+		return H323_ERROR_BOUND;
 	return H323_ERROR_NONE;
 }
···
 	PRINT("%*.s%s\n", level * TAB_SIZE, " ", f->name);
 
 	BYTE_ALIGN(bs);
-	CHECK_BOUND(bs, 1);
+	if (nf_h323_error_boundary(bs, 1, 0))
+		return H323_ERROR_BOUND;
+
 	len = *bs->cur++;
 	bs->cur += len;
+	if (nf_h323_error_boundary(bs, 0, 0))
+		return H323_ERROR_BOUND;
 
-	CHECK_BOUND(bs, 0);
 	return H323_ERROR_NONE;
 }
···
 		bs->cur += 2;
 		break;
 	case CONS:		/* 64K < Range < 4G */
+		if (nf_h323_error_boundary(bs, 0, 2))
+			return H323_ERROR_BOUND;
 		len = get_bits(bs, 2) + 1;
 		BYTE_ALIGN(bs);
 		if (base && (f->attr & DECODE)) {	/* timeToLive */
···
 		break;
 	case UNCO:
 		BYTE_ALIGN(bs);
-		CHECK_BOUND(bs, 2);
+		if (nf_h323_error_boundary(bs, 2, 0))
+			return H323_ERROR_BOUND;
 		len = get_len(bs);
 		bs->cur += len;
 		break;
···
 
 	PRINT("\n");
 
-	CHECK_BOUND(bs, 0);
+	if (nf_h323_error_boundary(bs, 0, 0))
+		return H323_ERROR_BOUND;
 	return H323_ERROR_NONE;
 }
···
 		INC_BITS(bs, f->sz);
 	}
 
-	CHECK_BOUND(bs, 0);
+	if (nf_h323_error_boundary(bs, 0, 0))
+		return H323_ERROR_BOUND;
 	return H323_ERROR_NONE;
 }
···
 		len = f->lb;
 		break;
 	case WORD:		/* 2-byte length */
-		CHECK_BOUND(bs, 2);
+		if (nf_h323_error_boundary(bs, 2, 0))
+			return H323_ERROR_BOUND;
 		len = (*bs->cur++) << 8;
 		len += (*bs->cur++) + f->lb;
 		break;
 	case SEMI:
-		CHECK_BOUND(bs, 2);
+		if (nf_h323_error_boundary(bs, 2, 0))
+			return H323_ERROR_BOUND;
 		len = get_len(bs);
 		break;
 	default:
···
 	bs->cur += len >> 3;
 	bs->bit = len & 7;
 
-	CHECK_BOUND(bs, 0);
+	if (nf_h323_error_boundary(bs, 0, 0))
+		return H323_ERROR_BOUND;
 	return H323_ERROR_NONE;
 }
···
 	PRINT("%*.s%s\n", level * TAB_SIZE, " ", f->name);
 
 	/* 2 <= Range <= 255 */
+	if (nf_h323_error_boundary(bs, 0, f->sz))
+		return H323_ERROR_BOUND;
 	len = get_bits(bs, f->sz) + f->lb;
 
 	BYTE_ALIGN(bs);
 	INC_BITS(bs, (len << 2));
 
-	CHECK_BOUND(bs, 0);
+	if (nf_h323_error_boundary(bs, 0, 0))
+		return H323_ERROR_BOUND;
 	return H323_ERROR_NONE;
 }
···
 		break;
 	case BYTE:		/* Range == 256 */
 		BYTE_ALIGN(bs);
-		CHECK_BOUND(bs, 1);
+		if (nf_h323_error_boundary(bs, 1, 0))
+			return H323_ERROR_BOUND;
 		len = (*bs->cur++) + f->lb;
 		break;
 	case SEMI:
 		BYTE_ALIGN(bs);
-		CHECK_BOUND(bs, 2);
+		if (nf_h323_error_boundary(bs, 2, 0))
+			return H323_ERROR_BOUND;
 		len = get_len(bs) + f->lb;
 		break;
 	default:	/* 2 <= Range <= 255 */
+		if (nf_h323_error_boundary(bs, 0, f->sz))
+			return H323_ERROR_BOUND;
 		len = get_bits(bs, f->sz) + f->lb;
 		BYTE_ALIGN(bs);
 		break;
···
 
 	PRINT("\n");
 
-	CHECK_BOUND(bs, 0);
+	if (nf_h323_error_boundary(bs, 0, 0))
+		return H323_ERROR_BOUND;
 	return H323_ERROR_NONE;
 }
···
 	switch (f->sz) {
 	case BYTE:		/* Range == 256 */
 		BYTE_ALIGN(bs);
-		CHECK_BOUND(bs, 1);
+		if (nf_h323_error_boundary(bs, 1, 0))
+			return H323_ERROR_BOUND;
 		len = (*bs->cur++) + f->lb;
 		break;
 	default:	/* 2 <= Range <= 255 */
+		if (nf_h323_error_boundary(bs, 0, f->sz))
+			return H323_ERROR_BOUND;
 		len = get_bits(bs, f->sz) + f->lb;
 		BYTE_ALIGN(bs);
 		break;
···
 
 	bs->cur += len << 1;
 
-	CHECK_BOUND(bs, 0);
+	if (nf_h323_error_boundary(bs, 0, 0))
+		return H323_ERROR_BOUND;
 	return H323_ERROR_NONE;
 }
···
 	base = (base && (f->attr & DECODE)) ? base + f->offset : NULL;
 
 	/* Extensible? */
+	if (nf_h323_error_boundary(bs, 0, 1))
+		return H323_ERROR_BOUND;
 	ext = (f->attr & EXT) ? get_bit(bs) : 0;
 
 	/* Get fields bitmap */
+	if (nf_h323_error_boundary(bs, 0, f->sz))
+		return H323_ERROR_BOUND;
 	bmp = get_bitmap(bs, f->sz);
 	if (base)
 		*(unsigned int *)base = bmp;
···
 
 		/* Decode */
 		if (son->attr & OPEN) {	/* Open field */
-			CHECK_BOUND(bs, 2);
+			if (nf_h323_error_boundary(bs, 2, 0))
+				return H323_ERROR_BOUND;
 			len = get_len(bs);
-			CHECK_BOUND(bs, len);
+			if (nf_h323_error_boundary(bs, len, 0))
+				return H323_ERROR_BOUND;
 			if (!base || !(son->attr & DECODE)) {
 				PRINT("%*.s%s\n", (level + 1) * TAB_SIZE,
 				      " ", son->name);
···
 		return H323_ERROR_NONE;
 
 	/* Get the extension bitmap */
+	if (nf_h323_error_boundary(bs, 0, 7))
+		return H323_ERROR_BOUND;
 	bmp2_len = get_bits(bs, 7) + 1;
-	CHECK_BOUND(bs, (bmp2_len + 7) >> 3);
+	if (nf_h323_error_boundary(bs, 0, bmp2_len))
+		return H323_ERROR_BOUND;
 	bmp2 = get_bitmap(bs, bmp2_len);
 	bmp |= bmp2 >> f->sz;
 	if (base)
···
 	for (opt = 0; opt < bmp2_len; opt++, i++, son++) {
 		/* Check Range */
 		if (i >= f->ub) {	/* Newer Version? */
-			CHECK_BOUND(bs, 2);
+			if (nf_h323_error_boundary(bs, 2, 0))
+				return H323_ERROR_BOUND;
 			len = get_len(bs);
-			CHECK_BOUND(bs, len);
+			if (nf_h323_error_boundary(bs, len, 0))
+				return H323_ERROR_BOUND;
 			bs->cur += len;
 			continue;
 		}
···
 		if (!((0x80000000 >> opt) & bmp2))	/* Not present */
 			continue;
 
-		CHECK_BOUND(bs, 2);
+		if (nf_h323_error_boundary(bs, 2, 0))
+			return H323_ERROR_BOUND;
 		len = get_len(bs);
-		CHECK_BOUND(bs, len);
+		if (nf_h323_error_boundary(bs, len, 0))
+			return H323_ERROR_BOUND;
 		if (!base || !(son->attr & DECODE)) {
 			PRINT("%*.s%s\n", (level + 1) * TAB_SIZE, " ",
 			      son->name);
···
 	switch (f->sz) {
 	case BYTE:
 		BYTE_ALIGN(bs);
-		CHECK_BOUND(bs, 1);
+		if (nf_h323_error_boundary(bs, 1, 0))
+			return H323_ERROR_BOUND;
 		count = *bs->cur++;
 		break;
 	case WORD:
 		BYTE_ALIGN(bs);
-		CHECK_BOUND(bs, 2);
+		if (nf_h323_error_boundary(bs, 2, 0))
+			return H323_ERROR_BOUND;
 		count = *bs->cur++;
 		count <<= 8;
 		count += *bs->cur++;
 		break;
 	case SEMI:
 		BYTE_ALIGN(bs);
-		CHECK_BOUND(bs, 2);
+		if (nf_h323_error_boundary(bs, 2, 0))
+			return H323_ERROR_BOUND;
 		count = get_len(bs);
 		break;
 	default:
+		if (nf_h323_error_boundary(bs, 0, f->sz))
+			return H323_ERROR_BOUND;
 		count = get_bits(bs, f->sz);
 		break;
 	}
···
 	for (i = 0; i < count; i++) {
 		if (son->attr & OPEN) {
 			BYTE_ALIGN(bs);
+			if (nf_h323_error_boundary(bs, 2, 0))
+				return H323_ERROR_BOUND;
 			len = get_len(bs);
-			CHECK_BOUND(bs, len);
+			if (nf_h323_error_boundary(bs, len, 0))
+				return H323_ERROR_BOUND;
 			if (!base || !(son->attr & DECODE)) {
 				PRINT("%*.s%s\n", (level + 1) * TAB_SIZE,
 				      " ", son->name);
···
 	base = (base && (f->attr & DECODE)) ? base + f->offset : NULL;
 
 	/* Decode the choice index number */
+	if (nf_h323_error_boundary(bs, 0, 1))
+		return H323_ERROR_BOUND;
 	if ((f->attr & EXT) && get_bit(bs)) {
 		ext = 1;
+		if (nf_h323_error_boundary(bs, 0, 7))
+			return H323_ERROR_BOUND;
 		type = get_bits(bs, 7) + f->lb;
 	} else {
 		ext = 0;
+		if (nf_h323_error_boundary(bs, 0, f->sz))
+			return H323_ERROR_BOUND;
 		type = get_bits(bs, f->sz);
 		if (type >= f->lb)
 			return H323_ERROR_RANGE;
···
 	/* Check Range */
 	if (type >= f->ub) {	/* Newer version? */
 		BYTE_ALIGN(bs);
+		if (nf_h323_error_boundary(bs, 2, 0))
+			return H323_ERROR_BOUND;
 		len = get_len(bs);
-		CHECK_BOUND(bs, len);
+		if (nf_h323_error_boundary(bs, len, 0))
+			return H323_ERROR_BOUND;
 		bs->cur += len;
 		return H323_ERROR_NONE;
 	}
···
 
 	if (ext || (son->attr & OPEN)) {
 		BYTE_ALIGN(bs);
+		if (nf_h323_error_boundary(bs, len, 0))
+			return H323_ERROR_BOUND;
 		len = get_len(bs);
-		CHECK_BOUND(bs, len);
+		if (nf_h323_error_boundary(bs, len, 0))
+			return H323_ERROR_BOUND;
 		if (!base || !(son->attr & DECODE)) {
 			PRINT("%*.s%s\n", (level + 1) * TAB_SIZE, " ",
 			      son->name);
 #include <linux/module.h>
 #include <linux/kernel.h>
 
+#include <linux/capability.h>
 #include <linux/if.h>
 #include <linux/inetdevice.h>
 #include <linux/ip.h>
···
 	struct xt_osf_finger *kf = NULL, *sf;
 	int err = 0;
 
+	if (!capable(CAP_NET_ADMIN))
+		return -EPERM;
+
 	if (!osf_attrs[OSF_ATTR_FINGER])
 		return -EINVAL;
···
 	struct xt_osf_user_finger *f;
 	struct xt_osf_finger *sf;
 	int err = -ENOENT;
+
+	if (!capable(CAP_NET_ADMIN))
+		return -EPERM;
 
 	if (!osf_attrs[OSF_ATTR_FINGER])
 		return -EINVAL;
net/netlink/af_netlink.c (+3)

 	struct sock *sk = skb->sk;
 	int ret = -ENOMEM;
 
+	if (!net_eq(dev_net(dev), sock_net(sk)))
+		return 0;
+
 	dev_hold(dev);
 
 	if (is_vmalloc_addr(skb->head))
 #include <linux/skbuff.h>
 #include <linux/init.h>
 #include <linux/kmod.h>
-#include <linux/err.h>
 #include <linux/slab.h>
 #include <net/net_namespace.h>
 #include <net/sock.h>
···
 	/* Hold a refcnt for all chains, so that they don't disappear
 	 * while we are iterating.
 	 */
+	if (!block)
+		return;
 	list_for_each_entry(chain, &block->chain_list, list)
 		tcf_chain_hold(chain);
···
 {
 	struct tcf_block_ext_info ei = {0, };
 
-	if (!block)
-		return;
 	tcf_block_put_ext(block, block->q, &ei);
 }
  * The caller must have Setattr permission to change keyring restrictions.
  *
  * The requested type name may be a NULL pointer to reject all attempts
- * to link to the keyring.  If _type is non-NULL, _restriction can be
- * NULL or a pointer to a string describing the restriction.  If _type is
- * NULL, _restriction must also be NULL.
+ * to link to the keyring.  In this case, _restriction must also be NULL.
+ * Otherwise, both _type and _restriction must be non-NULL.
  *
  * Returns 0 if successful.
  */
···
 			     const char __user *_restriction)
 {
 	key_ref_t key_ref;
-	bool link_reject = !_type;
 	char type[32];
 	char *restriction = NULL;
 	long ret;
···
 	if (IS_ERR(key_ref))
 		return PTR_ERR(key_ref);
 
+	ret = -EINVAL;
 	if (_type) {
+		if (!_restriction)
+			goto error;
+
 		ret = key_get_type_from_user(type, _type, sizeof(type));
 		if (ret < 0)
 			goto error;
-	}
-
-	if (_restriction) {
-		if (!_type) {
-			ret = -EINVAL;
-			goto error;
-		}
 
 		restriction = strndup_user(_restriction, PAGE_SIZE);
 		if (IS_ERR(restriction)) {
 			ret = PTR_ERR(restriction);
 			goto error;
 		}
+	} else {
+		if (_restriction)
+			goto error;
 	}
 
-	ret = keyring_restrict(key_ref, link_reject ? NULL : type, restriction);
+	ret = keyring_restrict(key_ref, _type ? type : NULL, restriction);
 	kfree(restriction);
-
 error:
 	key_ref_put(key_ref);
-
 	return ret;
 }
security/keys/request_key.c (+37-11)

  * The keyring selected is returned with an extra reference upon it which the
  * caller must release.
  */
-static void construct_get_dest_keyring(struct key **_dest_keyring)
+static int construct_get_dest_keyring(struct key **_dest_keyring)
 {
 	struct request_key_auth *rka;
 	const struct cred *cred = current_cred();
 	struct key *dest_keyring = *_dest_keyring, *authkey;
+	int ret;
 
 	kenter("%p", dest_keyring);
···
 		/* the caller supplied one */
 		key_get(dest_keyring);
 	} else {
+		bool do_perm_check = true;
+
 		/* use a default keyring; falling through the cases until we
 		 * find one that we actually have */
 		switch (cred->jit_keyring) {
···
 				dest_keyring =
 					key_get(rka->dest_keyring);
 				up_read(&authkey->sem);
-				if (dest_keyring)
+				if (dest_keyring) {
+					do_perm_check = false;
 					break;
+				}
 			}
 
 		case KEY_REQKEY_DEFL_THREAD_KEYRING:
···
 		default:
 			BUG();
 		}
+
+		/*
+		 * Require Write permission on the keyring.  This is essential
+		 * because the default keyring may be the session keyring, and
+		 * joining a keyring only requires Search permission.
+		 *
+		 * However, this check is skipped for the "requestor keyring" so
+		 * that /sbin/request-key can itself use request_key() to add
+		 * keys to the original requestor's destination keyring.
+		 */
+		if (dest_keyring && do_perm_check) {
+			ret = key_permission(make_key_ref(dest_keyring, 1),
+					     KEY_NEED_WRITE);
+			if (ret) {
+				key_put(dest_keyring);
+				return ret;
+			}
+		}
 	}
 
 	*_dest_keyring = dest_keyring;
 	kleave(" [dk %d]", key_serial(dest_keyring));
-	return;
+	return 0;
 }
 
 /*
···
 	if (ctx->index_key.type == &key_type_keyring)
 		return ERR_PTR(-EPERM);
 
-	user = key_user_lookup(current_fsuid());
-	if (!user)
-		return ERR_PTR(-ENOMEM);
+	ret = construct_get_dest_keyring(&dest_keyring);
+	if (ret)
+		goto error;
 
-	construct_get_dest_keyring(&dest_keyring);
+	user = key_user_lookup(current_fsuid());
+	if (!user) {
+		ret = -ENOMEM;
+		goto error_put_dest_keyring;
+	}
 
 	ret = construct_alloc_key(ctx, dest_keyring, flags, user, &key);
 	key_user_put(user);
···
 	} else if (ret == -EINPROGRESS) {
 		ret = 0;
 	} else {
-		goto couldnt_alloc_key;
+		goto error_put_dest_keyring;
 	}
 
 	key_put(dest_keyring);
···
 construction_failed:
 	key_negate_and_link(key, key_negative_timeout, NULL, NULL);
 	key_put(key);
-couldnt_alloc_key:
+error_put_dest_keyring:
 	key_put(dest_keyring);
+error:
 	kleave(" = %d", ret);
 	return ERR_PTR(ret);
 }
···
 	if (!IS_ERR(key_ref)) {
 		key = key_ref_to_ptr(key_ref);
 		if (dest_keyring) {
-			construct_get_dest_keyring(&dest_keyring);
 			ret = key_link(dest_keyring, key);
-			key_put(dest_keyring);
 			if (ret < 0) {
 				key_put(key);
 				key = ERR_PTR(ret);
tools/arch/x86/include/asm/cpufeatures.h (+1)

 /* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */
 #define X86_FEATURE_CLZERO	(13*32+ 0) /* CLZERO instruction */
 #define X86_FEATURE_IRPERF	(13*32+ 1) /* Instructions Retired Count */
+#define X86_FEATURE_XSAVEERPTR	(13*32+ 2) /* Always save/restore FP error pointers */
 
 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
 #define X86_FEATURE_DTHERM	(14*32+ 0) /* Digital Thermal Sensor */
+9-12
tools/include/linux/compiler.h
···
 
 #define uninitialized_var(x) x = *(&(x))
 
-#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
-
 #include <linux/types.h>
 
 /*
···
/*
 * Prevent the compiler from merging or refetching reads or writes. The
 * compiler is also forbidden from reordering successive instances of
- * READ_ONCE, WRITE_ONCE and ACCESS_ONCE (see below), but only when the
- * compiler is aware of some particular ordering. One way to make the
- * compiler aware of ordering is to put the two invocations of READ_ONCE,
- * WRITE_ONCE or ACCESS_ONCE() in different C statements.
+ * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
+ * particular ordering. One way to make the compiler aware of ordering is to
+ * put the two invocations of READ_ONCE or WRITE_ONCE in different C
+ * statements.
 *
- * In contrast to ACCESS_ONCE these two macros will also work on aggregate
- * data types like structs or unions. If the size of the accessed data
- * type exceeds the word size of the machine (e.g., 32 bits or 64 bits)
- * READ_ONCE() and WRITE_ONCE() will fall back to memcpy and print a
- * compile-time warning.
+ * These two macros will also work on aggregate data types like structs or
+ * unions. If the size of the accessed data type exceeds the word size of
+ * the machine (e.g., 32 bits or 64 bits) READ_ONCE() and WRITE_ONCE() will
+ * fall back to memcpy and print a compile-time warning.
 *
 * Their two major use cases are: (1) Mediating communication between
 * process-level code and irq/NMI handlers, all running on the same CPU,
- * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
+ * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
 * mutilate accesses that either do not require ordering or that interact
 * with an explicit memory barrier or atomic instruction that provides the
 * required ordering.
···
 
 	vtimer_restore_state(vcpu);
 
-	if (has_vhe())
-		disable_el1_phys_timer_access();
-
 	/* Set the background timer for the physical timer emulation. */
 	phys_timer_emulate(vcpu);
 }
···
 
 	if (unlikely(!timer->enabled))
 		return;
-
-	if (has_vhe())
-		enable_el1_phys_timer_access();
 
 	vtimer_save_state(vcpu);
 
···
no_vgic:
 	preempt_disable();
 	timer->enabled = 1;
-	kvm_timer_vcpu_load_vgic(vcpu);
+	if (!irqchip_in_kernel(vcpu->kvm))
+		kvm_timer_vcpu_load_user(vcpu);
+	else
+		kvm_timer_vcpu_load_vgic(vcpu);
 	preempt_enable();
 
 	return 0;
+5-2
virt/kvm/arm/arm.c
···
 			kvm->vcpus[i] = NULL;
 		}
 	}
+	atomic_set(&kvm->online_vcpus, 0);
 }
 
 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
···
{
 	kvm_mmu_free_memory_caches(vcpu);
 	kvm_timer_vcpu_terminate(vcpu);
-	kvm_vgic_vcpu_destroy(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, vcpu);
···
 		ret = kvm_handle_mmio_return(vcpu, vcpu->run);
 		if (ret)
 			return ret;
+		if (kvm_arm_handle_step_debug(vcpu, vcpu->run))
+			return 0;
+
 	}
 
 	if (run->immediate_exit)
···
 	bool in_hyp_mode;
 
 	if (!is_hyp_mode_available()) {
-		kvm_err("HYP mode not available\n");
+		kvm_info("HYP mode not available\n");
 		return -ENODEV;
 	}
 
+20-28
virt/kvm/arm/hyp/timer-sr.c
···
 	write_sysreg(cntvoff, cntvoff_el2);
 }
 
-void __hyp_text enable_el1_phys_timer_access(void)
-{
-	u64 val;
-
-	/* Allow physical timer/counter access for the host */
-	val = read_sysreg(cnthctl_el2);
-	val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
-	write_sysreg(val, cnthctl_el2);
-}
-
-void __hyp_text disable_el1_phys_timer_access(void)
-{
-	u64 val;
-
-	/*
-	 * Disallow physical timer access for the guest
-	 * Physical counter access is allowed
-	 */
-	val = read_sysreg(cnthctl_el2);
-	val &= ~CNTHCTL_EL1PCEN;
-	val |= CNTHCTL_EL1PCTEN;
-	write_sysreg(val, cnthctl_el2);
-}
-
void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu)
{
 	/*
 	 * We don't need to do this for VHE since the host kernel runs in EL2
 	 * with HCR_EL2.TGE ==1, which makes those bits have no impact.
 	 */
-	if (!has_vhe())
-		enable_el1_phys_timer_access();
+	if (!has_vhe()) {
+		u64 val;
+
+		/* Allow physical timer/counter access for the host */
+		val = read_sysreg(cnthctl_el2);
+		val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
+		write_sysreg(val, cnthctl_el2);
+	}
}

void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu)
{
-	if (!has_vhe())
-		disable_el1_phys_timer_access();
+	if (!has_vhe()) {
+		u64 val;
+
+		/*
+		 * Disallow physical timer access for the guest
+		 * Physical counter access is allowed
+		 */
+		val = read_sysreg(cnthctl_el2);
+		val &= ~CNTHCTL_EL1PCEN;
+		val |= CNTHCTL_EL1PCTEN;
+		write_sysreg(val, cnthctl_el2);
+	}
}
···
 	u32 nr = dist->nr_spis;
 	int i, ret;
 
-	entries = kcalloc(nr, sizeof(struct kvm_kernel_irq_routing_entry),
-			  GFP_KERNEL);
+	entries = kcalloc(nr, sizeof(*entries), GFP_KERNEL);
 	if (!entries)
 		return -ENOMEM;
 
+3-1
virt/kvm/arm/vgic/vgic-its.c
···
 	u32 *intids;
 	int nr_irqs, i;
 	unsigned long flags;
+	u8 pendmask;
 
 	nr_irqs = vgic_copy_lpi_list(vcpu, &intids);
 	if (nr_irqs < 0)
···
 
 	for (i = 0; i < nr_irqs; i++) {
 		int byte_offset, bit_nr;
-		u8 pendmask;
 
 		byte_offset = intids[i] / BITS_PER_BYTE;
 		bit_nr = intids[i] % BITS_PER_BYTE;
···
 		return E_ITS_MAPC_COLLECTION_OOR;
 
 	collection = kzalloc(sizeof(*collection), GFP_KERNEL);
+	if (!collection)
+		return -ENOMEM;
 
 	collection->collection_id = coll_id;
 	collection->target_addr = COLLECTION_NOT_MAPPED;
+1-1
virt/kvm/arm/vgic/vgic-v3.c
···
 	int last_byte_offset = -1;
 	struct vgic_irq *irq;
 	int ret;
+	u8 val;
 
 	list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
 		int byte_offset, bit_nr;
 		struct kvm_vcpu *vcpu;
 		gpa_t pendbase, ptr;
 		bool stored;
-		u8 val;
 
 		vcpu = irq->target_vcpu;
 		if (!vcpu)
+4-2
virt/kvm/arm/vgic/vgic-v4.c
···
 		goto out;
 
 	WARN_ON(!(irq->hw && irq->host_irq == virq));
-	irq->hw = false;
-	ret = its_unmap_vlpi(virq);
+	if (irq->hw) {
+		irq->hw = false;
+		ret = its_unmap_vlpi(virq);
+	}
 
out:
 	mutex_unlock(&its->its_lock);
+5-3
virt/kvm/arm/vgic/vgic.c
···
int kvm_vgic_set_owner(struct kvm_vcpu *vcpu, unsigned int intid, void *owner)
{
 	struct vgic_irq *irq;
+	unsigned long flags;
 	int ret = 0;
 
 	if (!vgic_initialized(vcpu->kvm))
···
 		return -EINVAL;
 
 	irq = vgic_get_irq(vcpu->kvm, vcpu, intid);
-	spin_lock(&irq->irq_lock);
+	spin_lock_irqsave(&irq->irq_lock, flags);
 	if (irq->owner && irq->owner != owner)
 		ret = -EEXIST;
 	else
 		irq->owner = owner;
-	spin_unlock(&irq->irq_lock);
+	spin_unlock_irqrestore(&irq->irq_lock, flags);
 
 	return ret;
}
···
 
bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, unsigned int vintid)
{
-	struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, vintid);
+	struct vgic_irq *irq;
 	bool map_is_active;
 	unsigned long flags;
 
 	if (!vgic_initialized(vcpu->kvm))
 		return false;
 
+	irq = vgic_get_irq(vcpu->kvm, vcpu, vintid);
 	spin_lock_irqsave(&irq->irq_lock, flags);
 	map_is_active = irq->hw && irq->active;
 	spin_unlock_irqrestore(&irq->irq_lock, flags);
+8
virt/kvm/kvm_main.c
···
static unsigned long long kvm_createvm_count;
static unsigned long long kvm_active_vms;
 
+__weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+						   unsigned long start, unsigned long end)
+{
+}
+
bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
{
 	if (pfn_valid(pfn))
···
 	kvm_flush_remote_tlbs(kvm);
 
 	spin_unlock(&kvm->mmu_lock);
+
+	kvm_arch_mmu_notifier_invalidate_range(kvm, start, end);
+
 	srcu_read_unlock(&kvm->srcu, idx);
}
 