path: root/kernel/locking/qspinlock.c
Age        | Commit message | Author | Files | Lines (-/+)
2024-04-11 | locking/qspinlock: Use atomic_try_cmpxchg_relaxed() in xchg_tail() | Uros Bizjak | 1 | -8/+5
2023-01-05 | locking/qspinlock: Micro-optimize pending state waiting for unlock | Guo Ren | 1 | -2/+2
2022-08-19 | locking: Add __lockfunc to slow path functions | Namhyung Kim | 1 | -1/+1
2022-04-05 | locking: Apply contention tracepoints in the slow path | Namhyung Kim | 1 | -0/+5
2020-07-08 | x86/kvm: Add "nopvspin" parameter to disable PV spinlocks | Zhenzhong Duan | 1 | -0/+7
2020-01-17 | locking/qspinlock: Fix inaccessible URL of MCS lock paper | Waiman Long | 1 | -6/+7
2019-05-30 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 157 | Thomas Gleixner | 1 | -10/+1
2019-04-10 | locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs | Waiman Long | 1 | -4/+4
2019-02-28 | locking/qspinlock: Remove unnecessary BUG_ON() call | Waiman Long | 1 | -3/+0
2019-02-04 | locking/qspinlock_stat: Track the no MCS node available case | Waiman Long | 1 | -1/+2
2019-02-04 | locking/qspinlock: Handle > 4 slowpath nesting levels | Waiman Long | 1 | -0/+15
2018-10-17 | locking/pvqspinlock: Extend node size when pvqspinlock is configured | Waiman Long | 1 | -8/+26
2018-10-17 | locking/qspinlock_stat: Count instances of nested lock slowpaths | Waiman Long | 1 | -0/+5
2018-10-16 | locking/qspinlock, x86: Provide liveness guarantee | Peter Zijlstra | 1 | -1/+15
2018-10-16 | locking/qspinlock: Rework some comments | Peter Zijlstra | 1 | -10/+26
2018-10-16 | locking/qspinlock: Re-order code | Peter Zijlstra | 1 | -29/+27
2018-04-27 | locking/qspinlock: Add stat tracking for pending vs. slowpath | Waiman Long | 1 | -3/+11
2018-04-27 | locking/qspinlock: Use try_cmpxchg() instead of cmpxchg() when locking | Will Deacon | 1 | -10/+9
2018-04-27 | locking/qspinlock: Elide back-to-back RELEASE operations with smp_wmb() | Will Deacon | 1 | -16/+17
2018-04-27 | locking/qspinlock: Use smp_cond_load_relaxed() to wait for next node | Will Deacon | 1 | -4/+2
2018-04-27 | locking/qspinlock: Use atomic_cond_read_acquire() | Will Deacon | 1 | -6/+6
2018-04-27 | locking/qspinlock: Kill cmpxchg() loop when claiming lock from head of queue | Will Deacon | 1 | -11/+8
2018-04-27 | locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath | Will Deacon | 1 | -44/+58
2018-04-27 | locking/qspinlock: Bound spinning on pending->locked transition in slowpath | Will Deacon | 1 | -3/+17
2018-04-27 | locking/qspinlock: Merge 'struct __qspinlock' into 'struct qspinlock' | Will Deacon | 1 | -43/+3
2018-02-13 | locking/qspinlock: Ensure node->count is updated before initialising node | Will Deacon | 1 | -0/+8
2018-02-13 | locking/qspinlock: Ensure node is initialised before updating prev->next | Will Deacon | 1 | -6/+7
2017-12-04 | locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath() | Paul E. McKenney | 1 | -7/+5
2017-08-17 | locking: Remove spin_unlock_wait() generic definitions | Paul E. McKenney | 1 | -117/+0
2017-07-08 | locking/qspinlock: Explicitly include asm/prefetch.h | Stafford Horne | 1 | -0/+1
2016-06-27 | locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec() | Pan Xinhui | 1 | -1/+1
2016-06-14 | locking/barriers: Introduce smp_acquire__after_ctrl_dep() | Peter Zijlstra | 1 | -1/+1
2016-06-14 | locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire() | Peter Zijlstra | 1 | -6/+6
2016-06-08 | locking/qspinlock: Add comments | Peter Zijlstra | 1 | -0/+57
2016-06-08 | locking/qspinlock: Clarify xchg_tail() ordering | Peter Zijlstra | 1 | -2/+13
2016-06-08 | locking/qspinlock: Fix spin_unlock_wait() some more | Peter Zijlstra | 1 | -0/+60
2016-02-29 | locking/qspinlock: Use smp_cond_acquire() in pending code | Waiman Long | 1 | -4/+3
2015-12-04 | locking/pvqspinlock: Queue node adaptive spinning | Waiman Long | 1 | -2/+3
2015-12-04 | locking/pvqspinlock: Allow limited lock stealing | Waiman Long | 1 | -6/+20
2015-12-04 | locking, sched: Introduce smp_cond_acquire() and use it | Peter Zijlstra | 1 | -2/+1
2015-11-23 | locking/qspinlock: Avoid redundant read of next pointer | Waiman Long | 1 | -3/+6
2015-11-23 | locking/qspinlock: Prefetch the next node cacheline | Waiman Long | 1 | -0/+10
2015-11-23 | locking/qspinlock: Use _acquire/_release() versions of cmpxchg() & xchg() | Waiman Long | 1 | -5/+24
2015-09-11 | locking/qspinlock/x86: Fix performance regression under unaccelerated VMs | Peter Zijlstra | 1 | -1/+1
2015-08-03 | locking/pvqspinlock: Only kick CPU at unlock time | Waiman Long | 1 | -3/+3
2015-05-08 | locking/pvqspinlock: Implement simple paravirt support for the qspinlock | Waiman Long | 1 | -1/+67
2015-05-08 | locking/qspinlock: Revert to test-and-set on hypervisors | Peter Zijlstra (Intel) | 1 | -0/+3
2015-05-08 | locking/qspinlock: Use a simple write to grab the lock | Waiman Long | 1 | -16/+50
2015-05-08 | locking/qspinlock: Optimize for smaller NR_CPUS | Peter Zijlstra (Intel) | 1 | -1/+68
2015-05-08 | locking/qspinlock: Extract out code snippets for the next patch | Waiman Long | 1 | -31/+48