path: root/include/linux/slub_def.h
Age | Commit message | Author | Files | Lines
2008-07-26 | SL*B: drop kmem cache argument from constructor | Alexey Dobriyan | 1 | -1/+1
2008-07-04 | Christoph has moved | Christoph Lameter | 1 | -1/+1
2008-07-03 | slub: Do not use 192 byte sized cache if minimum alignment is 128 byte | Christoph Lameter | 1 | -0/+2
2008-04-27 | slub: Fallback to minimal order during slab page allocation | Christoph Lameter | 1 | -0/+2
2008-04-27 | slub: Update statistics handling for variable order slabs | Christoph Lameter | 1 | -0/+2
2008-04-27 | slub: Add kmem_cache_order_objects struct | Christoph Lameter | 1 | -2/+10
2008-04-14 | slub: No need for per node slab counters if !SLUB_DEBUG | Christoph Lameter | 1 | -1/+1
2008-03-03 | slub: Fix up comments | Christoph Lameter | 1 | -2/+2
2008-02-14 | slub: Support 4k kmallocs again to compensate for page allocator slowness | Christoph Lameter | 1 | -3/+3
2008-02-14 | slub: Determine gfpflags once and not every time a slab is allocated | Christoph Lameter | 1 | -0/+1
2008-02-14 | slub: kmalloc page allocator pass-through cleanup | Pekka Enberg | 1 | -2/+6
2008-02-07 | SLUB: Support for performance statistics | Christoph Lameter | 1 | -0/+23
2008-02-04 | Explain kmem_cache_cpu fields | Christoph Lameter | 1 | -5/+5
2008-02-04 | SLUB: rename defrag to remote_node_defrag_ratio | Christoph Lameter | 1 | -1/+4
2008-01-02 | Unify /proc/slabinfo configuration | Linus Torvalds | 1 | -2/+0
2008-01-01 | slub: provide /proc/slabinfo | Pekka J Enberg | 1 | -0/+2
2007-10-17 | Slab API: remove useless ctor parameter and reorder parameters | Christoph Lameter | 1 | -1/+1
2007-10-16 | SLUB: Optimize cacheline use for zeroing | Christoph Lameter | 1 | -0/+1
2007-10-16 | SLUB: Place kmem_cache_cpu structures in a NUMA aware way | Christoph Lameter | 1 | -3/+6
2007-10-16 | SLUB: Move page->offset to kmem_cache_cpu->offset | Christoph Lameter | 1 | -0/+1
2007-10-16 | SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab | Christoph Lameter | 1 | -1/+8
2007-10-16 | SLUB: direct pass through of page size or higher kmalloc requests | Christoph Lameter | 1 | -33/+24
2007-08-31 | SLUB: Force inlining for functions in slub_def.h | Christoph Lameter | 1 | -4/+4
2007-07-20 | fix gfp_t annotations for slub | Al Viro | 1 | -1/+1
2007-07-17 | Slab allocators: Cleanup zeroing allocations | Christoph Lameter | 1 | -13/+0
2007-07-17 | SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG | Christoph Lameter | 1 | -0/+4
2007-07-17 | Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics | Christoph Lameter | 1 | -12/+0
2007-07-16 | slob: initial NUMA support | Paul Mundt | 1 | -1/+5
2007-06-16 | SLUB: minimum alignment fixes | Christoph Lameter | 1 | -2/+11
2007-06-08 | SLUB: return ZERO_SIZE_PTR for kmalloc(0) | Christoph Lameter | 1 | -8/+17
2007-05-17 | Slab allocators: define common size limitations | Christoph Lameter | 1 | -17/+2
2007-05-17 | slub: fix handling of oversized slabs | Andrew Morton | 1 | -1/+6
2007-05-17 | Slab allocators: Drop support for destructors | Christoph Lameter | 1 | -1/+0
2007-05-16 | SLUB: It is legit to allocate a slab of the maximum permitted size | Christoph Lameter | 1 | -1/+1
2007-05-15 | SLUB: CONFIG_LARGE_ALLOCS must consider MAX_ORDER limit | Christoph Lameter | 1 | -1/+5
2007-05-07 | slub: enable tracking of full slabs | Christoph Lameter | 1 | -0/+1
2007-05-07 | SLUB: allocate smallest object size if the user asks for 0 bytes | Christoph Lameter | 1 | -2/+6
2007-05-07 | SLUB core | Christoph Lameter | 1 | -0/+201