2023-10-22  bcachefs: Nocow locking fixup  (Kent Overstreet, 1 file, -3/+10)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Add some logging for btree node rewrites due to errors  (Kent Overstreet, 2 files, -4/+29)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Ensure btree node cache is not more than half dirty  (Kent Overstreet, 1 file, -0/+3)
Tweak journal reclaim to ensure the btree node cache isn't more than half dirty so that memory reclaim can always make progress - the same as we do for the btree key cache.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>

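The invariant is easy to state in code; a minimal sketch with invented field names, since the real accounting lives in journal reclaim:

    #include <stdbool.h>
    #include <stddef.h>

    /* invented stand-in for the btree node cache accounting */
    struct btree_cache_counts {
        size_t nr_nodes;
        size_t nr_dirty;
    };

    /*
     * Journal reclaim keeps flushing btree nodes while more than half the
     * cache is dirty, so memory reclaim always has clean nodes to free.
     */
    static inline bool btree_cache_too_dirty(const struct btree_cache_counts *c)
    {
        return c->nr_dirty > c->nr_nodes / 2;
    }
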
2023-10-22  bcachefs: Add max nr of IOs in flight to the move path  (Kent Overstreet, 3 files, -7/+24)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Add an assert to bch2_bucket_nocow_unlock()  (Kent Overstreet, 1 file, -0/+4)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: don't block reads if we're promoting  (Daniel Hill, 3 files, -9/+25)
The promote path calls data_update_init(), and now that we take locks there, there's potential for promote to block our read path. Just error when we can't take the lock, instead of blocking.

Signed-off-by: Daniel Hill <daniel@gluo.nz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Fix promote path leak  (Kent Overstreet, 1 file, -2/+2)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Improve invalidate_one_bucket() error messages  (Kent Overstreet, 3 files, -16/+46)
Make sure to check for lru entries that point to buckets that don't exist, as well as buckets in the wrong state, and improve the error message we print out.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Fix move_ctxt_wait_event()  (Kent Overstreet, 1 file, -8/+12)
We shouldn't be evaluating cond again if it already returned true. This fixes a bug when this helper is used for taking nocow locks.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

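A hedged sketch of the corrected pattern (macro body and helper are invented, not the real code): the condition is evaluated exactly once per iteration and the result latched, which matters once cond is a nocow trylock with side effects:

    #include <stdbool.h>

    struct moving_context { int ios_in_flight; };

    /* stub standing in for "sleep until an in-flight IO completes" */
    static inline void move_ctxt_wait_for_io(struct moving_context *ctxt)
    {
        if (ctxt->ios_in_flight > 0)
            ctxt->ios_in_flight--;
    }

    #define move_ctxt_wait_event(ctxt, cond)                          \
    do {                                                              \
        while (1) {                                                   \
            bool done = (cond);     /* evaluated once per loop */     \
            if (done)                                                 \
                break;              /* never re-evaluate cond */      \
            move_ctxt_wait_for_io(ctxt);                              \
        }                                                             \
    } while (0)
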
2023-10-22  bcachefs: Fix deadlock on nocow locks in data move path  (Kent Overstreet, 5 files, -32/+55)
The recent nocow locking rework introduced a deadlock in the data move path: the new nocow locking scheme uses a hash table with a fixed size array for chaining, meaning on hash collision we may have to wait for other locks to be released before we can lock a bucket.

And since the data move path needs to submit writes from the same thread that's taking nocow locks and submitting reads, this introduces a deadlock. This shouldn't happen often in practice, but since the data move path can keep large numbers of IOs in flight simultaneously, it's something we have to handle.

This patch makes move_ctxt_wait_event() available to bch2_data_update_init() and uses it when appropriate, which is our normal solution to this kind of thing.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: BKEY_INVALID_FROM_JOURNAL  (Kent Overstreet, 3 files, -7/+8)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Change bkey_invalid() rw param to flags  (Kent Overstreet, 24 files, -67/+68)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Rework lru btree  (Kent Overstreet, 6 files, -138/+93)
This patch changes how the LRU index works:

Instead of using KEY_TYPE_lru, where the bucket the lru entry points to is part of the value, this switches to KEY_TYPE_set and encodes the bucket we refer to in the low bits of the key. This means that we no longer have to check for collisions when inserting LRU entries.

We'll be making use of this in the next patch, which adds a btree write buffer - a pure write buffer for btree updates, where updates are appended to a simple array and then periodically sorted and batch inserted.

This is a new on disk format version, and a forced upgrade.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

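A rough illustration of the encoding idea (the bit split below is an assumption, not the real on-disk layout): because the bucket is part of the key's position, every (time, bucket) pair maps to a distinct key, so inserts can't collide on timestamp alone:

    #include <stdint.h>

    #define LRU_BUCKET_BITS 20    /* assumed: low bits hold the bucket */

    static inline uint64_t lru_pos(uint64_t time, uint64_t bucket)
    {
        return (time << LRU_BUCKET_BITS) |
               (bucket & ((1ULL << LRU_BUCKET_BITS) - 1));
    }

    static inline uint64_t lru_pos_time(uint64_t pos)
    {
        return pos >> LRU_BUCKET_BITS;
    }

    static inline uint64_t lru_pos_bucket(uint64_t pos)
    {
        return pos & ((1ULL << LRU_BUCKET_BITS) - 1);
    }
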
2023-10-22  bcachefs: Improved nocow locking  (Kent Overstreet, 10 files, -62/+179)
This improves the nocow lock table so that hash table entries have multiple locks, and locks specify which bucket they're for - i.e. we can now resolve hash collisions.

This is important because the allocator has to skip buckets that are locked in the nocow lock table, and previously hash collisions would cause it to spuriously skip unlocked buckets.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

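A hedged userspace sketch of the reworked table (sizes and names are assumptions): each hash entry holds several locks, each tagged with the bucket it's held for, so a collision is resolved by finding a matching or free slot rather than waiting on an unrelated lock:

    #include <stdint.h>
    #include <stdatomic.h>

    #define NOCOW_TABLE_SIZE 128
    #define LOCKS_PER_ENTRY  4

    struct nocow_lock {
        _Atomic uint64_t bucket;    /* 0 == unused slot */
        _Atomic int      held;      /* two-state lock state, elided here */
    };

    struct nocow_lock_entry {
        struct nocow_lock l[LOCKS_PER_ENTRY];
    };

    struct nocow_lock_table {
        struct nocow_lock_entry entries[NOCOW_TABLE_SIZE];
    };

    struct nocow_lock *nocow_lock_find(struct nocow_lock_table *t,
                                       uint64_t bucket)
    {
        struct nocow_lock_entry *e = &t->entries[bucket % NOCOW_TABLE_SIZE];

        /* first pass: a slot already held for this exact bucket */
        for (int i = 0; i < LOCKS_PER_ENTRY; i++)
            if (atomic_load(&e->l[i].bucket) == bucket)
                return &e->l[i];

        /* second pass: claim any free slot for this bucket */
        for (int i = 0; i < LOCKS_PER_ENTRY; i++) {
            uint64_t expected = 0;
            if (atomic_compare_exchange_strong(&e->l[i].bucket,
                                               &expected, bucket))
                return &e->l[i];
        }
        return NULL;    /* entry full: caller must wait and retry */
    }
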
2023-10-22  bcachefs: handle failed data_update_init cleanup  (Daniel Hill, 1 file, -1/+9)
data_update_init allocates several resources, but we forget to clean these up when it fails.

Signed-off-by: Daniel Hill <daniel@gluo.nz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: expose nocow_lock table in sysfs  (Daniel Hill, 1 file, -0/+19)
Signed-off-by: Daniel Hill <daniel@gluo.nz>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: bucket_gens btree  (Kent Overstreet, 6 files, -19/+435)
To improve mount times, add a btree for just bucket gens, 256 of them per key: this means we'll have to scan drastically less metadata at startup.

This adds:
- a trigger for keeping it in sync with the alloc btree
- initialization code, for filesystems from previous versions
- a new path for reading bucket gens
- new fsck code

And a new on disk format version.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

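The arithmetic that makes this cheap is simple; a minimal sketch, assuming the 256-gens-per-key layout described above (names approximate, not the real on-disk structures):

    #include <stdint.h>

    #define BUCKET_GENS_PER_KEY 256

    /* one key's value: just the 8-bit gen of 256 consecutive buckets */
    struct bucket_gens_val {
        uint8_t gens[BUCKET_GENS_PER_KEY];
    };

    /* which key holds a given bucket's gen */
    static inline uint64_t bucket_gens_key_offset(uint64_t bucket)
    {
        return bucket / BUCKET_GENS_PER_KEY;
    }

    /* where within that key's value the gen lives */
    static inline unsigned bucket_gens_val_offset(uint64_t bucket)
    {
        return bucket % BUCKET_GENS_PER_KEY;
    }
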
2023-10-22  bcachefs: Inline bch2_two_state_(trylock|unlock)  (Kent Overstreet, 3 files, -31/+37)
Standard inlining of fast paths - these locks are now used by our new nocow mode.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

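A minimal sketch of a two-state shared lock in C11 atomics (the real implementation differs in details such as wait queues): positive counts mean holders of one state, negative counts the other, so each state is shared among its holders while the two states exclude each other, and the uncontended fast path is a single atomic op - which is what inlining buys:

    #include <stdatomic.h>
    #include <stdbool.h>

    struct two_state_lock {
        _Atomic long v;    /* > 0: state-1 holders, < 0: state-0 holders */
    };

    static inline bool two_state_trylock(struct two_state_lock *lock, int s)
    {
        long i = s ? 1 : -1;
        long old = atomic_load(&lock->v);

        do {
            if (i > 0 ? old < 0 : old > 0)
                return false;    /* held in the opposing state */
        } while (!atomic_compare_exchange_weak(&lock->v, &old, old + i));
        return true;
    }

    static inline void two_state_unlock(struct two_state_lock *lock, int s)
    {
        atomic_fetch_sub(&lock->v, s ? 1 : -1);
    }
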
2023-10-22  bcachefs: Nocow support  (Kent Overstreet, 22 files, -52/+709)
This adds support for nocow mode, where we do writes in-place when possible.

Patch components:
- New boolean filesystem and inode option, nocow: note that when nocow is enabled, data checksumming and compression are implicitly disabled.
- To prevent in-place writes from racing with data moves (data_update.c) or bucket reuse (i.e. a bucket being reused and re-allocated while a nocow write is in flight), we have a new locking mechanism. Buckets can be locked for either data update or data move, using a fixed size hash table of two_state_shared locks. We don't have any chaining, meaning updates and moves to different buckets that hash to the same lock will wait unnecessarily - we'll want to watch for this becoming an issue.
- The allocator path also needs to check for in-place writes in flight to a given bucket before giving it out: thus we add another counter to bucket_alloc_state so we can track this.
- Fsync may now need to issue cache flushes to block devices instead of flushing the journal. We add a device bitmask to bch_inode_info, ei_devs_need_flush, which tracks devices that need to have flushes issued. Note that this will lead to unnecessary flushes when other codepaths have already issued flushes; we may want to replace this with a sequence number.
- New nocow write path: look up extents, and if they're writable, write to them - otherwise fall back to the normal COW write path.

XXX: switch to sequence numbers instead of bitmask for devs needing journal flush
XXX: ei_quota_lock being a mutex means bch2_nocow_write_done() needs to run in process context - see if we can improve this

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Data update support for unwritten extents  (Kent Overstreet, 4 files, -13/+118)
The data update path requires special support for unwritten extents - we still need to be able to move them, but there's no need to read or write anything.

This patch adds a new error code to tell bch2_move_extent() that we're short circuiting the read, and adds bch2_update_unwritten_extent() to create a reservation then call __bch2_data_update_index_update().

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Unwritten extents support  (Kent Overstreet, 8 files, -16/+69)
- bch2_extent_merge checks unwritten bit
- read path returns 0s for unwritten extents without actually reading
- reflink path skips over unwritten extents
- bch2_bkey_ptrs_invalid() checks for extents with both written and unwritten pointers, and non-normal extents (stripes, btree ptrs) with unwritten ptrs
- fiemap checks for unwritten extents and returns FIEMAP_EXTENT_UNWRITTEN

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: bch2_extent_update_i_size_sectors()  (Kent Overstreet, 1 file, -54/+60)
In the io path, when we do the extent update we also have to update the inode - for i_size and i_sectors updates, as well as for bi_journal_seq for fsync.

This factors that out into a new helper which will be used in the new nocow mode, in the unwritten extent conversion path.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: bch2_extent_fallocate()  (Kent Overstreet, 3 files, -26/+38)
This factors out part of __bchfs_fallocate() in fs-io.c into a new, lower level io.c helper, which creates a single extent reservation.

This is prep work for nocow support - the new helper will shortly gain the ability to create unwritten extents.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Skip inode unpack/pack in bch2_extent_update()  (Kent Overstreet, 1 file, -43/+45)
This takes advantage of the new inode type to skip the expensive pack/unpack when inode updates are required in the extent update path.

Additionally, we now skip the inode update entirely when i_sectors and i_size aren't changing.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Drop old maybe_extending optimization  (Kent Overstreet, 3 files, -35/+4)
The extent update path had an optimization to avoid updating the inode if we knew we were definitely not extending the file. But now that we're updating inodes on every extent update - for fsync - that code can be deleted.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>

2023-10-22  bcachefs: KEY_TYPE_inode_v3, metadata_version_inode_v3  (Kent Overstreet, 7 files, -37/+219)
Move bi_size and bi_sectors into the non-varint portion of the inode, so that the write path can update them without going through the relatively expensive unpack/pack operations.

Other changes:
- Add a field for the offset of the varint section, so we can add new non-varint fields without needing a new inode type, like alloc_v3
- Move bi_mode into the flags field, so that the varint section can be u64 aligned

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

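A hedged sketch of the resulting layout (field order and names are approximations, not the actual on-disk struct):

    #include <stdint.h>

    struct bch_inode_v3_sketch {
        uint64_t bi_journal_seq;  /* fsync: fixed-width, cheap to update */
        uint64_t bi_hash_seed;
        uint64_t bi_flags;        /* bi_mode packed in here so the varint
                                   * section stays u64 aligned; also holds
                                   * the offset where the varint fields
                                   * start, so new fixed-width fields can
                                   * be appended without a new key type */
        uint64_t bi_sectors;      /* write path updates without unpack/pack */
        uint64_t bi_size;         /* write path updates without unpack/pack */
        uint8_t  fields[];        /* varint-encoded cold fields */
    };
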
2023-10-22  bcachefs: Start snapshots before bch2_gc()  (Kent Overstreet, 1 file, -28/+14)
bch2_gc may require snapshots to be started - the repair path when checking the reflink btree may do updates to the extents btree.

This moves bch2_fs_initialize_subvolumes() and bch2_fs_snapshots_start() to before bch2_gc() - since we haven't gone RW yet, the updates in bch2_fs_initialize_subvolumes() are done via the journal replay keys list, so it's fine to do this before bch2_gc().

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Improve bch2_check_alloc_info()  (Kent Overstreet, 1 file, -53/+207)
This factors out a new helper from bch2_dev_freespace_init(), bch2_get_key_or_hole(), and uses it in bch2_check_alloc_info(): we're now able to process holes in the alloc btree as ranges, instead of one bucket at a time.

This will improve fsck performance on new filesystems, or filesystems where not every bucket has been used yet.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Improve bch2_dev_freespace_init()  (Kent Overstreet, 1 file, -18/+93)
This makes bch2_dev_freespace_init() much faster: instead of processing every bucket on the device one at a time, we handle ranges of missing keys all at once: the freespace btree is an extents style btree, so we only have to insert one freespace key for every range of missing keys in the alloc btree.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

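A toy illustration of the range-at-a-time idea (all helpers invented): walk the existing alloc keys and emit one freespace key per gap, rather than one per missing bucket:

    #include <stdint.h>
    #include <stdio.h>

    /* one extents-style insert covers the whole range [start, end) */
    static void freespace_insert_range(uint64_t start, uint64_t end)
    {
        printf("insert freespace key: [%llu, %llu)\n",
               (unsigned long long)start, (unsigned long long)end);
    }

    int main(void)
    {
        /* pretend these are bucket offsets of existing alloc keys */
        uint64_t alloc_keys[] = { 0, 1, 2, 10, 11, 500 };
        uint64_t prev_end = 0;

        for (unsigned i = 0; i < sizeof(alloc_keys) / sizeof(alloc_keys[0]); i++) {
            if (alloc_keys[i] > prev_end)
                freespace_insert_range(prev_end, alloc_keys[i]);
            prev_end = alloc_keys[i] + 1;
        }
        return 0;    /* prints [3, 10) and [12, 500): two inserts, not ~495 */
    }
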
2023-10-22  fixup bcachefs: New on disk format: Backpointers  (Kent Overstreet, 1 file, -8/+27)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Don't use key cache during fsck  (Kent Overstreet, 5 files, -7/+12)
The btree key cache mainly helps with lock contention, at the cost of additional memory overhead. During some fsck passes the memory overhead really matters, and fsck is single threaded so lock contention isn't an issue - so skipping the key cache during fsck will help with performance.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Run check_extents_to_backpointers() in multiple passes  (Kent Overstreet, 1 file, -18/+124)
Similar to the previous patch for check_backpointers_to_extents(): if the alloc + backpointers btrees do not fit in RAM, we need to run in multiple passes.

The counting of btree nodes that fit in memory is different here, because we have to walk the alloc and backpointers btrees at the same time, since a backpointer could reside in either of them and we don't know which without checking both.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Run bch2_check_backpointers_to_extents() in multiple passes if necessary  (Kent Overstreet, 2 files, -13/+180)
When the extents + reflink btrees don't fit into memory this fsck pass becomes _much_ slower, since we're doing random lookups.

This patch changes this pass to check how much of the relevant btrees will fit into memory, and run in multiple passes if needed.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Don't stop copygc while removing devices  (Kent Overstreet, 1 file, -8/+0)
With the new backpointer based copygc we don't need an explicit copygc reserve: we're always evacuating buckets one at a time. So this is no longer needed, and in fact removing it fixes a deadlock in bch2_dev_allocator_remove().

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Delete in memory ec backpointers  (Kent Overstreet, 4 files, -42/+0)
Post btree backpointers, these aren't needed anymore.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Erasure coding now uses backpointers  (Kent Overstreet, 1 file, -30/+85)
This is only a start to updating erasure coding for backpointers - it's still not working yet.

The subsequent patch will delete our old in memory backpointers for copygc, and this fixes a spurious EPERM bug/error message.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Copygc now uses backpointers  (Kent Overstreet, 5 files, -224/+346)
Previously, copygc needed to walk the entire extents & reflink btrees to find extents that needed to be moved.

Now that we have backpointers, this patch implements bch2_evacuate_bucket() in the move code, which copygc now uses for evacuating mostly empty buckets.

Also, thanks to the new backpointers code, copygc can now move btree nodes.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: New on disk format: Backpointers  (Kent Overstreet, 14 files, -66/+1186)
This patch adds backpointers: we now have a reverse index from device and offset on that device (specifically, offset within a bucket) back to btree nodes and (non cached) data extents.

The first 40 backpointers within a bucket are stored in the alloc key; after that backpointers spill over to the next backpointers btree. This is to help avoid performance regressions from additional btree updates on large streaming workloads.

This patch adds all the code for creating, checking and repairing backpointers. The next patch in the series is going to use backpointers for copygc - finally getting rid of the need to scan all extents to do copygc.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

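A rough sketch of the two-level lookup this layout implies (all names invented; the real code works in terms of bkeys and btree iterators):

    #include <stdint.h>
    #include <stddef.h>

    #define BP_INLINE_MAX 40    /* per the commit: first 40 live in the alloc key */

    struct backpointer { uint64_t bucket_offset, btree_id, pos; };

    struct alloc_value {
        unsigned           nr_backpointers;
        struct backpointer bps[BP_INLINE_MAX];
    };

    /* stub for the overflow case: search the backpointers btree */
    static inline struct backpointer *bp_btree_lookup(uint64_t bucket,
                                                      uint64_t offset)
    {
        (void)bucket; (void)offset;
        return NULL;
    }

    static inline struct backpointer *bp_lookup(struct alloc_value *a,
                                                uint64_t bucket,
                                                uint64_t offset)
    {
        /* fast path: backpointers stored inline in the alloc key */
        for (unsigned i = 0; i < a->nr_backpointers && i < BP_INLINE_MAX; i++)
            if (a->bps[i].bucket_offset == offset)
                return &a->bps[i];

        /* spilled over: consult the backpointers btree */
        return bp_btree_lookup(bucket, offset);
    }
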
2023-10-22  bcachefs: Btree write buffer  (Kent Overstreet, 16 files, -21/+677)
This adds a new method of doing btree updates - a straight write buffer, implemented as a flat fixed size array.

This is only useful when we don't need to read from the btree in order to do the update, and when reading is infrequent - perfect for the LRU btree.

This will make LRU btree updates fast enough that we'll be able to use it for persistently indexing buckets by fragmentation, which will be a massive boost to copygc performance.

Changes:
- A new btree_insert_type enum, for btree_insert_entries. Specifies btree, btree key cache, or btree write buffer.
- bch2_trans_update_buffered(): updates via the btree write buffer don't need a btree path, so we need a new update path.
- Transaction commit path changes: The update to the btree write buffer both mutates global state, and can fail if there isn't currently room. Therefore we do all write buffer updates in the transaction all at once, and also if it fails we have to revert filesystem usage counter changes. If there isn't room we flush the write buffer in the transaction commit error path and retry.
- A new persistent option, for specifying the number of entries in the write buffer.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

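A minimal userspace sketch of the write buffer idea, with invented types (the real buffer also journals entries and handles concurrency): appends are O(1) and unordered; a flush sorts by position so a whole batch can be walked in btree order:

    #include <stdint.h>
    #include <stdlib.h>

    struct wb_entry { uint64_t btree_id, pos, val; };

    struct write_buffer {
        struct wb_entry *entries;
        size_t           nr, size;
    };

    /* append path: O(1), fails when full so the caller can flush and retry */
    static int wb_append(struct write_buffer *wb, struct wb_entry e)
    {
        if (wb->nr == wb->size)
            return -1;    /* commit path flushes, then retries */
        wb->entries[wb->nr++] = e;
        return 0;
    }

    static int wb_cmp(const void *a, const void *b)
    {
        const struct wb_entry *x = a, *y = b;
        if (x->btree_id != y->btree_id)
            return x->btree_id < y->btree_id ? -1 : 1;
        return x->pos < y->pos ? -1 : x->pos > y->pos;
    }

    /* flush path: sort, then batch-insert runs that share a leaf node */
    static void wb_flush(struct write_buffer *wb)
    {
        qsort(wb->entries, wb->nr, sizeof(wb->entries[0]), wb_cmp);
        /* ... walk in order, inserting into the btree ... */
        wb->nr = 0;
    }

    int main(void)
    {
        struct wb_entry storage[8];
        struct write_buffer wb = { storage, 0, 8 };

        wb_append(&wb, (struct wb_entry){ .btree_id = 1, .pos = 42, .val = 7 });
        wb_append(&wb, (struct wb_entry){ .btree_id = 1, .pos = 10, .val = 3 });
        wb_flush(&wb);    /* entries now visited in (btree_id, pos) order */
        return 0;
    }
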
2023-10-22  bcachefs: Go RW before check_alloc_info()  (Kent Overstreet, 4 files, -29/+53)
It's possible to do btree updates before going RW by adding them to the list of updates for journal replay to do, but this is limited by what fits in RAM.

This patch switches the second alloc info phase to run after going RW - btree_gc has already ensured the alloc btree itself is correct - and tweaks the allocation path to deal with the potential small inconsistencies.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Start copygc when first going read-write  (Kent Overstreet, 2 files, -31/+42)
In the distant past, it wasn't possible to start copygc until after journal replay had finished. Now, the btree iterator code overlays keys from the journal, so there's no reason not to start it earlier - and it solves a rare deadlock.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Kill trans->flags  (Kent Overstreet, 5 files, -51/+50)
Recursive transaction commits are occasionally necessary - in particular, for the upcoming btree write buffer's flush path.

This avoids bugs due to trans->flags being accidentally mutated mid-commit, which can cause c->writes refcount leaks.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: trans->notrace_relock_fail  (Kent Overstreet, 4 files, -2/+11)
When we unlock in order to submit IO, the next relock event is likely to fail if submit_bio() blocked - we shouldn't count those events in our _fail stats, since they're expected events and shouldn't cause test failures.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Debug mode for c->writes references  (Kent Overstreet, 13 files, -46/+168)
This adds a debug mode where we split up the c->writes refcount into distinct refcounts for every codepath that takes a reference, and adds sysfs code to print the value of each ref.

This will make it easier to debug shutdown hangs due to refcount leaks.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

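A hedged userspace sketch of the scheme (ref names invented): one counter per codepath, and a sysfs-style dump that makes a leaked ref immediately attributable:

    #include <stdatomic.h>
    #include <stdio.h>

    enum bch_write_ref {
        BCH_WRITE_REF_journal_reclaim,
        BCH_WRITE_REF_stripe_delete,
        BCH_WRITE_REF_nocow,
        BCH_WRITE_REF_NR,
    };

    static const char *write_ref_names[] = {
        "journal_reclaim", "stripe_delete", "nocow",
    };

    static _Atomic long writes[BCH_WRITE_REF_NR];

    static void write_ref_get(enum bch_write_ref ref)
    {
        atomic_fetch_add(&writes[ref], 1);
    }

    static void write_ref_put(enum bch_write_ref ref)
    {
        atomic_fetch_sub(&writes[ref], 1);
    }

    /* what the sysfs file would print */
    static void write_refs_show(void)
    {
        for (int i = 0; i < BCH_WRITE_REF_NR; i++)
            printf("%-16s %ld\n", write_ref_names[i],
                   atomic_load(&writes[i]));
    }

    int main(void)
    {
        write_ref_get(BCH_WRITE_REF_nocow);
        write_refs_show();    /* "nocow 1" pinpoints the held ref */
        write_ref_put(BCH_WRITE_REF_nocow);
        return 0;
    }
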
2023-10-22  bcachefs: ec_stripe_delete_work() now takes ref on c->writes  (Kent Overstreet, 3 files, -8/+17)
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Fix btree_node_write_blocked() not being cleared  (Kent Overstreet, 1 file, -0/+3)
The btree_node_write_blocked bit was a later addition to this code; it only mirrors the state of the b->write_blocked list (empty or nonempty) - unfortunately, when it was added it wasn't correctly kept in sync - oops.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Switch a BUG_ON() to a panic()  (Kent Overstreet, 1 file, -1/+4)
This assert is popping - rarely - in the CI; this will help us track it down from the logs.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Fix btree_path_alloc()  (Kent Overstreet, 1 file, -3/+8)
We need to call bch2_trans_update_max_paths() before marking the new path as allocated, since we're not initializing it yet.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Fix memleak in replicas_table_update()  (Brett Holman, 1 file, -7/+0)
Signed-off-by: Brett Holman <bholman.devel@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

2023-10-22  bcachefs: Use for_each_btree_key_upto() more consistently  (Kent Overstreet, 11 files, -81/+150)
It's important that in BTREE_ITER_FILTER_SNAPSHOTS mode we always use peek_upto() and provide an end for the interval we're searching for - otherwise, when we hit the end of the inode, the next inode may be in a different subvolume and have no keys in the current snapshot, and we'd iterate over arbitrarily many keys before returning one.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>

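An illustrative fragment only (not standalone-compilable; iterator arguments abbreviated from context, and process() is a stand-in) showing the difference the end bound makes:

    /* unbounded: after the inode's last key, keeps visiting keys that all
     * filter out in BTREE_ITER_FILTER_SNAPSHOTS mode */
    for_each_btree_key(trans, iter, BTREE_ID_extents,
                       POS(inum, 0), BTREE_ITER_FILTER_SNAPSHOTS, k, ret)
        process(k);

    /* bounded: iteration cannot leave this inode's keyspace */
    for_each_btree_key_upto(trans, iter, BTREE_ID_extents,
                            POS(inum, 0), POS(inum, U64_MAX),
                            BTREE_ITER_FILTER_SNAPSHOTS, k, ret)
        process(k);
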