Remove 'zero_end = 1;' from btrfs_prepare_device() #1
Closed
Conversation
then use it to deal with zero_dev_end(); it should not be set to 1 in btrfs_prepare_device().

Signed-off-by: Li Yang <[email protected]>
@kdave I haven't joined [email protected], so I'm sending this pull request.
The development takes place on the mailing list; the GitHub repository is for convenience. Please send the patch to the mailing list. As for the patch itself, I've looked at how zero_end is used and it would need more fixes. For example, main() in mkfs.c also forces the value to 1.
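For context, here is a minimal sketch of the point above. This is not the actual btrfs-progs code; the simplified signature and placeholder bodies are assumptions. It only illustrates why removing the assignment inside the prepare function alone changes nothing while the caller still forces the flag to 1.

```c
/*
 * Minimal sketch (not the actual btrfs-progs code) of the pattern under
 * discussion: if both the caller and the callee force zero_end to 1, removing
 * only the assignment inside the prepare function changes nothing. Names are
 * illustrative; bodies are simplified placeholders.
 */
#include <stdio.h>

int prepare_device(const char *path, int zero_end)
{
	/* The current code effectively ignores the argument: zero_end = 1; */
	printf("preparing %s, zero_end=%d\n", path, zero_end);
	return 0;
}

int main(int argc, char **argv)
{
	/* Per the comment above, mkfs.c's main() also forces the value to 1
	 * today, so the flag would have to be driven by a real option for the
	 * removal inside the prepare function to matter. */
	int zero_end = 1;

	return prepare_device(argc > 1 ? argv[1] : "/dev/null", zero_end);
}
```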
kdave pushed a commit that referenced this pull request on May 29, 2014
A mdrestore_struct was being written to without its mutex being held. This race was found with ThreadSanitizer; the relevant part of the report looks like this:

WARNING: ThreadSanitizer: data race (pid=18828)
  Write of size 8 at 0x7fffffc3d088 by main thread:
    #0 build_chunk_tree .../btrfs-progs/btrfs-image.c:2233
    #1 __restore_metadump .../btrfs-progs/btrfs-image.c:2294
    #2 restore_metadump .../btrfs-progs/btrfs-image.c:2345
    #3 main .../btrfs-progs/btrfs-image.c:2545
  Previous read of size 8 at 0x7fffffc3d088 by thread T1 (mutexes: write M0):
    #0 restore_worker .../btrfs-progs/btrfs-image.c:1636
  Location is stack of main thread.
  Mutex M0 created at:
    #0 pthread_mutex_init ??:0
    #1 mdrestore_init .../btrfs-progs/btrfs-image.c:1766
    #2 __restore_metadump .../btrfs-progs/btrfs-image.c:2286
    #3 restore_metadump .../btrfs-progs/btrfs-image.c:2345
    #4 main .../btrfs-progs/btrfs-image.c:2545
  Thread T1 (tid=18830, running) created by main thread at:
    #0 pthread_create ??:0
    #1 mdrestore_init .../btrfs-progs/btrfs-image.c:1784
    #2 __restore_metadump .../btrfs-progs/btrfs-image.c:2286
    #3 restore_metadump .../btrfs-progs/btrfs-image.c:2345
    #4 main .../btrfs-progs/btrfs-image.c:2545

Signed-off-by: Adam Buchbinder <[email protected]>
Signed-off-by: David Sterba <[email protected]>
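A hedged sketch of the fix pattern this report points at, with illustrative struct and field names rather than the real btrfs-image ones: any field a worker thread reads under the mutex must also be written under that same mutex by the main thread.

```c
/*
 * Hypothetical sketch, not the actual btrfs-image code: the shared state and
 * field names here are stand-ins. The point is that the writer takes the same
 * lock the worker already holds when reading.
 */
#include <pthread.h>

struct shared_state {
	pthread_mutex_t mutex;
	unsigned long leafsize;	/* read by workers, written by main thread */
};

void set_leafsize(struct shared_state *s, unsigned long value)
{
	pthread_mutex_lock(&s->mutex);
	s->leafsize = value;		/* the write now happens under the lock */
	pthread_mutex_unlock(&s->mutex);
}

unsigned long get_leafsize(struct shared_state *s)
{
	unsigned long value;

	pthread_mutex_lock(&s->mutex);
	value = s->leafsize;		/* matches what the worker already does */
	pthread_mutex_unlock(&s->mutex);
	return value;
}
```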
kdave pushed a commit that referenced this pull request on Jun 18, 2014
When a struct btrfs_fs_devices was being torn down by btrfs_close_devices(), there was an invalidated pointer in the global list fs_uuids which still pointed to it; if a device was closed and then reopened (which btrfs-convert does), freed memory would be accessed.

This was found using ThreadSanitizer (pretty much doing what AddressSanitizer would, but not exiting after the first failure). To reproduce, build with -fsanitize=thread and run 'make test'. Representative output is below. This change makes the current tests TSan-clean.

WARNING: ThreadSanitizer: heap-use-after-free (pid=29161)
  Read of size 8 at 0x7d180000eee0 by main thread:
    #0 memcmp ??:0
    #1 find_fsid .../volumes.c:81
    #2 device_list_add .../volumes.c:95
    #3 btrfs_scan_one_device .../volumes.c:259
    #4 btrfs_scan_fs_devices .../disk-io.c:1002
    #5 __open_ctree_fd .../disk-io.c:1090
    #6 open_ctree_fd .../disk-io.c:1191
    #7 do_convert .../btrfs-convert.c:2317
    #8 main .../btrfs-convert.c:2745
  Previous write of size 8 at 0x7d180000eee0 by main thread:
    #0 free ??:0
    #1 btrfs_close_devices .../volumes.c:191
    #2 close_ctree .../disk-io.c:1401
    #3 do_convert .../btrfs-convert.c:2300
    #4 main .../btrfs-convert.c:2745
  Location is heap block of size 96 at 0x7d180000eee0 allocated by main thread:
    #0 calloc ??:0 (exe+0x00000002acc6)
    #1 device_list_add .../volumes.c:97
    #2 btrfs_scan_one_device .../volumes.c:259
    #3 btrfs_scan_fs_devices .../disk-io.c:1002
    #4 __open_ctree_fd .../disk-io.c:1090
    #5 open_ctree_fd .../disk-io.c:1191
    #6 do_convert .../btrfs-convert.c:2256
    #7 main .../btrfs-convert.c:2745

Signed-off-by: Adam Buchbinder <[email protected]>
Reviewed-by: Satoru Takeuchi <[email protected]>
Signed-off-by: David Sterba <[email protected]>
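One way to avoid this class of bug, sketched with simplified stand-in types (the real volumes.c list handling and the actual fix may differ): unlink the entry from the global list before freeing it, so a later rescan can never walk into freed memory.

```c
/*
 * Illustrative sketch, not the actual volumes.c code: a teardown path that
 * drops the entry from the global fs_uuids-style list before freeing it.
 */
#include <stdlib.h>

struct list_node {
	struct list_node *prev, *next;
};

struct fs_devices_stub {
	struct list_node list;	/* linked into a global list of filesystems */
	/* ... per-filesystem data ... */
};

void list_del(struct list_node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->prev = n->next = n;
}

void close_devices(struct fs_devices_stub *fs)
{
	list_del(&fs->list);	/* drop it from the global list first ... */
	free(fs);		/* ... so freeing cannot leave a dangling pointer */
}

int main(void)
{
	struct list_node head = { &head, &head };
	struct fs_devices_stub *fs = calloc(1, sizeof(*fs));

	if (!fs)
		return 1;

	/* Link the entry into the (stand-in) global list, then tear it down. */
	fs->list.prev = &head;
	fs->list.next = head.next;
	head.next->prev = &fs->list;
	head.next = &fs->list;

	close_devices(fs);
	return 0;
}
```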
kdave pushed a commit that referenced this pull request on May 29, 2017
Running "btrfsck --repair /dev/sdd2" crashed as it can happen in (corrupted) file systems, that slot > nritems: > (gdb) bt full > #0 0x00007ffff7020e71 in __memmove_sse2_unaligned_erms () from /lib/x86_64-linux-gnu/libc.so.6 > #1 0x0000000000438764 in btrfs_del_ptr (trans=<optimized out>, root=0x6e4fe0, path=0x1d17880, level=0, slot=7) > at ctree.c:2611 > parent = 0xcd96980 > nritems = <optimized out> > __func__ = "btrfs_del_ptr" > #2 0x0000000000421b15 in repair_btree (corrupt_blocks=<optimized out>, root=<optimized out>) at cmds-check.c:3539 > key = {objectid = 77990592512, type = 168 '\250', offset = 16384} > trans = 0x8f48c0 > path = 0x1d17880 > level = 0 > #3 check_fs_root (wc=<optimized out>, root_cache=<optimized out>, root=<optimized out>) at cmds-check.c:3703 > corrupt = 0x1d17880 > corrupt_blocks = {root = {rb_node = 0x6e80c60}} > path = {nodes = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, slots = {0, 0, 0, 0, 0, 0, 0, 0}, locks = {0, 0, > 0, 0, 0, 0, 0, 0}, reada = 0, lowest_level = 0, search_for_split = 0, skip_check_block = 0} > nrefs = {bytenr = {271663104, 271646720, 560021504, 0, 0, 0, 0, 0}, refs = {1, 1, 1, 0, 0, 0, 0, 0}} > wret = 215575372 > root_node = {cache = {rb_node = {__rb_parent_color = 0, rb_right = 0x0, rb_left = 0x0}, objectid = 0, > start = 0, size = 0}, root_cache = {root = {rb_node = 0x0}}, inode_cache = {root = { > rb_node = 0x781c80}}, current = 0x819530, refs = 0} > status = 215575372 > rec = 0x1 > #4 check_fs_roots (root_cache=0xcd96b6d, root=<optimized out>) at cmds-check.c:3809 > path = {nodes = {0x6eed90, 0x6a2f40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, slots = {18, 2, 0, 0, 0, 0, 0, 0}, > locks = {0, 0, 0, 0, 0, 0, 0, 0}, reada = 0, lowest_level = 0, search_for_split = 0, > skip_check_block = 0} > key = {objectid = 323, type = 132 '\204', offset = 18446744073709551615} > wc = {shared = {root = {rb_node = 0x0}}, nodes = {0x0, 0x0, 0x7fffffffe428, 0x0, 0x0, 0x0, 0x0, 0x0}, > active_node = 2, root_level = 2} > leaf = 0x6e4fe0 > tmp_root = 0x6e4fe0 > #5 0x00000000004287c3 in cmd_check (argc=215575372, argv=0x1d17880) at cmds-check.c:11521 > root_cache = {root = {rb_node = 0x98c2940}} > info = 0x6927b0 > bytenr = 6891440 > tree_root_bytenr = 0 > uuidbuf = "f65ff1a1-76ef-456e-beb5-c6c3841e7534" > num = 215575372 > readonly = 218080104 > qgroups_repaired = 0 > #6 0x000000000040a41f in main (argc=3, argv=0x7fffffffebe8) at btrfs.c:243 > cmd = 0x689868 > bname = <optimized out> > ret = <optimized out> in that case the count of remaining items (nritems - slot - 1) gets negative. That is then casted to (unsigned long len), which leads to the observed crash. Change the tests before the move to handle only the non-corrupted case, were slow < nritems. This does not fix the corruption, but allows btrfsck to finish without crashing. Signed-off-by: Philipp Hahn <[email protected]> Signed-off-by: David Sterba <[email protected]>
kdave pushed a commit that referenced this pull request on Jul 12, 2017
The status display was reading the state while the task was updating it. Use a mutex to prevent the race.

This race was detected using ThreadSanitizer and misc-tests/005-convert-progress-thread-crash.

==================
WARNING: ThreadSanitizer: data race
  Write of size 8 by main thread:
    #0 ext2_copy_inodes btrfs-progs/convert/source-ext2.c:853
    #1 copy_inodes btrfs-progs/convert/main.c:145
    #2 do_convert btrfs-progs/convert/main.c:1297
    #3 main btrfs-progs/convert/main.c:1924
  Previous read of size 8 by thread T1:
    #0 print_copied_inodes btrfs-progs/convert/main.c:124
  Location is stack of main thread.
  Thread T1 (running) created by main thread at:
    #0 pthread_create <null>
    #1 task_start btrfs-progs/task-utils.c:50
    #2 do_convert btrfs-progs/convert/main.c:1295
    #3 main btrfs-progs/convert/main.c:1924
SUMMARY: ThreadSanitizer: data race btrfs-progs/convert/source-ext2.c:853 in ext2_copy_inodes

Signed-off-by: Adam Buchbinder <[email protected]>
Signed-off-by: David Sterba <[email protected]>
kdave pushed a commit that referenced this pull request on Jul 12, 2017
Making the code data-race safe requires that reads *and* writes happen under a mutex lock, if any of the accesses are writes. See Dmitri Vyukov, "Benign data races: what could possibly go wrong?" for more details.

The fix here was to put most of the main loop of restore_worker under a mutex lock.

This race was detected using fsck-tests/012-leaf-corruption.

==================
WARNING: ThreadSanitizer: data race
  Write of size 4 by main thread:
    #0 add_cluster btrfs-progs/image/main.c:1931
    #1 restore_metadump btrfs-progs/image/main.c:2566
    #2 main btrfs-progs/image/main.c:2859
  Previous read of size 4 by thread T6:
    #0 restore_worker btrfs-progs/image/main.c:1720
  Location is stack of main thread.
  Thread T6 (running) created by main thread at:
    #0 pthread_create <null>
    #1 mdrestore_init btrfs-progs/image/main.c:1868
    #2 restore_metadump btrfs-progs/image/main.c:2534
    #3 main btrfs-progs/image/main.c:2859
SUMMARY: ThreadSanitizer: data race btrfs-progs/image/main.c:1931 in add_cluster

Signed-off-by: Adam Buchbinder <[email protected]>
Signed-off-by: David Sterba <[email protected]>
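A minimal self-contained illustration of that rule (not the btrfs-image code; the counter merely stands in for the shared cluster/progress state): once any access to shared data is a write, both the writer and the reader take the same mutex.

```c
/* Illustrative only; compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long progress;	/* shared state */
static int done;

static void *reader(void *arg)
{
	(void)arg;
	for (;;) {
		pthread_mutex_lock(&lock);	/* the read is locked too */
		unsigned long snapshot = progress;
		int finished = done;
		pthread_mutex_unlock(&lock);

		if (finished)
			break;
		(void)snapshot;			/* a real tool would report it */
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, reader, NULL);

	for (int i = 0; i < 1000; i++) {
		pthread_mutex_lock(&lock);	/* ... and so is the write */
		progress++;
		pthread_mutex_unlock(&lock);
	}

	pthread_mutex_lock(&lock);
	done = 1;
	pthread_mutex_unlock(&lock);

	pthread_join(t, NULL);
	printf("final progress: %lu\n", progress);
	return 0;
}
```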
kdave pushed a commit that referenced this pull request on Feb 14, 2018
…_info_cache()

This bug is exposed by fsck-test with D=asan, hit by test case 020, with the following error report:

=================================================================
==10740==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x621000061580 at pc 0x56051f0db6cd bp 0x7ffe170f3e20 sp 0x7ffe170f3e10
READ of size 1 at 0x621000061580 thread T0
    #0 0x56051f0db6cc in btrfs_extent_inline_ref_type /home/adam/btrfs/btrfs-progs/ctree.h:1727
    #1 0x56051f13b669 in build_roots_info_cache /home/adam/btrfs/btrfs-progs/cmds-check.c:14306
    #2 0x56051f13c86a in repair_root_items /home/adam/btrfs/btrfs-progs/cmds-check.c:14450
    #3 0x56051f13ea89 in cmd_check /home/adam/btrfs/btrfs-progs/cmds-check.c:14965
    #4 0x56051efe75bb in main /home/adam/btrfs/btrfs-progs/btrfs.c:302
    #5 0x7f04ddbb0f49 in __libc_start_main (/usr/lib/libc.so.6+0x20f49)
    #6 0x56051efe68c9 in _start (/home/adam/btrfs/btrfs-progs/btrfs+0x5b8c9)

0x621000061580 is located 0 bytes to the right of 4224-byte region [0x621000060500,0x621000061580)
allocated by thread T0 here:
    #0 0x7f04ded50ce1 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cc:70
    #1 0x56051f04685e in __alloc_extent_buffer /home/adam/btrfs/btrfs-progs/extent_io.c:553
    #2 0x56051f047563 in alloc_extent_buffer /home/adam/btrfs/btrfs-progs/extent_io.c:687
    #3 0x56051efff1d1 in btrfs_find_create_tree_block /home/adam/btrfs/btrfs-progs/disk-io.c:187
    #4 0x56051f000133 in read_tree_block /home/adam/btrfs/btrfs-progs/disk-io.c:327
    #5 0x56051efeddb8 in read_node_slot /home/adam/btrfs/btrfs-progs/ctree.c:652
    #6 0x56051effb0d9 in btrfs_next_leaf /home/adam/btrfs/btrfs-progs/ctree.c:2853
    #7 0x56051f13b343 in build_roots_info_cache /home/adam/btrfs/btrfs-progs/cmds-check.c:14267
    #8 0x56051f13c86a in repair_root_items /home/adam/btrfs/btrfs-progs/cmds-check.c:14450
    #9 0x56051f13ea89 in cmd_check /home/adam/btrfs/btrfs-progs/cmds-check.c:14965
    #10 0x56051efe75bb in main /home/adam/btrfs/btrfs-progs/btrfs.c:302
    #11 0x7f04ddbb0f49 in __libc_start_main (/usr/lib/libc.so.6+0x20f49)

It's completely possible that one extent/metadata item has no inline reference, while build_roots_info_cache() doesn't have such a check.

Fix it by checking @iref against the item end to avoid such a problem.

Issue: #92
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
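A hedged sketch of the bounds-check idea ("checking @iref against the item end"), with plain integer offsets standing in for the real leaf and item accessors; this is not the actual cmds-check.c code.

```c
/* Illustrative sketch only; offsets stand in for real leaf/item pointers. */
#include <stdbool.h>
#include <stdio.h>

struct inline_ref_cursor {
	unsigned long ptr;	/* offset of the would-be inline ref */
	unsigned long end;	/* offset one past the end of the item */
};

bool has_inline_ref(const struct inline_ref_cursor *c, unsigned long ref_size)
{
	/* An extent/metadata item may legitimately carry no inline refs at
	 * all, so refuse to read past the item instead of assuming one. */
	return c->ptr + ref_size <= c->end;
}

int main(void)
{
	struct inline_ref_cursor c = { .ptr = 100, .end = 100 };	/* empty tail */

	if (!has_inline_ref(&c, 1))
		printf("no inline ref, skip instead of reading past the item\n");
	return 0;
}
```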
kdave pushed a commit that referenced this pull request on Jun 18, 2019
…y wrong condition to free delayed ref/head.

[BUG]
When btrfs-progs is compiled with D=asan, it can't pass even the very basic fsck tests because btrfs-image has a memory leak:

  === START TEST /home/adam/btrfs/btrfs-progs/tests//fsck-tests/001-bad-file-extent-bytenr
  restoring image default_case.img
  =================================================================
  ==7790==ERROR: LeakSanitizer: detected memory leaks

  Direct leak of 104 byte(s) in 1 object(s) allocated from:
      #0 0x7f1d3b738389 in __interceptor_malloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cc:86
      #1 0x560ca6b7f4ff in btrfs_add_delayed_tree_ref /home/adam/btrfs/btrfs-progs/delayed-ref.c:569
      #2 0x560ca6af2d0b in btrfs_free_extent /home/adam/btrfs/btrfs-progs/extent-tree.c:2155
      #3 0x560ca6ac16ca in __btrfs_cow_block /home/adam/btrfs/btrfs-progs/ctree.c:319
      #4 0x560ca6ac1d8c in btrfs_cow_block /home/adam/btrfs/btrfs-progs/ctree.c:383
      #5 0x560ca6ac6c8e in btrfs_search_slot /home/adam/btrfs/btrfs-progs/ctree.c:1153
      #6 0x560ca6ab7e83 in fixup_device_size image/main.c:2113
      #7 0x560ca6ab9279 in fixup_chunks_and_devices image/main.c:2333
      #8 0x560ca6ab9ada in restore_metadump image/main.c:2455
      #9 0x560ca6abaeba in main image/main.c:2723
      #10 0x7f1d3b148ce2 in __libc_start_main (/usr/lib/libc.so.6+0x23ce2)
  ... tons of similar leakage for delayed_tree_ref ...

  Direct leak of 96 byte(s) in 1 object(s) allocated from:
      #0 0x7f1d3b738389 in __interceptor_malloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cc:86
      #1 0x560ca6b7f5fb in btrfs_add_delayed_tree_ref /home/adam/btrfs/btrfs-progs/delayed-ref.c:583
      #2 0x560ca6af5679 in alloc_tree_block /home/adam/btrfs/btrfs-progs/extent-tree.c:2503
      #3 0x560ca6af57ac in btrfs_alloc_free_block /home/adam/btrfs/btrfs-progs/extent-tree.c:2524
      #4 0x560ca6ac115b in __btrfs_cow_block /home/adam/btrfs/btrfs-progs/ctree.c:290
      #5 0x560ca6ac1d8c in btrfs_cow_block /home/adam/btrfs/btrfs-progs/ctree.c:383
      #6 0x560ca6b7bb15 in commit_tree_roots /home/adam/btrfs/btrfs-progs/transaction.c:98
      #7 0x560ca6b7c525 in btrfs_commit_transaction /home/adam/btrfs/btrfs-progs/transaction.c:192
      #8 0x560ca6ab92be in fixup_chunks_and_devices image/main.c:2337
      #9 0x560ca6ab9ada in restore_metadump image/main.c:2455
      #10 0x560ca6abaeba in main image/main.c:2723
      #11 0x7f1d3b148ce2 in __libc_start_main (/usr/lib/libc.so.6+0x23ce2)
  ... tons of similar leakage for delayed_ref_head ...

  SUMMARY: AddressSanitizer: 1600 byte(s) leaked in 16 allocation(s).
  failed to restore image ./default_case.img

[CAUSE]
Commit c603970 ("btrfs-progs: Add delayed refs infrastructure") introduces the delayed refs infrastructure for the free space tree, however refcount_dec_and_test() from the kernel code is wrongly backported.

refcount_dec_and_test() returns true if the refcount reaches 0, so the kernel code frees the allocated space as expected:

  if (refcount_dec_and_test(&ref->refs)) {
          kmem_cache_free();
  }

However the btrfs-progs backport is using the opposite condition:

  if (--ref->refs) {
          kfree();
  }

This will not free the memory for the last user, but for refs >= 2, causing both use-after-free and memory leak for any offline write operation.

[FIX]
Fix the (--ref->refs) condition to (--ref->refs == 0) to fix the backport error.

Fixes: c603970 ("btrfs-progs: Add delayed refs infrastructure")
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
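For clarity, a tiny standalone version of the corrected release path, with simplified stand-in types rather than the real delayed-ref structures: free only when the decremented count reaches zero, mirroring refcount_dec_and_test().

```c
/* Illustrative sketch; types and names are simplified stand-ins. */
#include <stdio.h>
#include <stdlib.h>

struct delayed_ref_stub {
	unsigned int refs;
	/* ... payload ... */
};

void put_delayed_ref(struct delayed_ref_stub *ref)
{
	/* Buggy form:   if (--ref->refs)       frees while still referenced.
	 * Correct form: if (--ref->refs == 0)  frees only the last reference. */
	if (--ref->refs == 0)
		free(ref);
}

int main(void)
{
	struct delayed_ref_stub *ref = calloc(1, sizeof(*ref));

	if (!ref)
		return 1;

	ref->refs = 2;		/* two holders */
	put_delayed_ref(ref);	/* refs -> 1, nothing freed */
	put_delayed_ref(ref);	/* refs -> 0, freed here */
	printf("released on the last put only\n");
	return 0;
}
```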
kdave pushed a commit that referenced this pull request on Jan 2, 2020
[BUG]
For a certain fuzzed image, `btrfs check` will fail with the following call trace:

  Checking filesystem on issue_213.raw
  UUID: 99e50868-0bda-4d89-b0e4-7e8560312ef9
  [1/7] checking root items
  [2/7] checking extents

  Program received signal SIGABRT, Aborted.
  0x00007ffff7c88f25 in raise () from /usr/lib/libc.so.6
  (gdb) bt
  #0  0x00007ffff7c88f25 in raise () from /usr/lib/libc.so.6
  #1  0x00007ffff7c72897 in abort () from /usr/lib/libc.so.6
  #2  0x00005555555abc3e in run_next_block (...) at check/main.c:6398
  #3  0x00005555555b0f36 in deal_root_from_list (...) at check/main.c:8408
  #4  0x00005555555b1a3d in check_chunks_and_extents (fs_info=0x5555556a1e30) at check/main.c:8690
  #5  0x00005555555b1e3e in do_check_chunks_and_extents (fs_info=0x5555556a1e30) a
  #6  0x00005555555b5710 in cmd_check (cmd=0x555555696920 <cmd_struct_check>, argc
  #7  0x0000555555568dc7 in cmd_execute (cmd=0x555555696920 <cmd_struct_check>, ar
  #8  0x0000555555569713 in main (argc=2, argv=0x7fffffffde70) at btrfs.c:386

[CAUSE]
The fuzzed image has a corrupted EXTENT_DATA item in the data reloc tree:

  item 1 key (256 EXTENT_DATA 256) itemoff 16111 itemsize 12
          generation 0 type 2 (prealloc)
          prealloc data disk byte 16777216 nr 0
          prealloc data offset 0 nr 0

There are several problems with the item:

- Bad item size
  12 is too small.
- Bad key offset
  The offset of an EXTENT_DATA key represents a file offset, which should always be aligned to the sector size (4K in this particular case).

[FIX]
Do extra item size and key offset checks for original mode, and remove the abort() call in run_next_block(). And to show off how robust lowmem mode is, lowmem can handle it without any hiccup.

With this fix, original mode can detect the problem properly:

  Checking filesystem on issue_213.raw
  UUID: 99e50868-0bda-4d89-b0e4-7e8560312ef9
  [1/7] checking root items
  [2/7] checking extents
  ERROR: invalid file extent item size, have 12 expect (21, 16283]
  ERROR: errors found in extent allocation tree or chunk allocation
  [3/7] checking free space cache
  [4/7] checking fs roots
  root 18446744073709551607 root dir 256 error
  root 18446744073709551607 inode 256 errors 62, no orphan item, odd file extent, bad file extent
  ERROR: errors found in fs roots
  found 131072 bytes used, error(s) found
  total csum bytes: 0
  total tree bytes: 131072
  total fs tree bytes: 32768
  total extent tree bytes: 16384
  btree space waste bytes: 124774
  file data blocks allocated: 0
   referenced 0

Issue: #213
Signed-off-by: Qu Wenruo <[email protected]>
Reviewed-by: Su Yue <[email protected]>
Signed-off-by: David Sterba <[email protected]>
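A rough sketch of the two checks the fix describes, detached from the real check/main.c code paths and using illustrative constants: the item must be large enough to hold a file extent item, and the EXTENT_DATA key offset (a file offset) must be sector aligned.

```c
/* Illustrative sketch; the size bound and message wording are simplified. */
#include <stdbool.h>
#include <stdio.h>

#define SECTORSIZE		4096u
#define MIN_FILE_EXTENT_SIZE	21u	/* illustrative lower bound */

bool file_extent_item_valid(unsigned int item_size, unsigned long long key_offset)
{
	if (item_size < MIN_FILE_EXTENT_SIZE) {
		printf("ERROR: invalid file extent item size %u\n", item_size);
		return false;
	}
	if (key_offset % SECTORSIZE != 0) {
		printf("ERROR: unaligned EXTENT_DATA offset %llu\n", key_offset);
		return false;
	}
	return true;
}

int main(void)
{
	/* The fuzzed image above: item size 12, key offset 256. */
	return file_extent_item_valid(12, 256) ? 0 : 1;
}
```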
kdave pushed a commit that referenced this pull request on Nov 1, 2021
…properly handled

[BUG]
When a special image (derived from fsck/012) has its unused slots (slot number >= nritems) filled with garbage, lowmem mode btrfs check can crash:

  (gdb) run check --mode=lowmem ~/downloads/good.img.restored
  Starting program: /home/adam/btrfs/btrfs-progs/btrfs check --mode=lowmem ~/downloads/good.img.restored
  ...
  ERROR: root 5 INODE[5044031582654955520] nlink(257228800) not equal to inode_refs(0)
  ERROR: root 5 INODE[5044031582654955520] nbytes 474624 not equal to extent_size 0

  Program received signal SIGSEGV, Segmentation fault.
  0x0000555555639b11 in btrfs_inode_size (eb=0x5555558a7540, s=0x642e6cd1) at ./kernel-shared/ctree.h:1703
  1703    BTRFS_SETGET_FUNCS(inode_size, struct btrfs_inode_item, size, 64);
  (gdb) bt
  #0  0x0000555555639b11 in btrfs_inode_size (eb=0x5555558a7540, s=0x642e6cd1) at ./kernel-shared/ctree.h:1703
  #1  0x0000555555641544 in check_inode_item (root=0x5555556c2290, path=0x7fffffffd960) at check/mode-lowmem.c:2628

[CAUSE]
At check_inode_item() we have path->slot[0] at 29, while the tree block only has 26 items.

This happens for two reasons:

- btrfs_next_item() never reverts its slots, even if we failed to read the next leaf.
- check_inode_item() doesn't inform the caller that a fatal error happened.
  In check_inode_item(), if btrfs_next_item() fails, it goes to the out label, which doesn't really set @err properly.

This means that when check_inode_item() fails at btrfs_next_item(), it will increase path->slots[0] while it's already beyond the current tree block's nritems. When the slot increases further, and the unused item slots contain garbage, we get an invalid btrfs_item_ptr() result, causing the above segfault.

[FIX]
Fix the problems in two ways:

- Make btrfs_next_item() revert its path->slots[0] on failure.
- Properly detect the fatal error from check_inode_item().

With this, we will no longer crash on the crafted image.

Reported-by: Wang Yugui <[email protected]>
Issue: #412
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
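A simplified sketch of the first half of the fix, with a toy path structure standing in for the real btrfs_path: if reading the next leaf fails, undo the slot increment so the caller is never left pointing past nritems.

```c
/* Illustrative sketch; not the real btrfs_next_item()/btrfs_path code. */
#include <stdio.h>

struct toy_path {
	int slot;
	int nritems;
};

/* Pretend reading the next leaf failed (e.g. a corrupted image). */
int read_next_leaf(struct toy_path *path)
{
	(void)path;
	return -1;
}

int next_item(struct toy_path *path)
{
	path->slot++;
	if (path->slot >= path->nritems) {
		int ret = read_next_leaf(path);

		if (ret < 0) {
			path->slot--;	/* revert instead of leaving an invalid slot */
			return ret;
		}
	}
	return 0;
}

int main(void)
{
	struct toy_path path = { .slot = 25, .nritems = 26 };

	if (next_item(&path) < 0)
		printf("next_item failed, slot stays at %d (< nritems %d)\n",
		       path.slot, path.nritems);
	return 0;
}
```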
kdave
pushed a commit
that referenced
this pull request
Feb 1, 2022
…level

[BUG]
When running lowmem mode against a METADATA_ITEM which has an invalid
level, it will crash with the following backtrace:

(gdb) bt
#0  0x0000555555616b0b in btrfs_header_bytenr (eb=0x4) at ./kernel-shared/ctree.h:2134
#1  0x0000555555620c78 in check_tree_block_backref (root_id=5, bytenr=30457856, level=256) at check/mode-lowmem.c:3818
#2  0x0000555555621f6c in check_extent_item (path=0x7fffffffd9c0) at check/mode-lowmem.c:4334
#3  0x00005555556235a5 in check_leaf_items (root=0x555555688e10, path=0x7fffffffd9c0, nrefs=0x7fffffffda30, account_bytes=1) at check/mode-lowmem.c:4835
#4  0x0000555555623c6d in walk_down_tree (root=0x555555688e10, path=0x7fffffffd9c0, level=0x7fffffffd984, nrefs=0x7fffffffda30, check_all=1) at check/mode-lowmem.c:4967
#5  0x000055555562494f in check_btrfs_root (root=0x555555688e10, check_all=1) at check/mode-lowmem.c:5266
#6  0x00005555556254ee in check_chunks_and_extents_lowmem () at check/mode-lowmem.c:5556
#7  0x00005555555f0b82 in do_check_chunks_and_extents () at check/main.c:9114
#8  0x00005555555f50ea in cmd_check (cmd=0x55555567c640 <cmd_struct_check>, argc=3, argv=0x7fffffffdec0) at check/main.c:10892
#9  0x000055555556b2b1 in cmd_execute (argv=0x7fffffffdec0, argc=3, cmd=0x55555567c640 <cmd_struct_check>) at cmds/commands.h:125

[CAUSE]
check_extent_item() goes through the inline extent items and then checks
their backrefs. But for METADATA_ITEM it doesn't really validate
key.offset, which is u64 and can contain a value way larger than
BTRFS_MAX_LEVEL (mostly caused by a bit flip).

In that case, if we have a larger value like 256 in key.offset, then
check_tree_block_backref() will later use 256 as the level, overflow
path->nodes[level] and crash.

[FIX]
Verify the level, no matter whether it comes from btrfs_tree_block_level()
(which is just u8) or from key.offset (which is u64).

To do the check properly and detect higher-bit corruption, also change
the type of @level from u8 to u64.

Now lowmem mode can detect the problem properly:

...
[2/7] checking extents
ERROR: tree block 30457856 has bad backref level, has 256 expect [0, 7]
ERROR: extent[30457856 16384] level mismatch, wanted: 0, have: 256
ERROR: errors found in extent allocation tree or chunk allocation
[3/7] checking free space tree
...

Reviewed-by: Su Yue <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
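A minimal sketch of the range check described in the fix, using hypothetical names (demo_check_level, DEMO_MAX_LEVEL) rather than the real btrfs-progs helpers; it only shows why a u64 level taken from key.offset has to be validated against the maximum tree level before it is used as an array index.

#include <stdint.h>
#include <stdio.h>

#define DEMO_MAX_LEVEL 8	/* same limit as BTRFS_MAX_LEVEL */

/* Range-check a level read from a 64-bit key.offset before indexing with it. */
static int demo_check_level(uint64_t level)
{
	if (level >= DEMO_MAX_LEVEL) {
		fprintf(stderr, "bad backref level, has %llu expect [0, %d]\n",
			(unsigned long long)level, DEMO_MAX_LEVEL - 1);
		return -1;
	}
	return 0;
}

int main(void)
{
	/* 256 mimics the bit-flipped key.offset from the report above. */
	return demo_check_level(256) == 0 ? 0 : 1;
}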
kdave
pushed a commit
that referenced
this pull request
Oct 10, 2022
… long

[BUG]
Even with the chunk_objectid bug fixed, mkfs.btrfs can still cause a
stack overflow when enabling the extent-tree-v2 feature (needs
experimental features enabled):

# ./mkfs.btrfs -f -O extent-tree-v2 ~/test.img
btrfs-progs v5.19.1
See http://btrfs.wiki.kernel.org for more information.

ERROR: superblock magic doesn't match
NOTE: several default settings have changed in version 5.15, please make sure
      this does not affect your deployments:
      - DUP for metadata (-m dup)
      - enabled no-holes (-O no-holes)
      - enabled free-space-tree (-R free-space-tree)

Label:              (null)
UUID:               205c61e7-f58e-4e8f-9dc2-38724f5c554b
Node size:          16384
Sector size:        4096
Filesystem size:    512.00MiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         DUP              32.00MiB
  System:           DUP               8.00MiB
SSD detected:       no
Zoned device:       no
=================================================================
[... Skip full ASAN output ...]
==65655==ABORTING

[CAUSE]
For the experimental build we have unified feature output, but the old
buffer size is only 64 bytes, which is too small to cover the new full
feature string:

  extref, skinny-metadata, no-holes, free-space-tree, block-group-tree, extent-tree-v2

The above feature string is already 84 bytes, over the 64-byte on-stack
buffer. This can also be proved by the ASAN output:

==65655==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffc4e03b1d0 at pc 0x7ff0fc05fafe bp 0x7ffc4e03ac60 sp 0x7ffc4e03a408
WRITE of size 17 at 0x7ffc4e03b1d0 thread T0
    #0 0x7ff0fc05fafd in __interceptor_strcat /usr/src/debug/gcc/libsanitizer/asan/asan_interceptors.cpp:377
    #1 0x55cdb7b06ca5 in parse_features_to_string common/fsfeatures.c:316
    #2 0x55cdb7b06ce1 in btrfs_parse_fs_features_to_string common/fsfeatures.c:324
    #3 0x55cdb7a37226 in main mkfs/main.c:1783
    #4 0x7ff0fbe3c28f (/usr/lib/libc.so.6+0x2328f)
    #5 0x7ff0fbe3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
    #6 0x55cdb7a2cb34 in _start ../sysdeps/x86_64/start.S:115

[FIX]
Introduce a new macro, BTRFS_FEATURE_STRING_BUF_SIZE, along with a new
sanity check helper, btrfs_assert_feature_buf_size().

The problem is that I can not find a build-time method to verify that
BTRFS_FEATURE_STRING_BUF_SIZE is large enough to contain all feature
names, thus we have to rely on the runtime helper and its BUG_ON() to
verify the macro size.

Now the minimal buffer size for the experimental build is 138 bytes;
bump it to 160 for future expansion. If further features go beyond that
number, mkfs.btrfs/btrfs-convert will immediately crash at that BUG_ON(),
so we can definitely detect it.

Reviewed-by: Anand Jain <[email protected]>
Tested-by: Anand Jain <[email protected]>
Signed-off-by: Qu Wenruo <[email protected]>
Signed-off-by: David Sterba <[email protected]>
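A minimal sketch of the runtime size check described above, with stand-in names (DEMO_FEATURE_STRING_BUF_SIZE, demo_assert_feature_buf_size, demo_features) rather than the actual btrfs-progs macro and helper; it sums the worst-case feature string at runtime and aborts if the fixed buffer could overflow.

#include <assert.h>
#include <stdio.h>
#include <string.h>

#define DEMO_FEATURE_STRING_BUF_SIZE 160

static const char *demo_features[] = {
	"extref", "skinny-metadata", "no-holes", "free-space-tree",
	"block-group-tree", "extent-tree-v2",
};

/* Runtime stand-in for a compile-time check: make sure the fixed buffer
 * can hold every feature name plus ", " separators and a trailing NUL. */
static void demo_assert_feature_buf_size(void)
{
	size_t total = 1;
	size_t i;

	for (i = 0; i < sizeof(demo_features) / sizeof(demo_features[0]); i++)
		total += strlen(demo_features[i]) + 2;
	assert(total <= DEMO_FEATURE_STRING_BUF_SIZE);
}

int main(void)
{
	char buf[DEMO_FEATURE_STRING_BUF_SIZE] = "";
	size_t i;

	demo_assert_feature_buf_size();
	for (i = 0; i < sizeof(demo_features) / sizeof(demo_features[0]); i++) {
		if (i)
			strcat(buf, ", ");
		strcat(buf, demo_features[i]);
	}
	printf("%s\n", buf);	/* 84 characters, safely under 160 */
	return 0;
}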
boryas
pushed a commit
to boryas/btrfs-progs
that referenced
this pull request
Apr 27, 2023
When running btrfs/187 with a bash 5.0+ build that has debug enabled, the
test fails due to an unexpected warning message from bash:

$ ./check btrfs/187
FSTYP         -- btrfs
PLATFORM      -- Linux/x86_64 debian9 5.12.0-rc8-btrfs-next-92 #1 SMP PREEMPT Wed Apr 21 10:36:03 WEST 2021
MKFS_OPTIONS  -- /dev/sdc
MOUNT_OPTIONS -- /dev/sdc /home/fdmanana/btrfs-tests/scratch_1

btrfs/187 436s ... - output mismatch (see /xfstests/results//btrfs/187.out.bad)
    --- tests/btrfs/187.out	2020-10-16 23:13:46.550152492 +0100
    +++ /xfstests/results//btrfs/187.out.bad	2021-04-27 14:57:02.623941700 +0100
    @@ -1,3 +1,4 @@
     QA output created by 187
     Create a readonly snapshot of 'SCRATCH_MNT' in 'SCRATCH_MNT/snap1'
     Create a readonly snapshot of 'SCRATCH_MNT' in 'SCRATCH_MNT/snap2'
    +/xfstests/tests/btrfs/187: line 1: warning: wait_for: recursively setting old_sigint_handler to wait_sigint_handler: running_trap = 16
    ...
    (Run 'diff -u /xfstests/tests/btrfs/187.out /xfstests/results//btrfs/187.out.bad' to see the entire diff)
Ran: btrfs/187
Failures: btrfs/187
Failed 1 of 1 tests

This is because the process running dedupe_files_loop() executes the
'wait' command in the trap it has set up, and very often it receives the
SIGTERM signal while it is running the 'wait' command in the while loop
of that function - so executing the trap makes bash run 'wait' while it
is already running 'wait', triggering the warning message from bash.

That warning message was added in bash 5.0 by commit 36f89ff1d8b761
("SIGINT trap handler SIGINT loop fix"):

https://git.savannah.gnu.org/cgit/bash.git/commit/?id=36f89ff1d8b761c815d8993e9833e6357a57fc6b

So fix this by making the trap set a local variable named 'stop' to the
value 1 and have the loop exit when 'stop' is 1.

Signed-off-by: Filipe Manana <[email protected]>
Reviewed-by: Boris Burkov <[email protected]>
Signed-off-by: Eryu Guan <[email protected]>
kdave
added a commit
that referenced
this pull request
Jun 19, 2024
Free memory after errors in sum(); this was reported by gcc 13 on the CI.
There's still one leak reported, but it's not clear where it happens:

=================================================================
==32700==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 32816 byte(s) in 1 object(s) allocated from:
    #0 0x7f88e50fae37 in malloc (/lib64/libasan.so.8+0xfae37)
    #1 0x7f88e4cddeb1 in __alloc_dir (/lib64/libc.so.6+0xddeb1)

To reproduce:

$ make D=asan TEST=019-receive-clones-on-mounted-subvol test-misc

Signed-off-by: David Sterba <[email protected]>
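A minimal sketch of the error-path cleanup pattern implied by the fix, with hypothetical names (demo_sum is not the real sum() from btrfs-progs): every failure after the allocation funnels through a single label that frees the buffer, which is the kind of early-return path such leak reports point at.

#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* All error paths after the allocation jump to one label that frees the
 * buffer, so nothing leaks when the function bails out early. */
static int demo_sum(const char *input, char **out)
{
	char *buf = malloc(4096);
	int ret = 0;

	if (!buf)
		return -ENOMEM;
	if (strlen(input) >= 4096) {
		ret = -EINVAL;
		goto out;
	}
	strcpy(buf, input);
	*out = buf;
	return 0;
out:
	free(buf);
	return ret;
}

int main(void)
{
	char *res = NULL;

	if (demo_sum("hello", &res) == 0)
		free(res);
	return 0;
}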
kdave
pushed a commit
that referenced
this pull request
Jul 29, 2024
[BUG]
ASAN test fails at misc/055 with the following leak:

Qgroupid    Referenced    Exclusive   Path
--------    ----------    ---------   ----
0/5           16.00KiB     16.00KiB   <toplevel>
0/256         16.00KiB     16.00KiB   <stale>
====== RUN CHECK /home/runner/work/btrfs-progs/btrfs-progs/btrfs qgroup clear-stale /home/runner/work/btrfs-progs/btrfs-progs/tests/mnt
=================================================================
==102571==ERROR: LeakSanitizer: detected memory leaks

Indirect leak of 4096 byte(s) in 1 object(s) allocated from:
    #0 0x7fd1c98fbb37 in malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:69
    #1 0x55aa2f8953f8 in btrfs_util_subvolume_path_fd libbtrfsutil/subvolume.c:178
    #2 0x55aa2f8fa2a6 in get_or_add_qgroup cmds/qgroup.c:837
    #3 0x55aa2f8fa7e9 in update_qgroup_info cmds/qgroup.c:883
    #4 0x55aa2f8fd912 in __qgroups_search cmds/qgroup.c:1385
    #5 0x55aa2f8fe196 in qgroups_search_all cmds/qgroup.c:1453
    #6 0x55aa2f902a7c in cmd_qgroup_clear_stale cmds/qgroup.c:2281
    #7 0x55aa2f73425b in cmd_execute cmds/commands.h:126
    #8 0x55aa2f734bcc in handle_command_group /home/runner/work/btrfs-progs/btrfs-progs/btrfs.c:177
    #9 0x55aa2f73425b in cmd_execute cmds/commands.h:126
    #10 0x55aa2f735a96 in main /home/runner/work/btrfs-progs/btrfs-progs/btrfs.c:518
    #11 0x7fd1c942a1c9 (/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9) (BuildId: 08134323d00289185684a4cd177d202f39c2a5f3)
    #12 0x7fd1c942a28a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2a28a) (BuildId: 08134323d00289185684a4cd177d202f39c2a5f3)
    #13 0x55aa2f734144 in _start (/home/runner/work/btrfs-progs/btrfs-progs/btrfs+0x84144) (BuildId: 56f3dd838e1ae189c142c5d27fac025cd46deddb)

Indirect leak of 432 byte(s) in 2 object(s) allocated from:
    #0 0x7fd1c98fb4d0 in calloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:77
    #1 0x55aa2f8fa1a1 in get_or_add_qgroup cmds/qgroup.c:822
    #2 0x55aa2f8fa7e9 in update_qgroup_info cmds/qgroup.c:883
    #3 0x55aa2f8fd912 in __qgroups_search cmds/qgroup.c:1385
    #4 0x55aa2f8fe196 in qgroups_search_all cmds/qgroup.c:1453
    #5 0x55aa2f902a7c in cmd_qgroup_clear_stale cmds/qgroup.c:2281
    #6 0x55aa2f73425b in cmd_execute cmds/commands.h:126
    #7 0x55aa2f734bcc in handle_command_group /home/runner/work/btrfs-progs/btrfs-progs/btrfs.c:177
    #8 0x55aa2f73425b in cmd_execute cmds/commands.h:126
    #9 0x55aa2f735a96 in main /home/runner/work/btrfs-progs/btrfs-progs/btrfs.c:518
    #10 0x7fd1c942a1c9 (/lib/x86_64-linux-gnu/libc.so.6+0x2a1c9) (BuildId: 08134323d00289185684a4cd177d202f39c2a5f3)
    #11 0x7fd1c942a28a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2a28a) (BuildId: 08134323d00289185684a4cd177d202f39c2a5f3)
    #12 0x55aa2f734144 in _start (/home/runner/work/btrfs-progs/btrfs-progs/btrfs+0x84144) (BuildId: 56f3dd838e1ae189c142c5d27fac025cd46deddb)

[CAUSE]
The above leaks are two btrfs_qgroup structures and one path string for
the toplevel qgroup. They leak because qgroups_search_all() is called but
no cleanup is done afterwards.

[FIX]
Call __free_all_qgroups() inside cmd_qgroup_clear_stale() to properly
free the qgroups.

Fixes: 701ab15 ("btrfs-progs: qgroup: new command to delete stale qgroups")
Signed-off-by: Qu Wenruo <[email protected]>
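A minimal sketch of the cleanup idea in the fix, with stand-in names (demo_qgroup, demo_free_all_qgroups, demo_add_qgroup) rather than the real btrfs-progs qgroup structures; after a search populates a list of records, the command frees each record's path string and the record itself before returning.

#include <stdlib.h>
#include <string.h>

struct demo_qgroup {
	struct demo_qgroup *next;
	char *path;		/* allocated subvolume path, may be NULL */
};

/* Walk the list and free both the path string and the record itself. */
static void demo_free_all_qgroups(struct demo_qgroup *head)
{
	while (head) {
		struct demo_qgroup *next = head->next;

		free(head->path);
		free(head);
		head = next;
	}
}

static struct demo_qgroup *demo_add_qgroup(struct demo_qgroup *head,
					   const char *path)
{
	struct demo_qgroup *qg = calloc(1, sizeof(*qg));
	size_t len;

	if (!qg)
		return head;
	len = strlen(path) + 1;
	qg->path = malloc(len);
	if (qg->path)
		memcpy(qg->path, path, len);
	qg->next = head;
	return qg;
}

int main(void)
{
	struct demo_qgroup *qgroups = NULL;

	/* Stand-in for the search populating the list. */
	qgroups = demo_add_qgroup(qgroups, "<toplevel>");
	qgroups = demo_add_qgroup(qgroups, "<stale>");

	/* The fix: free everything before the command returns. */
	demo_free_all_qgroups(qgroups);
	return 0;
}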
In the btrfs_prepare_device() function, zero_end is passed in as a
parameter and then used to decide whether to call zero_dev_end(), so it
should not be overridden to 1 inside btrfs_prepare_device().

Signed-off-by: Li Yang <[email protected]>
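For illustration only, a minimal sketch of the behaviour the patch argues for, with hypothetical names (demo_prepare_device, demo_zero_dev_end) rather than the real mkfs code: the function honours the zero_end value the caller passed in instead of forcing it to 1 internally.

#include <stdbool.h>
#include <stdio.h>

/* Placeholder for zeroing the end of the device. */
static void demo_zero_dev_end(void)
{
	printf("zeroing end of device\n");
}

/* Honour the caller's zero_end choice instead of forcing it to 1. */
static int demo_prepare_device(bool zero_end)
{
	if (zero_end)
		demo_zero_dev_end();
	return 0;
}

int main(void)
{
	demo_prepare_device(false);	/* caller opted out of zeroing the end */
	demo_prepare_device(true);	/* caller opted in */
	return 0;
}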