
Question: how to make RAID-1 with btrfs if have error "zoned: data raid1 needs raid-stripe-tree"? #856

Open
Bogdan107 opened this issue Jul 27, 2024 · 9 comments
Labels
question Not a bug, clarifications, undocumented behaviour

Comments

@Bogdan107

I have two separate disks:

  • /dev/sdb: 10 TB (already contains data);
  • /dev/sdc: 12 TB (new disk);
  • /dev/sdc1: a 10 TB partition on /dev/sdc.

I want to make a RAID-1 from /dev/sdb and /dev/sdc1.

I got the error "BTRFS error (device sdb): zoned: data raid1 needs raid-stripe-tree" when trying to run "btrfs balance start -dconvert=raid1".

Environment:

OS: Linux Gentoo
kernel: sys-kernel/gentoo-sources v6.10.0
btrfs-progs: v6.9.2 [use flags: convert man udev verify-sig zstd]

mkfs.btrfs -O list-all

Filesystem features available:
mixed-bg            - mixed data and metadata block groups (compat=2.6.37, safe=2.6.37)
quota               - hierarchical quota group support (qgroups) (compat=3.4)
extref              - increased hardlink limit per file to 65536 (compat=3.7, safe=3.12, default=3.12)
raid56              - raid56 extended format (compat=3.9)
skinny-metadata     - reduced-size metadata extent refs (compat=3.10, safe=3.18, default=3.18)
no-holes            - no explicit hole extents for files (compat=3.14, safe=4.0, default=5.15)
fst                 - free-space-tree alias
free-space-tree     - free space tree, improved space tracking (space_cache=v2) (compat=4.5, safe=4.9, default=5.15)
raid1c34            - RAID1 with 3 or 4 copies (compat=5.5)
zoned               - support zoned (SMR/ZBC/ZNS) devices (compat=5.12)
bgt                 - block-group-tree alias
block-group-tree    - block group tree, more efficient block group tracking to reduce mount time (compat=6.1)
squota              - squota support (simple accounting qgroups) (compat=6.7)

Steps:

1 - Make storage

mkfs.btrfs /dev/sdb
mount -t btrfs /dev/sdb /mnt/10tb

2 - Increase storage size

btrfs device add /dev/sdc1 /mnt/10tb

3 - Try to make raid 1

btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/10tb

Got an error:

BTRFS info (device sdb): balance: start -dconvert=raid1 -mconvert=raid1 -sconvert=raid1
BTRFS error (device sdb): zoned: data raid1 needs raid-stripe-tree
BTRFS info (device sdb): balance: ended with status: -22

4 - Full balance

btrfs balance start --full-balance /mnt/10tb

5 - Try to make raid 1

btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/10tb

Still got the same error: zoned: data raid1 needs raid-stripe-tree.
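The status -22 in the kernel log above is a negated errno value: -EINVAL ("Invalid argument"), meaning the balance conversion was rejected up front rather than failing partway. A small sketch for decoding such codes (decode_kernel_status is a hypothetical helper, not part of btrfs-progs):

```python
import errno
import os

def decode_kernel_status(status: int) -> str:
    """Translate a negative kernel status code, as printed in dmesg
    (e.g. "balance: ended with status: -22"), into its errno name
    and human-readable message."""
    err = -status  # the kernel logs errno values negated
    name = errno.errorcode.get(err, "UNKNOWN")
    return f"{name}: {os.strerror(err)}"

print(decode_kernel_status(-22))  # EINVAL: Invalid argument
```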

Questions

  • Is it possible to create RAID-1 in my case?
  • Which conditions must be met to enable raid-stripe-tree for RAID-1?
  • Which kernel modules must be enabled for raid-stripe-tree?
  • Which btrfs-progs version must be used for raid-stripe-tree?
@adam900710
Collaborator

The raid-stripe-tree feature is not stable yet, thus it's hidden behind the experimental gate.
Even if you build a kernel with CONFIG_BTRFS_DEBUG to enable the feature on the kernel side, and configure btrfs-progs with ./configure --enable-experimental, you're going to hit various known bugs.

Furthermore, if your fs is already zoned, it means at least one device is zoned. But your sdc1 is definitely not a zoned device, as zoned devices have no partition support.
Mixing regular and zoned devices is strongly discouraged for now.

I'm wondering what disks you have for sdb and sdc. Are they really zoned devices?

@Bogdan107
Author

Bogdan107 commented Jul 27, 2024

/dev/sdb

Vendor:               HGST
Product:              HUH721010AL5204
Revision:             LE02
Compliance:           SPC-4
User Capacity:        9 796 820 402 176 bytes [9,79 TB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
Formatted with type 2 protection
8 bytes of protection information per logical block
LU is fully provisioned
Rotation Rate:        7200 rpm
Form Factor:          3.5 inches
Device type:          disk
Transport protocol:   SAS (SPL-4)

/dev/sdc

Vendor:               HGST
Product:              HUH721212AL5204
Revision:             NE01
Compliance:           SPC-4
User Capacity:        11 756 399 230 976 bytes [11,7 TB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
LU is fully provisioned
Rotation Rate:        7200 rpm
Form Factor:          3.5 inches
Device type:          disk
Transport protocol:   SAS (SPL-4)

@adam900710
Collaborator

Please provide the following dump:

# cat /sys/block/sdb/queue/zoned
# cat /sys/block/sdc/queue/zoned

@Bogdan107
Author

# cat /sys/block/sdb/queue/zoned
none
# cat /sys/block/sdc/queue/zoned
none
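The sysfs attribute read above reports "none" for regular disks and "host-aware" or "host-managed" for zoned (SMR/ZBC/ZNS) devices, so both disks here are regular. A tiny sketch for interpreting the value (is_zoned is a hypothetical helper):

```python
def is_zoned(queue_zoned: str) -> bool:
    """Interpret the contents of /sys/block/<dev>/queue/zoned:
    the kernel reports "none" for regular disks, and "host-aware"
    or "host-managed" for zoned (SMR/ZBC/ZNS) devices."""
    return queue_zoned.strip() != "none"

print(is_zoned("none"))          # False (both disks above)
print(is_zoned("host-managed"))  # True
```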

@adam900710
Collaborator

Then the problem is that the zoned feature was enabled on the filesystem even though the devices are not zoned.

Just to be sure, please provide the following dump:

# btrfs ins dump-super /dev/sdb

If it includes something like the following, then you have forced emulated zoned support:

incompat_flags		0x1341
			( MIXED_BACKREF |
			  EXTENDED_IREF |
			  SKINNY_METADATA |
			  NO_HOLES |
			  ZONED )  <<<

In that case, you just need to re-run mkfs without the ZONED feature, and then regular RAID1 will work without any problem.

Or just create the RAID1 directly at mkfs time:

# mkfs.btrfs -f /dev/sdb /dev/sdc1 -m raid1 -d raid1

@Bogdan107
Author

btrfs ins dump-super /dev/sdb

superblock: bytenr=65536, device=/dev/sdb
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x3a388085 [match]
bytenr			65536
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			1e5d3a9a-2628-49b1-bbf9-b6c9db19c52e
metadata_uuid		00000000-0000-0000-0000-000000000000
label			datastore1
generation		2519996
root			32350498390016
sys_array_size		129
chunk_root_generation	2519994
root_level		0
chunk_root		32148172111872
chunk_root_level	1
log_root		0
log_root_transid (deprecated)	0
log_root_level		0
total_bytes		19593636610048
bytes_used		1657296355328
sectorsize		4096
nodesize		16384
leafsize (deprecated)	16384
stripesize		4096
root_dir		6
num_devices		2
compat_flags		0x0
compat_ro_flags		0xb
			( FREE_SPACE_TREE |
			  FREE_SPACE_TREE_VALID |
			  BLOCK_GROUP_TREE )
incompat_flags		0x1373
			( MIXED_BACKREF |
			  DEFAULT_SUBVOL |
			  COMPRESS_ZSTD |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA |
			  NO_HOLES |
			  ZONED )
cache_generation	0
uuid_tree_generation	2519996
dev_item.uuid		12c7b55b-762b-43e6-a355-94e225b463ba
dev_item.fsid		1e5d3a9a-2628-49b1-bbf9-b6c9db19c52e [match]
dev_item.type		0
dev_item.total_bytes	9796820402176
dev_item.bytes_used	898185035776
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		1
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
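The relevant line in the dump is incompat_flags 0x1373: it includes the ZONED bit even though both disks are regular. A sketch that decodes the value, with bit positions assumed from the kernel's btrfs UAPI headers:

```python
# Incompat feature bits, values assumed from the kernel's btrfs
# UAPI headers (BTRFS_FEATURE_INCOMPAT_* definitions).
INCOMPAT_BITS = {
    0: "MIXED_BACKREF",
    1: "DEFAULT_SUBVOL",
    2: "MIXED_GROUPS",
    3: "COMPRESS_LZO",
    4: "COMPRESS_ZSTD",
    5: "BIG_METADATA",
    6: "EXTENDED_IREF",
    7: "RAID56",
    8: "SKINNY_METADATA",
    9: "NO_HOLES",
    10: "METADATA_UUID",
    11: "RAID1C34",
    12: "ZONED",
}

def decode_incompat(flags: int) -> list[str]:
    """Return the names of all incompat feature bits set in *flags*."""
    return [name for bit, name in INCOMPAT_BITS.items() if flags & (1 << bit)]

print(decode_incompat(0x1373))
# ['MIXED_BACKREF', 'DEFAULT_SUBVOL', 'COMPRESS_ZSTD', 'BIG_METADATA',
#  'EXTENDED_IREF', 'SKINNY_METADATA', 'NO_HOLES', 'ZONED']
```

The decoded list matches the flag names printed by dump-super above, confirming that ZONED (bit 12) is set.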

@adam900710
Collaborator

So unfortunately you have to re-create the fs to remove the ZONED flag.

I have no idea why a default mkfs.btrfs would even enable zoned, though.
Did you specify the ZONED feature during mkfs?

@Bogdan107
Author

I enabled some flags with mkfs, but not ZONED.

For now, I need a pause to back up the data and re-mkfs /dev/sdb.

@kdave kdave added the question Not a bug, clarifications, undocumented behaviour label Jul 29, 2024
@kdave
Owner

kdave commented Jul 29, 2024

For zoned on non-zoned devices, it could be possible to unset the bit and write the superblock to the right locations so it's a regular fs afterwards.
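The flag change itself amounts to clearing the ZONED incompat bit (bit 12, assumed from the kernel's btrfs headers) in incompat_flags. Purely as an illustration of the arithmetic, not a working tool:

```python
ZONED_BIT = 1 << 12  # BTRFS_FEATURE_INCOMPAT_ZONED, assumed from kernel headers

def clear_zoned(incompat_flags: int) -> int:
    """Clear the ZONED incompat bit, leaving all other feature bits intact."""
    return incompat_flags & ~ZONED_BIT

# incompat_flags from the dump-super output above:
print(hex(clear_zoned(0x1373)))  # 0x373
```

In practice a real tool would also have to recompute the superblock checksum and write the result to the regular (non-zoned) superblock locations, which is why this is shown only as arithmetic, not as a repair procedure.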

3 participants