
Script reports to be run from M.2 when using M.2 SSD cache on volume #45

Closed
herbingk opened this issue Jan 5, 2024 · 13 comments


herbingk commented Jan 5, 2024

The script reports it is being run from M.2 when using an M.2 SSD cache on the volume. The same occurs with the Synology_HDD_db script as well.

The cat /proc/mdstat command returns the following on my DS1621+:

Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
md3 : active raid1 nvme1n1p1[1] nvme0n1p1[0]
1000196800 blocks super 1.2 [2/2] [UU]

md2 : active raid5 sata1p3[0] sata3p3[2] sata4p3[3] sata5p3[4] sata6p3[5] sata2p3[1]
37453707840 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md1 : active raid1 sata1p2[0] sata4p2[5] sata5p2[4] sata6p2[3] sata3p2[2] sata2p2[1]
2097088 blocks [6/6] [UUUUUU]

md0 : active raid1 sata1p1[0] sata4p1[5] sata5p1[4] sata6p1[3] sata3p1[2] sata2p1[1]
8388544 blocks [6/6] [UUUUUU]

unused devices: <none>
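
In the output above, the M.2 SSD cache is the md array whose members are nvme devices (md3 here), while the data arrays have sata members. A small sketch (an assumption, not code from the actual script) that classifies an array this way:

```shell
#!/bin/sh
# Sketch (not the actual script's code): return success if the named
# md array in an mdstat-style file lists any NVMe member devices.
# Usage: is_nvme_md md3 /proc/mdstat
is_nvme_md() {
    grep -E "^$1 : " "$2" | grep -q 'nvme'
}

# Example against the live kernel state (guarded so the sketch runs anywhere):
if [ -r /proc/mdstat ] && is_nvme_md md3 /proc/mdstat; then
    echo "md3 has NVMe members (e.g. an M.2 SSD cache)"
fi
```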

Thanks for looking into it.

007revad self-assigned this Jan 5, 2024

007revad commented Jan 5, 2024

Looks like I'll have to set up an NVMe read cache for one of my volumes to see what the script is doing wrong.


herbingk commented Jan 6, 2024

Just did a quick check and can confirm: after disabling the M.2 SSD cache, the warning disappears.


007revad commented Jan 6, 2024

Ok. I've done some testing and I get the warning if the scriptpath variable is empty or contains a volume# that does not exist.

If you try this test script, what does it return?
https://github.com/007revad/Synology_enable_Deduplication/blob/test/script_on_ssd.sh


007revad commented Jan 6, 2024

When you run syno_enable_dedupe.sh what does the 4th or 5th line show?

It should show the path and filename of the script, like: Running from: /volume1/scripts/syno_enable_dedupe.sh
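
The usual shell idiom behind a line like that (a sketch; the actual script's code may differ) resolves the script's own path and takes the first path component as the volume, which also covers the empty or nonexistent scriptpath cases mentioned above:

```shell
#!/bin/sh
# Sketch of the common idiom (an assumption, not the script's exact code):
# resolve this script's real location, print it, and derive the volume.
scriptpath=$(readlink -f "$0")
echo "Running from: $scriptpath"

# On Synology DSM the first path component is the volume, e.g. "volume1".
scriptvol=$(echo "$scriptpath" | cut -d'/' -f2)
echo "scriptvol: $scriptvol"

# Warn if the volume could not be determined or does not exist.
if [ -z "$scriptvol" ] || [ ! -d "/$scriptvol" ]; then
    echo "WARNING: could not determine the volume this script is on"
fi
```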


herbingk commented Jan 6, 2024

> When you run syno_enable_dedupe.sh what does the 4th or 5th line show?
>
> It should show the path and filename of the script, like: Running from: /volume1/scripts/syno_enable_dedupe.sh

Running from: /volume1/admin/Synology_enable_Deduplication/syno_enable_dedupe.sh


herbingk commented Jan 6, 2024

> Ok. I've done some testing and I get the warning if the scriptpath variable is empty or contains a volume# that does not exist.
>
> If you try this test script what does it return? https://github.com/007revad/Synology_enable_Deduplication/blob/test/script_on_ssd.sh

/volume1/scripts/test.sh
scriptvol: volume1
vg: vg1
md: md3
md2
WARNING Don't store this script on an NVMe volume!


007revad commented Jan 6, 2024

Strange. It doesn't do that for me.

I currently have 4 volumes:

  1. volume1 is on HDDs
  2. volume3 is on an NVMe drive in the DS1821+
  3. volume4 is on a pair of NVMe drives in an E10M20-T1
  4. volume5 is on an HDD

And a SATA SSD as a read cache for volume 5.

/volume1/scripts/test.sh
scriptvol: volume1
vg: vg1
md: md3
md2

/volume3/scripts/test.sh
scriptvol: volume3
vg: vg3
md: md4
WARNING Don't store this script on an NVMe volume!

/volume4/scripts/test.sh
scriptvol: volume4
vg: vg4
md: md5
WARNING Don't store this script on an NVMe volume!

/volume5/scripts/test.sh
scriptvol: volume5
vg: vg5
md: md6

I just noticed that you and I are somehow getting an extra md2 on the line after "md: md3" for volume1.


007revad commented Jan 6, 2024

I just ran that test script on my DS720+ which only has 1 HDD volume and I didn't get the warning or the extra md2.

/volume1/scripts/test.sh
scriptvol: volume1
vg: vg1
md: md2


herbingk commented Jan 6, 2024

> I just ran that test script on my DS720+ which only has 1 HDD volume and I didn't get the warning or the extra md2.
>
> /volume1/scripts/test.sh
> scriptvol: volume1
> vg: vg1
> md: md2

md3 is my M.2 SSD cache.
md2 is my RAID 5 SATA SSD volume, where the script is stored.

cat /proc/mdstat returns:

md3 : active raid1 nvme1n1p1[1] nvme0n1p1[0]
1000196800 blocks super 1.2 [2/2] [UU]

md2 : active raid5 sata1p3[0] sata3p3[2] sata4p3[3] sata5p3[4] sata6p3[5] sata2p3[1]
37453707840 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md1 : active raid1 sata1p2[0] sata4p2[5] sata5p2[4] sata6p2[3] sata3p2[2] sata2p2[1]
2097088 blocks [6/6] [UUUUUU]

md0 : active raid1 sata1p1[0] sata4p1[5] sata5p1[4] sata6p1[3] sata3p1[2] sata2p1[1]
8388544 blocks [6/6] [UUUUUU]


007revad commented Jan 6, 2024

You're seeing the same as me. md3 is the SSD cache; md2 is the HDD array.

On my DS1821+ I get this:

# lvdisplay | grep /volume_1 | cut -d"/" -f3
vg1

And this:

# pvdisplay | grep -B 1 vg1 | grep /dev/ | cut -d"/" -f3
md3
md2

Digging a little deeper I see this:

# pvdisplay | grep -B 1 vg1
  PV Name               /dev/md3
  VG Name               shared_cache_vg1
--
  PV Name               /dev/md2
  VG Name               vg1

I believe I've found the solution. Can you try this test script to confirm it works correctly for you?
https://github.com/007revad/Synology_enable_Deduplication/blob/test/script_on_ssd.sh


herbingk commented Jan 6, 2024

> I believe I've found the solution. Can you try this test script to confirm it works correctly for you? https://github.com/007revad/Synology_enable_Deduplication/blob/test/script_on_ssd.sh

Sure, it no longer returns the warning:

/volume1/scripts/test.sh
scriptvol: volume1
vg: vg1
md: md2

007revad mentioned this issue Jan 7, 2024

007revad commented Jan 7, 2024

007revad closed this as completed Jan 7, 2024

herbingk commented Jan 7, 2024

It was my pleasure. I have to thank you for creating, maintaining, and sharing these valuable scripts with us. Very much appreciated!
