DZMM Posted November 28, 2018
1 hour ago, johnnie.black said: Finally run a scrub, make sure there are no uncorrectable errors and keep working normally, any more issues you'll get a new notification
What scrub arguments do you use for a UD pool? Thanks
JorgeB Posted November 28, 2018
3 hours ago, DZMM said: What scrub arguments do you use for a UD pool? Thanks
btrfs scrub start /mnt/disks/UD_pool_name
To check progress while the scrub is running, or the result once it is done:
btrfs scrub status /mnt/disks/UD_pool_name
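If you want to run the scrub on a schedule (via the User Scripts plugin, for example), a minimal script would look something like this; this is only a sketch, and the mount point needs to be changed to your own UD pool name:
#!/bin/bash
# Scrub the unassigned-devices pool and report the result
POOL=/mnt/disks/UD_pool_name
btrfs scrub start -B "$POOL"   # -B keeps the scrub in the foreground and prints stats when it finishes
btrfs scrub status "$POOL"     # shows totals and any uncorrectable error count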
DZMM Posted November 28, 2018
Thanks - the script picked up errors on my UD pool, which I've corrected.
jpowell8672 Posted July 7, 2019
On 6/30/2019 at 11:26 AM, Squid said:
Fix Common Problems is telling me that Write Cache is disabled on a drive. What do I do?
This test has nothing to do with any given unRaid version. For some reason, hard drive manufacturers sometimes disable write cache on their drives (in particular shucked drives) by default. This is not a problem per se, but you will see better performance by enabling the write cache on the drive in question.
To do this, first make a note of the drive letter, which you can get from the Main tab. Then, from unRaid's terminal, enter the following (changing the sdX accordingly):
hdparm -W 1 /dev/sdm
You should get a response similar to this:
/dev/sdm:
setting drive write-caching to 1 (on)
write-caching = 1 (on)
If write caching stays disabled, then either the drive is a SAS drive, in which case you will need to use the sdparm commands (Google is your friend), or the drive may be connected via USB, in which case you may not be able to do anything about this.
99% of the time, this command will permanently set write caching to be on. In some rare circumstances the change is not permanent, and you will need to add the appropriate command to either the "go" file (/config/go on the flash drive) or execute it via the User Scripts plugin (set to run at first array start only).
It should be noted that even with write caching disabled this is not a big deal. Only performance will suffer; no other ill effects will happen.
/dev/sde:
setting drive write-caching to 1 (on)
write-caching = 0 (off)
Above is what I get when I try: hdparm -W 1 /dev/sde
Does not work for me.
Squid Posted July 7, 2019
3 minutes ago, jpowell8672 said: /dev/sde: setting drive write-caching to 1 (on) write-caching = 0 (off) Above is what I get when I try: hdparm -W 1 /dev/sde Does not work for me.
Contact the drive manufacturer or ignore the warning from FCP.
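If the drive that refuses turns out to be a SAS drive, the sdparm route mentioned in the FAQ would look roughly like the following. This is only a sketch: sdX stands in for the actual device letter, and --save asks the drive to keep the setting across power cycles.
sdparm --get=WCE /dev/sdX           # WCE = write cache enable; 1 means on
sdparm --set=WCE --save /dev/sdX    # turn it on and store it in the drive's saved mode page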
Tower_Of_Power Posted July 15, 2019
Thanks for the help guys... not sure why my cache got corrupted while trying to create VMs, but it did and I needed to recover data. Glad I didn't have to start from scratch.
djhunter67 Posted July 28, 2019
On 6/30/2019 at 8:26 AM, Squid said: Fix Common Problems is telling me that Write Cache is disabled on a drive. What do I do? [same FAQ answer quoted in full above, plus the note: if this does not work for you, either contact the drive manufacturer to ask why, or simply ignore the warning from Fix Common Problems]
Excellent work, this worked perfectly.
ceyo14 Posted August 23, 2019
On 6/30/2019 at 11:26 AM, Squid said: Then, from unRaid's terminal enter the following (changing the sdX accordingly): hdparm -W 1 /dev/sdm
I have to do this more times than I want to, and it's always the same 2 disks. How can I automate this on startup?
Frank1940 Posted August 23, 2019
22 minutes ago, ceyo14 said: I have to do this more times than I want to, and it's always the same 2 disks. How can I automate this on startup?
Do this (it was right in the FAQ):
On 6/30/2019 at 11:26 AM, Squid said: 99% of the time, this command will permanently set write caching to be on. In some rare circumstances, this change is not permanent, and you will need to add the appropriate command to either the "go" file (/config/go on the flash drive), or execute it via the User Scripts plugin (with it set to run at first array start only).
itimpi Posted August 23, 2019
23 minutes ago, ceyo14 said: I have to do this more times than I want to, and it's always the same 2 disks. How can I automate this on startup?
Add it to the 'go' file in the config folder on the flash drive.
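For example, the go file (/boot/config/go on the flash) could end up looking something like this. This is a sketch: sdX and sdY stand in for your two problem disks, and it's worth remembering that sd letters can change between boots, so re-check them after hardware changes.
#!/bin/bash
# Start the Management Utility (this line is already in the stock go file)
/usr/local/sbin/emhttp &
# Re-enable write cache on the two drives that keep forgetting the setting
hdparm -W 1 /dev/sdX
hdparm -W 1 /dev/sdY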
ceyo14 Posted August 29, 2019
On 8/23/2019 at 7:59 AM, Frank1940 said: Do this (it was right in the FAQ):
On 8/23/2019 at 8:01 AM, itimpi said: Add it to the 'go' file in the config folder on the flash drive.
I wasn't sure how to add it; part of the reason I dismissed it when I first read it was that I thought "the appropriate command" was something else. But I've just read how the go file works and added both lines to the bottom, exactly as I would have typed them in the terminal. Just rebooted and all good, thanks!
fluisterben Posted October 2, 2019
Can I use ddrescue just for cloning (cache) disks as well (so without having to recover anything)? I need to move data from 2 old SSDs to 2 new SSDs (where the 2 old SSDs are part of a 5-disk SSD RAID10 array).
JorgeB Posted October 5, 2019
On 10/2/2019 at 3:16 PM, fluisterben said: Can I use ddrescue just for cloning (cache) disks as well (so without having to recover anything)?
You can, but you should use regular dd instead. Also make sure you don't try to mount the pool with duplicate devices attached.
Edit to add: You'll need to do a new config, or make Unraid forget the current cache config, to accept the new devices without wiping them.
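For a straight clone of a healthy SSD, the dd invocation would be something along these lines. A sketch only: sdX is the old device and sdY the new one, and getting them the right way round matters, because dd overwrites the target without asking.
dd if=/dev/sdX of=/dev/sdY bs=1M status=progress conv=fsync   # copy everything, show progress, flush writes to the target before exiting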
fluisterben Posted October 10, 2019
On 10/5/2019 at 9:51 AM, johnnie.black said: You can, but you should use regular dd instead. Also make sure you don't try to mount the pool with duplicate devices attached. Edit to add: You'll need to do a new config, or make Unraid forget the current cache config, to accept the new devices without wiping them.
Why is it bad to wipe the new devices? There's nothing on them. So, here's what I've done so far:
- I (successfully) changed the 5-SSD btrfs cache to raid6 (coming from raid10).
- Took out 2 of the 5 SSDs and connected the 2 new SSDs.
- Fired up unRAID again.
The array started, but I can't do anything regarding disks, because it says "Disabled -- BTRFS operation is running", so I cannot stop the array and/or format the new SSDs. Under Cache it says "Cache not installed" and then shows the Cache2/Cache3/Cache4 SSDs as normal (because they *are* installed). Is there a way to see the BTRFS operation's status? It shouldn't take too long since they're fast SSDs, so they should be able to rebuild their raid6 with the 2 SSDs missing, no?
JorgeB Posted October 10, 2019
5 hours ago, fluisterben said: Why is it bad to wipe the new devices? There's nothing on them.
That was after cloning with dd, so the new devices would have the data, and Unraid would consider them to be new and wipe them, making the pool unmountable.
5 hours ago, fluisterben said: - Took out 2 of the 5 SSDs and connected the 2 new SSDs. - Fired up unRAID again.
Did you assign the new SSDs? If not, the pool will be re-balanced onto the remaining devices.
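As for watching the operation itself, the progress of a running balance can be checked from the terminal with the standard btrfs tools, for example:
btrfs balance status /mnt/cache   # shows how many chunks have been relocated so far
btrfs fi usage /mnt/cache         # shows how data is currently spread across the pool's devices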
fluisterben Posted October 11, 2019
OK, SSDs added to the cache pool, and I ran:
~# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache -v
Dumping filters: flags 0x7, state 0x0, force is off
DATA (flags 0x100): converting, target=64, soft is off
METADATA (flags 0x100): converting, target=64, soft is off
SYSTEM (flags 0x100): converting, target=64, soft is off
I'll have to wait and see if it works, but it looks good thus far.
~# btrfs fi show
Label: none  uuid: f18f37c9-5244-4567-b88f-0bdcaa32e693
    Total devices 7  FS bytes used 937.73GiB
    devid 2  size 894.25GiB  used 893.54GiB  path /dev/nvme0n1p1
    devid 3  size 894.25GiB  used 894.25GiB  path /dev/sdp1
    devid 4  size 894.25GiB  used 894.25GiB  path /dev/sdn1
    devid 6  size 953.87GiB  used 781.50MiB  path /dev/sdj1
    devid 7  size 953.87GiB  used 781.50MiB  path /dev/sdl1
    *** Some devices missing
Label: none  uuid: dfa50f2a-9787-4d7a-88a5-7760f6b2e8a6
    Total devices 1  FS bytes used 1.62GiB
    devid 1  size 20.00GiB  used 5.02GiB  path /dev/loop2
Label: none  uuid: df5fea13-a625-4b37-b7c2-7fcc3328bc65
    Total devices 1  FS bytes used 604.00KiB
    devid 1  size 1.00GiB  used 398.38MiB  path /dev/loop3
I still need to do a new config to get rid of the ghost devices 1 and 5, I guess, but there's no hurry for that, is there?
cbr600ds2 Posted December 8, 2019
On 6/30/2019 at 11:26 AM, Squid said: Fix Common Problems is telling me that Write Cache is disabled on a drive. What do I do? [same FAQ answer quoted in full above]
So I've done this, and even added it as a script that runs at the start of the array, but it's still showing up in FCP... is that... weird?
BR0KK Posted January 11, 2020
I'm currently trying to understand how to remove drives from the cache pool without losing everything. It would be nice if you could provide screenshots of how to do this, so that Windows script kiddies like me have an easier time understanding what needs to be done. I followed the guide in the FAQ but now I'm lost:
1. Stopped the array
2. Unassigned the cache drives (2 in my case) -- my mistake for not reading...
3. Set the drive slots to none
4. Started the array
Nothing happened? The guide said there would be a move process after this, but that didn't happen. The cache pool is now "not mountable" because of the missing SSDs. This is clearly my mistake, but maybe it's possible to implement an easier solution, like an "unassign drive and move" button on the Main page: one that lets the user disable one specific drive at a time while a background process takes care of moving the data. I'm very hesitant to try the other option about "forgetting the config", since that went bad real quick last time (complete data loss; also my mistake).
JorgeB Posted January 11, 2020
4 minutes ago, BR0KK said: unassigned the cache drives (2 in my case)
From the FAQ: You can only remove devices from redundant pools (raid1, raid10, etc.), and make sure to only remove one device at a time from a redundant pool, i.e., you can't remove 2 devices at the same time from a 4-disk raid1 pool; you can remove them one at a time, after waiting for each balance to finish (as long as there's enough free space on the remaining devices).
BR0KK Posted January 11, 2020
Yes, yes... my mistake for not reading correctly.
frodr Posted February 9, 2020
Do I understand it correctly that the procedure for an unmountable cache drive is to back up the data on the SSD, then format it and copy the data back? Why isn't it possible to rebuild the SSD when running RAID? Cheers, Frode
itimpi Posted February 9, 2020
34 minutes ago, frodr said: Do I understand it correctly that the procedure for an unmountable cache drive is to back up the data on the SSD, then format it and copy the data back? Why isn't it possible to rebuild the SSD when running RAID?
When you get an unmountable disk, this is file system corruption, and that corruption is reflected in all RAID copies (if you have that configured). Unfortunately, BTRFS's fsck is not as good as those of many other file systems (probably because the assumption is that BTRFS should protect against such corruption in the first place).
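One common rescue path in that situation is to pull whatever is still readable off the unmountable pool before reformatting, for example with btrfs restore. A sketch, where sdX1 stands in for one member of the pool and /mnt/disk1/rescue for wherever you have enough free space:
mkdir -p /mnt/disk1/rescue
btrfs restore -v /dev/sdX1 /mnt/disk1/rescue   # read-only extraction of whatever files are still intact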
Squid Posted February 18, 2020
@johnnie.black Re: Should we add on to this stating that XMP/AMP is a form of overclocking (defaulted by many motherboards)? Or generalize it for all motherboards?
JorgeB Posted February 18, 2020
1 minute ago, Squid said: Should we add on to this stating that XMP/AMP is a form of overclocking (defaulted by many motherboards)? Or generalize it for all motherboards?
We can, adding that any sort of overclocking on a server is a bad idea. Intel systems are mostly stable with RAM using XMP profiles; Ryzen-based servers, on the other hand, appear to be prone to issues.
Perforator Posted September 1, 2020
1) How do you replace a parity disk?
2) How do you replace a data disk?
3) How do you shrink your array?