FAQ Feedback - for FAQ for unRAID v6


RobJ


  • 6 months later...
On 6/30/2019 at 11:26 AM, Squid said:

Fix Common Problems is telling me that Write Cache is disabled on a drive.  What do I do?

 

This check has nothing to do with any particular unRaid version.  For whatever reason, hard drive manufacturers sometimes disable the write cache on their drives by default (shucked drives in particular).  This is not a problem per se, but you will see better performance by enabling the write cache on the drive in question.

 

To do this, first make a note of the drive's device letter (sdX), which you can get from the Main tab.

 

[Screenshot: Main tab showing the drive's sdX device identifier]

 

Then, from unRaid's terminal, enter the following (changing sdX accordingly):

 


hdparm -W 1 /dev/sdm

You should get a response similar to this:


/dev/sdm:
 setting drive write-caching to 1 (on)
 write-caching =  1 (on)
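
As an aside, the current setting can be queried at any time without changing anything by leaving off the value:

hdparm -W /dev/sdm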

If write caching stays disabled, then either the drive is a SAS drive, in which case you will need to use the sdparm command instead (Google is your friend), or the drive is connected via USB, in which case you may not be able to do anything about it.
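
For a SAS drive, a rough sketch of the equivalent sdparm usage (WCE is the Write Cache Enable bit; --save attempts to make the change persistent, which not every drive supports):

sdparm --get=WCE /dev/sdm
sdparm --set=WCE --save /dev/sdm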

 

99% of the time, this command will permanently turn write caching on.  In some rare circumstances the change does not survive a reboot, and you will need to either add the appropriate command to the "go" file (/config/go on the flash drive) or execute it via the User Scripts plugin (set to run at first array start only).
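
As a minimal sketch of the "go" file approach (the sdX letter can change between boots, so the /dev/disk/by-id path shown here, with a placeholder ID, is the safer target):

# appended to the end of /boot/config/go, which runs once at boot
hdparm -W 1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL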

 

It should be noted that even with write caching disabled, this is not a big deal.  Only performance will suffer; there are no other ill effects.

 

/dev/sde:
 setting drive write-caching to 1 (on)
 write-caching =  0 (off)

 

Above is what I get when I try: hdparm -W 1 /dev/sde

 

Does not work for me.

3 minutes ago, jpowell8672 said:

/dev/sde:
 setting drive write-caching to 1 (on)
 write-caching =  0 (off)

 

Above is what I get when I try: hdparm -W 1 /dev/sde

 

Does not work for me.

Contact the drive manufacturer, or ignore the warning from FCP.

  • 2 weeks later...
On 6/30/2019 at 8:26 AM, Squid said:

Fix Common Problems is telling me that Write Cache is disabled on a drive.  What do I do?

 

Excellent work, this worked perfectly.

  • 4 weeks later...
22 minutes ago, ceyo14 said:

I have to do this more times than I want to, and it's always the same 2 disks. How can I automate this on startup?

Do This (It was right in the FAQ): 

On 6/30/2019 at 11:26 AM, Squid said:

99% of the time, this command will permanently turn write caching on.  In some rare circumstances the change does not survive a reboot, and you will need to either add the appropriate command to the "go" file (/config/go on the flash drive) or execute it via the User Scripts plugin (set to run at first array start only).

 

On 8/23/2019 at 7:59 AM, Frank1940 said:

Do This (It was right in the FAQ): 

 

 

On 8/23/2019 at 8:01 AM, itimpi said:

Add it to the 'go' file in the config folder on the flash drive.

I wasn't sure how to add it, which is part of the reason I dismissed it when I first read it; I thought the appropriate command was something else. But I just read how the go file works and added both lines at the bottom, exactly as I would have written them in the terminal. Just rebooted and all good, thanks!

  • 1 month later...
On 10/2/2019 at 3:16 PM, fluisterben said:

Can I use ddrescue just for cloning (cache) disks as well (so without having to recover anything)?

You can, but you should use regular dd instead. Also make sure you don't try to mount the pool with duplicate devices attached.

 

Edit to add: you'll need to do a new config, or make Unraid forget the current cache config, so that it accepts the new devices without wiping them.
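
For reference, a bare-bones dd clone might look like the line below; the if/of devices are placeholders, so triple-check them (e.g. with lsblk) before running, and keep the pool offline while cloning.

dd if=/dev/sdX of=/dev/sdY bs=1M status=progress conv=fsync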

On 10/5/2019 at 9:51 AM, johnnie.black said:

You can, but you should use regular dd instead. Also make sure you don't try to mount the pool with duplicate devices attached.

 

Edit to add: you'll need to do a new config, or make Unraid forget the current cache config, so that it accepts the new devices without wiping them.

Why is it bad to wipe the new devices? There's nothing on them.

So, here's what I've done so far:

- I've (successfully) changed the 5-SSD btrfs cache to raid6 (coming from raid10).

- Took out 2 of the 5 SSDs and connected the 2 new SSDs.

- Fired up unRAID again.

The array started, but I can't do anything regarding disks, because it says:

"Disabled -- BTRFS operation is running", so I cannot stop the array and/or format the new SSDs.

and under the Cache it says

"Cache not installed", and then shows the Cache 2, Cache 3 and Cache 4 SSDs as normal (because they *are* installed).

 

Is there a way to see the BTRFS operation's status? It shouldn't take too long since they're fast SSDs, so they should be able to rebuild their raid6 with the 2 SSDs missing, no?
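
(As an aside, the progress of a running balance can be checked from the terminal; a minimal sketch, assuming the pool is mounted at /mnt/cache:)

btrfs balance status /mnt/cache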

5 hours ago, fluisterben said:

Why is it bad to wipe the new devices? There's nothing on them.

That was after cloning with dd, so the new devices would have the data, and Unraid would consider them to be new and wipe them, making the pool unmountable.

 

5 hours ago, fluisterben said:

- Took out 2 of the 5 SSDs and connected the 2 new SSDs.

- Fired up unRAID again.

Did you assign the new SSDs? If not, the pool will be re-balanced for the 2 remaining devices.


OK, SSDs added to the cache pool, and I ran

~# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache -v
  Dumping filters: flags 0x7, state 0x0, force is off
  DATA (flags 0x100): converting, target=64, soft is off
  METADATA (flags 0x100): converting, target=64, soft is off
  SYSTEM (flags 0x100): converting, target=64, soft is off

which I'll have to wait and see if it works, but it looks good thus far.

~# btrfs fi show
Label: none  uuid: f18f37c9-5244-4567-b88f-0bdcaa32e693
        Total devices 7 FS bytes used 937.73GiB
        devid    2 size 894.25GiB used 893.54GiB path /dev/nvme0n1p1
        devid    3 size 894.25GiB used 894.25GiB path /dev/sdp1
        devid    4 size 894.25GiB used 894.25GiB path /dev/sdn1
        devid    6 size 953.87GiB used 781.50MiB path /dev/sdj1
        devid    7 size 953.87GiB used 781.50MiB path /dev/sdl1
        *** Some devices missing

Label: none  uuid: dfa50f2a-9787-4d7a-88a5-7760f6b2e8a6
        Total devices 1 FS bytes used 1.62GiB
        devid    1 size 20.00GiB used 5.02GiB path /dev/loop2

Label: none  uuid: df5fea13-a625-4b37-b7c2-7fcc3328bc65
        Total devices 1 FS bytes used 604.00KiB
        devid    1 size 1.00GiB used 398.38MiB path /dev/loop3

I still need to do a new config to move into ghost devices 1 and 5, I guess, but there's no hurry for that, is there?

  • 1 month later...
On 6/30/2019 at 11:26 AM, Squid said:

Fix Common Problems is telling me that Write Cache is disabled on a drive.  What do I do?

 

So I've done this and even added it as a user script to run at the beginning of array start, but it's still showing up in FCP... is that... weird?

  • 1 month later...

I'm currently trying to understand how to remove drives from the cache pool without losing everything...

 

It would be nice if you could provide screenshots of how to do this, so that Windows script kiddies like me have an easier time understanding what needs to be done...

 

I followed the guide in the FAQ, but now I'm lost...

 

1. Stopped the array

2. Unassigned the cache drives (2 in my case) --> my mistake for not reading...

3. Set the drive slots to none

4. Started the array

 

Nothing happened? The guide said there would be a move process after this, but that didn't happen?

The cache pool is now "not mountable" because of the missing SSDs...

 

This, I think, is clearly my mistake, but...

 

Maybe it's possible to implement an easier solution, like an "unassign drive and move" button on the Main page: one that lets the user disable one specific drive at a time while a process in the background takes care of the data being moved.

 

I'm very hesitant to try the other option of "forgetting the config", since that went bad real quick last time (complete data loss - also my mistake).

 

 

4 minutes ago, BR0KK said:

unassigned the cache drives (2 in my case) 

From the FAQ:

Quote

-You can only remove devices from redundant pools (raid1, raid10, etc.), and make sure to remove only one device at a time from a redundant pool, i.e., you can't remove 2 devices at the same time from a 4-disk raid1 pool; you can remove them one at a time, waiting for each balance to finish (as long as there's enough free space on the remaining devices).
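
(As an aside, free space and the current profile can be checked before removing a device; a minimal sketch, assuming the pool is mounted at /mnt/cache:)

btrfs filesystem usage /mnt/cache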

 

 

 

  • 5 weeks later...
34 minutes ago, frodr said:

Do I understand correctly that the procedure for an unmountable cache drive is to back up the data on the SSD, then format it and copy the data back?

 

Why isn't it possible to rebuild the SSD when running RAID?

 

 

Cheers,

 

 

Frode

When you get an unmountable disk, that is file system corruption, and it is reflected in all RAID copies (if you have that configured).  Unfortunately BTRFS's fsck is not as good as that of many other file systems (probably because there is an assumption that BTRFS should protect against such corruption in the first place).
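
(A common first diagnostic step, sketched here on the assumption that the pool device is /dev/sdX1 and the pool is unmounted, is a read-only check, which reports corruption without writing anything to the disk:)

btrfs check --readonly /dev/sdX1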

  • 2 weeks later...
1 minute ago, Squid said:

Should we add to this, stating that XMP/AMP is a form of overclocking (enabled by default by many motherboards)?  Or generalize it for all motherboards?

We can, adding that any sort of overclocking on a server is a bad idea. Intel systems are mostly stable with RAM using XMP profiles; Ryzen-based servers, on the other hand, appear to be prone to issues.
