
FAQ Feedback - for FAQ for unRAID v6


1 hour ago, johnnie.black said:

Finally, run a scrub and make sure there are no uncorrectable errors, then keep working normally; if there are any more issues you'll get a new notification.

What scrub arguments do you use for a UD pool?  Thanks

3 hours ago, DZMM said:

What scrub arguments do you use for a UD pool?  Thanks

btrfs scrub start /mnt/disks/UD_pool_name

To get scrub status, during the scrub or after it's done:

btrfs scrub status /mnt/disks/UD_pool_name
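
For reference, the scrub can also be run in the foreground so it stays attached and prints its statistics when it finishes; a small sketch (same pool path as above, -B is the "do not background" flag):

btrfs scrub start -B /mnt/disks/UD_pool_name

Either way, what you want to see in the output is that no uncorrectable errors are reported.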

 

On 6/30/2019 at 11:26 AM, Squid said:

Fix Common Problems is telling me that Write Cache is disabled on a drive.  What do I do?

 

This test has nothing to do with any given unRaid version.  For some reason, hard drive manufacturers sometimes disable write cache on their drives by default (in particular on shucked drives).  This is not a problem per se, but you will see better performance by enabling the write cache on the drive in question.

 

To do this, first make a note of the drive letter, which you can get from the Main tab.

 

[Screenshot: Main tab, showing the device identifier (sdX) for each drive]

 

Then, from unRaid's terminal, enter the following (changing the sdX accordingly):

 


hdparm -W 1 /dev/sdm

You should get a response similar to this:


/dev/sdm:
 setting drive write-caching to 1 (on)
 write-caching =  1 (on)

If write caching stays disabled, then either the drive is a SAS drive, in which case you will need to use the sdparm command instead (Google is your friend), or the drive is connected via USB, in which case you may not be able to do anything about it.
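
For a SAS drive, the equivalent sdparm invocation looks roughly like this (a sketch only; check the sdparm man page for your version, and /dev/sdX is a placeholder for your device):

# check the current Write Cache Enable (WCE) bit
sdparm --get=WCE /dev/sdX

# turn write caching on; --save also writes the saved mode page so the
# setting should survive a power cycle (if the drive supports saved pages)
sdparm --set=WCE --save /dev/sdX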

 

99% of the time, this command will permanently set write caching to on.  In some rare circumstances the change is not permanent, and you will need to either add the appropriate command to the "go" file (/config/go on the flash drive) or execute it via the User Scripts plugin (set to run at first array start only).
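
As an illustration only, the go file addition might look something like this (a sketch; /dev/sdm is a placeholder, and since device letters can change between boots a /dev/disk/by-id path is the safer choice):

#!/bin/bash
# /boot/config/go  (i.e. /config/go on the flash drive)

# re-enable write caching on this drive before the array services start
hdparm -W 1 /dev/sdm

# stock line that starts the Management Utility
/usr/local/sbin/emhttp &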

 

It should be noted that even with write caching disabled, this is not a big deal.  Only performance will suffer; there are no other ill effects.

 

/dev/sde:
 setting drive write-caching to 1 (on)
 write-caching =  0 (off)

 

Above is what I get when I try: hdparm -W 1 /dev/sde

 

Does not work for me.

Posted (edited)
3 minutes ago, jpowell8672 said:

/dev/sde:
 setting drive write-caching to 1 (on)
 write-caching =  0 (off)

 

Above is what I get when I try: hdparm -W 1 /dev/sde

 

Does not work for me.

Contact the drive manufacturer, or ignore the warning from FCP.

Edited by Squid


Thanks for the help, guys. Not sure why my cache got corrupted while trying to create VMs, but it did and I needed to recover data. Glad I didn't have to start from scratch.

On 6/30/2019 at 8:26 AM, Squid said:

Fix Common Problems is telling me that Write Cache is disabled on a drive.  What do I do?

 

NOTE:  If this does not work for you, you will either need to contact the drive manufacturer as to why, or simply ignore the warning from Fix Common Problems.

 

Excellent work, this worked perfectly.

On 6/30/2019 at 11:26 AM, Squid said:

Then, from unRaid's terminal, enter the following (changing the sdX accordingly):

 


hdparm -W 1 /dev/sdm

 

I have to do this more often than I want to, and it's always the same two disks. How can I automate this on startup?

22 minutes ago, ceyo14 said:

I have to do this more often than I want to, and it's always the same two disks. How can I automate this on startup?

Do This (It was right in the FAQ): 

On 6/30/2019 at 11:26 AM, Squid said:

99% of the time, this command will permanently set write caching to on.  In some rare circumstances the change is not permanent, and you will need to either add the appropriate command to the "go" file (/config/go on the flash drive) or execute it via the User Scripts plugin (set to run at first array start only).

 

23 minutes ago, ceyo14 said:

I have to do this more often than I want to, and it's always the same two disks. How can I automate this on startup?

Add it to the 'go' file in the config folder on the flash drive.

On 8/23/2019 at 7:59 AM, Frank1940 said:

Do This (It was right in the FAQ): 

 

 

On 8/23/2019 at 8:01 AM, itimpi said:

Add it to the 'go' file in the config folder on the flash drive.

I wasn't sure how to add it, which is part of the reason I dismissed it when I first read it; I also thought the appropriate command was something else. But I just read up on how the go file works and added both lines to the bottom, exactly as I would have typed them in the terminal. Just rebooted and all is good, thanks!


Can I use ddrescue just for cloning (cache) disks as well (so without having to recover anything)?

I need to move data from 2 old SSDs to 2 new SSDs (where the 2 old SSDs are part of a 5-disk SSD RAID10 array).

Posted (edited)
On 10/2/2019 at 3:16 PM, fluisterben said:

Can I use ddrescue just for cloning (cache) disks as well (so without having to recover anything)?

You can, but you should use regular dd instead. Also make sure you don't try to mount the pool with duplicate devices attached.
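
For what it's worth, a minimal dd sketch for cloning one unmounted SSD onto another of at least the same size (device names are placeholders; double-check them, since dd will happily overwrite the wrong disk):

# clone the old SSD (if=) onto the new one (of=); status=progress needs GNU dd
dd if=/dev/sdX of=/dev/sdY bs=1M status=progress conv=fsync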

 

Edit to add: You'll need to do a new config or make Unraid forget the current cache config so it accepts the new devices without wiping them.

Edited by johnnie.black

On 10/5/2019 at 9:51 AM, johnnie.black said:

You can, but you should use regular dd instead. Also make sure you don't try to mount the pool with duplicate devices attached.

 

Edit to add: You'll need to do a new config or make Unraid forget the current cache config so it accepts the new devices without wiping them.

Why is it bad to wipe the new devices? There's nothing on them.

So, here's what I've done so far:

- I've (successfully) converted the 5-SSD btrfs cache to raid6 (coming from raid10).

- Took out 2 of the 5 SSDs and connected the 2 new SSDs.

- Fired up unRAID again.

The array started, but I can't do anything regarding disks, because it says:

"Disabled -- BTRFS operation is running" so I cannot stop the Array and/or format the new SSDs.

and under the Cache it says

"Cache not installed", and then shows the Cache 2, Cache 3, and Cache 4 SSDs as normal (because they *are* installed).

 

Is there a way to see the BTRFS operation's status? It shouldn't take too long since they're fast SSDs, so they should be able to rebuild the raid6 with the 2 SSDs missing, no?

5 hours ago, fluisterben said:

Why is it bad to wipe the new devices? There's nothing on them.

That was after cloning with dd, so the new devices would have the data, and Unraid would consider them to be new and wipe them, making the pool unmountable.

 

5 hours ago, fluisterben said:

- Took out 2 of the 5 SSDs and connected the 2 new SSDs.

- Fired up unRAID again.

Did you assign the new SSDs? If not, the pool will be re-balanced for the 2 remaining devices.


OK, SSDs added to the cache pool, and I ran:

~# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache -v
  Dumping filters: flags 0x7, state 0x0, force is off
  DATA (flags 0x100): converting, target=64, soft is off
  METADATA (flags 0x100): converting, target=64, soft is off
  SYSTEM (flags 0x100): converting, target=64, soft is off

I'll have to wait and see if it works, but it looks good thus far.
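
Progress of a running balance/conversion can be checked with something like this (a sketch):

btrfs balance status /mnt/cache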

~# btrfs fi show
Label: none  uuid: f18f37c9-5244-4567-b88f-0bdcaa32e693
        Total devices 7 FS bytes used 937.73GiB
        devid    2 size 894.25GiB used 893.54GiB path /dev/nvme0n1p1
        devid    3 size 894.25GiB used 894.25GiB path /dev/sdp1
        devid    4 size 894.25GiB used 894.25GiB path /dev/sdn1
        devid    6 size 953.87GiB used 781.50MiB path /dev/sdj1
        devid    7 size 953.87GiB used 781.50MiB path /dev/sdl1
        *** Some devices missing

Label: none  uuid: dfa50f2a-9787-4d7a-88a5-7760f6b2e8a6
        Total devices 1 FS bytes used 1.62GiB
        devid    1 size 20.00GiB used 5.02GiB path /dev/loop2

Label: none  uuid: df5fea13-a625-4b37-b7c2-7fcc3328bc65
        Total devices 1 FS bytes used 604.00KiB
        devid    1 size 1.00GiB used 398.38MiB path /dev/loop3

I still need to do a new config to get rid of ghost devices 1 and 5, I guess, but there's no hurry for that, is there?

Edited by fluisterben
