unRAID Server Release 6.0-beta12-x86_64 Available


limetech

Recommended Posts

Adding things to the go file is not recommended, but at times there is no other way.

 

Fromdos strips the carriage return characters from a file and leaves the line feed characters, so the file ends up in Linux format.  It's the same as a cp, but it ensures that the file uses Linux line endings.  A lot of the problems people run into are caused by improper line endings, because they edited the file with a non-Linux-compatible editor.
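
For example, a minimal sketch (assuming the stock fromdos utility; the file names here are illustrative):

# Convert a file to Linux line endings in place:
fromdos /boot/custom/myscript.sh
# Or use it as a filter while copying, like a cp that fixes line endings:
fromdos < /boot/custom/myscript.sh > /etc/cron.weekly/myscript.sh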

 

None of the file systems used in unRAID is mounted with the 'discard' option.  There is an open feature request to add TRIM support for SSD cache drives.
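
For context, 'discard' is just a mount option passed at mount time; a generic Linux example of what it would look like (illustrative only, and not something unRAID does, as noted above; the device and mount point are placeholders):

mount -o discard /dev/sdb1 /mnt/cache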

 

If you are looking for more information on how TRIM works and its impact on Linux, do a Google search.  It is off topic for this thread.

Link to comment

Hi Unraid,

Can you please confirm whether TRIM support is enabled by default on XFS-formatted disks in v6?

Also, are there any other special considerations regarding TRIM (e.g. TRIM not running if snapshots are present)?

I post this here since I could not find a specific XFS thread like the one for BTRFS.

 

Link to comment

Here is an option:

Create a cache_backup.sh file in the /boot/custom folder and make it executable.

cache_backup.sh contents:

#!/bin/bash
# Back up the appdata folder from the cache drive to a share on the array.
rsync -av /mnt/docker/appdata/xxxxxx /mnt/user/docker_backup/

In the go file add:

cp /boot/custom/cache_backup.sh /etc/cron.weekly

Replace "cron.weekly" with "cron.hourly", "cron.daily" or "cron.monthly" as needed.

 

This is really cool; I might have to give it a go later when I have time to mess around. Thanks! I still stand in the camp that this kind of forum-based knowledge and script answer isn't very user friendly. Frankly, I never even thought about using rsync to back up my dockers and appdata until now. If this were a built-in feature and part of the webGui, I would have been all over it on day one...

 

I fully agree. I tried the setup above and it seems to be working, but I have no indication of whether there are errors, or even that it is running as scheduled. If this could be built into the webGui with a summary on the dashboard, that would be the ideal Docker backup solution in my opinion.

Link to comment

 

Can XFS be used on the SSD for a cache with TRIM support?

 

Yes.  There is no longer a requirement for a BTRFS drive for Docker since the Docker "image" is BTRFS.

 

I have been running an XFS cache for a while.

 

Very useful conversation guys. Thank you.

 

So it seems to me that the key difference between running XFS vs. BTRFS for a cache drive is that the only way to pool cache drives is to use BTRFS.

 

Given that I was hoping to use two 250 GB SSDs in a cache pool, it seems like BTRFS is the way to go for me.

 

Link to comment


Doing a little reading on rsync, you could pass it the log-file parameter so that rsync creates a log for you to review.

 

All you would have to do is add --log-file=/xxxx to your call of rsync. It should look like the following, I believe.

rsync -av --log-file=/xxxx /source /destination

Here xxxx is the location where you want the log created (rsync's example is /tmp/rlog), /source is the source you want rsync to sync, and /destination is where you want the sync to go. In the earlier example the source was "/mnt/docker/appdata/xxxxxx" and the destination was "/mnt/user/docker_backup/".
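
So, putting it together, the weekly backup script from above with logging added would look something like this (using rsync's documented /tmp/rlog example as the log location):

#!/bin/bash
# Same backup as before, but rsync now writes a reviewable log to /tmp/rlog.
rsync -av --log-file=/tmp/rlog /mnt/docker/appdata/xxxxxx /mnt/user/docker_backup/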

 

 

Link to comment

 


While cache pooling is implemented in unRAID, I think it is not advisable to pool cache drives at this time.  BTRFS is still in its infancy; it seems stable for single-drive implementations, but it may not be reliable for cache pooling.

Link to comment

I use an SSD formatted with XFS, and I set up a daily cron job to run TRIM.

 

Hi again,

 

In a setup where my SSD is mounted outside of the array, only to store the VMs' images (on an SSD formatted as XFS), would you advise running a daily cron job to perform TRIM? I am thinking of my VMs running at the same time as the cron job.

Link to comment


I appreciate the help, but I gave up on the rsync solution. In my opinion, it shouldn't be more complicated to set up and monitor the backup of the appdata folder than it is to set up Docker or any of the apps I have installed with it. My time is valuable to me, and while rsync may work perfectly fine for others, I just don't have any confidence in this solution. I would really love to see this incorporated into the gui and supported by LT. Until then, or in case that never happens, I am planning on adding a second cache drive; that's not ideal, but it's better than nothing for now.

Link to comment


The trim cron job would have no impact on the VMs.  I run a daily trim on my cache drive with two Dockers and four (4) VMs.  No impact at all.
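
For reference, a minimal sketch of such a daily TRIM script, using the same /etc/cron.daily copy approach as the backup script above (fstrim is the standard util-linux tool):

#!/bin/bash
# Discard unused blocks on the cache mount; -v reports how much was trimmed.
fstrim -v /mnt/cache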

Link to comment


I implemented this last night, though I'm not sure it is doing anything when run on BTRFS-formatted SSDs. I was testing my implementation, so I ran the command a few times. The first time, it said it trimmed a large number of bytes. The second time, it said the same thing... so it seemed like fstrim might not have done anything? I did a bit of research and found that this was the expected result, but I couldn't decipher whether it means fstrim is working or not.

 

In comparison, if you fstrim an XFS file system twice, the first run should report a number of bytes trimmed and the second should say 0.

 

Is fstrim on BTRFS just not a thing?

Link to comment

Please, how do you check (with what command) the results after running TRIM?

 

This is a good question. I used the syntax that dlandon wrote to create my script.

 

fstrim -v /mnt/cache | logger

 

The -v option makes fstrim verbose, and | logger, I assume, adds the output of fstrim to the system log. If you check the system log, the verbose output from fstrim should show up.
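
For what it's worth, logger simply reads standard input and writes each line to the system log, so the same pipe works with any command that prints to stdout. A sketch (the -t tag name here is just an illustration):

# 2>&1 folds stderr into stdout so error messages get logged as well;
# -t tags each syslog line to make it easy to find.
rsync -av /source /destination 2>&1 | logger -t rsync_backup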

 

Now for my question:

I'm not a Linux expert, so "| logger" was not something I had used before. Does adding "| logger" to the end of a command work the same for other commands? How do you know which commands "| logger" will work with?

 

I.e., if I ran an rsync -av (archive and verbose) command with | logger at the end, would all the verbose output from rsync show up in the system log?

Link to comment

Unfortunately, I can't get fstrim to run.

 

root@nas:~# fstrim -v /mnt/cache
fstrim: /mnt/cache: FITRIM ioctl failed: Operation not supported

 

It appears that the drive supports TRIM:

 


root@nas:~# hdparm  -I  /dev/sdb

/dev/sdb:

ATA device, with non-removable media
        Model Number:       Crucial_CT512M550SSD1
        Serial Number:      14230C370A19
        Firmware Revision:  MU01
        Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
Standards:
        Used: unknown (minor revision code 0x0028)
        Supported: 9 8 7 6 5
        Likely used: 9
Configuration:
        Logical         max     current
        cylinders       16383   16383
        heads           16      16
        sectors/track   63      63
        --
        CHS current addressable sectors:   16514064
        LBA    user addressable sectors:  268435455
        LBA48  user addressable sectors: 1000215216
        Logical  Sector size:                   512 bytes
        Physical Sector size:                  4096 bytes
        Logical Sector-0 offset:                  0 bytes
        device size with M = 1024*1024:      488386 MBytes
        device size with M = 1000*1000:      512110 MBytes (512 GB)
        cache/buffer size  = unknown
        Form Factor: 2.5 inch
        Nominal Media Rotation Rate: Solid State Device
Capabilities:
        LBA, IORDY(can be disabled)
        Queue depth: 32
        Standby timer values: spec'd by Standard, with device specific minimum
        R/W multiple sector transfer: Max = 16  Current = 16
        Advanced power management level: 254
        DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
             Cycle time: min=120ns recommended=120ns
        PIO: pio0 pio1 pio2 pio3 pio4
             Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
        Enabled Supported:
           *    SMART feature set
                Security Mode feature set
           *    Power Management feature set
           *    Write cache
           *    Look-ahead
           *    Host Protected Area feature set
           *    WRITE_BUFFER command
           *    READ_BUFFER command
           *    NOP cmd
           *    DOWNLOAD_MICROCODE
           *    Advanced Power Management feature set
                SET_MAX security extension
           *    48-bit Address feature set
           *    Device Configuration Overlay feature set
           *    Mandatory FLUSH_CACHE
           *    FLUSH_CACHE_EXT
           *    SMART error logging
           *    SMART self-test
           *    General Purpose Logging feature set
           *    WRITE_{DMA|MULTIPLE}_FUA_EXT
           *    64-bit World wide name
           *    IDLE_IMMEDIATE with UNLOAD
                Write-Read-Verify feature set
           *    WRITE_UNCORRECTABLE_EXT command
           *    {READ,WRITE}_DMA_EXT_GPL commands
           *    Segmented DOWNLOAD_MICROCODE
           *    Gen1 signaling speed (1.5Gb/s)
           *    Gen2 signaling speed (3.0Gb/s)
           *    Gen3 signaling speed (6.0Gb/s)
           *    Native Command Queueing (NCQ)
           *    Phy event counters
           *    NCQ priority information
           *    unknown 76[15]
           *    DMA Setup Auto-Activate optimization
                Device-initiated interface power management
                Asynchronous notification (eg. media change)
           *    Software settings preservation
                unknown 78[8]
           *    SMART Command Transport (SCT) feature set
           *    SCT Write Same (AC2)
           *    SCT Features Control (AC4)
           *    SCT Data Tables (AC5)
           *    reserved 69[4]
           *    reserved 69[7]
           *    Data Set Management TRIM supported (limit 8 blocks)
           *    Deterministic read ZEROs after TRIM
Security:
        Master password revision code = 65534
                supported
        not     enabled
        not     locked
        not     frozen
        not     expired: security count
                supported: enhanced erase
        2min for SECURITY ERASE UNIT. 2min for ENHANCED SECURITY ERASE UNIT.
Logical Unit WWN Device Identifier: 500a07510c370a19
        NAA             : 5
        IEEE OUI        : 00a075
        Unique ID       : 10c370a19
Checksum: correct
root@nas:~#
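
As an aside, rather than scanning the whole listing, the TRIM lines can be filtered out (a minimal sketch; the two matching lines are the ones from the output above):

root@nas:~# hdparm -I /dev/sdb | grep -i trim
           *    Data Set Management TRIM supported (limit 8 blocks)
           *    Deterministic read ZEROs after TRIM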

Link to comment
  • 2 weeks later...

Hi,

 

Any sense of when the next beta build will be released?  Given that it's been roughly 7 weeks since beta 12, wouldn't it be good to release a new build to:

1) Correct the issues that have been fixed [from those reported] for those using Beta 12

2) Level-set again to know what's still really an issue vs. not?

 

 

(I'm new to unRAID, but I ask this question knowing, from having read about them, that the unRAID beta cycles last a long time and go a while between releases)....

 

Thanks,

Link to comment

Yep, well, maybe it would not be a bad idea to have something like a beta12b version, one which just focuses on fixing the most commonly encountered bugs in beta12 instead of introducing new features which may or may not work flawlessly. Don't get me wrong, I am pretty happy with b12! The only real issue I have is drives not spinning down when they should (I guess that's better than not spinning up, but still annoying). But there seem to be more problems for other users. So why not take a breath, stop introducing all kinds of new and improved features, get a grip on the existing problems and try to solve them... then go with new steam on b13 (or rc1).

I am easy going, using only one plugin and one Docker app. I would think for others the issues are a lot worse.

 

cheers, L

Link to comment
