gyto6

Posts posted by gyto6

  1. 9 hours ago, ich777 said:

    @all had to push another update for ZFS (to plugin version 2.1.4) to force an update form the Plugin Update Helper because I've found a bug.

     

    Sorry for the inconvenience...

    Well, doing an update is not that annoying; it's mostly a habit by now. 😄

     

    Do you mean that we need to reboot our server if we want to apply the update?

  2. 12 minutes ago, ich777 said:

    TBH, even if it is implemented in Unraid directly I'm fine with it, as long as it is working and makes Unraid better for everyone I'm fine with it, I also don't see a reason why I should be mad about such a thing... :)

     

    Heck, even if they implement the Nvidia Drivers, DVB Drivers and so on directly into Unraid, why should I be mad...? If it's working I'm fine with it, even if this means all my plugins are then deprecated (BTW, this means more free time in my free time for me :D ).

     

    Well, that's honest indeed. I don't contribute all the work you put into Unraid plugins and Docker containers, but I share your point of view on it.

     

    We're already benefiting from your free time, and I'm grateful for the ZFS and Nvidia support as it stands. Thanks a lot. 😉

     

    Thanks for the answers!

  3. 24 minutes ago, ich777 said:

    Why should it be deprecated? This plugin is basically only the ZFS module and the tools which are needed to make ZFS run on Unraid.

     

    My English might be a bit off.

     

    I was asking if Limetech has warned you about official ZFS support in Unraid.

     

    I just thought that the Unraid team would warn you if they were planning to support ZFS, so as not to repeat what happened with @CHBMB, who discovered that @limetech had deprecated his/her work (and more, but that's not the subject) without warning him/her.

  4. 1 hour ago, ich777 said:

    As @SimonF pointed out nothing too fancy, a necessary change how the plugin downloads/installs the package & code cleanup.

     

    Actually I pushed the update. 😅

    Well thanks then. 😉

     

    So you haven't been advised by the Limetech team about the ZFS plugin being deprecated due to its integration into Unraid so far?

  5. @steini84 Hi, I saw that you updated the ZFS plugin today.

     

    Quote

    2022.07.17

    Necessary changes for unRAID 6.11+

     

    Can you be more explicit about what the update involved?

     

    Looking forward to seeing whether the 6.11 poll will be granted:

     

    [screenshot of the Unraid 6.11 poll]

  6. 1 hour ago, calvados said:

    Hi everyone,

     

    I'm really hoping someone can help me unmount my pool.  I originally had my pool mounted off of /pool/dataset, and wanted to move it into /mnt.  I unmounted, then made the dumb move of running:

    zfs set mountpoint=/mnt pool/dataset

    I now see that I should have instead run:

    zfs set mountpoint=/mnt/pool/dataset pool/dataset

    When I try to unmount the pool I consistently get "cannot unmount '/mnt': pool or dataset is busy".  The array is stopped, no dockers, or VMs, are running.

     

    I've rebooted the server multiple times, and no matter what I try it is always busy.

    root@ur:~# zfs unmount cxUrPool
    cannot unmount '/mnt': pool or dataset is busy
    root@ur:~# zfs unmount -f cxUrPool/liveFiles
    cannot unmount '/mnt': pool or dataset is busy
    root@ur:~# zfs unmount -f cxUrPool
    cannot unmount '/mnt': pool or dataset is busy
    root@ur:~# zfs set mountpoint=/mnt/cxUrPool/liveFiles cxUrPool/liveFiles
    cannot unmount '/mnt': pool or dataset is busy

    Further complicating matters I have the pool mounted in one spot and the pool/dataset mounted in another:

    root@ur:~# zfs get mountpoint
    NAME                PROPERTY    VALUE       SOURCE
    cxUrPool            mountpoint  /cxUrPool   local
    cxUrPool/liveFiles  mountpoint  /mnt        local
    
            NAME                                      STATE     READ WRITE CKSUM
            cxUrPool                                  ONLINE       0     0     0

    I would like to remove both mountpoints and remount into /mnt/cxUrPool/liveFiles.

     

    How can I determine what is causing the pool or dataset to be busy, and kill it so I can make this change?

     

    Thanks everyone,

    Cal.

    ur-diagnostics-20220715-0000.zip (77.57 kB)

    In your case, I'd uninstall ZFS, reboot the machine, and stop the array, Docker and the VMs in order to unmount.

     

    PS: I switched back to my XFS ZVOL, as I couldn't access the Docker tab this morning with the .img running directly from a ZFS dataset.
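
    For reference, a quick way to find what is keeping a mountpoint busy before resorting to a reboot (a minimal sketch; /mnt matches the situation quoted above):

    fuser -vm /mnt                       # list the processes using anything below the mountpoint
    lsof /mnt                            # alternative view showing the open files
    zfs unmount cxUrPool/liveFiles       # retry once the offending processes are stopped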

  7. 44 minutes ago, BVD said:

    Also, why would we want btrfs on top of zfs?

    At the time, my docker.img wouldn't start if the file was located on a ZFS dataset.

     

    I've since switched to an XFS-formatted ZVOL. I'll try running it from a ZFS dataset again.

  8. 8 hours ago, asopala said:

    Hey all,

     

    I tried @gyto6's scripts for mounting the docker containers into a zvol, this one.

     

     

    The problem I ran into was after creating the partition, I got this issue with the next line of the script:

    mkfs.btrfs -q /dev/tank/docker-part1
    probe of /dev/tank/docker-part1 failed, cannot detect existing filesystem.
    WARNING: cannot read superblock on /dev/tank/docker-part1, please check manually
    
    ERROR: use the -f option to force overwrite of /dev/tank/docker-part1

     

    I'm not sure what to do at this point.  Anybody able to help?

    Are you sure you "wrote" the changes when you created the partition?

    It's the final step when creating the partition in cfdisk.
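
    If the write step keeps getting missed, a scriptable alternative is possible (a sketch only, assuming the zvol is exposed at /dev/tank/docker as in the original script):

    parted -s /dev/tank/docker mklabel gpt mkpart primary btrfs 1MiB 100%   # partition non-interactively
    partprobe /dev/tank/docker                                              # re-read the partition table
    mkfs.btrfs -q /dev/tank/docker-part1                                    # then format the new partition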

  9. @NytoxRex

     

    I did a test, and "Previous Versions" is smarter than it looks: it only displays a result if a modification has been detected; otherwise, there's simply nothing to show.

     

    Please share your actual config so that others can help you diagnose the issue.

     

    My own config is quite simple, but it might help:

    [Documents]
        path = /mnt/fastraid/Documents/
        browseable = yes
        guest ok = no
        writeable = yes
        read only = no
        create mask = 0775
        directory mask = 0775
        write list = margaux
        valid users = margaux
        strict sync = yes
        vfs objects = shadow_copy2
        shadow: basedir = /mnt/fastraid/Documents/
        shadow: snapdir = .zfs/snapshot
        shadow: sort = desc
        shadow: format = %Y-%m-%d-%H%M%S
        shadow: localtime = yes
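
    For "Previous Versions" to show anything, snapshots whose names match the shadow: format above must exist. A minimal sketch of a matching snapshot command (the dataset name mirrors my share path; adjust it to yours):

    zfs snapshot fastraid/Documents@$(date +%Y-%m-%d-%H%M%S)   # name matches shadow: format = %Y-%m-%d-%H%M%S
    zfs list -t snapshot -r fastraid/Documents                 # the snapshots then appear under .zfs/snapshot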

     

     

  10. Small warning,

     

    I lost access to my server. I noticed it when DNS queries were no longer being resolved by my DNS server container. I tried connecting to the GUI, but it froze and then became unreachable (it still answered pings, however).

     

    I initiated a reboot through IPMI, but after 3 hours the diagnostics file still hadn't been generated because the graceful stop was unsuccessful, so I proceeded with a forced shutdown.

     

    This happened on my Supermicro server, with the host's two Ethernet ports bonded via 802.3ad. The link showed no issues when I checked it on the Mikrotik router...

     

    Next time it happens, I'll try to wait long enough for the server to generate the diagnostics files so they can be posted in the General Support channel, but 3 hours is not normal and already long enough...

  11. What can be done is to use a special device to boost your system. The special_small_blocks property allows you to store not only metadata on it, but data as well.

    Relative to your dataset's recordsize property, the special_small_blocks value is the threshold block size for including small file blocks in the special allocation class: blocks smaller than or equal to this value are assigned to the special allocation class, while larger blocks go to the regular class.

     

    What I do for my application datasets is set recordsize and special_small_blocks to the same value, so my applications run entirely from my NVMe drives.
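
    A minimal sketch of that setup (the pool name "tank", the dataset "tank/apps" and the NVMe device names are assumptions):

    zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1   # mirrored special vdev (never a single device!)
    zfs set recordsize=128K tank/apps
    zfs set special_small_blocks=128K tank/apps               # same value as recordsize => every block of this dataset lands on the special vdev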

     

    As a result, the metadata of my SATA drives is stored on the NVMe drives for faster indexing, and the SATA drives aren't disturbed by the applications, which do most of the R/W operations. I use two mirrored special devices, because if the special vdev fails, the whole pool is lost.

     

    So my personal files, which must not be altered, get parity and checksumming in their pool, and that is most of the pool's activity since those files are rarely accessed. My always-running applications don't interfere with that activity, as they run on the more efficient NVMe special drives.

    An app can be reinstalled; a corrupted picture cannot be recovered.

     

    By the way, I don't risk any data loss, as all my devices have Power Loss Protection (PLP) and run behind a UPS set to shut the server down gracefully after 5 minutes without power. Finally, my data is backed up every hour to a NAS elsewhere in my house, and to my SharePoint off-site.

     

    Backups are a must to keep your data safe. Don't expect your system to take care of it without ever running into trouble.

     

    What I meant is that you won't optimize your system without caveats or risks. You must ALWAYS imagine the worst case, think through everything it implies, then find a solution. You'd be better off leaving your drives in raidz for now and considering NVMe drives for the applications that need a lot of I/O.

  12. 40 minutes ago, Andrea3000 said:

    Before reading your reply I thought that the reason for the lower performance were the reads that are interleaved in between the writes.

    I know very little about zfs but I wasn't expecting a write operation to require reads. Does this have something to do with the ZIL?

    No. That's how a system works.

     

    You use raidz for data that must absolutely not be lost or altered. If you want to serve your files with the best performance, you use RAID-0 / RAID-10 (the latter for redundancy), i.e. two striped vdevs or two striped mirrors.

    RAID-0 is faster than RAID-1, which is faster than RAIDZ-1, which is faster than RAIDZ-2, which is faster than RAIDZ-3.
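
    As a rough illustration only (pool and device names are placeholders, not a recommendation for your hardware), the two layouts are created like this:

    # Striped mirrors ("RAID-10"): best random I/O while keeping redundancy
    zpool create fastpool mirror sdb sdc mirror sdd sde
    # raidz2: better capacity efficiency, slower for small random writes
    zpool create safepool raidz2 sdb sdc sdd sde sdf sdg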

  13. 20 minutes ago, Andrea3000 said:

    I’m going for a 10GB SLOG based on the fact that I will connect to the server via 10gbe, which means that in 5 seconds I can send up to 5GB. I’m doubling that to have some additional margin.

    The P1600X has power loss protection, therefore I should be able to recover everything that wasn’t yet committed to disk in case of power failure.

     

    Sadly, in reality, I wouldn't expect such a result with many small files the way you would with a single huge one.

     

    In any case, test your config to determine where you actually hit a bottleneck and what you can do without. 🙂

  14. 52 minutes ago, jortan said:

     

    To clarify, it's relevant for any application doing synchronous writes to your ZFS pool. 

     

    Indeed. 😉

     

    52 minutes ago, jortan said:

     

    SLOG is not a read cache

     

    L2ARC is a read cache, for those wondering. The Most Recently Used (MRU) and Most Frequently Used (MFU) lists that make up the ARC hold data in RAM. Data evicted from these lists due to the RAM size limit, usage patterns and ZFS parameters is then stored in the L2ARC, which should be placed on an SSD (SATA works, but NVMe is preferable).

    Note that the L2ARC index is itself stored in RAM, reducing your potential MRU/MFU size!
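
    For those who want to try it, a minimal sketch of attaching an L2ARC device (the pool name "tank" and the partition path are assumptions):

    zpool add tank cache /dev/nvme0n1p2   # attach an L2ARC (read cache) device
    zpool iostat -v tank                  # watch how much of the cache device is actually used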

     

    52 minutes ago, jortan said:

     

    (it then flushes those writes from ARC to your pool - by default every 5 seconds I think?)

     

    Correct about the 5 seconds

    52 minutes ago, jortan said:

     

    SMB shares will default to asynchronous?

     

    Indeed, you need to set this parameter in your share config to enable synchronous writes. Asynchronous writes are used for "efficiency": they go to a memory buffer, letting the disks handle their other activity. (Thanks for the correction @jortan)

    “strict sync = yes”
    52 minutes ago, jortan said:

    That said if you have/are getting this anyway, you may as well slice off 10GB for a SLOG.

    The OpenZFS recommendation is even only 4 GB: https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html?highlight=slog#optane-3d-xpoint-ssds
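
    If you do add one, a minimal sketch of attaching a SLOG and forcing synchronous writes could look like this (the pool name "tank", the dataset "tank/shares" and the partition path are assumptions):

    zpool add tank log /dev/nvme0n1p3    # dedicated SLOG device or partition
    zfs set sync=always tank/shares      # force every write on this dataset through the ZIL/SLOG
    zpool iostat -v tank                 # check that the log device appears and is being used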

  15. 1 hour ago, Marshalleq said:

    I don’t use slogs I thought they were really only beneficial in rare cases and would need more redundancy because it’s writes?  I don’t know much about them sorry.   Very nice drive though!

    A SLOG is relevant for databases and network shares. It does not need to be mirrored. For a degraded SLOG to cause data loss, you'd need an unexpected power cut within the roughly 5-second flush window while it's in the degraded state. When a SLOG is degraded, sync writes fall back to the normal in-pool transaction path and the user/database waits for a real acknowledgement of each sync request.

     

    But even without a power outage, depending on SLOG usage (if database queries or network shares are heavily used by more than 10 people, for example), the data recorded in the SLOG but not yet delivered to the drives holding the files before the failure would be lost. If it's a modification to an Excel file, the modifications are lost, not the file. It must be understood that only an interval of transactions is lost.

     

    A special vdev, on the other hand, must be mirrored. Since metadata is stored on it, you lose all data written since the special vdev was added if that drive fails.

  16. 16 hours ago, Marshalleq said:

    Hi thanks for posting, I'm interested in what advantage this is to you.  I have run the img file on a zfs dataset and also run it as a folder which creates a host of annoying datasets within it.  Does this somehow get the best of both worlds?  Thanks.

    You're welcome!

     

    As for me, my config doesn't bother me, probably because I'm running entirely on SSDs.

     

    Some ZFS tuning is necessary, such as volblocksize, to avoid running into trouble.
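
    As an illustration, creating the zvol with an explicit volblocksize could look like this (a sketch only; the pool name and the 16K value are assumptions, not a general recommendation):

    zfs create -V 20G -o volblocksize=16K pool/docker   # volblocksize must be chosen at creation time
    zfs get volblocksize pool/docker                    # it cannot be changed afterwards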

     

    In any case, I'm not going back now: I don't have any performance issues, and furthermore I no longer get any "snapshots/legacy snapshots" flooding from the docker dataset.

     

    I'd recommend this setup instead of using the Docker folder, for the simple reason that you're never really expected to edit the Docker folder, and it's easier to copy a single .img file for backups.

  17. @Arragon

     

    On 3/24/2022 at 11:39 PM, gyto6 said:

    Thanks again @BVD ! I'm finally able to run the docker.img inside of a ZVOL.

     

    For those interested:

     

    zfs create -V 20G pool/docker # -V creates a ZVOL (a block device) of the given size
    cfdisk /dev/pool/docker # Interactively create a partition (don't forget the "Write" step)
    mkfs.btrfs -q /dev/pool/docker-part1 # Format the new partition with the desired filesystem
    mount /dev/pool/docker-part1 /mnt/pool/docker # Mount it at the expected mount point

     

     

     

  18. 3 hours ago, itimpi said:

    A point to remember is that you still need to have backups of anything important or irreplaceable.  Having a redundant pool is not enough to guarantee that nothing will go wrong that could cause data loss as there are lots of ways to lose data other than simple drive failure.   Your approach needs to take that into account.

     

    I confirm. A copy of your data somewhere else, on another system in your house or online, is a backup. You can't count on any single system to take care of your data without ever failing. NEVER.

  19. Hi @Sally-san,

     

    Quote

    Issue 1:
    The unraid default shares weren't automatically created and from the "Spaceinvader One" videos i watched it should.

     

    I didn't watch the Spaceinvader One video, but in the meantime what I can recommend is going to Settings -> SMB Settings and editing the SMB configuration with something like this:

     

    [*Share1*]
        path = /mnt/*YourPool*/*Share1*/
        browseable = yes
        guest ok = no
        writeable = yes
        read only = no
        create mask = 0775
        directory mask = 0775
        write list = *YourUser*
        valid users = *YourUser*

     

    For each share, copy-paste the template on a new line and edit *YourPool*, *Share1* and *YourUser*. Remove the * symbols, which are only used here for visibility.

     

    Quote

    Issue 2:
    Minor issue, when I'm install the Plex plugin where do I point "Host path 2" and "Host path 3". I watched a tutorial on it. But where would i point them with the context me trying to use ZFS.

     

    It depends on your config; I'll take mine as an example. *YourPool* has a *Plex* dataset that should be created with "-o recordsize=1M", "-o compression=none" and "-o primarycache=metadata", with several subfolders (as Plex recommends): TV Shows, Movies and Transcode.

     

    Transcode should also be a dataset, but created with "-o recordsize=128k". I don't know the ideal value for the Transcode folder, but I can't recommend the 1M recordsize for every use case. It's better for it to live on an SSD, so you should create that dataset on your SSD and create a symlink in the Plex folder (see the sketch after the tree below).

    So: 

    *YourPool* -> Plex -> Movies
                       -> TV Shows
                       -> Transcode
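
    A sketch of the corresponding commands (the pool names "YourPool" and "fastpool" for the SSD are placeholders):

    zfs create -o recordsize=1M -o compression=none -o primarycache=metadata YourPool/Plex
    zfs create -o recordsize=128k fastpool/Transcode            # Transcode dataset on the SSD pool
    ln -s /mnt/fastpool/Transcode /mnt/YourPool/Plex/Transcode  # symlink it back into the Plex folder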

     

    Host path 2 corresponds to your Transcode folder, so you must set the path to your Transcode folder. In my case:

    Quote

    /mnt/*YourPool*/Plex/Transcode

    Host path 3 corresponds to your library folder, so you must set the path to your library folder. In my case:

    Quote

    /mnt/*YourPool*/Plex

  20. You can indeed use SSDs with or without ZFS on Unraid.

    When creating the ZFS pool, add the property

    "-o autotrim=on"

    to make sure ZFS runs a "soft" TRIM as the disks are used. Once per month, you can schedule the following command with the "User Scripts" plugin to run an active, full-disk TRIM:

    "zpool trim yourpoolname"

     

    I don't know the equivalent commands for XFS or BTRFS off-hand, but you can run a full TRIM from a script with

    "fstrim -v /mnt/yourmountpoint"

    (fstrim operates on a mounted filesystem path, not on the raw device).