Arragon
Posts posted by Arragon
-
Maybe this can help: https://wintelguy.com/zfs-calc.pl
-
44 minutes ago, trurl said:
But won't perform as well due to slower parity, and will keep array disks spinning since that file is always open.
Since the user asked for a server with SSDs, I assumed he wouldn't add spinning rust just for the obligatory array drive. And with a single drive there is no parity anyway.
-
32 minutes ago, ich777 said:
Please keep in mind that this documentation is for stock unRAID systems
[...]
As you can see it is using the ZFS driver automagically.
Of course Limetech would not account for ZFS, since it is not officially supported. But because I was not sure Docker would pick the right storage driver, and my previous attempt with the unstable builds (2.0.3, iirc) was unsuccessful, I recommended setting the Docker vDisk location to a path on the array, which definitely works.
-
7 minutes ago, ich777 said:
I don't understand, what is not recommended or where does it say it's not recommended?
It says:
Quote: "In a specified directory which is bind-mounted at /var/lib/docker. Further, the file system where this directory is located must either be btrfs or xfs."
ZFS would use the zfs storage driver instead of the btrfs or overlay2 one.
-
2 hours ago, ich777 said:
Please try to use not an image, use a Docker path instead, have no problem using everything on a zpool, currently I'm using ZFS 2.1.1 on unRAID 6.10.0-rc1
Limetech does not yet recommend this (https://wiki.unraid.net/Manual/Release_Notes/Unraid_OS_6.9.0#Docker)
I think it would be best if we could get it to work with the ZFS storage driver (https://docs.docker.com/storage/storagedriver/zfs-driver/) once ZFS is officially in Unraid.
@ich777 what does
docker info
give you for your setup? Did it autoselect zfs as storage driver?
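For reference, a quick way to check which storage driver Docker picked (the `--format` template is standard Docker CLI; nothing ZFS-specific is assumed here):

```shell
# Print just the storage driver Docker selected
docker info --format '{{.Driver}}'

# Or inspect the human-readable output
docker info 2>/dev/null | grep -i 'storage driver'
```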
-
You should have a single drive in Unraid's array so the system can start, since most services need the array started. Your docker.img should also be placed on this drive, as was written earlier.
I have atime off in general, and enabling autotrim on SSDs is something I always do, like setting ashift manually at pool creation. For compression, however, I would go with zstd instead of lz4.
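As a sketch of those settings (pool and device names are placeholders; zstd needs ZFS 2.0+):

```shell
# Set ashift explicitly at creation (4K sectors) and enable
# autotrim for SSDs; 'tank' and the devices are placeholders
zpool create -o ashift=12 -o autotrim=on tank mirror /dev/sdb /dev/sdc

# Disable atime and use zstd compression pool-wide;
# child datasets inherit both
zfs set atime=off tank
zfs set compression=zstd tank
```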
-
Does the user/group have write access? The default when creating a dataset is root:root
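For example (dataset name is a placeholder; nobody:users is Unraid's usual share owner):

```shell
# New datasets come up owned by root:root; hand the mountpoint
# to the share user so SMB clients and containers can write
zfs create tank/share
chown -R nobody:users /mnt/tank/share
ls -ld /mnt/tank/share   # verify owner and mode
```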
-
Doesn't the array need to be started for services like SMB and Docker to run?
-
Can't find "Plugin Update Helper" in the Apps tab. Is it not available in 6.9.2? And is it sufficient to change the location, or do I have to move the docker.img file first?
-
This feature could make ZFS way more interesting for Unraid in the future: https://github.com/openzfs/zfs/pull/12225
-
All of a sudden my flash drive was no longer writable. Searching for that error revealed that this is most likely due to bad sectors, with SanDisk making their drives read-only in that case. So I want to switch to a new drive, but of course I don't have a flash backup. Now I wonder if I can just copy the files to the new drive in Windows, or if additional steps are needed.
-
Does Remote Access require Port 443 to be reachable via IPv4? I can't get it to work with IPv6, even though that can be reached from the internet.
-
1 hour ago, Joly0 said:
Also it seems not to make a difference having Z1 or Z2, but we will see, jsut lets try to pin down this issue as precisely as possible.
I can confirm that: I still had the issue on 6.9.0/2.0.3, and my raidz1 layout didn't change.
-
I think we can omit createtxg as it's described as "The birthtime transaction group (TXG) of the object."
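For comparison, both values can be read side by side (dataset name is an example):

```shell
# createtxg is the transaction group the dataset was born in;
# 'creation' is the human-readable timestamp of the same event
zfs get -o property,value creation,createtxg tank/Docker
```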
-
3 minutes ago, Joly0 said:
Nope, for me still whole system lockup when having the docker.img on my zfs array on 2.0.4/6.9.1
Now that is strange. I have docker.img in its own dataset on a raidz1. My settings:
NAME         PROPERTY                       VALUE                                        SOURCE
tank/Docker  type                           filesystem                                   -
tank/Docker  creation                       Sun Feb 21 15:50 2021                        -
tank/Docker  used                           106G                                         -
tank/Docker  available                      22.8T                                        -
tank/Docker  referenced                     11.4G                                        -
tank/Docker  compressratio                  1.66x                                        -
tank/Docker  mounted                        yes                                          -
tank/Docker  quota                          none                                         default
tank/Docker  reservation                    none                                         default
tank/Docker  recordsize                     128K                                         default
tank/Docker  mountpoint                     /mnt/tank/Docker                             inherited from tank
tank/Docker  sharenfs                       off                                          default
tank/Docker  checksum                       on                                           default
tank/Docker  compression                    lz4                                          inherited from tank
tank/Docker  atime                          off                                          inherited from tank
tank/Docker  devices                        on                                           default
tank/Docker  exec                           on                                           default
tank/Docker  setuid                         on                                           default
tank/Docker  readonly                       off                                          default
tank/Docker  zoned                          off                                          default
tank/Docker  snapdir                        hidden                                       default
tank/Docker  aclmode                        discard                                      default
tank/Docker  aclinherit                     restricted                                   default
tank/Docker  createtxg                      14165                                        -
tank/Docker  canmount                       on                                           default
tank/Docker  xattr                          sa                                           inherited from tank
tank/Docker  copies                         1                                            default
tank/Docker  version                        5                                            -
tank/Docker  utf8only                       off                                          -
tank/Docker  normalization                  none                                         -
tank/Docker  casesensitivity                sensitive                                    -
tank/Docker  vscan                          off                                          default
tank/Docker  nbmand                         off                                          default
tank/Docker  sharesmb                       off                                          default
tank/Docker  refquota                       none                                         default
tank/Docker  refreservation                 none                                         default
tank/Docker  guid                           8024818214154210388                          -
tank/Docker  primarycache                   all                                          default
tank/Docker  secondarycache                 all                                          default
tank/Docker  usedbysnapshots                45.6G                                        -
tank/Docker  usedbydataset                  11.4G                                        -
tank/Docker  usedbychildren                 49.5G                                        -
tank/Docker  usedbyrefreservation           0B                                           -
tank/Docker  logbias                        latency                                      default
tank/Docker  objsetid                       15006                                        -
tank/Docker  dedup                          off                                          default
tank/Docker  mlslabel                       none                                         default
tank/Docker  sync                           standard                                     inherited from tank
tank/Docker  dnodesize                      legacy                                       default
tank/Docker  refcompressratio               1.49x                                        -
tank/Docker  written                        9.09M                                        -
tank/Docker  logicalused                    171G                                         -
tank/Docker  logicalreferenced              17.0G                                        -
tank/Docker  volmode                        default                                      default
tank/Docker  filesystem_limit               none                                         default
tank/Docker  snapshot_limit                 none                                         default
tank/Docker  filesystem_count               none                                         default
tank/Docker  snapshot_count                 none                                         default
tank/Docker  snapdev                        hidden                                       default
tank/Docker  acltype                        off                                          default
tank/Docker  context                        none                                         default
tank/Docker  fscontext                      none                                         default
tank/Docker  defcontext                     none                                         default
tank/Docker  rootcontext                    none                                         default
tank/Docker  relatime                       off                                          default
tank/Docker  redundant_metadata             all                                          default
tank/Docker  overlay                        on                                           default
tank/Docker  encryption                     off                                          default
tank/Docker  keylocation                    none                                         default
tank/Docker  keyformat                      none                                         default
tank/Docker  pbkdf2iters                    0                                            default
tank/Docker  special_small_blocks           0                                            default
tank/Docker  org.znapzend:zend_delay        0                                            inherited from tank
tank/Docker  org.znapzend:enabled           on                                           inherited from tank
tank/Docker  org.znapzend:src_plan          7days=>1hours,30days=>4hours,90days=>1days   inherited from tank
tank/Docker  org.znapzend:mbuffer_size      1G                                           inherited from tank
tank/Docker  org.znapzend:mbuffer           off                                          inherited from tank
tank/Docker  org.znapzend:tsformat          %Y-%m-%d-%H%M%S                              inherited from tank
tank/Docker  org.znapzend:recursive         on                                           inherited from tank
tank/Docker  org.znapzend:pre_znap_cmd      off                                          inherited from tank
tank/Docker  org.znapzend:post_znap_cmd     off                                          inherited from tank
Running the following images
linuxserver/sabnzbd
linuxserver/jackett
wordpress
linuxserver/sonarr
linuxserver/nzbhydra2
linuxserver/plex
linuxserver/tautulli
linuxserver/nextcloud
linuxserver/lidarr
linuxserver/mariadb
binhex/arch-qbittorrentvpn
linuxserver/radarr
telegraf
grafana/grafana
linuxserver/sabnzbd
prom/prometheus
influxdb
jlesage/nginx-proxy-manager
b3vis/borgmatic
boerderij/varken
spaceinvaderone/macinabox
spaceinvaderone/vm_custom_icons
binhex/arch-krusader
jlesage/jdownloader-2
-
@steini84 I have run 2.0.4 on 6.9.1 on my test system without problems so far, and switched my main system (raidz1, 24 Docker containers in a btrfs img) to unstable. It has been running fine for the last 8 hours, whereas 2.0.3 under 6.9.0 had the system lock up within minutes. It looks like the issue is fixed in either Unraid 6.9.1 or ZFS 2.0.4. A verification from someone else would be helpful, though.
-
1 hour ago, NeoJoris said:
I'm sorry to maybe misinterpreting your problem, but have you checked for any hardware faults? Please try some cpu and memory tests (xmp was for me a random crasher and causing some drive and network instabilities on a threadripper)
Which problem are you referring to?
-
I have setup a test system with
- zfs 2.0.4 on Unraid 6.9.1
- zpool of only one SSD drive
- znapsend making snapshots hourly
- one dataset for Docker
- placed docker.img in that dataset
- have one docker running (Telegraf)
So far I could not reproduce the error. But there are some key differences from my main system, where I had the problem; it had
- raidz1 of 3 drives
- zfs 2.0.3 on Unraid 6.9.0
- multiple Docker containers running
- drives that were classic spinning rust
Anyone got an idea on how to reproduce the error? I wouldn't call it fixed in 2.0.4/6.9.1 so soon.
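The hourly-snapshot part of the setup above can be configured with znapzend roughly like this (plan and dataset name are examples, not copied from the test box):

```shell
# Snapshot plan: keep hourly for 7 days, 4-hourly for 30 days,
# daily for 90 days; SRC marks the source-side plan
znapzendzetup create --recursive SRC '7days=>1hours,30days=>4hours,90days=>1days' testpool/Docker

# Run the daemon once in debug mode to verify the plan fires
znapzend --debug --runonce=testpool/Docker
```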
-
7 hours ago, manuel007 said:
how can i switch on 2.0.4 ?
touch /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
-
3 hours ago, steini84 said:
Also 2.0.3 in the "Unstable" folder you can manually enable
Is it still
touch /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
?
I want to reproduce the problems on a test machine, but I only have one drive (since the array wants at least one) and therefore could only create a single-disk pool. I don't know if that will prevent me from reproducing the Docker error.
-
2 hours ago, Marshalleq said:
Like I said, I didn't even get to that point, as the system completely locked up before I could copy over the data. Starting with a fresh Docker is not an option for me. Also, the help page mentions ZFS for /var/lib/docker (where we want to bind-mount to). Currently they recommend btrfs (which Unraid uses by default) or OverlayFS (the successor to aufs). I guess that is why it's currently an img file and not simply a folder.
I just don't understand what changed in 2.0.1+ to make the img file stop working.
There was also something strange I noticed on 2.0.3: I tried to copy the docker.img from the ZFS pool to an SSD with XFS and it was really slow. Docker wasn't started, and other files copied over just fine. My docker.img had multiple snapshots (like other files); I don't know if that matters.
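To check whether snapshots are hanging off the dataset holding docker.img, something like this should list them (dataset name as in my earlier output):

```shell
# List snapshots under the Docker dataset with their space usage
zfs list -t snapshot -o name,used,creation -r tank/Docker
```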
-
1 hour ago, steini84 said:
Did you try the new bind-mount to a directory that was added? In any case I moved 2.0.3 to unstable and added a 2.0.0. build instead (thanks ich777)
I'm still new to Docker, so maybe I did it wrong. Here is what I thought I could do:
1) Create a new dataset like /mnt/tank/Docker/image for the content that was previously in docker.img
2) Start Docker with docker.img and make sure containers are stopped
3) Copy over the content from /var/lib/docker to /mnt/tank/Docker/image
4) Stop Docker and change the path to /mnt/tank/Docker/image
5) Start Docker again and it would bind-mount /mnt/tank/Docker/image to /var/lib/docker
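In commands, the steps above would look roughly like this (paths as above; untested end-to-end):

```shell
# 1) Dataset for the future Docker root
zfs create tank/Docker/image

# 2) With the Docker service up, stop all containers
docker stop $(docker ps -q)

# 3) Copy the live contents of docker.img out, preserving
#    permissions and attributes
cp -a /var/lib/docker/. /mnt/tank/Docker/image/

# 4)+5) Stop Docker in the webGUI, point the Docker directory
#    setting at /mnt/tank/Docker/image, then start it again
```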
Unfortunately the system completely locked up during step 3 and I had to reset.
EDIT: Seeing as there is a whole storage driver (https://docs.docker.com/storage/storagedriver/zfs-driver/) involved, and I don't even know how those changes are made in Unraid, I think I'll stay with the .img file.
-
After updating to 6.9.0 all my user shares are gone. /mnt looks like this
d--------- 1 root root 0 Jan 1 1970 user/
and I can't change permission on that
# chmod 1777 /mnt/user
chmod: changing permissions of '/mnt/user': No such file or directory
-
51 minutes ago, steini84 said:
This hopefully fixes the problems with docker.img and zfs 2.0.1+
Still running into the same problem with Docker. How can I go back to 2.0.0?
[Plugin] Nvidia-Driver
in Plugin Support
Posted
Would it be okay to have only one GPU for VM Passthrough and Transcoding if both aren't running at the same time? I only need the VM once in a while and would stop Plex before I use it.