PyCoder


Posts posted by PyCoder

  1. I just read this in the news, so here are just my 2 cents...
    I bought the Pro key years ago, which I don't use anymore, so I don't really care, but...

    I'd get it if you had a professional branch and a community edition like TrueNAS, but Unraid is mostly used by home users, so an annual subscription will probably only push people towards TrueNAS Scale or OMV.

    There is no real advantage over TrueNAS Scale anymore, especially with raidz expansion, and a professional branch with professional support doesn't exist, soooo...


     

  2. 1 hour ago, ich777 said:

    No.

     

    Please see the first post under "unRAID settings".

    I think this is one of the known issues when the Docker image or path is on a ZFS filesystem.

     

    Maybe @steini84 has some more information about this.

     

    Yeah, but there was also an issue with docker.img on ZFS with update 2.0 or 2.1; that's why I changed it to a directory, which had worked for weeks until 2 days ago.

    Hmmm, I'll switch back to docker.img; if that doesn't work, I'll try a zvol with ext4.

     

    Let's try :)

     

    Edit: docker.img on ZFS blocked /dev/loop, and Docker in a ZFS directory f*s up containers.

     

    My solution with only ZFS:
    zfs create -V 30gb pool/docker                                     # create a 30 GB zvol named pool/docker
    mkfs.ext4 /dev/zvol/pool/docker                                    # format the zvol with ext4
    echo "mount /dev/zvol/pool/docker /mnt/foobar" >> /boot/config/go  # mount it at boot via the go file

     

    Working like a charm :)
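    Since the Unraid root filesystem lives in RAM, the mountpoint won't exist after a reboot, so the go-file entries probably need to create it first. A rough sketch (/mnt/foobar is just the placeholder path from above):

    # appended to /boot/config/go; assumes the zvol was created and formatted as above
    mkdir -p /mnt/foobar
    mount /dev/zvol/pool/docker /mnt/foobar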


     

  3. Hi

    Did you guys release a new update?
    Because after the last one I moved my Docker setup from the btrfs image to a ZFS directory, and now, out of the blue, I can't update, remove, or start Docker containers anymore?!

    I even deleted the directory (zfs destroy -r) and reinstalled all Docker containers... after 1 day I had the exact same issue again.

     

    Quote

    Execution error

    Image can not be deleted, in use by other container(s)

     

    Does someone have a solution?
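    For reference, a way to check which containers still hold a reference to an image would be something like this (the image name is only an example):

    docker ps -a --filter ancestor=linuxserver/plex   # list all containers, including stopped ones, created from that image
    docker rm <container-id>                          # remove the stale container first
    docker rmi linuxserver/plex                       # then the image itself can be deleted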

  4. Hi

     

    Is it possible that you guys introduced some nasty bugs with that update?
    My system is not responding anymore, and every time I force it to reboot, loop2 starts to hang with 100% CPU usage. My docker.img is on my ZFS pool.

     

    This started after I updated ZFS for unRAID to 2.0.0.
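    A quick way to confirm which file the hanging loop device is actually backing (just a diagnostic idea, not from the original report):

    losetup -a   # lists all loop devices together with their backing files, e.g. /dev/loop2 -> docker.img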

  5. On 12/5/2021 at 7:19 PM, Squid said:

    Not sure if Unraid will ever manage (i.e. spin down) drives that are outside of its control (or UD's). Probably best to ask in the ZFS thread.

     

    I mean, I can put the drives to sleep myself, but why are they waking up?
    There are zero reads/writes to the drives, and on TrueNAS I don't have to export them to make them sleep.

    So it must be some Unraid thing that is going on?

    I made a script now that exports the drives and puts them to sleep. :(
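    Roughly, such a script boils down to something like this (pool and device names taken from the post below, purely as a sketch):

    zpool export batcave            # export the pool so nothing keeps the disks busy
    hdparm -y /dev/sdg /dev/sde     # put the member drives into standby immediately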

  6. Hi

    I have a couple of small issues with my Unraid:

    1) Unraid is not putting my zpool drives to sleep even though I used "hdparm -S 120 /dev/sdg /dev/sde".
    2) When I put my zpool to sleep manually by pressing the button, Unraid wakes the pool up again after 5 minutes.

    I can't figure out what the issue is.
    iotop and lsof don't show me anything, and zpool iostat shows r/w of 1 for the zpool batcave.

    Does someone know what's going on and can help?

     

    PS: It works on TrueNAS Scale for some reason, or when I export the zpool.
    PPS: I have to put the drives to sleep because the WD Red Plus drives are noisy af and hot af!
     

    root@Deadpool:~# zpool iostat -vv
                                                    capacity     operations     bandwidth 
    pool                                          alloc   free   read  write   read  write
    --------------------------------------------  -----  -----  -----  -----  -----  -----
    batcave                                       4.22T  4.87T      1      1  8.73K  22.3K
      mirror                                      4.22T  4.87T      1      1  8.73K  22.3K
        ata-WDC_WD101EFBX-68B0AN0_VCJ3DM4P            -      -      0      0  4.36K  11.2K
        ata-WDC_WD101EFBX-68B0AN0_VCJ3A0MP            -      -      0      0  4.38K  11.2K
    --------------------------------------------  -----  -----  -----  -----  -----  -----
    deadpool                                      5.81T  1.45T      2      2  15.6K  28.3K
      raidz1                                      5.81T  1.45T      2      2  15.6K  28.3K
        ata-WDC_WD20EFRX-68EUZN0_WD-WCC4MJJZXUS6      -      -      0      0  3.99K  7.17K
        ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M2CE11LH      -      -      0      0  3.95K  7.10K
        ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M7SEZK5A      -      -      0      0  3.80K  7.03K
        ata-ST2000VN004-2E4164_Z529JXWQ               -      -      0      0  3.87K  7.00K
    --------------------------------------------  -----  -----  -----  -----  -----  -----
    root@Deadpool:~# 
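    Note that without an interval argument zpool iostat prints averages since the pool was imported, not current activity; sampling over an interval shows whether there is really live I/O, e.g.:

    zpool iostat -v batcave 5   # print per-vdev stats for the batcave pool every 5 seconds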

     

    root@Deadpool:~# lsof /batcave/
    root@Deadpool:~# 

     

    Total DISK READ :       0.00 B/s | Total DISK WRITE :       0.00 B/s
    Actual DISK READ:       0.00 B/s | Actual DISK WRITE:       0.00 B/s
      TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                                       
        1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
        2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
        3 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_gp]
        4 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_par_gp]
        5 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/0:0-events]
        6 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/0:0H-events_highpri]
        8 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [mm_percpu_wq]
        9 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
       10 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [rcu_sched]
       11 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
       12 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [cpuhp/0]
       13 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [cpuhp/1]
       14 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
       15 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]
       16 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/1:0-events]
       17 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/1:0H-kblockd]
       18 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [cpuhp/2]
       19 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/2]
       20 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/2]
       22 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/2:0H-kblockd]
       23 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [cpuhp/3]
       24 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/3]
       25 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/3]
       27 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/3:0H-events_highpri]
       28 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kdevtmpfs]
       29 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [netns]
       30 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/u64:1-flush-8:0]
    11295 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [xfs-cil/md1]
    11296 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [xfs-reclaim/md1]
    11297 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [xfs-eofblocks/m]
    11298 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [xfs-log/md1]
     1115 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [usb-storage]
     9713 be/7 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_checkpoint_di]
       42 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/3:1-md]
     1415 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [scsi_eh_3]
       49 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/0:2-events]
    11315 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % shfs /mnt/user0 -disks 2 -o noatime,allow_other
    11316 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % shfs /mnt/user0 -disks 2 -o noatime,allow_other
     1077 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/0:1H-kblockd]
     1078 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ipv6_addrconf]
     9784 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_null_iss]
     9785 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_null_int]
     9786 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_rd_iss]
     9787 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_rd_int]
     9788 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_rd_int]
     9789 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_rd_int]
     9790 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_rd_int]
     9791 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_rd_int]
     1600 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % rsyslogd -i /var/run/rsyslogd.pid
     1601 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % rsyslogd -i /var/run/rsyslogd.pid [in:imuxsock]
     1602 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % rsyslogd -i /var/run/rsyslogd.pid [in:imklog]
     1603 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % rsyslogd -i /var/run/rsyslogd.pid [rs:main Q:Reg]
     9796 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_wr_iss]
     1093 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/1:3-events]
     9798 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_wr_iss_h]
     9799 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_wr_int]
     9800 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_wr_int]
     9801 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_wr_int]
     9802 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_wr_int]
     9803 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_wr_int]
     9804 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_wr_int]
     9805 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_wr_int]
     1102 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [scsi_eh_0]
    11277 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % bash /usr/local/emhttp/webGui/scripts/diskload
       80 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kworker/2:1-events]
     1105 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [usb-storage]
     9810 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_fr_iss]
     9811 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_fr_iss]
     9812 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_fr_iss]
     9813 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_fr_iss]
     9814 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_fr_iss]
     1111 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [scsi_eh_1]
     9816 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [z_fr_int]
     1113 be/0 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [scsi_tmf_1]

     

  7. 17 minutes ago, Frank1940 said:

    Are you using a cache drive for the array? Writes directly to the array are much slower than reads. A second (and major) factor is the size of the files being transferred. Writing directly to the array is very slow because of file creation overhead, and the extra reads and writes necessary to keep parity updated in real time.

     

    There are also two write methods (Settings >>> Disk Settings > 'Tunable (md_write_method)'). Using "reconstruct write" is faster, but it will spin up all of the disks in the array as opposed to just the parity disk(s) and one data disk.

     

    It looks like you are doing an initial data load on a new server setup. If you have a good backup of all the data you are transferring, you could unassign the parity disk, leaving the array unprotected, but that would at least double the transfer speed. (When you assign the parity disk after the data is loaded, a parity build will be required.)

     

    One more observation: small-capacity drives have slower speeds than large-capacity drives because of the higher data density of the larger drives.



    Hi, 

    No, I don't use any cache drives, and I tested md_write_method without any success.

    I know that the array is only as fast as a single HDD, but 15 MB/s?

    Even when I copy a file from my PC to my laptop I get at least 50 MB/s, soooo something must be off.


    And this is now with ZFS. Yes, it's striped and therefore faster, but 15 MB/s on the array isn't normal.

    Untitled.png
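    One thing that could help narrow it down (just a suggestion, the /mnt/disk1 path is an example) is to test the raw write speed of the array outside of SMB:

    dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=2048 oflag=direct   # write 2 GiB directly to one array disk, bypassing the page cache
    rm /mnt/disk1/ddtest.bin                                                 # clean up the test file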

  8. Hi guys!

    I was setting up a new Unraid, but this time I went from ZFS to the "normal" Unraid setup, and now my problem:

    The SMB transfer speed is sloooooow, really sloooooow, around 15 MB/s on average?!

    Is there any way to fix that? I mean, the drives should manage at least 80 MB/s.

     

    It's the same hardware and the same Unraid; I only removed the ZFS pool and created an Unraid array.
    If I switch back to the ZFS pool I get around 350 MB/s.

    I know that the ZFS pool is faster thanks to its "traditional" RAID setup and that Unraid is only as fast as a single HDD, but 15 MB/s?

    Can someone help?
     

    Untitled.png

  9. Hi,

    I have an issue with my Docker/VPN setup.
    I switched from "bridge/host" to a custom bridge (br0), and since then I can't reach any Docker container via VPN!

    For example:
    I can reach Plex on "host" (192.168.0.10) but not on "br0" (192.168.0.53) via VPN.

    On the other hand, I can reach my Unraid (192.168.0.10) via VPN.

    So I assume something with the routing is f*ed up?

     

    Screenshot from 2021-01-22 19-38-19.png

    Screenshot from 2021-01-22 19-42-21.png

     

     

    So do I have to change the routing in Unraid/Docker, or what's the matter?

    PS: It doesn't matter whether I use WireGuard or OpenVPN; both show "ERR_ADDRESS_UNREACHABLE" for the containers on br0.
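    For what it's worth, Unraid's br0 custom network is macvlan-based by default, and macvlan normally prevents the host itself (where the WireGuard/OpenVPN tunnel terminates) from talking to the containers, which would explain exactly this symptom. Two things worth checking (the network name br0 is taken from the post):

    docker network inspect br0   # confirm the subnet/gateway the br0 containers actually use
    ip route                     # show the host routing table, including the route for the VPN client subnet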




     

  10. +1

    I would like to see a UI for ZFS in Unraid and a one-click installation for the module (so we avoid the license issues)!


    PS: No, ECC is not required, it's only recommended!
    At the moment Unraid isn't protecting us from bit rot anyway, neither in RAM nor on disk.

    ZFS without ECC would at least protect the data that is already on disk from bit rot!

  11. Hi

     

    I have a question about Unraid and arrays...

     

    My current NAS runs ZFS (FreeNAS) and I want to stick with ZFS, so in my opinion there is no "array" needed, because I want everything on the ZFS pool.

    BUUUUT!
    ---> I can't start any VM or Docker container because "no array is started/existing".

    Is there any way to bypass that, or am I really forced to use a regular array + my ZFS pool?

     

     

    I hope you guys get what I mean.

     

    Cheers :)