PyCoder

Everything posted by PyCoder

  1. I just read this in the news, and just my 2 cents... I bought the Pro key years ago, which I don't use anymore, so I don't really care, but... I'd get it if you had a professional branch and a community edition like TrueNAS, but Unraid is mostly used by home users, so an annual subscription will probably only push people towards TrueNAS SCALE or OMV. There is no real advantage over TrueNAS SCALE anymore, especially with raidz expansion, and a professional branch with professional support doesn't exist, sooooo...
  2. Changed Priority to Annoyance
  3. I changed the mountpoint to /mnt/deadpool and changed all the settings from /deadpool to /mnt/deadpool. How do I downgrade to 6.9? I can only see a rollback to 6.10.2. Ty btw.
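     A minimal sketch of that mountpoint change, assuming the pool is named deadpool as in the post (the second command is just for verification):
       # Move the pool's mountpoint from /deadpool to /mnt/deadpool
       zfs set mountpoint=/mnt/deadpool deadpool
       # Confirm where the pool and its datasets are now mounted
       zfs get -r mountpoint deadpool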
  4. Yeah, I just saw it, see the edit. But it's nonsense that everything outside /mnt still works (e.g. Docker) while VMs haven't worked since 6.10. Ty. Anyway, I guess it's time to switch systems. Example: working in Docker and everywhere else.
  5. I can't download the virtio-win ISO anymore because the path doesn't exist according to the UI, even though, as shown in the console output, the path exists and has the right permissions. Check the screenshot (4. Storage path invalid). I had none of these issues before 6.10. EDIT: OK, apparently since 6.10 the user is FORCED to use a folder in /mnt?! When I bind (mount -o bind) the folder /deadpool/VM to /mnt/user/VM it's working. That's just nonsense.
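     A minimal sketch of the bind-mount workaround described in the edit above; persisting it through /boot/config/go is an assumption, not something the post confirms:
       # Expose the ZFS-backed VM folder under /mnt so the 6.10 UI accepts the path
       mkdir -p /mnt/user/VM
       mount -o bind /deadpool/VM /mnt/user/VM
       # Assumption: repeat the bind at boot via the Unraid go file
       echo "mount -o bind /deadpool/VM /mnt/user/VM" >> /boot/config/go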
  6. Yeah, a reboot works, but as soon as I stop and start the VM manually I have the same issue again and have to reboot once more, so that's not a solution. It worked before 6.10.x. Also, all the paths are broken and the drop-downs are offset, and they worked before 6.10.x.
  7. Since 6.10 I have these issues: 1) the VM Manager doesn't show any path in the drop-down; 2) the libvirt service fails to start ("Libvirt Service failed to start") and doesn't come back after a manual stop/start; 3) the drop-down doesn't show any path or ISO file when creating or editing a VM, and it is off-center!; 4) "Storage path invalid". This path has worked since I bought Unraid, so from 6.7 through 6.9.x, and now it's broken. deadpool-diagnostics-20220625-1620.zip
  8. Yeah, but there was also an issue with docker.img on ZFS with update 2.0 or 2.1; that's why I changed it to a directory, which had worked for weeks until 2 days ago. Hmmm, I'll switch back to docker.img, and if that doesn't work I'll try a zvol with ext4. Let's try. Edit: docker.img on ZFS blocked /dev/loop, and Docker in a ZFS directory f*s up containers. My solution with only ZFS:
       zfs create -V 30gb pool/docker
       mkfs.ext4 /dev/zvol/pool/docker
       echo "mount /dev/zvol/pool/docker /mnt/foobar" >> /boot/config/go
     Working like a charm.
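     A slightly expanded sketch of the zvol approach above; the mkdir line is an assumption (the go file runs at boot, so the mountpoint has to exist before the mount), and /mnt/foobar is the post's placeholder path:
       # 30 GB zvol formatted with ext4, so Docker gets a non-ZFS filesystem to live on
       zfs create -V 30gb pool/docker
       mkfs.ext4 /dev/zvol/pool/docker
       # Mount it at boot via the Unraid go file (assumption: create the mountpoint first)
       echo "mkdir -p /mnt/foobar" >> /boot/config/go
       echo "mount /dev/zvol/pool/docker /mnt/foobar" >> /boot/config/go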
  9. Hi. Did you guys release a new update? Because last time I moved my Docker from the btrfs image to a ZFS directory, and now, out of the blue, I can't update, remove, or start Docker containers anymore?! I even deleted the directory (zfs destroy -r) and reinstalled all the containers... after one day I had the exact same issue again. Does someone have a solution?
  10. Oh, ty. I'm running 6.9.2, but I'll switch to directory mode. Cheers. Edit: I just figured out that I clicked on the wrong button when I wanted to reply 😂 sorry
  11. Hi. Is it possible that you guys introduced some nasty bugs with that update? My system is not responding anymore, and every time I force it to reboot, loop2 starts to hang with 100% CPU usage; docker.img is on my ZFS pool. This started after I updated ZFS for Unraid to 2.0.0.
  12. I mean, I can put the drives to sleep myself, but why are they waking up? There are zero reads/writes to the drives, and on TrueNAS I don't have to export them to make them sleep. So it must be something Unraid-specific going on? I made a script now that exports the drives and puts them to sleep (see the sketch below).
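     A minimal sketch of such an export-and-spin-down script, assuming the pool name batcave and the /dev/sde and /dev/sdg devices mentioned in the related post below:
       #!/bin/bash
       # Export the pool so nothing on the host keeps touching the vdevs...
       zpool export batcave
       # ...then force the member drives into standby immediately
       for dev in /dev/sde /dev/sdg; do
           hdparm -y "$dev"
       done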
  13. Hi. Is it possible to integrate dm-integrity for bit-rot detection? It would be an additional layer below XFS, and we would get bit-rot detection. See: https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-integrity.html I know btrfs has checksums, but XFS is still the default, and btrfs is "stable" but still not finished. Cheers!
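     For context, a hedged sketch of what a standalone dm-integrity layer under XFS looks like using the cryptsetup project's integritysetup tool; /dev/sdX is a placeholder, the format step destroys existing data, and none of this is an existing Unraid feature:
       # Initialize integrity (checksum) metadata on the raw device
       integritysetup format /dev/sdX
       # Open it as a device-mapper target; corrupted sectors then surface as read errors
       integritysetup open /dev/sdX int_sdX
       # Create XFS on top of the integrity layer and mount it
       mkfs.xfs /dev/mapper/int_sdX
       mount /dev/mapper/int_sdX /mnt/integrity-test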
  14. Hi, I have a couple of little issues with my Unraid: 1) Unraid is not putting my zpool drives to sleep even though I used "hdparm -S 120 /dev/sdg /dev/sde". 2) When I put my zpool to sleep manually by pressing the button, Unraid wakes the pool up again after 5 minutes. I can't figure out what the issue is: iotop and lsof don't show me anything, and zpool iostat shows read/write operations of 1 for the zpool batcave. Does someone know what's going on and can help? PS: It works with TrueNAS SCALE for some reason, or when I export the zpool. PPS: I have to put the drives to sleep because the WD Red Plus are noisy af and hot af!
      root@Deadpool:~# zpool iostat -vv
                                                        capacity     operations     bandwidth
      pool                                            alloc   free   read  write   read  write
      --------------------------------------------    -----  -----  -----  -----  -----  -----
      batcave                                         4.22T  4.87T      1      1  8.73K  22.3K
        mirror                                        4.22T  4.87T      1      1  8.73K  22.3K
          ata-WDC_WD101EFBX-68B0AN0_VCJ3DM4P              -      -      0      0  4.36K  11.2K
          ata-WDC_WD101EFBX-68B0AN0_VCJ3A0MP              -      -      0      0  4.38K  11.2K
      --------------------------------------------    -----  -----  -----  -----  -----  -----
      deadpool                                        5.81T  1.45T      2      2  15.6K  28.3K
        raidz1                                        5.81T  1.45T      2      2  15.6K  28.3K
          ata-WDC_WD20EFRX-68EUZN0_WD-WCC4MJJZXUS6        -      -      0      0  3.99K  7.17K
          ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M2CE11LH        -      -      0      0  3.95K  7.10K
          ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M7SEZK5A        -      -      0      0  3.80K  7.03K
          ata-ST2000VN004-2E4164_Z529JXWQ                 -      -      0      0  3.87K  7.00K
      --------------------------------------------    -----  -----  -----  -----  -----  -----
      root@Deadpool:~#
      root@Deadpool:~# lsof /batcave/
      root@Deadpool:~#
      Total DISK READ :       0.00 B/s | Total DISK WRITE :       0.00 B/s
      Actual DISK READ:       0.00 B/s | Actual DISK WRITE:       0.00 B/s
        TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
          1  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  init
          2  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kthreadd]
          3  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [rcu_gp]
          4  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [rcu_par_gp]
          5  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/0:0-events]
          6  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/0:0H-events_highpri]
          8  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [mm_percpu_wq]
          9  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [ksoftirqd/0]
         10  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [rcu_sched]
         11  rt/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [migration/0]
         12  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [cpuhp/0]
         13  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [cpuhp/1]
         14  rt/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [migration/1]
         15  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [ksoftirqd/1]
         16  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/1:0-events]
         17  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/1:0H-kblockd]
         18  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [cpuhp/2]
         19  rt/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [migration/2]
         20  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [ksoftirqd/2]
         22  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/2:0H-kblockd]
         23  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [cpuhp/3]
         24  rt/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [migration/3]
         25  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [ksoftirqd/3]
         27  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/3:0H-events_highpri]
         28  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kdevtmpfs]
         29  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [netns]
         30  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/u64:1-flush-8:0]
      11295  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [xfs-cil/md1]
      11296  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [xfs-reclaim/md1]
      11297  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [xfs-eofblocks/m]
      11298  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [xfs-log/md1]
       1115  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [usb-storage]
       9713  be/7  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_checkpoint_di]
         42  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/3:1-md]
       1415  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [scsi_eh_3]
         49  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/0:2-events]
      11315  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  shfs /mnt/user0 -disks 2 -o noatime,allow_other
      11316  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  shfs /mnt/user0 -disks 2 -o noatime,allow_other
       1077  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/0:1H-kblockd]
       1078  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [ipv6_addrconf]
       9784  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_null_iss]
       9785  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_null_int]
       9786  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_rd_iss]
       9787  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_rd_int]
       9788  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_rd_int]
       9789  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_rd_int]
       9790  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_rd_int]
       9791  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_rd_int]
       1600  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  rsyslogd -i /var/run/rsyslogd.pid
       1601  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  rsyslogd -i /var/run/rsyslogd.pid [in:imuxsock]
       1602  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  rsyslogd -i /var/run/rsyslogd.pid [in:imklog]
       1603  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  rsyslogd -i /var/run/rsyslogd.pid [rs:main Q:Reg]
       9796  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_wr_iss]
       1093  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/1:3-events]
       9798  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_wr_iss_h]
       9799  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_wr_int]
       9800  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_wr_int]
       9801  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_wr_int]
       9802  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_wr_int]
       9803  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_wr_int]
       9804  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_wr_int]
       9805  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_wr_int]
       1102  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [scsi_eh_0]
      11277  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  bash /usr/local/emhttp/webGui/scripts/diskload
         80  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [kworker/2:1-events]
       1105  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [usb-storage]
       9810  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_fr_iss]
       9811  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_fr_iss]
       9812  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_fr_iss]
       9813  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_fr_iss]
       9814  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_fr_iss]
       1111  be/4  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [scsi_eh_1]
       9816  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [z_fr_int]
       1113  be/0  root      0.00 B/s   0.00 B/s  0.00 %  0.00 %  [scsi_tmf_1]
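     A small sketch of how the wake-ups could be narrowed down further; the interval mode of zpool iostat and the hdparm power-state check are standard options, not something the post already used:
       # Report whether the pool's drives are currently spun down (standby) or active
       hdparm -C /dev/sde /dev/sdg
       # Watch the pool in 5-second intervals; a wake-up shows up as non-zero operations
       zpool iostat -v batcave 5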
  15. I played around with the scheduler and md_write_method. Btw, if all the disks have to spin anyway, I can just stick to ZFS. 115 MB/s now. Problem solved.
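     A hedged sketch of the two knobs mentioned above, assuming the usual Unraid mdcmd tunable for reconstruct write ("turbo write") and a standard sysfs scheduler switch; the device name and scheduler value are examples only:
       # Reconstruct write: faster array writes, but all data disks must spin up
       mdcmd set md_write_method 1
       # Example scheduler change for one array disk
       echo mq-deadline > /sys/block/sdb/queue/scheduler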
  16. Hi. No, I don't use any cache drives, and I tested md_write_method without any success. I know the array is only as fast as a single HDD, but 15 MB/s? Even when I copy a file from my PC to my laptop I get at least 50 MB/s, soooooooo something must be off. And this is compared with ZFS; yes, it is striped and therefore faster, but 15 MB/s isn't normal.
  17. Hi guys! I was setting up a new Unraid, but this time I went from ZFS to the "normal" Unraid setup, and now my problem: the SMB transfer speed is sloooooooooooooooooow, really sloooooooow, around 15 MB/s on average?! Is there any way to fix that? I mean, the drives should manage at least 80 MB/s. It's the same hardware and the same Unraid; I only removed the ZFS pool and created an Unraid array. If I switch back to the ZFS pool I get around 350 MB/s. I know that the ZFS pool is faster thanks to its "traditional" RAID setup and that Unraid is only as fast as a single HDD, but 15 MB/s? Can someone help?
  18. Hi, I have an issue with my Docker/VPN setup. I switched from "bridge/host" to a custom bridge (br0), and since then I can't reach any Docker container via VPN! As an example: I can reach Plex as "host" (192.168.0.10) but not as "br0" (192.168.0.53) via VPN. On the other hand, I can reach my Unraid (192.168.0.10) via VPN. So I assume something with the routing is f*ed up? Do I have to change the routing in Unraid/Docker, or what's the matter? PS: It doesn't matter whether I use WireGuard or OpenVPN; both show me "ERR_ADDRESS_UNREACHABLE" for the containers on br0.
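     One common explanation (an assumption here, not something the post confirms) is that a macvlan-based br0 network isolates its containers from the host, so VPN traffic that terminates on Unraid cannot be forwarded to them; a host-side macvlan shim can restore that path. 192.168.0.200 is a hypothetical free address on the LAN:
       # Create a macvlan shim on top of br0 so the host can reach br0 containers
       ip link add br0-shim link br0 type macvlan mode bridge
       ip addr add 192.168.0.200/32 dev br0-shim
       ip link set br0-shim up
       # Route the container address via the shim instead of the parent interface
       ip route add 192.168.0.53/32 dev br0-shim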
  19. +1 I would like to see a UI for ZFS in Unraid and a one-click installation for the module (so we avoid the license issues)! PS: No, ECC is not required, it's only recommended! Unraid isn't protecting us from bit rot at the moment, neither in RAM nor on disk. ZFS without ECC would at least protect the data that is already on disk from bit rot!
  20. I like the KVM integration. I would like to see checksums, official ZFS support, or hostapd support in the future.
  21. Hm, ok :( I'll use an old, crappy USB stick. Thanks guys :)
  22. Hi, I have a question about Unraid and arrays... My current NAS is using ZFS (FreeNAS), and I want to stick with ZFS, so in my opinion there is no "array" needed, because I want everything on the ZFS pool. BUUUUUUUUUUUUUUT! ---> I can't start any VM or Docker container because "no array is started/existing". Is there any way to bypass that, or am I really forced to use a regular array in addition to my ZFS pool? I hope you guys get what I mean. Cheers