PyCoder

Members · 23 posts

  1. Changed Priority to Annoyance
  2. I changed the mountpoint to /mnt/deadpool and changed all the settings from /deadpool to /mnt/deadpool. How do I downgrade to 6.9? I can only see a rollback to 6.10.2. Ty btw.
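     For reference, the mountpoint change amounts to something like this (a sketch; pool name from my setup, paraphrased rather than pasted from my shell history):
       zfs set mountpoint=/mnt/deadpool deadpool   # remounts the dataset under /mnt
       zfs get mountpoint deadpool                 # verify the new path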
  3. Yeah, I just saw it; see the edit. But it's nonsense that everything outside /mnt works (e.g. Docker) while VMs stopped working in 6.10. Ty. Anyway, I guess it's time to switch systems. Example: the same path works in Docker and everywhere else.
  4. I can't download the virtio-win ISO anymore because the path doesn't exist according to the UI, while the console output shows the path exists and has the right permissions. Check the screenshot (4. Storage path invalid). I had none of these issues before 6.10. EDIT: Ok, apparently since 6.10 the user is FORCED to use a folder in /mnt?! When I bind-mount (mount -o bind) the folder /deadpool/VM to /mnt/user/VM, it works. That's just nonsense.
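     Spelled out, the workaround is just this one line (paths from my setup; a sketch, assuming the dataset is mounted at /deadpool/VM):
       mount -o bind /deadpool/VM /mnt/user/VM   # the dataset shows up where 6.10 expects it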
  5. Yeah, a reboot works, but as soon as I stop and start the VM manually I get the same issue and have to reboot again, so that's not a solution. It worked before 6.10.x. Also, all the paths are broken and offset, and they worked before 6.10.x.
  6. Since 6.10 I have these issues:
     1) VM Manager doesn't show any path in the drop-down.
     2) The libvirt service fails to start, and doesn't come back after a manual stop/start.
     3) The drop-down doesn't show any path or ISO file when creating or editing a VM, and it's off-center!
     4) "Storage path invalid" for a path I've used since I bought Unraid, from 6.7 through 6.9.x; now it's broken.
     deadpool-diagnostics-20220625-1620.zip
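     If anyone wants to dig into the libvirt failure, the usual places to look would be something like this (standard libvirt locations, not paths I've re-checked on this box):
       cat /var/log/libvirt/libvirtd.log   # why the daemon failed to start
       virsh list --all                    # whether libvirt sees any defined VMs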
  7. Yeah, but there was also an issue with docker.img on ZFS with update 2.0 or 2.1; that's why I changed it to a directory, which worked for weeks until 2 days ago. Hmmm, I'll switch back to docker.img, and if that doesn't work I'll try a zvol with ext4. Let's try.
     Edit: docker.img on ZFS blocked /dev/loop, and Docker in a ZFS directory messes up containers. My solution with only ZFS:
       zfs create -V 30G pool/docker
       mkfs.ext4 /dev/zvol/pool/docker
       echo "mount /dev/zvol/pool/docker /mnt/foobar" >> /boot/config/go
     Working like a charm.
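     The idea behind the zvol: Docker's storage ends up on plain ext4 instead of going through a ZFS-backed loop device or a ZFS directory backend, while ZFS still manages the space underneath. To sanity-check it after a reboot (names as above):
       zfs list -t volume    # the zvol should show up as pool/docker
       df -h /mnt/foobar     # and be mounted here via the go file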
  8. Hi, did you guys make a new update? Because a while back I moved my Docker from the btrfs image to a ZFS directory, and now out of the blue I can't update, remove, or start Docker containers anymore?! I even deleted the directory (zfs destroy -r) and reinstalled all the containers... after 1 day I had the exact same issue again. Does someone have a solution?
  9. Oh ty. I'm running 6.9.2, but I'll switch to directory mode. Cheers. Edit: I just realized I clicked the wrong button when I wanted to reply 😂 sorry
  10. Hi, is it possible that you guys introduced some nasty bugs with that update? My system isn't responding anymore, and every time I force a reboot, loop2 starts to hang at 100% CPU usage; docker.img is on my ZFS pool. This started after I updated ZFS for Unraid to 2.0.0.
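      To see what loop2 is actually backing while it hangs, something like this should do (plain util-linux, nothing Unraid-specific):
        losetup -a | grep loop2   # shows the backing file, e.g. docker.img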
  11. I mean, I can put the drives to sleep myself, but why are they waking up? There are 0 reads/writes to the drives, and on TrueNAS I don't have to export them to make them sleep. So it must be some Unraid thing that's going on? I made a script now that exports the drives and puts them to sleep.
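      The script is roughly this (pool and device names from my box; hdparm -y means "standby now", so take it as a sketch rather than the exact file):
        #!/bin/bash
        zpool export batcave          # stop ZFS from touching the disks
        hdparm -y /dev/sdg /dev/sde   # spin both mirror members down immediately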
  12. Hi, is it possible to integrate dm-integrity for bit-rot detection? It would be an additional layer below XFS, and we would get bit-rot detection. See: https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/dm-integrity.html I know btrfs has checksums, but XFS is still the default, and btrfs is "stable" but still not finished. Cheers!
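      Just to sketch the layering with the stock tooling (integritysetup ships with cryptsetup; /dev/sdX and the "integ" name are placeholders):
        integritysetup format /dev/sdX       # write the per-sector integrity metadata
        integritysetup open /dev/sdX integ   # expose the checksummed device
        mkfs.xfs /dev/mapper/integ           # XFS on top, as today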
  13. Hi, I have a little issue with my Unraid:
      1) Unraid is not putting my zpool drives to sleep even though I used "hdparm -S 120 /dev/sdg /dev/sde".
      2) When I put my zpool to sleep manually by pressing the button, Unraid wakes the pool up after 5 min.
      I can't figure out what the issue is: iotop and lsof don't show me anything, and zpool iostat shows r/w of 1 for the zpool batcave. Does someone know what's going on and can help?
      PS: It works with TrueNAS Scale for some reason, or when I export the zpool.
      PPS: I have to put the drives to sleep because the WD Red Plus are noisy af and hot af!

      root@Deadpool:~# zpool iostat -vv
                                                      capacity     operations     bandwidth
      pool                                          alloc   free   read  write   read  write
      --------------------------------------------  -----  -----  -----  -----  -----  -----
      batcave                                       4.22T  4.87T      1      1  8.73K  22.3K
        mirror                                      4.22T  4.87T      1      1  8.73K  22.3K
          ata-WDC_WD101EFBX-68B0AN0_VCJ3DM4P            -      -      0      0  4.36K  11.2K
          ata-WDC_WD101EFBX-68B0AN0_VCJ3A0MP            -      -      0      0  4.38K  11.2K
      --------------------------------------------  -----  -----  -----  -----  -----  -----
      deadpool                                      5.81T  1.45T      2      2  15.6K  28.3K
        raidz1                                      5.81T  1.45T      2      2  15.6K  28.3K
          ata-WDC_WD20EFRX-68EUZN0_WD-WCC4MJJZXUS6      -      -      0      0  3.99K  7.17K
          ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M2CE11LH      -      -      0      0  3.95K  7.10K
          ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M7SEZK5A      -      -      0      0  3.80K  7.03K
          ata-ST2000VN004-2E4164_Z529JXWQ               -      -      0      0  3.87K  7.00K
      --------------------------------------------  -----  -----  -----  -----  -----  -----

      root@Deadpool:~# lsof /batcave/
      root@Deadpool:~#

      [iotop: Total and Actual DISK READ/WRITE are all 0.00 B/s; every listed process (init, the kernel worker threads, the xfs-* and z_* ZFS kernel threads, shfs /mnt/user0, rsyslogd, usb-storage, scsi_eh_*, and the webGui diskload script) shows 0.00 B/s disk I/O and 0.00% IO.]