SeanJW

Everything posted by SeanJW

  1. Same issue on 6.11.5 with all static IPs (Unraid and container). Stopping and restarting Docker can fix it, but it happened again after an Unraid reboot.
  2. But the cache pool can only use btrfs, whose performance is so poor.
  3. Yes, what you said is exactly why I need multiple arrays. One array WITHOUT parity can replace the poorly performing pool, and one array with parity can be used for important data. I can have a safe array and a fast array at the same time, and don't need to choose between safety and performance.
  4. The cache pool's I/O performance is so poor; I think btrfs causes that.
  5. You could check the license before starting those VMs. It is so crazy to tie every function to starting the array. When I want to modify some small setting, like the server name, I have to stop all my Docker containers and all my VMs (some of them are my router and DNS server, so the remote connection gets lost), and only then can I stop the array to do what I want. But those changes obviously don't need the array.
  6. Multi arrays +1. I want to use some disks for write-heavy tasks without parity, because writes are so slow with parity, and keep the other disks protected by parity for my important data. I have tried creating a cache pool for the write-heavy tasks, but its I/O performance is so poor. In the end I abandoned the parity disk, put all disks in one array, and created shares only on certain disks for the write-heavy tasks. I worry about my data on the other disks, so I have to back them up weekly...
  7. Sadly, I had to uninstall it; I hope it will be fixed soon.
  8. I found a file named super.old on the flash drive; can I move the old file in to replace the corrupted file? What is configured in the super.cfg? Or can I just delete it?
  9. Changed Status to Closed
  10. It has worked well for the last 3 weeks.
  11. Thanks for your suggestion. I uninstalled the cpufreq plugin and added an X550T2 card in place of the motherboard's Ethernet. Up to now it works well; I will keep monitoring it for one more week. Thanks to JorgeB again.
  12. OK, I will try it and respond next week. Thank you.
  13. After moving the Docker and VM files again, the other hard disks can sleep, but one disk still cannot. This disk can sleep when connected to a SATA port, and the other disks can also sleep when connected to the HBA card, but this particular disk simply cannot sleep on the HBA card.
  14. The system crashes every few days (about 1-7 days). It cannot be pinged when it crashes; I can only press the button to reset it. This problem has appeared several times this month. I mirrored the syslog to the flash drive and got the following logs. The file <syslog> is the crash log cut between 05:36:32 and 08:13:20 on December 22, when the crash happened (the sketch after this list shows how such a time window can be cut from the mirrored log). The file <syslog-all> contains all logs between enabling the syslog mirror and resetting the system. The file <unraid-diagnostics-20211222-1004.zip> is a diagnostic.
  15. Not yet. I tried deleting this disk from the array, formatting it, and adding it back to the array. After that operation I am sure this disk is empty. It can sleep a few hours longer, but it wakes up again for an unknown reason. Then I moved all VM/Docker files to the pool; all array disks connected to the same HBA card can now sleep except this one. So I think this disk may have some problem and cannot stay asleep for long...
  16. Which plugin do you use to take the snapshots? It's much more comfortable than taking snapshots on PROXMOX...
  17. I have changed my mind: Multiple Arrays +1. And... how long until we can get it?
  18. I have the same problem, but mine behaves differently between the HBA and the motherboard SATA port.
  19. Same problem: it wakes up with a SMART read. Is that disk connected to the motherboard or to an HBA card? Mine is connected to an HBA card.
      Sep 18 15:53:41 unraid emhttpd: read SMART /dev/sde
      Sep 18 16:02:58 unraid emhttpd: spinning down /dev/sde
      Sep 18 16:09:05 unraid emhttpd: read SMART /dev/sde
      Sep 18 17:09:02 unraid emhttpd: spinning down /dev/sde
      Sep 18 17:10:26 unraid emhttpd: read SMART /dev/sde
  20. In the old system (8100, Z370M), all hard disks were connected to motherboard SATA ports and spin-down worked well (all disks are SATA). In the new system (10900, Z490M), some hard disks are connected to an HBA card (ASR-78165, working in HBA/direct mode) and the others to the motherboard. The disks connected to the motherboard spin down as well as in the old system, but the disk connected to the HBA (1 disk with no reads or writes) spins down and is then quickly spun up again by a read SMART operation (see the spin-down gap sketch after this list). The [Spin down delay] of all disks is set to 1 hour.
      Sep 18 15:53:41 unraid emhttpd: read SMART /dev/sde
      Sep 18 16:02:58 unraid emhttpd: spinning down /dev/sde
      Sep 18 16:09:05 unraid emhttpd: read SMART /dev/sde
      Sep 18 17:09:02 unraid emhttpd: spinning down /dev/sde
      Sep 18 17:10:26 unraid emhttpd: read SMART /dev/sde
      Attachment: unraid-syslog-20210918-0952.zip
  21. I have the same problem on 6.9.2, and I can confirm it is 100% a compatibility issue between 6.9.2 and the HBA card. On the old hardware, with the disks connected directly to the motherboard SATA ports, they could sleep normally. On the new hardware, disks connected to the motherboard SATA ports sleep normally, but disks connected to the HBA card (ASR78165) cannot sleep. The log shows that SMART is read immediately after a disk spins down, which keeps the disk from sleeping. All disks are SATA disks.
  22. ZFS +1. It is well known that the read/write speed of the Unraid array is slow, and an SSD pool can help with that. However, the VM or container files in the pools cannot be protected by the array, and cannot be backed up or snapshotted while the VM or container is running. So I hope ZFS can provide online backup of hot data without stopping the virtual machine or container: we would get the read/write speed of the SSD pool and the safety of the array at the same time. BTW: btrfs seems to be abandoned, while ZFS keeps moving forward...
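
As a concrete illustration of the log slicing described in post 14, here is a minimal Python sketch that cuts a time window out of a mirrored syslog. It is a hypothetical helper, not an Unraid tool: the file names syslog-all/syslog, the year, and the window boundaries are taken from that post and are otherwise assumptions, as is the standard "Mon DD HH:MM:SS" syslog timestamp prefix.

```python
# Minimal sketch for post 14: cut the 05:36:32-08:13:20 crash window out of
# a mirrored syslog. Hypothetical helper, not an Unraid tool; file names,
# year, and window boundaries are taken from the post and otherwise assumed.
from datetime import datetime

def in_window(line, start, end, year=2021):
    """Keep lines whose leading 'Mon DD HH:MM:SS' stamp falls in [start, end]."""
    try:
        stamp = datetime.strptime(line[:15], "%b %d %H:%M:%S").replace(year=year)
    except ValueError:
        return False  # malformed or continuation line: drop it
    return start <= stamp <= end

start = datetime(2021, 12, 22, 5, 36, 32)
end = datetime(2021, 12, 22, 8, 13, 20)

# Read the full mirror ("syslog-all") and write only the crash window ("syslog").
with open("syslog-all", errors="replace") as src, open("syslog", "w") as dst:
    for line in src:
        if in_window(line, start, end):
            dst.write(line)
```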
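
Likewise, the spin-down symptom in posts 19-21 can be checked mechanically. The following sketch (again an assumed helper, nothing shipped with Unraid) scans a syslog for the two emhttpd messages quoted in those posts and prints how soon after each spin-down the same device was woken by a SMART read; it relies only on the message format visible in the quoted log lines.

```python
# Minimal sketch for posts 19-21: scan a syslog for the two emhttpd messages
# quoted above and report how soon after each "spinning down" the same device
# was woken by a "read SMART". Hypothetical helper, not part of Unraid.
import re
from datetime import datetime

EVENT = re.compile(
    r"^(?P<stamp>\w{3} [ \d]\d \d\d:\d\d:\d\d) \S+ emhttpd: "
    r"(?P<what>spinning down|read SMART) (?P<dev>/dev/\w+)"
)

last_down = {}  # device -> time of its most recent spin-down

with open("syslog") as log:
    for line in log:
        m = EVENT.match(line)
        if not m:
            continue
        # Year is ignored by this format, which is fine for same-day gaps.
        stamp = datetime.strptime(m["stamp"], "%b %d %H:%M:%S")
        if m["what"] == "spinning down":
            last_down[m["dev"]] = stamp
        elif m["dev"] in last_down:  # first SMART read after a spin-down
            gap = stamp - last_down.pop(m["dev"])
            print(f'{m["dev"]} woken by SMART read {gap} after spinning down')
```

On the log quoted in post 20, this would report /dev/sde being woken 6 minutes 7 seconds after the 16:02:58 spin-down and 1 minute 24 seconds after the 17:09:02 one.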