trott

Members · 140 posts
Everything posted by trott

  1. Same here, does anyone know how to fix this?
  2. I have a 3-disk btrfs cache pool using raid1, and I'd like to remove one disk from it. Can I simply use btrfs device delete 3 /mnt/cache, or do I need to stop the array, remove the disk from the cache pool, and start the array again?
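     A rough sketch of how that removal could look, assuming devid 3 really is the disk to drop and the remaining two disks have enough space for the raid1 copies:
        btrfs filesystem show /mnt/cache     # confirm the devid of the disk to remove
        btrfs device delete 3 /mnt/cache     # delete by devid; btrfs relocates the raid1 data onto the remaining disks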
  3. Yes, I just noticed the same thing. I'm quite sure it worked before 6.9.2, as I had to manually add pcie_aspm=off to get rid of some PCIe Bus Error messages.
  4. Usually for WireGuard you only need to forward the WireGuard port on the UDM Pro to the Unraid IP; all other ports stay closed to the public.
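     A quick way to confirm which UDP port to forward, assuming the stock Unraid tunnel name wg0:
        wg show wg0 listen-port     # prints the UDP port the tunnel listens on (51820 by default)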
  5. You need to pass the disks through to TrueNAS; at that point it has nothing to do with Unraid anymore.
  6. Yes, it is dual-port. I'm only using one of the ports so far; I will test the other later.
  7. Guys, when I use a VF on a VM, it cannot talk to the VMs on br0. Do you guys have this issue?
  8. I followed the how-to in the forum and used SR-IOV to create some VFs, then created a VM using one of the VFs, but the VM on the VF cannot talk to the VMs on the br0 network:
     1. VM1 is assigned a VF directly, on network 192.168.2.0/24
     2. VM2 uses the Unraid default br0, also on 192.168.2.0/24
     3. VM1 can access the Unraid host and the internet
     4. VM2 can access the Unraid host and the internet
     5. Other PCs on the same network can access the Unraid host, VM1, and VM2
     6. But VM1 and VM2 cannot talk to each other
     A Google search turns up posts like this one: https://community.intel.com/t5/Ethernet-Products/SR-IOV-on-82576-on-VM-on-VF-can-not-talk-to-VM-on-Bridged/td-p/196387 The solution there is to use the "bridge fdb add" command to add the VF MAC addresses and the eth0 MAC address to the bridge forwarding table, but the issue here is that Unraid doesn't include the "bridge" command. Is there any other solution for this issue?
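     For reference, a sketch of what the fdb workaround from that Intel thread would look like if the bridge utility (part of iproute2) were available, e.g. installed via a plugin; the interface name and MAC addresses here are placeholders:
        # tell br0 that the VF MACs are reachable out eth0 (the SR-IOV PF),
        # so frames for the VF VMs get handed to the NIC's internal switch
        bridge fdb add 52:54:00:aa:bb:01 dev eth0
        bridge fdb add 52:54:00:aa:bb:02 dev eth0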
  9. This is the reason I asked in the thread whether we can attach a VF to a container; it would help solve this issue and meet the requirement.
  10. If it's mainly for Moments, you shouldn't have picked Unraid in the first place.
  11. Never mind, I just found out that network-rules.cfg was changed; changing it back fixed the issue.
  12. I just updated to 6.9.1 today, but after the reboot I cannot access Unraid; it has no network at all. In syslog I found the lines below. How can I stop Unraid from changing the interface name to eth15?
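     For anyone else hitting this: the network-rules.cfg mentioned above lives on the flash drive and is a standard udev rules file that pins interface names to MAC addresses; an entry looks roughly like this (the MAC here is a placeholder):
        # /boot/config/network-rules.cfg
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:ff", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"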
  13. Thanks for the how-to. Is it possible to assign a VF to a Docker container?
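     One generic approach (not Unraid-specific; the container and VF interface names here are placeholders) is to move the VF's network interface into the container's network namespace by PID:
        pid=$(docker inspect -f '{{.State.Pid}}' mycontainer)   # PID of the running container
        ip link set eth0v0 netns "$pid"                         # hand the VF interface to the container's netns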
  14. Check your BIOS setup; I believe the fan control setting in the BIOS should be smartfan. I'm using the same board and the plugin works without issue.
  15. I reformatted the cache, and now the issue is gone.
  16. Not sure what the issue is here, but after updating to beta25 I noticed that fstrim -v /mnt/cache takes forever to complete; iotop shows it at 100% with 0 bytes read/written.
  17. To be honest, I read some documentation; the manual enablement involves loading modules and enabling the VFs, and even if I can do it successfully, I don't know how to make it survive a reboot.
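     A minimal sketch of the manual route, assuming a NIC whose driver exposes sriov_numvfs; on Unraid the usual way to make a runtime tweak survive a reboot is to re-run it from /boot/config/go:
        # create 4 VFs on eth0 (runtime only, lost at reboot)
        echo 4 > /sys/class/net/eth0/device/sriov_numvfs
        # /boot/config/go runs at every boot, so appending the same line there persists it
        echo 'echo 4 > /sys/class/net/eth0/device/sriov_numvfs' >> /boot/config/go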
  18. I see some old discussions saying Unraid does not support it; not sure if anything has changed now.
  19. The problem here is that when an error is detected, how can we know whether only one disk is corrupt or two disks are?
  20. Not directly related to this topic, but suppose I use btrfs for the disks in the array: when an error shows up during a parity check (with no automatic fix), can I run btrfs scrub on each disk to confirm whether it is a disk issue or a parity issue?
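     A sketch of how that per-disk check could look, assuming the array disks are mounted at /mnt/disk1, /mnt/disk2, and so on; -B waits in the foreground, -d prints per-device stats, and -r scrubs read-only so nothing is modified:
        btrfs scrub start -Bdr /mnt/disk1    # checksum-verify disk1 without writing any fixes
        btrfs scrub start -Bdr /mnt/disk2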
  21. I have used the 9211-8i and 9217-8i before; they cost less than $25 in China and work well with Unraid. The only issue is that they run very hot; regular airflow in a normal case is not enough for them, and I had to install a small fan on a PCI mount kit to blow directly on them. Now I'm using a Lenovo R430-8i (LSI 3408); it costs around $80 but runs much cooler, and no additional fan is needed. Personally, I'd recommend it.
  22. Yeah, this is also my concern. I just ordered another 14T HDD and plan to convert the pool to raid10.
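     If it helps anyone, the conversion itself is a balance with convert filters; a sketch assuming the pool is mounted at /mnt/cache and all four disks are already in it:
        btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache
        btrfs balance status /mnt/cache      # check progress; the conversion can take a while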
  23. Since 6.9 has multi-pool support, I set up a btrfs raid5 pool with three 14T HDDs. The issue I'm seeing is the slow scrub speed, only around 40MB/s; it might take 3 days for a single scrub once I have 10T of data on it. Is that normal?
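     Worth checking the rate and ETA while it runs; also, a workaround sometimes suggested for slow raid5/6 scrubs is scrubbing one member device at a time (mount point and device name here are placeholders):
        btrfs scrub status /mnt/poolname     # shows current rate and estimated time left
        btrfs scrub start -B /dev/sdX        # scrub a single member device at a time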
  24. Is this a plugin? What's the name of it?