Everything posted by trott

  1. Usually for WireGuard you only need to forward the WireGuard UDP port on the UDM Pro to the unraid IP; all other ports stay closed to the public.
  2. You need to pass the drives through to TRUENAS directly; they no longer have anything to do with unraid.
  3. Yes, it is dual port. I'm only using one of them so far; I will test the other later.
  4. Guys, when I use a VF on a VM, it cannot talk to the VMs on br0. Do you have this issue too?
  5. I followed the how-to in the forum, used SR-IOV to create some VFs, and created a VM using one of the VFs, but the VM on the VF cannot talk to the VM on the br0 network:
     1. VM1 is assigned a direct VF.
     2. VM2 is using the unraid default br0.
     3. VM1 can access the unraid host and the internet.
     4. VM2 can access the unraid host and the internet.
     5. Other PCs on the same network can access the unraid host, VM1, and VM2.
     6. But VM1 and VM2 cannot talk to each other.
     A Google search finds some posts like the one below: the solution is to use the "bridge fdb add" command to add the VF MAC addresses and the eth0 MAC address to the bridge forwarding database. But the issue here is that unraid does not include the "bridge" command. Is there any other solution for this issue?
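For reference, the fdb entries those posts describe would look like the sketch below. It only prints the commands; the bridge port name and the MACs are placeholders, and actually applying them assumes the iproute2 `bridge` tool is available (e.g. installed via a community plugin — an assumption).

```shell
#!/bin/sh
# Sketch only: print the "bridge fdb add" commands that would register each
# SR-IOV VF MAC on the bridge port, so bridged VMs and VF-attached VMs can
# reach each other. BRIDGE_PORT and the MACs are placeholders.
BRIDGE_PORT=eth0
MACS="02:11:22:33:44:01 02:11:22:33:44:02"

print_fdb_cmds() {
    for mac in $MACS; do
        echo "bridge fdb add $mac dev $BRIDGE_PORT master static"
    done
}

print_fdb_cmds   # pipe into "sh" to apply, once "bridge" is available
```

Note that fdb entries added this way are not persistent, so they would need to be re-added after every reboot.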
  6. This is the reason I asked in that thread whether we can attach a VF to a container; it would help solve this issue and meet the requirement.
  7. If it's mainly for Moments, you shouldn't have picked unraid in the first place.
  8. Never mind, I just found out that network-rules.cfg had been changed; changing it back fixed the issue.
  9. I just updated to 6.9.1 today, but after the reboot I cannot access unraid; it has no network at all. In syslog I see the messages below. How can I keep unraid from changing the interface name to eth15?
  10. Thanks for the how-to. Is it possible to assign a VF to a Docker container?
  11. Check your BIOS setup. I believe the fan control setting in the BIOS should be SmartFan; I'm using the same board and the plugin works without issue.
  12. I reformatted the cache; now the issue is gone.
  13. Not sure what the issue is here, but after updating to beta25 I noticed that fstrim -v /mnt/cache takes forever to complete; iotop shows it at 100% with 0 bytes read/written.
  14. To be honest, I read some documentation; manual enablement includes loading the modules and enabling the VFs. Even if I can do that successfully, I don't know how to make it survive a reboot.
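Since unraid runs from RAM, one common way to make manual steps survive a reboot is to append them to /boot/config/go, which runs at every boot. A hedged sketch, assuming a NIC driver that exposes sriov_numvfs; the driver name, interface name, VF count, and MAC are all placeholders to adjust:

```shell
# Appended to /boot/config/go (assumption: the NIC driver supports sriov_numvfs)
modprobe ixgbe                                    # load the NIC driver if needed (example driver)
echo 4 > /sys/class/net/eth0/device/sriov_numvfs  # create 4 VFs on eth0
ip link set eth0 vf 0 mac 02:11:22:33:44:01       # pin a stable MAC per VF
```

Pinning the VF MACs matters because otherwise they can change on every boot, which breaks any VM or bridge configuration keyed to them.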
  15. I see some old discussions saying unraid does not support it; not sure if anything has changed now.
  16. The problem here is: when an error is detected, how can we know whether there is only one corrupt disk or two?
  17. Not directly related to this topic, but assume I use btrfs for the disks in the array: when an error shows up during a parity check (with no automatic fix), can I run btrfs scrub on each disk to confirm whether it is a disk issue or a parity issue?
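A read-only scrub is the safe way to do that check: it verifies data checksums without writing anything, so it can point at a corrupt disk without touching parity. A sketch that just prints the commands, assuming the usual unraid per-disk mount points (adjust the list to your array):

```shell
#!/bin/sh
# Sketch: print a read-only btrfs scrub command for each array disk.
# -B waits for completion, -r makes the scrub read-only (no repairs).
DISKS="/mnt/disk1 /mnt/disk2 /mnt/disk3"

print_scrub_cmds() {
    for d in $DISKS; do
        echo "btrfs scrub start -B -r $d"
    done
}

print_scrub_cmds   # pipe into "sh" to actually run the scrubs
```

A disk whose scrub reports checksum errors has corrupt data; if every scrub comes back clean, the parity itself is the likelier culprit.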
  18. I have used the 9211-8i and 9217-8i before; they cost less than $25 in China and work well with unraid. The only issue is that they run very hot; the regular airflow in a normal case is not enough for them, so I had to install a small fan on a PCI mount kit to blow directly on them. Now I'm using a Lenovo R430-8i (LSI 3408); it costs around $80 but runs much cooler, and no additional fan is needed. Personally, I recommend it.
  19. Yeah, this is also my concern. I just ordered another 14T HDD and plan to convert the pool to RAID 10.
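Once the fourth disk has been added to the pool, the conversion itself is a single balance. A sketch that prints the commands; the mount point is a placeholder (unraid mounts pools under /mnt/&lt;poolname&gt;), and the balance will take a long time on a pool this size:

```shell
#!/bin/sh
# Sketch: print the commands for an in-place raid5 -> raid10 conversion.
# POOL is a placeholder for the pool's mount point. The balance rewrites
# every chunk with the new profile; the pool stays mounted while it runs.
POOL=/mnt/pool

print_convert_cmds() {
    echo "btrfs balance start -dconvert=raid10 -mconvert=raid10 $POOL"
    echo "btrfs balance status $POOL"   # run this from another shell to watch progress
}

print_convert_cmds   # pipe into "sh" to actually start the conversion
```

The new disk has to be in the pool first (btrfs device add), since raid10 needs at least four devices.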
  20. Since 6.9 has multi-pool support, I set up a btrfs RAID 5 pool with three 14T HDDs. The issue I'm finding is the slow scrub speed, only around 40 MB/s; it might take 3 days for a single scrub once I have 10T of data on it. Is that normal?
  21. Is this a plugin? What's the name of it?
  22. As the Mover Tuning plugin has been removed, is there any way to disable mover scheduling? I don't want to run the Mover based only on a schedule.
  23. Hi, when will the plugin be updated to use the data from emhttp?
  24. I'm mixing SATA SSDs, SATA HDDs, and SAS HDDs on the same LSI 3408 with an expander without any issue.