Everything posted by trott

  1. You can have multiple caches; a cache pool can use btrfs RAID, and you can also have multiple pools.
  2. Thanks, I just need the schedule to delete/purge the remote sends/snaps.
  3. Is there a way to limit the number of received incremental backups?
  4. The btrfs send in the schedule setup is not working; log below. It might be because I selected incremental snapshot send, but this is the first time the schedule has run and no snapshot was created before:
     Jul 2 10:51:47 Tower snapshots: btrfs subvolume snapshot -r '/mnt/ssd/backup/wei' '/mnt/ssd/.snapshot/wei_20220702105147.daily' OK
     Create a readonly snapshot of '/mnt/ssd/backup/wei' in '/mnt/ssd/.snapshot/wei_20220702105147.daily'
     Jul 2 10:51:47 Tower snapshots: btrfs snapshot send -p / /mnt/ssd/.snapshot/wei_20220702105147.daily To /mnt/disk10/.snapshot/
     Error ERROR: empty stream is not considered valid
     Screenshot of the setting:
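The "empty stream" error above is consistent with an incremental send being attempted when no parent snapshot exists yet. A minimal sketch of the usual pattern (paths follow the log above; the destination path and snapshot names are assumptions for illustration): do one full send first, then pass the previous snapshot with -p on later runs:

```shell
# First run: no parent exists yet, so do a full (non-incremental) send.
btrfs subvolume snapshot -r /mnt/ssd/backup/wei /mnt/ssd/.snapshot/wei_day1
btrfs send /mnt/ssd/.snapshot/wei_day1 | btrfs receive /mnt/disk10/.snapshot/

# Later runs: take a new snapshot and send only the delta against the
# previous one with -p (the parent must exist on both source and target).
btrfs subvolume snapshot -r /mnt/ssd/backup/wei /mnt/ssd/.snapshot/wei_day2
btrfs send -p /mnt/ssd/.snapshot/wei_day1 /mnt/ssd/.snapshot/wei_day2 \
  | btrfs receive /mnt/disk10/.snapshot/
```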
  5. Have you set up a Docker registry mirror? The mirrors inside China all seem to be broken right now.
  6. Thanks for this great plugin. I used to manually create a subvolume on each disk for my share and then do the snapshot via a script. I played with this plugin yesterday; it works well, and below are my findings: 1. When you click the button to create a snapshot, it lets you choose to create a read-only snapshot, but there is no such option in the schedule setup. 2. I had thought the read-only option on the page was the one used for the snapshot, but it is not: when you select it, it actually puts the whole user share into read-only mode. It would be better to have a reminder here, or it will surprise people.
  7. Same here, does anyone know how to fix this?
  8. I have a 3-disk btrfs cache in RAID1 and I'd like to remove one disk from it. Can I simply use btrfs device delete 3 /mnt/cache, or do I need to stop the array, remove the disk from the cache pool, and start the array again?
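For reference, a sketch of the command-line route (the device id 3 and /mnt/cache are taken from the question above; whether Unraid's own pool management prefers the stop-array route is a separate question worth checking first):

```shell
# Show the device ids and current RAID profile of the pool.
btrfs filesystem show /mnt/cache
btrfs filesystem df /mnt/cache

# Remove the device by its devid; btrfs migrates the RAID1 copies
# onto the remaining two disks while the filesystem stays mounted.
btrfs device delete 3 /mnt/cache

# Verify the device is gone and the data is still fully mirrored.
btrfs filesystem show /mnt/cache
```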
  9. Yes, I just noticed the same thing. I'm quite sure it worked before 6.9.2, as I had to manually add pcie_aspm=off to get rid of some PCIe Bus Error messages.
  10. Usually for WireGuard you only need to forward the WireGuard port on the UDM Pro to the Unraid IP; all other ports stay closed to the public.
  11. You need to pass the drives through to TrueNAS; after that they have nothing to do with Unraid anymore.
  12. Yes, it is dual-port. I'm only using one of them so far; I will test the other later.
  13. Guys, when I use a VF on a VM, it cannot talk to the VMs on br0. Do you guys have this issue?
  14. I followed the how-to in the forum and used SR-IOV to create some VFs, then created a VM using a VF, but the VM on the VF cannot talk to the VM on the br0 network:
     1. VM1 is assigned a VF directly.
     2. VM2 is using the Unraid default br0.
     3. VM1 can access the Unraid host and the internet.
     4. VM2 can access the Unraid host and the internet.
     5. Other PCs on the same network can access the Unraid host, VM1, and VM2.
     6. But VM1 and VM2 cannot talk to each other.
     A Google search finds some posts like the one below: the solution is to use the "bridge fdb add" command to add the VF MAC addresses and the eth0 MAC address to the bridge forwarding table. But the issue here is that Unraid doesn't include the "bridge" command; is there any other solution for this?
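For context, a sketch of the workaround those posts describe (the MAC addresses below are placeholders, and "bridge" is part of iproute2, so it may need to be installed or run from a container if the host lacks it): VF traffic bypasses the software bridge, so br0 never learns the VF MACs, and static fdb entries tell it where to forward those frames.

```shell
# Add static forwarding entries on the physical uplink (eth0) for the
# VF MAC addresses, so frames from br0 guests reach the VF-backed VM.
# 02:11:22:33:44:55 is a placeholder; substitute the real VF MAC.
bridge fdb add 02:11:22:33:44:55 dev eth0

# List the current forwarding entries to confirm:
bridge fdb show dev eth0
```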
  15. This is the reason I asked in the thread whether we can attach a VF to a container; it would help solve this issue and meet the requirement.
  16. If it's mainly for Moments, you shouldn't have chosen Unraid in the first place.
  17. Never mind, I just found out that network-rules.cfg had changed; changing it back fixed the issue.
  18. I just updated to 6.9.1 today, but after the reboot I cannot access Unraid; it has no network at all. In syslog I find the lines below. How can I make Unraid not change the interface name to eth15?
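For anyone hitting the same renaming: on Unraid the interface names are pinned by the udev rules in /boot/config/network-rules.cfg (the file mentioned in the follow-up post). A sketch of what an entry pinning a NIC to eth0 by MAC looks like (the MAC address is a placeholder):

```
# /boot/config/network-rules.cfg -- udev rule; MAC is a placeholder.
# Pin the NIC with this MAC address to the name eth0 so it is not
# renamed to something like eth15 after an upgrade.
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:ff", ATTR{type}=="1", NAME="eth0"
```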
  19. Thanks for the how-to. Is it possible to assign a VF to a Docker container?
  20. Check your BIOS setup; I believe the fan control setting in the BIOS should be Smart Fan. I'm using the same board and the plugin works without issue.
  21. I reformatted the cache, and now the issue is gone.
  22. Not sure what the issue is here, but after updating to beta25 I noticed that fstrim -v /mnt/cache takes forever to complete; iotop shows it at 100% with 0 bytes read/written.
  23. To be honest, I read some documentation; the manual enablement includes loading modules and enabling the VFs. Even if I can do it successfully, I don't know how to make it survive a reboot.
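For what it's worth, a sketch of what the manual steps described above usually look like, plus one common way to make them survive a reboot on Unraid by appending them to the go file (the driver name ixgbe, interface eth0, and VF count 4 are all assumptions; adjust for the actual NIC):

```shell
# Manual SR-IOV enablement (example values; adjust for your NIC):
modprobe ixgbe                                    # load the NIC driver
echo 4 > /sys/class/net/eth0/device/sriov_numvfs  # create 4 VFs

# Unraid runs /boot/config/go at every boot, so appending the same
# command there is one way to recreate the VFs after a reboot:
cat >> /boot/config/go <<'EOF'
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
EOF
```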
  24. I saw some old discussions saying Unraid does not support it; not sure if anything has changed since.