Everything posted by trott

  1. Isn't this a macvlan issue in Docker itself? That needs a fix from Docker or the kernel, and it may also involve the NIC driver; Unraid itself isn't going to fix it. Also, think about whether you really need macvlan just to give a container its own IP — in my view most setups don't. If you really do, just run it in an LXC container or a VM instead. Personally only my Plex container uses host networking, everything else uses bridge, and my Unraid has never crashed.
  2. I only have this one — so is it working on ZFS pools now?
  3. I'm new to ZFS. With the new RC starting to support ZFS I was excited to test it, as I happened to have some 16T disks on hand. My Unraid server is mainly a Plex media server and a torrent download box: downloads go to the SSD cache first, then get moved to this pool for seeding, with hard links for Plex. I created the pool (5 x 16T raidz) with a 1M recordsize. During two days of testing I found that while qBittorrent was seeding at 20 MB/s, zpool iostat showed actual reads of around 80-150 MB/s — a 4-8x read amplification. From what I've read, this amplification can't really be avoided, and I don't want it: it wastes I/O and generates extra heat. So ZFS doesn't suit my use case, and I'm starting this topic to discuss the pros and cons of ZFS for different workloads, which might help new ZFS users like me choose between the array, ZFS, and btrfs.
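The 4-8x figure is consistent with a simple back-of-the-envelope model (my assumption, not something measured in the post): if every random read smaller than the recordsize still pulls a whole record off disk, amplification is roughly recordsize divided by the request size. The 256K/128K chunk sizes below are illustrative guesses for torrent reads:

```shell
# Rough model (an assumption, not a measurement): each random read
# smaller than the recordsize still fetches a full record.
recordsize=$((1024 * 1024))    # pool recordsize: 1M
read_size=$((256 * 1024))      # assumed torrent read chunk: 256K
echo "$((recordsize / read_size))x"   # -> 4x amplification
read_size=$((128 * 1024))      # with smaller 128K reads
echo "$((recordsize / read_size))x"   # -> 8x amplification
```

Under this model, a smaller recordsize (or larger sequential reads) would shrink the ratio, at the cost of other trade-offs.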
  4. Yes, you can always use the command line.
  5. Where can I find the setting for "scheduled trimming of ZFS pools"?
  6. You can boot from an Ubuntu live USB and see if everything is OK. If not, start looking at the overall network settings and possible hardware issues.
  7. I added the following to my Samba extra config; it helps a lot:
     ea support = no
     store dos attributes = no
  8. Try deleting network.cfg and setting up the network again. But why did you enable bonding if you only have one NIC?
  9. Is this new plugin able to control the fan speed?
  10. Spin up an LXC container, install clash in TUN mode along with sniproxy, then point the DNS entries for the domains you want proxied at that container's IP.
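The DNS step above can be sketched with a dnsmasq-style override (my assumptions: dnsmasq serves LAN DNS, 192.168.1.50 is a placeholder for the LXC container's IP, and netflix.com is just an example domain — none of these come from the post):

```shell
# Sketch: resolve selected domains to the proxy container's IP.
# 192.168.1.50 = hypothetical IP of the LXC container running
# clash (TUN mode) + sniproxy; netflix.com = example domain.
conf=$(mktemp)
cat > "$conf" <<'EOF'
address=/netflix.com/192.168.1.50
EOF
# In a real setup this entry would go in /etc/dnsmasq.d/ and dnsmasq
# would be restarted; here we just print the resulting line.
cat "$conf"
```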
  11. I'd like to move my LXC container to another disk; I just want to check what the proper way to do that is.
  12. Consider decoding on the client side instead — CPU software decoding like this isn't really worth it. Transcoding involves many steps, and some of them always need the CPU, such as burning in subtitles.
  13. It really depends on your use case. For most home servers I don't think it's necessary — don't use them simply because you have the hardware.
  14. I don't think so; Arc support needs Linux kernel 6.0 or later.
  15. You can have multiple cache pools, and a cache pool can use btrfs RAID.
  16. Thanks, I just need to delete/purge the remote sends/snaps on a schedule.
  17. Is there a way to limit the number of those received incremental backups?
  18. The scheduled btrfs send is not working; log below. It might be because I selected incremental snapshot send, but this is the first time the schedule has run and no snapshot had been created before:
      Jul 2 10:51:47 Tower snapshots: btrfs subvolume snapshot -r '/mnt/ssd/backup/wei' '/mnt/ssd/.snapshot/wei_20220702105147.daily' OK
      Create a readonly snapshot of '/mnt/ssd/backup/wei' in '/mnt/ssd/.snapshot/wei_20220702105147.daily'
      Jul 2 10:51:47 Tower snapshots: btrfs snapshot send -p / /mnt/ssd/.snapshot/wei_20220702105147.daily To /mnt/disk10/.snapshot/ Error
      ERROR: empty stream is not considered valid
      screenshot of the setting:
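A first scheduled run has no parent snapshot, so `-p` points at nothing and btrfs receive rejects the empty stream. A sketch of the bootstrap logic the schedule appears to be missing — `pick_send_cmd` is a hypothetical helper of mine, the paths are placeholders, and it only prints the command it would run:

```shell
# Choose a full send on the first run, and an incremental send (-p)
# once a parent snapshot exists. Hypothetical helper: it only echoes
# the btrfs command instead of executing it.
pick_send_cmd() {
  local parent="$1" snap="$2"
  if [ -e "$parent" ]; then
    echo "btrfs send -p $parent $snap"
  else
    echo "btrfs send $snap"
  fi
}
# First run: the parent snapshot does not exist yet -> full send.
pick_send_cmd /mnt/ssd/.snapshot/none.daily /mnt/ssd/.snapshot/wei.daily
```

On later runs, when the previous snapshot exists on both sides, the `-p` incremental form would be emitted instead.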
  19. Have you configured a Docker registry mirror? The mirrors in mainland China all seem to be down at the moment.
  20. Thanks for this great plugin. I used to manually create a subvolume on each disk for my share and then snapshot it via a script. I played with this plugin yesterday; it works well, and below are my findings:
      1. When you click the button to create a snapshot, it lets you choose to create a read-only snapshot, but there is no such option in the schedule setup.
      2. I had thought the read-only checkbox on the page was the option used for the snapshot, but it isn't: selecting it actually puts the whole user share into read-only mode. It would be better to have a reminder there, or it will surprise people.
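For finding 1, the schedule could mirror what the button does by passing `-r` to btrfs. A sketch — the paths are placeholders and the helper only prints the command rather than running it:

```shell
# Build the read-only snapshot command the schedule would need.
# -r makes the snapshot read-only; paths below are hypothetical.
snap_cmd() {
  echo "btrfs subvolume snapshot -r $1 $2"
}
snap_cmd /mnt/disk1/share /mnt/disk1/.snapshot/share_daily
```

Read-only snapshots also matter for finding-18-style incremental sends, since btrfs send requires its source snapshots to be read-only.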