rampage (Members · 45 posts)
Everything posted by rampage

  1. I have it disabled. When the VPN/WireGuard is enabled, I think the network traffic is managed by the VPN application, so it makes sense that Deluge's random ports go through the VPN rather than port 58946 on the Docker host. But when you are not using a VPN, and the host-to-container port mapping is 58946 to 58946, why don't we put 58946 in Deluge's incoming/outgoing network settings instead of leaving it on a random port? Will Docker map it again, i.e. Deluge's random port to container port 58946 to host port 58946? (See the port-mapping sketch after this list.)
  2. Does changing the incoming/outgoing port settings from random ports to a specific port make a difference? I thought the Docker container uses 58946 for the BitTorrent TCP/UDP connections.
  3. If I choose a non-binhex downloader docker, for example the Deluge docker from linuxserver, Radarr will report an error about the path even though the path and folder actually exist inside the container. (See the volume-mapping sketch after this list.)
  4. How do I manually set up port forwarding for rTorrent when it is used with WireGuard? rTorrent doesn't come with UPnP. (See the rTorrent sketch after this list.)
  5. Hi, thanks for the great work. Does your Docker image have a version that comes with the Deluge 2.0.3 release (or the Ubuntu packaged version)? Some private trackers ban dev versions.
  6. It was working fine after the first installation, but after the Docker container was restarted the torrents became stalled, and no matter what I do they won't resume. Reinstalling the container still leaves the stalled issue.
  7. The change to the 'port used for incoming connections' setting will not save; after restarting the container it resets back to the default.
  8. Is there a tool in the Docker container, or one I can install in Unraid, to create a torrent from existing files?
  9. Hi, thanks for the great work. Is there a Docker image with the stable Deluge 2.0.3 release? Some private trackers ban dev versions.
  10. Is Sonarr looking for finished downloads in the 'completed' or the 'complete' folder? The old Deluge seems to default to the 'completed' folder, but the new Deluge and Transmission seem to default to 'complete'.
  11. Thanks for taking the time to help me. Attached are the diagnostics; I also clicked 'upload hardware profile'. It is possible there's some hardware problem, as it's an old computer, but I can't tell what the issue could be. tower-diagnostics-20201213-2110.zip
  12. When I tested this setup at the beginning I used only two 8TB HDDs as the pool, with no parity drive, and there were unclean shutdowns at that time. Later, when I was happy with everything, I added the parity drive and one more 8TB drive; since then there have been no unclean shutdowns. I have written about 15TB into the 24TB pool, mostly from a USB 3.0 unassigned drive via rsync or cp through the shell.
  13. Hi, I have four 8TB HDDs, one of them being the parity drive. The last parity check was a week ago, and there has been a lot of writing recently. Yesterday I upgraded the system from 6.8.3 to 6.9.0-rc1. I've set the parity check to run once a week. Last night's parity check reported 332 errors, which seems like too many; is that normal? The drives report no errors and are mostly less than a month old. The RAM was tested with memtest for 18 hours without error before the setup. What's the recommended frequency for running a parity check?
      Dec 13 07:47:23 Tower kernel: md: recovery thread: P corrected, sector=8059212888
      (identical "P corrected" entries repeat for every 8th sector up to sector=8059212952, and again from sector=8059215776 up to sector=8059216496)
      Dec 13 07:47:23 Tower kernel: md: recovery thread: stopped logging
  14. Thanks for the quick reply. I might use either GoodSync or Resilio Sync to do a local sync, or maybe Syncthing.
  15. Hi, does Unraid support folder/dataset compression, so that files can be stored with a fast compression method and still be read at any time, decompressing on the fly? It might be worthwhile when storing millions of small files. Does Unraid support deduplication with the btrfs file system? Does the system or any app do folder replication? I'd like to have a folder replicated onto a different hard disk or array, to be on the safer side in case one of the drives dies and parity cannot repair it. (See the btrfs sketch after this list.)
  16. Hi there, I tried Windows Server 2008 R2 x64 on KVM with the 2K8 R2 template; the installation always goes to a black screen after file loading. Using the Windows 7 template it installs fine. With that installation, however, trying to install the balloon driver always crashes; after rebooting, the device shows as not enabled or with no driver loaded. Same problem with the virtual serial device. The same drivers work fine with KVM on an Ubuntu machine, so maybe the Windows 7 template is doing something here? Tried virtio-win-0.1.190.iso, 189, 185 and virtio-win-drivers-20120712-1.iso, same result.
      <?xml version='1.0' encoding='UTF-8'?>
      <domain type='kvm' id='13'>
        <name>Windows 2008 R2</name>
        <uuid>8d2f8869-9e88-0e56-dff2-99fc52616ef1</uuid>
        <metadata> <vmtemplate xmlns="unraid" name="Windows 7" icon="windows7.png" os="windows7"/> </metadata>
        <memory unit='KiB'>3670016</memory>
        <currentMemory unit='KiB'>3670016</currentMemory>
        <memoryBacking> <nosharepages/> </memoryBacking>
        <vcpu placement='static'>2</vcpu>
        <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='2'/> </cputune>
        <resource> <partition>/machine</partition> </resource>
        <os> <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type> </os>
        <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> </features>
        <cpu mode='host-passthrough' check='none'> <topology sockets='1' cores='1' threads='2'/> </cpu>
        <clock offset='localtime'> <timer name='hypervclock' present='yes'/> <timer name='hpet' present='no'/> </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='disk'> <driver name='qemu' type='qcow2' cache='writeback'/> <source file='/mnt/cache/domains/Windows Server 2008 R2/vdisk1.img' index='3'/> <backingStore/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <alias name='virtio-disk2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </disk>
          <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/2008R2-7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso' index='2'/> <backingStore/> <target dev='hda' bus='ide'/> <readonly/> <boot order='2'/> <alias name='ide0-0-0'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk>
          <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.190.iso' index='1'/> <backingStore/> <target dev='hdb' bus='ide'/> <readonly/> <alias name='ide0-0-1'/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk>
          <controller type='pci' index='0' model='pci-root'> <alias name='pci.0'/> </controller>
          <controller type='ide' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller>
          <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </controller>
          <controller type='usb' index='0' model='ich9-ehci1'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller>
          <controller type='usb' index='0' model='ich9-uhci1'> <alias name='usb'/> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller>
          <controller type='usb' index='0' model='ich9-uhci2'> <alias name='usb'/> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller>
          <controller type='usb' index='0' model='ich9-uhci3'> <alias name='usb'/> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller>
          <interface type='bridge'> <mac address='52:54:00:24:41:fc'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface>
          <serial type='pty'> <source path='/dev/pts/0'/> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> <alias name='serial0'/> </serial>
          <console type='pty' tty='/dev/pts/0'> <source path='/dev/pts/0'/> <target type='serial' port='0'/> <alias name='serial0'/> </console>
          <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-13-Windows 2008 R2/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel>
          <input type='tablet' bus='usb'> <alias name='input0'/> <address type='usb' bus='0' port='1'/> </input>
          <input type='mouse' bus='ps2'> <alias name='input1'/> </input>
          <input type='keyboard' bus='ps2'> <alias name='input2'/> </input>
          <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='en-us'> <listen type='address' address='0.0.0.0'/> </graphics>
          <video> <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> <alias name='video0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video>
          <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </memballoon>
        </devices>
        <seclabel type='dynamic' model='dac' relabel='yes'> <label>+0:+100</label> <imagelabel>+0:+100</imagelabel> </seclabel>
      </domain>
  17. Thanks for the reply. Yes, I guess it is generally better not to use the cache if a write job is going to be larger than the cache size.
  18. Hi, I'd like to copy the files on the USB drives to the array directly by attaching and mounting them. They are the same brand and model of 8TB HDD. However, some can be mounted and some cannot: is there any way to mount a GPT drive that reports 'Partition 1 does not start on physical sector boundary', or where ntfs-3g just says it can't access the HDD? Any idea if there's a way to mount those? (See the mount sketch after this list.) Best regards,
  19. Hi, I'm just testing out Unraid and it seems very promising. One problem I have is that writing to a share and the mover job can happen at the same time. If I put a large cache drive, say 500GB or 1TB, into the box and have a 2TB write job in progress that keeps hammering the cache drive for a long time, and then the mover kicks in to read, the cache drive's temperature can get close to 60 °C, which is really bad (it already has the best cooling I can give it). So I wonder if there's a setting to limit how much cache space can be used for share writes? Once the limit is reached it would switch to writing directly to the array so the cache drive could rest and cool down. When the mover kicks in hourly and the files have been moved to the array, maybe the current share write could switch back to the cache, or the next write job would. For example, with a 500GB cache drive I'd allocate 100GB for share write caching, 300GB for VM and Docker stuff, and keep 100GB of spare space to ensure performance. Thank you.
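A minimal sketch for the port questions in posts 1 and 2, assuming the container runs in bridge mode without the VPN; the image name, host port, and config keys are examples and may not match the actual template:

    # Host side: Docker only forwards host port 58946 to container port 58946.
    docker run -d --name=deluge \
      -p 58946:58946 -p 58946:58946/udp \
      -v /mnt/user/appdata/deluge:/config \
      binhex/arch-deluge

    # Deluge side (Preferences -> Network, or core.conf): the incoming port has to
    # match the forwarded port; otherwise Deluge listens on a random port that
    # Docker never forwards, since Docker does not remap a random port to 58946.
    #   "random_port": false,
    #   "listen_ports": [58946, 58946],

With the VPN enabled the picture changes, as post 1 describes: traffic leaves through the tunnel, so the incoming port has to be whatever the VPN provider forwards rather than 58946.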
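For the path error in post 3, a sketch of one common cause: the two containers map the same host folder to different container-side paths. The paths and image names here are examples only:

    # The download client sees the host folder as /downloads ...
    docker run -d --name=deluge -v /mnt/user/downloads:/downloads linuxserver/deluge
    # ... but if Radarr maps the same host folder as /data, the path Deluge reports
    # ("/downloads/...") does not exist inside the Radarr container.
    docker run -d --name=radarr -v /mnt/user/downloads:/data linuxserver/radarr
    # Either use the same container-side path in both templates, or configure
    # Radarr's Remote Path Mappings to translate /downloads -> /data.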
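For post 4, a sketch of pinning rTorrent to a fixed listening port so it can be forwarded by hand. The port number is only an example, and the syntax assumes a recent rTorrent that uses the new-style command names:

    # ~/.rtorrent.rc
    network.port_range.set = 51413-51413   # listen on one known port
    network.port_random.set = no           # don't pick a random port at startup
    # rTorrent has no UPnP, so the same port still has to be forwarded at the
    # VPN provider (if it supports port forwarding) or allowed through the
    # WireGuard peer's firewall manually.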
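For post 15, a sketch of what btrfs itself offers for transparent compression and offline deduplication; whether the Unraid GUI exposes any of this is a separate question, and the device and paths are examples:

    # Transparent compression: new writes are compressed, reads decompress on the fly.
    mount -o compress=zstd /dev/sdX1 /mnt/disks/pool
    # Or enable it only for one directory tree of small files:
    btrfs property set /mnt/disks/pool/smallfiles compression zstd
    # Offline (out-of-band) deduplication with an external tool such as duperemove:
    duperemove -dr /mnt/disks/pool/smallfiles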
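For post 18, a minimal sketch of checking such a drive and attempting a read-only mount by hand. Device names are examples, and note that the 'does not start on physical sector boundary' message from fdisk is normally an alignment warning rather than something that blocks mounting:

    # Show the partition table and where partition 1 starts:
    fdisk -l /dev/sdX
    # Try a plain read-only NTFS mount first:
    mkdir -p /mnt/disks/usb1
    ntfs-3g -o ro /dev/sdX1 /mnt/disks/usb1
    # If ntfs-3g refuses because the volume was not cleanly unmounted in Windows,
    # running ntfsfix (or chkdsk from a Windows machine) before retrying may help.
    ntfsfix /dev/sdX1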