oldsweatyman

Members
  • Posts: 16

  1. I'm having an identical issue; has anyone figured this out? I can't connect via HTTPS.
  2. Trying to use the same container for multiple rtorrent instances, with only one behind an active VPN, by setting the network type to "none" and adding --net=container:rutorrent under extra parameters. I've changed every other necessary port, but I can't figure out how to change port 7777 in the configuration. I keep getting this error in the logs:

     [ERROR] unable to bind listening socket for address '127.0.0.1:7777': Address already in use (98)

     Any ideas on how to get around this? This seems to be the best-optimized rtorrent container for Unraid, and I'd really like to be able to run multiple instances, given that the web GUI crashes at around 10k torrents.
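I'm not certain which service owns the 127.0.0.1:7777 listener in this particular container, but in stock rtorrent a loopback listener like that is typically the SCGI/XML-RPC socket configured in .rtorrent.rc. If that is what 7777 is here, a hedged sketch for the second instance's config (7778 is just an arbitrary free port, not a recommendation):

```
# .rtorrent.rc for the second instance — move the SCGI socket off 7777
# so it doesn't collide with the first rtorrent sharing the container's
# network namespace (port number is an example; pick any free port)
network.scgi.open_port = 127.0.0.1:7778
```

The second instance's web UI would then need its rtorrent connection pointed at the new port as well.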
  3. Removing the Dynamix System Stats plugin fixed it for me. Maybe it isn't compatible with the IPMI plugin running simultaneously? Edit: never mind, they came back. However, unloading the drivers as @joelrfernandes commented below did work.
  4. Sorry to resurrect an old thread, but did you guys ever get this working? I could use a working syslinux.cfg as well. It seems my card is recognized in a similar way:

     IOMMU group 12:
     [10b5:8112] 03:00.0 PCI bridge: PLX Technology, Inc. PEX8112 x1 Lane PCI Express-to-PCI Bridge (rev aa)
     [13f6:8788] 04:04.0 Multimedia audio controller: C-Media Electronics Inc CMI8788 [Oxygen HD Audio]

     But I get the same error:

     internal error: process exited while connecting to monitor: 2020-11-12T21:56:52.505369Z qemu-system-x86_64: -device vfio-pci,host=0000:04:04.0,id=hostdev4,bus=pci.0,addr=0xa: vfio 0000:04:04.0: Failed to set up TRIGGER eventfd signaling for interrupt INTX-0: VFIO_DEVICE_SET_IRQS failure: Device or resource busy

     I'm not passing through my NVIDIA HDMI audio controller.

     EDIT: The fix was moving the Asus Xonar Essence STX sound card from a PCIe x16 slot to a PCIe x1 slot. According to various Reddit threads and the threads here, the PCIe x16 slots interact on some motherboards, which somehow caused the VFIO error. All good and no stutters for me.
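For anyone debugging a similar passthrough error, a quick way to see which devices share an IOMMU group with the card (devices in one group generally have to be passed through together) is to walk /sys/kernel/iommu_groups. This is a generic sketch, not specific to any Unraid version:

```shell
#!/bin/bash
# Print each IOMMU group and the PCI devices inside it.
if [ -d /sys/kernel/iommu_groups ]; then
  for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
      # Fall back to the raw PCI address if lspci is unavailable
      command -v lspci >/dev/null && lspci -nns "${d##*/}" || echo "  ${d##*/}"
    done
  done
else
  echo "no IOMMU groups exposed (IOMMU disabled or unsupported)"
fi
```

If the sound card shares a group with the PLX bridge, as the listing above suggests, both entries belong to the same passthrough unit.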
  5. Yeah, that's my current setup with one instance of mergerfs. I just can't use the cache with this, because you can't hardlink from cache to array; it duplicates the file instead. I thought I'd just make another mergerfs instance on a separate, cache-only share to solve that. Never mind, I think I'm overcomplicating it. But, just to know, do you happen to know if your scripts will conflict with two separate instances running? EDIT: Dumb question, I guess. There's clearly some conflicting stuff. I'll work on it some more and see.
  6. I might not be understanding some pathing here. Last time I set my downloads share to use the cache, Radarr wouldn't hardlink properly because the cache and array are separate, obviously. Instead, it would copy the file, renamed, to the array. So, using your setup, would this be the flow?

     1. Radarr tells the torrent client to save the movie to /mnt/cache/downloads/local_storage/gdrive/seed
     2. Radarr renames & imports the completed movie to /mnt/cache/downloads/local_storage/gdrive/movies
     3. Mover transfers the imported movie to /mnt/user/downloads/local_storage/gdrive/movies
     4. The rclone upload script moves from local_storage/gdrive/movies (local) to rclone, preserved by the mount_mergerfs path

     In this case, I'd point Plex to /mnt/user/downloads/mount_mergerfs/gdrive/movies? Does the mover affect how Plex sees the file? I think there might be some extra confusion because my three mounts (mount_mergerfs, local_storage, and mount_rclone) are inside a single share called "downloads" instead of three separate shares.
  7. Trying to get a setup that combines cache and array usage for torrents/Radarr/Sonarr/Plex. Anyone have this setup? I know this is complex, but: ideally, Radarr will send some temporarily seeded torrents to the cache directory to be seeded for two weeks, with the renamed hardlinks moved to gdrive at some point. Similarly, Radarr will upgrade existing permanently seeded torrents on the array. This becomes problematic because Radarr can't hardlink from the cache to the array (separate filesystems or something?).

     My solution is to simply run two instances of mergerfs. One instance is currently working perfectly in my /mnt/user/downloads/ share. Can I run a second instance of mergerfs on a cache-only share? In theory, this would result in hardlinks being created within the cache only for that share, with a separate rclone mover script to move the cache hardlinks to gdrive. I would just tell Radarr which directory to use, array share or cache share. I just don't know if the scripts will conflict.

     EDIT: Just wanted to clarify that a big reason I want to do this is to avoid repeatedly writing to the last unfilled sections of my drives with files that are only there temporarily. My understanding is that it's not good to repeatedly write to the same sections like this? Using the cache drive for these frequent writes would be much better.
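The cache-to-array copy behaviour described above is just how hardlinks work: ln(1) succeeds within one filesystem and fails across mount points, so Radarr falls back to copying. A minimal demonstration (the paths are throwaway temp dirs, not Unraid paths):

```shell
#!/bin/bash
# Hardlinks only work within a single filesystem.
src=$(mktemp -d)
echo data > "$src/file"
ln "$src/file" "$src/link"      # same filesystem: succeeds
stat -c %h "$src/file"          # prints 2 (two names, one inode)
# Across filesystems (e.g. /mnt/cache -> /mnt/disk1) the same ln call
# fails with "Invalid cross-device link" (EXDEV), hence the copy.
```

This is why keeping the seed directory and the library on the same filesystem (all-cache or all-array) is what makes hardlinking possible.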
  8. Hey guys, getting:

     Script Starting Jun 06, 2020 14:03.56
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt
     06.06.2020 14:03:56 INFO: *** Rclone move selected. Files will be moved from /mnt/user/downloads/local_storage/gcrypt/cloud/requests for gcrypt ***
     06.06.2020 14:03:56 INFO: *** Starting rclone_upload script for gcrypt ***
     06.06.2020 14:03:56 INFO: Exiting as script already running.
     Script Finished Jun 06, 2020 14:03.56

     despite running the script after a complete reboot. Not sure if there's a file somewhere I need to manually delete, which the script is wrongly reading to conclude that it's already running? The rclone mount is working fine.
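I haven't read this script's internals, but "exiting as already running" checks are usually a lock file the script creates on start and removes on exit; if a previous run died uncleanly and the file lives on persistent storage, it survives a reboot and blocks every later run. The path below is invented for illustration — grep the script itself for the real one:

```shell
#!/bin/bash
# Illustrative lock-file pattern (LOCK path is made up for this demo).
LOCK=/tmp/rclone_upload_demo.lock
if [ -f "$LOCK" ]; then
  echo "Exiting as script already running."
else
  touch "$LOCK"                  # claim the lock
  echo "upload would run here"   # the real script calls rclone move
  rm -f "$LOCK"                  # release on clean exit
fi
```

Deleting the stale file (the real script's path, not this demo one) should let the upload run again.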
  9. Thanks for the help. Attached are the diagnostics.

     1. I have never built parity.
     2. I haven't tested whether or not the drive is recognized by the mobo.
     3. It is not recognized by unRAID when plugged into an internal SATA port. It was before; on a restart it stopped being recognized, so I removed it, because the SATA connector on the hard drive is missing plastic. It was working despite this plastic damage before.
     4. I am using the SATA -> USB adapter because I see it showing up on unRAID; the USB adapter seems to fit the SATA connector on the hard drive better than a standard SATA cable. This is a last resort to try and get it working; sorry, I know that's complicated. FYI, the adapter is what's inside the WD Easystore case that's removed when you shuck it. I had one saved.

     EDIT: Getting "emhttpd: device /dev/sdg problem getting id" in the drive's log now. Might be permanently damaged...

     EDIT2: I guess it got the ID eventually: "emhttpd: WD_easystore_25FB_32544B3233415A44-0:0 (sdg) 512 15628052480"

     tower-diagnostics-20200527-1652.zip
  10. Hey everyone, I recently had some cabling issues with a hard drive, so I removed it from the array. I haven't had a chance to build parity yet; I started with unRAID relatively recently. I am trying to recover the data on the drive. It is recognized by unRAID when I use the WD Easystore SATA -> USB adapter. However, I don't understand how I can mount it to recover the files. Any advice on this? The drive is formatted as XFS.
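One common approach for a disk that was never under parity is to mount it read-only outside the array and copy the files off. This is a hedged sketch: /dev/sdg1 is an assumed device node — confirm the right one with lsblk before mounting anything.

```shell
#!/bin/bash
# Mount the removed XFS disk read-only to copy data off it.
# /dev/sdg1 is an assumed device node — verify with lsblk first.
MNT=$(mktemp -d)
if mount -t xfs -o ro /dev/sdg1 "$MNT" 2>/dev/null; then
  ls "$MNT"     # files should be visible; rsync them to the array
else
  echo "mount failed: check lsblk and dmesg for the right device"
fi
```

Mounting read-only avoids writing to a drive that may be failing while you copy the data somewhere safe.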
  11. Same with Transmission. Deluge poops itself with far fewer torrents than rtorrent; it's good for small numbers. I was wondering if there was maybe a setting in the rtorrent config that would work around this, say, the network.max_open_files.set setting. I've tried toying around with this and haven't had much success.

     EDIT: It seems to have resolved some time after I set "Maximum number of open files" under Connection in the settings to 1024. My download speeds are much better, just hit ~50 mbps, the rest probably being taken up by some background processes I have running. After changing it, many torrents paused, but I just hit resume on all of them and they are now all seeding. I'll have to monitor to see if my upload speeds are affected/see if they're actually seeding.
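For reference, that GUI setting maps to rtorrent config directives. A hedged starting point for a many-torrent instance; the values are examples matching the post above, not tuned recommendations:

```
# .rtorrent.rc — limits that start to bite with thousands of torrents
network.max_open_files.set = 1024
network.max_open_sockets.set = 1024
network.http.max_open.set = 48
```

Note that raising these past the process's ulimit has no effect, so the container's file-descriptor limit may also need raising.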
  12. The .torrent files? My appdata is on cache. The downloading/seeding files are on the array.
  13. Hey everyone, anyone have advice on configuration for an instance with a large number of torrents (>4k)? My current issue is that the download speed caps out at ~15-30 mbps (I have gigabit), whereas it was using the full gigabit when I had ~200 torrents. Additionally, the hash recheck is very slow, a few times slower than when I had ~200 torrents. CPU and RAM aren't getting maxed out, so that's not it; not sure if there's a particular config setting I should change to fix this.
  14. Hey, thanks so much for this script, DZMM. Just a few things: I'm an idiot and it took me forever to get hardlinks working, but for anyone else who makes the same error, you need to:

     download to /mnt/user/downloads/mount_mergerfs/gcrypt/radarr in order to hardlink to /mnt/user/downloads/mount_mergerfs/gcrypt/hardlink

     I was doing the following (WRONG):

     download to /mnt/user/downloads/mount_mergerfs/temp in order to hardlink to /mnt/user/downloads/mount_mergerfs/gcrypt/hardlink

     It is the gcrypt folder (or whatever you named it in your config, default gdrive_vfs) that is linked.

     Anyway, I'm having a bit of an issue with the docker start portion of the script. When I run it, despite all containers having autostart off, I still get: INFO: dockers already started. Any ideas?

     Edit: Just want to clarify that I meant this occurs at the start of the array, which is when I have the script set to run.