
Pulteney

Members
  • Posts

    22
  • Joined

  • Last visited

Posts posted by Pulteney

  1. Authelia doesn't start after a completed backup. The errors make me think it's trying to reach Redis before Redis is ready. Is there any way to add a delay to the container start, or some other solution for my issue?
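
    If the two run under docker compose, a health-checked dependency can enforce the startup order. A minimal sketch, assuming standard image names; adjust service names and images to your actual stack:

    ```
    services:
      redis:
        image: redis:alpine
        healthcheck:
          test: ["CMD", "redis-cli", "ping"]
          interval: 5s
          timeout: 3s
          retries: 10
      authelia:
        image: authelia/authelia
        depends_on:
          redis:
            condition: service_healthy
    ```

    For plain Unraid Docker templates, setting the autostart order on the Docker page and adding a wait between the Redis and Authelia containers achieves a similar delay.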

  2. On 3/10/2024 at 7:18 PM, MichelPigassou said:

    Hey everyone.

    If my compose file references an existing network, in Unraid's UI the network is referenced by its ID instead of its name.

    First screenshot is the UI (in the list of containers), second screenshot is the output of `docker network ls`.

     

    Content of my docker-compose (top-level):

    ```

    networks:
      default:
        name: lsio
        external: true

    ```

     

    Looks like a bug, because it's all working. Not sure where to report it, and maybe I'd like to get confirmation first that it is indeed a bug.

     

    Screenshot 2024-03-10 at 7.14.50 PM.png

    Screenshot 2024-03-10 at 7.15.11 PM.png

    Having the same problem. The reverse proxy seems to work, as I can connect to the container via its Docker hostname, but it sucks not being able to see which containers use which networks in the Docker WebUI.
     

    Is there any solution to this?
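
    Until the UI shows names, one workaround is resolving the ID from the terminal; a small sketch (the network ID is a placeholder, paste the one the UI shows):

    ```shell
    # Resolve a Docker network ID (as displayed in the Unraid UI) to its name.
    docker network inspect --format '{{.Name}}' <network-id>
    ```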

  3. 4 hours ago, carnivorebrah said:

    Wow, okay, rebooting fixed it.

     

    Now the renamed share is showing up with the new name and all the other shares.

     

    I really wish it would have just said "Reboot to apply changes" instead of "share name deleted".

     

    At least I still have my data which is what matters in the end.

    Glad you sorted it. Next time, copy the files you want backed up instead of renaming :D

  4. 33 minutes ago, apandey said:

    Are you doing anything intensive on that VM that needs dedicated cpu resources? I always start with no isolation / pinning and only look at this when I encounter a cpu saturating workload

    Not really. Mainly web browsing and video playback using madVR, so high GPU usage but not much CPU.


    Still, Unraid should be more than fine with one P-core and five E-cores, right? Maybe I'll try undoing all isolation just to see how it feels.

  5. I'm running an i5-13500, with 6 P-cores and 8 E-cores.

    Currently I have isolated P-cores 2-6 and pinned them to a Windows 10 VM. As I understand it, Unraid will always use core 1, even if you isolate it?

     

    I have also isolated and pinned two E-cores to a VM instance of Alpine Linux running only Adguard Home in Docker. I'm running Adguard in a VM because of constant issues between port 53 and the VM system's dnsmasq.

     

    This leaves 1 P-core and 6 E-cores for Unraid and its roughly 20 Docker containers. How much CPU does Unraid really need to run smoothly?

     

    For people who have experimented a bit, does this sound like an OK way to split resources?
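
    For anyone replicating a split like this, it helps to confirm which logical CPU numbers belong to which cores before pinning; a quick check from the Unraid terminal:

    ```shell
    # List logical CPU -> physical core mapping. On hybrid chips like the
    # i5-13500, P-cores show up as two logical CPUs per core (hyperthreading)
    # while E-cores show one; the pinning UI uses these logical CPU numbers.
    lscpu --extended=CPU,CORE
    ```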

     

  6. On 2/28/2022 at 7:46 PM, Dural said:

     

    I seem to be having the same issue with an 11500T, h264 starts up immediately and shows hw encoding.  When I try to do the same on an h265 file it shows it starts and shows hw encoding but the client just sits there spinning.  It looks like the cpu has one thread that goes to 100% while the gpu shows no load.

     

    edit - Just figured it out: uncheck "Use hardware accelerated video encoding" but leave "Use hardware acceleration when available" checked. It worked immediately after doing this, and CPU load is <20% (4 threads are used by a VM, so not necessarily just from transcoding) while GPU video load is ~8%.

     

    This just helped me, thank you!

  7. Yesterday I was running Adguard home in host mode and a Windows VM without issues.

    This morning Adguard had stopped and couldn't be restarted because port 53 was already in use by the VM.

     

    I spent some time tinkering with running Adguard on br0, but it simply won't work without adding some routing options to my router.

    I will set this up when I get a better router.

     

    Just before giving up, I decided to try again with the VM active and Adguard in host mode, and suddenly it works again.

    I even tried restarting Adguard while the VM is running, and there were no issues with port 53 being in use.

    Can someone tell me what's going on here?
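
    Next time it happens, it may help to see which process actually holds the port before restarting anything; a diagnostic sketch (libvirt's dnsmasq for VM networking is a common suspect for port 53, though that's an assumption here):

    ```shell
    # Show any listener on DNS port 53, TCP or UDP, with the owning process.
    ss -tulnp | grep ':53 ' || echo "nothing bound to port 53"
    ```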

  8. Thanks for the input.

    I ended up retiring the old server. I simply couldn't justify the power usage of eight 2TB drives I don't really trust with any important data.
    I decided to set up rclone with mergerfs on my main server instead, and it suits my needs beautifully.

     

    Rest in peace ye old faithful

     

     

    20230202_103640.jpg

  9. 11 hours ago, wwe9112 said:

     

    I think it is working now. So, removing tailscale (which sucks to have to remove since I used it as a subnet router to access other devices outside of unraid). Then using as 'run' \\vault\media worked. 

     

     

    Thank you for your time fellas. I appreciate the support!

    You can easily do this with the built-in WireGuard.

  10. Thank you so much for this script. I've been using rclone for years but never needed mergerfs until recently, and your scripts made it work in under 5 minutes, haha.

    I'm planning to use local storage for my data, with gdrive mostly as a backup.

     

    After copying my local stuff to gdrive, will the local version still be used when playing media from the mergerfs mount point?

     

    Edit: Yep, it will. This is brilliant!
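
    For anyone wondering why that works: mergerfs's default search policy is "first found", so as long as the local path is listed before the rclone mount in the branch list, reads prefer the local copy. A hand-rolled equivalent of such a mount (paths and options here are illustrative, not necessarily the script's exact ones):

    ```
    # Local branch first, cloud branch second: reads prefer the local copy.
    mergerfs /mnt/user/local:/mnt/user/mount_rclone /mnt/user/mount_mergerfs \
      -o rw,category.create=ff,cache.files=partial,dropcacheonclose=true
    ```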

     

  11. There were sales on the internet, so I ended up with one Seagate FireCuda 520 1TB and two WD Black SN850X 1TB.

     

    I'm thinking of using the FireCuda as temp storage for downloads and the WDs in RAID 1 for appdata, VMs and such. How bad an idea is it to use two identical, brand-new drives in RAID 1? The likelihood of them both going down within the same week is low, but still a bit scary.

     

    From what I understand, the RAID 1 pool must be btrfs. Should I use xfs or btrfs for the temp drive?

     

    Are there any other ideas for how to use these drives optimally?

     

    Thanks!

  12. I ordered a new server to run Unraid. As I went for overkill specs, I could only afford one HDD, a Seagate Exos 20TB. I'll buy more in a few weeks for extra storage and parity.

    Sadly, the Exos was DOA. I'm an impatient person, so I threw in six old 2TB drives from a previous Linux software RAID setup. That old box is an AMD Phenom II 955 with 4GB of RAM.

    The drives have between roughly 60,000 and 70,000 power-on hours. A few throw some SMART errors, but nothing critical.

     

    My replacement Exos 20TB arrives tomorrow, so I'm a bit torn about what to do with the old hardware and drives. I don't want to run these old drives in my main array. When can we have multiple arrays with parity, pls? :D

    I'm taking ideas for what to do with them, but I'm considering moving the drives back into my old server and running Unraid on it. I have a total of eight 2TB drives I can deploy. 14TB of pure storage is dumb to throw in a closet, even if the drives are old. As a straight SMB/backup/file server, I assume the specs are good enough.


    I would like to keep my current USB drive/config for the main server, though.
    If I move these drives back to my old hardware, can I just add them in the same order to a new array and the data will be there, or do I need some config files from the current USB drive?


    When buying the new hardware, the WD SN850X 1TB NVMe was on 50% sale, so naturally I bought two. My current server already has a FireCuda 520 1TB. I'm looking for ideas on how to include them all in a sensible way.

    The SN850Xs in RAID 1 for appdata and important files, also backed up to the array + gdrive, and the FireCuda as a download cache? Seems a bit overkill, lmao. I'm still deciding whether I need an NVMe for the download cache, or whether a good old HDD is good enough.

    I'm also a bit skeptical of putting two new, identical NVMe drives in a RAID 1. The likelihood of them both failing at the same time is small, but it could happen.

    If this is nonsensical, I can still return the Firecuda.

    Apologies for the messy OP; I hope it makes sense. Every idea is appreciated.

    Regards,
    A fresh Unraid fanboy

  13. On 1/12/2023 at 12:46 PM, trurl said:

    Don't think of array and cache separately. Cache and array are both included in user shares. Just work with user shares and everything will look the same whether files are on cache or array. 

     

    Thank you. It all made sense once I actually started using Unraid. Pretty sleek.

  14. I've been planning out my Unraid build for the last couple of days, and one of the challenges I still have is how to handle long-term seeding. I am a member of some private trackers where I want to seed a lot of data basically forever.

    Almost all of my torrent downloads will be handled by *arrs and qbittorrent. Ideally I want this to be 100% automated.

     

    Before getting into the details, I'm curious how you all handle Plex libraries with cache drives involved:

    Do you add multiple sources (cache and array) and enable Plex's automatic emptying of trash for when cache files are removed?

    Or do you simply wait for new media to be moved to the array by Mover before it's accessible in Plex?

    From what I've found so far, long-term seeding will be difficult to automate using a smaller NVMe cache drive.

    I'm looking at these three solutions to my problem, all with various drawbacks:

    NVME Cache for downloads:

    1. Will fill up quickly

    2. Hardlinks don't work across drives/arrays.

    3. Mover won't be able to move files that are in use by qbittorrent, and even if it could (by temporarily stopping qbittorrent), there's no way for Unraid/Mover to tell qbittorrent about the files' new location so it can continue seeding.

     

    Dedicated HDD (10TB-ish) for downloads:

    1. Hardlinks don't work across drives/arrays.

    2. Can use copy instead of hardlinks in the *arr settings to copy completed downloads to the array. This would cause the array to spin up every time a torrent finishes.

    3. Will have to delete torrents or add drives if the array fills up

    4. Adds extra cost, especially if I want more drives and/or parity

    - Does Unraid have a copy feature similar to Mover? That way files could be copied to the array once per night instead of after every download finishes.

     

    Download directly to main array and seed from there:

    1. One or more drives will almost always be spun up.

    2. Might cause performance problems if there's a lot of torrent traffic while many users are streaming from Plex. But that seems unlikely?

    3. From what I've read, parity drives are not used when simply reading from the array, so that's good.

     

     

    All in all, I'm actually leaning towards the last option. I don't think performance will be an issue, as I currently have a 750 Mbit connection, far below the drives' throughput.

    Note: I still haven't actually used Unraid, so I might have misunderstood some things here. Is there an obvious solution to my problem that I'm not seeing?
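
    On the "copy feature similar to Mover" question above: I don't believe stock Mover copies (it moves), but a nightly User Scripts job can approximate it. A minimal sketch, with hypothetical paths, that copies anything not yet on the array while leaving the seeding copy untouched on the download drive:

    ```shell
    # copy_new SRC DST: copy files not yet present under DST, leaving SRC
    # intact so torrents keep seeding from the download drive.
    # -a preserves attributes; --ignore-existing skips files already copied.
    copy_new() {
      rsync -a --ignore-existing "$1/" "$2/"
    }

    # Example (share layout is hypothetical -- adjust to yours):
    # copy_new /mnt/cache/downloads/complete /mnt/user0/downloads/complete
    ```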
