Leaderboard

Popular Content

Showing content with the highest reputation on 12/18/20 in all areas

  1. The latest Unraid blog highlights all of the new major changes in Community Applications 2020.12.14, including: New Categories and Filters, Autocomplete Improvements, Repositories Category/Filter, and Community Applications now viewable on Unraid.net. As always, a big thanks to @Squid for this amazing Unraid Community Resource! https://unraid.net/blog/community-applications-update
    1 point
  2. The app should see all files in the Complete folder when it browses /media/.
    1 point
  3. This is proper behavior. The app inside the container sees /media/<name of movie>. The actual location on your server will be /mnt/user/Complete/<name of movie>
    1 point
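     To make the mapping in the two answers above concrete, here is a minimal sketch; the container name and image are placeholders, and only the volume mapping matters:

         # The host share /mnt/user/Complete is presented to the container as /media,
         # so the app browses /media/<name of movie> while the files actually live
         # at /mnt/user/Complete/<name of movie> on the server.
         docker run -d \
           --name=media-app \
           -v /mnt/user/Complete:/media \
           example/media-app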
  4. Yeah, my case is correct everywhere. So it ended up being the fact that I had categories set in NZBGet, which nothing else liked.
    1 point
  5. @xthursdayx It's been a crazy month for me personally and work-wise! Ready for this year to be over with. I was going to start with these steps, but then read that the Roon docker has been updated so it can now be updated from an endpoint device. Nice! I moved from my Ubuntu VM to the docker and updated successfully (I just read that I needed to delete the Roon app data folder, which I did not do). Now I'm restoring a backup. It runs better on the docker than it did in the Ubuntu VM via SMB. Should I remove the docker, delete the app data folder and start again? I chose to remove the image when I removed the doc
    1 point
  6. It's spelled with two 'r's. My guess is it didn't update the template for you yet, as it shows Overseerr for me.
    1 point
  7. Mine worked immediately following this change. I also had to remove the Host Port 1 and then recreate it. I also had to modify the port from 3000 to 5055 in my reverse proxy config. More info about that here.
    1 point
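     For reference, the reverse proxy change mentioned above might look like the following, assuming an nginx proxy (the post doesn't say which proxy was used; the server name and upstream IP are placeholders):

         # Overseerr's web UI moved from port 3000 to 5055, so the
         # proxy_pass target has to be updated to match.
         server {
             listen 443 ssl;
             server_name overseerr.example.com;

             location / {
                 proxy_pass http://192.168.1.10:5055;  # previously :3000
                 proxy_set_header Host $host;
                 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             }
         }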
  8. Righto, advice taken. A set of Crucial MX500s are on their way and the ADATAs are going back. I'll let you know how I get on when I receive and install them. Many thanks to you!
    1 point
  9. Well, pressure is going down, thanks for the mental coaching 😉 VMs and containers restarted properly after the restoration of appdata and the move of both domains and system. Moving from the array to SSD was of course way faster than the other way around, so I didn't have to wait too long. The only expected difference I can see is that the appdata, domains and system shares are now tagged with "Some or all files unprotected" in the Shares tab, which makes sense, as they are on a non-redundant XFS SSD. I checked the containers for the appdata assignment, and only found Plex to have /mnt/cache/appdata. Bu
    1 point
  10. Marking as solved. I've now confirmed that the unraid server itself isn't the bottleneck by getting 1Gb speeds between unraid and my firewall.
    1 point
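     One common way to run such a throughput test (an assumption; the post doesn't name the tool) is iperf3, with the firewall's IP as a placeholder:

         # On the firewall or another machine, start the server side:
         iperf3 -s

         # On the Unraid box, run the client against it:
         iperf3 -c 192.168.1.1
         # Roughly 940 Mbit/s is line rate for gigabit Ethernet, which
         # confirms the server itself isn't the bottleneck.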
  11. It checks on the schedule you set in Notification settings.
    1 point
  12. If there are sync errors but no disk errors, you can run a correcting check; if there are sync errors and disk errors, or just disk errors, replace the disk.
    1 point
  13. @JorgeB Thank you for your help again. The parity drive rebuild has been running for 16 hours now (with 4 remaining) and not a single error. I did two things: (1) updated the firmware as you highlighted in the above linked guide (for my card, I needed the P20 installer); (2) gave the cables a little more space, since I read elsewhere that tied or bunched SATA cables can cause interference. I suspect it's mostly (1) that did it, but noting both in case someone comes across a similar issue in future.
    1 point
  14. Knock on wood but I think I'm good now. Rebuild completed with no errors and disk is enabled again. Thanks for the help!
    1 point
  15. Thanks for the link, that's where the problem was! Everything is now transferred with the correct characters. It's currently still running on the Odroid, since it needs very little power and the Unraid server was only added recently. I'll probably switch that over in the future, but for now it stays as it is.
    1 point
  16. Okay. Fair decision, your call after all. Yet it still causes significant downtime for me when the backup runs if compression only uses one thread. I found a way to use pigz without needing to modify the plugin, though. I installed pigz via Nerd Tools and replaced the gzip binary with a symlink to pigz, so it's used by tar --auto-compress for the .tar.gz ending that CA Backup v2 invokes. It yields substantial performance improvements with my 8C/16T CPU: currently that's a reduction from 37min down to 6min (with verification of the backup file). If anyone stumb
    1 point
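     A sketch of that pigz swap, assuming the usual /usr/bin locations for the binaries (the paths may differ on other setups):

         # tar --auto-compress invokes "gzip" for .tar.gz archives, so pointing
         # the gzip name at pigz makes CA Backup's compression multi-threaded.
         mv /usr/bin/gzip /usr/bin/gzip.orig   # keep the original around
         ln -s /usr/bin/pigz /usr/bin/gzip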
  17. This is awesome! I've tried Ombi before but I like this better even in the ALPHA state. Looking forward to the updates!
    1 point
  18. Hi @linuxserver.io, it seems there is an issue with support for the last 3 generations of Intel CPUs when using this Plex app with HW HDR-to-SDR tone mapping. It seems one of your devs created a working solution in the OpenCL branch of this container's GitHub here: https://github.com/linuxserver/docker-plex/tree/opencl May I ask if there is any plan to bring this solution into the main tree, so we can all benefit from it without hacks or tricks?
    1 point
  19. It is important for users who choose a non-subscription model, even if that is just implicit in the fact that they use only the traditional unRAID product, that there be no phone-home or other services that reach out of the system by way of the subscription services running in an "off" mode, or any other mechanism. I cannot stress this enough. Feel free to add value in whatever way suits your business, but don't break that trust model whilst doing so.
    1 point
  20. Whilst it is not ideal that the poster did not follow normal security-reporting etiquette, it is clear there is an issue, and it is of our own making. Compare against http://www.slackware.com/security/list.php?l=slackware-security&y=2020 tl;dr: we are long overdue an update, but we have slipped into the old habit of waiting for the development branch to be ready and ignoring the stable branch. It is not the end of the world, but it's a habit we need to break again ASAP.
    1 point
  21. Can you be more specific? What vulnerabilities are you referring to? Vulnerabilities are ranked differently based on the complexity and feasibility of execution and the impact on Confidentiality, Integrity and Availability (aka CIA), and you measure your own risks. @limetech addresses risks appropriately and in a timely fashion, as we've seen in the past. I'd like to get more context around this: what are you alluding to, and what risks do you need mitigated?
    1 point
  22. Since you are upgrading, any database conversion, if necessary, will take place in the upgrade. This is not a big leap from 5.12 to 5.14. I don't think there were major database upgrades between those two. Going up to 6.xx or downgrading from 6.xx to 5.xx is a bigger step and will involve some db changes. If you were going from LTS (5.6), to 5.14, there would be some necessary interim steps as outlined earlier in this thread.
    1 point
  23. As I had read contradictory information on this matter, I hot-plugged a spare cold-storage SanDisk SSD Plus 2TB in a spare tray. This SSD seems to have the required features, even if DRZAT is called "Deterministic read data after TRIM":

     root@NAS:~# hdparm -I /dev/sdj
     /dev/sdj:

     ATA device, with non-removable media
             Model Number:       SanDisk SSD PLUS 2000GB
             Serial Number:      2025BG457713
             Firmware Revision:  UP4504RL
             Media Serial Num:
             Media Manufacturer:
             Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II
    1 point
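     To run the same check on another drive, the relevant lines can be filtered straight out of the hdparm output (the device name is a placeholder):

         hdparm -I /dev/sdX | grep -i trim
         # Expected on a suitable drive, per the output above:
         #    *    Data Set Management TRIM supported
         #    *    Deterministic read data after TRIM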
  24. There is already an image tag called "v4-preview" for testing and dev purposes. When tidusjar (ombi dev) pushes v4 to master/stable, our image will update the latest tag accordingly.
    1 point
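     Pulling that tag for testing is a one-liner; the image name below assumes the linuxserver Ombi image:

         docker pull linuxserver/ombi:v4-preview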
  25. I need some help here. I've switched from the linuxserver version of NZBGet to binhex. Now Sonarr can't find the downloaded files. It seems as if the mapping for /data in the container does not work. The download is within the container at /usr/local/bin/nzbget/downloads/completed, and after completion it is clear that Sonarr can't find the file that's within the container. Can somebody tell me what I am doing wrong?
    1 point
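     A sketch of a mapping that avoids this, with an example host share; the key is that both containers mount the same host folder at the same container path, and NZBGet is configured to download inside it rather than to the in-image default (image names assume binhex's repositories):

         docker run -d --name=nzbget -v /mnt/user/downloads:/data binhex/arch-nzbget
         docker run -d --name=sonarr -v /mnt/user/downloads:/data binhex/arch-sonarr
         # In NZBGet's settings, point MainDir/DestDir under /data so completed
         # downloads land at /data/completed/..., a path Sonarr can also see.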
  26. I just friggin hit 206TB usable. Approaching 0.25PB 😍
    1 point
  27. Yup...it's saving to "/usr/local/bin/nzbget/downloads/completed/Series/" INSIDE the Docker. Can anyone shed some light on how to set up the paths in NZBGet?
    1 point
  28. Got it downloading! Yay! Still not seeing anything in my Media/TV Shows folder, though. Boo... If I go into NZBGet and click on the completed file, it has a path of "/usr/local/bin/nzbget/downloads/completed/Series/showname". Is it saving IN the Docker container?
    1 point
  29. I've been down some crazy rabbit holes with Windows before, but this one really takes the cake. A little googling, and you quickly see that tons and tons of people have experienced this particular error. There are dozens upon dozens of potential solutions, ranging from simple to extremely complicated and everything in between. The results people report couldn't be more random: for every person helped by a particular solution, there are twenty for whom it didn't work. I myself had tried about a dozen of the best sure-fire fixes without any success. I rea
    1 point