sreknob

Members

  • Posts: 41
  • Joined
  • Last visited

Converted

  • Gender: Male
  • Location: Canada


sreknob's Achievements

  • Rank: Rookie (2/14)
  • Reputation: 8

  1. Can you remove "--runtime=nvidia" under Extra Parameters? The container should then start without the runtime error. See where that gets you... Beside the point, though: there should be no reason you need GUI mode to get Plex going. If I were you, I'd edit the container's Preferences.xml and remove the PlexOnlineToken, PlexOnlineUsername, and PlexOnlineMail attributes to force the server to be claimed the next time you start the container (sketch below). You can find the server configuration file under \appdata\plex\Library\Application Support\Plex Media Server\Preferences.xml
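     A minimal sketch of that edit, assuming the container is stopped and the usual unRAID appdata location at /mnt/user/appdata/plex (adjust both to your setup):

     ```bash
     # Hypothetical path - point this at your actual Plex appdata share.
     PREFS="/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Preferences.xml"

     # Keep a backup in case the edit goes sideways.
     cp "$PREFS" "$PREFS.bak"

     # Strip the claim-related attributes so Plex asks to be re-claimed on next start.
     sed -i -E 's/ (PlexOnlineToken|PlexOnlineUsername|PlexOnlineMail)="[^"]*"//g' "$PREFS"
     ```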
  2. Reallocated sectors are ones that have been successfully moved. As a gross over-simplification, this number going up (especially more than once recently) is an indicator that the disk is likely to fail soon. Your SMART report shows that you have been slowly gathering uncorrectable errors on that drive for almost a year. Although you technically could rebuild to that disk, it has a high chance of dropping out again. I would vote to just replace it!
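     If you want to watch those counters yourself, smartctl shows them directly (assuming the drive is /dev/sdb - substitute your device):

     ```bash
     # Attribute table: watch IDs 5 (Reallocated_Sector_Ct),
     # 197 (Current_Pending_Sector) and 198 (Offline_Uncorrectable).
     smartctl -A /dev/sdb

     # Full report, including the error log showing (in power-on hours)
     # when past errors occurred.
     smartctl -a /dev/sdb
     ```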
  3. Having the same issue with one server not connected to the mothership. The funny thing is that it works from the "My Servers" webpage, but when I try to launch it from another server I hit a different problem: it tries to open the local hostname with HTTP (no S) on port 443, so I get a 400 (plain HTTP to an HTTPS port --> http://titan.local:443). See the screenshots below and let me know if you want any more info! The menu on the other server looks normal, but the link launches http://titan.local:443 instead of https://hash.unraid.net, so when I select it I get a 400 - yet everything launches fine from the webUI, which opens hash.unraid.net properly. EDIT 1: The mothership problem is fixed with a `unraid-api restart` on that server, but not the incorrect-address part. EDIT 2: A restart of the API on the server providing the improper link corrected the second issue - all working properly now. Something wasn't updating the newly provisioned link back to that server from the online API.
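     For anyone else chasing this, the symptom is easy to reproduce from a shell, and the fix was just the API restart (the hostname is the example from my post):

     ```bash
     # Plain HTTP against the TLS port reproduces the 400 the broken link causes.
     curl -i http://titan.local:443
     # -> HTTP/1.1 400 Bad Request (the server expects TLS on that port)

     # Restarting the API on the affected server(s) refreshed the provisioned URL.
     unraid-api restart
     ```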
  4. Sorry I'm late getting back. I just used unBalance and moved on! unBalance shows my cache drive and lets me move to/from it without issues, so I'm not sure why yours is missing it @tri.ler Seeing that both of us had this problem with the same Plex metadata files, I think it must be a mover issue. @trurl I didn't see anything in my logs either, other than the file not found/no such file or directory errors. My file system was good. I'm happy to let mover give it another go to try to reproduce, but to what end if that is the only error? Let me know if I can help test any hypotheses...
  5. Just throwing an idea out there regarding the VM config that came to mind as I read this thread... feel free to entirely disregard. I'm not sure how the current VM XML is formed in the webUI, but could this be improved by either: 1) allowing tags that prevent certain XML elements from being updated from the UI, or 2) parsing the current XML config and only updating the changed elements when applying in the webUI?
  6. FWIW, I'm running two unRAID servers behind a UDMP right now and this is working properly on both. I'm on 1.8.6 firmware. No DNS modifications - using my ISP's DNS. The first server worked right away when I set it up. The second one was giving me the same error as you yesterday but provisioned fine today.
  7. @limetech thanks so much for addressing some of the potential security concerns. I think that despite this, there still needs to be a BIG RED WARNING that port forwarding will expose your unRAID GUI to the general internet, and another BIG RED WARNING about the recommended complexity of your root password in that case. One way to enforce this might be requiring you to enter your root password to turn on the remote webUI feature, and/or a password complexity meter or requirement that must be met to do so. Because most people will access their server from their forum account, they may assume that is the only way to reach their webUI, rather than directly via their external IP. Having 2FA on the webUI would be SUPER nice also 🙂 Yes, this is a little onerous, but probably what is required to keep a large volume of "my server has been hacked" posts from happening around here...
  8. Same here on 6.9.0 stable. Using unBalance/rsync to move for now (example below). Not sure if this is new or old... I don't usually move my appdata except when I needed to reformat the cache for the 1MiB realignment, hence the mover involvement. The array is XFS and the cache is BTRFS. My thought is that the filenames and paths shouldn't be the limiting factor, unless mover can't handle the arguments for some reason...
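     For anyone wanting the same workaround, this is the sort of rsync invocation I mean (paths are examples; stop the Docker service first so nothing holds files open):

     ```bash
     # Copy appdata from the cache pool to an array disk, preserving
     # permissions/ownership (-a), handling sparse files (-S),
     # with human-readable progress output (-h, -P).
     rsync -avhPS /mnt/cache/appdata/ /mnt/disk1/appdata/

     # Only after verifying the copy, remove the source:
     # rm -r /mnt/cache/appdata
     ```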
  9. Just did the update from rc35 to rc1 and got a warning about my cache pool missing devices. The funny thing is that the pool is showing up in the GUI, it appears to work, and I can find nothing obvious in the logs. The array and caches are encrypted but unlock on boot. What am I missing here (I'm sure it's obvious :-)? Thanks. neo-diagnostics-20201214-0905.zip
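     In case it helps anyone else sanity-check a pool the same way, these were the obvious things to look at from the terminal (mount point assumed to be /mnt/cache):

     ```bash
     # List the devices btrfs thinks belong to the pool;
     # a "missing" entry here would confirm the GUI warning.
     btrfs filesystem show /mnt/cache

     # Per-device error counters for the pool members.
     btrfs device stats /mnt/cache
     ```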
  10. Glad it helped @oskarom. I've been stuck in many an adoption loop before - even outside of Docker networking! Adopting a USG into an existing infrastructure can be a real pain... but always worth it in the end :-) Sent from my iPhone using Tapatalk
  11. I wouldn't wait for another hard crash. Much nicer to avoid file system errors with a clean boot than to risk issues. It looks most likely like bad memory. Do the memtest now - if there is bad RAM, it often shows up pretty quickly and you can get on with a warranty RMA. You should also do a file system check on your array and cache as well (see the sketch below). Sent from my iPhone using Tapatalk
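     For the file system check part, this is roughly what I mean (run from Maintenance mode; device names are examples - confirm yours in the GUI first):

     ```bash
     # Dry-run check of an XFS array disk (-n = report only, modify nothing).
     xfs_repair -n /dev/md1

     # Read-only btrfs scrub of the cache pool to surface checksum errors
     # (-B = stay in foreground, -d = per-device stats, -r = read-only).
     btrfs scrub start -Bdr /mnt/cache
     ```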
  12. Thanks for the info @Squid. I'll chalk this up to the slow zipping of the backup then and just use the terminal for backups instead. Looking forward to more good things 🙂 Closing this report.