sreknob

Everything posted by sreknob

  1. Can you remove "--runtime=nvidia" under extra parameters? It should be able to start without the runtime error then. See where that gets you... Beside the point though, there should be no reason you need to use GUI mode to get Plex going. If I were you, I'd edit the container's Preferences.xml file and remove the PlexOnlineToken, PlexOnlineUsername and PlexOnlineMail entries to force the server to be claimed the next time you start the container. You can find the server configuration file under \appdata\plex\Library\Application Support\Plex Media Server\Preferences.xml
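     If you'd rather do it from the command line, here is a rough sketch (stop the container first; the path assumes the usual /mnt/user/appdata/plex layout, so adjust it to yours):
     # back up the config, then strip the claim-related attributes so the server asks to be claimed on next start
     cfg="/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Preferences.xml"
     cp "$cfg" "$cfg.bak"
     sed -i 's/PlexOnlineToken="[^"]*" *//; s/PlexOnlineUsername="[^"]*" *//; s/PlexOnlineMail="[^"]*" *//' "$cfg"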
  2. Reallocated sectors are ones that have been successfully moved. A gross over-simplification: this number going up (especially if it has gone up more than once recently) is an indicator that the disk is likely to fail soon. Your SMART report shows that you have been slowly gathering uncorrectable errors on that drive for almost a year. Although technically you could rebuild to that disk, it has a high chance of dropping out again. I would vote to just replace it!
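     If you want to keep an eye on those counters yourself, a quick check from the terminal looks something like this (the device is just an example, substitute your drive):
     # dump SMART attributes and pull out the reallocation / pending / uncorrectable counters
     smartctl -A /dev/sdb | grep -Ei 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'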
  3. Having the same issue with one server not connected to the mothership. The funny thing is that it is working from the "My Servers" webpage, but when I try to launch it from another server, I have another problem: it tries to open the page with HTTP (no S) to the local hostname at port 443, so I get a 400 (HTTPS to a non-HTTPS port --> http://titan.local:443). See the screenshots below and let me know if you want any more info! The menu on the other server shows all normal, but the link doesn't work like it should as noted above, launching http://titan.local:443 instead of https://hash.unraid.net. So when I select that, I get a 400, but everything launches well from the webUI, which opens the hash.unraid.net address properly! EDIT1: The mothership problem is fixed with a `unraid-api restart` on that server, but not the incorrect address part. EDIT2: A restart of the API on the server providing the improper link corrected the second issue; all working properly now. Something wasn't updating the newly provisioned link back to that server from the online API.
  4. Sorry I'm late getting back. I just used unBalance and moved on! unBalance shows my cache drive and allows me to move to/from it without issues, so I'm not sure why yours is missing it @tri.ler. Seeing that both of us had this problem with the same Plex metadata files, I think it must be a mover issue. @trurl I didn't see anything in my logs either, other than the file not found / no such file or directory errors. My file system was good. I'm happy to let mover give it another go to try to reproduce, but to what end if that is the only error? Let me know if I can help test any hypotheses...
  5. Just throwing an idea out there regarding the VM config that came to mind as I read this thread... feel free to entirely disregard. I'm not sure how the current VM XML is formed in the webUI, but could this be improved by either: 1) allowing tags that prevent certain XML elements from being updated from the UI, or 2) parsing the current XML config and only updating the changed elements on apply in the webUI?
  6. FWIW, I'm running two unRAID servers behind a UDMP right now with this working properly. I'm on 1.8.6 firmware. No DNS modifications, using ISP DNS. The first server worked right away when I set it up. The second one was giving me the same error as you yesterday but provisioned fine today.
  7. @limetech thanks so much for addressing some of the potential security concerns. I think that despite this, there still needs to be a BIG RED WARNING that port forwarding will expose your unRAID GUI to the general internet, and also a BIG RED WARNING about the recommended complexity of your root password in this case. One way to facilitate this might be requiring the root password to be entered to turn on the remote webUI feature, and/or having a password complexity meter or a complexity requirement that must be met to do so. The fact that most people will think they can access their server from their forum account might make them assume that this is the only way to access their webUI, rather than directly via their external IP. Having 2FA on the webUI would be SUPER nice also 🙂 Yes, this is a little onerous, but probably what is required to keep a large volume of "my server has been hacked" posts from happening around here...
  8. Same here in 6.9.0 stable. Using unBalance/rsync to move currently. Not sure if this is new or old... I don't usually move my appdata except for needing to format the cache for the 1MiB realignment, hence the mover involvement. The array is XFS and the cache is BTRFS. My thought is that the filenames and paths shouldn't be the limiting factor, unless mover can't handle the arguments for some reason...
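     For reference, the manual move I'm doing is nothing fancy; roughly this, assuming the appdata landed on disk1 and the pool is mounted at /mnt/cache (example paths only):
     # copy appdata from the array disk back to the cache pool, preserving attributes
     rsync -avh --progress /mnt/disk1/appdata/ /mnt/cache/appdata/
     # only after verifying the copy, remove the source from the array disk
     # rm -r /mnt/disk1/appdata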
  9. Just did the update from beta35 to rc1 and had a warning about my cache pool missing devices. The funny thing is that it's showing up in the GUI, the pool appears to work, and I can find nothing obvious in the logs. The array and caches are encrypted but unlock on boot. What am I missing here? (I'm sure it's obvious :-) Thanks. neo-diagnostics-20201214-0905.zip
  10. Glad it helped @oskarom. I’ve been stuck in many an adoption loop before - even outside of docker networking! Adopting a USG into an existing infrastructure can be a real pain... but always worth it in the end :-) Sent from my iPhone using Tapatalk
  11. I wouldn’t wait for another hard crash. Much nicer to avoid file system errors with a clean boot than to risk issues. It looks most likely like bad memory. Do the memtest now; if there is bad RAM it often shows up pretty quickly and you can get on with a warranty RMA. You should also do a file system check on your array and cache as well. Sent from my iPhone using Tapatalk
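     For the file system check, a minimal sketch from the console with the array started in maintenance mode (assuming an XFS array disk; md1 corresponds to disk 1 in your array):
     # read-only check first, to see what it would do
     xfs_repair -n /dev/md1
     # if it reports problems, run it again without -n to actually repair
     # xfs_repair /dev/md1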
  12. Thanks for the info @Squid. I'll chalk this up to the slow zipping of the backup then and just use the terminal for backups instead. Looking forward to more good things 🙂 Closing this report.
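     For anyone else going the terminal route, it's really just zipping the flash drive yourself; a rough sketch, with the destination share here being purely an example:
     # zip the flash drive contents to a dated file on the array
     cd /boot && zip -r "/mnt/user/backups/$(hostname)-flash-backup-$(date +%Y%m%d).zip" .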
  13. When I attempt to use the Flash Backup in the GUI, the zip is created but a download never starts. Logs show:
     Nov 9 11:21:16 Neo nginx: 2020/11/09 11:21:16 [error] 27222#27222: *3553324 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.20.5, server: , request: "POST /webGui/include/Download.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "neo.local", referrer: "http://neo.local/Main/Flash?name=flash"
     Granted, this particular server has old hardware and the zip takes a while to complete, but even when checking that the zip is finished in /mnt/user/system, the download does not initiate. Any chance this is due to the backup location moving from root? Thanks!
  14. For me, flash backup appears to not work. It creates the zip in /mnt/user/system (?moved from previous versions) but the download from the webpage never starts. I presume it's because emhttp is expecting the backup file in the root rather than in system?
  15. Thanks @Cpt. Chaz and @nitewolfgtr for this. Just one minor thing - you might want to double quote $BACKUP_DIR in the code to prevent globbing and word splitting in case of backup directories with spaces. EDIT: Does not appear to work on 6.9b30, as the backups are being stored in /mnt/user/system rather than / The GUI backup appears to also not work due to this. I've posted on the beta thread to see if it's just me.... EDIT 2: For 6.9b30 the move line needs to be changed to reflect the new location, then it works nicely.
     echo 'Move Flash Zip Backup from Root to Backup Destination'
     mv /mnt/user/system/*-flash-backup-*.zip "$BACKUP_DIR"
  16. I know this is old, but just posting because it is the first hit on Google for "unraid luks2". I just created an encrypted array drive and it is indeed LUKS2. You can check by running cryptsetup luksDump /dev/mdX, where 'X' is your drive number, and it will output the keyslots and keyslot area. The default is a 16MB keyslot area for LUKS2, and under keyslots it will list the type "luks2".
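     Abridged example of what the relevant parts of that dump look like on a LUKS2 volume (the device path and exact byte count here are illustrative; fields vary a bit between cryptsetup versions):
     # cryptsetup luksDump /dev/md1
     LUKS header information
     Version:        2
     Keyslots area:  16744448 [bytes]
     ...
     Keyslots:
       0: luks2
             Key:    512 bits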
  17. @edgespresso
     - The repeat host ports are for TCP and UDP, which are listed separately for Docker.
     - LOCAL Xbox users do not need phantom; they will already see the LAN game automatically.
     - The use case for this is that it needs to run on the same network as the REMOTE Xbox (not on the same network as your server), and phantom creates a bridge to make it look like your server is on the same LAN as that REMOTE Xbox.
     @binhex There is something wrong with the template or the start.sh script; it is not constructing the 'MINECRAFT_SERVER' variable correctly, as when starting the container I get:
     2020-06-06 13:29:48,460 INFO Included extra file "/etc/supervisor/conf.d/phantom.conf" during parsing
     2020-06-06 13:29:48,460 INFO Set uid to user 0 succeeded
     2020-06-06 13:29:48,463 INFO supervisord started with pid 7
     2020-06-06 13:29:49,465 INFO spawned: 'start-script' with pid 57
     2020-06-06 13:29:49,465 INFO reaped unknown pid 8
     2020-06-06 13:29:49,471 DEBG 'start-script' stdout output: [crit] No Minecraft Bedrock server specified via env var 'MINECRAFT_SERVER', exiting...
     This is using the template with no modifications except entering an address for REMOTE_MINECRAFT_IP. I also noted that you pinned the "latest" release tag as v0.3.1 in your pull rather than the latest release; Phantom is now at 0.5.1. Thanks!
  18. Just to follow-up.... RAM was all good. It seems that when I cleared my docker image and reset my docker networking, I uncovered a different problem. The more recent crashes actually seem to be due to some sort of conflict between my custom docker networking and maybe the nvidia driver. I've turned off Folding@home, which was using the GPU, and there has been no issue since then. What clued me in was the modules from the call trace:
     Modules linked in: nvidia_uvm(O) tun macvlan xt_nat veth ipt_MASQUERADE iptable_filter iptable_nat nf_nat_ipv4 nf_nat ip_tables xfs md_mod nct6775 hwmon_vid nvidia_drm(PO) nvidia_modeset(PO) nvidia(PO) crc32_pclmul intel_rapl_perf intel_uncore pcbc aesni_intel aes_x86_64 glue_helper crypto_simd ghash_clmulni_intel cryptd drm_kms_helper kvm_intel kvm drm intel_cstate coretemp mpt3sas r8169 syscopyarea sysfillrect crct10dif_pclmul sysimgblt mxm_wmi fb_sys_fops intel_powerclamp crc32c_intel agpgart i2c_i801 i2c_core x86_pkg_temp_thermal wmi ahci realtek video libahci raid_class pata_jmicron cp210x backlight usbserial button pcc_cpufreq scsi_transport_sas
     Looking at this thread --> [6.5.0]+ Call Traces when assigning IP to Dockers, and the fact that the nvidia module was linked, I just turned off F@H and it went away. When I move this back over to Plex, I may have to create a different docker network as in the linked thread if it returns.
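     If I do need the separate network when moving the GPU back to Plex, my understanding from the linked thread is that it's just a dedicated custom network for that container; something like this, with the network name and subnet made up for illustration:
     # create a dedicated bridge network so the GPU container isn't on the shared custom/macvlan setup
     docker network create --driver bridge --subnet 172.30.0.0/24 plexnet
     # then select "plexnet" as the network type in the container template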
  19. Thanks, that was going to be my next step. I actually have a new set of RAM in the mail as of last week as I was planning on adding more, so I'll save my memtest downtime for the new sticks and test the current modules in another machine afterwards. Will update here with anything new! Thanks again.
  20. May 20 21:39:43 smtower kernel: Out of memory: Kill process 7793 (makemkvcon) score 820 or sacrifice child
     You are running 'makemkvcon', which appears to be using too much memory and is getting killed. I would recommend checking with the MakeMKV forums about memory use, or see if there is a particular file it is having this error with...
  21. Thanks for your help so far johnnie! As an update, I wiped my cache drives, reformatted to a new cache pool, and recreated my docker image. I also ran XFS repair on all my array drives. No lost+found folders were created and no other obvious error was flagged. Things seemed OK for a bit, but then I had another crash. Different error this time. It seems to only happen overnight; not sure if mover is to blame. I set up remote logging over the network after my last issues, so I have a system log of the crash, but I was unable to retrieve diagnostics at the time because the server was completely unresponsive, with not even a console active. I've included an updated diagnostics file from just now, since I rebooted this morning, to show the current configs and status. Here are some log entries from just before it became unresponsive and stopped logging:
     May 22 03:22:25 unRAID crond[1904]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
     >>>
     May 22 04:40:14 unRAID kernel: BUG: unable to handle kernel paging request at ffffffff820e0740
     May 22 04:40:14 unRAID kernel: PGD 4e0e067 P4D 4e0e067 PUD 4e0f063 PMD 41d500063 PTE 800ffffffaf1f062
     May 22 04:40:14 unRAID kernel: Oops: 0002 [#1] SMP PTI
     >>>
     May 22 04:40:15 unRAID kernel: BUG: unable to handle kernel NULL pointer dereference at 0000000000000030
     May 22 04:40:15 unRAID kernel: PGD 0 P4D 0
     May 22 04:40:15 unRAID kernel: Oops: 0000 [#2] SMP PTI
     >>>
     I've truncated the traces for the brevity of this post; the more complete syslog and full traces are attached. Any thoughts or advice highly appreciated. unraid-2020-05-22-crash.log unraid-diagnostics-20200522-1935.zip