Everything posted by sreknob

  1. Hard to do much with a 750 with 2GB. Your options are a coin with a small DAG file size (UBIQ) or a coin without a DAG file, like TON. T-Rex doesn't support either of those options, however.
  2. You are correct - I was having a look while you were as well. The cuda10-latest tag works fine, but 4.2 and latest give the autotune error. My config had both lhr-autotune-interval and lhr-autotune-step-size set to "0", which are both invalid. I don't actually use autotune either, but it was set to auto ("-1") from the default config that I started with. I set the following in my config to resolve it: "lhr-autotune-interval" : "5:120", "lhr-autotune-mode" : "off", "lhr-autotune-step-size" : "0.1". For those that do use autotune, you should leave "lhr-autotune-mode" at "-1". Thanks for the update and having a look. Perhaps this will also help someone else 🙂
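For anyone copying this, the relevant fragment of the config file would look something like the sketch below (just these three keys with the values quoted above; the rest of my config is omitted):

```json
{
  "lhr-autotune-interval": "5:120",
  "lhr-autotune-mode": "off",
  "lhr-autotune-step-size": "0.1"
}
```

Note that the interval and step size both have to be positive values even when the mode is off.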
  3. FYI, on the latest tag just pushed, I'm now getting: ERROR: Can't start T-Rex, LHR autotune increment interval must be positive. Cheers!
  4. Can you remove the "--runtime=nvidia" under extra parameters - you should be able to start without the runtime error then. See where that gets you... That's beside the point though; there should be no reason that you need to use GUI mode to get Plex going. If I were you, I'd edit the container's Preferences.xml file and remove the PlexOnlineToken, PlexOnlineUsername, and PlexOnlineMail entries to force the server to be claimed the next time you start the container. You can find the server configuration file under \appdata\plex\Library\Application Support\Plex Media Server\Preferences.xml
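If you'd rather script the edit than do it by hand, a small sketch like this works (the appdata path is an assumption based on the default unRAID mapping, and `unclaim` is just a hypothetical helper name; stop the container before running it):

```shell
#!/bin/sh
# Sketch: strip the Plex claim attributes from Preferences.xml so the server
# asks to be claimed again on next start. The path is an assumed default mapping.
PREFS="/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Preferences.xml"

unclaim() {
    cp "$1" "$1.bak"  # keep a backup before editing
    # Delete the three claim attributes in place; everything else is untouched.
    sed -i -E 's/ (PlexOnlineToken|PlexOnlineUsername|PlexOnlineMail)="[^"]*"//g' "$1"
}

if [ -f "$PREFS" ]; then
    unclaim "$PREFS"
fi
```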
  5. Reallocated sectors are ones that have been successfully moved. As a gross over-simplification, this number going up (especially if more than once recently) is an indicator that the disk is likely to fail soon. Your SMART report shows that you have been slowly gathering uncorrectable errors on that drive for almost a year. Although you could technically rebuild to that disk, it has a high chance of dropping out again. I would vote to just replace it!
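If you want to keep an eye on those counters yourself, you can filter a smartctl report down to the failure-predicting attributes. A minimal sketch (`watch_counters` is a hypothetical name; the attribute names are standard SMART fields):

```shell
#!/bin/sh
# Sketch: filter a `smartctl -A /dev/sdX` report down to the counters that
# tend to predict drive failure.
watch_counters() {
    grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
}
# Typical use: smartctl -A /dev/sdb | watch_counters
```

Any of these climbing over time is a good reason to plan a replacement.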
  6. Having the same issue with one server not connected to the mothership. The funny thing is that it is working from the "My Servers" webpage, but when I try to launch it from another server, I have another problem: it tries to launch a webpage with HTTP (no S) to the local hostname at port 443, so I get a 400 (https to non-https port --> http://titan.local:443). See the screenshots below and let me know if you want any more info! The menu on the other server shows all normal, but the link doesn't work like it should as noted above - launching http://titan.local:443 instead of the correct address. So when I select that, I get a 400, but everything launches properly from the webUI! EDIT1: The mothership problem is fixed with a `unraid-api restart` on that server, but not the incorrect address part. EDIT2: A restart of the API on the server providing the improper link corrected the second issue - all working properly now. Something wasn't updating the newly provisioned link back to that server from the online API.
  7. Sorry I'm late getting back. I just used unBalance and moved on! unBalance shows my cache drive and allows me to move to/from it without issues, so I'm not sure why yours is missing it @tri.ler. Seeing that both of us had this problem with the same Plex metadata files, I think it must be a mover issue. @trurl I didn't see anything in my logs either, other than the file not found/no such file or directory errors. My file system was good. I'm happy to let mover give it another go to try to reproduce the problem, but to what end if that is the only error? Let me know if I can help test any hypotheses...
  8. Just throwing an idea out there regarding the VM config that came to mind as I read this thread... feel free to entirely disregard it. I'm not sure how the current VM XML is formed in the webUI, but could this be improved by either: 1) allowing tags that prevent an update of certain XML elements from the UI, or 2) parsing the current XML config and only updating the changed elements on apply in the webUI?
  9. FWIW, I'm running two unRAID servers behind a UDMP right now with this working properly. I'm on 1.8.6 firmware. No DNS modifications, using ISP DNS. The first server worked right away when I set it up. The second one was giving me the same error as you yesterday but provisioned fine today.
  10. @limetech thanks so much for addressing some of the potential security concerns. I think that despite this, there still needs to be a BIG RED WARNING that port forwarding will expose your unRAID GUI to the general internet, and also a BIG RED WARNING about the recommended complexity of your root password in this case. One way to facilitate this might be requiring that you enter your root password to turn on the remote webUI feature, and/or meeting a password complexity meter or requirement to do so. The fact that most people will access their server from their forum account might make them assume that this is the only way to reach their webUI, rather than directly via their external IP. Having 2FA on the webUI would be SUPER nice also 🙂 Yes, this is a little onerous, but probably what is required to keep a large volume of "my server has been hacked" posts from happening around here...
  11. Same here on 6.9.0 stable. Using unBalance/rsync to move for now. Not sure if this is new or old... I don't usually move my appdata except when needing to format the cache for the 1MiB realignment, hence the mover involvement. Array is XFS and cache is BTRFS. My thought is that the filenames and paths shouldn't be the limiting factor, unless mover can't handle the arguments for some reason...
  12. Just did the update from rc35 to rc1 and had a warning about my cache pool missing devices. Funny thing is that it's showing up in the GUI, the pool appears to work and I can find nothing obvious in the logs. Array and caches are encrypted but unlock on boot. What am I missing here (I'm sure it's obvious :-)? Thanks.
  13. Glad it helped @oskarom. I’ve been stuck in many an adoption loop before - even outside of docker networking! Adopting a USG into an existing infrastructure can be a real pain... but always worth it in the end :-) Sent from my iPhone using Tapatalk
  14. I wouldn’t wait for another hard crash. Much nicer to avoid file system errors with a clean boot than to risk issues. It looks most likely like bad memory. Do the memtest now - if there is bad RAM, it often shows up pretty quickly and you can get on with a warranty RMA. You should also do a file system check on your array and cache as well. Sent from my iPhone using Tapatalk
  15. Thanks for the info @Squid. I'll chalk this up to the slow zipping of the backup then and just use the terminal for backups instead. Looking forward to more good things 🙂 Closing this report.
  16. When I attempt to use the Flash Backup in the GUI, the zip is created but a download never starts. Logs show:
      Nov 9 11:21:16 Neo nginx: 2020/11/09 11:21:16 [error] 27222#27222: *3553324 upstream timed out (110: Connection timed out) while reading response header from upstream, client:, server: , request: "POST /webGui/include/Download.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "neo.local", referrer: "http://neo.local/Main/Flash?name=flash"
      Granted, this particular server has old hardware and the zip takes a while to complete, but even after checking that the zip is finished in /mnt/user/system, the download does not initiate. Any chance this is due to the backup location moving out of root? Thanks!
  17. For me, flash backup appears not to work. It creates the zip in /mnt/user/system (moved from previous versions?) but the download from the webpage never starts. I presume it's because emhttp is expecting the backup file in the root rather than in system?
  18. Thanks @Cpt. Chaz and @nitewolfgtr for this. Just one minor thing - you might want to double-quote $BACKUP_DIR in the code to prevent globbing and word splitting in case of backup directories with spaces. EDIT: This does not appear to work on 6.9b30, as the backups are being stored in /mnt/user/system rather than /. The GUI backup appears not to work due to this as well. I've posted in the beta thread to see if it's just me... EDIT 2: For 6.9b30, the move line needs to be changed to reflect the new location; then it works nicely:
      echo 'Move Flash Zip Backup from Root to Backup Destination'
      mv /mnt/user/system/*-flash-backup-*.zip "$BACKUP_DIR"
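Putting the quoting fix and the new location together, the whole move step looks something like this sketch (`move_backup` is a hypothetical helper name; $BACKUP_DIR is the destination variable from the original script):

```shell
#!/bin/sh
# Sketch: move the flash backup zip to the backup destination.
# $1 = directory the zip lands in (on 6.9b30: /mnt/user/system), $2 = destination.
move_backup() {
    # The glob must stay unquoted so it expands; quoting "$2" protects
    # destinations that contain spaces.
    mv "$1"/*-flash-backup-*.zip "$2"
}
# Typical use: move_backup /mnt/user/system "$BACKUP_DIR"
```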
  19. I know this is old, but just posting because it is the first hit on Google for "unraid luks2". I just created an encrypted array drive and it is indeed LUKS2. You can check by running `cryptsetup luksDump /dev/mdX`, where 'X' is your drive number, and it will output the keyslots and keyslot area. The default is a 16MB keyslot area for LUKS2, and under keyslots it will list the type as "luks2".
  20. @edgespresso
      - The repeat host ports are for TCP and UDP, which are listed separately for docker.
      - LOCAL Xbox users do not need phantom; they will already see the LAN game automatically.
      - The use case for this is that it needs to run on the same network as the REMOTE Xbox (not on the same network as your server), and phantom creates a bridge to make it look like your server is on the same LAN as that REMOTE Xbox.
      @binhex There is something wrong with the template, or with the script not constructing the "MINECRAFT_SERVER" variable correctly, as when starting the container I get:
      2020-06-06 13:29:48,460 INFO Included extra file "/etc/supervisor/conf.d/phantom.conf" during parsing
      2020-06-06 13:29:48,460 INFO Set uid to user 0 succeeded
      2020-06-06 13:29:48,463 INFO supervisord started with pid 7
      2020-06-06 13:29:49,465 INFO spawned: 'start-script' with pid 57
      2020-06-06 13:29:49,465 INFO reaped unknown pid 8
      2020-06-06 13:29:49,471 DEBG 'start-script' stdout output: [crit] No Minecraft Bedrock server specified via env var 'MINECRAFT_SERVER', exiting...
      This is using the template with no modifications except entering an address for REMOTE_MINECRAFT_IP. I also noted that you pinned the "latest" release tag as v0.3.1 in your pull rather than the latest release; Phantom is now at 0.5.1. Thanks!
  21. Just to follow up... the RAM was all good. It seems that when I cleared my docker image and reset my docker networking, I uncovered a different problem. The more recent crashes actually seem to be due to some sort of conflict between my custom docker networking and maybe the nvidia driver. I've turned off Folding@home, which was using the GPU, and there has been no issue since then. What clued me in was the modules from the call trace:
      Modules linked in: nvidia_uvm(O) tun macvlan xt_nat veth ipt_MASQUERADE iptable_filter iptable_nat nf_nat_ipv4 nf_nat ip_tables xfs md_mod nct6775 hwmon_vid nvidia_drm(PO) nvidia_modeset(PO) nvidia(PO) crc32_pclmul intel_rapl_perf intel_uncore pcbc aesni_intel aes_x86_64 glue_helper crypto_simd ghash_clmulni_intel cryptd drm_kms_helper kvm_intel kvm drm intel_cstate coretemp mpt3sas r8169 syscopyarea sysfillrect crct10dif_pclmul sysimgblt mxm_wmi fb_sys_fops intel_powerclamp crc32c_intel agpgart i2c_i801 i2c_core x86_pkg_temp_thermal wmi ahci realtek video libahci raid_class pata_jmicron cp210x backlight usbserial button pcc_cpufreq scsi_transport_sas
      Looking at this thread --> [6.5.0]+ Call Traces when assigning IP to Dockers, and the fact that the nvidia module was linked, I just turned off F@H and the crashes went away. When I move this back over to Plex, I may have to create a different docker network as in the linked thread if the problem returns.