Posts posted by sreknob

  1. The recommended way is to use the "Replace the USB flash device" procedure from the guide.

     

    When you do this, all of your current settings and shares will be preserved.

     

    If you want to start "fresh", you'd set up a brand-new flash drive as new, boot it, and then use the "New Config" tool, reassigning all your drives to their proper slots and selecting the "parity is already valid" checkbox. There is a risk of data loss here if you don't get the assignments right, though. Alternatively, you can set up a new USB drive and just copy the /config folder from your current drive, which should preserve your array and share configuration as well.
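    If you go the copy route, a minimal sketch looks like this, assuming the new drive has already been prepared with the USB creator and is mounted somewhere like /mnt/newflash (hypothetical mount point):

        # Old flash mounted at /boot, new flash at /mnt/newflash (example mount point).
        cp -r /boot/config /mnt/newflash/          # carries over array, share and plugin settings
        diff -r /boot/config /mnt/newflash/config  # optional sanity check that everything copied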

     

    I would suggest using the method from the guide - what are you hoping to achieve by "starting fresh"?

     

  2. Thanks for the information - I was not aware of the overlay file system issue.

     

    Having said this, I've never had a problem in the past, so perhaps I'm just lucky.

     

    It would make sense if this were somehow integrated into the new update workflow in that case. There is no reason to overwrite the files in the "previous" folder, since the intermediate version was never booted, so "previous" should stay as it is. I suppose it could be tracked by comparing the running version against the version in the "previous" folder before copying over the current files.

  3. 21 minutes ago, itimpi said:

     

    You can always revert to doing things manually as described here in the online documentation accessible via the Manual link at the bottom of the Unraid GUI. 

     

    Thanks itimpi - I am aware of the manual process, it's just nicer to hit a button 🙂

     

    Maybe more of a suggestion for the new method as it matures further.

     

    For now, using the Update Assistant tool seems to work around this issue.

     

    Thanks

  4. Hi All - 

     

    I have one server that I hit the update button on but didn't reboot yet, and now there is a new version.

     

    Is there any way with the new upgrade workflow to complete a "double upgrade", meaning skipping an intermediate version if you previously hit upgrade and didn't reboot or if an upgrade came out shortly after starting an upgrade to the previous version?

     

    In the past, I could just run the update tool again to get to the current version prior to rebooting, but it appears there is no easy way to do this "double upgrade" anymore. I now need to reboot to get to the intermediate version, then reboot a second time after updating again to get to the latest (short of manually doing the upgrade on the flash).
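    For reference, the manual flash route is roughly this - a sketch only, assuming the release zip keeps the bz* files at its top level (URL and version are placeholders):

        # Placeholders: substitute the real download URL and version.
        cd /tmp
        wget https://example.com/unRAIDServer-6.x.y-x86_64.zip
        unzip -o unRAIDServer-6.x.y-x86_64.zip -d /tmp/unraid-update
        cp /tmp/unraid-update/bz* /boot/    # replace the boot images on the flash, then reboot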

     

    Have we lost the ability to do this easily with the new enforced workflow? If so, any chance this could be addressed in future versions of the new update workflow? Or is this already possible and I am just missing something?

     

    Thanks!

     

    EDIT - Looks like I was able to work around this for now by using the old update method. I ran the Update Assistant, which checks for a new version as part of its run, and then clicked the banner notification at the top to update the server. That used the old update method and completed successfully. Still, it would be nice to include this ability in the new update workflow instead.

  5. Back to the thread topic here - is there any way with the new upgrade workflow to complete a "double upgrade"?

     

    For example, I have one server that I hit the update button on but didn't reboot yet, and now there is a new version.

     

    In the past, I could just run the update tool again to get to the current version prior to rebooting, but it appears there is no easy way to do this "double upgrade" anymore. I now need to reboot to get to the intermediate version, then reboot again to get to the latest (short of manually doing the upgrade on the flash).

     

    Any insights here or am I missing something?

     

    Thanks!

     

    Edit - for reference, I was able to get this done by using the old method, triggered via the Update Assistant tool (more discussion here). Perhaps this could be incorporated into the new update workflow somehow by comparing the version in the previous folder to the running version and allowing a new update without overwriting the previous folder again during the upgrade.
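    A rough sketch of that comparison idea, assuming (hypothetically) that the running version can be read from /etc/unraid-version and that a copy of the replaced version lives in the flash's previous folder:

        # The previous/unraid-version path is an assumption for illustration.
        running=$(sed -n 's/version="\(.*\)"/\1/p' /etc/unraid-version)
        previous=$(sed -n 's/version="\(.*\)"/\1/p' /boot/previous/unraid-version 2>/dev/null)
        if [ "$running" = "$previous" ]; then
            echo "previous/ already holds the running version - safe to stage another update without overwriting it"
        else
            echo "previous/ holds $previous - back it up before staging a new update"
        fi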

  6. I just logged onto the forum to get some help with a 12TB drive I was adding to my array that wasn't formatting after pre-clear... If only that upgrade notification had come earlier! I'll report back if 6.11.3 doesn't solve the problem...

     

    Update - Formatting working as expected now. Thanks!

  7. On 5/11/2022 at 1:24 PM, cypres0099 said:


    hmmm ok, so do you think that's the problem I'm having or just a side note that maybe it's not worth trying to fix the issue because mining won't really work with my card?

     

    Hard to do much with a 750 with 2GB.

    Options are something with a small DAG file size (UBIQ) or a coin without a DAG file, like TON.

    T-rex doesn't support either of those options, however.

  8. 8 minutes ago, PTRFRLL said:

    Can you check your config.json and see what the value is set to? I don't use LHR and it seems to run just fine for me. I'm guessing the new lhr-autotune-interval param isn't set

     

    You are correct - I was having a look while you were as well.

     

    The cuda10-latest tag works fine, but 4.2 and latest give the autotune error.

     

    My config had both lhr-autotune-interval and lhr-autotune-step-size set to "0", which are both invalid. I also don't use autotune, but it was set to auto ("-1") in the default config that I used.

     

    I set the following in my config to resolve it:

     

            "lhr-autotune-interval" : "5:120",
            "lhr-autotune-mode" : "off",
            "lhr-autotune-step-size" : "0.1",

     

    For those who do use autotune, you should leave "lhr-autotune-mode" at "-1".

     

    Thanks for the update and having a look. Perhaps this will also help someone else 🙂

     

  9. Can you remove "--runtime=nvidia" under Extra Parameters? It should then be able to start without the runtime error. See where that gets you...

     

    That's beside the point, though - there should be no reason you need to use GUI mode to get Plex going.

     

    If I were you, I'd edit the container's Preferences.xml file and remove the PlexOnlineToken, PlexOnlineUsername, and PlexOnlineMail entries to force the server to be claimed the next time you start the container.

     

    You can find the server configuration file under \appdata\plex\Library\Application Support\Plex Media Server\Preferences.xml
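    A minimal sketch of stripping those attributes with sed - the exact appdata path is an assumption, so adjust it to your setup, stop the container first, and keep a copy of the file:

        # Stop the Plex container and back up the file before editing.
        PREFS="/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Preferences.xml"
        cp "$PREFS" "$PREFS.bak"
        sed -i -e 's/ PlexOnlineToken="[^"]*"//' \
               -e 's/ PlexOnlineUsername="[^"]*"//' \
               -e 's/ PlexOnlineMail="[^"]*"//' "$PREFS"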

     

     

  10. Reallocated sectors are ones that have been successfully remapped to spare areas. As a gross over-simplification, this number going up (especially more than once recently) is an indicator that the disk is likely to fail soon.

    Your smart report shows that you have been slowly gathering uncorrectable errors on that drive for almost a year.

    Although technically you could rebuild to that disk, it has a high chance of dropping out again.

    I would vote to just replace it!
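    If you want to keep an eye on it in the meantime, the relevant SMART attributes can be pulled from the console (the device name below is just an example):

        # /dev/sdX is an example - substitute the device for that drive.
        smartctl -A /dev/sdX | grep -Ei 'Reallocated_Sector|Reported_Uncorrect|Current_Pending'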

  11. Having the same issue with one server not connected to the mothership as well.

     

     

    The funny thing is that it works from the "My Servers" webpage, but when I try to launch it from another server, I have another problem. It tries to open the page with HTTP (no S) to the local hostname at port 443, so I get a 400 (plain HTTP sent to an HTTPS port --> http://titan.local:443).

     

    See the screenshots below and let me know if you want any more info!

     

    [Screenshot: titan-unraid.png]

     

    The menu on the other server shows all normal, but the link doesn't work like it should as noted above - launching http://titan.local:443 instead of https://hash.unraid.net

     

    [Screenshot: helios-unriad.png]

     

    So when I select that, I get a 400:

    [Screenshot: url-titan-443.png]

     

    [Screenshot: http-443.png]

     

    but everything launches fine from the My Servers webpage, which opens the hash.unraid.net address properly!

     

    EDIT1: The mothership problem is fixed with a `unraid-api restart` on that server but not the incorrect address part.

    EDIT2: A restart of the API on the server providing the improper link out corrected the second issue - all working properly now. Something wasn't updating the newly provisioned link back to that server from the online API.

  12. Just throwing an idea out there regarding the VM config that came to mind as I read this thread... feel free to entirely disregard.

    I'm not sure how the current VM XML is formed in the webUI, but could this be improved by either:

    1) Allowing tags to be added that prevent certain XML elements from being updated from the UI.

    2) Parsing the current XML config and only updating the changed elements on apply in the webUI.

  13. @limetech thanks so much for addressing some of the potential security concerns. I think that despite this, there still needs to be a BIG RED WARNING that port forwarding will expose your unRAID GUI to the general internet, and also a BIG RED WARNING about the recommended complexity of your root password in that case. One way to facilitate this might be to require entering your root password to turn on the remote webUI feature, and/or to add a password complexity meter and/or a complexity requirement that must be met to do so.
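    Just to illustrate the kind of gate I mean (purely hypothetical - not an existing Unraid feature), a minimal complexity check could look like this:

        # Hypothetical password-complexity gate before enabling remote access.
        check_password() {
            local pw="$1"
            [ "${#pw}" -ge 12 ] || { echo "too short (12+ characters)"; return 1; }
            echo "$pw" | grep -q '[A-Z]' || { echo "needs an uppercase letter"; return 1; }
            echo "$pw" | grep -q '[0-9]' || { echo "needs a digit"; return 1; }
            echo "ok"
        }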

     

    The fact that most people will access their server through their forum account might make them assume that this is the only way to reach their webUI, when it is also reachable directly via their external IP.

     

    Having 2FA on the WebUI would be SUPER nice also 🙂

     

    22 hours ago, limetech said:

    Let's suppose we have these additional mitigations in place. If your server reboots for some reason while you are away (perhaps power failure/restore), here's what you would do to get things going again:

    1. login to forum: specify username, password, enter 2FA code from your phone.
    2. click server remote access link: specify password, enter different 2FA code from your phone (once implemented)
    3. enter encryption password, click Start to bring up array
    4. enter flash backup encryption password, re-enable automatic flash backup (once implemented)
    5. start any services which are not set to autostart

    To me this seems fairly onerous but in the interest of maximum security, is probably what has to be done.

     

    Yes, this is a little onerous, but it's probably what is required to keep a large volume of "my server has been hacked" posts from happening around here...

     

  14. I wouldn't wait for another hard crash - much nicer to avoid file system errors with a clean boot than to risk further issues.
    Looks most likely like bad memory. Run memtest now; if there is bad RAM, it usually shows up pretty quickly and you can get on with a warranty RMA.
    You should also do a file system check on your array and cache drives.
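    For the array disks, a read-only XFS check from the console looks roughly like this (run with the array started in Maintenance Mode; the md device number is just an example):

        # Array started in Maintenance Mode; md1 is an example device for disk 1.
        xfs_repair -n /dev/md1   # -n = report problems only, change nothing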



  15. Thanks @Cpt. Chaz and @nitewolfgtr for this.

    Just one minor thing - might want to double quote $BACKUP_DIR in the code to prevent globbing and word splitting in case of backup directories with spaces.
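    For example (illustrative path only), the difference shows up with a backup directory containing a space:

        # Unquoted vs quoted - hypothetical path with a space to show why quoting matters.
        BACKUP_DIR="/mnt/user/backups/flash backups"
        mv /mnt/user/system/*-flash-backup-*.zip $BACKUP_DIR     # word-splits into two arguments and fails
        mv /mnt/user/system/*-flash-backup-*.zip "$BACKUP_DIR"   # treated as a single path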

     

    EDIT: Does not appear to work on 6.9b30 as the backups are being stored on /mnt/user/system rather than /

    The GUI backup appears not to work due to this either. I've posted on the beta thread to see if it's just me...

     

    EDIT 2: For 6.9b30, the move line needs to be changed to reflect the new location; it then works nicely:

    echo 'Move Flash Zip Backup from /mnt/user/system to Backup Destination'
    mv /mnt/user/system/*-flash-backup-*.zip "$BACKUP_DIR"