hmoney007

Members · 21 posts

Posts posted by hmoney007

  1. On 6/30/2022 at 6:40 PM, tjb_altf4 said:

    fpcalc is a fingerprinting app, so I think Lidarr is scanning your library trying to identify music.

     

I would use the CPU pinning options in the Docker template (toggle advanced view) and give it 2-4 vcores; that way it won't cripple your server.

Yeah, this puts any cores I give it at 100%...

  2. On 5/5/2022 at 2:39 PM, Zeze21 said:

    Hi Guys,

I tried updating my Nextcloud via the integrated updater, but unfortunately I ran into several problems I was not able to solve myself. I have checked this thread and several other sources (like the Nextcloud forums), and unfortunately I got stuck at every solution I was able to find. But first things first, here's what happened:

1.) I started the web-based updater: I get to step 3, then I get the following error:

    <html>
    <head><title>504 Gateway Time-out</title></head>
    <body>
    <center><h1>504 Gateway Time-out</h1></center>
    <hr><center>nginx</center>
    </body>
    </html>
    <!-- a padding to disable MSIE and Chrome friendly error page -->
    <!-- a padding to disable MSIE and Chrome friendly error page -->
    <!-- a padding to disable MSIE and Chrome friendly error page -->
    <!-- a padding to disable MSIE and Chrome friendly error page -->
    <!-- a padding to disable MSIE and Chrome friendly error page -->
    <!-- a padding to disable MSIE and Chrome friendly error page -->

    On clicking Retry Update I get:

    Step 3 is currently in process. Please reload this page later.

If I delete the .step file, I get back to the first problem.

    2.) So I tried: 

    docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/updater/updater.phar

I get an error at step 6 (Extracting):

    [✔] Check for expected files
    [✔] Check for write permissions
    [✔] Create backup
    [✔] Downloading
    [✔] Verify integrity
    [ ] Extracting ...PHP Warning:  require(/config/www/nextcloud/updater/../version.php): failed to open stream: No such file or directory in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 658
    PHP Fatal error:  require(): Failed opening required '/config/www/nextcloud/updater/../version.php' (include_path='.:/usr/share/php7') in phar:///config/www/nextcloud/updater/updater.phar/lib/Updater.php on line 658

I tried copying the version.php into that folder, but that did not work either.

3.) The occ command does not work at all; at least, I can't get it to work.

     

SOLUTION (found while typing this; I am leaving everything above so others with the same error can find it more easily)

I copied the missing file (version.php) from nextcloud/update_somerandomstring/downloads/nextcloud/ to the appdata/nextcloud/www/nextcloud/ folder, and also the shipped.json to the /core/ subfolder. Then

    docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/updater/updater.phar

    works just fine...

     

Hey, I am unable to figure out where shipped.json should be moved to. I found it in the "nextcloud/update_somerandomstring/downloads/nextcloud/core" folder but don't know where it needs to be copied to. I'd appreciate any help.

     

Edit: I figured it out. Make a new folder called core inside appdata/nextcloud/www/nextcloud and move the shipped.json file into it.
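Putting the whole fix together as a sketch, staged under a temp dir so the commands can be tried safely. On a real server the paths live under your appdata share, and "update_demo" below is just a placeholder for the real update_somerandomstring folder:

```shell
# Staged demo of the fix: copy version.php into the Nextcloud web root
# and shipped.json into a core/ subfolder there. Adjust both paths to
# your actual appdata share and update_<randomstring> folder.
APPDATA=$(mktemp -d)                                  # stands in for appdata/
NC="$APPDATA/nextcloud/www/nextcloud"                 # Nextcloud web root
UPD="$APPDATA/nextcloud/update_demo/downloads/nextcloud"

mkdir -p "$NC" "$UPD/core"
touch "$UPD/version.php" "$UPD/core/shipped.json"     # placeholder files for the demo

cp "$UPD/version.php" "$NC/version.php"               # step 1: restore version.php
mkdir -p "$NC/core"                                   # step 2: create core/ ...
cp "$UPD/core/shipped.json" "$NC/core/shipped.json"   # ... and restore shipped.json
```

After that, re-running the updater.phar command from the post should get past the Extracting step.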

  3. 3 hours ago, TrCl said:
    
    [warn] Unable to successfully download PIA json to generate token from URL 'https://privateinternetaccess.com/gtoken/generateToken'

     

    I've been getting this message for the past few hours and can't access the webui or start the container. Is this a PIA issue?

    Same issue here

  4. Nginx is erroring out with this repeated in logs:

     

    nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/npm-26/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/letsencrypt/live/npm-26/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)

     

I woke up to backup/restore v2 having completed and no containers/VMs exposed. I figured out it was nginx that shit the bed and didn't come back up after the backup. The backup simply kills all containers, backs them up, then starts them again, so no changes to my config took place. I tried rolling back to v1.16.0 with no luck.

     

There is no npm-26 anymore; that was an old proxy host that is no longer used.

     

    Please let me know what I should do to get this up again :)

     

     

*EDIT* - I just went to the "live" folder, copied another proxy host folder, and renamed the copy npm-26, and now it works. Seems like something is broken on the code side, but this is a quick and dirty workaround.
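For anyone else hitting this, the workaround can be sketched like so, staged in a temp dir. In the real container the folder is /etc/letsencrypt/live, and "npm-12" here is only a placeholder for whichever existing, working proxy-host folder you copy:

```shell
# Staged demo of the workaround: clone an existing proxy-host folder and
# rename the copy npm-26 so nginx finds the certificate path it still
# references. "npm-12" is a placeholder for a real host folder.
LIVE=$(mktemp -d)                          # stands in for /etc/letsencrypt/live
mkdir -p "$LIVE/npm-12"
touch "$LIVE/npm-12/fullchain.pem" "$LIVE/npm-12/privkey.pem"   # demo cert files

cp -r "$LIVE/npm-12" "$LIVE/npm-26"        # quick and dirty: nginx just needs
                                           # *a* cert pair at the npm-26 path
```

The certificates won't match the old npm-26 hostname, but nginx only needs the files to exist in order to start.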

  5. On 4/19/2021 at 2:05 PM, cheesemarathon said:

    Have you tried reverting to version 4.0? You can do this by opening the container settings in the unRAID UI and then changing the repository value from "ghost" to "ghost:4"

     

With regard to your friend's question: I do not provide the container image. The Ghost organisation builds and manages the image; I just provide a template for unRAID. This template will automatically use the latest image provided by the Ghost organisation. There should be no issue with this, but if you are concerned you can pin the version you run the same way I describe in the previous paragraph, or you can turn off automatic updates for the ghost container and just run them manually once you are happy the update is stable.

     

As for removing the index: if you're having the same issue as one of the other commenters (I've not seen your logs, so I can't be sure), then running the below should remove the index causing issues. I would back up your DB before touching it, however. Make sure that the ghost container is not running whilst you make changes.

     

    
alter table `migrations_lock` drop key migrations_lock_lock_key_unique;

     

    As for the issues with mysql-workbench I would recommend HeidiSQL for Windows and DBeaver for Mac. Both work well with Maria/MySQL databases. Failing this, ask for help at the Ghost forum where this issue may have been seen before.

     

     

Thanks for the detailed response. Yes, I did attempt forcing the 4.0 branch, but the errors from the DB were still present. I will do as you suggested, take a fresh backup of the DB prior to attempting the fix, and report back. I'll also take a look at your suggested replacements for mysql-workbench.

     

On another note, I am curious if there would be a major downside to baking MariaDB into the container. I noticed my MariaDB container uses like 30 MB of RAM, and CPU sits at 0% usage with multiple containers writing to it. Either way, thanks again for the response!

  6. On 4/4/2021 at 12:10 PM, cheesemarathon said:

    Have you tried removing the index as mentioned earlier?

I am not sure how to do that. I installed mysql-workbench, but it doesn't seem to play nicely with MariaDB. Also, per the same suggestion you are mentioning above, the revert to 4.0 was suggested *instead of* touching the database. It would be greatly appreciated if you would provide the command(s) needed to remove the index.

     

Edit: when I asked a friend to take a look, he wondered if there was an issue with the container provided by you, since it "upgrades the base image when pulling a new version". Is this something of concern?

  7. On 12/5/2018 at 5:31 PM, slimshizn said:

I wanted to reply here in case anyone was following or reading this. As a workaround for now, I was able to log out of the VM completely instead of selecting reboot or shutdown, and THEN shut down. Doing this allowed me to not run into a server lock-up/crash. I am still troubleshooting absolutely everything I can before I use this MB for file serving and non-passthrough VMs.

Edit: after removing VFIO allow interrupts, rebooting the server, starting it back up, starting the VM back up, and shutting down the VM in the same fashion, I had another crash. I'm enabling VFIO allow interrupts again to see if that was the culprit in this workaround or just a coincidence.

    Edit 2: Seems to have been the USB controller I had passed through. Took that out of the equation and had a single error which I'm still working out. 

I have the same mobo and am having the same issues. What USB controller were you passing through?

     

  8. On 7/10/2019 at 11:09 AM, binhex said:

I'm afraid this won't get done; it's too complex and too large. But I am considering the introduction of a SOCKS proxy server as part of privoxyvpn; this would then allow you to simply point HexChat at that. Keep in mind that not all IRC servers allow connections from VPNs, so you may find your favourite IRC server is not connectable.

Ah, OK. I don't use IRC that often, and only on Freenode. I just figured this would be better than using a bouncer.

  9. On 5/12/2016 at 6:13 AM, dlandon said:

    There have been several posts on the forum about VM performance improvements by adjusting CPU pinning and assignments in cases of VMs stuttering on media playback and gaming.  I've put together what I think is the best of those ideas.  I don't necessarily think this is the total answer, but it has helped me with a particularly latency sensitive VM.

     

    Windows VM Configuration

     

    You need to have a well configured Windows VM in order to get any improvement with CPU pinning.  Have your VM configured as follows:

• Set the machine type to the latest i440fx.
• Boot in OVMF and not SeaBIOS for Windows 8 and Windows 10.  Your GPU must support UEFI boot if you are doing GPU passthrough.
• Set Hyper-V to 'yes' unless you need it off for Nvidia GPUs.
• Don't initially assign more than 8 GB of memory, and set 'Initial' and 'Max' memory to the same value so memory ballooning is off.
• Don't assign more than 4 CPUs total.  Assign CPUs in pairs to your VM if your processor supports Hyperthreading.
• Be sure you are using the latest GPU driver.
• I have had issues with virtio network drivers newer than 0.1.100 on Windows 7.  Try that driver first and then update once your VM is performing properly.

     

    Get the best performance you can by adjusting the memory and CPU settings.  Don't over provision CPUs and memory.  You may find that the performance will decrease.  More is not always better.

     

    If you have more than 8GB of memory in your unRAID system, I also suggest installing the 'Tips and Tweaks' plugin and setting the 'Disk Cache' settings to the suggested values for VMs.  Click the 'Help' button for the suggestions.  Also set 'Disable NIC flow control' and 'Disable NIC offload' to 'Yes'.  These settings are known to cause VM performance issues in some cases.  You can always go back and change them later.

     

    Once you have your VM running correctly, you can then adjust CPU pinning to possibly improve the performance.  Unless you have your VM configured as above, you will probably be wasting your time with CPU pinning.

     

    What is Hyperthreading?

     

Hyperthreading is a means to share one CPU core with multiple processes.  The architecture of a hyperthreaded core is a core and two hyperthreads.  It looks like this:

     

    HT ---- core ---- HT

     

    It is not a base core and a HT:

     

    core ---- HT

     

When isolating CPUs, the best performance is gained by isolating and assigning both threads of a pair to a VM, not just what some think of as the "core".

     

    Why Isolate and Assign CPUs

     

    Some VMs suffer from latency because of sharing the hyperthreaded cpus.  The method I have described here helps with the latency caused by cpu sharing and context switching between hyperthreads.

     

    If you have a VM that is suffering from stuttering or pauses in media playback or gaming, this procedure may help.  Don't assign more cpus to a VM that has latency issues.  That is generally not the issue.  I also don't recommend assigning more than 4 cpus to a VM.  I don't know why any VM needs that kind of horsepower.

     

    In my case I have a Xeon 4 core processor with Hyperthreading.  The CPU layout is:

     

    
    0,4
    1,5
    2,6
    3,7
     

    The Hyperthread pairs are (0,4) (1,5) (2,6) and (3,7).  This means that one core is used for two Hyperthreads.  When assigning CPUs to a high performance VM, CPUs should be assigned in Hyperthread pairs.
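If you're unsure which logical CPUs pair up on your own processor, one way to list them (assuming util-linux's lscpu is available, as it is on unRAID) is:

```shell
# Print the logical CPU -> physical core mapping. Logical CPUs that
# share a CORE value are a hyperthread pair (e.g. on the 4-core Xeon
# above, 0 and 4 would both show core 0).
lscpu -p=CPU,CORE | grep -v '^#'
```

Each output line is "cpu,core"; group the lines by the second column to get your pairs.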

     

    I isolated some CPUs to be used by the VM from Linux with the following in the syslinux configuration on the flash drive:

     

    
    append isolcpus=2,3,6,7 initrd=/bzroot
     

    This tells Linux that the physical CPUs 2,3,6 and 7 are not to be managed or used by Linux.
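After rebooting with the isolcpus line in place, you can check that the kernel actually honored it (a sketch; the file prints an empty line when nothing is isolated):

```shell
# Shows the set of CPUs the kernel has isolated from its scheduler,
# e.g. a range/list form of 2,3,6,7 with the isolcpus line above.
cat /sys/devices/system/cpu/isolated
```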

     

    There is an additional setting for vcpus called 'emulatorpin'.  The 'emulatorpin' entry puts the emulator tasks on other CPUs and off the VM CPUs.

     

    I then assigned the isolated CPUs to my VM and added the 'emulatorpin':

     

    
      <cputune>
        <vcpupin vcpu='0' cpuset='2'/>
        <vcpupin vcpu='1' cpuset='3'/>
        <vcpupin vcpu='2' cpuset='6'/>
        <vcpupin vcpu='3' cpuset='7'/>
        <emulatorpin cpuset='0,4'/>
      </cputune>
    
     

    What ends up happening is that the 4 logical CPUs (2,3,6,7) are not used by Linux but are available to assign to VMs.  I then assigned them to the VM and pinned emulator tasks to CPUs (0,4).  This is the first CPU pair.  Linux tends to favor the low numbered CPUs.

     

    Make your CPU assignments in the VM editor and then edit the xml and add the emulatorpin assignment.  Don't change any other CPU settings in the xml.  I've seen recommendations to change the topology:

     

    
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='2' threads='2'/>
      </cpu>
    
     

    Don't make any changes to this setting.  The VM manager does it appropriately.  There is no advantage in making changes and it can cause problems like a VM that crashes.

     

    This has greatly improved the performance of my Windows 7 Media Center VM serving Media Center Extenders.

     

    I am not a KVM expert and this may not be the best way to do this, but in reading some forum posts and searching the internet, this is the best I've found so far.

     

    I would like to see LT offer some performance tuning settings in the VM manager that would help with these settings to improve performance in a VM without all the gyrations I've done here to get the performance I need in my VM.  They could at least offer some 'emulatorpin' settings.

     

    Note: I still see confusion about physical CPUs, vcpus, and hyperthreaded pairs.  CPU pairs like 3,7 are two threads that share a core.  It is not a core with a hyperthread.

     

    When isolating and assigning CPUs to a VM, do it in pairs.  Don't isolate and assign one (3) and not its pair (7) unless you don't assign 7 to any other VM.  This is not going to give you what you want.

     

    vcpus are relative to the VM only.  You don't isolate vcpus, you isolate physical CPUs that are then assigned to VM vcpus.

Three years later, are there any changes or additions to the VM settings available in unRAID's VM creation template that you'd suggest?

  10. 2 hours ago, wgstarks said:

    Actually, the Deluge website shows that Mac and Windows versions are not yet complete but they do have instructions on how to install v2 on those systems.

    https://deluge.readthedocs.io/en/latest/intro/01-install.html

Looks like most Windows users may be SOL for the time being, though.

     

    Looks like you’ll have to update to v2 for GTK to work. Should be able to connect via the webUI though. 

I am unfamiliar with intrepid and incorrectly assumed it wasn't possible to build it on my own. The webUI works fine, sure, but the whole reason I use this container and Deluge GTK specifically is to be able to use my Windows 10 VM to click magnet links and then have them download and live on my array.

  11. 4 hours ago, binhex said:

    Check the deluge web UI there is an option to allow remote connections, tick it and see if that fixes it

     

I'm having the same issue, and remote connections are enabled. I connect from a Windows computer; however, the Deluge website shows no plans for 2.x on Mac or Windows... Are you able to duplicate this?

  12. 6 minutes ago, Djoss said:

    Are you able to access the Nginx Proxy Manager interface?

I apologize for not updating this. I was getting errors in the Nginx Proxy Manager interface when trying to make changes to a new proxy host. I took a screenshot of my config, deleted the Docker container plus its folder within appdata, and then installed and configured it fresh. It's now working as expected!

     

One thing I did notice: when adding multiple proxy hosts, I went back to confirm that all of the settings were correct and noticed that almost all of my newly configured proxy hosts had all of the options unchecked in the SSL tab, and I am 100% sure that I had checked them all off.

  13. I updated nginxproxymanager last night and today I noticed that nextcloud wasn't connecting. I confirmed that my nslookups are still hitting my WAN IP, but none of my subdomains configured through nginxproxymanager are currently working. I did not change any configuration within the container. Is anyone else experiencing this?