Nem

Members
  • Posts

    177
  • Joined

  • Last visited

Posts posted by Nem

  1. 1 hour ago, Ryonez said:


    Https. This is starting to feel like a mess. Https redirects to the local domain name ("atlantis.local" for me) and then my browser throws an error, which is attached. I will use http for now because I have to, but what is going on?

     

     

    I have the same issue. Made a thread about it here but no solution so far: 

     

  2. Probably at least 4C/8T 3.3+GHz. Not an insane amount of horsepower as the GPUs will be doing the heavy lifting

     

    Dont I need integrated graphics in order to do GPU passthrough? I seem to remember trying passthrough with one of the i5s or i7s and it didn't work because it lacked an iGPU, but I may be remembering wrong...

     

    Do none of the E5s have integrated graphics?

  3. Im planning on upgrading CPU, motherboard, and RAM in my server. Currently trying to find a CPU. Any recommendations for a Xeon that supports the following:

     

    - ECC RAM

    - hyperthreading

    - VTx and VTd

    - Integrated graphics

     

    The main reason for the upgrade is that I have 2 VMs, both of which need access to 2 separate physical GPUs for machine learning purposes, so I think this requires the CPU to have integrated graphics and support both VTx and VTd

     

    I took a look at the ARK website, but there are so many CPUs that meet the requirements. Anything I should pay attention to that differentiates them? Is there a Xeon that is most commonly used in unraid builds that meet the requirements above?

  4. the FAQ has been updated to include log rotation for docker containers. If I turn this feature on now, does affect existing containers? i.e. should I expect it to go through my existing (overly large) log files and rotate them out based on the options I chose? Or do you need to reinstall the container for log rotation to apply?

  5. The Fix Common Problems plugin has indicated a similar warning for a number of my containers:

     

    CouchPotato (needo/couchpotato:latest) has the following comments: The unRaid community generally recommends to install the CouchPotato application from either linuxserver's or binhex's repositories This application has been deprecated from Community Applications for that reason. While still functional, it is no longer recommended to utilize it.

     

    Whats the correct way to migrate to, say, linuxserver's version of couchpotato while retaining all of my settings and related files?

     

    I didn't post this in any specific container thread in the other subforum because I have this issue with quite a number of my containers

    • Like 1
  6. Just updated unraid to 6.4 and turned on SSL. It redirects me to https://server.local/Settings/Identification (the unraid machine is called "Server"), but its an error page containing: 

     

    server.local didn’t send any data.

    ERR_EMPTY_RESPONSE

     

    When I try to navigate to http://<localip> I get redirected to the same place

     

    As of right now, I can't access the webgui. The array still seems to be online, and I can manually turn off SSL in the ident.cfg file, which brings everything back

     

    Probably unrelated, but I have a VM set up as an apache proxy for some docker containers. My router forwards all 80 and 443 requests to that VM. Given that, I still expect the unraid gui to work when trying to get there via the local ip (my ultimate goal is to add the unraid gui to my proxy list)

  7. Is it possible to plug an external USB drive into an unraid box, and have it perform some action? Say, rsync some directory with the external USB automatically? I've seen this type of functionality on synology boxes and was wondering what support unraid has for external (temporary) drives

  8. 9 hours ago, binhex said:

    logs please guys, i can only guess without them, switch DEBUG to true, restart the container and wait a few mins and then post the log (watch out for passwords).

     

    How do you turn on debug mode and get access to the correct log file?

     

    (the https fix didnt work for me, nor did using a different device)

  9. 1 hour ago, AnyColourYouLike said:

    I'm having issues similar to pe4nut1989 above. After updated Unraid to 6.3.3 I can't access the webgui. My automated tasks show it's still running/grabbing things, I just can't access webgui. I did take a look at the settings and didn't notice anything awry, or that 'Container Variable: LAN_NETWORK' had changed like pe4nut1989's experience.

    PIA enabled or otherwise, can't get host ip/8112 page


    Anyone else?

     

    I'm experiencing the same thing (although I'm on 6.3.2 and using airvpn instead of pia). I cant access the webgui, and I cant connect using desktop Deluge either (host status is red)

     

    When I look in the log for deluge (through the unraid gui) I dont see anything in there that stands out as being problematic. How else could I diagnose this problem?

  10. OpenVPN requires a certificate that you generate so nobody else has it... so that's one reason it's more secure than a normal public facing open port. Plus, there is no HTTPS/SSL for unRAID's web GUI. If you are just passing a specific docker port like 3400 for Plex, that is fine as Plex has HTTPS/SSL support baked in.

     

    Just to be clear, you aren't talking about passing port 80 to unRAID from WAN, correct?

     

    well I have nginx on port 80 and moved unraid to 88. So on my router I pass port 80 through to the nginx docker, which has an SSL certificate, so I'm guessing thats a secure setup?

  11. I'm getting an error when I try to run this from the command line:

     

    Phase 1 - find and verify superblock...

            - block cache size set to 709592 entries

    Phase 2 - using internal log

            - zero log...

    zero_log: head block 2424704 tail block 2423719

    ERROR: The filesystem has valuable metadata changes in a log which needs to

    be replayed.  Mount the filesystem to replay the log, and unmount it before

    re-running xfs_repair.  If you are unable to mount the filesystem, then use

    the -L option to destroy the log and attempt a repair.

    Note that destroying the log may cause corruption -- please attempt a mount

    of the filesystem before doing this.

     

    However, I'm now able to get into the webui, so if I click on drive 2 and go into the Check Filesystem Status section and run the repair tool with the -n flag, I get this output:

     

    Phase 1 - find and verify superblock...

    Phase 2 - using internal log

            - zero log...

            - scan filesystem freespace and inode maps...

    Metadata corruption detected at xfs_agf block 0x575428d9/0x200

    flfirst 118 in agf 1 too large (max = 118)

    agf 118 freelist blocks bad, skipping freelist scan

    agi unlinked bucket 37 is 193213157 in ag 1 (inode=2340696805)

    sb_ifree 210, counted 205

    sb_fdblocks 495495645, counted 495211384

            - found root inode chunk

    Phase 3 - for each AG...

            - scan (but don't clear) agi unlinked lists...

            - process known inodes and perform inode discovery...

            - agno = 0

            - agno = 1

            - agno = 2

            - agno = 3

            - process newly discovered inodes...

    Phase 4 - check for duplicate blocks...

            - setting up duplicate extent list...

            - check for inodes claiming duplicate blocks...

            - agno = 0

            - agno = 1

            - agno = 2

            - agno = 3

    No modify flag set, skipping phase 5

    Phase 6 - check inode connectivity...

            - traversing filesystem ...

            - traversal finished ...

            - moving disconnected inodes to lost+found ...

    disconnected inode 2340696805, would move to lost+found

    Phase 7 - verify link counts...

    would have reset inode 2340696805 nlinks from 0 to 1

    No modify flag set, skipping filesystem flush and exiting.

     

    Is it adviseable at this point to use the webgui and run it without the -n flag? Or should I be concerned about the log error?

  12. so I logged into root by plugging a keyboard into the server, changed the line in disk.cfg so start array is set to no

    rebooted

    logged back in as root and tried to run xfs_repair -v /dev/md2

     

    It gives me an error:

     

    /dev/md2/: No such file or directory

    Could not intialize XFS library

     

    I guess thats happening because its not part of the drive array anymore because I stopped it so I should use /dev/sdXX/ instead, but how do I tell which one is disk2? I have sda, sda1, ..., sde, sde1

  13. I upgraded to the latest version of unraid the other day and today I've been going through the process of upgrading docker containers. Some strange things started happening, but unfortunately I cant go into specifics because I wasnt expecting anything to go wrong so didn't go looking for warnings/error messages

     

    The server started its monthly parity check and at about 70% I noticed that the web ui was no longer responsive and I couldn't access my shares. I thought maybe the server had crashed and restarted the server physically. Web ui started working again and a notification said the parity check was OK (which it clearly wasn't because I don't believe it even finished)

     

    I upgraded my sonarr container, after which when I try to access a mapped share from my Windows machine all subdirectories appeared empty. No amount of server/client restarts made the directories visible. They could only been seen when I navigate directly to the ip (as opposed to through the mapped share)

     

    I then upgraded my nginx-letsencrypt container, and after downloading images an error showed up saying something about layers not being found. I didn't have time to write this message down because I then lost connection to the web ui

     

    Restarted server and clients and now the unraid machine is completely dead. I cant access shares from clients, the web ui doesn't work either.

     

    I plugged a monitor and keyboard into the server and normally it would boot and stay on the user login screen. I noticed that some more text appears after it asks for a username:

     

    https://dl.dropboxusercontent.com/u/33102508/2016-10-01%2010.03.38.jpg

     

    I tried to log in as root despite the extra text, which worked, and took a look in /mnt/cache/ and one of my hard drives cant be found.

     

    As of right now the server is unusable as I cant access either the web ui or any shares. I'm not sure what the physical hard drive would have to do with the drive that went missing as it was a data drive and all of my docker containers are stored on a cache SSD. Maybe it has something to do with crashing on the parity check?

     

    How should I diagnose and fix this problem? At the very least I want to backup/recover the data on the server in case I end up losing anything...

  14. I've read in a number of places that unraid is not secure enough to run an internet facing web server on the machine, and running things like nginx/apache reverse proxy are not advised. Could anyone explain why this is the case? I would like to be able to hook up a domain name and access some of my dockers while I'm away from my LAN

     

    On a related note...if it is insecure to run a webserver/reverse proxy on an unraid machine, is it also not advised to run an openvpn server on the machine for the same reason? If ovpn servers (in a container) is secure, what makes that different from running a webserver in a container?

  15. I just tried that but its giving me an error:

     

    cp: failed to clone ‘/mnt/cache/VM/UbuntuNew/vdisk1.img’ from ‘/mnt/cache/VM/UbuntuBase/vdisk1.img’: Invalid argument

     

    What I ran was:

     

    cp --reflink /mnt/cache/VM/UbuntuBase/vdisk1.img /mnt/cache/VM/UbuntuNew/vdisk1.img

     

    Both source and destination are on my SSD, which is BTRFS formatted

     

    A file shows up at the destination but its size is 0, so at least the paths are correct

  16. When I started using unraid almost year ago I used to check my docker containers for updates weekly, and every week there was at least 1 container that needed to be updated to a newer version

     

    However, I've noticed that for the past 2 months whenever I check for updates, no update has been available for any containers as they are all always 'up-to-date'

     

    I'm wondering if this is a bug, or if something changed in unraid that is causing it to say there are no updates, even if there might be? Plugin updating (e.g. community applications) seems to be fine as I see updates every now and then

     

    Given I used to update containers almost weekly it seems strange that for a 2 month stretch no updates have been available for any of them. Is there a way to manually check (outside of the unraid/docker interface) whether an update is available for a given container?

     

    I recently had an issue with nginx-letsencrypt where it wasn't autorenewing my SSL certificate. Turns out that was a bug that was fixed in the most recent version of the container, however, no container update was available and I had to manually reinstall it. So clearly in that case there was a newer version of the container available but unraid wasnt seeing it

     

    This is on unraid 6.1.9, and the containers I have set up are:

    - couchpotato (needo)

    - ddclient (captinsano)

    - delugevpn (binhex)

    - embyserver (emby)

    - nginx-letsencrypt (aptalca)

    - openvpn access server (linuxserver)

    - sonarr (needo)

    - zoneminder (aptalca)