Posts posted by lsaranto

  1. 4 hours ago, SpyisSandvich said:

    Unfortunately your config doesn't really help.

     

    What I ultimately did was downgrade my server version to 13.0.0, which seems to be the last good version that continues to work with my current setup.  Looks like there was an announcement for this on their main site.

     

    I haven't updated Piwigo in a while and I hadn't seen that announcement. I used to be the guy who always ran the latest versions, preferably even beta. Not anymore.

     

    I haven't touched those configs, so one would expect the configs that ship in the container to work. Or there should be instructions on how to migrate.

  2. 17 hours ago, SpyisSandvich said:

    I tried renaming these with the suffix .old and the server started to function again, but it took me to the setup screen and I'm not sure exactly what in the configs was pointing everything to my existing library.  Any thoughts?

     

    Maybe you just made a typo in the post, but mine is just named 'default' (no .conf extension). Here's its content:

    server {
    	listen 80 default_server;
    
    	listen 443 ssl;
    
    	root /config/www/gallery;
    	index index.html index.htm index.php;
    
    	server_name _;
    
    	ssl_certificate /config/keys/cert.crt;
    	ssl_certificate_key /config/keys/cert.key;
    
    	client_max_body_size 0;
    
    	location / {
    		try_files $uri $uri/ /index.html /index.php?$args =404;
    	}
    
    	location ~ \.php$ {
    		fastcgi_split_path_info ^(.+\.php)(/.+)$;
    		# With php5-cgi alone:
    		fastcgi_pass 127.0.0.1:9000;
    		# With php5-fpm:
    		#fastcgi_pass unix:/var/run/php5-fpm.sock;
    		fastcgi_index index.php;
    		include /etc/nginx/fastcgi_params;
    
    	}
    }
    

     

I'm posting this update here; maybe it will help someone someday.

     

    So it seems my issue was caused by some plugin. I began to suspect it after I realized Unraid was showing me the same notification about plugins having an update available, even though I had already done the updates (several times) successfully. Every time after doing the updates, the flash drive lost its GUID until I rebooted the server.

     

    There were 3 plugins in the notification: Community applications, Unassigned drives and OpenVPN Server TAP.  I'm pretty sure we can ignore the first two. I also may have had both old and new versions of USB Manager installed (which I know I should not have had). I also tried booting into Safe mode and the system ran fine.

     

    So my solution was to rename the /config/plugins directory on the flash drive and create a new one. I then copied back just the user scripts and dockerman directories (see the sketch below). All other important plugins I reinstalled via the GUI.
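
    Roughly what that looked like from a shell (the flash is mounted at /boot on Unraid; the exact plugin directory names, like dockerMan for the docker templates, are from my system and may differ on yours):

    cd /boot/config
    mv plugins plugins.old          # keep the old directory as a backup
    mkdir plugins
    # copy back only the bits worth keeping
    cp -r plugins.old/user.scripts plugins/
    cp -r plugins.old/dockerMan plugins/
    # everything else gets reinstalled through the GUI afterwards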

  4. 22 hours ago, itimpi said:

    Very difficult to tell as it depends on why this is happening.  A GUID should never change so I doubt anyone has looked at the consequences.  It could mean for instance that Unraid is no longer able to read and/or write any configuration information from the flash.

    I just now tested reading and writing to the flash (see the quick check below) and it was fine, at least at this moment. Of course I'm not going to make any changes to the system in its current state, but it's good to have a running system, as my smart home runs on it. I think this behaviour is better than what I had before, when my previous flash failed: it just crashed the whole system. I couldn't find anything about the cause in the logs. The only way I knew it was the flash drive was that it wouldn't boot with just a reset; a full power off was needed. Then it booted fine again and the system ran for about a week or so until it crashed the same way again.
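
    The check itself was nothing fancy, just something like this (Unraid mounts the flash at /boot; the file name is a throwaway):

    date > /boot/flash-test.txt     # write a small file to the flash
    cat /boot/flash-test.txt        # read it back
    rm /boot/flash-test.txt         # clean up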

I'm having recurring trouble with my USB flash drives. I've already switched drives twice, but the troubles won't go away. With the current drive, after a boot, it shows the flash GUID OK and the license is validated, but after a while the GUID changes to all zeros and Unraid shows BLACKLISTED in the top right corner of the GUI. However, Unraid and the array seem to continue to operate normally.

     

     

    I don't have any reliable spare (new) USB drives and it's gonna take a few days to get one. What are the consequences if I just keep running Unraid like this? I believe the array wouldn't start if the license doesn't match the drive GUID, but like I said the GUID is fine after every boot (so far) and the array starts along with everything else. So far I've noticed that I can't log in remotely with SSH and VMs won't start after it loses the GUID. Are there other disabled features? Or is there something more serious that can happen?

     

  6. On 1/29/2022 at 4:29 PM, SpencerJ said:

    Yep, as long as the flash drive produces a unique GUID, you are fine. 

    I have to disagree with this. I have a SanDisk Blade drive that worked fine for 5-6 weeks. Then it became blacklisted, showing all zeros in the GUID. If I reboot, it works fine and shows a valid GUID and the license is OK. It seems to lose the GUID again in 12-24 hours. I'm now flashing a second Blade. We'll see how long this one lasts.

     

    I originally had one of those SanDisk Fit drives. It started to act up in March after about two years. First the whole system would appear to have crashed. It's a headless system, so I don't know if it would have been possible to access it locally, but I was not able to access it remotely (SSH, webUI, dockers, VMs). Sometimes it appeared that some network services kept working. My guess is that if there was an open network connection, it might stay running, but it was not possible to make new connections. After a reboot, the boot process would hang when it tried to read the root image from the flash drive. A full shutdown would result in a successful boot and a perfectly running system for a few days or even weeks. Then another crash.

     

    As these SanDisk Blade drives (16 GB) did OK in SpaceInvaderOne's video, I bought a few of them from Amazon.de. IIRC, there were also some availability issues with the drives that did better in the test. Sadly I missed this post.

I had to change my Unraid USB drive and I lost my OpenVPN setup. I managed to grab and install the forked (TAP) server plugin linked a few posts above. It's been running fine for a month or so. Yesterday I rebooted the Unraid server and now OpenVPN won't start. The logs showed no errors, but I copied the startup command shown in the log

    /usr/sbin/openvpn --writepid /var/run/openvpnserver/openvpnserver.pid --config /mnt/user/appdata/openVPNserver/openvpnserver.ovpn --script-security 2 --daemon

    and ran it in shell. It gave an error: 

    /usr/sbin/openvpn: error while loading shared libraries: libcrypto.so.1: cannot open shared object file: No such file or directory

     

    I created a symbolic link for the file in '/usr/lib64'. It then complained about another file:

    /usr/sbin/openvpn: error while loading shared libraries: libssl.so.1: cannot open shared object file: No such file or directory

     

    Again I created a symbolic link for it. Then it gave an error:

    /usr/sbin/openvpn: symbol lookup error: /usr/sbin/openvpn: undefined symbol: SSL_library_init

     

    This I don't know how to fix. I'm guessing it's a library version mismatch: as far as I can tell, SSL_library_init was removed in OpenSSL 1.1, so if the symlinks point at the newer 1.1 libraries, the loader finds the files but not the old symbol the plugin's binary expects. (My symlink commands are sketched below.)

     

    I don't remember updating anything related to either Unraid or OpenVPN. So I don't understand what broke or why.
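
    For anyone trying the same workaround, the symlinks were along these lines; the .so.1.1 target names are an assumption based on what newer builds ship, so check what's actually present with ls /usr/lib64/libcrypto* first:

    # point the old sonames the plugin binary wants at the libraries present
    ln -s /usr/lib64/libcrypto.so.1.1 /usr/lib64/libcrypto.so.1
    ln -s /usr/lib64/libssl.so.1.1 /usr/lib64/libssl.so.1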

  8. On 3/23/2022 at 6:29 AM, greencode said:

    As the post says, I followed this guide to set up WireGuard and it works as intended: when I'm outside my home network and activate the VPN, my IP address is changed to my home IP address. As far as I am aware this means it works. However, if I try to access the server UI or any of the containers, it just doesn't connect. This makes sense, as I have it set to remote tunneled access, which means it can't access the local network.

     

    I then tried to make a second peer, this time set to remote access to LAN, but now it does not connect. Not sure what setting to check or where to go. The whole reason I wanted this was so that I could manage my containers remotely. Here is my setup; the top one, called Laptop, is the one that does not work, and the Tunnel Only one does. The only difference I can see is that the working one has a peer DNS server set to the host subnet, but I don't think that is needed for the other one to work.

     

    I have also tried following the guide posted here and the config looks the same so I don't know what is wrong. 

     

    On my WG settings page I have this note when I select tunneled access: "this must be the only peer in the tunnel and sole active tunnel when in use". So according to this, you can't use both at the same time. To me this sounds like a major restriction.

     

    For some reason, in my GUI most of the settings for tunneled access are different from what is shown in the guide(s). And the eye icon to show and download the peer settings is disabled.
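
    For comparison, a 'Remote access to LAN' peer config from my setup looks roughly like this (keys redacted; the 10.253.0.0/24 tunnel subnet, the 192.168.0.0/24 LAN and the endpoint are from my network, so substitute your own):

    [Interface]
    PrivateKey = <peer private key>
    Address = 10.253.0.2/32

    [Peer]
    PublicKey = <server public key>
    Endpoint = my.home.example:51820
    # The key difference from "Remote tunneled access": instead of
    # 0.0.0.0/0, AllowedIPs lists the tunnel subnet plus the LAN subnet.
    AllowedIPs = 10.253.0.0/24, 192.168.0.0/24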

     

  9. 8 hours ago, EwanL said:

    I have successfully installed the Piwigo docker from CA, but every time I edit the instance (to map a shared folder) I am presented with the Install.php configuration form again.

     

    It doesn't look like any configuration files are being saved to appdata/piwigo, which I guess means the container's files are being blown away on each restart.

     

    Edit: has anyone else had this issue and managed to solve it?

     

    I think you have the same issue as me. You need to add another path mapping in the container's configuration; see my post above. A docker-run sketch of the extra mapping is below.
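
    This is my guess at the docker-run equivalent of adding the mapping in the Unraid template (the gallery host path is just my choice; the port and image name are the linuxserver defaults):

    docker run -d --name=piwigo \
      -p 8080:80 \
      -v /mnt/user/appdata/piwigo:/config \
      -v /mnt/user/appdata/piwigo/gallery:/gallery \
      lscr.io/linuxserver/piwigo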

  10. 10 hours ago, ljm42 said:

     

    I have actually never used that feature. Thinking out loud... I wonder if it can't block access to resources that are on this server? See if it works to block access to something else on your network.

     

    Doesn't seem to block access to other LAN devices either.

Now I have an issue with the Local tunnel firewall option, which doesn't seem to have any effect. I've entered the IP of the docker I want to access and changed the rule to Allow. However, when testing, I can still access any IP on the LAN. I also tried the Deny rule, but that didn't have any effect either.

     

    Have I misunderstood the purpose of that setting?

I think my docker settings were somewhat corrupted. I think that even though 'Host access to custom networks' showed as enabled, it actually wasn't, possibly for containers created after a certain Unraid update. I stopped and started the docker service a couple of times and toggled the Host access setting back and forth in between. Now I have access to docker containers with custom IPs, too.

  13. 1 minute ago, ljm42 said:

     

    It looks like you have set the Gateway to the IP of your router? Per the OP that should be the IP of your Unraid system.

     

     

     

    Yes, I should have been clearer. My Unraid is 192.168.0.1. My router is 192.168.0.254. Call me weird, but I don't like the router taking the first address.

I'm trying to get access to a docker that has a custom IP. I've tried to do everything listed in the complex setups section, but just can't get it to work.

     

    Currently I can access the Unraid server over the WG connection. I can access dockers that use the server IP. I can access other LAN devices. But I cannot access dockers with custom IPs.

     

    I have set Use NAT to No and I have Host access to custom networks enabled. I'm using DD-WRT on my router and have set a static route as follows. Is it set correctly?

     

    Edit: I have Peer type set to 'Remote access to LAN'.

     

    [screenshot: DD-WRT static route settings]
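
    Expressed as a plain route command instead of the DD-WRT GUI, what I'm trying to set up is something like this (192.168.0.1 is my Unraid box, as mentioned in a later post; the 10.253.0.0/24 tunnel subnet and the br0 LAN bridge are assumptions based on Unraid's and DD-WRT's defaults):

    # on the router: send WireGuard tunnel traffic to the Unraid host
    ip route add 10.253.0.0/24 via 192.168.0.1 dev br0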

Do you have a gallery directory in /mnt/user/appdata/piwigo? Because I don't. There's a symbolic link in piwigo/www pointing to /gallery, but of course that doesn't work outside the container.

     

    If I use the LocalFiles Editor plugin and save the edits, I can find the edited file under the /gallery path if I look for it inside the container. Aren't those files and edits lost when the container is restarted, because they exist only inside the container? What's the deal? Or are the edits also in the database?

     

    Edit: The Docker Hub documentation instructs you to expose /gallery. I wonder why it's not included in the template. Pretty major thing to miss.

  16. On 1/4/2021 at 7:24 AM, Vaslo said:

    I'm having trouble getting Lychee to work. I can get the app up and running. My Lychee has the following values:

    Host Port: 89

    Host Path: /mnt/user/Photos/

    PUID: 99 (matches the MariaDB)

    PGID: 100 (matches the MariaDB)

    UName: lychee

    PW: lychee

    DBase Name: lychee

    App Data Config: /mnt/user/appdata/lychee

     

    I had created the database earlier, using commands similar to those for setting up the DBase for NextCloud.

     

    The app opens up but there are no photos. When I try to manually upload photos from my desktop I get this (I have no idea what the console of a browser is):

    [screenshot: upload error message]

     

    If I try to import from the server using the Host Path above and forcing it to be manual, I get:

    [screenshot: server import error message]

     

    I was also getting errors like "Given path is not a directory". I tried messing around with all the settings but I cannot get it to work. Diagnostics say:

     

    
    
    
    
        Diagnostics
        -------
        Info: Latest version of PHP is 7.4
        Error: '/app/lychee/public/uploads/big' is missing or has insufficient read/write privileges
        Error: '/app/lychee/public/uploads/medium' is missing or has insufficient read/write privileges
        Error: '/app/lychee/public/uploads/small' is missing or has insufficient read/write privileges
        Error: '/app/lychee/public/uploads/thumb' is missing or has insufficient read/write privileges
        Error: '/app/lychee/public/uploads/import' is missing or has insufficient read/write privileges
        Warning: You may experience problems when uploading a photo of large size. Take a look in the FAQ for details.
        Warning: Dropbox import not working. dropbox_key is empty.

        System Information
        --------------
        Lychee Version (release):        4.0.8
        DB Version:                      4.0.8

        composer install:                --no-dev
        APP_ENV:                         production
        APP_DEBUG:                       false

        System:                          Linux
        PHP Version:                     7.3
        Max uploaded file size:          20M
        Max post size:                   200M
        MySQL Version:                   10.4.17-MariaDB-1:10.4.17+maria~bionic-log

        Imagick:                         1
        Imagick Active:                  1
        Imagick Version:                 1802
        GD Version:                      bundled (2.1.0 compatible)

     

     

    I spent a few hours but I'm pretty stuck. Any suggestions? Thanks in advance.

    Did you get it to work? I attached to the docker console and created the missing directories with the proper owner and permissions (sketch below). I feel that that's not the right way. I also exposed the path /app/lychee/public/uploads/import/ and put some files in it. This way I got the import running. It ran for a while, then stopped and didn't load any pages anymore (some API error). Now it won't start anymore.
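
    Roughly what I ran in the container console (the abc user/group is an assumption based on linuxserver-style images mapping PUID/PGID to abc; check with ls -l /app/lychee/public and adjust to match):

    # create the upload directories the diagnostics complain about
    mkdir -p /app/lychee/public/uploads/{big,medium,small,thumb,import}
    chown -R abc:abc /app/lychee/public/uploads
    chmod -R 775 /app/lychee/public/uploads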

     

    What is the exposed /pictures path for? Is it supposed to be used as the import source?

I've been looking for something like this. I managed to get it running and added photos, but there are no albums. It shows all photos in the Photos tab, with no subsections or albums for directories. And I can't find a way to add albums manually. The GitHub page has no documentation.

     

    Edit: Never mind, I found where to add albums: it's when you click the pen icon on a photo. It's just that I have 5000 photos. I'm not gonna add them to an album one by one.

I don't remember all the things I tried too well. I wasn't (and I'm still not) too good with Linux. I think the GFX card was a factor in why I didn't get it to work. I'm now using a GeForce 710 card. After booting up Unraid, I start the VM once so it grabs the display. When I shut down the VM, the monitor goes to power saving mode.

     

    So I have access to the console (or GUI) only after a server reboot. This allows troubleshooting if needed, e.g. when the network isn't working. Other times I use SSH or the webUI anyway. This has worked for me.

     

    About the second part of my original post: I was setting up smart home stuff soon after, and I set up a Zigbee button to start the VM. The button is placed next to the monitor.

Final update: The HD 7750s worked, but I could not find any logic in it. They worked whether they were the primary or the secondary gfx card. Sometimes you got just one good VM launch, sometimes more, and then on the next try just a black screen. Then that gfx card didn't work until the server was rebooted. The secondary card was maybe a bit more cooperative.

     

    I ended up getting a GT 710 and a GT 1030.
