sonofdbn


Posts posted by sonofdbn

  1. I have had a similar problem to the one @csendre reported a few posts above, as described in that thread. I followed the suggestion in that thread of removing the SSD that I had as an Unassigned Device (removed virtually), and now everything looks fine, except that of course I can't use the SSD.

     

    It does look as if something is going wrong with my UD setup, but I haven't changed anything for a long time - except that a few days ago I couldn't access the SSD and had to set the Unassigned Devices SMB Security Settings to Yes (which fixed that issue). I also updated the UD plugin during this time. I'm currently on UD 2023.02.26 (and unRAID 6.11.5).

     

    I've attached my syslog and diagnostics files. Please note that the syslog covers roughly two to four restarts while I tried to fix the problem, as well as some unclean shutdowns.

     

    Hoping for some advice on how to fix my UD situation.

    syslog-192.168.1.14 (3).log tower-diagnostics-20230301-2348.zip

  2. Unfortunately, it's happened again. The GUI is becoming unresponsive. The Main tab loads, but the Unassigned Devices section shows only the unRAID logo (pic attached). I can't get into other tabs, like Dashboard or Docker. The VM is still OK, but it looks like the containers aren't running, or at least can't be accessed; I can't reach Plex, qBittorrent or Audiobookshelf from the local network, for example.

     

    I've uploaded the current syslog. I can't get to the GUI page to generate diagnostics. A parity check is running because of the previous unclean shutdown.

    Missing UD.png

    syslog_2.log

  3. Over the last few days the GUI has hung, and on the Main page, instead of showing the single Unassigned Device that I have, the UD section displays an animated unRAID logo. At first I can still access a few things, including files on the UD, and most (but not all) Docker containers and my Win10 VM still seem to be running. But gradually the whole system seems to hang. When I restart, everything seems fine (although I end up with an unclean shutdown - but that's another issue, I think), and then after a while - some hours or a day or so - it happens again.

     

    I'm on 6.11.5.

     

    I've attached diagnostics and a syslog file. Hope someone can help.

    syslog-192.168.1.14.log tower-diagnostics-20230227-2214.zip

  4. Just, I hope, to close this: I got everything working by fixing my Swag configuration. My problem was that I hadn't updated the proxy.conf file to the latest version; I was still using one from 2019. Once I updated it, I connected with no problem and could hear the audio.

     

    TL;DR: If you're using Swag as a reverse proxy, use the default config file for audiobookshelf and make the very minor change to server_name for your subdomain (sketched below). Also make sure your proxy.conf is up to date (easily checked by looking at the Swag logs).
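
    For anyone who wants the concrete change, this is roughly what it looks like on my side - a sketch of my own setup, not the full file; abcaudiobookshelf.domain.org stands in for my real subdomain, and the rest of the conf is the shipped sample, unchanged:

    # /config/nginx/proxy-confs/audiobookshelf.subdomain.conf
    # (copied from the shipped .sample file; the only line I edited)
    server_name abcaudiobookshelf.*;
    # proxy.conf itself (/config/nginx/proxy.conf) should be the current shipped
    # version - Swag's log flags active confs that are older than the samples.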

  5. Thanks for the quick response. Indeed using "https://" and my abcaudiobookshelf.domain.org address (no port) got me in. But while I can see the library, I can't listen - the cloud icon is orange and connection status is "Socket not connected".

     

    (Just documenting this for anyone reading the thread. I'll go onto Discord and look around.)

  6. On 10/5/2022 at 1:07 AM, jxjelly said:

     

    I don't think this is the correct way to do this. Not sure what problems you were having with the standard config. This worked for me, just make sure you're on the same network as SWAG

     

    ## Version 2021/05/18
    
    # make sure that your dns has a cname set for <container_name> and that your <container_name> container is not using a base url
    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name audiobookshelf.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        # enable for ldap auth, fill in ldap details in ldap.conf
        #include /config/nginx/ldap.conf;
    
        # enable for Authelia
        #include /config/nginx/authelia-server.conf;
    
        location / {
            # enable the next two lines for http auth
            #auth_basic "Restricted";
            #auth_basic_user_file /config/nginx/.htpasswd;
    
            # enable the next two lines for ldap auth
            #auth_request /auth;
            #error_page 401 =200 /ldaplogin;
    
            # enable for Authelia
            #include /config/nginx/authelia-location.conf;
    
            include /config/nginx/proxy.conf;
            include /config/nginx/resolver.conf;
            set $upstream_app audiobookshelf;
            set $upstream_port 80;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    
        }
    
    }

     

     

    I'm struggling to get this to work, and quite possibly I'm misunderstanding what I'm meant to be doing. I'm using Swag, and Audiobookshelf is on the same network, and I have subdomains set up. (Swag currently works fine for Nextcloud and Calibre-web.) My objective is to be able to reach Audiobookshelf when I'm away from my home network.

     

    I see that the above config file is pretty much the same as the sample audiobookshelf.subdomain.conf file that comes with Swag, so this looks like it should be quite sound.

     

    My question is about this line:

    # make sure that your dns has a cname set for <container_name> and that your <container_name> container is not using a base url

    How would I go about doing this? My container name is Audiobookshelf, and I've got a file named audiobookshelf.subdomain.conf with the lines as set out by @jxjelly above. The only difference is that my subdomain is in the form abcaudiobookshelf.domain.org, so I've set the server_name line to

        server_name abcaudiobookshelf.*;

     

    I'm not sure whether the problem actually lies in the way I try to connect to Audiobookshelf from Android. On my home network over wi-fi I just use the internal IP address and port 13378, and it connects fine. But from outside my home network, whether I use abcaudiobookshelf.domain.org (without a port) or abcaudiobookshelf.domain.org:13378, I get a message in the app saying "Failed to ping server". I also can't reach Audiobookshelf from my Win10 browser using abcaudiobookshelf.domain.org, with or without the port.

     

    In my container template, I've set the Network Type to "Custom: someproxynet" where someproxynet is the proxy network used by my other reverse proxy containers. Also, my Web UI Port is set to 13378 for the container port 80.
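
    For clarity (and mostly for my own understanding), here is how I read the relevant upstream lines of the conf against my setup - a sketch only, with my container name substituted in, so please correct me if I've got this wrong:

    # Swag reaches the container by its Docker name over the shared someproxynet
    # network, and proxies to the container port (80), not the host-mapped 13378.
    set $upstream_app Audiobookshelf;   # my container's name (the sample uses audiobookshelf)
    set $upstream_port 80;              # container port; the 13378 mapping is only for LAN access
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;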

     

    Any help would be appreciated.

  7. This is a slightly odd question and I probably haven't worded it too well, so to give some context: I understand that not having ECC RAM increases the risk of data corruption. My question is about where this data corruption might come from in day-to-day operations.

     

    For example, if I'm writing data to the unRAID server from my Windows PC, or transferring family photos to the server, I can see that I'd like as much protection as possible. But suppose I was running some Docker containers like calibre, bubbleupnpserver and Plex, plus some non-critical VMs, on an unRAID server without ECC RAM, while keeping the data on a second unRAID server with ECC RAM - what is the risk? Or, put another way, is that significantly riskier than running everything on a server with ECC RAM?

     

    I assume it's possible that something I do in calibre might mess up a book if there was a RAM error, and that corrupted data would then be copied to the data server. But then again I don't have ECC RAM on my Windows PC, and that's the source of some important data that goes onto the server.

     

    (I do have ECC RAM on my current server - as well as an extra unRAID licence 😀)

     

  8. On 2/9/2023 at 6:23 AM, Zonediver said:

     

    Transcode to what?

    The iGPU of my i7-9700 can transcode 4 Streams with 4k to 1080p/8MBit without a problem - the iGPU laughs at the four streams.

    So the question is: What is the target resolution and bitrate...

     

     

    That's an excellent question, and it proved much harder to answer than I expected. It turns out I didn't know as much about Plex as I should. After setting up a second Plex server (I didn't know I could do that) on a Windows 10 PC with an i7-9700T CPU and doing some experimenting, it looks like 1080p at 8 Mbps is a reasonable target for 4K HEVC files. I had a bit of a problem with HDR tone mapping - according to Plex it doesn't work with Intel under Windows (but is fine for Linux and Docker) - so I couldn't get H/W transcoding going for a while.

     

    So the next question is: if I want ECC RAM, which Intel CPU and motherboard combos would work?

  9. OK, after a bit of reading and thinking, which I should probably have done first, things are more complicated than I thought.

     

    I thought that transcoding was just transcoding, but it seems that transcoding 4K is much more demanding. Having seen all the dire warnings about NOT transcoding 4K, I'd still like to be able to do it. I had thought there was no problem, as family members watched various things with no complaints, but it turns out that they were watching mainly DVD rips and maybe occasionally 1080p files on devices like Amazon tablets, and possibly rarely (never?) watching 4K media. I had previously tried watching 4K media on a lower-res device and there was terrible stuttering, which I had attributed to a) the device and b) bad wi-fi.

     

    Anyway, while there are recommendations to avoid the problems of 4K transcoding by keeping lower-res versions, I don't particularly want to spend time re-encoding files and managing different versions. So hardware-wise, what's needed to transcode 4K movies? There's a lot of different and conflicting information out there, and some of it is not very recent, so I don't know if the technology is more capable now. Even pointers to reliable sources would be very helpful.

  10. I'm considering my next unRAID server. My current server has a Xeon D-1541 (no Quick Sync) and no GPU, and it runs Plex. I don't think I ever have more than 2-3 Plex streams that need transcoding simultaneously, and there don't seem to have been any problems.

     

    I'm wondering whether I need to get an Intel CPU with Quick Sync for the next server for future-proofing (I'm very unlikely to be increasing the number of Plex streams). I think the other option for transcoding is to get a graphics card, but I don't think the expense is warranted. The reason I don't just go with Intel is that I have a slightly sentimental and illogical desire to try AMD for the next server.

     

    I'm looking only at "normal" PC parts, no server-level CPUs - no Xeon, no Threadripper. For the AMD CPU I'd be looking at something in the current generation with lots of cores.

     

    So is Quick Sync necessary for transcoding just a few streams?

  11. Thanks so much! Got Nextcloud working again.

     

    What I did after reading the Swag support thread and after stopping Swag (a rough command-line equivalent is sketched after the steps):

    1. Went to my Swag folder in /mnt/appdata

    2. Went to the nginx sub-folder

    3. Renamed ssl.conf to ssl.conf.old and nginx.conf to nginx.conf.old (in case something went wrong)

    4. Made a copy of ssl.conf.sample and named the new file ssl.conf

    5. Made a copy of nginx.conf.sample and named the new file nginx.conf.

    6. Restarted Swag.

     

    NOTE: I didn't have any customisations in the ssl.conf and nginx.conf files.

    (I can't claim any credit for this - all taken from the Swag support thread)
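
    In command form, the steps above amount to roughly this - a sketch only, run from the unRAID terminal with Swag stopped; adjust the path to wherever your Swag appdata actually lives:

    cd /mnt/user/appdata/swag/nginx     # my Swag nginx folder (your path may differ)
    mv ssl.conf ssl.conf.old            # keep the old files in case something goes wrong
    mv nginx.conf nginx.conf.old
    cp ssl.conf.sample ssl.conf         # recreate both from the shipped samples
    cp nginx.conf.sample nginx.conf
    # then restart the Swag container from the Docker tab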

  12. OK, thanks for that info @dius. Now, I'm very hazy about exactly what Swag does, but I use it (previously Letsencrypt) in conjunction with Nextcloud. The Swag log keeps repeating this line:

    nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/conf.d/stream.conf:3

    Maybe that points to something that's relevant to the problem.

  13. @dius I'm having the same problem. Nextcloud was working fine, but today, after a recent update, my Windows client can't access Nextcloud (which I access via a domain).

     

    There were no errors that I could see in the log, but there were these lines:

    **** The following active confs have different version dates than the samples that are shipped. ****
    
    **** This may be due to user customization or an update to the samples. ****
    **** You should compare the following files to the samples in the same folder and update them. ****
    **** Use the link at the top of the file to view the changelog. ****
    /config/nginx/nginx.conf
    /config/nginx/site-confs/default.conf
    
    cont-init: info: /etc/cont-init.d/85-version-checks exited 0

     

    I don't know much about nginx, but I think it's the application that handles access via the domain. So maybe there's a problem here?
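
    In case it helps anyone else, my plan is to compare the two flagged files against their shipped samples, something like this - a sketch; I'm assuming the samples sit next to the active confs with a .sample suffix, as they do for ssl.conf and nginx.conf, and that these paths are inside the Swag container:

    # compare the active confs flagged in the log above with the shipped samples
    diff /config/nginx/nginx.conf /config/nginx/nginx.conf.sample
    diff /config/nginx/site-confs/default.conf /config/nginx/site-confs/default.conf.sample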

  14. Thanks for the quick reply.

     

    I run my server on a UPS because of occasional lightning power outages, and it's worked well. But over the last few months I've had unclean shutdowns. At first I thought it might be lack of battery capacity, but after shutting down the server (cleanly I thought) and then restarting after a holiday, I got the unclean shutdown message. Today there was a power outage and the UPS kicked in and should have shut down the server cleanly, but again on restarting I got the unclean shutdown message and the parity check started.

     

    I'm on unRAID 6.9.2, and there's usually one VM running and a few Docker containers. There's one unassigned device. I've attached the diagnostics file generated at the time of the shutdown. I hope someone can give some suggestions about how to fix the unclean shutdowns.

     

    tower-diagnostics-20221216-1714.zip

  15. Normally if I need to create a diagnostics zip file, I can go to Tools in the GUI and there's an option box to "Anonymise diagnostics". But in the case of an unclean shutdown, the file gets created automatically, so there's no option to anonymise. Should it be anonymised, and if so, how? Or is there a folder from the zip file that could be uploaded to the forum safely (e.g., just the logs folder)?

  16. 14 hours ago, BVD said:

     

    This sounds like a space issue - you're at something like 0.4% free space, and even if that equates to multiple GB of space, you have to take into account the fact that the filesystem has journaling/maintenance to address that uses your free space to do so. Any filesystem (and each disk contains a single unique filesystem - unraid just presents the array as a single namespace for it) starts acting more and more wonky the closer you get to capacity; even for write-once-read-many things like you'd typically put on the unraid array, 3% is the bare minimum I'd want free (5% is better).

     

    Add more capacity, and I'm near certain your issue will go away.

     

    I think space was indeed the problem. My Disk 1 was very full, so I deleted some stuff. Didn't touch Nextcloud, but next time I looked, everything was working. Thanks very much for the help.

  17. 19 hours ago, BVD said:

     

    Pull up the container's console and check the available storage from the container's side - just 'df -h' will show you something like:

    Filesystem         Size  Used Avail Use% Mounted on
    overlay             80G   30G   51G  37% /
    tmpfs               64M     0   64M   0% /dev
    tmpfs               24G     0   24G   0% /sys/fs/cgroup
    shm                 64M     0   64M   0% /dev/shm
    shfs                91T   61T   30T  68% /data

     

    If nothing there's showing near full, do the same from your unraid terminal. Depending on what's full, it may be as simple as a restart of the container, up to possibly remounting a tmpfs volume with a larger capacity specified - whatever the case, finding out what's full should give you the breadcrumb trail needed to research and correct it.

     

    If nothing is legitimately showing 'full', I'd increase the nextcloud logging level and reattempt.

    Edit - assuming you've verified you've got all your array disks set to available to the share mounted to the /data dir of course, though I'm sure you've covered that lol. Doesn't matter how much free space you have in the array if the nextcloud containers share isn't allowed to use em all hehehe

     

    Thanks for the suggestions. I checked, but everything looks OK. (Sure, some storage showed 100% use, but that's a rounding issue - on an 8TB disk you can still have 34GB free even though usage shows as 100%, allowing for overhead.)

     

    But I was encouraged to check further, and I wonder if it might be a permissions problem. Using WinSCP (I'm not a Linux person) I saw that the owner of the folders in my Nextcloud data was "nobody" - that seems quite common and expected, so no issues there. But the rights of the folders vary: either A) rwxr-xrwx (0757) or B) rwxr-x--- (0750). These are the folders on Unraid that are synced with folders on my Windows PC. I don't know why they would be different, or whether one is "better" than the other. Or are these different permissions expected?

     

    I'm getting inconsistent results when I test the syncing. I thought at first that new files were not syncing to 0750 folders but were syncing to 0757 folders, but this seems not to be the case. It's getting late here; I'll test a bit more tomorrow.
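
    For reference, this is how the same permission check looks from the unRAID terminal rather than WinSCP - a sketch; /mnt/user/nextcloud-data is just a stand-in for wherever your Nextcloud data share actually lives:

    ls -ld /mnt/user/nextcloud-data/*                  # owner/group and symbolic permissions, e.g. drwxr-xrwx ... nobody users
    stat -c '%a %U:%G %n' /mnt/user/nextcloud-data/*   # the same in octal: 757 or 750, owned by nobody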

  18. My Nextcloud (24.0.4.1) container has been running fine for a while, but just recently it's been unable to sync new PC files to the server. This happened when I copied a couple of small files (2MB-4MB) to a synced folder on my Win 10 PC. The new files are not being synced (I get the small red x icon in the lower left of the file icon). In the error logs, I get continuous messages like this:

     

    2022/08/19 15:01:05 [error] 183#183: *825 FastCGI sent in stderr: "PHP message: PHP Notice:  fwrite(): write of 1798 bytes failed with errno=28 No space left on device in /config/www/nextcloud/lib/private/Log/File.php on line 89" while reading response header from upstream, client: 172.18.0.3, server: _, request: "GET /index.php/css/user_status/62ab-0e7b-user-status-menu.css?v=41545bc493ea402b7a9b19b640ab406b-6b0fbc6d-2 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "the hostname I use"

     

    The line 89 mentioned above reads:

    fwrite($handle, $entry."\n");

     

    Here's that line 89 in context in this extract from File.php (8th line down):

    	public function write(string $app, $message, int $level) {
    		$entry = $this->logDetailsAsJSON($app, $message, $level);
    		$handle = @fopen($this->logFile, 'a');
    		if ($this->logFileMode > 0 && is_file($this->logFile) && (fileperms($this->logFile) & 0777) != $this->logFileMode) {
    			@chmod($this->logFile, $this->logFileMode);
    		}
    		if ($handle) {
    			fwrite($handle, $entry."\n");
    			fclose($handle);
    		} else {
    			// Fall back to error_log
    			error_log($entry);
    		}
    		if (php_sapi_name() === 'cli-server') {
    			if (!\is_string($message)) {
    				$message = json_encode($message);
    			}
    			error_log($message, 4);
    		}
    	}

     

    I have no experience in this, so please bear with me. From the error message it seems that something has run out of space, but I have no idea where. My Unraid dashboard Memory section tells me that only 65% of the docker image is used and the log is showing only 1%. My array has plenty of free space, and the cache disk (where the container appdata sits) has 50GB free.

     

    Googling hasn't produced anything that seems to help/work. I really don't think I changed anything - other than updating containers and plugins on my Unraid machine. Any advice or suggestions would be greatly appreciated.
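
    For what it's worth, the next thing I plan to check is space as seen from inside the container rather than from the dashboard - a sketch, run from the container's console; errno 28 is ENOSPC, which can mean either no free blocks or no free inodes on whichever filesystem the log file sits on, and I'm assuming the usual /config and /data mounts:

    df -h /config /data     # free space on the config and data mounts
    df -i /config /data     # free inodes; 100% IUse% also produces errno 28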

     

     

     

  19. I'm on 6.9.2 and have 8 disks in the array (almost all 8TB) and two parity disks (both 8TB). Given the case size, motherboard, HBA card, SSD cache and one SSD unassigned device, I'm maxed out as far as storage devices are concerned. My longer-term goal is to increase capacity by increasing the standard disk size to whatever the new sweet spot is - maybe 14TB? So the problem is that first I need to buy two 14TB drives to replace the parity drives, and then a third 14TB drive for data, before I get any increased storage capacity.
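
    Spelling out the arithmetic as I understand it (my assumptions: 14TB is the target size, parity has to be at least as large as the largest data disk, and because all my slots are full a data upgrade means replacing an existing 8TB disk):

    2 x 14TB parity (replacing 2 x 8TB parity)  -> +0TB usable capacity
    1 x 14TB data   (replacing 1 x 8TB data)    -> +6TB usable capacity
    i.e. three 14TB drives bought before the first 6TB of extra space appears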

     

    I had a similar problem moving from 4TB to 8TB, so it's not a new thing, and given the long rebuild times with larger disk sizes, I do feel that dual parity is needed. In reality, the important things are backed up, so it's not a mission critical issue, but it would just be a huge pain to rebuild from scratch.

     

    Is there any alternative? Or is there an Unraid roadmap for where parity is going? (Apologies for not keeping up on this.) It seems that with ever-increasing HDD sizes but minimal improvement in read/write speeds, rebuild times will just get MUCH longer and dual parity becomes even more important.

  20. I've managed to install a Win XP SP3 VM on my Unraid box (6.9.2) without too much trouble. The machine is pc-i440fx-5.1 (with SeaBIOS). I'm running two logical CPUs and 2GB RAM.

     

    But after installation, when I check Device Manager, I see some devices with the yellow exclamation mark indicating that the drivers have not been installed. So I've tried using Windows to update the drivers, searching on the virtual CD-ROM, which is virtio-win-0.1.217.iso, and on the virtual floppy, which is viostor-31-03-2010-floppy.img.

     

    For some devices, like the Ethernet controller, the driver installs without a problem. But for some PCI devices and the Balloon driver (not sure what that is) there is a list of possible drivers. I believe Win XP SP3 is 32-bit, so I assume I should pick the xp/x86 ones, but the Unraid Wiki, where it talks about loading drivers during installation, says not to use the x86 folder and to use the amd64 folder instead. Is that irrelevant when installing drivers after installation? Or is it because the Wiki only covers 64-bit Windows versions?

     

    Regardless, I've tried various drivers from  and the messages I get are either that it's the wrong driver or that the installed file is newer than the one I've selected. When I've selected the older file, it installs and the yellow exclamation mark goes away, but the VM doesn't seem stable. If I leave it running, it ends up with a black screen, and I have to do a hard stop from the VM tab in Unraid. After that the VM won't boot at all.

     

    How do I get the correct drivers?