Everything posted by thespooler

  1. Steps to reproduce: (1) Add a new share. (2) Select the 'Read settings from' option in the upper right and pick an existing share. (3) The 'Include Drives' and 'Exclude Drives' drop-downs disappear (in Brave, anyway). (4) If you click on 'Help', it completely covers the drive boxes. (5) Click 'Reset' at the bottom and just set it up manually to continue.
  2. I just installed my first docker with this, and while trying to edit a stack -> compose file, after hitting save I lost all the UI buttons at the bottom, and the UI is now inserted into the edit-description area. It seems rather permanent; leaving the Docker page and coming back does not fix it. UPDATE: I was able to edit the description to just be "My Description" and that brought everything back. I swear that field had HTML in it to begin with, as I remember thinking that was odd, and I guess I removed the textarea closing tag. Raw HTML in a field is usually a security no-no for this very reason of modifying beyond the intent.
  3. I managed to get it to run by only providing a filename in the DB_FILEPATH variable. Knowing the database wouldn't survive an update, I just wanted to take it for a spin. The docker user is not 'nobody', so there are going to be permission issues with appdata. The best solution for me was to switch to the linuxserver.io container, and everything worked flawlessly.
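     In case it helps anyone else landing here, a minimal sketch of what the linuxserver.io switch buys you, assuming their usual PUID/PGID convention and Unraid's default nobody:users (99:100) share owner. The image name and paths below are placeholders, not the actual container from this thread:

        # Hypothetical example - substitute the real linuxserver.io image and your paths.
        # PUID/PGID map the container user to Unraid's nobody:users (99:100),
        # which is what keeps appdata writable by the rest of the system.
        docker run -d \
          --name=myapp \
          -e PUID=99 \
          -e PGID=100 \
          -v /mnt/user/appdata/myapp:/config \
          lscr.io/linuxserver/someapp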
  4. Lost all my shows to a bug in 0.8. 0.8.1 is out now. How soon does the docker get updated? I see the last build was manual.
  5. I acknowledge this isn't so much a bug, but it cost me 20 minutes of re-sorting all of my dockers. Clicking anything in a table header should not be destructive. The icon looks just like the icon generally used to change the table display between icons, grids, details, etc. I clicked it before the tooltip showed up because, again, clicking anything in the header should not be destructive; it should just affect the sorting order. Since there is already a row of buttons below the dockers, it should be included down there as a "Reset Starting Order" button.
  6. @bonienl I appreciate you commenting on this. I'm happy to hear there is a change going forward that might address this. While it's true we might not all suffer this issue for the same reasons (I see some people get this after a few days), I'm thankful it only happens after 20+ days, which I can appreciate also makes it harder to understand what is happening. It feels like a slow memory leak to me. Regarding memory, in my use case, when things start misbehaving I always hit the Dashboard to verify the logs are at 100%, and conveniently Memory Utilization is the only other stat (going from my own memory) that still works, so I always get a sense of memory use afterward, and it was never anything that alarmed me. But here's memory from diagnostics once the system is toast:

                          total        used        free      shared  buff/cache   available
        Mem:               11Gi        3.6Gi       1.7Gi       1.0Gi       6.1Gi       6.6Gi
        Swap:                0B          0B          0B
        Total:             11Gi        3.6Gi       1.7Gi

     nginx was still running with the same process ID as indicated by the logs, so I don't think any additional memory was suddenly freed up once it started throwing signal 6s. Without a notification that the logs are filling up, it's hard to know when this is occurring to catch it in real time, and then there is such a short window before it's all over. After the thousands of single-error signal 6s, memory errors start mixing in with them:

        Jan 4 00:21:43 Tera nginx: 2022/01/04 00:21:43 [crit] 14673#14673: ngx_slab_alloc() failed: no memory
        Jan 4 00:21:43 Tera nginx: 2022/01/04 00:21:43 [error] 14673#14673: shpool alloc failed
        Jan 4 00:21:43 Tera nginx: 2022/01/04 00:21:43 [error] 14673#14673: nchan: Out of shared memory while allocating message of size 7391. Increase nchan_max_reserved_memory.
        Jan 4 00:21:43 Tera nginx: 2022/01/04 00:21:43 [error] 14673#14673: *8142824 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
        Jan 4 00:21:43 Tera nginx: 2022/01/04 00:21:43 [error] 14673#14673: MEMSTORE:00: can't create shared message for channel /disks
        Jan 4 00:21:44 Tera nginx: 2022/01/04 00:21:44 [alert] 16163#16163: worker process 14673 exited on signal 6

     And these signal 6s (if not all of them) are coming from nchan's process ID. And finally, to tie it all back to old logins, the thousands of signal 6s come after this chunk in the logs. Though these entries are 20+ minutes apart, they are back to back:

        Jan 3 18:02:18 Tera nginx: 2022/01/03 18:02:18 [error] 15731#15731: *8026641 limiting requests, excess: 20.409 by zone "authlimit", client: 192.168.0.101, server: , request: "PROPFIND /login HTTP/1.1", host: "tera"
        Jan 3 18:27:53 Tera nginx: 2022/01/03 18:27:53 [alert] 16163#16163: worker process 15731 exited on signal 6

     Does the GUI use WebDAV calls like PROPFIND? That IP is my main desktop, and there's nothing WebDAV-ish I use. I don't know if Brave might try to do some discovery, but it seems like a stretch. Not sure if your third suggestion is referring to plug-ins; everything extravagant is Docker, and the majority of plug-ins are yours or Squid's. With my 20+ days of uptime, I'm going to avoid going into safe mode for now.
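     For anyone else trying to catch this window in time, a few one-liners I run from an ssh session; these assume stock Unraid paths, nothing exotic:

        # Count how many nginx workers have died so far
        grep -c 'exited on signal 6' /var/log/syslog
        # The log tmpfs itself - the GUI falls apart as this approaches 100%
        df -h /var/log
        # Overall memory, for comparison with the diagnostics output above
        free -h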
  7. I have experienced this same problem, and it's frustrating to see the same issue recurring across multiple threads for many years; I don't think I've seen anything from the Unraid devs on this other than a suggestion to run a memory check. Multiple years, dozens of users, and near silence. My uptime lasts about 20-25 days, then the logs start filling up, and once the log hits 100% the GUI has major issues. Most of the web UI works in the sense that you can move around sluggishly, but as noted the web sockets suffer complete failure; the only time I look at the Dashboard is to confirm that the logs hit 100% and it's time to restart, upon which I would suffer a forced multi-day parity check. (That, I recently discovered, was a result of restarting with an ssh session open, so I'm glad that problem at least is resolved.) Usually I know the logs are full when the docker menu starts misbehaving. My first suggestion is to send a series of notifications as the log nears capacity, as we get for drive storage and temperature. This is an absolute necessity. It won't be a big window in terms of reacting to the issue, but it's something. In my most recent experience it took 6 hours and 7500+ errors, roughly one every 2 seconds. Not sure how usable the system is during this part:

        line 2829:  Jan 3 18:27:53 Tera nginx: 2022/01/03 18:27:53 [alert] 16163#16163: worker process 15731 exited on signal 6
        ....
        line 10504: Jan 4 00:21:40 Tera nginx: 2022/01/04 00:21:40 [alert] 16163#16163: worker process 14537 exited on signal 6

     Then nginx and nchan start to fail with memory allocation errors and things get much worse; that only lasts for 8 minutes before the logs are completely full and it all just stops:

        Jan 4 00:21:41 Tera nginx: 2022/01/04 00:21:41 [crit] 14620#14620: ngx_slab_alloc() failed: no memory
        ...
        Jan 4 00:29:57 Tera nginx: 2022/01/04 00:29:57 [error] 12073#12073: *8145363 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", hos

     And then 3 days later, Jan 7, I discover Unraid is having issues, because of zero notifications. In my use case I don't use web terminals; I think that failure is just another indicator of the logs being full, like the docker menu and dashboard failures. Simple HTML templates function, but anything advanced fails. What does ring a bell, though, is the idea behind stale tabs. I have a desktop and a laptop that I rotate through, and each one has a Brave tab open to Unraid for admin convenience; I usually sit on the Docker or Main tabs. I don't think there should be anything wrong with that, but as a workaround for now I will stop this behavior, or see if I can find an extension to reload the tab each day. I remember the days when I only went down for an Unraid upgrade; those days have been sorely missed for years. I find it interesting that quite a few people are just restarting nginx and life goes on. What are you doing about the logs being at 100%? Just ignoring it, or is there a service to restart for that as well?
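     Until something like this is built in, here's a rough sketch of the kind of notification I mean, assuming Unraid's bundled notify helper lives at /usr/local/emhttp/webGui/scripts/notify (verify the path and flags on your release); it could run hourly from cron or the User Scripts plugin:

        #!/bin/bash
        # Sketch: warn before /var/log (the log tmpfs) fills completely.
        THRESHOLD=80
        USED=$(df --output=pcent /var/log | tail -1 | tr -dc '0-9')
        if [ "$USED" -ge "$THRESHOLD" ]; then
          # Path and flags assumed from Unraid's stock notify helper - check locally.
          /usr/local/emhttp/webGui/scripts/notify -i warning \
            -s "Log filling up" \
            -d "/var/log is at ${USED}% - nginx/nchan failures start near 100%"
        fi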
  8. "Do not overlap with autoupdate" What does this line mean? The plugin is called Auto Update, the tabs say Auto Update, so what does autoupdate refer to? Is there some unraid setting I'm not aware of? If this is suggesting don't configure Plugin and Docker auto update frequency at the same time, does that mean they both can't run daily for example? Or in fact, they can both be set to Daily, just at different times, in which case, wouldn't the message be more informative if displayed under the scheduled time? Perhaps "Do no overlap with plugin schedule" would be more clear?
  9. PROBLEMS: black screen; stuck on the Apple logo; can't reach recovery servers / no network. I followed the video EXACTLY and hit all of these issues one after another. I went back 10 pages and see these issues showing up repeatedly. The solution to all three is to skip ahead in the video to the 14-minute mark and add your server name to the script. Run the script a second time, and now you jump back to the 8:50 mark and install for the first time. I don't know why this issue is plaguing so many of us first-time installers. Of course, you'll still end up with Catalina until that's fixed. I was going to update the ID for Big Sur and start over, but decided that Xcode 12 still runs under Catalina, so I'll try that first. I'm just curious how responsive it is compared to my older MacBook Pro. But man, for a walkthrough video, this has now taken me 6 hours from start to where I am now, which is just installing the OS. The walkthrough video needs a walkthrough video.
  10. This was a surprisingly easy Docker to set up! And to think I avoided it for so long... In case anyone else is using Caddy v2 for their reverse proxy, or I suffer a huge failure one day: modify /config/config.php as follows (for completeness, I know this is repeated over and over in this forum):

        'trusted_domains' =>
        array (
          0 => 'unraid_IP:unraid_PORT',
          1 => 'cloud.your_Domain.your_TLD',
        ),
        'trusted_proxies' => array('unraid_IP'),
        'forwarded_for_headers' => array('HTTP_X_FORWARDED_FOR'),
        'overwritehost' => 'cloud.your_Domain.your_TLD',
        'overwriteprotocol' => 'https',
        'overwrite.cli.url' => 'https://cloud.your_Domain.your_TLD',

      And in your Caddyfile:

        cloud.your_Domain.your_TLD {
          reverse_proxy https://unraid_IP:unraid_PORT {
            transport http {
              tls_insecure_skip_verify
            }
          }
          encode gzip
          tls [email protected] {
          }
          header {
            # don't advertise "Caddy" as server
            -Server
            # the docker's nginx server contains all the necessary security headers already - instant A+ at https://scan.nextcloud.com/
            Strict-Transport-Security "max-age=31536000; includeSubDomains;"
          }
        }
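      A quick way to sanity-check the result from any shell (same placeholder hostname as above):

        # Should return the status line plus the HSTS header set in the Caddyfile
        curl -sI https://cloud.your_Domain.your_TLD | grep -iE '^(HTTP|strict-transport-security)'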
  11. I gave it a shot, but it doesn't seem to support .HEIC files out of the box anyway. Would VLC help with that? I'm aware of the patent issues, but this is the iPhone default, is it not? I've got enough wasted space with JPEG/RAW pairs without enabling compatibility mode for Apple to create JPG/HEIC pairs as well. The breadcrumb hierarchy was driving me crazy. If I have pictures from the last 20 years and I browse by "When", I have to scroll a LOT to hit 2001 way down at the bottom of the page. I expect to click on the year and select from all the possible years immediately; I shouldn't have to scroll to see my kid's birth pictures when I could jump right to his birth year. Although I selected Auto Org to see what this paid feature might mean for me, I had hoped my original directory structure would be keyword-tagged onto the files, but it wasn't. The pictures in the 'cuba' directory could have been tagged 'cuba' as a keyword; they're pre-GPS, so they aren't tagged by location. Obviously ignore "DCIM*"-type numbered directories, but directories with alpha characters are likely meaningful. I don't care about Camera or Lens as categories, so I would want settings to turn them off.
  12. I noticed Websockets defaults to off in this template, so that's where I've left it. But I'm curious what the benefit of enabling it is, as I see most of the proxy examples include it.
  13. If you're referring to Phantom, it specifically says Switch is not supported. However, I've had some success getting Switch players into the server without the Switch users having to do anything. The only clients I'm concerned about are iOS and Switch. If my son launches a world on his iOS device (his iPad), his Switch friends try to join that world, but because the external IP of his iPad and the docker server are the same, they actually end up on the docker server instead. So once all of his friends are in the docker server, he closes his game and joins them on the docker server, which he connects to using the local IP. For his iPad, his "Add Server" entry used the domain name, which I resolve over local DNS to a local IP but which externally resolves to an external IP. I had hoped this would allow invites to be sent with a domain name successfully, but I'm not sure it even matters, as the invite is never received on the Switch anyway. It's crazy that Fortnite just works on every client while Minecraft is so perplexing; just getting all the permissions right for online play was a nightmare. The one thing I have noticed, however, is that the server has some issues remaining green to clients. After a restart it's definitely green, but as the uptime ticks onward the server ends up red, and only a restart will get it green again. I've read the whole thread multiple times, so I know I'll hear it works on PE, etc. Just stating my experience...
  14. Is anyone using Minecraft for Switch and connecting to this? We're playing on iOS, but we're trying to figure out how to get a young kid on Switch to join. The ports are open from the guide a few posts back. We can "Invite" him when we play the server, but he's not seeing it or any option to "Add Server" like we do. Permissions are probably okay as they can play together on the featured servers.
  15. If we're not using VMs, should we stay the course with rc7 until 6.9.0-rc1 is out?
  16. I'm having a problem that has only appeared since I updated my PCs to Windows 10 1903. I can no longer drag and drop files onto network shares (the folders with the green pipe coming out of them), or even paste onto them. I have to open the network share first so there is a folder view; then I can drop or paste onto the white space to add files to the root of that share. This is super unproductive. It's likely not unRaid's fault, but I also find it hard to see why MS would do this on purpose. Hoping there is some tweak on the server or the clients to restore this ability. I can drop files onto the network share directly in the breadcrumb bar, but technically that represents the open folder, so I guess it's treated differently.
  17. I'm running the latest Jellyfin docker from here. The CPU is pegged at 100%, memory is instantly eaten up, and the Jellyfin GUI doesn't come up, though it tries. Unraid is still responsive. From what I can tell, Jellyfin is indexing, and it runs hundreds of /usr/bin/ffprobe -i commands until all my memory is gone, then it sits there. After around 5 minutes, dotnet crashes with an out-of-memory error according to the Unraid log, and the whole process starts again. It's hard to get a sense of what's happening, as I'm not much of a Linux guy and the delay in doing anything is so great. I can stop the docker, but starting it back up just results in the same scenario. I see all the ffprobes when I run top or ps after I ssh into Unraid itself. I don't quite understand why the ffprobes just sit there; they don't seem to be doing anything, not even erroring out. If I manually look in Unraid's /usr/bin directory, I don't see ffprobe, but I would have assumed this was running from the docker container anyway. I see someone else has complained about ffprobe missing, but in my scenario, how can a process be showing in top if it's truly missing? Not sure why Jellyfin would launch more than a few ffprobes at a time anyway. At this point, I'm just waiting things out to see what happens. I'm at the letter T in the indexing after 22 hours, but thought I'd pop in here and post this. Just tried to copy the top output to the clipboard and Kitty crashed... It did manage to copy, though. There are more than 900 ffprobes listed in top.
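      For anyone else trying to count these, a couple of sketches from an ssh session; the container name 'jellyfin' is an assumption (your template may differ), and not every image ships ps:

        # Count ffprobe processes as seen from the Unraid host
        pgrep -fc ffprobe
        # Or count them inside the container (assumes a container named 'jellyfin'
        # that has ps available, which may not hold for every image)
        docker exec jellyfin sh -c "ps -ef | grep -c '[f]fprobe'"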
  18. How are these doing years later? Now that I'm not SATA port limited, I'm thinking of tossing one of these into my array. Had 2 for backup, but I dropped one! So the other has been sitting on a shelf for years.
  19. If you're still interested in maintaining this, there are some corrections you need to make. For the thread: I also got the "DOS/16M Error: [40] not enough available extended memory (XMIN)" error. Pulling a stick did nothing; that still left me with 4GB in the PC. I couldn't get around this no matter what I tried. I ultimately had to replace DOS4GW.EXE with DOS/32A, a more recent DOS extender, which didn't have a problem with what I guess is TOO MUCH RAM. I just dumped everything from the extracted binw directory into the root of my USB drive, renamed DOS32A.EXE to DOS4GW.EXE, and the DOS parts worked. The EFI shell scripts have a few problems, mostly cosmetic:

        "@echo is off"                   should be "@echo -off"
        "cd..\xxxxxxxxx"                 should be "cd ..\xxxxxxxxx"  <- you need the space after "cd" or it fails
        "echo . sas2flash.efi -l blah"   <- this confuses "echo", which thinks the "-l" is meant for it, which is invalid; perhaps wrapping the echo argument in quotes would make it work

      Since you were unsure of EFI, I was following along with another blog to verify the commands, and the other blog had a reboot between 5.2 (P7) and 5.3 (P20). Ultimately I didn't reboot. Step 6 looked sketchy since it wants you to edit it, but doesn't say so until you run it. Maybe REM the sas2flash.efi line so people have to add the SAS address and remove the REM. In any case, your part was easy; thanks for doing this. Maybe 10 minutes to flash between DOS and the EFI shell, but getting around the DOS4GW errors took hours. Out of curiosity: I've only hooked up my cache drive to the controller, since mine was eBayed, and I'll stay like that until the controller earns my confidence. But has anyone ever encountered issues with moving drives between controllers? Should I turn parity correction off?
  20. Inside the configuration.yaml file, your "dash_url" is blank, which makes you wonder what other trivial errors appear when it can't handle that happening. Even if you remove the dash_url, it will generate a different error. Remove the entire "hadashboard" section so the dashboard is disabled, and the Docker should start up:

        #hadashboard:
        #  dash_url:
  21. Ah, interesting! Previously, when the flash dropped, all the other drives stayed assigned. But I guess that's the difference between the flash dropping with the array started versus the flash dropping before the array is started. I've rebuilt drive 2 and so far so good. I didn't realize drives dropping was a known issue; I would have saved myself $500 in new hardware had I known. I just assumed it was a definite hard drive failure. Thanks for your assistance.
  22. I've disconnected and reconnected the cables and moved the flash drive to a USB2 port. I also plugged my spare precleared drive into a PCIe SATA card I wasn't using, just to see if it persists if the others drop again. unRAID has booted and everything is assigned correctly without me doing anything, but the array is stopped and drive 2 is still red-X'd. Since drives are dropping, I'm not sure how concerned I should be about the array. How do I get disk 2 back? Remove it, start the array, stop the array, and reassign the original disk 2 back to slot 2? Will that cause a rebuild of that drive? I'm a little nervous about what happens if that drive or other drives start dropping during the process.
  23. Right above your post is mine, describing a similar problem, but there is a file created at "/tmp/preclear_stat_sd?" that you can look at. Whether the file's contents or timestamp are changing might help you tell if it's still running.
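      A sketch of what I mean, checked from an ssh session (the '?' is the device letter, so adjust the glob to your disk):

        # Compare the status file's modification time a minute apart;
        # if it never changes, the preclear has probably stalled.
        stat -c '%y %n' /tmp/preclear_stat_sd*
        sleep 60
        stat -c '%y %n' /tmp/preclear_stat_sd*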
  24. Sorry for the confusion; these are two unrelated events. Pre-reboot, I remounted /boot to sdh1 in case I needed to modify the drives. The mounting worked and the errors went away, but I ultimately didn't do anything, since the new SATA device wasn't detected. After reboot, everything was detected and in its place. All drives were assigned, disk 2 was still red-X'ed, but now SMART was working. Later, as the system log shows, when the USB dropped, so did all the drives; that's when everything showed up under Unassigned Devices. In the original scenario, only the flash and disk 2 dropped. This time, after adding a new drive to the system, they all did. I think the configuration is good, but I haven't rebooted yet; going to redo the cables first.