  1. I was able to identify the cause of the repeated entries in my debug.log: they were related to Plex's healthcheck. Docker's --no-healthcheck option can be used to disable the healthcheck mechanism. By default, Docker runs the healthcheck defined for a container to verify that the container's main process is running and healthy; if the check fails, Docker can take appropriate action, such as restarting the container. In my case, the healthcheck itself was generating the repeated log entries. Disabling the healthcheck works around the issue, but it's more of a band-aid than a solution: if the container's main process crashes or becomes unresponsive, Docker will no longer detect it and take action. To apply the workaround, edit your Plex container, toggle the 'Advanced View' switch in the top right of the page, and add --no-healthcheck to 'Extra Parameters'. I hope this helps others, but if somebody knows how to troubleshoot the underlying problem where the healthcheck causes excessive logging, I'd greatly appreciate it.
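For anyone running Plex outside Unraid's template UI, the 'Extra Parameters' field maps directly onto plain `docker run` flags. A minimal sketch (the container name, volume paths, and image tag here are placeholder assumptions, not the exact Unraid template values):

    # Hypothetical plain-Docker equivalent of the Unraid workaround.
    # --no-healthcheck tells Docker to skip the image's HEALTHCHECK entirely.
    docker run -d \
      --name plex \
      --no-healthcheck \
      -v /mnt/user/appdata/plex:/config \
      plexinc/pms-docker

Note the trade-off is the same either way: with the healthcheck disabled, Docker reports the container only as running or stopped, never as unhealthy.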
  2. I realize this is a serious necropost, but I wanted to give @maciekish a huge thank you for sticking with this after the responses he got. I'm using a reverse proxy with Nginx, had the same problem, and this led me to the same solution. So, for anybody getting here from Google, you'll want to add the following to your Nginx config to get things working again. You could be clever and apply it only to the locations that are broken, but since my reverse proxy isn't even exposed outside my network, I just disabled gzip for the whole server definition:

        server {
            gzip off;
            ...
        }

     As an FYI to the doubters above, if they still hold their positions: a reverse proxy is a very handy way to avoid having to remember unique port numbers for all your internal services. http://unraid.home.local or http://sonarr.home.local are perfectly valid internal domains if your DNS is set up right, and the hostname will hint your password manager of choice so it isn't offering up 30 passwords because everything lives on the same server IP.
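For context, a fuller sketch of what such a server block might look like; the hostname, listen port, upstream address, and port are placeholders I've invented for illustration, not anyone's actual config:

    # Hypothetical reverse-proxy server block with gzip disabled.
    server {
        listen 80;
        server_name sonarr.home.local;

        # Work around the broken compressed responses described above.
        gzip off;

        location / {
            proxy_pass http://192.168.1.10:8989;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

Since `gzip off` sits at the server level, it applies to every location in this block; moving it inside a specific `location` would scope the workaround to just the broken paths.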
  3. This makes managing the files on the system really complicated. Unraid really expects all the content to be owned 99:100 (nobody:users). Read through coppit's message below; is this something that can be adapted into your container? I'm willing to make the PR if it's something you would support.
  4. Closing this issue as backups appear to be working. I'll open a new thread for the include/exclude bug that was the root cause.
  5. Well, here's an interesting observation; this may be a different but related issue. I'm not at home to fully test Time Machine. I had set a single "include disk" on the share in order to target a disk with a good amount of free space. As part of the testing suggested above, I created a .sparsebundle locally and tried to move it to the share, and got an error about there not being enough free space. In fact, I can't even create a folder in the share. However, if I remove the single "include disk", there are no issues adding content to the share. I've done some additional testing, and any time I set an "include disk" the share will not accept content unless I select 'disk 1' as the included disk. Similarly, if I exclude disk 1 at all, I cannot write to the share. When I get home tonight I'll see if this free-space issue is what was causing Time Machine to throw the "does not support the required capabilities" error (Time Machine doesn't see the share over VPN).
  6. The operating system "OS X" officially changed its name to "macOS" in 2016, so it's probably time to update the Unraid UI to follow suit. Places where this shows up: SMB Settings: ENHANCED OS X INTEROPERABILITY. There may be other locations where "OS X" or "OSX" shows up too; I'll try to list them here, or I can create new reports if that's easier.
  7. I tried public, private, and secured ... all produced the same error message; so while I'm not sure if it's still required, it is at least a system check that I believe happens after whatever is causing my current failure.
  8. Thanks, I had Time Machine backups to Unraid working prior to the AFP -> SMB changes (High Sierra, I think that was?). I've never had it work since then. I've got a backup running now to a USB device; I'll try copying the file over to Unraid and then picking the drive to see if that makes any difference. This is interesting ... I did upgrade an existing Unraid box and had to go enable "Enhanced OS X interoperability", and I certainly had an existing connection to Unraid SMB shares prior to making that change. Would a restart of the macOS machine reset the "first tcon", or is it more than that (if anybody knows)? Either way, I'll make sure I restart my laptop when I get home and try again.
  9. I had added the 'time-machine' share after the initial post as a test to see if it was just some client-side caching tied to the network name of the share. Both shares result in the same error. [edit, output moved to attached file to save scrolling] output.txt
  10. Whoops; that was not what I meant to do. But after changing it (to 2000000 and removing it completely) I still get the same error.
  11. A picture is worth a thousand words:
      - Running Unraid 5.7.0-rc5
      - Share named 'backup' exported as an SMB Time Machine destination
      - After picking the share in Time Machine and entering the appropriate username and password for the share, I'm told the disk does not support the required capabilities.
      Not sure what I'm doing wrong here, or if something is actually broken?
  12. I was hoping to prove it out on the hardware before spending any money on a newer card; might just have to bite the bullet and roll the dice.
  13. Another note, because I realized I haven't mentioned it anywhere: if I set the GPU/Soundcard back to VNC/None, the machine boots just fine.