jserio


Posts posted by jserio

  1. 1 hour ago, jserio said:

    I just tried File Manager + and received the same error.

     

    I wonder if this isn't related to Samba but perhaps the file caching or recycle bin plugins?

    Screenshot_20231229-201853.png

    To answer my own question, it has nothing to do with folder caching or the recycle bin.

     

    However, if I comment out the min client directive in the Samba config (essentially enabling SMB1) and set the protocol to SMB1 in Solid Explorer, everything works. Seems to be an issue with SMB2/3 support in Unraid or the Samba version in the apps is incompatible.
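    For reference, the workaround above corresponds to something like this in the Samba config (a sketch only; the exact file location and directive name depend on your Unraid version, and re-enabling SMB1 has real security implications):

```ini
# smb.conf - hypothetical fragment illustrating the SMB1 fallback
[global]
    # Commenting out the minimum-protocol directive (as described above)
    # lets old clients negotiate down to SMB1:
    ;server min protocol = SMB2

    # Equivalently, SMB1 can be allowed explicitly (NT1 is Samba's
    # name for the SMB1 dialect):
    server min protocol = NT1
```

    Worth noting that modern Samba disables SMB1 by default for good reason, so this is better treated as a diagnostic step than a fix.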

  2. 14 minutes ago, aqua said:

    No, they haven't fixed it. File Manager + works but doesn't have multiple tabs/windows available, which kills the way I use the file managers. I haven't found one that works that allows for multiple windows/tabs.

    I just tried File Manager + and received the same error.

     

    I wonder if this isn't related to Samba but perhaps the file caching or recycle bin plugins?

    Screenshot_20231229-201853.png

  3. I hate to beat a dead horse, but has Lime Tech resolved this issue yet? I used to use Solid Explorer on Android all the time to manage files on my shares. Ever since 6.11, I have also been experiencing failures when deleting directories that contain files. I've tried all of the file managers listed in this thread and none of them work. I'm sure I'm not the only one with this problem, and it would be great if the devs prioritized it.

  4. Hi guys. Is anyone able to get NVIDIA HW acceleration working with the Shinobi docker? I modified the template to include --nvidia=all and added the appropriate GPU ID and 'all' to the capabilities variables. However, Shinobi throws errors in the log:

     

    Unknown decoder 'h264_cuvid'

     

    Shelling into the container and running nvidia-smi shows the GPU status. However, ffmpeg does not show "--enable=nvidia" as a compiled-in configuration option. I searched apt for an ffmpeg build with NVIDIA support, but it doesn't seem to exist, so it appears you need to compile it yourself. Is this correct, or is there an easier way to make this work?

     

    BTW, Plex and Tdarr both work with my GPU, so the issue is not Unraid or the passthrough.
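    In case it helps anyone else diagnose this, here's a quick way to check whether a given ffmpeg build has NVIDIA decode support compiled in (a sketch; exact flag names in the build configuration vary between builds):

```shell
# List hardware-accelerated decoders compiled into this ffmpeg build;
# a CUVID-capable build lists entries such as h264_cuvid.
ffmpeg -hide_banner -decoders 2>/dev/null | grep cuvid || echo "no cuvid decoders found"

# The build configuration should show flags like --enable-cuvid or
# --enable-nvdec when NVIDIA support was compiled in:
ffmpeg -hide_banner -buildconf 2>/dev/null | grep -E 'cuvid|nvdec|nvenc' || echo "no NVIDIA flags in build config"
```

    If neither command turns anything up, the container's ffmpeg simply wasn't built with NVIDIA support, and no amount of GPU passthrough will help.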

  5. 7 minutes ago, jserio said:

    I've been on v16 for some time and finally went through the multiple upgrade process to get to 18. Everything went well but my homepage shows what appears to be a spinning icon (as if it's scanning for something). The icon never goes away. Restarting the container doesn't help. This is on Firefox. Only add-on is UBlock Origin which is disabled for my Nextcloud host.

     

    Any ideas?

     

     

    To answer my own question, it looks like it has something to do with "Use rich workspaces." Disabling this removes the icon.

  6. I've been on v16 for some time and finally went through the multiple upgrade process to get to 18. Everything went well but my homepage shows what appears to be a spinning icon (as if it's scanning for something). The icon never goes away. Restarting the container doesn't help. This is on Firefox. Only add-on is UBlock Origin which is disabled for my Nextcloud host.

     

    Any ideas?

    Capture.PNG

  7. Hey guys. I'm having some trouble with file permissions after a transcode. Media directories are owned by nobody:users with mode 666 (rw-rw-rw-). PUID and PGID are 99 and 100 in the container settings for Tdarr. However, after a transcode, when Tdarr copies the new file back to the original location, the file permissions are root:root 644. This causes problems when I need to write to those directories later.

     

    I have tried adding "-e UMASK_SET=0000" to the Extra Parameters section of the container, and this has had no effect. I've also tried UMASK, and I've added both as variables in the container settings with no such luck. For kicks, I've tried other UMASK values as well; no change.

     

    Any ideas or suggestions?
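    For what it's worth, here's a quick shell demonstration of what UMASK_SET=0000 is supposed to achieve inside the container, assuming the image's init script actually applies it (the variable name differs between image maintainers, which may be the problem here):

```shell
# A umask of 0000 masks off no permission bits, so newly created
# files get the full default mode for files, which is 666.
umask 0000
tmp=$(mktemp -d)
touch "$tmp/demo"
stat -c '%a' "$tmp/demo"   # prints 666 when the umask is honored
rm -r "$tmp"
```

    If files still come out root:root 644, the transcode process is most likely running as root with a default umask, i.e. the container never applied the variable at all.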

  8. Hey guys. I have an issue accessing the Pi-Hole docker from outside my network over Wireguard. I believe this is a known issue but I can't seem to find a definitive answer.

     

    My Unraid server is 192.168.1.110

    Pi-Hole is using custom br0 192.168.1.111

    No other dockers use a custom br0.

     

    When outside my network using Wireguard, I can access Unraid and all other dockers, in addition to all other devices on my LAN (router, Windows clients, etc). However, I am unable to access Pi-Hole. Can someone confirm if this is a known issue and if it will be fixed in a future release?
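    In case it's useful context: the usual explanation is that Docker's macvlan driver (which backs custom br0 networks) deliberately blocks traffic between the host and its macvlan containers, and WireGuard traffic enters via the host. A workaround people describe is a macvlan "shim" interface on the host; the sketch below is hypothetical (the shim IP 192.168.1.120 is made up; use any unused LAN address), and newer Unraid releases expose a "Host access to custom networks" toggle in Docker settings that is meant to do this for you:

```
# Create a macvlan shim so host-originated (and WireGuard-forwarded)
# traffic can reach containers on the br0 macvlan network.
ip link add br0shim link br0 type macvlan mode bridge
ip addr add 192.168.1.120/32 dev br0shim   # unused LAN IP for the shim
ip link set br0shim up
ip route add 192.168.1.111/32 dev br0shim  # route Pi-Hole's IP via the shim
```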

  9. 20 hours ago, binhex said:

    hmm ok, sadly that particular A record only resolves to a single IP address, so I can see why it may go down. Ideally it should resolve to multiple IP addresses, each of which is a VPN server; OpenVPN can then randomly reconnect to another server on disconnection. An example of this from PIA:-

     

    
    Name:    swiss.privateinternetaccess.com
    Addresses:  185.230.125.48
              185.230.125.49
              185.230.125.51
              185.230.125.85
              195.206.105.213
              82.102.24.162
              91.132.136.44
              185.156.175.91
              185.212.170.178
              185.212.170.180
              185.212.170.188
              185.230.125.35
              185.230.125.43

     

    at the moment this docker does not support multiple remote lines in the ovpn file (but as mentioned does support multiple server connections) so you might be stuck for now unless you switch vpn provider.

     

    Interestingly, I went to the airvpn page to choose a few different servers and noticed this on the page (never saw it before):

     

    Quote

    DNS are updated every 5 minutes to reflect the best recommended servers available in the areas

     

    I suspect that when the issue arises, Deluge is caching the original IP and not resolving a new one. Is there any way to force a flush of the DNS cache?
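    If anyone wants to watch the resolution behavior, something like this from the container's shell shows what the system resolver returns over time (a sketch; substitute the provider hostname for the placeholder):

```shell
# Query the resolver a few times; if the provider rotates its A record
# every few minutes, the returned address should eventually change.
HOST=${HOST:-localhost}   # e.g. HOST=us.vpn.airdns.org
for i in 1 2 3; do
  getent hosts "$HOST"
  sleep 1
done
```

    One caveat: plain glibc does not cache DNS lookups, so any "caching" here would more likely be OpenVPN holding on to the address it resolved at startup for the life of the process; restarting the container forces a fresh lookup.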

  10. 13 minutes ago, binhex said:

    Why not download a VPN configuration file that uses a hostname rather than an IP address, that way you are not targeting a specific server.

    Sent from my CLT-L09 using Tapatalk
     

    Thanks @binhex for the reply. The ovpn file does indeed use the hostname (remote us.vpn.airdns.org 443). When the issue reoccurs I'll post the logs here.

  11. Hey guys. I've used AirVPN for years now but it seems that every few weeks DelugeVPN will go into a disconnect/reconnect loop every 2-3 minutes. I believe it's due to the particular server going down or having issues. Usually, when this happens I just download a new config file from AirVPN and copy it to the Deluge openvpn directory, restart the docker and all is well for a few weeks.

     

    Is it possible to copy several openvpn config files to this directory and have Deluge rotate through them if it disconnects from a server?
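    For reference, stock OpenVPN can do this rotation itself when the config lists several remote entries (a hypothetical fragment; the second hostname is made up, and this only helps if the container passes the full config through to OpenVPN rather than parsing a single remote line):

```
# client.ovpn - sketch of multi-remote failover
remote us.vpn.airdns.org 443
remote nl.vpn.airdns.org 443   # hypothetical second gateway
remote-random                  # pick a remote at random at startup
connect-retry-max 3            # move on from a dead remote after 3 tries
```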

  12. 11 hours ago, Hoopster said:

    This is a well-known issue with the MX500 and there are some threads in these forums going back over a year on this issue.  It is a Crucial firmware bug and they have never fixed it.

     

    I have this same drive in a 500GB capacity.  The only way to "permanently" disable the warning is to disable unRAID tracking of SMART attribute 197 for those SSDs.

     

     

    Thanks for the quick reply. I tried searching prior to posting but didn't see that particular thread. Good to know I'm not the only one with the problem. For now, I've disabled attribute 197.

  13. Using Unraid 6.7.2. Hey guys, I picked up two new Crucial MX500 (2TB) SSDs on Amazon Prime Day: one to replace my cache drive and the other to replace my unassigned Download drive. After installing them (prior to formatting), I saw the following warnings in my dashboard, one for each drive:

     

    Current pending sector count 1 (sdm)

     

    I ran an extended SMART test which resulted in no errors. However, the SMART error log shows the following (for both drives):

     

    Warning: ATA error count 0 inconsistent with error log pointer 4
    
    ATA Error Count: 0
    	CR = Command Register [HEX]
    	FR = Features Register [HEX]
    	SC = Sector Count Register [HEX]
    	SN = Sector Number Register [HEX]
    	CL = Cylinder Low Register [HEX]
    	CH = Cylinder High Register [HEX]
    	DH = Device/Head Register [HEX]
    	DC = Device Command Register [HEX]
    	ER = Error register [HEX]
    	ST = Status register [HEX]
    Powered_Up_Time is measured from power on, and printed as
    DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
    SS=sec, and sss=millisec. It "wraps" after 49.710 days.
    
    Error -3 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours)
      When the command that caused the error occurred, the device was in an unknown state.
    
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      00 ec 00 00 00 00 00
    
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      ec 00 00 00 00 00 00 00      00:00:00.000  IDENTIFY DEVICE
      ec 00 00 00 00 00 00 00      00:00:00.000  IDENTIFY DEVICE
      ec 00 00 00 00 00 00 00      00:00:00.000  IDENTIFY DEVICE
      ec 00 00 00 00 00 00 00      00:00:00.000  IDENTIFY DEVICE
      c8 00 00 00 00 00 00 00      00:00:00.000  READ DMA

     

    I've acknowledged the warning, but it seems to return every few days. I suspect it's just Unraid referring to the most recent SMART error log entry (which happens to be the one above, and appears to have occurred at first power-up). I don't believe both drives are bad; I suspect this is a generic error logged when a new drive is first installed. Is there any way to clear the most recent SMART error log, or otherwise deal with this so it doesn't keep popping up in the dashboard?
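    For anyone monitoring this attribute by hand, the raw value can be pulled out of smartctl's attribute table like so (a sketch: the sample line below is fabricated purely to show the parse, and /dev/sdX stands in for your actual device):

```shell
# On a live system:  smartctl -A /dev/sdX | awk '$1 == 197 {print $2, "raw value:", $NF}'
# Here the same parse runs over a fabricated sample line in smartctl's format:
sample='197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 1'
echo "$sample" | awk '$1 == 197 {print $2, "raw value:", $NF}'
```

    On the MX500 the raw value tends to bounce between 0 and 1 due to the firmware bug discussed above, which is why disabling Unraid's tracking of attribute 197 is the usual answer.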

  14. 19 hours ago, Squid said:

    Is it on a particular screen, or always?

    Sent via telekinesis
     

    It's on the main Apps page. I opened up Chrome's dev tools and see this in the console (hope this helps). The error appears after I type in "beet" (without pressing Enter).

     

    image.thumb.png.78b845bcd7f7e89d9293330639f51be0.png

  15. 2 hours ago, saarg said:

     

    You have probably not used -u abc when you execute the command to import the music. Then you run beet as root and your files will end up with root:root permissions.

    You have to run it as abc as that is mapped to 99/100.

     

    
    docker exec -u abc -it beets <the command to import>

     

    Forgive my ignorance, but I haven't used the beet command this way. I open the beets console and then, within the container, run beet import.... Is this not the correct way? If I run the command from outside the container, will the permissions be correct? I do know that the user inside the container appears to be 'abc'.

  16. Hey guys. I've used both this docker and the thetarkus version, and they both exhibit this problem. When importing new files, when beets moves/copies them to the destination, the file ownership is root:root, even though the docker settings for UID/GID are 99/100. This requires me to chown/chmod after each import to be able to edit the files later on. Short of writing a cron job, is there a way to fix this? Or does beets support a post-processing script (I couldn't find anything in their docs) that can call external commands? Any help would be appreciated.

  17. Hey guys. A quick search didn't yield anything. Has anyone noticed that in the last few versions of CA, the auto-complete in the search box no longer works? I've disabled all ad blockers and browser plugins, and tried in both Chrome and Firefox; nothing. The changelog notes a few autocomplete changes but nothing about it being removed. Is this a widespread issue or just me?