je82

Members
  • Content Count

    250
  • Joined

  • Last visited

Community Reputation

16 Good

About je82

  • Rank
    Advanced Member

Converted

  • Gender
    Male


  1. I tried both through the GUI, and nothing happened when I hit the button to delete; it did, however, generate the error I posted above. If I removed the folder via the CLI, it still appeared among the shared folders in Unraid. It did resolve itself when I restarted Unraid though, so it's not really a problem; it hasn't happened before, so it's probably just a rare bug.
  2. Hmm, I'll take a look next time it updates and see whether it appears in the template again. I managed to remove the jDownloader share after an Unraid array restart.
  3. From the log when trying to delete the jDownloader share:
     Mar 16 09:10:04 NAS nginx: 2020/03/16 09:10:04 [error] 4214#4214: *13195377 connect() to unix:/var/run/emhttpd.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 1.1.1.2, server: , request: "POST /update.htm HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm", host: "nas.darkone.network.local", referrer: "http://nas.darkone.network.local/Shares/Share?name=jDownloader"
  4. Can you please remove the default path /user/jDownloader that keeps getting added every time the Docker container updates? Every time I update the jdownloader2 Docker I get a share called "jDownloader", and this time around the share is also persistent and cannot be deleted.
  5. I run the Docker container ich777/jdownloader2. It keeps adding a jDownloader share every time it is updated, which has been annoying, but until now I've been able to delete the share without problems. This time I cannot delete the share. I can go to /mnt/user/ and rm -R jDownloader/, but the share remains in the Unraid UI. I cannot delete the share via the UI either, even though it tells me the share is empty and I tick the box to delete and hit apply. I've turned off the Docker container and removed the path that keeps getting added to the config every time it updates to create this share. The share still remains: it cannot be deleted via the GUI, and if deleted via the CLI it still shows in the GUI. What do I do? EDIT: Running 6.7.2
  6. Setting case sensitive to yes in the SMB config: does that mean that in order to access a file on an SMB path you have to write out the path and filename with exactly the same case as it is stored? (See the config sketch after this list.)
  7. I had a mount in fstab such as \\1.1.1.3\Logs. I removed this mount from fstab and ran umount /home/nasbackuplogs/; the unmount was successful. Then I checked cat /proc/mounts and there is no mount pointing to \\1.1.1.3\Logs. Yet my syslog is filling up every second with:
     Feb 15 15:39:19 NAS kernel: CIFS VFS: BAD_NETWORK_NAME: \\1.1.1.3\Logs
     Feb 15 15:39:22 NAS kernel: CIFS VFS: BAD_NETWORK_NAME: \\1.1.1.3\Logs
     Feb 15 15:39:24 NAS kernel: CIFS VFS: BAD_NETWORK_NAME: \\1.1.1.3\Logs
     (the same message repeats every ~2 seconds, through 15:39:42 and beyond)
     Where do these come from? There is no mounted volume \\1.1.1.3\Logs, yet it keeps spamming this message. Any idea why it would keep appearing? (See the CIFS checks after this list.)
  8. Well, that was easy. I realize there are ctime and atime in Unix as well as mtime; never mind.
  9. Hi, I've set up rsync to run every 3 days and copy the files that are important; files that have been removed or changed are automatically moved on the backup server to /mnt/disk5/Deleted by using the "--backup-dir" switch in the rsync command. Now /mnt/disk5/Deleted on the backup server is filling up and I need another script to clean it up. I was thinking of a simple "older than x days" script, but I realize now that it may not work as I want it to, because the files' creation time is not when they are copied to the Deleted folder; it is when they were first created on the backup server. Running a script like the one I pasted should delete all files older than 60 days, but like I said, many files may already be older than 60 days when they first appear in /mnt/disk5/Deleted/, so the effect could be grim, wiping out the entire /mnt/disk5/Deleted/ folder. How would you best go about cleaning up this folder? The reason I ask is that I ran some trials in the folder before actually running it, and even running the command at 1000 days it still finds files in the path that it wants to delete; I started using the Unraid backup server 2 months ago, so there should be no files found when running it at 1000 days. Any ideas? (See the find sketch after this list.)
  10. Thanks. I thought you could have a remote syslog and logging to a share at the same time, to create some kind of backup of the logs in case Unraid goes down and you cannot access the content on the shares. I will set up a job on a server that mirrors the log file created in the share, essentially giving me the same feature (see the rsync sketch after this list). Thanks for the help!
  11. The IP address of a remote syslog server on the same LAN that has port 514 UDP open for incoming messages.
  12. Hi, I've set up syslogging both to a remote server and to a share, and it appears not to be doing any logging at all. Have I missed anything? I'm on Unraid version 6.7.2. No log files appear in the syslog share, and no remote logging is being received from the Unraid system either. Do I have to restart the syslog service via the CLI, or restart the entire system? The temporary syslog in /logging.html seems to be getting messages. (See the logger test after this list.)
  13. Thanks for the heads-up. I reopened this issue, as it seems to persist even when hardlinking is disabled.
  14. I was reading through the documentation for the sFTP Docker container, and it appears I have to specify the home path for each user directly in the startup string of the container. Does this mean I can only have one user with one home path per SFTP container, and not multiple users with multiple different home paths? (See the users.conf sketch after this list.)
  15. So hardlinking does not exist in 6.7.2? Then I wouldn't be affected by upgrading and disabling it, as it is obviously something I do not use, even if I thought my Radarr/Sonarr hardlink setup was working correctly; I guess it never was. Thanks for the heads-up! (See the hardlink check after this list.)
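
On the case-sensitivity question in post 6: a minimal Samba excerpt as a sketch, assuming the option is added to the share's section (the share name here is hypothetical), e.g. via Unraid's SMB extra configuration. With case sensitive = yes, Samba stops matching names case-insensitively, so a client asking for file.TXT will not find File.txt; the Samba default is auto.

    [MediaShare]                 # hypothetical share name
        case sensitive = yes     # paths and filenames must match the stored case exactly
        preserve case = yes      # keep whatever case a client uses when creating files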
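
On the BAD_NETWORK_NAME spam in post 7: a sketch of the checks I would run, assuming the kernel CIFS client still holds a session or something is still retrying the old mountpoint even though nothing shows in /proc/mounts (the mountpoint is the one from the post).

    grep cifs /proc/mounts            # any CIFS mounts the kernel still knows about
    cat /proc/fs/cifs/DebugData       # live CIFS sessions/tree connects, if the cifs module is loaded
    umount -l /home/nasbackuplogs/    # lazy unmount in case the old mountpoint is still held open
    fuser -vm /home/nasbackuplogs/    # list any processes still using that path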
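
For the cleanup in post 9: a minimal find sketch. Whether rsync renames or copies a file into --backup-dir, the inode's ctime is set at that moment, while mtime may still be the original file's timestamp (rsync -a preserves it), so filtering on ctime gives "days since the file landed in Deleted" rather than "days since it was last modified".

    find /mnt/disk5/Deleted -type f -ctime +60 -print              # dry run: list what would be removed
    find /mnt/disk5/Deleted -type f -ctime +60 -delete             # then delete for real
    find /mnt/disk5/Deleted -mindepth 1 -type d -empty -delete     # prune directories left empty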
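
For the log-mirroring job mentioned in post 10: a sketch of a pull job on the remote machine, assuming SSH access to the Unraid box; the host and paths are hypothetical.

    # crontab entry on the backup machine: pull the syslog share every 5 minutes
    */5 * * * * rsync -a root@nas:/mnt/user/syslog/ /backup/unraid-syslog/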
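
For the syslog question in post 12: a quick way to test both destinations with util-linux logger, independent of the GUI settings (the collector IP is hypothetical).

    logger "syslog share test"                        # should end up in the file on the syslog share
    logger -n 192.168.1.50 -P 514 -d "remote test"    # send one UDP datagram straight to the collector
    ps aux | grep '[r]syslogd'                        # confirm the daemon is running with the new config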
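
On the multi-user question in post 14: it depends on the image, but as a sketch, an atmoz/sftp-style container (an assumption; the post does not name the image) accepts one user per line in a mounted /etc/sftp/users.conf instead of a single user in the startup string, each user chrooted to their own home.

    # /etc/sftp/users.conf - user:password:uid:gid:dir (all values hypothetical)
    alice:secret1:1001:100:upload      # creates /home/alice/upload for alice
    bob:secret2:1002:100:incoming      # creates /home/bob/incoming for bob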
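
For the hardlink question in post 15: a quick sanity check, assuming source and link sit on the same share/filesystem (paths are hypothetical). A link count of 2 and matching inode numbers mean the hardlink was created; on a mount without hardlink support the ln command itself fails, and tools like Radarr/Sonarr then fall back to copying.

    touch /mnt/user/media/hardlink.src
    ln /mnt/user/media/hardlink.src /mnt/user/media/hardlink.dst
    stat -c '%h %i %n' /mnt/user/media/hardlink.src /mnt/user/media/hardlink.dst   # link count, inode, name
    rm /mnt/user/media/hardlink.src /mnt/user/media/hardlink.dst                   # clean up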