trurl

Moderator · 44,098 posts · 137 days won

Everything posted by trurl

  1. Yes and no, sort of. User shares are indeed built and run in memory, but that memory can be reclaimed by any and all other processes. When something is accessed, the disks are spun up to rebuild that portion of the user share filesystem. Cache Dirs keeps the underlying disks' contents fresh in memory, so accesses are nearly instantaneous. So to answer the OP's question directly: you will see a benefit from running Cache Dirs on the DISKS that make up the user share you wish to cache, as long as you have enough RAM that the directory tree can actually stay cached without being overrun by other processes. Whether or not you use disk shares has no bearing on Cache Dirs being useful to user shares. Where people run into issues is trying to use Cache Dirs to keep too much of the directory tree in memory. As soon as it finishes walking the disk, something else comes along and needs that RAM, knocking the directory list out of the cache; the disk then stays spun up as Cache Dirs reads it into RAM again, causing a loop.

     OK, that's basically the way I thought it worked. My users and my apps are only given access to user shares, and I only access disk shares as root from telnet when needed. I had thought my user shares benefited from Cache Dirs, and that's what it sounds like you are saying. I have plenty of RAM, so I haven't ever had an issue with drives spinning up because of Cache Dirs.
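The mechanism described above can be sketched in a few lines of shell (the paths and the loop are made up for illustration; the real Cache Dirs plugin is considerably more elaborate):

```shell
#!/bin/sh
# Walk a directory tree, discarding the output. The useful side effect
# is that the kernel's dentry/inode caches for that tree stay warm, so
# later lookups through the FUSE user share need no disk access.
cache_walk() {
    find "$1" >/dev/null 2>&1
}

# Hypothetical loop over the disks backing one user share:
# while true; do
#     cache_walk /mnt/disk1/Movies
#     cache_walk /mnt/disk2/Movies
#     sleep 60
# done
```

If the tree is too big for the available RAM, each pass evicts what the previous pass cached, which is exactly the spin-up loop described above.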
  2. Could you give a reference for your understanding that user shares are already managed in memory?

     The User Share file system is a virtual file system built by FUSE in memory, which should make it unnecessary (and wasteful and inefficient) to cache it in additional memory. I've always felt that Cache Dirs is not useful at all for user shares, but I vaguely remember a very knowledgeable user saying there WAS a use in certain circumstances, which I don't remember. I'd say uninstall Cache Dirs and see whether browsing the shares ever spins up spun-down drives.

     OK. I found Joe L.'s post in the original thread. I had thought that if you run it on the drives, the user shares already got the benefit. Did he mean the user shares already have the directories cached even if you don't run it at all?
  3. Unassigned Devices and SMB Mounts are not mounted as unRAID user shares (/mnt/user). They are mounted at /mnt/disks and are shared using the smb-extra.conf configuration file. The unassigned device or SMB mount is shared and shows up as an SMB share, but unRAID is not aware of it as a user share. Mount a device and look at the /boot/config/smb-extra.conf file and you'll see an include that inserts the share properties of the device. The include file is created by Unassigned Devices when the share is mounted and the share switch is turned on. The include file is located at /etc/samba/unassigned-shares/.

     I mostly knew all this, but kizer is changing it from /mnt/disks/name-you-want-for-share to /mnt/user/share-that-he-created, if I understand him correctly.
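Roughly what that chain looks like on disk — the share name MYDISK here is purely illustrative:

```
# /boot/config/smb-extra.conf (fragment written by Unassigned Devices)
include = /etc/samba/unassigned-shares/MYDISK.conf

# /etc/samba/unassigned-shares/MYDISK.conf then carries the actual
# [MYDISK] share definition pointing at the /mnt/disks mount point.
```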
  4. Since you can mkdir path-to-mount-point then mount .... path-to-mount-point, it makes sense that this would work. What I wonder about is what happens when path-to-mount-point is inside a user share, as kizer is doing. This method is even in the wiki. But how does unRAID treat this "folder" when it comes to calculating the free and used space in the user share, or when doing other things with the folder? If you cache that user share and write to that folder within the user share, does the write go to the cache drive, the external drive, or where? If it does go to cache, where does it get moved to? Etc.
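The wiki method in question, roughly sketched — the device name and paths are examples only, and these commands need root and a real device:

```
mkdir -p /mnt/user/Backups/external   # mount point inside a user share
mount /dev/sdx1 /mnt/user/Backups/external
# ... use the disk ...
umount /mnt/user/Backups/external
```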
  5. I don't think you can expect to have SMB shares show up as unRAID user shares, or to get unRAID to share SMB shares, if that is what you are trying to do. User shares are implemented at a lower level than I think this plugin does (or even should) work at.
  6. Could you give a reference for your understanding that user shares are already managed in memory?
  7. Sounds like you may have found a way to trick it into preclearing your flash!
  8. Why have you posted this as a plugin when your description says it is a docker?
  9. No need for repositories. If you click Add from CA then it fills in the page for you. Selecting a template is only for when you want to get the page filled in from one you have already saved.
  10. I agree so completely I could have written Gary's post! I too use Corz and would like to see a fully supported tool that creates and maintains the separate hash files. Maybe bonienl will reconsider and add that alternative some day?

      It's a little disappointing — thought we had a good hash tool and PAR2 on the way — but I can understand if Squid is unwilling to enslave himself to us! He knows better than most of us the work, the commitment, and the responsibility involved in maintaining an important plugin. Always better sooner than later to admit the drive and interest is not there, before more users are relying on it.

      +1 I am still using this.
  11. Community Applications has a feature that will help you with trying to use dockers out on the wild, wild web. The unRAID docker page is really just giving you a form to enter parameters for the docker run command; nothing special about the way unRAID does this, really. If you have some dockers whose documentation tells you how to run them, then you should be able to give it a try.
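To illustrate that mapping — the image name, port, and paths here are all hypothetical — each field on the unRAID docker page corresponds directly to a docker run flag:

```
# -d           run detached
# --net        Network Type field
# -p host:ctr  Port Mappings field
# -v host:ctr  Volume/path mappings field
# -e NAME=VAL  Environment variable field
docker run -d --name=myapp --net=bridge \
  -p 8080:8080 \
  -v /mnt/user/appdata/myapp:/config \
  -e TZ=America/New_York \
  someauthor/myapp
```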
  12. Even if you only want a NAS, V6 gives notifications of SMART issues and other things which can prevent you from accumulating problems until you have more than you can recover from. And there are other features besides VMs and dockers that make it a better NAS. Also, V6 is going to be better supported by the forum since that is what most will be running.
  13. Without a parity drive you can just stop and change anything. Nothing will be lost. You really should get parity going though. And you should have backups of anything important even if you have parity.
  14. Just thought I would chime in with my experience. As far as I know I am using the latest version without issue. My use may not be typical. I usually only mount a single NTFS drive for as long as it takes to rsync update my offsite backup disk, less than 30 minutes. I do use the scripting capability to trigger my backup when the disk is mounted. In addition to that I usually access preclear from this page but that doesn't mount any partitions of course. I would hate to see this go away, but I do know how to mount my disks from the command line as documented in the wiki.
  15. I think your latest post got lost in quoting. Is this it?
  16. Looking at your post history I assume you were using the plugin. You should be able to see the results by clicking on the "eye" next to the disk. The results are also saved to your flash.
  17. Did you ever get an answer? I have just started adding Dockers to my setup and was puzzled to see no port numbers etc. in the config for the web interface. Is that not configurable, or is this Docker broken?

      Port Mapping is only available when Network Type is Bridge. Also try the Advanced Settings slider in the upper right.
  18. unRAID can only tell you when the docker has been updated. It can't really tell when the application in the docker is updated because the details on how to determine this can vary widely for different dockers. When I open my Plex Server WebUI it tells me there is an update available, and I just close it and restart the docker to get the update.
  19. Parity check will not do anything for these problems. Go to Tools - Diagnostics and post the complete diagnostics zip.
  20. I use this to mount NTFS all the time. Possibly you have multiple partitions on these disks. You have to mount a specific partition. Is there a '+' icon next to the drives that would let you expand to see the partitions? Maybe post a screenshot showing what you are talking about.
  21. Haven't tried it myself, but like Squid said, just edit the one you want to duplicate and change its name and whatever settings you need to be different. The docker template and the docker container both get whatever name you give it so if the name is different, you have a different one to run. I doubt the original will be removed or even stopped, but even if it is, you can just reload it from its template. Try it and let us know if it works for you.