Everything posted by JonathanM

  1. Click Compute All, wait a while, then refresh the page and see what comes up.
  2. Redundancy or speed, depending on the RAID level configured. Poorly. https://carfax.org.uk/btrfs-usage/
  3. Shares tab; Compute should get you closer.
  4. Not without setting up from scratch, including re-adopting your equipment and setting up the networks. Set a fixed tag as described in the recommended posts at the top of every page in this topic; a sketch of what that looks like is below.
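     As a rough sketch, pinning a fixed tag just means putting an explicit version in the Repository field of the container template instead of relying on latest. The repository and tag below are placeholders, not a recommendation -- use the tag from the recommended posts:

         # floating tag: updates can jump major versions under you
         lscr.io/linuxserver/unifi-controller:latest
         # fixed tag (version number here is hypothetical): the application stays put
         lscr.io/linuxserver/unifi-controller:version-7.3.83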
  5. Should be fine for reasonable lengths and drive counts. The root issue is the voltage drop from one end to the other, so amps and gauge aren't enough information; you need the total length of the run as well. 16 gauge is plenty thick for a 1 foot run powering up 10 spinning hard drives, and not nearly thick enough for 250 feet. There are calculators online for voltage drop and such, but you have to make some assumptions about just how tolerant your drives are to low voltage. In general, a properly put together custom system should be leaps and bounds better than a few generic splitters and extenders, especially when those are using cheap stamped pins and aluminum wire.
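     A rough worked example (the drive and wire numbers are ballpark assumptions, not specs): ten 3.5" drives spinning up can pull roughly 2 A each on the 12 V rail, about 20 A total. 16 AWG copper is roughly 4 milliohms per foot, and the current travels out and back, so a 1 foot run is about 8 milliohms of loop resistance:

         V_drop = I x R = 20 A x 0.008 ohm = 0.16 V   (about 1.3% of 12 V -- fine)

     Stretch that to 250 feet (500 feet round trip, about 2 ohms) and the same formula asks for a 40 V drop, which the 12 V rail obviously can't supply; the voltage simply collapses under load.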
  6. To extend that a little: you should definitely install either apcupsd or nut (whichever you have managing the host Unraid) on each VM, and set it to shut the VM down pretty much immediately when power goes out. That way the VMs are already down and Unraid can shut itself down without hanging. Both nut and apcupsd can easily be configured to watch a host instance and react.
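     A minimal sketch for the apcupsd case, assuming the host's apcupsd has its network information server enabled on the default port 3551 (the IP is a placeholder for your Unraid host):

         # /etc/apcupsd/apcupsd.conf on the VM
         UPSCABLE ether
         UPSTYPE net
         DEVICE 192.168.1.10:3551
         TIMEOUT 10   # shut this VM down ~10 seconds after the host reports power loss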
  7. Yeah, the cross-sectional area of the wire and all the contact points in the chain are very important to keeping the voltage steady under load. Longer runs need thicker wires, and every junction MUST have a large, solid contact patch. 4-pin connectors are better than SATA power wires, but only if the pins and sockets are all precisely the correct dimensions, not bent or corroded.
  8. Yes. It's pretty rare (it's never happened to me, and I can't remember hearing about any incidents) for an update to a fixed tag to break something. BTW, to tag someone, type the @ key to bring up the list, but do NOT finish typing the user ID; click the correct name in the popup that appears when you @.
  9. A container is more than just the application; think of it as a miniature virtual machine. It has an entire operating system, albeit only the pieces essential to supporting the specific application. The tag means the application itself will not be updated, only the supporting OS files inside the container. As an aside, one of the reasons docker containers can be so efficient is that they share common pieces between them. So if you have a bunch of LSIO containers using the same internal OS, it's not duplicated no matter how many different containers use those same basic pieces. Running multiples of the same container with different appdata uses almost zero additional resources in the docker image.
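     For example (the container names and paths here are illustrative), two instances of the same image share every read-only layer on disk; only the thin writable layer and the appdata differ:

         docker run -d --name radarr-hd -v /mnt/user/appdata/radarr-hd:/config lscr.io/linuxserver/radarr
         docker run -d --name radarr-4k -v /mnt/user/appdata/radarr-4k:/config lscr.io/linuxserver/radarr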
  10. Exactly correct. Please read this post. https://forums.unraid.net/topic/38582-plug-in-community-applications/?do=findComment&comment=1044792
  11. Post the docker run commands for Sonarr and the downloading container.
  12. Not currently. The root user is only used to manage the server, it's not allowed to connect to your shares.
  13. You either have to rebuild the parity disk and lose the data that was written to the data slot while it was disabled, or rebuild the data disk from parity. Your choice, but if the emulated disk is mounting and reading properly it's better to rebuild the data disk to match.
  14. It only works for an array WITH a failed drive. If you don't have any failed drives, you simply replace the parity drive with a larger one and rebuild parity on it.
  15. A custom-soldered, heavier-gauge wiring harness, ideally fed directly from the PSU circuit board. Not an option unless you're adept at electronics and soldering your own wiring.
  16. It's supposed to reduce the amount of typing by giving helpful information automatically.
  17. SAS is the only external multi-disk connection with solid performance. I don't know of a way to reliably use a NUC with external array drives at the moment. Technically USB will work, but you will likely experience reliability and speed issues.
  18. The 'arrs are configured to move the file instead of copying it.
  19. I don't think hardlinks work like that; I could be wrong, but I think moving the source file breaks the hardlink. So if you need to use hardlinks, I think you will have to set cache: no so the downloads go directly to the array. I don't use hardlinks, so maybe there is a better way of doing things that I'm not aware of. The user share system complicates hardlinks greatly. A quick way to test the behavior is sketched below.
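     If you want to test it yourself, here's a quick sketch (paths are examples; note hardlinks can never span devices, so stay on a single disk):

         cd /mnt/disk1/downloads
         echo test > original
         ln original linked                 # second name for the same inode
         stat -c '%h %i' original linked    # link count 2, same inode on both
         mv original renamed                # a rename on the same filesystem keeps the link
         stat -c '%h %i' renamed linked     # still linked; a move across devices is a copy+delete and breaks it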
  20. Shouldn't matter; /mnt/user/downloads shows all the content of /mnt/disk1/downloads and /mnt/cache/downloads, so that shouldn't be the issue. However, why not set the downloads share to cache: only?
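     You can see the union directly from the command line:

         ls /mnt/disk1/downloads /mnt/cache/downloads   # the per-device pieces
         ls /mnt/user/downloads                         # the merged user share view of both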
  21. Sure, set that share to cache: only. Yes, shares can have folders on multiple array drives and pools. You would set the share to cache: no if you wanted new files written to the array, or cache: only if you wanted new files written to cache; mover ignores both no and only. Yes. When you add new drives the parity will no longer be a mirror, but it will still allow reconstruction of a single failed drive as long as the other data drives are ok. Previous versions of Unraid only allowed one pool, and it was always called cache; the ability to set up multiple pools, and to name them differently, is new, but the usage is still the same. There is a restriction on automatic scheduled moves: each share can only have one pool assigned for scheduled movement, but if you manually put files in a root folder with the share's name on a different pool, that folder is still part of the user share. Shares with cache: yes and cache: prefer will move files between their assigned pool and the main array when the mover schedule dictates.
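     For reference, a sketch of where those settings live on disk -- normally you set them in the GUI, and the exact key names may vary by Unraid version, so treat this as illustrative only:

         # /boot/config/shares/downloads.cfg (excerpt; share name is an example)
         shareUseCache="only"     # yes / no / only / prefer
         shareCachePool="cache"   # which pool the share is paired with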
  22. What exactly was rude about my response? I have no idea of your level of knowledge, and I couldn't figure out from your question what the end goal was. When you set up a share, you tell Unraid what drives you want to use for the share, how you want new files to be allocated with split levels and file allocation settings. If you set the split level too low and the allocation to most free, then each new file written to the share will likely end up on a different disk. This is a perfectly valid way of using Unraid, so unless you have a reason to change that, there is no need to mess with where files are stored. That's why I asked what you wanted to accomplish.
  23. That's how user shares work. What are you trying to accomplish?