jonathanm

Moderators

  • Content Count: 10296
  • Days Won: 34 (last won the day on July 25)
  • Community Reputation: 930 (Guru)
  • Followers: 8
  • Rank: unAdvanced Member
  • Gender: Undisclosed


  1. I'm not quite following, but if you need NPM as the gateway, then just use a plain vanilla Apache or nginx container to host the static site and point NPM at that container (a stand-in sketch follows after this list). I use LSIO's LE with basic authentication for some static pages, as well as using it to reverse proxy a bunch of other sites on my LAN, some on Unraid, some hosted on VMs, etc.
  2. You said that's where /data is mapped to, so job done. /data inside the container IS /mnt/user/Downloads/Incomplete on the host.
  3. Exactly. Keeping parity valid all the time, every time, is the name of the game. If parity must be deemed invalid and corrected or rebuilt, it had better be for a good reason. If your equipment is healthy and nobody downs the server without stopping the array, parity should never be wrong, and any fault should be investigated and corrected. Incorrect parity means a failed drive will be recreated with wrong data. You can only hope the flipped bits land in an unallocated section or the picture area of a media file; if they land in a filesystem area, you are looking at filesystem corruption (see the toy parity sketch for #3 after this list). Data integrity is the "You had ONE job" of a NAS.
  4. The LSIO Letsencrypt container would be the typical choice. This particular container is set up for proxy, not hosting.
  5. So, you are telling qBittorrent to save the downloads to /data, correct?
  6. The default position is to be sure parity is always fully correct, and to alert if that's not the case, so the user can take action to make parity correct again. If you change that logic and assume parity isn't valid for some portion of the parity disk, then many changes would need to be made to account for that. When you make such low-level decisions in code that could affect data integrity and drive recovery, validating all the possible scenarios and ways it could mess up is a monumental task.
  7. No, Unraid will clear a drive before adding it to the parity protected array if a valid pre-clear signature is not detected. That way parity remains fully valid even if a drive fails during the adding process. The only time parity is recalculated when adding a drive is if you execute a "new config" and tell Unraid to rebuild parity from scratch based on the existing content of the data drives. I know you are going to cry semantics, but it's an important distinction when dealing with parity: an empty drive with a valid filesystem still has data. The filesystem structure itself is covered by parity as well, so a formatted drive with no user data is still full of content as far as parity is concerned. Parity is just ones and zeroes across the entire partition; whether there is a blank XFS filesystem or a filled-to-the-brim encrypted BTRFS volume, it's all the same to parity (see the sketch for #7/#9 after this list).
  8. So what is the internal container path that points to that host path?
  9. So that if you clear a data disk the same size as the parity disk and add it to the array, parity is already valid instead of having to be recalculated (the sketch for #7/#9 after this list shows why). It's much better to always keep parity valid; that way you don't have to code for a bunch of scenarios deciding whether or not to trust that parity is correct.
  10. Containers are sort of like their own little computer: they don't have access to anything unless you map it in their setup configuration (see the path-mapping sketch after this list). Read through this entire article: https://wiki.unraid.net/What_are_the_host_volume_paths_and_the_container_paths
  11. Sorry I can't offer constructive advice, but I dumped CrashPlan when they killed the option to back up to another device you own and forced all backups into their server farm. Do you have any relatives with fast internet who would be willing to host a box that you provide, in exchange for space on your local server and/or shared payment for power and internet?
  12. Lots of RAM. Make sure you limit the number of cached folders since you only have 8GB.
  13. That statement makes me think hardware failure. Are you sure your cooling is ok? Heatsink came loose, fan not working, ambient temp in the room changed?
  14. How frequently that location and speed appear in the database.
  15. As long as you understand that a user share IS the files on the disks, just presented in a different view, you can work out what is safe to do and what isn't. For example, moving a file or folder from /mnt/disk1/share1 to /mnt/user/share1 is going to corrupt whatever you moved, because the paths are actually pointing to the same thing. Moving from /mnt/disk1/share1 to /mnt/disk3/share1 is perfectly fine. Moving from /mnt/user/share1 to /mnt/user/share2 is perfectly fine as well, but may end up with the files on a disk you didn't expect, since Linux first tries a rename, which succeeds, so the files stay on whatever disk they started on. If you move files from user share to user share over SMB, they obey user share allocation rules and go to the disk specified by the destination share. It's a little complex when you first dive in, but if you obey the rule to never mix /mnt/user with array disk paths, you'll be fine (a small guard sketch follows after this list).
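
For #1, a rough stand-in for the idea: the static site lives in its own service, and NPM just proxies to it. Python's built-in HTTP server is used here purely as a placeholder for the Apache/nginx container the post recommends; the document root and port are invented for the example.

```python
# Stand-in for the static-site container: serve a directory over HTTP.
# In NPM you would then add a proxy host pointing at this host and port.
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical document root and port -- adjust to your own setup.
handler = partial(SimpleHTTPRequestHandler, directory="/mnt/user/appdata/static-site")
ThreadingHTTPServer(("0.0.0.0", 8080), handler).serve_forever()
```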
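For #3, a toy single-parity model showing why incorrect parity silently corrupts a rebuild. Real single parity is a bitwise XOR across the data disks in the same spirit, but the byte values here are made up.

```python
# Single parity is a bitwise XOR across all data disks (one byte shown).
d1, d2 = 0b10100101, 0b00111100      # contents of two data "disks"
parity = d1 ^ d2                     # correct parity byte

# Rebuild a failed d1 from parity plus the surviving disk:
assert parity ^ d2 == d1             # valid parity -> exact recovery

# Flip one unnoticed bit in parity and rebuild again:
bad_parity = parity ^ 0b00010000
rebuilt = bad_parity ^ d2
print(f"was {d1:08b}, rebuilt as {rebuilt:08b}")  # one bit wrong, no warning
```

The rebuild "succeeds" either way; nothing flags the flipped bit, which is exactly why parity has to be kept valid up front.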
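For #7 and #9, the same toy model showing why a cleared (all-zero) disk can join the array without touching parity: XOR with zero is a no-op.

```python
d1, d2 = 0b10100101, 0b00111100
parity = d1 ^ d2                    # parity over the existing two disks

new_disk = 0b00000000               # a pre-cleared disk is all zeros
# XOR with zero changes nothing, so parity is already valid for three disks.
# Any non-zero content on the new disk would force a recalculation instead.
assert parity == d1 ^ d2 ^ new_disk
```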
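For #2, #5, #8, and #10, a sketch of what a volume mapping means: a path seen inside the container translates mechanically to a host path. The mapping dict mirrors a hypothetical -v /mnt/user/Downloads/Incomplete:/data flag from the qBittorrent example; the helper function is invented for illustration.

```python
from pathlib import PurePosixPath

# One Docker volume mapping, as in: -v /mnt/user/Downloads/Incomplete:/data
volumes = {"/data": "/mnt/user/Downloads/Incomplete"}

def host_path(container_path: str) -> str:
    """Translate a path as seen inside the container to its host path."""
    p = PurePosixPath(container_path)
    for cpath, hpath in volumes.items():
        if p == PurePosixPath(cpath) or PurePosixPath(cpath) in p.parents:
            return str(PurePosixPath(hpath) / p.relative_to(cpath))
    raise FileNotFoundError(f"{container_path} is not mapped into the container")

print(host_path("/data/linux.iso"))  # /mnt/user/Downloads/Incomplete/linux.iso
```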
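For #15, a small guard that encodes the rule from that post: never mix /mnt/user with /mnt/diskN (or /mnt/cache) paths in one move, because both views can point at the same underlying file. The function is hypothetical, just to make the rule concrete.

```python
import re

DISK = re.compile(r"^/mnt/(disk\d+|cache)/")   # raw disk views
USER = re.compile(r"^/mnt/user/")              # user share (FUSE) view

def move_is_safe(src: str, dst: str) -> bool:
    """False when one side is a user share path and the other a raw disk
    path, since the 'two' files may be the same file seen two ways."""
    mixed = (DISK.match(src) and USER.match(dst)) or \
            (USER.match(src) and DISK.match(dst))
    return not mixed

assert move_is_safe("/mnt/disk1/share1/f", "/mnt/disk3/share1/f")     # disk to disk: fine
assert move_is_safe("/mnt/user/share1/f", "/mnt/user/share2/f")       # share to share: fine
assert not move_is_safe("/mnt/disk1/share1/f", "/mnt/user/share1/f")  # mixed: corruption risk
```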