Everything posted by JonathanM

  1. Go to Tools -> Diagnostics and attach the intact zip file to your next post in this thread. Theoretically what you are describing shouldn't happen: if a read from a drive fails, Unraid should calculate that value from the rest of the drives and write the calculated value back to the drive. If the write fails, the drive will be red-balled and no longer used, and all further access to that drive will be calculated from the rest of the drives, seamlessly allowing you to access the data on the failed drive. From the way you describe it, I suspect you may have more than one failing drive, in which case you are probably either already losing data or just about to.
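     As a rough illustration of how a failed read can be reconstructed (a minimal Python sketch, assuming simple XOR parity and ignoring Unraid's actual on-disk details; the values are made up):
     ```python
     # Hypothetical single-byte sector values on three data drives
     disk1, disk2, disk3 = 0b10110010, 0b01101100, 0b11100001
     parity = disk1 ^ disk2 ^ disk3            # parity byte written to the parity drive

     # Suppose the read of disk2 fails: XOR parity with the surviving drives
     rebuilt_disk2 = parity ^ disk1 ^ disk3
     assert rebuilt_disk2 == disk2             # the missing value falls out of the math
     ```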
  2. Liquid cooling is fine for gaming rigs, but for a server you really should have the thermal mass of a plain old heatsink and fan.
  3. To expand on what @Energen said, unless you are trying to distribute the weight into many lighter packages instead of concentrating it all in the case, removing the hard drives SHOULD be unnecessary. That assumes they are mounted properly; obviously if they are only held in with one screw on a makeshift piece of scrap, then remove them. Bare hard drives are especially vulnerable to impact: the simple act of setting one bare drive on top of another can exceed the design spec for instantaneous G loading. If it made a clack noise, you probably went over the limit. However... your heatsink should be either removed or secured. I've seen multiple instances where a heatsink came off its mounts and played pinball inside a tower. In one shipment from Alaska to the southern US, the end result was not pretty; I think the only thing salvageable was the DVD drive.
  4. Since /mnt/cache and /mnt/disk1, /mnt/disk2, etc. are not included in your mapping, you won't be able to see those disks in Krusader. However, everything that's on those disks should be included in /mnt/user, which shows up under /unraid_shares in your mapping. Before you go changing and possibly messing up or deleting your data, I suggest learning how Unraid user shares work. Because of the way user shares work, you really shouldn't mess with the individual drives that make up the array unless you have a good understanding of what you are looking at. https://forums.unraid.net/topic/32836-user-share-copy-bug/
  5. What are your path mappings for the Krusader container?
  6. Probably because most people actually plug in a monitor to the card.
  7. I'm not quite following, but if you need to have NPM as the gateway, then just use a plain vanilla Apache or nginx container to host the static site and point NPM at that container. I use LSIO's LE with basic authentication for some static pages, as well as using it to reverse proxy a bunch of other sites on my LAN, some on Unraid, some hosted on VMs, etc.
  8. You said that's where /data is mapped to, so job done. /data inside the container IS /mnt/user/Downloads/Incomplete on the host.
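     To make that concrete (a minimal Python sketch; the paths are the ones from this thread, and the mapping table and function name are hypothetical):
     ```python
     # Hypothetical container -> host volume mappings, as set in the Docker template
     MAPPINGS = {"/data": "/mnt/user/Downloads/Incomplete"}

     def container_to_host(path: str) -> str:
         """Translate a path as seen inside the container to the host path."""
         for container_dir, host_dir in MAPPINGS.items():
             if path == container_dir or path.startswith(container_dir + "/"):
                 return host_dir + path[len(container_dir):]
         raise ValueError(f"{path} is not under any mapped volume")

     print(container_to_host("/data/example.iso"))
     # -> /mnt/user/Downloads/Incomplete/example.iso
     ```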
  9. Exactly. Keeping parity valid all the time, every time, is the name of the game. If parity must be deemed invalid and corrected or rebuilt, it had better be for a good reason. If your equipment is healthy and nobody downs the server without stopping the array, parity should never be wrong, and any fault should be investigated and corrected. Incorrect parity means a failed drive will be recreated with wrong data; you can only hope the flipped bits land in an unallocated section or the picture area of a media file, because if they land in a filesystem area you are looking at filesystem corruption. Data integrity is the "You had ONE job" of a NAS.
  10. The LSIO Letsencrypt container would be the typical choice. This particular container is set up for proxy, not hosting.
  11. So, you are telling qBittorrent to save the downloads to /data, correct?
  12. The default position is to make sure parity is always fully correct, and to alert if that's not the case, so the user can take action to make sure parity is correct. If you change that logic and assume parity isn't valid for some portion of the parity disk, then there are many changes that would need to be made to account for that. When you make such low-level decisions in code that could affect data integrity and drive recovery, validating all the possible scenarios and ways it could mess up is a monumental task.
  13. No, Unraid will clear a drive before adding it to the parity protected array if a valid preclear signature is not detected. That way parity always remains fully valid, even in the event of a drive failure during the adding process. The only time parity is recalculated when adding a drive is if you execute a "new config" and tell Unraid to rebuild parity from scratch based on the existing content of the data drives. I know you are going to cry semantics, but when dealing with parity it's an important distinction that an empty drive with a valid filesystem still has data; the filesystem structure itself is covered by parity as well. So a formatted drive with no user data is still full of content for the purposes of parity. Parity is just ones and zeroes across the entire partition; whether there is a blank XFS filesystem or a filled-to-the-brim encrypted BTRFS volume, it's all the same to parity.
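     A quick sketch of why a cleared (all-zero) drive can join the array without touching parity (Python, assuming simple XOR parity; values are made up):
     ```python
     # Parity over the existing data drives (one hypothetical byte per drive)
     disk1, disk2 = 0b10110010, 0b01101100
     parity = disk1 ^ disk2

     # A cleared drive is all zeroes, and XOR with zero changes nothing,
     # so the existing parity is already correct for the expanded array.
     new_disk = 0x00
     assert parity == disk1 ^ disk2 ^ new_disk
     ```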
  14. So what is the internal container path that points to that host path?
  15. So that if you clear a data disk the same size as the parity disk and add it to the array, parity is already valid instead of having to be recalculated. It's much better to always keep parity valid; that way you don't have to code for a bunch of scenarios deciding whether or not to trust that parity is correct.
  16. Containers are sort of like their own little computers; they don't have access to anything unless you map it in their setup configuration. Read through this entire article: https://wiki.unraid.net/What_are_the_host_volume_paths_and_the_container_paths
  17. Sorry I can't offer constructive advice, but I dumped CrashPlan when they killed the option to back up to another device you own and forced all backups into their server farm. Do you have any relatives with fast internet who would be willing to host a box that you provide, in exchange for space on your local server and/or shared payment for power and internet?
  18. Lots of RAM. Make sure you limit the number of cached folders since you only have 8GB.
  19. That statement makes me think hardware failure. Are you sure your cooling is ok? Heatsink came loose, fan not working, ambient temp in the room changed?
  20. Frequency of appearance in the database of that location and speed.
  21. As long as you understand that a user share IS the files on the disks, just presented in a different view, you can work out what is safe to do and what isn't. For example, moving a file or folder from /mnt/disk1/share1 to /mnt/user/share1 is going to corrupt whatever you moved, because the paths are actually pointing to the same thing. Moving from /mnt/disk1/share1 to /mnt/disk3/share1 is perfectly fine. Moving from /mnt/user/share1 to /mnt/user/share2 is perfectly fine as well, but may end up with the files on a disk you didn't expect, since Linux first tries a rename, which succeeds, so the files stay on whatever disk they were on to begin with. If you move the files from user share to user share over SMB, they will obey user share allocation rules and go to the disk specified by the destination share. It's a little complex when you first dive in, but if you obey the rule to not mix /mnt/user with array disks, you'll be fine.
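     If you script moves on the server, a simple guard along these lines can enforce that rule (a rough Python sketch; the function name and path checks are my own, the rule it enforces is the one above):
     ```python
     import re

     def mixes_user_share_and_disk(src: str, dst: str) -> bool:
         """Return True if one path is under /mnt/user and the other is under /mnt/diskN."""
         def kind(path):
             if path.startswith("/mnt/user/"):
                 return "user"
             if re.match(r"^/mnt/disk\d+/", path):
                 return "disk"
             return "other"
         return {kind(src), kind(dst)} == {"user", "disk"}

     # Unsafe: same underlying files seen through two different views
     print(mixes_user_share_and_disk("/mnt/disk1/share1/file", "/mnt/user/share1/file"))   # True
     # Safe: disk to disk
     print(mixes_user_share_and_disk("/mnt/disk1/share1/file", "/mnt/disk3/share1/file"))  # False
     ```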
  22. Yep. That's what a user share is, a root directory on one of the array disks or cache pools.
  23. Correct, what you did was perfectly fine. What you should not do is copy from array shares to array disks and vice versa. If you want to force content from non-array disks onto a specific array disk, it's perfectly OK to copy from non-array disks to /mnt/diskX/ShareY. When you copy to /mnt/user/ShareY, you are relying on the share settings to pick the destination disk. Share allocation settings can be tricky to fully understand, so be sure to turn on the help in the GUI for that page and read through each option carefully.
  24. The front comes with twelve 5.25" bays ( https://www.newegg.com/p/N82E16811129043 ). What you are seeing is four of these installed in those 5.25" bays: https://www.newegg.com/icy-dock-mb155sp-b/p/N82E16817994155
  25. Yeah, that's what I anticipated based on what you posted; the mess of lines confuses rather than informs. Maybe even explore what a 3D plot would look like, with the current result draped across the 3D landscape. That way you could easily show abnormalities: green where the current plot follows the trend, blending through yellow and then red the farther away from normal you get.
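     Something along these lines might be a starting point (a rough Python/matplotlib sketch; the data, axis names, and trend surface are all placeholders for whatever the real dataset looks like):
     ```python
     import numpy as np
     import matplotlib.pyplot as plt
     from matplotlib import cm
     from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3d projection on older matplotlib)

     # Placeholder grid: e.g. location bins on X, speed bins on Y
     x = np.linspace(0, 10, 60)
     y = np.linspace(0, 10, 60)
     X, Y = np.meshgrid(x, y)

     trend = np.sin(X) * np.cos(Y)                          # stand-in for the historical "landscape"
     current = trend + np.random.normal(0, 0.15, X.shape)   # stand-in for the current result

     # Color by deviation from the trend: green near normal, through yellow to red
     deviation = np.abs(current - trend)
     norm = plt.Normalize(deviation.min(), deviation.max())
     colors = cm.RdYlGn_r(norm(deviation))

     fig = plt.figure()
     ax = fig.add_subplot(111, projection="3d")
     ax.plot_surface(X, Y, current, facecolors=colors, rstride=1, cstride=1, shade=False)
     ax.set_xlabel("location")
     ax.set_ylabel("speed")
     ax.set_zlabel("value")
     plt.show()
     ```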