Everything posted by JonathanM

  1. Parity1 is an equation that sums the bits across all drives and sets the parity drive so the answer of the equation is always even (0). So, if parity is found to be incorrect, the error could be on any of the drives; there is no way to narrow it down to any particular drive. If one of the drives fails, then it's obvious which drive to recreate, and that is done seamlessly, but if all of the drives are operating normally there is no way to know which bit of the parity sum is wrong.
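The idea above can be sketched in a few lines of Python. This is a simplified model with hypothetical 4-bit "drives" (real parity runs bit-for-bit across whole disks), but it shows both halves of the argument: a known-failed drive is recoverable, while a bare parity mismatch is not attributable.

```python
from functools import reduce

# Hypothetical 4-bit "drives"; parity is the XOR of all data bits,
# which forces every bit-column sum to be even.
drives = [0b1011, 0b0110, 0b1100]
parity = reduce(lambda a, b: a ^ b, drives)

# If one KNOWN drive fails, its contents are the XOR of the
# survivors plus parity -- that's why a failed drive rebuilds cleanly.
rebuilt = drives[0] ^ drives[2] ^ parity
assert rebuilt == drives[1]

# But a parity mismatch alone can't say WHICH drive is wrong:
# flipping the same bit on any one drive produces the same mismatch.
```

The key asymmetry: rebuild works because you know *which* term of the equation is missing; a sync error only tells you the sum is odd, not which term made it so.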
  2. It will need to be cleared (several hours) but otherwise, yes, add it to the array and format it.
  3. Or, you have exposed it to the internet without a firewall and somebody is shutting it down.
  4. To determine whether or not the download has been altered, there are MD5 checksums alongside each download. As long as the checksum matches, you are safe.
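Checking a published MD5 sum can be done with a short Python helper (a sketch; the function name and chunk size are my own choices, and you'd compare the result against the checksum posted next to the download):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading in 1 MiB chunks
    so large downloads don't have to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: compare against the published checksum string.
# ok = md5_of("unRAIDServer.zip") == "<published md5 here>"
```

On Linux, `md5sum <file>` does the same job from the shell.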
  5. Are you sure that RAM is on the approved list for that motherboard? The quick google hit for that board lists UDIMM, you are speccing RDIMM. Also, does the board come with the mini-SAS, or is that an addon? I didn't dig hard enough to tell the difference.
  6. No, only the trial (time-limited) license requires an internet connection to start the array. The purchased license validates against the USB GUID, so as long as your USB stick is connected properly, there's no need for internet for basic NAS functionality. The license is only validated at array start, so theoretically you could start the array then pull the internet to test.
  7. Are you using a Marvell chipset controller? You didn't post the diagnostics zip file, so we don't have much to go on.
  8. Pretty much the normal way of doing it 20 years ago. Now we have fancy 3D printers and can print fan shrouds.
  9. If you can't get this container secured to your satisfaction, you could do as I do and not run it unless you are actually using it. Not the greatest answer, but it certainly limits the exposure. Another "security through obscurity" trick is to have an inconsequential container set to use the same port as this one, or even a second copy of this container, duplicated EXCEPT for the permissions of the mount point. Set the no-permission container to auto start; then, when you need to edit files, you shut down that container and start up the dangerous one, and when you're done you restart the other. That way the fully privileged container can't be started unless the auto-run container is stopped. The starting and stopping could be scripted and triggered by a shortcut on your daily driver. Click an icon, do your file maintenance, click another icon, server secured (ish).
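The swap described above could be scripted along these lines (a sketch; the container names are hypothetical placeholders, and it assumes both containers claim the same host port so Docker refuses to run them simultaneously):

```python
import subprocess

# Hypothetical container names -- substitute your own.
GUARD = "filebrowser-noperm"   # auto-start copy, no write permissions
PRIVILEGED = "filebrowser-rw"  # same port, full mount permissions

def swap(stop_name, start_name):
    """Stop one container, then start the other. Because both bind
    the same port, they can never be running at the same time."""
    subprocess.run(["docker", "stop", stop_name], check=True)
    subprocess.run(["docker", "start", start_name], check=True)

# Bind each call to a desktop shortcut:
# swap(GUARD, PRIVILEGED)   # begin file maintenance
# swap(PRIVILEGED, GUARD)   # re-secure the server
```

Two shortcuts on your daily driver, one per direction, and the privileged container never idles exposed.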
  10. 6.8 includes wireguard, which is a better option for most people.
  11. You could assign a second USB stick as the Unraid disk1.
  12. Unfortunately that won't work, and it's even worse than you would imagine, because EVERY write to a data drive generates a write on the parity disk(s). So, if sector 1 on disk1 gets a write, and sector 4 on disk2 gets a write, the parity drive's head has to seek back and forth between sectors 1 and 4 for each chunk written, so the write time for both transfers is much worse than a single transfer done one disk at a time. Each disk that you add as a target slows down all the transfers.
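The reason every data write hits parity is the read-modify-write update: the new parity at a sector is the old parity XOR old data XOR new data. A toy model (dicts standing in for sectors on disk; everything here is illustrative, not Unraid's actual code):

```python
# Toy sector maps: {sector_number: 4-bit contents}
parity = {1: 0b1010, 4: 0b0110}
disk1  = {1: 0b0011}
disk2  = {4: 0b0101}

def write(disk, sector, new, parity):
    """Read-modify-write: parity must be read and rewritten at the
    SAME sector as every data write."""
    parity[sector] ^= disk[sector] ^ new   # old_parity ^ old ^ new
    disk[sector] = new

# Interleaved writes to two disks bounce the parity head between
# sectors 1 and 4 on every chunk -- the seek thrash described above.
write(disk1, 1, 0b1111, parity)  # parity head at sector 1
write(disk2, 4, 0b0000, parity)  # ...seeks to sector 4
write(disk1, 1, 0b0001, parity)  # ...and back to sector 1
```

One disk at a time, the parity head tracks the data head sequentially; two targets at once and it ping-pongs, slowing both transfers below a single one.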
  13. I recommend going to YouTube and watching Spaceinvader One's videos. He has a BUNCH of good videos covering pretty much every question you asked.
  14. @Squid, @trurl, at first I was convinced it was a container path mismatch problem, after digging a little I'm not so sure any more. It seems like an issue with LL, not linked to the downloading container. The path not found reference is apparently when trying to do a send operation internal to LL on a file already properly referenced inside LL. It does seem odd to have /downloads pointing where it is, and there may still be an issue there, but I'm not sure it's what's causing this error.
  15. I totally agree with the concept of using shucked drives. All of my 8TB drives are shucked mybooks. NAS / RAID drives are designed with drive cooperation in mind, things like timeout and vibration tuning. Since Unraid can spin down unused drives, many of the NAS specific features aren't utilized or needed. As long as you fully test the drive before putting it into service, by whatever means gives you confidence in the drive, warranty and MTBF numbers are pretty meaningless for the quantity of drives 99% of Unraid users interact with. Both warranty and MTBF are statistical constructs that have almost nothing to do with actual single-drive reliability.
  16. The / short / glib / smart alec / true / answer for me is not using windows as my daily driver machines, and not forwarding any ports besides what some containers need. Honestly, there may be some windows viruses hanging out in a file saved somewhere on one or more of my servers, but with no clients to infect, I don't really care.
  17. 1. sd? assignments are not to be used for slot identification. They can and will change; you must use the drive serial number to assign the drives. 2. At the start of the procedure you linked, it explicitly says that before you start the removal you must have copied all the data from that drive slot elsewhere. Did you do that? 3. Without command line directives, Unraid assumes you want to build parity based on the remaining drives that you assigned; the data previously on the removed drive is gone. 4. If you don't have a backup of the missing drive and need to rebuild it, the easiest way is to rebuild it to a new drive. Now that you've set a new config without it, things get very complicated. I'd wait until @johnnie.black can chime in; I can't remember if it's possible to use the set invalid slot command without having a replacement drive to rebuild to. Any writes to ANY of the data or parity drives will corrupt the emulated missing drive. Hopefully the damage already done by mounting one or more of them is minimal.
  18. I don't think 40MB is big enough to install Ubuntu server. Try specifying 40G as the disk size.
  19. I second this. Squid does a good job maintaining the backup app to remain compatible and keep up with any dockerman issues, so why not let him do the hard work while you just rsync the resulting archive to your backup of choice. That way you can take advantage of his restore functionality if needed, vs. figuring out what needs to be extracted from your archives. Just point your script at the .tar.gz file his program creates and be happy.
  20. Well then. My install must be totally borked, it looks nothing like that. Sorry, I can't really help much if I can't recreate the issue.
  21. I must be using a totally outdated version of LL. Can you post a screenshot? I don't have an eBooks tab.
  22. It's been a while since I played with LL, so I opened up an instance and looked around. I can't seem to find that option. Is it only available when working with a specific book?
  23. CA Backup / Restore Appdata plugin provides a nice scheduled and configurable interface, no need to reinvent the wheel. However, doing it your way with rsync will probably provide acceptable results, as long as the containers don't mind backups being taken while running. Many won't get a clean backup without being stopped first, so you would need to script a docker stop <container> → rsync → docker start <container> sequence.
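That stop → rsync → start sequence could look something like this (a sketch; the container names and paths are hypothetical placeholders for your setup):

```python
import subprocess

# Hypothetical names and paths -- adjust for your setup.
CONTAINERS = ["plex", "nextcloud"]
SRC = "/mnt/user/appdata/"
DEST = "/mnt/backup/appdata/"

def backup():
    """Stop each container, rsync appdata, then restart them."""
    for name in CONTAINERS:
        subprocess.run(["docker", "stop", name], check=True)
    try:
        # -a preserves permissions/ownership; --delete mirrors removals.
        subprocess.run(["rsync", "-a", "--delete", SRC, DEST], check=True)
    finally:
        # Restart the containers even if the rsync step fails.
        for name in CONTAINERS:
            subprocess.run(["docker", "start", name], check=True)
```

The `try/finally` matters: you don't want a failed copy to leave your services down.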