Everything posted by JonathanM

  1. @DaFr0n, all folders in the root of an array disk or a pool are automatically user shares. Since you are telling ZoneMinder to create its folders in the root of the pool, they are showing up as user shares. Create a folder named appdata on the ZoneMinder pool device and map the container data location to /mnt/zoneminderpool/appdata/ZoneminderData
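To make the folder-level fix above concrete, here's a minimal sketch of the host-side layout. It runs against a scratch directory standing in for /mnt/zoneminderpool so it's safe to try anywhere; the container-side path /data is an assumption, not taken from the post.

```shell
# Scratch root standing in for /mnt/zoneminderpool (assumption for the demo)
POOL="${POOL:-/tmp/zoneminderpool}"

# One level of nesting keeps ZoneMinder's folders out of the pool root,
# so only "appdata" shows up as a user share
mkdir -p "$POOL/appdata/ZoneminderData"

# The container's data volume would then be mapped roughly like:
#   host:      /mnt/zoneminderpool/appdata/ZoneminderData
#   container: /data   (container-side path is an assumption)
ls "$POOL/appdata"
```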
  2. QC is too expensive, cheaper to just ship everything and let <deity> sort it out. You want QC? Pay a local vendor to thoroughly test the part before you buy it. Be prepared to double or more the cost on most items though, since qualified technician time is not cheap. Once upon a time profit margins in some tech fields were sufficient to cover real QC, but not any more.
  3. What do you mean by that? The license key file is restricted to the physical USB stick it was issued to. Please explain more fully what you did, and what is happening.
  4. You got it. The only snag is that you need not just a port passthrough to the more capable router; you need the external IP to be assigned to the WAN port of the router. Typically that's referred to as bridge mode, so see if you can configure that. Some ISPs require that sort of thing to be configured on their end, not something you can do yourself.
  5. Pretty sure that is referring to the Plex forums, not here. However, if you are using one of the Plex containers from binhex or LSIO, there are support threads here for those. Click on the container's icon on the dashboard, and there will be a support link that goes to the correct location to post.
  6. I'd run a long SMART test again, followed by another preclear cycle. Is there any possible way to connect the drive directly instead of over USB?
  7. What were / are the path mappings for the container?
  8. Why not just build parity on the 2 new 14TB drives with all your current data drives in place, then rebuild data1 and data2 onto the other 14TB drives, and rebuild data3 and data4 onto the old 8TB parity drives? All that would remain is copying the content of the remaining three 5TB drives into the free space of the 14TB drives, then setting a new config and rebuilding parity in the final layout. Seems like much less work than what you currently have laid out, and all the removed drives would still have copies of the data for backup instead of wiping it out.
  9. That's an understatement. I remember waiting over 24 hours for a drive to mount after an unclean shutdown back in 4.7 days. Didn't lose anything though.
  10. This, except copy vs. move. ReiserFS performance is horrible over 2TB, especially when deleting things, and worse when the file system is mostly full. It's never going to get better; the project has been largely abandoned, and the author of ReiserFS is incarcerated in California for murdering his wife. It's a pity, because for 1TB and smaller it was an extremely robust filesystem that could recover from almost any corruption. It just hasn't aged well.
  11. https://docs.netgate.com/pfsense/en/latest/nat/reflection.html
  12. Depends on your hardware. XFS seems to tolerate crashes and other hardware issues better. BTRFS itself is plenty stable, just "brittle" as I see it. Just my opinion though. BTRFS is arguably better in that it returns an error if it is asked for a file it detects as corrupt, so you know to restore it from your separate backup location. XFS just gives you the corrupt data as best it can, without warning you.
  13. Cache: No disables the mover. Cache: Yes is what you want; turn on the help beside the setting for a more thorough explanation.
  14. Post a screenshot of your Settings, Scheduler, Parity Check section.
  15. Speed sounds about right; the script is not optimized at all. You can either write all zeroes to the data drive or rebuild parity. Either way works, but rebuilding parity is much quicker.
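For the write-zeroes route, the idea is just a dd of /dev/zero across the whole device. Here's a safe-to-run sketch against a scratch image file instead of a real /dev/sdX (the image path and size are made up for the demo; on an actual drive this is destructive):

```shell
# Stand-in for a real drive; on actual hardware the target would be /dev/sdX
IMG=/tmp/fake_disk.img
truncate -s 1M "$IMG"

# Write zeroes end to end, same idea as over the whole device
dd if=/dev/zero of="$IMG" bs=64K conv=notrunc status=none

# Verify: count non-zero bytes; 0 means the "drive" is fully zeroed
tr -d '\0' < "$IMG" | wc -c
```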
  16. Some comments regarding your setup.
      Labels. The physical location of the drives isn't very important; the biggest issue here is heat, so I'd arrange the drives in a way that keeps the temps most consistent over time, not necessarily the lowest possible. Instead of focusing on the disk numbers, put the last few digits of each drive's serial number somewhere you can see without disturbing the drive. That way, when you need to replace a drive, you know exactly which drive is involved.
      Using old drives in Unraid. Keep in mind that the parity recovery mechanism in Unraid requires not only the parity drives, but ALL the other data drives. So you are trusting all your data to the LEAST reliable drive in the array. Do not use questionable drives in the parity array. It sucks to lose 12TB of data when one of your brand new drives decides to die unexpectedly and one of your old 4TB drives also dies while trying to rebuild the first failed drive. Don't use any drives in your parity array that you don't trust completely. As a corollary, only use as many drives as you need to hold your data. Empty drives in the array must still be read end to end, completely accurately, bit for bit, to rebuild any failed drive.
  17. Which means Windows is using the username and password from the Windows login as the credentials.
  18. A parity check reads all the data drives, does the parity calculation, then reads the parity drives and compares. If it's correct, it moves on to the next sector. If it's wrong, the parity error count is incremented, and depending on the write corrections setting, it either leaves the error as is or writes the calculated value back to the parity disks.
      Normally you don't want writes to happen unless you know WHY it's wrong to begin with. For example, after a forced shutdown where writes to the data drives were occurring, parity errors are somewhat expected, because the data writes are committed first and parity may not have had a chance to complete. Under typical operation you should NEVER have parity errors, so finding out why (bad cable, bad RAM, etc.) and fixing the root cause before committing the writes is the prudent thing to do.
      So: run normal routine parity checks NON-correcting. If parity errors are found, figure out a plausible explanation and correct the issue, including any file system corruption, then do a parity check with write corrections, typically followed by a non-correcting check to be sure whatever was wrong is now fixed. Zero parity errors is the only acceptable outcome; any errors mean a failed drive would be reconstructed with the wrong bits.
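The check/recover relationship described above can be sketched with XOR on a few toy byte values (single parity, as in Unraid's first parity drive; the specific values are arbitrary):

```shell
# Three "data drives", one byte each (values are arbitrary)
d1=0x35; d2=0x5f; d3=0xa2

# Parity is the XOR of all data drives
parity=$(( d1 ^ d2 ^ d3 ))

# Non-correcting check: recompute and compare against stored parity
[ $(( d1 ^ d2 ^ d3 )) -eq "$parity" ] && echo "parity OK"

# Why zero errors matters: a failed drive (say d2) is rebuilt from
# parity XOR the surviving drives, so wrong parity means wrong bits
printf 'recovered d2 = 0x%02x\n' $(( parity ^ d1 ^ d3 ))
```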
  19. Parity doesn't operate on files, it operates on whole drives. The entire drive capacity must be written.
  20. Reading back through this, I'm starting to think something else is going on. What happens if you remove the license file, will it let you get a trial key? @SpencerJ may be able to troubleshoot more effectively.