Everything posted by JonathanM

  1. If you have a Windows machine on the same subnet, open a command prompt and run:
         net time \\<serverIP> /set /yes
     If it returns "Current time at \\<serverIP> is <DATE> <TIME> The command completed successfully." then the NTP service on the server is responding.
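     If you'd rather check from the Linux side, a rough equivalent is an NTP query that doesn't set the local clock (a sketch; ntpdate may need installing on the client, and the IP is an example):

         # Query the server's NTP service without touching the local clock
         ntpdate -q 192.168.1.100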
  2. Consumer grade cases almost never provide adequate cooling in their typical layout for more than one or two drives, because nobody except us storage nuts runs a bunch of drives; normally the biggest cooling load is the graphics card and CPU. Perhaps try disconnecting the case fans and blocking off all the openings except for the single fan in front of the drives and the power supply exhaust, and see how the temperatures act. You can use packing tape as a quick block: just lay strips of tape across all gaps except for the ones directly at the fans. One exit at the PSU, one intake directly on the drives. If the power supply is at the bottom of the case as pictured, possibly open the exhaust fan at the top rear as well. Don't allow any intake air except the single opening that feeds the drives. Close the case and monitor the temperatures.
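     A minimal way to watch drive temperatures from the console while you test (smartmontools ships with Unraid; the device range and the awk field assume typical ATA SMART output):

         # Poll drive temperatures every 60 seconds while testing airflow changes
         while true; do
           for d in /dev/sd[b-e]; do          # example device range, adjust to your drives
             printf '%s: ' "$d"
             smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10}'
           done
           echo ---
           sleep 60
         done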
  3. Make sure that all the air that moves through the case MUST flow over the drives. That means possibly taping over some case holes, building cardboard or plastic ducting, whatever it takes to keep air from passing through the case without flowing past the drives in some way. It's entirely possible that adding the extra fans did more harm than good, if they are just blowing air through the main open space in the case.
  4. Yes. Keep in mind that RAID or Unraid is not a replacement for backup, it can't recover from deleted or corrupted files, only drive failure.
  5. Can you verify that the server and camera are on the same subnet with no firewall rules that would restrict traffic? I just verified again and it "just works" for me.
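     A couple of quick sanity checks from the server console (the camera IP is a placeholder):

         ip -4 addr show           # confirm which subnet the server is on
         ping -c 3 192.168.1.50    # basic reachability test to the camera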
  6. PIA credentials aren't being accepted. I've always just used the same login and password that works on PIA's website, is that what you are using?
  7. Hmm. I was thinking of the minimum free space setting for pools, the one that tries to keep BTRFS pools used in mover operations from being filled to the brim and corrupting, regardless of whether the files come in over SMB or are simply writes to /mnt/user/share that land on the pool.
  8. How about a rule that minimum free space won't go above a ceiling value of some percent of the total size? For example, with a 10% ceiling on a 1TB pool, minimum free space would never be set above 100GB. I have a pool that hosts a single VM taking up 90% of the space, but is also used for appdata. Alternatively, exclude vdisk image files from the calculations entirely.
  9. Parity never replaces the functionality of a real backup; it only allows rebuilding a failed drive and keeps the data available while the drive is being replaced. Parity can't help with file corruption or deletion, so backups are needed regardless of parity. Whether the convenience of high availability is worth an extra drive is up to you.
  10. If you had a parity drive assigned to the parity2 slot (doesn't matter if you only had one parity drive), it will be invalid if the drive order is wrong, and will need to be rebuilt. A parity drive in the parity1 slot doesn't care about data slot order, so it should still be valid. You can put the drives in the slots you think they were assigned to, check the "parity is valid" box, and immediately start a parity check. If you get very few or zero errors, good. If the error count immediately starts climbing, stop, do a new config, keep all slots, but DON'T check "parity is valid", and let parity build fresh.
      Your plex container will probably still work, but you won't be able to interact with it properly until you set it up and save it again. If you didn't change anything in the stock template, it should just work. If you customized it and make the same customizations, it should work ok. Some of the container config is stored in appdata, which should still be on your cache, but the docker container XML is stored on the flash drive; that will need to be recreated by installing the container again, using the same appdata folder.
  11. Nope. As long as you stay cognizant of the fact that a file will appear in two locations, and that the computer isn't smart enough to keep you from overwriting (and subsequently erasing) what appears to be a second copy but isn't, you are fine. In fact, as long as you are careful and don't interact with the file's twin in the same operation, you can mix user and disk shares. *(shh, don't tell anyone)* It's just easier to make a blanket statement to never do it, because most people aren't careful enough, or don't think through what seems to be a perfectly innocent operation that ends up nuking their data. Once you are thoroughly familiar with how user shares work and how disk shares are melded together to make the user shares, it's pretty obvious what is and isn't ok to do.
  12. Physically, drives are almost sealed against water ingress; if they are helium drives, they are sealed. The problem is, if the drives were hot and the deluge of water was cold, the drives could have sucked water into their pressure equalization ports, if so equipped. I think I'd put them in a tightly temperature-regulated environment at around 50°C for several hours if possible, as well as doing the alcohol wipe-down. As long as a drive is only mounted read-only, the array will still be valid.
      If you have a spare PC, my approach would be to first try to boot it with ONLY the Unraid USB, NO OTHER DRIVES ATTACHED, and see if it boots, preferably in GUI mode. If it does, change the array to not autostart (not that it would anyway with no drives) and power off from the GUI. Then attach the first suspect drive, boot Unraid, and see if it shows the drive in the appropriate slot. If it does, you could run a SMART test from there. Try each drive in turn, one at a time, take inventory, and see where you are at. Depending on the results we can formulate a game plan from there. (Actually, the first thing I would do is attempt to copy the config folder from the Unraid boot drive to a safe place.) I think using the Unraid boot USB is one of the safest ways of evaluating each individual drive.
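      A sketch of that config copy, assuming the flash mounts at /boot as it does on a booted Unraid system (the destination path is hypothetical):

          # Preserve the flash drive's config folder before experimenting
          cp -a /boot/config /path/to/safe/place/config-backup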
  13. Then one of your plugins is causing it. Remove half of the plugins and test; if it's OK, add back half of the remaining. If it's still messed up, remove another half of the running plugins. Use logic to determine the culprit, then post in the support thread of the specific plugin that caused the issue.
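      A sketch of the halving step, assuming the stock layout where plugins install at boot from .plg files under /boot/config/plugins (file names without spaces assumed):

          # Park roughly half the plugins aside, then reboot and retest
          mkdir -p /boot/config/plugins-disabled
          count=$(ls /boot/config/plugins/*.plg | wc -l)
          ls /boot/config/plugins/*.plg | head -n $((count / 2)) |
            xargs -I{} mv {} /boot/config/plugins-disabled/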
  14. You could temporarily get it running by removing disk5 and disk6 from the parity array, assigning them to a new pool, and rebalancing to RAID0; when that is done you will have a pool with ~2.4TB of space. Copy the vdisk file to it with sparse always, then change the VM XML to point to the new location.
      Vdisk image files are sparse when created, so they only take up the room on disk that is actually used, but they appear to the VM as the full capacity. If you put more data in the vdisk than is available on the single array disk or pool, the VM will crash. Pools can be created with multiple volumes using BTRFS RAID levels, so you can create a larger contiguous space.
      What I am describing is not easy to do, and if done incorrectly will result in the loss of your VM data. If it's important, I recommend making a backup first, which will require space somewhat larger than 1.2TB, the current real size of the VM data. Copying it requires tools that understand sparse files and copy only the occupied parts of the file. (See the sketch below.)
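      A sketch of the sparse-aware copy (all paths are hypothetical):

          # Apparent size vs blocks actually allocated
          ls -lh /mnt/user/domains/win10/vdisk1.img   # shows the full provisioned size
          du -h  /mnt/user/domains/win10/vdisk1.img   # shows space actually consumed
          # Copy while preserving holes
          cp --sparse=always /mnt/user/domains/win10/vdisk1.img /mnt/newpool/domains/win10/
          # rsync alternative; -S copies sparse files efficiently
          rsync -aS /mnt/user/domains/win10/vdisk1.img /mnt/newpool/domains/win10/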
  15. The servers it runs? Yes, I have a server up that my local game group uses. Only the standard MC port is forwarded. The management interface? IDK, never tried. All my management is done either locally or over VPN.
  16. Please add selection boxes to each container, along with the appropriate buttons at the bottom, i.e. "start selected", "stop selected", "update selected", "pause selected", "resume selected", "move to top", "move to bottom". The additional buttons could be below their "all" counterparts, and optionally only appear when any selection checkbox is filled.
  17. Please add an action button to the bottom of the page to fire off the autostart routine. That way, after stopping all containers for maintenance that doesn't require stopping the array, we can start all our normally autostarted containers with their associated delays without restarting the whole array.
  18. Probably the result of formatting with different versions of XFS. Newer format versions reserve more space for added filesystem features. There were several threads a couple of years ago complaining about all the extra "wasted" space when drives were formatted with the new version of Unraid that included an updated XFS.
  19. Probably just file system overhead. You will need to rebuild parity after removing them, regardless of whether there are any files showing.
  20. If you are decent at scripting you could watch a share location for a specific file to be created; if that file exists, run docker restart on the container and delete the file. That would only require access to the share. There are plenty of ways to do it if you are handy with scripting or other programming techniques. A sketch of the idea follows.
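      A minimal sketch of that watcher (the trigger path and container name are placeholders):

          #!/bin/bash
          # Restart a container whenever a trigger file appears on the share
          TRIGGER=/mnt/user/share/restart-plex.trigger   # hypothetical trigger file
          CONTAINER=plex                                 # hypothetical container name
          while true; do
            if [ -f "$TRIGGER" ]; then
              docker restart "$CONTAINER"
              rm -f "$TRIGGER"
            fi
            sleep 30
          done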
  21. Yes. I assumed from the way you worded the statement that I quoted that you didn't have it set up. If there is nothing logged, that typically means it's hardware related, where the crash happens before anything can be logged. Since your combo has a history of hangs, the first thing I would do is swap around hardware if possible. Intermittent faults like this can be a bear to track down, because you change something and the error can appear to go away, until it happens again randomly. Maybe run with half the RAM for a period of time, then switch to the other pair?
  22. Logs are in RAM by default, to keep from wearing out the flash drive unnecessarily. You must set up the syslog server and specify a destination to keep the logs; be sure to disable it again after you solve the issue if you log to the flash drive.
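      Once a destination is configured, a quick way to confirm log lines are actually being captured (logger is a standard utility):

          logger "syslog test $(date)"   # should appear at the configured destination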
  23. Before you reboot, collect diagnostics and attach to your next post in this thread.
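      If the webGUI is sluggish or unresponsive, diagnostics can also be collected from a terminal; on current releases the command below writes a zip under /boot/logs:

          diagnostics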