Everything posted by JonathanM

  1. My point is that you can ignore CFM and purchase purely by comparing static pressure vs. dB, choosing the highest static pressure at a dB level you can live with. That chosen fan WILL have plenty of CFM in free flow, but it would be foolish to pick a dB number and choose the fan with higher CFM over the one with higher static pressure at that dB number. If the fan can't push air past the restriction, it won't flow its rated CFM. Period. This whole argument changes depending on the primary heat load in the computer case. In a typical PC the CPU and RAM are the primary heat load, and in a gaming PC the graphics card is. In those situations, where the CPU and GPU have their own dedicated local cooling, the job of the fan wall is to keep the ambient air inside the case as cool as possible with as many air changes as possible through mostly wide open vents. CFM rules in that situation. Hard drives don't have dedicated local cooling, and are packed close together, typically with little consideration for mass cooling. We've got to ensure that all incoming cool air is forced around the hard drives. If the case is pulling in air that bypasses the drives, any work we do trying to cool the drives is negated.
  2. CFM isn't the important metric; static pressure is.
  3. No. You lose whatever is on the failed disks.
  4. Choose the Next version in the USB creator tool.
  5. Plug it in to a controller that is passed through to the VM?
  6. Quick question: what happens when the array is cleanly powered off, for instance when your UPS commands a power down, while the VM is asleep?
  7. Read the pinned post at the top of every page in this thread.
  8. Set the spin down time such that you only have 1 or 2 spin events per day. If you don't access a drive for weeks, no point in having it spun up the entire time, but don't spin down every hour just to have it spin back up again 3 hours later.
  9. I recommend copying, not moving.
     1. It's faster. ReiserFS is VERY slow to delete files, especially on very full or very well used volumes, and moving involves writing to both the source and the destination, where copying only involves writing on the destination, so parity is less busy.
     2. After the copy is done, you can verify the results before you format the source to XFS.
     You can use any method to copy that you are comfortable with. I personally use rsync at the command line: one pass for a quick copy, a second pass with checksums to verify that the copy is complete. These are the commands I used.
     To copy: rsync -arv /mnt/disk(source)/ /mnt/disk(destination)
     To verify: rsync -narcv /mnt/disk(source)/ /mnt/disk(destination)
     Where (source) and (destination) are the literal disk numbers, like /mnt/disk12/. Be careful to include the slashes at the end where needed, otherwise you will end up with a root folder named disk12 with all your shares inside it, which can get VERY confusing, since it will automatically show up as a share "disk12" but will actually be on the destination disk, which is another disk number. There are MANY different ways to access the Unraid command line, but for your purposes my first choice would be the actual keyboard and monitor attached to the Unraid tower. Second choice would be a remote SSH session, starting a "screen" session before doing the copy and verify, so that if the session gets disconnected you aren't killing the copy. (A worked example of this copy and verify sequence follows at the end of this list.) Normally unbalance would work, but I suspect ReiserFS is causing your issues. If you had pointed to your current thread where you were discussing your array failure with @johnnie.black, I would never have gone through the trouble of typing this up. You really need to stop trying to move data around on the array, and copy anything important to good drives.
  10. Tools, Diagnostics, download the zip file and attach the intact zip file to your next post in this thread.
  11. The usual way of dealing with a disabled disk would be to rebuild to the replacement disk. I'm not even sure how you convinced Unraid to allow you to add a disk while one was disabled, but beyond that, you are running mostly ReiserFS disks, so you eventually want to migrate everything to XFS anyway. Your new, almost empty disk is ReiserFS, which is unfortunate; I'd have preferred any new disks be XFS. If I were in your shoes, I'd get another new 4TB now and rebuild disk4 on it, so that you are back to being parity protected. Then empty disk12 onto the new disk4, reformat disk12 as XFS, and start the process of copying ReiserFS disks onto XFS disks, formatting each ReiserFS disk to XFS after its contents are safely on an XFS disk. Running for an extended period of time with a disabled disk is very risky, especially with disks that old. How confident are you in the health of the rest of your disks?
  12. Read through this thread. Yes it's long, but it covers the subject quite well.
  13. You don't need any significant extra above it if the intake and exhaust are unrestricted, with separate access to room air. Foam block around the front so it can't suck in any exhaust, make sure the rear doesn't just dead-end in a cavity, and you'll be fine.
  14. They are specific to the container that created each folder inside appdata.
  15. Download the diagnostics zip file. Stop array and then reboot Unraid. Start array, download the diagnostics again. Attach both diagnostics files to your next post in this thread.
  16. Sure, it will just slow down any operations on the parity protected drives, and extend the time it takes for the parity operation to finish.
  17. Did you copy the contents of the downloaded zip to the root of the flash drive? There should be a syslinux folder on the flash.
  18. USB is bad about dropping and reestablishing connections on a whim, which doesn't bother some things, but Unraid relies on constant, reliable communication with parity protected array devices, and if it can't write to a disk on demand, it disables that disk. If you don't care about the parity protection part of Unraid, USB disks may work ok. TL;DR: Don't run an Unraid array with USB disks in the parity protected portion.
  19. Click on the disk in question on the Main tab and go through the check filesystem status section. Post the diagnostics zip file after you have done the check.
  20. Try using the latest beta by selecting the Next branch in the installation tool.
  21. I believe you're mixing up "websites" and internal sites. https://iknowwhatyoudownload.com/en/peer/
  22. Migration plan sounds reasonable. I still don't see a local backup plan being implemented, though. Unraid (or RAID) is not a backup. It only protects from drive failure. Accidental deletion, file system corruption, a ransomware attack, etc. all require a second copy of your files somewhere.
  23. The worst part is not that they don't work at all, but that they work sometimes, yet can kick a drive out for no apparent reason at any time. I strongly advise not using any Marvell ports.
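
Worked example for item 9: a minimal sketch of the copy and verify sequence run over SSH inside a screen session. The disk numbers (disk12 as the ReiserFS source, disk5 as the XFS destination) and the session name are illustrative assumptions only; substitute your own.

    # start a named screen session so a dropped SSH connection doesn't kill the copy
    # (session name "disk-copy" is just an example)
    screen -S disk-copy
    # first pass: quick copy; note the trailing slash on the source only
    rsync -arv /mnt/disk12/ /mnt/disk5
    # second pass: dry run with checksums; if it reports nothing left to transfer,
    # the copy is complete and the source can be reformatted to XFS
    rsync -narcv /mnt/disk12/ /mnt/disk5

If the SSH session does get disconnected, reconnect and run screen -r disk-copy to reattach and keep watching the transfer.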