J.Nerdy

Everything posted by J.Nerdy

  1. Got it, thanks! Only my second WD (of many) to fail prematurely. Bummer. At least I have the original parity disk... and a replacement is already in the post.
  2. Heard. VMs shut down. The only Dockers active are: PlexPy | Plex | CrashPlan | Netdata | cAdvisor. The new disk is already on the way and will be waiting for me. Heard, thank you. What do you mean by monitoring list?
  3. Heard. Final question: I am traveling through Friday - should I take the array offline? While I'm away it is only going to be serving media. I can continue preclearing with the array offline.
  4. @johnnie.black SMART extended attached - 8 pending sectors. On a drive so early in its life cycle, should I RMA? The error count has not increased. WDC_WD120EFAX-68UNTN0_2AGLW2YY-20190806-0601.txt.zip
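     For anyone following along, roughly the commands I mean - /dev/sdX is just a placeholder for the actual device, not taken from the attached report:
        smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
        smartctl -t long /dev/sdX    # kick off the extended (long) self-test
        smartctl -a /dev/sdX         # full report afterwards, including the self-test log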
  5. I would reboot, but I don't want to interrupt the extended test. Thanks again!
  6. Cheers. So once all four disks have been rebuilt successfully, I can repurpose the old disks. Fix Common Problems is obviously throwing a fail for the array due to the disk errors... can I ignore it for the time being (since I want to monitor whether new issues arise rather than have a panic attack every 12 hrs from the same failed scan)?
  7. I will wait for the results of the extended test then. If healthy, should I recalculate parity prior to replacing disks? My only concern is that I will be replacing 4 disks (each 4TB with a 10TB) - and rebuilding data after each disk is replaced while relying on a shaky parity disk seems like I am courting disaster.
  8. Honestly, it was configured that way not knowing any better, and out of paranoia. Understanding a little better now how parity operates, and how much it thrashes the disks, I changed it to monthly. Currently running the extended test. No further errors have been logged. Will post once it is completed (on a 12TB it is estimating nearly 20 hrs to complete). Would these errors lead to corruption if I were to replace a data disk, emulate its contents and rebuild? Also, thanks Johnnie for clearing that up (I confused UDMA with UNC). Cheers. Edit: I also installed Dynamix File Integrity (and am looking at Squid's Checksum plugin) to start building disk | file hashes.
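     The plugin stores its hashes in extended attributes, I believe - the sketch below is just the generic manifest idea I have in mind, with placeholder paths, not how the plugin actually works:
        # build a checksum manifest for a share, then verify it later
        cd /mnt/user/Media && find . -type f -print0 | xargs -0 sha256sum > /boot/manifests/media.sha256
        cd /mnt/user/Media && sha256sum --quiet -c /boot/manifests/media.sha256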
  9. Will run the extended test. Parity check completed without any errors. Could it be a cabling problem? Also, finished preclearing the 1st 10TB data disk for replacement. Would rebuilding data onto this disk, with the parity disk questionable, put me at high risk of corruption? Thanks (will attach results of the extended SMART).
  10. I know that this is no longer supported, but is running this and Dynamix File Integrity concurrently a bad idea? Thanks!
  11. These errors are occurring during the parity check:
  12. In the process of upgrading storage capacity on the array, I swapped a 4TB Red for a new 12TB Red as the parity disk (the 4 data disks are going to be 10TB). Before rebuilding parity, I ran the disk through 2 passes of preclear with no issues. Perhaps anecdotal: the read errors are occurring while a parity check is running (I do a weekly check) and in the middle of preclearing 2 x 10TB Reds. Do I need to replace it? I have the old 4TB parity disk, which I will rebuild with while waiting for a replacement drive. Diagnostics attached: nerdyraid-diagnostics-20190804-1654.zip
  13. All of the above. (I have used Clover to do that - are there further steps that I need to undertake to make sure?)
  14. Successfully passed through an RX 570 to a VM, running 4K glass-smooth. However, I cannot for the life of me get HDMI audio passed through. macOS only recognizes the Soundflower emulated devices, which won't output to the monitor speakers. Been trying for a month, so any advice is greatly appreciated. Thanks!
  15. I am banging my head against the wall: I am running 10.14.6 with an RX 570 passed through and handling the graphics. Everything works smooth as glass, but for the life of me I cannot get HDMI audio passed through. The only sound output devices recognized are Soundflower (64 ch and 2 ch) and Apple's emulated devices. Does anyone have any solutions for passing through HDMI audio? It's cuckoo, because the card is handling graphics (on a 4K display) with zero lag. Thanks!
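      For reference, this is roughly how the card's HDMI audio function can be identified - the bus addresses in the comment are just an example, not my actual layout:
        lspci -nn | grep -iE 'vga|audio'
        # the RX 570 shows up as two functions on the same slot, e.g. 03:00.0 (VGA) and 03:00.1 (HDMI audio),
        # and both need to be passed through to the VM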
  16. Just to add... (and to my very basic understanding) parity is an "equation" that needs to be "solved for". If you did not include the full capacity of the parity drive in the calculation, you would not be able to "solve for" the missing data (i.e. rebuild it) in the case of a failure or increased capacity. Toy example below.
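      A rough shell illustration of the XOR idea, using made-up single-byte "disks" (single parity is, to my understanding, a bitwise XOR across the data disks):
        printf 'parity  = 0x%02X\n' $(( 0xA5 ^ 0x3C ^ 0xFF ))   # -> 0x66
        # lose the middle disk: XOR the parity byte with the survivors to rebuild it
        printf 'rebuilt = 0x%02X\n' $(( 0x66 ^ 0xA5 ^ 0xFF ))   # -> 0x3C, the lost byte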
  17. Have to agree: if upgrading the parity disk, go with the largest capacity that your budget will allow. (To stretch the budget, consider shucking an external disk.)
  18. Thank you @wgstarks. I am using the script / plugin to stress-test the drives prior to adding them to the array. While it may not guarantee drive robustness... it does give added peace of mind. Cheers.
  19. FWIW: the best GB/$ value right now in large-format storage is 8TB Reds... (or a shucked 8TB). I just went through this during the WD / Prime Day sales: I settled on 1 x 12TB (for parity) and 4 x 10TB (for data). This is replacing 5 x 4TB in my array. The jump up in price for the 12TBs was not worth propagating over 5 drives... and the 8TBs, while great deals, would have me doing this dance again in a year. For me the best mix was the 12TB at $279 and 4 x 10TB at $215 each. Though 5 x 8TB at $160 each would have been a nice savings.
  20. I am currently running 6.7.2 and ran into a problem; I will use scenarios to try to be succinct:
      Scenario 1: I was preclearing an 8TB and a 4TB via UD in a USB 3.1 dual-slot external dock. Preclear failed on the post-read on the 4TB. Research led me to shut down the server and run memtest for 48 hrs. No issues. Further digging found that the Preclear plugin has failed on post-read when running concurrent disks in a few scenarios. Ran Preclear on each disk individually, 2 cycles, and both passed.
      Scenario 2: I need to preclear 4 x 10TB to swap data disks and increase array capacity. Based on the Scenario 1 results, I opted to use the patched (binhex) fast_preclear.sh script. Running 2 drives concurrently in the same dock, one drive became unresponsive 30 hrs into zeroing. The other drive continued to move along until I accidentally terminated the screen session (not detached, killed). Which has led to...
      Scenario 3: I am currently running preclear concurrently on the 2 x 10TBs, except one is via the plugin and the gfjardim script, and the other is a screen session and fast_preclear.sh. Both are progressing... but the plugin is pre-reading at 186 MB/s while the script is moving at 68 MB/s. In both prior scenarios the speed of the concurrent drives was identical, at around 180 MB/s.
      Questions:
      1. Why would one drive drop off in Scenario 2 (flaky dock?). The server has 64GB of RAM, so I don't think RAM availability is the issue. (I don't have slots available internally, otherwise I would run off the native SATA bus instead of the dock.)
      2. MORE IMPORTANTLY: should the speed discrepancy between plugin and script be cause for concern?
      Thanks for any help!
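      What I should have done for the screen run - a rough sketch only; the session name, script path and arguments are placeholders, not the script's actual CLI:
        screen -dmS preclear_10tb bash /boot/scripts/fast_preclear.sh /dev/sdX   # start in a detached session
        screen -ls                    # list running sessions
        screen -r preclear_10tb       # reattach; detach again with Ctrl-A D instead of killing it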
  21. Has anyone had any issues using the plugin on multiple drives concurrently? I had a 4TB fail as soon as the post-read began. I shut down, ran memtest for 24 hrs and eliminated RAM as an issue. Some googling turned up anecdotal evidence on a Reddit thread that concurrency was an issue. I happened to be running preclear on an 8TB Red at the same time. Spun the array back up, ran an extended SMART (no suggestion of disk health issues) and am now running the plugin on the 4TB again. It is currently 65% through the post-read. Running 6.7.2 and using an external USB 3.0 dock to run preclear (don't have any free SATA ports or PCI lanes for a card atm). I only ask because I am upgrading my array from 5 x 4TB to 1 x 12TB + 4 x 10TB... and that is a lot of bits to preclear. I would love to be able to run concurrently. Does the script have any issues with multiple drives if I were to just use User Scripts?
  22. Current array:
      5 x 4TB disks (1 parity | 4 data)
      1 NVMe + 1 SATA SSD cache pool
      1 NVMe unassigned device | boot disk for winVM
      I am planning to replace all 5 platter disks in the array.
      New array:
      1 x 12TB parity disk
      4 x 10TB data disks
      1 NVMe + 1 SATA SSD cache pool
      1 NVMe unassigned device | boot disk for winVM
      However, I do not have any available PCI lanes (for a riser card) nor any available ports to run preclear on a disk while the array is up. The process I was planning:
      1. stop the array
      2. uncheck "start array on boot"
      3. power down
      4. remove the 4TB parity disk
      5. add a 10TB disk
      6. boot Unraid
      7. preclear the 10TB disk
      8. repeat 1-7 for each of the 10TB data disks
      9. repeat steps 1-7 for the 12TB parity disk
      10. leave the 12TB parity disk in
      11. boot Unraid
      12. assign the 12TB as parity
      13. rebuild parity on the array
      14. once parity is confirmed, replace each data disk (one at a time), rebuilding the array progressively
      Obviously pre-clearing 52TB of disks will be brutally time consuming, leaving the array offline for a week or two... but I do not see any other way while completely protecting my data and installing fault-tested new spinners. From what I understand, is this the best practice? Once parity is rebuilt on the 12TB parity disk... can I use the array while rebuilding each data disk? Thanks for any help! Cheers.
      EDIT: would it be safe to use a USB 3.0 dock and a desk fan?
  23. @dlandon I am getting the following error in UD:
      Warning: Illegal string offset 'DEVTYPE' in /usr/local/emhttp/plugins/unassigned.devices/include/lib.php on line 1347
      This is for a passed-through NVMe drive. The device is using a modified Clover to boot a Win10 VM. I had this same configuration previously with no issues (UD listed the device and partition tables without issue when the VM was powered down). I am way out of my depth when it comes to functions and setting arrays in PHP. What I am wondering is, since this is the get-partition-info function... could it have something to do with the partition map created in Windows? Here is the disk log info:
      Jul 16 12:29:02 nerdyRAID kernel: nvme1n1: p1 p2 p3 p4
      Jul 16 12:29:15 nerdyRAID emhttpd: Samsung_SSD_970_PRO_512GB_S463NF0K713728K (nvme1n1) 512 1000215216
      Jul 16 13:11:56 nerdyRAID unassigned.devices: Adding disk '/dev/nvme1n1p3'...
      Jul 16 13:11:56 nerdyRAID unassigned.devices: Mount drive command: /sbin/mount -t ntfs -o auto,async,noatime,nodiratime,nodev,nosuid,umask=000 '/dev/nvme1n1p3' '/mnt/disks/GAMES'
      Jul 16 13:11:56 nerdyRAID ntfs-3g[1459]: Mounted /dev/nvme1n1p3 (Read-Write, label "GAMES", NTFS 3.1)
      Jul 16 13:11:56 nerdyRAID ntfs-3g[1459]: Mount options: rw,nodiratime,nodev,nosuid,allow_other,nonempty,noatime,default_permissions,fsname=/dev/nvme1n1p3,blkdev,blksize=4096
      Jul 16 13:11:56 nerdyRAID unassigned.devices: Successfully mounted '/dev/nvme1n1p3' on '/mnt/disks/GAMES'.
      Jul 16 13:11:56 nerdyRAID unassigned.devices: Adding disk '/dev/nvme1n1p4'...
      Jul 16 13:11:56 nerdyRAID unassigned.devices: No filesystem detected on '/dev/nvme1n1p4'.
      I clearly see 2 of the 4 partitions: the unformatted raw partition (which I want to use as an XFS partition for the array when the machine is not running) and the primary partition created for storage of 'games' in NTFS. I do not see the 'EFI' partition nor the Windows [C:\] drive. Clearly this is some type of user error on my part... but I'm just not sure how or why it got borked when the same steps worked flawlessly in the past. If it would help I could post lines 1347 - 1403 of lib.php and see if something got corrupted? Any suggestions would be greatly appreciated. Thank you!
      EDIT: I did not mention, the VM works flawlessly. It is simply that I cannot pull an accurate partition map in UD. Sorry.
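      For comparison, the partition map could also be pulled directly from the console to check against what UD shows - a rough sketch, with the device and partition names taken from the log above:
        lsblk -o NAME,SIZE,FSTYPE,PARTLABEL /dev/nvme1n1
        blkid /dev/nvme1n1p1 /dev/nvme1n1p2 /dev/nvme1n1p3 /dev/nvme1n1p4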
  24. Come to think of it... I did at one point have it stubbed. However, I then copied my winVM to it and partitioned the remaining space as XFS to share some high-throughput storage with the array. It did not manifest this issue at the time. The issue appeared after nuking the VM and passing the controller through again with a new VM. The performance of the VM is spot on... my only issue is that although UD can see the drive, it cannot see the partitions - so I cannot format the unused space and share it back to the array. I might stub it out, reboot, then reboot again and pass it through, and see if that makes a difference.