128bytes

Members
  • Posts: 20
  • Joined
  • Last visited

Community Answers

  1. It doesn't appear so; I get quite a nice red warning: "All existing data on this device will be OVERWRITTEN when array is Started". bridge-diagnostics-20231023-1030.zip
  2. Thank you, they were removed. However, now the NVMes are listed as Cache 3 and Cache 4. How can I move them up to 0 and 1?
  3. I've upgraded my system from 2 SATA SSDs to 2 NVMe SSDs. When I started up the array I added the 2 NVMes to the same pool as the existing 2 SATAs, i.e. there are now 4 drives in the pool. As far as I can tell they are RAID 1. Clicking the cache drive says:

     Data, RAID1: total=206.00GiB, used=204.18GiB
     System, RAID1: total=32.00MiB, used=48.00KiB
     Metadata, RAID1: total=2.00GiB, used=1.14GiB
     GlobalReserve, single: total=341.97MiB, used=0.00B
     No balance found on '/mnt/cache'
     Current usage ratio: 99.1 % --- No Balance required

     I want to remove the 2 SATAs. I thought I could just stop the array and say the pool has 2 disks, but I can't lower it. So what's the procedure to simply remove the drives and let it 'rebalance' (I think) onto just the 2 NVMes?

     bridge-diagnostics-20231022-1536.zip
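     For reference, a minimal sketch of what shrinking the pool looks like at the btrfs level, assuming it is mounted at /mnt/cache; the /dev/sdX1 and /dev/sdY1 paths are placeholders for the two SATA SSDs, and in Unraid the same shrink is normally done by unassigning the drives in the GUI rather than running these by hand:

     # Hedged sketch only; device paths are placeholders.
     btrfs filesystem show /mnt/cache                      # confirm which devices belong to the pool
     btrfs device remove /dev/sdX1 /dev/sdY1 /mnt/cache    # btrfs migrates the RAID1 data off the removed devices
     btrfs filesystem usage /mnt/cache                     # verify Data/Metadata are still RAID1 on the two NVMes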
  4. Yes, I did maintenance mode just to keep it offline for now. I'm saying, though, that the message that was sent implies the rebuild started.
  5. Also, a bug report (maybe just bad wording?): I started in maintenance mode and did the unassign/assign, and the notifications tell me it's being reconstructed even though I didn't hit Sync yet.

     Event: Unraid Disk 3 message
     Subject: Notice [BRIDGE] - Disk 3, is being reconstructed and is available for normal operation
     Description: WDC_WD100EMAZ-00WJTA0_JEGW40YN (sdd)
     Importance: normal

     Event: Unraid Disk 5 message
     Subject: Notice [BRIDGE] - Disk 5, is being reconstructed and is available for normal operation
     Description: WDC_WD100EMAZ-00WJTA0_JEGW1ADN (sde)
     Importance: normal

     bridge-diagnostics-20231018-1311.zip
  6. 1) This is the xfs_repair output below, just wanted to make sure it looks OK to you. 2) The main bottleneck for parity is the drives, right? I plan on upgrading the CPU in this box (3rd-gen i7-3770K to 12th-gen i5-12600K, 6P+4E); just wanted to know if I should bother doing the upgrade first.

     Phase 1 - find and verify superblock...
             - block cache size set to 1419048 entries
     Phase 2 - using internal log
             - zero log...
     Log inconsistent (didn't find previous header)
     failed to find log head
     zero_log: cannot find log head/tail (xlog_find_tail=5)
             - scan filesystem freespace and inode maps...
     clearing needsrepair flag and regenerating metadata
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 1 - agno = 4 - agno = 0 - agno = 3 - agno = 5 - agno = 6 - agno = 7 - agno = 2 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12
     Phase 5 - rebuild AG headers and trees...
             - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     Maximum metadata LSN (6:207268) is ahead of log (1:2).
     Format log to cycle 9.

     XFS_REPAIR Summary    Wed Oct 18 12:23:00 2023

     Phase      Start           End             Duration
     Phase 1:   10/18 12:18:08  10/18 12:18:08
     Phase 2:   10/18 12:18:08  10/18 12:19:10  1 minute, 2 seconds
     Phase 3:   10/18 12:19:10  10/18 12:19:18  8 seconds
     Phase 4:   10/18 12:19:18  10/18 12:19:18
     Phase 5:   10/18 12:19:18  10/18 12:19:19  1 second
     Phase 6:   10/18 12:19:19  10/18 12:19:26  7 seconds
     Phase 7:   10/18 12:19:26  10/18 12:19:26

     Total run time: 1 minute, 18 seconds
     done

     bridge-diagnostics-20231018-1302.zip
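     For context, a minimal sketch of how a check/repair like the one above is typically run from the console with the array started in maintenance mode; /dev/md1p1 is a placeholder for the disk-1 md device (older Unraid releases use /dev/md1), and the GUI's filesystem check runs essentially the same tool:

     # Hedged sketch: array in maintenance mode; the md device path is an assumption.
     xfs_repair -n /dev/md1p1    # -n: check only, report problems without modifying anything
     xfs_repair -v /dev/md1p1    # actual repair, verbose
     xfs_repair -vL /dev/md1p1   # -L: zero a dirty/corrupt log first (may lose the most recent transactions)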
  7. Completed with no errors! (Guess the old power supply is garbage.) The UI hasn't changed, I still have:
     Disk 1 unmountable
     Disk 3 disabled
     Disk 5 disabled
     And my option next to the rebuilt parity is still just Read-Check.
     bridge-diagnostics-20231018-1147.zip
  8. So Read-Check is about to finish, 0 errors so far. Assuming that completes without errors, what's the next step?
  9. So to confirm, you'd recommend I try a different PSU/cables and then rerun the Read-Check?
  10. I have 5 disks (14TB & 10TB) and dual parity (18TB); i7-3770K, 32GB RAM, on a UPS, etc. Over the weekend I've had 1 disk get corrupted, then 2 disks get disabled. Timeline (roughly):

      10/6/2023: Parity check running.

      10/7/2023: Presumably around here the errors started happening, as I have a bunch of emails from AppdataBackup failing.
      Description: Array has 3 disks with read errors (Importance: warning)
      Disk 1 - WDC_WD140EDGZ-11B1PA0_Y5KYWNNC (sdi) (errors 885744)
      Disk 3 - WDC_WD100EMAZ-00WJTA0_JEGW40YN (sdh) (errors 885744)
      Disk 5 - WDC_WD100EMAZ-00WJTA0_JEGW1ADN (sdg) (errors 2048)

      10/8/2023 ~4AM (diagnostics attached before next reboot):
      * disk1 (WDC_WD140EDGZ-11B1PA0_Y5KYWNNC) is disabled
      * disk5 (WDC_WD100EMAZ-00WJTA0_JEGW1ADN) is disabled
      * disk1 (WDC_WD140EDGZ-11B1PA0_Y5KYWNNC) has read errors
      * disk3 (WDC_WD100EMAZ-00WJTA0_JEGW40YN) has read errors
      * disk5 (WDC_WD100EMAZ-00WJTA0_JEGW1ADN) has read errors
      * /var/log is getting full (currently 100 % used)
      * Unable to write to disk1

      10/8/2023 9:30PM:
      * disk1 (WDC_WD140EDGZ-11B1PA0_Y5KYWNNC) is disabled
      * disk5 (WDC_WD100EMAZ-00WJTA0_JEGW1ADN) is disabled
      Disk 3 - WDC_WD100EMAZ-00WJTA0_JEGW40YN (sde) (errors 108928)

      10/8/2023 10PM: I pull out disk1 and swap in another in case I mess something up.
      Disk 1, is being reconstructed and is available for normal operation
      Data-Rebuild started

      10/9/2023 12AM:
      Parity - ST18000NE000-2YY101_ZR54LMDX (sdc) - active 45 C [OK]
      Parity 2 - ST18000NM000J-2TV103_ZR56TDY7 (sdb) - active 45 C [OK]
      Disk 1 - WDC_WD140EDGZ-11B2DA2_2CHDSP1P (sdk) - active 34 C [DISK INVALID]
      Disk 2 - WDC_WD140EDGZ-11B1PA0_Y6G0EEUC (sdh) - active 37 C [OK]
      Disk 3 - WDC_WD100EMAZ-00WJTA0_JEGW40YN (sde) - active 33 C (disk has read errors) [NOK]
      Disk 4 - ST10000DM0004-2GR11L_ZJV6AXZB (sdd) - active 33 C (disk has read errors) [NOK]
      Disk 5 - WDC_WD100EMAZ-00WJTA0_JEGW1ADN (sdf) - active 30 C [DISK DSBL]
      Cache - Samsung_SSD_860_EVO_250GB_S59WNMFN703104R (sdi) - active 41 C [OK]
      Cache 2 - CT1000MX500SSD1_2307E6ABC2BB (sdj) - active 34 C [OK]
      Also note the DISK INVALID issue, which seemingly needs an xfs_repair. I attempted that on the previous disk before pulling it out and just trying to do a parity rebuild.

      10/12/2023 1:45AM, parity done:
      Disk 1 returned to normal operation. Description: WDC_WD140EDGZ-11B2DA2_2CHDSP1P (sdk)
      Data-Rebuild finished (3299065 errors). Description: Duration: 3 days, 3 hours, 20 minutes, 25 seconds. Average speed: 51.6 MB/s
      <NOTE: STILL HAVE "Unmountable: Unsupported or no file system" for Disk 1>

      10/12/2023 2AM, start reconstruction on disk5:
      Disk 5, is being reconstructed and is available for normal operation. Description: WDC_WD100EMAZ-00WJTA0_JEGW1ADN (sdf). Event: Unraid Data-Rebuild

      10/13/2023 2AM:
      Disk 5 in error state (disk dsbl). Description: WDC_WD100EMAZ-00WJTA0_JEGW1ADN (sdf)
      Disk 3 in error state (disk dsbl). Description: WDC_WD100EMAZ-00WJTA0_JEGW40YN (sde)
      Description: Array has 3 disks with read errors
      Disk 3 - WDC_WD100EMAZ-00WJTA0_JEGW40YN (sde) (errors 64)
      Disk 4 - ST10000DM0004-2GR11L_ZJV6AXZB (sdd) (errors 156)
      Disk 5 - WDC_WD100EMAZ-00WJTA0_JEGW1ADN (sdf) (errors 1024)

      10/13/2023 ~11AM, diagnostics attached:
      Data-Rebuild finished (3299065 errors) (obviously incorrect). Description: Duration: 3 days, 3 hours, 20 minutes, 25 seconds. Average speed: 51.6 MB/s
      Rebooted to change around cables.
      After reboot:
      Disk 1 - WDC_WD140EDGZ-11B2DA2_2CHDSP1P (sdj) (errors
      * disk3 (WDC_WD100EMAZ-00WJTA0_JEGW40YN) is disabled
      * disk5 (WDC_WD100EMAZ-00WJTA0_JEGW1ADN) is disabled

      10/13/2023 12PM: I tried doing a recheck, but the noises coming from disk1 and the errors popping up make me think that's not the best way forward.
      Disk 1 - WDC_WD140EDGZ-11B2DA2_2CHDSP1P (sdj) (errors 990668)
      Disk 4 - ST10000DM0004-2GR11L_ZJV6AXZB (sdf) (errors 607524)

      Currently:
      Disk 1: Officially available, but (1) corrupted file system, (2) making noises, and the log has errors:
      Oct 13 11:48:29 kernel: I/O error, dev sdj, sector 84870416 op 0x0:(READ) flags 0x0 phys_seg 128 prio class 2
      Oct 13 11:48:29 kernel: device offline error, dev sdj, sector 84871440 op 0x0:(READ) flags 0x0 phys_seg 32 prio class 2
      Oct 13 11:48:30 kernel: sd 11:0:4:0: [sdj] Synchronizing SCSI cache
      Oct 13 11:48:30 kernel: sd 11:0:4:0: [sdj] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=DRIVER_OK
      Disk 3: Emulated & disabled
      Disk 4: Had some Reallocated Sector counts last week, but they stopped at 136. Planned on replacing it soon.
      Disk 5: Emulated & disabled
      Me: Confused

      bridge-diagnostics-20231008-2035.zip
      bridge-diagnostics-20231013-1216.zip
      bridge-diagnostics-20231013-1034.zip
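      Given the mix of read errors and reallocated sectors described above, SMART data is the quickest way to separate genuinely failing drives from cable/PSU problems; a minimal sketch (device names are placeholders, and the Unraid GUI shows the same attributes on each disk's page):

      # Hedged sketch: /dev/sdX is a placeholder for each suspect drive.
      smartctl -H /dev/sdX    # overall health self-assessment
      smartctl -A /dev/sdX    # attributes: watch Reallocated_Sector_Ct, Current_Pending_Sector,
                              #   and UDMA_CRC_Error_Count (CRC errors usually point at cabling, not the disk)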
  11. 4. Shut down the docker container before restarting your VM
  12. Thanks. Just wanted to confirm: it does show the errors were corrected? I plan on adding dual parity now, so I want to make sure.
  13. My monthly parity check (without corrections) reported 4149 errors; this made sense to me, as I had a few system hangups* in the previous weeks. I started a new parity check manually with corrections on, but the history log shows 0 errors. The email, however, said there were 4149 errors, and the previous parity-check email said I cancelled it, so I don't know what's going on now...

      *The hangups are a separate issue (I don't know if the answers will be in the logs, since I had to force a reboot). They happened as I was attempting to preclear a new drive, because I would like to switch to dual parity. I'm using a random Amazon PCIe SATA card, so I'm assuming the issue is simply from that, and I plan on swapping it out for an LSI card.

      Parity Operation History:

      Emails:
      1-9-2023 11:47 PM - Unraid Parity-Check - Notice [MY_SYSTEM] - Parity-Check finished (4149 errors). Duration: 1 day, 16 hours, 41 minutes, 34 seconds. Average speed: 122.9 MB/s (normal)
      1-8-2023 10:50 AM - Unraid Parity-Check - Notice [MY_SYSTEM] - Parity-Check started. Size: 18.0 TB (warning)
      1-8-2023 10:12 AM - Unraid Parity-Check - Notice [MY_SYSTEM] - Parity-Check finished (0 errors). Canceled (warning)
      1-6-2023 5:30 PM - Unraid Parity-Check - Notice [MY_SYSTEM] - Parity-Check started. Size: 18.0 TB (warning)

      Archived notifications:
      09-01-2023 11:47 PM - Unraid Parity-Check - Notice [MY_SYSTEM] - Parity-Check finished (4149 errors). Duration: 1 day, 16 hours, 41 minutes, 34 seconds. Average speed: 122.9 MB/s (normal)
      09-01-2023 12:20 AM - Unraid Status - Notice [MY_SYSTEM] - array health report [PASS]. Array has 8 disks (including parity & cache) (normal)
      08-01-2023 10:50 AM - Unraid Parity-Check - Notice [MY_SYSTEM] - Parity-Check started. Size: 18.0 TB (warning)
      08-01-2023 10:12 AM - Unraid Parity-Check - Notice [MY_SYSTEM] - Parity-Check finished (0 errors). Canceled (warning)
      07-01-2023 05:00 PM - Community Applications - Application Auto Update - tips.and.tweaks.plg automatically updated (normal)
      07-01-2023 05:00 PM - Community Applications - Application Auto Update - community.applications.plg automatically updated (normal)
      06-01-2023 07:30 PM - Community Applications - Docker Auto Update - AirConnect bazarr firefox flaresolverr netdata plex-meta-manager prowlarr qdirstat radarr automatically updated (normal)
      06-01-2023 05:31 PM - Unraid Parity-Check - Notice [MY_SYSTEM] - Parity-Check started. Size: 18.0 TB (warning)
      05-01-2023 01:52 PM - Unraid Parity-Check - Notice [MY_SYSTEM] - Parity-Check finished (0 errors). Canceled (warning)
      05-01-2023 01:51 PM - Unraid Parity-Check - Notice [MY_SYSTEM] - Parity-Check started. Size: 18.0 TB (warning)

      bridge-diagnostics-20230109-2353.zip
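      When the GUI history and the notification emails disagree like this, it can help to look at the raw history file the parity history dialog reads; a minimal sketch, with the caveat that the path is an assumption from memory and may differ between Unraid versions:

      # Hedged sketch: file location is an assumption and may vary by Unraid version.
      cat /boot/config/parity-checks.log    # one delimited line per completed or cancelled parity operation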
  14. So what worked for me, @PCR, is adding HardwareDevicePath="/dev/dri/renderD129" to Preferences.xml: https://forums.plex.tv/t/preferred-hw-transcoder-linux/593507
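      In practice that edit looks something like the sketch below; the container name and appdata path are assumptions and will differ per setup, and Plex should be stopped first so it doesn't overwrite Preferences.xml on shutdown:

      # Hedged sketch: container name and appdata path are assumptions; adjust to your setup.
      docker stop plex
      nano "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Preferences.xml"
      # add the attribute inside the single <Preferences ... /> element, e.g.:
      #   <Preferences ... HardwareDevicePath="/dev/dri/renderD129" ... />
      docker start plex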
  15. @nuhll this is the thread on the Plex forums; maybe you can make more sense of it. By "working again" you mean on 1.29.0, right?