ezzys

Members
  • Posts: 33
  • Joined
  • Last visited

ezzys's Achievements: Noob (1/14)
Reputation: 2
  1. Did you ever solve this? I am getting the same error after upgrading to 6.12.2.
  2. Any luck with this? I am getting the same issue.
  3. I just upgraded to 6.12.2 and kept getting the error message "*ERROR* Unexpected DP dual mode adaptor ID 01 (or 03)", to the point that my logs were filling up with it. I did a Google search and the other posts with this error all involved ASRock motherboards, so I am not sure whether that is related. I have an ASRock B360M-HDV. I have now reverted to 6.11.5. Does anyone have any idea what might be causing this? Diagnostics attached. unraid1-diagnostics-20230707-2359.zip
  4. Thanks, I changed the cables and it appears to be going okay. I will monitor and make a change if I get any more errors.
  5. An old 2TB drive started showing errors, and they have recurred after a reboot at the weekend. I have run an extended SMART test and it passed (attached). What does this mean? Is the drive okay, or should it be replaced? I have bought another 2TB drive as it was cheap, so I can replace it, but I want to know whether the drive is knackered or whether it could be precleared and used again (see the smartctl sketch after this post list). unraid1-smart-20230510-1854.zip
  6. I have just installed qbittorrentvpn, however I cannot seem to get the WebUI to work. I am using PIA VPN. The logs suggest it is all working; the final message in the log after a restart is:

     2023-03-11 17:08:47,265 DEBG 'watchdog-script' stdout output: [info] qBittorrent process listening on port 8082

     I did change the ports as I had a clash, but made sure I was consistent with what I changed:

     Host Port 1 / Container Port: 6881 = 6881
     Host Port 2 / Container Port: 6881 = 6881
     Host Port 3 / Container Port: 8080 = 8082
     Host Port 4 / Container Port: 8118 = 8128
     Container Variable: WEBUI_PORT = 8082

     After leaving it to run for a while I just see the following in the log:

     2023-03-12 19:38:49,166 DEBG 'start-script' stdout output: [info] Successfully assigned and bound incoming port '54605'

     Any suggestions on what to adjust, or any log information that would help troubleshoot? (A port-mapping sketch follows after this post list.)
  7. Okay, thanks - my approach was to use disk 1, the disabled one, for the rebuild. Given it is not showing any errors, does that sound okay? Also, how do I do that - the disabled disk is not showing as an unassigned device, nor does the array let me select that disk once it is started? I assume that is because it is mounted. Do I need to disconnect the disabled drive and reboot, and then re-add the drive on a second reboot?
  8. Thanks both, I ran "check file system" on disabled disk 1 - see below. It looks okay? Should I simply create a new array configuration and rebuild parity?
  9. Hi, diagnostics attached with the array started. Also attached is data from my syslog server, which was running when the drive failed. unraid1-diagnostics-20230219-0115.zip 2023-02-19.txt
  10. One of my drives has just failed on me. It occurred when I tried to start a Windows 10 VM (which didn't start and came up with an error message) - possibly related? I tried to reboot the system, but it locked up and I had to force power it off and restart. A load of error messages came up on that reboot, but it showed a green message once I got to the GUI. I have also successfully rebooted it since the drive failed, and I think the only remaining error is the failed drive - I did not see the same sort of error messages on the second restart. I ran a short SMART test on the failed drive and no errors came back; output attached. Should I run an extended SMART test (a sketch for this is after the post list)? I am going to address the VM issue separately as it was only used for testing, so nothing is lost there. However, I would be grateful for advice on next steps: should I rebuild the array from parity, OR should I rebuild parity on the basis that the drive shows as disabled because it is out of sync (i.e. create a new array configuration)? unraid1-smart-20230219-0025.zip
  11. Thanks, I rebuilt the array from parity and ran the file check. I got the following output when I used the -nv flags:

     Phase 1 - find and verify superblock...
             - block cache size set to 1417896 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 1792898 tail block 1792898
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 1 - agno = 2 - agno = 3 - agno = 0 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.

     XFS_REPAIR Summary    Wed Jan 25 18:14:31 2023

     Phase        Start           End             Duration
     Phase 1:     01/25 18:13:38  01/25 18:13:39  1 second
     Phase 2:     01/25 18:13:39  01/25 18:13:39
     Phase 3:     01/25 18:13:39  01/25 18:14:16  37 seconds
     Phase 4:     01/25 18:14:16  01/25 18:14:16
     Phase 5:     Skipped
     Phase 6:     01/25 18:14:16  01/25 18:14:31  15 seconds
     Phase 7:     01/25 18:14:31  01/25 18:14:31

     Total run time: 53 seconds

     My understanding is that the above is the output from a dry run. Do I need to re-run with just the -v flag, or should I do something else? I cannot see any errors or warnings. Thanks. (An xfs_repair sketch follows after this post list.)
  12. I recently had a drive fail on me. However, while I was trying to locate the right drive I managed to drop and break another drive. Luckily (perhaps), this second drive was the parity drive. So at that point I had two failed drives, and because I don't have dual parity I could not rebuild the array. I managed to copy the data off the original failed drive onto a spare using Windows Explorer (the drive was plugged into my second Unraid machine and I did a network copy onto an NTFS-formatted drive in Windows). After all this I set up a new array config. However, the original drive that failed was still being shown, so I added it back in, put a new drive in, and set off the parity sync. The parity sync completed, but during the sync I got errors saying "current pending sector is 64" on the old drive. The old drive has failed on me again today (after a couple of days with no issues), so I realise I need to replace it.

      My thoughts were: create a new config with my existing drives (minus the one that has failed), add in a new one (to replace the failed one), re-add the parity drive, and set off a sync. I believe this should leave me with the data on all my drives minus the failed one. I will then manually copy all the data from the old failed drive back onto the array - either from the failed drive itself (if it can still be read) or from the drive I copied the data onto when it originally failed.

      Now my questions: Is the parity drive I recently created any good, or does "current pending sector is 64" mean that data is likely to be corrupt? My thought is yes, it probably is, and that it is not a good idea to rely on it for a rebuild - hence the new config. Also, what tools should I use to get data off my old failed drive? I previously used Windows Explorer because I wanted to get as much data off as quickly as I could. It seemed to work, but I am not sure whether any errors or issues would have stopped it. Should I have used another tool? I am happy to try another approach (see the recovery sketch after this post list). Cheers
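Regarding the preclear question in post 5: a minimal smartctl sketch for checking the attributes that usually decide whether an old drive is worth reusing. The device name /dev/sdX is a placeholder, not taken from the attached report, and the attribute names are the standard SMART ones rather than values from this particular drive.

    # Full SMART report for the drive (replace sdX with the actual device)
    smartctl -a /dev/sdX

    # Attributes most relevant to a drive that has been throwing errors:
    #   5   Reallocated_Sector_Ct
    #   197 Current_Pending_Sector
    #   198 Offline_Uncorrectable
    smartctl -A /dev/sdX | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'

If those raw counts are zero and the extended test passed, the drive is often treated as reusable after a preclear, but that is a judgement call rather than a guarantee.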
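For the WebUI problem in post 6: a sketch of how the port mappings and the WEBUI_PORT variable need to line up, written as an equivalent docker run fragment. This assumes the binhex/arch-qbittorrentvpn image and omits the VPN credentials and volume mappings the container also needs; the key point is that the container side of the WebUI mapping must equal WEBUI_PORT.

    # Illustrative fragment only - on Unraid the container is normally created from the template.
    # With WEBUI_PORT=8082, the WebUI mapping must target container port 8082:
    docker run -d --name qbittorrentvpn \
      --cap-add=NET_ADMIN \
      -p 6881:6881 -p 6881:6881/udp \
      -p 8082:8082 \
      -p 8118:8118 \
      -e WEBUI_PORT=8082 \
      binhex/arch-qbittorrentvpn
    # The WebUI would then be reached at http://<unraid-ip>:8082

If a host port is still mapped to container port 8080 while WEBUI_PORT is 8082, the WebUI will not respond on that mapping even though the log shows qBittorrent listening on 8082 inside the container.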
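For the extended-test question in post 10: the long self-test can be started from the command line as well as from the GUI. A minimal sketch, again with /dev/sdX as a placeholder for the drive that dropped out.

    # Start an extended (long) self-test; the drive stays usable while it runs
    smartctl -t long /dev/sdX

    # After the estimated completion time, review the self-test log
    smartctl -l selftest /dev/sdX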
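On the -nv question in post 11: -n is the "no modify" flag, so -nv is a read-only dry run and dropping the -n performs the actual repair. A command-line sketch, assuming the array is started in maintenance mode; /dev/md1 stands in for disk 1 and the exact device name can differ between Unraid versions.

    # Dry run - report what would be changed without touching the filesystem (what -nv produced)
    xfs_repair -nv /dev/md1

    # Actual repair - the same command without -n
    xfs_repair -v /dev/md1

The same thing can normally be done from the drive's Check Filesystem Status section in the GUI by removing -n from the options box.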
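On the recovery question in post 12: instead of copying through Windows Explorer, tools that report read errors (or work around them) are generally safer on a failing drive. A sketch, assuming the old drive is mounted read-only at /mnt/failed and that ddrescue is installed; the mount point, target share, and device names are all placeholders.

    # File-level copy onto the array that reports any files it cannot read
    rsync -av --progress /mnt/failed/ /mnt/user/recovered/

    # Block-level rescue of the whole drive, copying good areas first and retrying bad ones later
    # (sdX = failing source drive, sdY = spare of equal or larger size)
    ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue.map

rsync is enough if the drive still reads cleanly; ddrescue is the better option once read errors appear, since it keeps a map of the bad areas and can be resumed.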