mfort312

Everything posted by mfort312

  1. No, I just marked it as ignored in the plugin and moved on. If this file has login information, I would take care to scrub it before posting diagnostics.
  2. The newly installed CA Fix Common Problems plugin is reporting the following error for me: "/boot/config/unraid_notify.cfg corrupted". However, when I investigate the file, it opens fine, saves a copy fine, and in all respects this seems like a false positive to me. Is this a deprecated file? Is it safe to ignore in Fix Common Problems? I've been running this Unraid setup since 2011, continuously updated, and am on 6.12.6 currently. More concerning: I was just about to post my "anonymized" diagnostics when I noticed that my plaintext email login and password are included in this particular diagnostics file, and a quick search revealed that at least one other user has inadvertently posted sensitive information with their diagnostics. I have DMed them; there may be more people affected.
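     Incidentally, for anyone wanting to check their own diagnostics before attaching them, this is roughly the search I ran, as a sketch (the archive name is just a placeholder for whatever your server produces):

     ```sh
     # Extract the diagnostics archive and grep it for credential-like
     # strings before posting (archive name is a placeholder)
     unzip -d diag tower-diagnostics.zip
     grep -rniE 'password|passwd|smtp|login' diag/
     ```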
  3. So I'm a bit confused: if I want to use this updated plugin with 6.5, do I still need to download either the Joe L or bjp999 script and then patch it with the Frank1940 link from a couple of pages back? The Community Apps page says the script is not included. Edit: I'm also given the option of a gfjardim beta script. Has this been patched already?
  4. Also, what will happen to my Docker apps (on the cache drive) and User Shares with a New Config? Will I need to rebuild them?
  5. Ok, good news: I managed to copy everything off the disabled disk8, while still in emulation mode, to a drive outside the unRAID array. Next, to fix disk9, I started in maintenance mode and from a terminal attempted:

     xfs_repair -v /dev/md9
     ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.

     I tried mounting and unmounting again, with the same error, so back in maintenance mode I next tried:

     xfs_repair -vL /dev/md9
     ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.

     After it finished, I stopped and started the array again in normal mode and, bingo, there were all my missing files. lost+found held only a few files from the failed MC copy yesterday; everything else is in its place. I am now copying everything from disk9 off the unRAID array. With disk9 free, I will use it to replace the failing parity drive, and then work on replacing disk5. Disk5's SMART status looks pretty similar to, if not worse than, the failing parity drive's. How can I spot the difference between currently failing and still hanging on? Thanks again for your help and advice.
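     For reference, here is the repair sequence laid out end to end, as a sketch rather than gospel (it assumes maintenance mode, an XFS disk9 at /dev/md9, and /mnt/tmp9 as an arbitrary example mount point):

     ```sh
     # 1) Dry run first: report problems without changing anything
     xfs_repair -nv /dev/md9

     # 2) If it reports a dirty log, mount once so the log replays,
     #    then unmount cleanly (/mnt/tmp9 is just an example)
     mkdir -p /mnt/tmp9
     mount -t xfs /dev/md9 /mnt/tmp9
     umount /mnt/tmp9

     # 3) Re-run the real repair
     xfs_repair -v /dev/md9

     # 4) Last resort, only if the mount keeps failing: zero the log;
     #    in-flight metadata may land in lost+found
     xfs_repair -vL /dev/md9
     ```

     In my case the mount-and-unmount never cleared the error, which is why I ended up at step 4.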
  6. Thank you, Johnnie. Is the parity drive the other drive that's failing? So should I set up a new config with a new parity drive first and then run xfs_repair on disk9 as part of the array, or try xfs_repair first, with disk9 in the array? I was thinking of using disk9 as the new parity drive if I can recover and move the files. It sounds like it would be better to get a new drive for parity before attempting xfs_repair?
  7. Rebooted, and the SMART attributes from disk8 are now available. However, disk9 is now reporting as unmountable with no file system. When I was moving files from disk8, I first copied to disk3 with no problems. When I was moving a folder to disk9, a few files copied before the read/write errors, and then disk9 was inaccessible via MC and all the files disappeared. There were only about 80 GB of files, nothing irreplaceable, but now I am worried about the other drives. If possible, it would be nice to recover at least a file list, to know which files were on that drive beforehand. Actually, it would be nice to do that for all the drives before proceeding; is there an easy way to get a full file list? tower-diagnostics-20180403-0732.zip
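     (The kind of thing I have in mind is the sketch below, assuming the standard unRAID layout where each data disk is mounted at /mnt/diskN, and using /boot as an example place to keep the lists.)

     ```sh
     # Save a recursive file listing for each mounted data disk
     for d in /mnt/disk*; do
         find "$d" -type f > "/boot/filelist-$(basename "$d").txt"
     done
     ```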
  8. Something else has happened while using MC to copy files from the disabled drive to another: a read/write error (5) occurred, and all the files on the drive I was copying to (disk9) disappeared! I am leaving the system as-is for the time being. Attached are the new diagnostics. tower-diagnostics-20180402-2247.zip
  9. For the first time, a monthly parity check returned errors, and a drive is now showing as Disabled. I am unable to read SMART attributes on this drive, although its SMART status is green. A couple of other drives have some SMART reallocated-sector errors but have been holding steady for some time now. My plan of action is to first move the data off the emulated/disabled drive and then remove the drive from the array, as I have plenty of spare capacity on the remaining drives. Should I upgrade to 6.5 first? Are there any steps I should be mindful of? Do my logs indicate what caused the Disabled drive? I'm assuming simple failure of an old drive.
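     For context, this is how I've been checking the drives from the console, as a minimal sketch (/dev/sdb is a placeholder for the actual device; on the disabled drive these commands return nothing for me):

     ```sh
     # Full SMART report for one drive (replace /dev/sdb as appropriate)
     smartctl -a /dev/sdb

     # Watch reallocated-sector counts across all drives for growth
     for d in /dev/sd?; do
         echo "== $d =="
         smartctl -A "$d" | grep -i reallocated
     done
     ```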
  10. I've been having some funny problems with this release... I keep getting 'simple features' email notifications that I have an array fault (missing or invalid disk). The array and AFP/SMB shares seem to be running OK at the moment, but the drive mappings in the email don't match my actual setup (screenshot attached): the email erroneously lists disk1 as my parity drive, and the rest of the drives are also mismatched in sequence. I don't actually have a parity drive assigned yet; I'm still in the process of adding data before activating one. The problems started after an automated backup to a Time Machine share began at the same time as a large file copy to the same drive (but a different share), and AFP crashed and unmounted all the shares. It took a restart of both server and client and several minutes of waiting to reconnect to my AFP shares. I've been receiving the emails since, even after repeated restarts and file system checks. The Time Machine backups are still pretty flaky; every few hours a backup fails to mount the drive but eventually succeeds, and it usually takes several minutes to reconnect my Time Machine share after a failure. Any advice on troubleshooting? I've attached my recent syslog and my console messages from yesterday when the AFP crash happened. I'm running the 5beta10 version and accessing from a Snow Leopard client.

      Array Fault email:

      This message is a status update for unRAID Tower
      -----------------------------------------------------------------
      Server Name: Tower
      Status: unRaid array not started.
      Status: The unRaid array needs attention. One or more disks are disabled or invalid.
      Date: Fri Aug 12 14:18:05 CDT 2011

      Disk Temperature Status
      -----------------------------------------------------------------
      Parity Disk [sdc]: 35°C (DiskId: Hitachi_HDS5C3020ALA632_ML0220F313L5PD)
      Disk 1 [sdd]: 34°C (DiskId: Hitachi_HDS5C3020ALA632_ML0220F313NV0D)
      Disk 2 [sde]: 33°C (DiskId: Hitachi_HDS5C3020ALA632_ML0220F313S42D)

      Disk SMART Health Status
      -----------------------------------------------------------------
      Parity Disk PASSED (DiskId: Hitachi_HDS5C3020ALA632_ML0220F313L5PD)
      Disk 1 PASSED (DiskId: Hitachi_HDS5C3020ALA632_ML0220F313NV0D)
      Disk 2 PASSED (DiskId: Hitachi_HDS5C3020ALA632_ML0220F313S42D)

      Output of /proc/mdcmd:
      -----------------------------------------------------------------
      sbName=/boot/config/super.dat
      sbVersion=2.1.2
      sbCreated=1311308833
      sbUpdated=1313175533
      sbEvents=31
      sbState=1
      sbNumDisks=4
      sbSynced=0
      sbSyncErrs=0
      mdVersion=2.1.2
      mdState=STOPPED
      mdNumProtected=4
      mdNumDisabled=1
      mdDisabledDisk=0
      mdNumInvalid=1
      mdInvalidDisk=0
      mdNumMissing=0
      mdMissingDisk=0
      mdNumNew=0
      mdResync=0
      mdResyncCorr=0
      mdResyncPos=0
      mdResyncDt=0
      mdResyncDb=0
      diskNumber.0=0 diskName.0= diskSize.0=0 diskState.0=4 diskId.0=
      rdevNumber.0=0 rdevStatus.0=DISK_DSBL_NP rdevName.0= rdevSize.0=0 rdevId.0= rdevNumErrors.0=0 rdevLastIO.0=0 rdevSpinupGroup.0=0
      diskNumber.1=1 diskName.1=md1 diskSize.1=1953514552 diskState.1=7 diskId.1=Hitachi_HDS5C3020ALA632_ML0220F313L5PD
      rdevNumber.1=1 rdevStatus.1=DISK_OK rdevName.1=sdc rdevSize.1=1953514552 rdevId.1=Hitachi_HDS5C3020ALA632_ML0220F313L5PD rdevNumErrors.1=0 rdevLastIO.1=1313175588 rdevSpinupGroup.1=0
      diskNumber.2=2 diskName.2=md2 diskSize.2=1953514552 diskState.2=7 diskId.2=Hitachi_HDS5C3020ALA632_ML0220F313NV0D
      rdevNumber.2=2 rdevStatus.2=DISK_OK rdevName.2=sdd rdevSize.2=1953514552 rdevId.2=Hitachi_HDS5C3020ALA632_ML0220F313NV0D rdevNumErrors.2=0 rdevLastIO.2=1313175588 rdevSpinupGroup.2=0
      diskNumber.3=3 diskName.3=md3 diskSize.3=1953514552 diskState.3=7 diskId.3=Hitachi_HDS5C3020ALA632_ML0220F313S42D
      rdevNumber.3=3 rdevStatus.3=DISK_OK rdevName.3=sde rdevSize.3=1953514552 rdevId.3=Hitachi_HDS5C3020ALA632_ML0220F313S42D rdevNumErrors.3=0 rdevLastIO.3=1313175588 rdevSpinupGroup.3=0
      diskNumber.4=4 diskName.4= diskSize.4=0 diskState.4=0 diskId.4=
      rdevNumber.4=4 rdevStatus.4=DISK_NP rdevName.4= rdevSize.4=0 rdevId.4= rdevNumErrors.4=0 rdevLastIO.4=0 rdevSpinupGroup.4=0
      diskNumber.5=5 diskName.5= diskSize.5=0 diskState.5=0 diskId.5=
      rdevNumber.5=5 rdevStatus.5=DISK_NP rdevName.5= rdevSize.5=0 rdevId.5= rdevNumErrors.5=0 rdevLastIO.5=0 rdevSpinupGroup.5=0

      UPDATE: Once I built my parity disk, the hourly emailed errors went away.

      console.txt syslog-2011-08-12.txt.zip
  11. I'm in the process of building a 5 Drive Budget Box as spec'd, and I have a dum-dum question about installing the Norco SS-500: is it necessary to supply power to both power inputs? Do they power different drives in the array? And when I expand with one or two more of these drive bays but have run out of power connectors on the Corsair CX430, can I daisy-chain them with some sort of expansion power cable?