erniej

Members
  • Posts: 10
  • Joined
  • Last visited



  1. Thanks for having a look. I spotted this earlier and decided it's about time I dove into the logs and learned a thing or two anyway. It repaired with no problems, but disk 20 is a disk that was giving me problems in a different Unraid server too. Guess it's time to take it right out of service.
  2. Booted up a backup server today to find that, within only one of several shares, and within only one folder of that share (of which there are multiple), all the files are gone. They don't show when navigating to the share (the others do), and they don't show in Krusader. BUT they do still show on the individual drives if I navigate around that way. The next thing I see, when navigating through the share's folders, is this message on the folder with the missing files: "No listing: Too many files". Now, all that being said, this server gets my 'reject' drives - if something starts to fail out of a production server, it gets put into this machine until absolute end of life (or that's the plan anyway). Can any of you diagnostics experts have a gander at the file and see if this is a single-drive problem, or something else that I can recover from? If not, what's the best way to delete the share (and all the files, including the 'lost/missing' ones) to recover the space and just let my backups run again? Thanks! zoidberg-diagnostics-20211106-1158.zip
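For a situation like the one described above - files visible under the individual disk mounts but not in the merged share view - one quick way to quantify the discrepancy is to diff per-disk listings against the share listing. A minimal sketch, assuming Unraid's usual layout (array disks mounted at /mnt/diskN, shares merged at /mnt/user/<share>); the share and folder names below are placeholders:

```shell
# Sketch with assumed paths: Unraid mounts each array disk at /mnt/diskN
# and merges them into user shares at /mnt/user/<share>, so a file can
# exist on a disk yet be missing from the share view.
# Usage: missing_from_share SHARE_DIR DISK_DIR...
missing_from_share() {
    share="$1"; shift
    tmp_disks=$(mktemp); tmp_share=$(mktemp)
    # Everything the physical disks hold, as share-relative paths:
    for d in "$@"; do
        (cd "$d" 2>/dev/null && find . -type f)
    done | sort -u > "$tmp_disks"
    # Everything the merged share actually exposes:
    (cd "$share" 2>/dev/null && find . -type f) | sort > "$tmp_share"
    # comm -23: lines only in the disk listing = files the share is not showing
    comm -23 "$tmp_disks" "$tmp_share"
    rm -f "$tmp_disks" "$tmp_share"
}

# On the server itself this would look something like:
#   missing_from_share /mnt/user/backups /mnt/disk*/backups
```

If the output is non-empty, the files are physically present and the problem is in the share merge, not the disks themselves.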
  3. Yes - the system did a check, and although the drive still says unmountable, Unraid says "Array Started • Parity-Sync / Data-Rebuild" - not sure really what's going on. The data is still missing even though the icon on my disk 1 says it's being emulated. I still fear the data is lost and I'll have to bring it back from a backup source. The frustration is that there was only a small amount of data on this drive, and it would have been nice to just restore that data if I could find out what had been on that drive vs. split across the other drives. And the ultimate frustration is why unRaid doesn't see this as a failed drive and properly emulate it. At this point, I'm ready to turn off parity protection - for the two times now this drive has failed like this, parity has done nothing to help keep things running until I can swap a drive and rebuild.
  4. I'll replace those cards with LSIs over the next week or so... good to know! Are there any 'study materials' out there on the LSI vs. Marvell controllers so I know the difference and the hows/whys? Also, on the port multiplier - again, not sure what to do there - I just plugged it in and things 'worked' for me. I guess I'd better get it figured out to eliminate future problems, though. Second, this 'failed' drive isn't showing as failed - and therefore parity isn't emulating anything from it - it's just as if it completely 'disappeared' as an operating drive with files on it, and it now shows as a new drive ready for formatting instead. Any reference to those files outside of this server shows them as missing now - hence I had hoped that unRaid would have a log of what was on that drive if I can't get parity to emulate it while I put in a new drive. I can restore from backups - but since I don't know what was on that specific drive, I'm stuck doing a full comparative restore of 23tb of data. Not impossible, of course, but it would be easier if I could just pull the specific missing files and put them back.
  5. @trurl Just as an FYI, all was working since this last message (I ended up formatting the drive and adding it back to the array)... However, today the shares were 'dead' and various docker apps were dead too. I rebooted the server, and this same drive was back to "unmountable: no file system". Parity doesn't recognize a failure here. Of course, since the 6th, a lot of files have been added. Is there a log somewhere where I can see what was on the drive, so I can recover from backups since parity isn't doing its job? I'm just going to pull this drive and take it out of service rather than risk this type of thing happening again. Alternatively, if I pull this drive now, do you think unRaid would show it as a failed drive and then emulate the previous contents? The thing is, the way it shows right now, it's as if that drive never existed in my array. Any advice or thoughts on this? There must be some way to get a list of what was on this drive - assuming that unRaid tracks this info to manage its file system. Thanks!
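On the wish above for a record of what was on a given drive: as far as I know Unraid keeps no per-disk file catalog, but one can be snapshotted cheaply ahead of time so a later disk failure isn't a mystery. A minimal sketch with assumed paths (/mnt/disk* for array disks, /boot for the flash drive); the output directory and scheduling mechanism are up to you:

```shell
# Sketch with assumed paths: write one manifest file per array disk so
# that, if a disk later drops out of the array, you still know exactly
# which files it held and can restore just those from backup.
# Usage: manifest_disks OUTPUT_DIR DISK_DIR...
manifest_disks() {
    outdir="$1"; shift
    mkdir -p "$outdir"
    for d in "$@"; do
        # One text file per disk (e.g. disk1.txt) listing relative paths
        (cd "$d" 2>/dev/null && find . -type f) > "$outdir/$(basename "$d").txt"
    done
}

# e.g. run nightly via cron or the User Scripts plugin, storing the
# manifests on the flash drive so they survive an array disk failure:
#   manifest_disks /boot/manifests /mnt/disk*
```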
  6. I've killed my Calibre install... any way to recover it? I recently installed Calibre... had some second thoughts about the thing, but finally got a system that worked with Calibre, then Calibre-Web as the reader/download mechanism for iPad/iPhone. It seemed to be working reasonably, so I went 'all-in' and dumped hundreds of magazines and ebooks into the import folder. Well, my docker usage skyrocketed, Calibre started hogging CPU resources, and finally the docker usage hit 100% and it crashed. After the fact, I expanded my docker image, but Calibre just won't start for me anymore. I get an "execution error -- server error" message and nothing more. My docker container log shows Calibre with an 11.2Gb container, 10Gb writable, and a 1.04Mb log - I'm guessing the container at 11.2gb is full due to the 10gb of writable data - so is there a way to just expand Calibre's docker image? And what is in this Calibre Docker image? But my library and import folder are on the array disks. App data is not, however... it is at /mnt/user/appdata/calibre - perhaps I should move it, but can I without losing my Calibre data? I have a few other Docker containers, so I was pretty certain my config is fine, since nothing else runs astray chewing up resources and image space. Any ideas on how to get it running again? Thanks!
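A 10Gb writable layer like the one described above usually means the container is writing bulk data to a path that is not bind-mounted to the host, so imports accumulate inside docker.img instead of on the array. A sketch of the relevant mappings, as a config fragment - the image name and host paths are assumptions to adjust to the actual setup:

```shell
# Hypothetical container setup. The key point is that /config, the
# Calibre library, and the bulk import folder are all bind-mounted to
# appdata/array paths, so large imports never land in the container's
# copy-on-write layer (which lives inside docker.img).
docker run -d --name=calibre \
  -v /mnt/user/appdata/calibre:/config \
  -v /mnt/user/media/books:/books \
  lscr.io/linuxserver/calibre
```

If the library path configured inside Calibre points anywhere other than one of these mounted paths, every imported book inflates the writable layer and, eventually, fills docker.img.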
  7. I'm fairly certain all was good yesterday. Here's the report I had in email this morning, prior to seeing Disk 1 as unmountable:

     Event: Unraid Status
     Subject: Notice [FRACTAL] - array health report [PASS]
     Description: Array has 14 disks (including parity & cache)
     Importance: normal
     Parity - ST10000DM0004-1ZC101_ZA2BV175 (sdn) - active 36 C [OK]
     Disk 1 - ST8000DM004-2CX188_ZCT0N83Q (sdf) - active 36 C [OK]
     Disk 2 - ST1000LM024_HN-M101MBB_S2TPJ9CC605626 (sdo) - standby [OK]
     Disk 3 - ST10000DM0004-1ZC101_ZA2C36GR (sdj) - active 34 C [OK]
     Disk 4 - ST6000DM003-2CY186_ZF200ZQG (sdi) - active 36 C [OK]
     Disk 5 - WDC_WD30EZRX-00MMMB0_WD-WCAWZ1175087 (sdl) - standby [OK]
     Disk 6 - ST1000LM024_HN-M101MBB_S2YFJ9BD400082 (sdm) - standby [OK]
     Disk 7 - ST8000DM004-2CX188_ZCT0R4X4 (sdh) - active 35 C [OK]
     Disk 8 - ST10000DM0004-1ZC101_ZA2B9BQH (sde) - standby [OK]
     Disk 9 - ST10000DM0004-1ZC101_ZA2BBQV6 (sdb) - standby [OK]
     Disk 10 - ST10000DM0004-1ZC101_ZA2C640S (sdg) - standby [OK]
     Disk 11 - ST10000DM0004-1ZC101_ZA2C6CA5 (sdk) - standby [OK]
     Cache - Samsung_SSD_860_QVO_1TB_S59HNG0MB23020Y (sdc) - active 34 C [OK]
     Cache 2 - Seagate_BarraCuda_SSD_ZA1000CM10002_7M101GBJ (sdd) - active 42 C [OK]
     Parity is valid
     Last checked on Thu 03 Sep 2020 03:14:47 PM MDT (3 days ago), finding 0 errors.
     Duration: 4 days, 19 hours, 54 minutes, 14 seconds. Average speed: 24.0 MB/s

     This is from September 3, when the data rebuild finished from my Disk 1 replacement:

     Event: Unraid Parity sync / Data rebuild
     Subject: Notice [FRACTAL] - Parity sync / Data rebuild finished (0 errors)
     Description: Duration: 4 days, 19 hours, 54 minutes, 14 seconds. Average speed: 24.0 MB/s
     Importance: normal

     This is from August 29 - I pulled the original 1.5tb Disk 1 and inserted this 8tb disk in its place, and the rebuild started:

     Event: Unraid Disk 1 error
     Subject: Warning [FRACTAL] - Disk 1, drive not ready, content being reconstructed
     Description: ST8000DM004-2CX188_ZCT0N83Q (sdf)
     Importance: warning

     I definitely remember all the reads occurring - and the writes to the new Disk 1 and the Parity drive - and was quite certain everything was done and working correctly prior to this morning. Odd that all my shares disappeared to start this problem off, is it not? Is that a separate problem that occurred?
  8. I've formatted it and it's back in the array and usable... I did lose some temporary data, but it was quite temporary stuff and no issue with the loss - just a minor hiccup in my flow of things for today. I guess my main concern, as a relatively new unRaid user, is that this seems like a catastrophic failure - the drive went unusable (for whatever reason), but unRaid didn't pick up on this and allow for a parity simulation & rebuild. Perhaps it was purely coincidence that it happened right after removing this drive from another system, putting it into my unRaid array, and letting the rebuild occur - but going back through my logs, that initial rebuild and parity rebuild all finished up, so the array should have been solid as far as redundancy being available. Oh well... back to usual now, and I'll monitor that drive closely for a while. I'm just a bit paranoid about touching that system - I wanted to bump all the drives to 10tb or bigger over the next couple of weeks... might take it slower now!
  9. No, the original disk went into my second unRaid build and was wiped... there would have been minimal (if any) data on it, which is why I already installed and wiped it elsewhere... the 'used' amount was showing as about 25gb, so very negligible. Since this disk did rebuild onto the 8tb drive, what would have triggered it now to show unmountable and needing a format? Any ideas? I'd hate to see any other disk start 'dropping' like this - since it's essentially a critical failure with no way to recover data. Anyway, do you think I'm good to just format it and potentially forget about any data that 'might have' rebuilt onto it? I'm all ok with that, just concerned that this type of failure could occur again - but with significant data next time.
  10. Ok... a bit of history: Last week, I shut down and replaced a working 1.5tb drive with an 8tb drive (the 8tb worked fine - it came from my Synology NAS). The Unraid server threw up errors saying the 1.5tb was missing and was being simulated with parity. I followed by saying it was replaced by this 8tb drive, and the data rebuild and parity rebuild commenced. All seemed fine. I didn't really check much after that - the rebuild said it would take 4.5 days or so, and I proceeded to use the server as normal. I'm pretty certain it all completed and was back up and running as expected (my email reports say FAIL (data rebuild/parity rebuild in progress) ending on Sept 3rd, and PASS starting Sept 4th). All seemed fine...

      Fast forward to today... I'm copying a few smaller files over to the server and all of a sudden I get 'no access' (from Windows). I log into the server and see that all my shares are gone. I also notice that my Disk 1 now says "unmountable: no file system" - and at the bottom, I'm prompted to format. I do a bit of a Google search on the problem of missing shares, and then reboot the server. The shares all return, but the drive still says unmountable and wants me to format it.

      My question is - what happened, and what do I do now? I expected the parity and data rebuild to just bring this drive back online and operate normally with the extended space - and am pretty sure it did... or is this the normal process - any data on that initial 1.5tb drive was rebuilt onto a different drive instead of the replacement, and now I need to format this new drive? BUT during the rebuild process, the server was showing the extra space. I've attached a few screenshots showing the progression - with the 1.5tb drive I had 70.5TB of space, during the rebuild 77TB (with the 8tb drive replacing the 1.5tb drive), and now today 69TB. Hopefully someone can shed some insight into what has happened and my next course of action here...

      I have no idea what might have ended up on the 8tb drive that was replacing the 1.5tb drive during the last couple of days (if anything). Also new is:

      Event: Docker high image disk utilization
      Subject: Warning [FRACTAL] - Docker image disk utilization of 77%
      Description: Docker utilization of image file /mnt/user/system/docker/docker.img
      Importance: warning

      No idea if the above is related or unrelated. Thanks for any assistance (or perhaps just clarification of the drive rebuild/replacement process). fractal-diagnostics-20200906-1412.zip