DrBobke

Members
  • Posts: 50

  1. I just shut down Unraid and saw that when I took out the Parity drive, I apparently accidentally pulled out the cache drive's SATA cable. Still not sure how I managed to do that, as it is in a pretty secure location that I cannot reach easily. But Dockers and VMs are back up and running! 🤩
  2. Is there a way to get the VMs and Dockers running again? Below is the xfs_repair (check-only) output for both disks; a sketch of how such a check-only run is invoked follows this list.

     Disk 1:
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log - zero log... - scan filesystem freespace and inode maps... - found root inode chunk
     Phase 3 - for each AG... - scan (but don't clear) agi unlinked lists... - process known inodes and perform inode discovery... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - process newly discovered inodes...
     Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - check for inodes claiming duplicate blocks... - agno = 0 - agno = 2 - agno = 9 - agno = 3 - agno = 4 - agno = 5 - agno = 7 - agno = 1 - agno = 8 - agno = 10 - agno = 6
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity... - traversing filesystem ... - traversal finished ... - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.

     Disk 3:
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log - zero log... - scan filesystem freespace and inode maps... - found root inode chunk
     Phase 3 - for each AG... - scan (but don't clear) agi unlinked lists... - process known inodes and perform inode discovery... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - process newly discovered inodes...
     Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - check for inodes claiming duplicate blocks... - agno = 0 - agno = 2 - agno = 6 - agno = 5 - agno = 4 - agno = 7 - agno = 8 - agno = 1 - agno = 9 - agno = 10 - agno = 3
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity... - traversing filesystem ... - traversal finished ... - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.

     I just realized that I am actually missing my cache drive. I was looking for answers on the forum and read somewhere that the files could be on the cache drive, but mine is not listed anymore (at all).
  3. Hey all, I unassigned Parity II as described (stop the array, go to the Parity section, select the dropdown and unassign), checked that it was unassigned, shut down Unraid, removed Parity II and booted up again, but was stuck with 'Stale configuration'. The only way to start the array again is to insert Parity II again. What is going on? I obviously want to start the array without Parity II being present... Edit - SHIT! I just see I have lost all my Dockers??? Nextcloud, duckdns, mariadb, Plex, Wireguard, OpenVPN, all of them gone! Also my 2 VMs are gone! HEEEELLLPPPP leonore-diagnostics-20220115-1450.zip I have just checked, my entire AppData folder is EMPTY! 😮 Luckily, I still have a backup of it from 9/01/2022 at 1.06 AM (a restore sketch follows this list). Can I restore without having to use the backup? I hope to God I didn't lose any data, but how on earth is this possible? I did the right things, no?
  4. Okay, thanks a lot! I will check if I can see anything 'not right' with disk 3. Thanks a lot for all your help, I really appreciate it! What is the best course of action for removing Parity II? Shut down Unraid, pull the drive out and power back up, or do I need to do it another way?
  5. I think I bought them from two separate companies, so I could first return the Parity drive and get a new one, and then disk 3, no? Since there are 'only a few TB' on Disk 3, I could copy that over to an external HDD too if needed, or doesn't it work that way?
  6. Awesome, thanks a lot! So should I send back Parity drive II and okay (acknowledge) disk 3's errors then, or just keep Parity drive II as well? I don't know how serious the errors are... Thanks again!
  7. Thanks a lot for the information. Disk 3 just finished its extended run (I'm sure it has taken more than 28 hours); when I went to bed last night it was around 80%. Attaching the zip file. Thanks again for all your help. Should I okay (acknowledge) the errors, or send the drive(s) back? leonore-smart-20220113-0922.zip
  8. The disk is not being used during the test. Yesterday evening my weekly parity check started, which will end in a few hours, but that was after the Parity II test ended. Is there something I can/should change there? And is there anything you can tell me about why the SMART status on the Dashboard remains on Error, or what I can do to fix that? As said, the disk is less than 1 year old; the other one is also still in warranty at just over 1 year old.
  9. I'm pretty sure it has taken longer than 28 hours. Also, the SMART status on the disk still shows as an error, in orange with a thumbs down. Disk 3 (12TB) is still clearing and also stuck at 10% (I don't see CPU or other parameters shoot up unexpectedly). I don't know where I can find the SMART polling timer setting; do you mean the line "SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer."? I am sure I wouldn't have touched any such setting, so it should be at its default, if there is one...
  10. I don't know how or why, but all of a sudden this went from 10% (I checked this morning) to 'completed without error' when I checked just now. I am posting the outcome for Parity drive II here. I will post the same results when disk 3 clears (if that happens). leonore-smart-20220111-1901.zip
  11. Hereby attaching the diagnostics file. As I read in other posts, it is helpful to note which drives are affected: Parity 2 (sdf) Z030A010FP7G (14TB) and Disk 3 (sdg) 6050A07RFPBG (12TB). Any help is greatly appreciated! leonore-diagnostics-20220111-0906.zip
  12. After making sure the disks now keep spinning, and after killing and restarting the extended SMART test, it is still stuck at 10% (it has been over 5 hours now). The Parity drive is 14TB, but still, I should be seeing some progress by now, no?
  13. Thanks a lot, the disks were indeed spun down. Very strange, I would have assumed the disks had operations running, so they would not spin down. I have killed the self-test and restarted it. Hopefully it will move along this time around...
  14. Hi all, since yesterday I am having an issue with my share in Plex-Media-Server. I have always added movies to my folder and Plex updates itself to show the new movies. Since adding movies yesterday, it has not wanted to update the library, not even after giving the command to scan the entire library. I have 914 movies in the library, but it doesn't want to update; I also tried refreshing all the metadata. All my movies are named "moviename (year) resolution.ext" with an .nfo file linking to themoviedb for the movie and a JPEG with the movie poster; each movie is in its own folder, except for collection sets (Harry Potter, Lord of the Rings, The Hobbit, Fast and Furious, etc.), as in the layout sketch after this list. As a test, I made a new library in Plex pointing to the same directory as the one described above, and the new movies do pop up there. The strange thing is, it tells me there are 853 movies in there (vs 914 in the original). Because I thought this was very strange, I made another identical library, and in that one I get 635 movies... They should all show the same number of movies, no? They should all be set up with the same movie scanner and agent as far as I'm aware. Ideally I would want to keep "library 1 / the original", as that knows which movies I have seen and which I haven't, and get it to update with the new movies. I have been using the "Plex-Media-Server" docker for a few months now, ever since I was having issues with the Binhex-Plex docker (which is disabled/not running at the moment); the docker container is up to date too, obviously. Hope someone can help, as I didn't find the answer on the forum. Thanks in advance, Best regards, DrBobke
  15. Hi all, for some months I have been having SMART errors on two drives (one parity and one array drive). I have run the short SMART self-test on both and it couldn't find any issues. I then thought to run the extended self-test; at first I tried both simultaneously, but each stays stuck at 10% (when it starts, it starts at 10% and never progresses). After a few days of running, I decided to kill the extended test on the normal HDD and only let it run on the parity drive. After a few days, still at 10%, I decided to kill that test too. I rebooted my PC and the Unraid server and ran another extended test, but it has now been running for 13 days and 21 hours and is still stuck at 10% (a command-line sketch for starting and monitoring these tests follows this list). Does anyone know why it doesn't want to progress further? It's driving me crazy. I even noticed a few weeks ago that, after having rebooted my PC, the self-test read 'aborted by user', which seems strange, as the self-test should run inside Unraid itself and there is no reason for it to abort just because I rebooted my PC, no? I have read different articles on the forum (this one was most helpful), but it seems I cannot even get it to run through the entire thing. The parity drive is only a few months old; the HDD is just over 1 year old, but as said, it has been like this since the beginning, or at least for several months. Both (all drives in the array) were pre-cleared and ready to go; I bought them new from good suppliers that I trust (no prior deployment in another machine). The parity drive is even my 2nd parity drive, if that helps. Hope someone can help. Thanks in advance! Best regards,
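
A minimal sketch of the kind of check that produced the output quoted in post 2. It assumes the array is started in maintenance mode and that the disk being checked is addressed through Unraid's md device; the /dev/md1 name here is only an example, run from the Unraid console:

    # Check-only pass: the "No modify flag set" lines in the quoted output come
    # from -n, which reports problems but changes nothing on disk.
    xfs_repair -n /dev/md1

    # An actual repair drops -n. If it refuses to run because of a dirty log,
    # -L zeroes the log as a last resort (recent metadata changes can be lost).
    xfs_repair /dev/md1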
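For the empty appdata folder in post 3, a rough sketch of restoring it from a backup archive, assuming the backup is a tar archive produced by an appdata backup plugin and that appdata lives on the cache pool; the archive name, its location and the extraction target are all assumptions that depend on how the backup was made. Stop the Docker and VM services first:

    # Hypothetical archive name and location; adjust to the real backup file.
    BACKUP=/mnt/user/Backups/CA_backup.tar.gz

    # Extract into the appdata share on the cache drive (assumed location).
    # Depending on how the archive was created, the target may need to be
    # /mnt/cache instead of /mnt/cache/appdata.
    mkdir -p /mnt/cache/appdata
    tar -xf "$BACKUP" -C /mnt/cache/appdata

    # Spot-check a few container folders before re-enabling Docker.
    ls /mnt/cache/appdata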
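A sketch of the movie folder layout described in post 14, with made-up names: one folder per movie holding the video file, the .nfo file and the poster JPEG, plus a shared folder for a collection set:

    Movies/
        Example Movie (2019)/
            Example Movie (2019) 1080p.mkv
            Example Movie (2019).nfo
            poster.jpg
        Harry Potter/
            Harry Potter and the Philosopher's Stone (2001) 1080p.mkv
            Harry Potter and the Chamber of Secrets (2002) 1080p.mkv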
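For the stuck extended self-tests in posts 15 and 9, a sketch of starting and monitoring the same test from the Unraid command line with smartctl; /dev/sdf is the Parity 2 device mentioned in post 11 and is only an example:

    # Start an extended (long) self-test on the drive.
    smartctl -t long /dev/sdf

    # Show the drive's capabilities, including the
    # "Extended self-test routine recommended polling time" (expected duration).
    smartctl -c /dev/sdf

    # Check progress: "Self-test execution status" reports the percentage of the
    # test remaining, and the self-test log lists completed or aborted runs.
    smartctl -a /dev/sdf

    # Abort a self-test that is currently in progress.
    smartctl -X /dev/sdf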