Everything posted by trurl

  1. Maybe you already know, but you can open the command line of any docker container by clicking its icon in the webUI and selecting >_ Console
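     From the server's own terminal, docker exec gets you the same shell (container name here is just an example; some images only ship sh, not bash):

         docker exec -it my-container bash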
  2. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
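     If the webUI isn't reachable, as far as I know the same zip can be produced from the server's console:

         diagnostics
         ls /boot/logs/   # the resulting *-diagnostics-*.zip lands here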
  3. Those are all SMART warnings for your disks, so no, updating to 6.8 did not cause them; it just made you aware of them, and they have probably been that way for a while. You should set up Notifications to alert you immediately by email or another agent as soon as a problem is detected. There is no SMART report for disk 8 in your diagnostics, and the SMART report you attached separately also has no information. I would just shut the server down, wait until you are ready to replace disk8, and hope for the best when rebuilding with so many other questionable disks.
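     If the webUI won't produce a SMART report for disk8, you can try pulling one manually from the console (replace sdX with the actual device; the output path is just a suggestion):

         smartctl -a /dev/sdX > /boot/disk8-smart.txt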
  4. You should post your diagnostics so we can get a better understanding of your problem and suggest how to proceed.
  5. There are a lot of threads about repairing filesystems, and any particular error message isn't the full picture. I think you must have been looking at some old threads, because mounting to /tmp isn't the usual way now. You can do the repair from the webUI, and you are less likely to get the commands wrong that way. I notice that you were working with the md device, though, so it seems you weren't actually doing the repair outside the array as I originally thought. In that case, with a disabled disk, you would have been repairing the emulated disk. Let's let @johnnie.black comment on this new information.
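     For reference, the webUI check in Maintenance mode runs roughly the equivalent of this against the md device, which is what keeps parity in sync (device number is an example):

         xfs_repair -n /dev/md1   # check only, makes no changes
         xfs_repair /dev/md1      # actual repair; add -L only if it tells you to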
  6. I actually keep a hard copy and print a new one when I change disks.
  7. As I said, this was fixed in 6.8, so if you had corruption perhaps it happened earlier. I'm not sure how to fix it since I haven't had it myself. I'm sure you could fix it by deleting appdata for plex and starting over, but I don't know whether anything less drastic would work. Maybe there is something on the plex forums.
  8. It's not a question of which disk appdata is on, but of how you specify the path in the docker mapping. /mnt/cache/appdata doesn't involve the user shares; /mnt/user/appdata does, even if the share is completely on cache.
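     As a sketch, the only difference is the host side of the volume mapping (container and image names here are just examples):

         # direct disk path, bypasses the user share layer:
         docker run -d --name plex -v /mnt/cache/appdata/plex:/config someimage/plex
         # goes through the user share layer:
         docker run -d --name plex -v /mnt/user/appdata/plex:/config someimage/plex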
  9. The reason I ask is that there were reports of database corruption on earlier versions that seemed to be related to user shares, since the corruption didn't affect people who had appdata specified with a disk path. Supposedly fixed in 6.8.
  10. I thought about that but didn't bother to spell it out. @ani3114 You can also remove that statistics.sender.plg since it has no purpose without the preclear plugin.
  11. So this might have happened when you were on an earlier version of Unraid?
  12. If you have the complete backup, all you need is a new install and the config folder from the backup to get going just as before.
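      Roughly, the restore is just this (paths are examples, assuming the new flash was made with the USB Creator and both it and your backup are accessible):

          # overwrite the fresh config folder with the one from your backup
          cp -r /path/to/backup/config /path/to/new/flash/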
  13. Most people do not have failing flash drives. I have been using the same one for over 6 years. I recommend a brand name USB2 flash drive 4-32GB capacity, physically large enough so it doesn't overheat. And always use it in a USB2 port.
  14. Do you know your disk assignments? Do you have a copy of your license key? If you haven't made any changes to the disk assignments since then we can get those from the diagnostics you posted here: https://forums.unraid.net/topic/82725-log-partition-full/?tab=comments#comment-767391
  15. Worth a try and really nothing to lose since Previous Apps will get it all back.
  16. Obviously, having a flash backup that can only be accessed after you have successfully booted from flash is not the best plan. I have mine go to an NTFS-formatted, automounted Unassigned Device, so I can just plug it into Windows if I need to. Another possibility is a UD-mounted share on another computer. Neither approach is perfect, though, since if those can't be mounted for some reason, the backup goes to a path in RAM instead.
  17. Put the flash drive in your PC and let checkdisk run on it. While it's there, make a backup. Then reboot the server into memtest and let that run for a while.
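      On Windows that's something like (drive letter is an example):

          chkdsk E: /f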
  18. As far as I know it still reports the size incorrectly for different-sized disks. I don't know whether that affects Minimum Free for cache or not. Do you know how Minimum Free works? It is important even if the reported size is correct.
  19. Since you haven't visited since making that first post in the thread, I really hope you haven't been trying to fix this yourself.
  20. And the syslog is from after the reboot, of course, so we can't see why the disks were disabled. Connection issues would be my first guess, so those should be checked before attempting any further fixes.
  21. I should have thought to at least check that one also.
  22. You should upgrade your Unraid. The version you have is 1.5 years old and, if for no other reason, it doesn't give as much diagnostic information for us to work with. Go to Settings - Docker, disable the service, delete the docker image, and recreate it. Then the Previous Apps feature on the Apps page will install your dockers just as they were.
  23. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=781601
  24. You should have asked for advice before doing anything if you were unsure what to do, and it seems you were. But from your description it isn't obvious you have done anything to make things worse, though that is perhaps by accident. Since you have dual parity, you should be able to rebuild both the disabled parity(1) and the disabled disk23.

      You mentioned running xfs_repair on a data disk that wasn't disabled. You should always capture exactly the command used and the results so you can post them.

      You also mention running xfs_repair on the disabled data disk, which you somehow mounted yourself. This is a bit more complicated, and perhaps I have misunderstood you. Normally you should never work with array disks outside the array, or you will invalidate parity. In this case, since the disk was disabled and being emulated, and will have to be rebuilt anyway, it doesn't invalidate parity. But it was also a waste of time, since it is the emulated disk that should have been repaired if it needed repair; rebuilding will just put the disk back the way it was before the repair you did outside the array. Again, you should always capture exactly the command used and the results so you can post them. That would have clarified exactly what you did, since I might have misunderstood.

      SMART reports for both disabled disks look OK. You have too many disks for me to examine them all. Do any of your disks show SMART warnings on the Dashboard?
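      For next time, tee is an easy way to capture exactly what was run and what it printed (device and output path are examples):

          xfs_repair -v /dev/md23 2>&1 | tee /boot/xfs_repair_disk23.txt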