trurl

Moderators
  • Posts: 44,361
  • Joined
  • Last visited
  • Days Won: 137

Everything posted by trurl

  1. Looks like everyone missed the most important part of your problem. Don't cache the initial data load. Cache just gets in the way, since there is no way to move data to the slower array as fast as you can write it to cache. And trying to do a parity check at the same time as loading data will make both the writing and the checking slower, since those operations compete for the disks. Some people even wait until after the initial load to install parity. A rough back-of-envelope comparison is sketched after this list.
  2. There is no stable 6.8 release yet and you don't mention which Release Candidate you are running. Instead of syslog you should always go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post. It includes syslog and many other things.
  3. Have you read either of the threads you posted in?
  4. Drives spinning down when not in use is normal and a feature of Unraid; since each drive is independent, they don't all need to spin all the time. The first part of your post doesn't really seem to be about spun-down drives, though. And if you did a hard reset, then you should have gotten a parity check for an unclean shutdown when you started again, but you didn't mention it. Did you cancel that parity check?
  5. Go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
  6. But you should install cache before setting up any dockers or VMs, since you don't want to let them get created on the array. Building parity is really only about the size of the parity disk; it doesn't matter at all how many data disks there are or how much data is on them. But each data disk must be read completely (even the "empty" parts) and reliably in order to build parity (or to rebuild a data disk, which is basically the same operation). See the parity sketch after this list.
  7. https://forums.unraid.net/bug-reports/prereleases/how-to-install-prereleases-and-report-bugs-r8/
  8. Perhaps this should be closed as a bug report and moved to General Support.
  9. The reason I ask is because you have reported this as an RC6 bug. In order to diagnose we need better information, including whether or not this is an issue confined to this release.
  10. So it is possible you were already having this problem before you installed RC6?
  11. Was this working differently before RC6?
  12. The next time you reboot and before it glitches, go to Tools - Diagnostics and attach the complete diagnostics zip file to your NEXT post.
  13. Since you didn't mention it, I have to ask. Did you
  14. You have completely filled your cache, and possibly it is corrupt. And your appdata share is cache-no, with some of it on cache and some of it scattered across the array. You actually have a fairly large cache, so it shouldn't be a problem, but you let things get out of hand somehow and then maybe changed appdata to cache-no trying to fix it. Stop all writes to your server until you get this fixed. Leave appdata cache-no for now, until you can make some room on cache. You have some other shares currently with files on cache. These are all cache-yes (good) and anonymized in the Diagnostics as D--------------s M---a S---f. Run mover to get all of these moved to the array, then when mover completes post new diagnostics. A quick way to keep an eye on cache usage is sketched after this list.
  15. I don't think there is any reason to assume parity would be valid with whatever combination of disks you try to start with, so ultimately New Config to create a new array with whichever disks you want and let it rebuild parity. You could include the UD rescue copy of disk2 in the new array, and similarly the rescue destination of disk1. If disk4 is truly OK, and the UD rescue copy of disk4 is truly the same, then it wouldn't matter which of these disks you include in the new array. Not sure why you would think the 2 different "disk4" are the same, though. The original went through a bad rebuild, and the rescue tried to get what it could from that bad rebuild, so I would be surprised if they are identical at the bit level (a chunk-by-chunk comparison is sketched after this list). If the original disk itself has no SMART issues then maybe it would make sense to use it as the destination for rescuing disk1.
  16. What can happen is that if you have something writing to /mnt/disks/whatever, but the disk isn't actually mounted there, then those writes go to RAM. A simple mount-point check is sketched after this list.
  17. I see I did misunderstand. Need to put my glasses on when reading my phone.
  18. Those are already available in the cache pool: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/#comment-480421 And support for multiple pools is planned for Unraid 6.9
  19. You can go directly to the correct support thread for any of your dockers by clicking on its icon and selecting Support.
  20. Even with single parity, assignments matter. You must never assign a data drive to a parity slot. Assign all disks as data and none as parity, then start the array. All disks except parity should mount; in your case you would have 2 and only 2 unmountable. Come back for more advice and post your diagnostics: go to Tools - Diagnostics and attach the complete diagnostics zip file to your next post. Since you don't know which parity is which, you will have to rebuild both.
  21. Possibly the flash drive has disappeared. Put it in your PC and let checkdisk run on it. While it is there, make a backup. Make sure to use a USB 2 port for booting from the flash drive.
  22. Sounds like it could just be the normal write speed of parity. Have you tried Turbo Write? https://forums.unraid.net/topic/50397-turbo-write/ The difference between the two write modes is sketched after this list.
  23. Maybe I misunderstand you. Do you mean you want the same computer that is asleep to wake up and run a script to wake itself up? That doesn't make much sense, does it?
  24. Your appdata and system shares have some files on the array instead of cache. This will cause dockers (and VMs) to keep array disks spinning and will also impact their performance due to parity. Ideally these shares should be completely on cache and stay there. Do you have any VMs? Then:
      • Go to Settings - Docker, disable and delete the docker image.
      • Set the appdata share to cache-prefer and run mover.
      • When mover completes, go to Shares - User Shares and click Compute All, then wait for the result. What you want to see is that the appdata and system shares have all of their contents on cache and none on any array disk (a command-line version of this check is sketched after this list).
      • Set the appdata and system shares to cache-only.
      • Go to Settings - Docker and enable it again to recreate the docker image on cache where it belongs.
      • Then Apps - Previous Apps will reinstall your dockers just as they were.
  25. Just noticed your syslog says you had an unclean shutdown and the resulting parity check is underway. Did you reboot from the webUI? You must always reboot or shutdown from the webUI. Unless Unraid can stop the array before the power is removed or the system reboots, you will get a parity check. Unraid records the stop/start status of the array on flash, so it is also possible this was caused by some problem that prevented Unraid from writing the status to flash. So, did you reboot/shutdown from the webUI, or did you just use the reset/power switch?
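The back-of-envelope numbers behind item 1, as a minimal Python sketch. The sizes and speeds are assumptions for illustration only, not measurements from the poster's hardware; the point is that once the data set is larger than the cache, mover still has to write everything to the parity-protected array, so caching the bulk load saves little or nothing.

```python
# Back-of-envelope sketch with assumed sizes and speeds (adjust for your hardware):
# caching a bulk load only defers the slow step, because mover still has to
# write everything to the parity-protected array afterwards.
TOTAL_TB = 8        # size of the initial load (assumption)
CACHE_TB = 1        # usable cache size (assumption)
SSD_MBPS = 450      # write speed to cache (assumption)
ARRAY_MBPS = 60     # parity-protected array write speed (assumption)

def hours(tb: float, mbps: float) -> float:
    """Time in hours to move tb terabytes at mbps megabytes per second."""
    return tb * 1e6 / mbps / 3600

direct = hours(TOTAL_TB, ARRAY_MBPS)
# With cache: the first CACHE_TB lands fast, but mover then has to drain it
# to the array, and the rest of the load is at array speed anyway.
with_cache = hours(CACHE_TB, SSD_MBPS) + hours(TOTAL_TB, ARRAY_MBPS)
print(f"direct to array: {direct:.1f} h, via cache: {with_cache:.1f} h")
```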
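A toy illustration of the parity point in item 6. Unraid's single parity is an XOR across all data disks at the same offset, which is why every block of every data disk, used or not, has to be read to build parity, and why rebuilding a lost disk is essentially the same pass. This is a conceptual Python sketch, not Unraid's actual implementation.

```python
# Toy illustration (not Unraid code): single parity is a bytewise XOR across
# the same offset on every data disk, so building it requires reading every
# block of every data disk, whether or not it holds files.
from functools import reduce

def build_parity(data_disks: list[bytes]) -> bytes:
    """XOR all data disks together, position by position, to produce parity."""
    size = max(len(d) for d in data_disks)
    # Smaller disks count as zero beyond their end, up to the parity size.
    padded = [d.ljust(size, b"\x00") for d in data_disks]
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*padded))

def rebuild_disk(parity: bytes, surviving: list[bytes]) -> bytes:
    """Rebuilding a lost disk is the same operation: XOR parity with the rest."""
    return build_parity([parity] + surviving)

disks = [b"\x01\x02\x03", b"\x0f\x00\xff", b"\x10\x10\x10"]
parity = build_parity(disks)
assert rebuild_disk(parity, disks[1:]) == disks[0]
```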
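For item 14, a small hedged sketch for keeping an eye on cache usage while mover drains it. It assumes the pool is mounted at /mnt/cache (the Unraid default) and uses only the Python standard library; the 50 GB threshold is an arbitrary example.

```python
# Hedged sketch: quick check of how full the cache pool is, e.g. before and
# after running mover. Assumes the pool is mounted at /mnt/cache.
import shutil

usage = shutil.disk_usage("/mnt/cache")
percent = usage.used / usage.total * 100
print(f"cache: {usage.used / 1e9:.1f} of {usage.total / 1e9:.1f} GB used ({percent:.0f}%)")
if usage.free < 50e9:   # arbitrary threshold for this example
    print("cache is nearly full - stop writes and run mover")
```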
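For item 15, a hedged sketch of how one could actually test whether the original disk4 and its UD rescue copy are bit-identical rather than assuming it. The device paths are placeholders, reading raw devices needs root, and both sources should be unmounted (or mounted read-only) while comparing.

```python
# Hedged sketch: compare two drives (or image files) chunk by chunk to see
# whether they really are bit-identical. The device paths are hypothetical;
# reading raw devices needs root and the disks should not be in use.
A, B = "/dev/sdx", "/dev/sdy"   # e.g. original disk4 and the rescue copy
CHUNK = 16 * 1024 * 1024        # 16 MiB per read

def differs(path_a: str, path_b: str) -> bool:
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        offset = 0
        while True:
            a, b = fa.read(CHUNK), fb.read(CHUNK)
            if a != b:
                print(f"first difference near byte offset {offset}")
                return True
            if not a:           # both sources exhausted at the same point
                return False
            offset += len(a)

print("identical" if not differs(A, B) else "not identical")
```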
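For item 16, a minimal sketch of the guard a user script could add before writing under /mnt/disks/: if the Unassigned Devices disk is not actually mounted there, the writes land in the RAM-backed root filesystem instead. The path name is a hypothetical example.

```python
# Hedged sketch: before a script writes under /mnt/disks/<name>, make sure a
# real device is mounted there; otherwise the data lands in the RAM-backed
# root filesystem and silently eats memory. The path below is an example.
import os
import sys

target = "/mnt/disks/backup_drive"   # hypothetical Unassigned Devices mount

if not os.path.ismount(target):
    sys.exit(f"{target} is not a mount point - refusing to write to RAM")

with open(os.path.join(target, "backup.log"), "a") as fh:
    fh.write("backup started\n")
```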
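For item 22, a toy model of the two write modes the Turbo Write thread compares. With the default read/modify/write, each write costs a read and a write on both the target disk and the parity disk; with reconstruct ("turbo") write, the other data disks are read instead and data plus parity are written without the read-before-write, which is usually faster when all disks are spun up. The XOR arithmetic below is conceptual, not Unraid code.

```python
# Toy model (not Unraid code) of the two parity-update strategies, using
# single XOR parity on one "sector" per disk.
def xor_all(values):
    out = 0
    for v in values:
        out ^= v
    return out

# read/modify/write (default): read old data + old parity, write both back.
def rmw_update(old_parity: int, old_data: int, new_data: int) -> int:
    return old_parity ^ old_data ^ new_data

# reconstruct/"turbo" write: read the *other* data disks, write data + parity.
def turbo_update(other_disks: list[int], new_data: int) -> int:
    return xor_all(other_disks) ^ new_data

disks = [0x11, 0x22, 0x33]      # current contents of three data disks
parity = xor_all(disks)
new_value = 0x44                # overwrite the sector on disk 0

# Both strategies arrive at the same parity; they differ only in which disks
# have to be read along the way.
assert rmw_update(parity, disks[0], new_value) == turbo_update(disks[1:], new_value)
```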
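For item 24, a hedged sketch of checking from the command line where the appdata and system shares actually live, the same breakdown the webUI's Compute All shows. It assumes the standard Unraid mount points /mnt/disk1, /mnt/disk2, ... for array disks and /mnt/cache for the pool.

```python
# Hedged sketch: report which disks physically hold parts of the appdata and
# system shares. Assumes array disks at /mnt/diskN and the pool at /mnt/cache.
import glob
import os

def share_bytes(root: str) -> int:
    """Total size of all files under root, skipping anything unreadable."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass   # file vanished or unreadable; skip it
    return total

for share in ("appdata", "system"):
    for base in sorted(glob.glob("/mnt/disk[0-9]*")) + ["/mnt/cache"]:
        path = os.path.join(base, share)
        if os.path.isdir(path):
            print(f"{share}: {share_bytes(path) / 1e9:.2f} GB on {base}")
```

If either share still reports anything on an array disk after mover finishes, that is what keeps those disks spinning.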