trurl (Moderators)
Posts: 44,350 · Days Won: 137

Everything posted by trurl

  1. You might try some recovery software running on Windows. If that is all it had on it, those files should be easy enough to download again.
  2. Have you tried eliminating that from the network to see if you still have problems? Are you sure this isn't a problem with your ISP?
  3. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  4. Worked for me. None of your array disks are mounted. Have you formatted them yet?
  5. According to that screenshot and diagnostics (and previous diagnostics, now that I look again), docker.img is now /dev/loop4. Don't know why /dev/loop2 is still hanging around. Maybe try rebooting.
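If you want to see for yourself which loop devices are attached and what backs them, the standard util-linux tool works from the Unraid console (a sketch; what it reports will depend on your setup):

```shell
# List every attached loop device together with its backing file.
# docker.img should normally appear once; a stale /dev/loop2 next to
# /dev/loop4 would suggest a leftover attachment that a reboot clears.
losetup -a
```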
  6. Doesn't look like you deleted the corrupt docker.img
  7. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  8. That is not the way to fix that problem. You almost certainly have an app writing to a path that isn't mapped. You should always go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  9. The problem is often a path specified within the application not matching a mapping.
  10. Why have you allocated 100G for docker.img? 20G is usually much more than enough, but I see you have already used 39 of the 100G. I have 17 dockers and they are using less than half of a 20G docker.img.

      Making docker.img larger won't fix problems with filling and corrupting it; it will only make it take longer to fill. And your docker.img is indeed corrupt. You will have to recreate it (set it to use only 20G) and reinstall your dockers using the Previous Apps feature.

      But reinstalling your dockers won't be enough, since you obviously have one or more of your docker applications misconfigured. The usual reason for filling docker.img is an application writing to a path that isn't mapped to Unraid storage. Typical mistakes are specifying a path within the application using a different upper/lower case than in the mappings (Linux is case-sensitive, so /downloads is different from /Downloads), or specifying a relative path (one not beginning with /).

      Probably the best idea after you get your cache fixed is to recreate docker.img at 20G and, instead of reinstalling all your containers, see if we can figure out what you have done wrong one application at a time.
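The case-sensitivity point is easy to demonstrate on any Linux box (a minimal sketch using throwaway directories under /tmp, standing in for container mappings like /downloads vs /Downloads; the file name is made up):

```shell
# Two directories differing only in case are completely separate on Linux.
tmp=$(mktemp -d)
mkdir "$tmp/downloads" "$tmp/Downloads"   # both succeed: distinct directories
touch "$tmp/downloads/movie.mkv"          # hypothetical file written by an app
ls "$tmp/Downloads"                       # empty: the other casing is a different place
rm -r "$tmp"
```

Inside a container the same thing happens: if the app writes to /Downloads but only /downloads is mapped, the data lands inside docker.img instead of on Unraid storage.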
  11. You should always Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  12. Yes.

      Not clear these are causing your problem, but your config/docker.cfg has 12 for the docker.img size while system/df.txt shows 20. 20 is probably the more reasonable setting. More importantly, config/docker.cfg has the docker.img path as /mnt/user/docker.img. It isn't clear which disk, if any, this would be on, since it isn't technically within any user share. The "standard" setup is to put docker.img in a (cache-prefer) user share named "system".

      Also, it looks like you were getting parity sync errors on a non-correcting parity check after an unclean shutdown. You must run a correcting parity check to correct those. The only acceptable result is exactly zero sync errors, and until you get there you still have work to do. Those diagnostics are a few days old now, so I don't know whether you have fixed those or not.
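For reference, the settings in question live in config/docker.cfg on the flash drive. A sketch using a mock copy of the file (the key names here are my recollection of Unraid's format, so verify against your actual /boot/config/docker.cfg):

```shell
# Build a throwaway mock of docker.cfg to show the lines to look for.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DOCKER_IMAGE_FILE="/mnt/user/docker.img"
DOCKER_IMAGE_SIZE="12"
EOF
# On a real server: grep -i docker_image /boot/config/docker.cfg
grep -i 'docker_image' "$cfg"
rm "$cfg"
```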
  13. If you started the array with dockers / VMs enabled, but with no cache installed, then you probably have had your docker / VM related shares (appdata, domains, system) recreated on the array.
  14. That sentence wasn't well written. Many native English speakers wouldn't write it that well. 😉
  15. I only see one comment in that thread about converting, and the wording doesn't suggest to me that plex is doing the conversion.
  16. Doesn't look like Plex supports HEIC yet https://forums.plex.tv/t/heic-heif-support-in-photo-libraries-new-photo-format-for-iphones/195514/44
  17. And another thing, since the title here is about a CPU upgrade: many of us have made extensive hardware upgrades without changing anything about Unraid at all. I recently replaced motherboard, RAM, and CPU. Unraid just booted up as if nothing had happened, with all data and everything working just as it was.
  18. No reason to preclear unless you just want to test them. If they are already working well, there is no need to test. As for formatting, you must let Unraid format a disk after it is already in the parity array or pool (cache).
  19. If you don't start from scratch with your flash drive, make sure you disable dockers and VMs (in Settings - Docker and Settings - VM Manager) until you get cache installed. You want those running from cache, and if you enable them before cache is installed they will wind up on the array, and you will have to go to some extra trouble to get them onto cache where they belong.
  20. I can't think of any advantages. Unraid is essentially a clean install each time it boots. If you really want to get rid of all your webUI settings I guess you could start over, but you can just change any you want and keep the rest without starting over.

      Are you really planning to remove disks, or just reformat them? If you change disk assignments you have to New Config and rebuild parity, but reformatting is different, and parity is maintained during a format.

      No point in balancing, and if you reload user shares with the default Highwater allocation they won't be balanced anyway. And Highwater is the default for a good reason: it is a compromise between using all disks (eventually) and not constantly switching disks just because one disk temporarily has more free space. Most Free allocation will be the most balanced in a way, but it has other disadvantages that can affect performance and keep disks spun up.

      I've never bothered with btrfs in the parity array, so maybe someone else will have an opinion.

      Depends on your use, of course. That would be plenty for dockers and VMs, and some user share caching. Do you cache a lot of user share data every day? I say every day because mover is intended for idle time, and daily in the middle of the night is the default for good reason.

      The latest beta allows multiple pools. I have 2x500G SSD as a btrfs raid1 "cache" pool for user share writes, and a 256G NVMe as a "fast" pool for dockers and VMs. Way more than I need, but I just happened to have them.
  21. Read, Write, and Error counts will all reset when you reboot (or you can reset them at Main - Array Operation - Clear Stats). Your parity disk will have to be rebuilt to get it enabled again.
  22. This is the first mention of an actual possible problem and not just a discussion about upsizing disks. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  23. The CRC error count won't reset, but you can acknowledge the current count on the Dashboard page by clicking on the warning, and it will warn you again if the count increases.
  24. Don't know why you mention the iso share, I never did. I did mention the system share, and that is the most important one to fix.

      Cache-only is ignored by mover, so setting those shares to cache-only will not move them to cache. Cache-prefer is the only setting that can move them to cache. The reason they weren't moved to cache when you had them at prefer is that mover can't move open files, so in addition to setting them to prefer, you have to go to Settings and disable both Docker and VMs, then run mover. If you corrupt docker.img by filling it, then of course docker can't work. I never suggested setting those to "only", as noted above.

      appdata contains the "working storage" of each of your containers. Perhaps you have even more than that in mind when you ask about "configs". The settings for each container that you make in the Add / Edit Container page are stored on flash. Both appdata and flash (and libvirt) can have scheduled backups using the CA Backup plugin.

      Go to the Docker page and click the Container Size button, then post a screenshot of the result.
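The "mover can't move open files" point can be seen with standard Linux tooling (a sketch using a throwaway file and /proc; on the actual server you would be looking at files under your cache pool instead):

```shell
# Hold a file open the way a running container would, then show the handle.
tmp=$(mktemp)
tail -f "$tmp" & pid=$!
sleep 1
ls -l /proc/$pid/fd | grep "$tmp"   # the open descriptor mover would refuse to touch
kill "$pid"; rm -f "$tmp"
```

Until Docker and the VM service are stopped, files like docker.img and libvirt.img show up exactly like this, which is why mover skips them.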