Community Reputation

1 Neutral

About snowmirage

  • Rank
    Advanced Member


  1. Took me a while to find time to jump back to this, but over the last few days I ran tests on all the drives in my unRAID server. I was able to write to all the drives at the expected rates (i.e. a minimum of 50MB/s or more; in the rare cases a drive dipped below that, it seemed likely something other than the test was hitting the drive). I removed the docker container for urbackup and deleted its appdata folder, uninstalled the app from my desktop, then reinstalled the docker container. I removed my previous "backup" share, or whatever I had called it, made a new share called "backup", set its min free sp
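For anyone wanting to sanity-check a drive outside the benchmark container, a quick sequential-write test with `dd` gives a comparable number. This is only a sketch: `TARGET` is a placeholder you'd point at a disk mount like `/mnt/disk1`, and the file size is arbitrary.

```shell
# Rough per-disk write test (run from the unRAID console).
# TARGET is a placeholder; set it to a disk mount such as /mnt/disk1.
TARGET=${TARGET:-/tmp}
OUT="$TARGET/speedtest.bin"

# conv=fdatasync flushes to disk before dd reports its rate, so the
# MB/s figure reflects the drive rather than the page cache.
dd if=/dev/zero of="$OUT" bs=1M count=64 conv=fdatasync

# Remove the test file afterwards.
rm -f "$OUT"
```

Repeating the run a couple of times helps rule out other activity (mover, parity checks, backups) hitting the disk at the same moment.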
  2. That would be pretty slick; I look forward to testing it out. Since your advice yesterday I've been going back through and trying to test all the drives. I've found several that are getting that "speed gap" notice. I've tried a few times now to test them with "Disable speed gap detection" checked, and I still see reads start at 90-100+ MB/s, drop to 60-70, and then hit repeated "retries", even with that option checked.
  3. That seems to have done it. The test for that drive finished after following your suggestions. To be honest, I haven't looked closely at a SMART report in years and need to go refresh my memory on what "bad" really looks like.
  4. I suspect this is an issue with my system and not with this container, but I'm hoping someone might be able to point me in the right direction here. I've had issues with very slow writes to my array (sub-1MB/s). At first I thought it was just the type of writes: lots of very small files from another CA docker container (urbackup). But after nuking all of its shares and existing backup data and starting over, thinking there was a config issue with that backup app, I decided to run this speed benchmark to check all the drives. I have 24x 3.5" drives in the 2TB to 6TB range in the array.
  5. Thanks for the advice. I'll try starting again from scratch. I know around the time I was first trying this I was also setting up Time Machine and some other stuff, and ended up filling my cache drive. That was where I had appdata and all my VMs... they were not happy about that... I've since moved all my appdata and VMs to a 2TB NVMe SSD, but it's possible some leftover config, either server- or client-side, is still messing stuff up. So I'll nuke it all and start from scratch. I also found a docker container in CA that will test each drive's reads and writes. I'll turn off everything
  6. For those who have completed backups with this from a Windows client to unRAID: how long did it take, and what was the docker app pointing at to store the backups (Unassigned Device / unRAID cache / unRAID array)? I have this set up and it appears to be working, but my 500GB SSD on the Windows host has been running for nearly 24 hours now and is still listed in the webUI as "indexing". I'm seeing writes to the array in the <1MB/s range. I have it set to a share that doesn't use the cache.
  7. I'm up to 70TB. Building a custom case has been a bit of a nightmare... 100% due to my crap design lol. Oh well, at least it looks nice and works well when it doesn't need maintenance. Images from last weekend's replacement of a bad SATA cable... More pics of the initial build up here. I've since replaced the motherboard; the old SR-2 died folding for a cure for COVID.
  8. Think I found it! My brain was assuming only the paths referencing the Unassigned Device needed to be changed. You were right: it was another path in each docker, even ones not pointed at the unassigned device mount point, that needed to be changed. For some reason that never occurred to me. Thanks for pointing me in the right direction!
  9. Thanks Squid, I have been checking there. Here's what I'm seeing with advanced view switched on and after clicking the "more settings" drop-down. And when I check the AppData Config Path setting, RW/Slave is already set.
  10. I'm not sure if I'm misunderstanding the errors the Fix Common Problems plugin is reporting here, or if it's possibly giving false reports. I moved my appdata, docker image, and VMs to a new NVMe Unassigned Devices drive yesterday. I followed this guide here. All seemed to work great. This morning Fix Common Problems is reporting errors for just a few of my docker containers, not all of them. I had no idea what this slave option was, so I did some searching and found this
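For reference, the setting Fix Common Problems checks here corresponds to Docker's mount-propagation flag on the volume mapping. A hand-written equivalent of what unRAID's template generates might look like the following; the container name, image, and paths are made up for illustration.

```shell
# Illustrative only: container name, image, and paths are hypothetical.
# ":rw,slave" combines read-write access with slave mount propagation,
# which is what the "RW/Slave" option maps to for paths living on an
# Unassigned Devices mount.
docker run -d \
  --name some-container \
  -v /mnt/disks/nvme/appdata/some-app:/config:rw,slave \
  some-image
```

Slave propagation matters because an Unassigned Devices disk is mounted after the Docker service may have started; without it, a container can end up holding a stale view of the mount point.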
  11. Thanks, I did find that. Looking through this page, I guess my best bet to validate all the drive cables once I rebuild this is to search the syslog from a fresh boot and track down every "error" or "exception" message I find.
  12. Could someone explain a good way to make sure a given bay is connected correctly? If it wasn't, as in this case, is there a particular line I could look for in the syslog on boot-up? For example, if I reseat all the SATA cables for the attached drives, and I don't see "SError" in the syslog, and the drives show up in the unRAID UI, is that an indication all is good? Short of the drive itself being healthy, that is; I suppose it could still have SMART errors, etc. Because it's such a nightmare to take this thing apart, I want to take a known-good drive and check all the SATA connections.
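As a concrete version of the syslog check described above, something like this works from the unRAID console. The sample log lines below are invented for the demo; on a real system you would grep `/var/log/syslog` directly after a fresh boot.

```shell
# Demo against an inline sample; on unRAID, grep /var/log/syslog instead.
cat > /tmp/sample-syslog <<'EOF'
kernel: ata3: SError: { PHYRdyChg CommWake }
kernel: ata3.00: configured for UDMA/133
kernel: ata4.00: ATA-9: WDC WD60EFRX, 82.00A82, max UDMA/133
EOF

# Case-insensitive search for link/ATA trouble; -c counts matching lines.
grep -icE 'serror|error|exception' /tmp/sample-syslog
# prints: 1  (only the SError line matches)
```

A clean boot with no SError lines, plus all drives visible in the UI, is a reasonable (if not airtight) sign the bay and cable are fine; SMART health is a separate check.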
  13. Well, I did a clean shutdown and reseated that drive, but I still see that error in the syslog. I guess the other thing I can check is the SATA cables. Unfortunately, some bloody idiot (me.......) designed this custom case to be the biggest pain in the ass to get access to that you can possibly imagine. See, I've since put in a new motherboard and CPU; that old SR-2 gave its last breath folding for a cure for COVID. Ever since I put it back together, some things in the "basement" of tha
  14. Hmm, I have no idea why that would be 100G. I do have a very large Plex library (running Plex in docker) and it's not impossible that I misconfigured something along the way. Is there a good way for me to reset that? It looks like that is the SSD I recently replaced. I'll try to stop the array, shut down, and check its connection. You mentioned cache space I'm wasting... even before I replaced that drive, my cache on the Main tab of unRAID has always listed 743 GB total space. And double-checking with a btrfs calculator, that seems to be correct. Are you saying that even though its
  15. This morning I found my cache had filled overnight. NZBGet was downloading a bunch of new files. The mover was running, but had been running all night long. I checked the syslog and saw errors about btrfs; I didn't save the post, but searching pointed me to doing a btrfs rebalance after clearing some space on the drive. I did that, then did a clean shutdown and reboot. Started the mover and was getting up to 100MB/s. Hours later it has slowed back to a crawl, between 300 and 800 KB/s. I enabled mover logging and it's been moving the same 3.3GB file for over an hour, so the reported write speeds
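For anyone hitting the same full-cache symptom, the rebalance sequence referenced above generally looks like this. Treat it as a sketch: `/mnt/cache` is unRAID's usual btrfs cache mount, and the `-dusage=75` filter value is just a common starting point, not something from the original post.

```shell
# Sketch of the btrfs recovery steps (assumes /mnt/cache is the cache pool).
# 1. See how much space is allocated vs. actually used.
btrfs filesystem usage /mnt/cache

# 2. After freeing some room, repack data chunks that are <=75% full
#    so btrfs can reclaim the mostly-empty allocations.
btrfs balance start -dusage=75 /mnt/cache

# 3. Watch progress; a balance on a nearly full pool can take a while.
btrfs balance status /mnt/cache
```

A btrfs pool can report free space while all of its chunks are allocated, which is why writes stall until a balance frees up unallocated space.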