snowmirage

  1. I take it all back. For some reason that first disk, even though it's not that much fuller than the other disks, took 24+ hrs. Restarting it today, it's already on disk 14. Or maybe most of the things that needed to be fixed were on the first disk 🤷‍♂️ Anyway, if someone else sees it take "forever", just let it do its thing, it'll get there.
  2. I have 24 disks in Unraid (and another 6 cache drives) and decided I wanted to encrypt the drives. Following this guide from our beloved neighborhood space invader (thank you, you absolute legend!), I hit some errors/warnings saying I should run Docker Safe New Perms. I started it, got busy with other stuff, came back the next day, and realized it had still not finished processing the 1st disk... A few hours into the day, ~noon or so, going on over 24 hrs, it moved on to disk 2 of 24 (well, I guess of 21: 1 "hot spare" and 2 parity drives). I have ~53 TB of data and about 18 TB of free space all considered. Am I really looking at ~21 days of letting this run? And needing to make sure I don't close the browser window with the open window the tool mentions?
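The ~21-day figure is just straight-line extrapolation from the first disk; a tiny sketch of that worst-case math (purely illustrative, and it assumes every data disk is as slow as disk 1, which won't hold if the other disks have fewer files to fix):

```python
# Back-of-envelope worst case for Docker Safe New Perms across the array.
# Assumption: every data disk takes about as long as the first one did.
hours_per_disk = 24        # observed: disk 1 took roughly a full day
data_disks = 24 - 2 - 1    # 24 bays minus 2 parity drives and 1 hot spare
total_days = hours_per_disk * data_disks / 24
print(f"worst case: ~{total_days:.0f} days")  # ~21 days
```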
  3. Took me a while to get time to jump back to this, but over the last few days I did some tests of all the drives in my Unraid server. I was able to write to all the drives at the expected rates (i.e. more than a minimum of 50 MB/s; in the rare cases below that, it seemed likely something else hit the drive during the test). I removed the docker container for urbackup and removed its appdata folder. Uninstalled the app from my desktop. Then reinstalled the docker container. I removed my previous "backup" share, or whatever I had called it, made a new share called "backup", and set its min free space to 300 GB. And configured the urbackup container to point there when I installed it. I can't imagine this has anything to do with it, but I'll mention it just in case: my appdata dir is on an unassigned device (a 2 TB NVMe SSD), so I have the container's config path set to R/W Slave instead of just Read/Write, as Fix Common Problems suggested to me (all my dockers are like that). But the "/mnt/user/backup/" path mounted as /media in the urbackup container is just set up as "Read/Write". I reinstalled the urbackup client on my desktop and I'm seeing the same issue again: sub-1 MB/s writes. I'm a bit stumped here. Stepping away from the problem for a bit; if I think of anything else I'll report back. Hopefully someone might have some ideas. I have to guess there is something wrong with my Unraid array/config, but I'm not even sure where to begin.
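For anyone wanting to spot-check raw per-disk write speed the same way, a minimal dd test looks something like this (the mount point and file size are examples; `oflag=direct` bypasses the page cache so you measure the disk rather than RAM):

```shell
# Write 1 GiB of zeros straight to one array disk and report throughput.
# /mnt/disk1 is an example path; repeat for each disk you want to check.
dd if=/dev/zero of=/mnt/disk1/speedtest.bin bs=1M count=1024 oflag=direct status=progress
rm /mnt/disk1/speedtest.bin   # clean up the test file afterwards
```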
  4. That would be pretty slick; I look forward to testing it out. Since your advice yesterday I've been going back through and trying to test all the drives. I've found several that are getting that "speed gap" notice. I've tried a few times now to test them with "Disable speed gap detection" checked, and I still see reads start at 90-100+, drop to 60-70, then repeated "retries", even with that option checked.
  5. That seems to have done it. The test for that drive finished after following your suggestions. To be honest I haven't looked at a SMART report closely in years and need to go refresh my memory on what "bad" really looks like.
  6. I suspect this is an issue with my system and not with this container, but I'm hoping someone might be able to point me in the right direction here. I've had issues with very slow writes to my array (sub 1 MB/s). At first I thought it was just the type of writes: lots of very small files from another CA docker container (urbackup). But after nuking all its shares and existing backup data and starting over, thinking there was a config issue with that backup app, I decided to run this speed benchmark to check all the drives. I have 24x 3.5" drives in the 2 TB to 6 TB range in the array. When I start the benchmark it seems to keep getting "stuck" testing Disk 3 (sdp). Most tests took a few minutes (maybe 5?), but this drive has been stuck at 36% for over half an hour, and I don't see any reads being reported on that drive on the Main tab. If I reopen the webUI it goes back to the main page, where I can start another benchmark; doing so hangs in the same spot. I've attached the diag from the main page below. diskspeed_20201118_182405.tar.gz I'm afraid I have a bad drive or something, and worse, if that's the case, that Unraid hasn't given me any warning about it. *edit* To rule out as many background processes and disk I/O as possible, I stopped all of my other Docker containers and VMs (well, 2 of the 4 VMs I actually paused, but that shouldn't make a difference in this case, I don't think).
  7. Thanks for the advice. I'll try starting again from scratch. I know around the time I was first trying this I was also setting up timemachine and some other stuff, and ended up filling my cache drive. That was where I had appdata and all my VMs... they were not happy about that... I've since moved all my appdata and VMs to a 2 TB NVMe SSD, but it's possible some leftover config, either server- or client-side, is still messing stuff up. So I'll nuke it all and start from scratch. I also found a docker container in CA that will test each drive's reads and writes. I'll turn off everything else and give that a try, as well as just try to move some files to the array via a Windows share. That should give me some comparisons between urbackup speeds and what I can verify the system can do. It may end up pointing out some bigger system problem... though I can't imagine what atm.
  8. For those that have completed backups with this from a Windows client to Unraid: how long did it take? And what was the docker app pointing at to store the backups: unassigned device, Unraid cache, or Unraid array? I have this set up and it appears to be working, but my 500 GB SSD on my Windows host has been running for nearly 24 hrs now and it's still listed in the webUI as "indexing". I'm seeing writes to the array in the <1 MB/s range. I have it set to a share that doesn't use the cache.
  9. I'm up to 70 TB. Building a custom case has been a bit of a nightmare... 100% due to my crap design lol. Oh well, at least it looks nice and works well when it doesn't need maintenance. Images from last weekend's replacement of a bad SATA cable... More pics of the initial build are up here: https://linustechtips.com/topic/353971-an-introduction-to-project-egor-the-never-ending-story/ I've since replaced the motherboard. The old SR-2 died folding for a cure to COVID.
  10. Think I found it! My brain was thinking only the paths referencing the unassigned device needed to be changed. You were right: it was another path in each docker, even ones not assigned to the unassigned device mount point, that needed to be changed. For some reason that never occurred to me. Thanks for pointing me in the right direction!
  11. Thanks Squid, I have been checking there. Here's what I'm seeing with advanced view switched on and after clicking the "more settings" drop-down. And when I check the AppData Config Path setting, RW/Slave is already set.
  12. I'm not sure if I'm misunderstanding the errors the Fix Common Problems plugin is reporting here, or if it's possibly giving false reports. I moved my appdata + docker and VMs to a new NVMe Unassigned Devices drive yesterday. I followed this guide here: https://forums.serverbuilds.net/t/guide-move-your-docker-image-and-appdata-to-an-unassigned-drive/1478 All seemed to work great. This morning Fix Common Problems is reporting errors for just a few of my docker containers, not all of them. I had no idea what this slave option was, did some searching, and found this. Then I went through each of the docker containers it was erroring about and checked the mount points that were using the unassigned device. But every one that I checked (all six docker containers above) seems to already have the RW/Slave option set? For example, here's what I see configured for my Sonarr docker. Might anyone have an idea what I'm missing here?
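For reference, the template's "RW/Slave" access mode corresponds to Docker's bind-mount propagation flags on the CLI. A hypothetical equivalent (container name and paths are examples, not my actual config) would be:

```shell
# "RW/Slave" in the Unraid template = read-write bind mount with slave
# propagation, so mounts that appear on the host after the container starts
# (e.g. an Unassigned Devices disk) still propagate into the container.
docker run -d --name sonarr \
  -v /mnt/disks/nvme/appdata/sonarr:/config:rw,slave \
  -v /mnt/user/media:/media:rw \
  linuxserver/sonarr
```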
  13. Thanks, I did find that. Looking through this page https://wiki.unraid.net/The_Analysis_of_Drive_Issues I guess my best bet to validate all drive cables once I rebuild this is to search the syslog from a fresh boot and track down all "error" or "exception" messages I find.
  14. Could someone explain a good way to make sure a given bay is connected correctly? If it wasn't, as in this case, is there a particular line I could look for in the syslog on boot-up? For example, if I reseat all the SATA cables for the attached drives, and I don't see "SError" in the syslog, and the drives show up in the Unraid UI, is that an indication all is good? Short of the drive being good, that is; I suppose it could still have SMART errors etc. Because it's such a nightmare to take this thing apart, I want to take a known good drive and check all the SATA connections.
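The syslog scan described above can be done with a single grep after a fresh boot (the path and pattern here are assumptions and just a starting point, not an exhaustive list of ATA link errors):

```shell
# Look for ATA/SATA link trouble in the syslog: SError reports, link
# resets, and error/exception messages usually point at cabling or the
# backplane rather than the drive itself.
grep -iE 'SError|hard resetting link|ata[0-9]+.*(error|exception|failed)' /var/log/syslog
```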
  15. Well, I did a clean shutdown and reseated that drive, but I still see that error in the syslog. I guess the other thing I can check is the SATA cables. Unfortunately some bloody idiot (me.......) designed this custom case to be the most pain-in-the-ass thing to get access to you can possibly imagine. See https://linustechtips.com/topic/353971-an-introduction-to-project-egor-the-never-ending-story/ I've since put in a new motherboard and CPU; that old SR-2 gave its last breath folding for a cure for COVID. Ever since I put it back together, some things in the "basement" of that case have been... "wonky". I guess I'm going to have to finally take the time to tear it all apart to try to find a bad SATA cable. Wish me luck!