BiGs

Members · 14 posts

  1. I'm also experiencing this in mc, just copying files from array to array. It does the same thing as mover: locks up and maxes the CPU, and even the mc transfer freezes every 5 seconds or so and pauses for a few seconds.
  2. Hey. I've been having similar issues from day one of adding a cache, though I only recently purchased Unraid and started on 6.7. My system becomes unusable while mover is running: any transfers slow to nearly nothing, and in some FTP cases they time out completely and fail. I run Shinobi as a Docker container with a CCTV system on 24h recordings, and I'm getting recording black spots during the scheduled mover window. I troubleshot this a bit myself by moving the cache disks off the motherboard SATA ports and onto the SAS/SATA ports of my RAID card with the rest of the array disks, thinking it might save unnecessary I/O and CPU overhead to keep it on the PCI-e slot/RAID card instead of routing it via the motherboard. It did improve, in that some sort of recording happens now instead of nothing or 0-byte files being written, but the normally 15-minute recording blocks are still being interrupted and split into chunks of that 15-minute block with minutes missing. Mover also kills any GUI communication while it's running. I can't see much in the way of mover settings for troubleshooting this, but a speed limit or something might be good (a throttled manual copy is sketched after these posts). I'm a bit of a layman with this stuff, so I don't understand what's suggested in the above posts; just thought I'd post my issues on this subject too. Edit: Maybe an important note is that I run a two-parity-disk array with two SSDs in a cache pool (so maybe a higher-than-standard CPU requirement for mover).
  3. I also have a 24-slot Norco case with SAS connections. Backplanes can fail like cables, and sometimes they have a bit of play that prevents a good connection. Next time, I would try shutting down, reseating the missing disk, and booting up again. Occasionally I have a disk missing on bootup and this fixes it. If you are running through a SAS controller, you probably have to JBOD/pass-through the disk again (as I do) manually via the card's BIOS, or Unraid never sees it. You probably know this, and it looks like you fixed it, but I thought I'd mention it.
  4. All copied over. Removed disks 3 & 13-18. Parity drives are rebuilding now. I'll surface-test these ones later and just keep some spares. I think my problem is I've only ever used hardware RAID, with very rigid methods for acting on failed drives; this is the first time I've gone the software way. But it's good, it gives a heap more options and fail-safes. Thanks for the help. BiGs
  5. Yup. It's struggling on the last couple hundred GB. It does about 15 seconds at around 40MB/s, then stops, throws some read errors, then does another 15-30 seconds' worth or so and repeats. Seems to be slowly chugging through it though. And it was disk 15 actually, I misread; disk 16 is a refurb I just bought, so I could have taken that back.
  6. Yeah, good idea. I've only just purchased Unraid and was getting the hang of it by adding and pre-clearing disks etc. Good idea to remove a few at least and use them as hot-swap spares. I had 26 of these drives once upon a time; these older non-NAS drives just drop like flies now. I've transferred 1TB since. It is dragging parity from the empty disks too, so yeah, probably good to remove most of them. I've got read errors on disk 16 now too, so I'll pull most of the empties out after removing them and test them on my desktop (a surface-test sketch follows after these posts).
  7. Good idea. I will follow your advice: I will copy the data to another PC, then remove disk3, then copy it back to /mnt/user/share and let Unraid deal with distribution again. Cheers. p.s. I'm confirming I did set the global share settings to include all disks but disk3, as per all the shares (the emulated disk3 doesn't show up in include or exclude).
  8. Ah ha. Gotcha. Yeah, I see the disk shares in the FTP directory tree. I'll probably wait for this mover run to finish now, and then I'll see if I can just move everything off that disk back onto the share again, but this time with all disks included but disk3; it will redistribute everything, skipping the excluded disk. Makes sense. I wonder if I should turn off cache for this. Thanks for helping a nubcake out. I'll let you know how it goes.
  9. Ok, I think I understand now. So Unraid replaces the missing disk with a mounted emulated disk holding the same data, derived from parity? Goodo. Lucky I asked here, or I guess I would have lost the data on that drive if I'd tried my steps above. So even though disk3 is missing from the inclusions, I check all the others on all shares and then start the mover?
  10. Hey J. Thanks for the fast reply. Screenshot attached. I question this step as the drive is gone; even the RAID card BIOS comes up with something connected but a blank SN. I don't have the option of including or excluding this disk3 anymore, which made me question that step. Perhaps it can be ignored if I'm removing a failed drive, and I do the rest of the steps and rebuild the data from parity instead? I just want clarification that my steps above will work, or am I missing something? Cheers. BiGs.
  11. Hey once again. Unraid 6.7. I have a 20x 3TB drive array consisting of 18 data disks and 2 parity disks. I just had a drive give a heap of read errors and then go offline; after a few restarts and RAID card pokes, it's not coming online at all. I do not want to get another old 3TB drive, and I have only filled around 7 or 8TB of the 50TB+ available. The disk that failed (disk3) was one of the disks with data, so I want to just reduce the disk count and rebuild parity. I read this: https://wiki.unraid.net/Shrink_array#The_.22Remove_Drives_Then_Rebuild_Parity.22_Method and I think I need clarification on the first step. What does it matter how share data is assigned after a full failure? I can't move the data off the disk now... So I was simply going to screenshot my config, go to Tools > New Config and reset the array, assign all drives to their same positions except leave the disk 18 slot blank and assign the drive that was disk 18 (by SN) to the disk 3 slot, then start the array without "Parity is already valid" checked, and then rebuild parity if it doesn't auto-start for me. Will the above work OK for me? I won't lose any data this way? Do I really have to start the array again and include/exclude disks etc.? Thanks for the help. BiGs.
  12. You're right, single mode is what I want. I made my way over to the cache balance form with the options command box (the link seems to only be available from the dashboard page of the GUI). I copy-pasted the option and nothing happened; I then copy-pasted it via Notepad and still nothing happened. I then manually typed it as per the FAQ note and it immediately started doing work (strange, aye). So I guess I'll wait for it to do its thing and maybe reboot after to see the 600GB (the command shape is sketched after these posts). Cheers Con. p.s. I went into detail with the balance fix for the sake of the help log, for others.
  13. Ah, thanks mate. Yeah, I figured a mirrored backup across two different-sized drives is impossible, after reading back my post. I just let Unraid do its thing without any intervention. I actually wanted to set up the drives as RAID0 in the first place; I'll do that now and I'm sure it will work fine again. Cheers. Also, forgot to mention I'm on v6.7.
  14. Hey peeps. I have an 18-drive array with 2 parity disks. I added a 120GB SSD cache disk and set my Dockers and shares to use the cache, no problem. I then added another SSD cache disk of 480GB and assigned it to the cache (the cache pool jumped to 600GB), and it changed to RAID1 automatically after about 15 minutes or so (the cache pool dropped to 300GB). The system says everything is fine, but when I hit that 112GB mark on the cache pool now (previously it just bypassed the cache onto the main array disks until the daily mover schedule kicked in), the whole array suddenly goes unwritable: SMB, FTP, and Docker filesystems all cannot write and throw errors. Unraid still says everything is fine, nothing is mentioned in the log, and it still says I have 190GB-odd of cache space left. When I hit the manual move-now button, 30 seconds later everything is writable again. It's done this twice now (I've only hit that GB mark during a day twice), so I've deduced it down to the cache pool causing the problem. I'm not sure if I have done something wrong in setup or if this is perhaps a bug (see the space-accounting sketch after these posts). I don't mind dropping the cache down to the 480GB alone and seeing if the problem persists. If I have 120GB & 480GB SSDs in a cache pool but the size says 300GB, am I OK to just stop the array, unassign the 120GB, and start the array again? Or do I need to Yes/No all the shares etc. first and move everything off the whole pool, like a sole cache disk removal/swap? Any other suggestions I can try? (I'm not sure where to find the cache pool settings to change to striped mode.) Cheers, Ben
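
On post 2's wish for a mover speed limit: I'm not aware of such a setting in 6.7, but as a stopgap you can move files off the cache by hand with a bandwidth cap so the disks aren't saturated during recording windows. A minimal sketch, assuming a share named Share on a pool mounted at /mnt/cache and disk1 as the target (both names hypothetical); rsync's --bwlimit is in KB/s:

    # Throttled manual "mover": copy cache contents to a data disk at ~20 MB/s,
    # then delete source files that transferred cleanly.
    rsync -av --bwlimit=20000 --remove-source-files /mnt/cache/Share/ /mnt/disk1/Share/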
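
On post 6's plan to test the pulled drives on a desktop: one common approach (a generic sketch, not from the thread) is a SMART long self-test followed by a read of the results; /dev/sdX is a placeholder for the actual device:

    smartctl -t long /dev/sdX    # start a long self-test (can take hours)
    smartctl -a /dev/sdX         # afterwards, check the self-test log and the
                                 # reallocated/pending sector attributes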
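
On post 12's balance command: the post doesn't quote the exact FAQ text, but a btrfs conversion of the data profile to single is generally of this shape (pool path /mnt/cache assumed; whether metadata stays raid1 depends on the FAQ's exact flags):

    # Convert the pool's data profile to "single"; runs in the background.
    btrfs balance start -dconvert=single /mnt/cache
    btrfs balance status /mnt/cache    # check progress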
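
On post 14's numbers: this reads like the usual btrfs raid1 gotcha rather than a clean bug. raid1 keeps two copies of every block on two different devices, so with a 120GB + 480GB pair the usable space is capped by the smaller device at roughly 120GB (about 112GiB, which lines up with the point where writes failed), even though a naive total/2 gives the 300GB the GUI shows. A way to see the real allocatable space, assuming the pool is mounted at /mnt/cache:

    # "Free (estimated)" here accounts for the raid profile, unlike a raw df.
    btrfs filesystem usage /mnt/cache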