FrozenGamer

Members
  • Posts: 368
Everything posted by FrozenGamer

  1. I am running an LSI 9207-8e card with two cables plugged into two of the following chassis: https://www.servethehome.com/sgi-rackable-3u-16-bay-se3016-sas-expander-chassis-forum-deal/ What I am looking for is something rackmount-style that can handle up to 30 drives (fewer is OK), with a decent power supply included. I normally just run cables from a PCIe HBA so I can switch machines easily but leave all the drives in the chassis (all but the SSD cache). It doesn't have to be quiet. What I want is something with faster throughput but all the simplicity of what I have: quicker parity checks and data rebuilds. On 128 TB a parity check takes 1 day 7 hours, and I have seen up to a week when rebuilding 2 drives at the same time (a rough throughput estimate follows this list). Used is probably best, since it should come with most everything and be cheaper. Price range: reasonable; I don't mind spending some money, say $200-$1000. I do have quite a few 8 TB Seagate Archive drives, which I am sure are a big factor in my speeds, but I would consider starting a new server with WD white-label 8 TB drives shucked from Easystores. I am pretty sure I don't want a tall tower with a bunch of 5-in-3 drive cages in it. Also, I have a lot of experience building PCs but not a ton of experience with servers. This would be running Plex, SABnzbd, Radarr, Sonarr, maybe Deluge. Any suggestions or examples on eBay of proven, commonly used large-drive-array hardware are appreciated. If something comes with the mobo/CPU/RAM etc., it isn't completely out of the question.
  2. Did you end up getting this? Is it working OK? I am shopping around for something similar to upgrade my server.
  3. I first tried changing the file as johnnieblack suggested in the thread I posted above; that didn't work. After that I tried creating a new bootable USB stick and copying the config files over from my backup, and that did work (a rough restore sketch follows this list). As far as I can tell everything is working OK. I will mark this topic solved, but I'm left curious what might have caused this failure.
  4. See the attached screenshots: one is how it booted the first time after shutdown and the next is the second boot. I am not sure how much this will apply? I also happened to download diagnostics prior to the reboot, on the off chance that there are clues in it. I just made a backup of my USB stick onto my Windows machine, and I think I have it set to back up in Unraid. I also tried booting without my external data drives, with just the USB stick and the SSD in the tower; no luck yet. Safe mode got the same thing, so I will try booting the USB on a different computer next. tower-diagnostics-20190221-1347.zip
  5. OK, sounds like I should move the files off the drive and then exclude it, or just move whatever gets put back onto it later. Thanks for the clarification!
  6. Yes, but be aware that split settings override include/exclude settings. OK, so it would potentially split some files in a folder, but not put them on the excluded drive, right?
  7. This would be just to keep the disk almost empty until I plan on following these instructions in the future; otherwise the mover script will probably put more files on it: https://wiki.unraid.net/Shrink_array#For_unRAID_v6.2_and_later The first step is a little confusing to me: "Make sure that the drive you are removing has been removed from any inclusions or exclusions for all shares, including in the global share settings." The way I read it, when I am actually ready to remove the drive this step means: global share settings have no exclusions, and disk 4 (the drive) is removed from all inclusions. For the individual shares, disk 4 should be removed from each share's inclusions, but exclusions would stay at none. I would have thought that the global share settings alone would do it.
  8. If I set the global share settings to exclude one disk of the array, will the following happen: no further writes to that disk until I change it back to exclude none? In particular from the mover script, because I don't see many other ways it would start filling up under normal circumstances. Is there any danger in leaving a disk excluded for a week or two? The purpose is to pull the mostly empty disk in a shrink-array operation, without losing parity, sometime in the next few weeks. (I cleared the data, but don't want more to fill up before I have time to remove the disk; a quick way to check that it stays empty is sketched after this list.)
  9. It looks like I am having problems with Sonarr importing files from the Drone Factory folder. When episodes are missing I download them elsewhere and import them via the Drone Factory folder. About a week or so ago I ran fix docker permissions on my server, and I assume that may have caused the problem. Expecting that this might be caused by folder permissions, I set the drone folder to chmod -R 0777, and when that didn't work I set Sonarr's "Permissions" settings to a file chmod mask of 0777 and a folder chmod mask of 0777 (roughly the commands sketched after this list). Nothing I have tried has fixed it; is it a problem in the destination directory? SABnzbd downloads appear to be processing correctly as far as I can tell. Or is this happening to others as well? https://github.com/Sonarr/Sonarr/issues/2929 This is the error I am getting:
     ImportApprovedEpisodes: Couldn't import episode /drone/showname.episode.etc.mkv: An empty file name is not valid
     System.ArgumentException: An empty file name is not valid.
       at System.IO.DirectoryInfo.CheckPath (System.String path) [0x0005d] in /build/mono/src/mono/mcs/class/corlib/System.IO/DirectoryInfo.cs:496
       at System.IO.DirectoryInfo..ctor (System.String path, System.Boolean simpleOriginalPath) [0x00006] in /build/mono/src/mono/mcs/class/corlib/System.IO/DirectoryInfo.cs:61
       at System.IO.DirectoryInfo..ctor (System.String path) [0x00000] in /build/mono/src/mono/mcs/class/corlib/System.IO/DirectoryInfo.cs:55
       at (wrapper remoting-invoke-with-check) System.IO.DirectoryInfo..ctor(string)
       at NzbDrone.Common.Extensions.PathExtensions.IsParentPath (System.String parentPath, System.String childPath) [0x00060] in M:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Common\Extensions\PathExtensions.cs:92
       at NzbDrone.Common.Extensions.PathExtensions.GetRelativePath (System.String parentPath, System.String childPath) [0x00000] in M:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Common\Extensions\PathExtensions.cs:60
       at NzbDrone.Core.MediaFiles.EpisodeImport.ImportApprovedEpisodes.GetOriginalFilePath (NzbDrone.Core.Download.DownloadClientItem downloadClientItem, NzbDrone.Core.Parser.Model.LocalEpisode localEpisode) [0x0006f] in M:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\MediaFiles\EpisodeImport\ImportApprovedEpisodes.cs:177
       at NzbDrone.Core.MediaFiles.EpisodeImport.ImportApprovedEpisodes.Import (System.Collections.Generic.List`1[T] decisions, System.Boolean newDownload, NzbDrone.Core.Download.DownloadClientItem downloadClientItem, NzbDrone.Core.MediaFiles.EpisodeImport.ImportMode importMode) [0x00263] in M:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\MediaFiles\EpisodeImport\ImportApprovedEpisodes.cs:105
  10. Thanks for taking the time to comment and try to help! At this point I am running a server with 22 8 TB drives, mostly Seagate Archive. Everything is really slow when it involves writes. I also need to look into better hardware, because I have a bottleneck in my backplane/expander (if that is the right term). I need to find something on eBay that will keep my drives cool and has a bit more modern bandwidth from the drives to the box. A parity check takes about 29 hours; a rebuild takes about 7 days. I am going to post a question asking for buying suggestions once I find the appropriate part of the forum.
  11. No, it was going to take quite a while; I had left it all night while I slept and it was only 1/3 of the way through the first rsync command. I have pretty slow writes to parity: 10-15 MB/s.
  12. Edit: I am going to go back and manually delete all the duplicate files with binhex-krusader on disks 2, 1, and 4. I was able to discern what was incomplete and delete the destination files in that case, and the source files otherwise. I was really lucky that it was all large files being transferred and that I didn't choose to scatter to all disks. If someone wants to post an easy way this could have been undone, versus manually, please do (one possible approach is sketched after this list); otherwise disregard my question. Original question: if I press stop on a long scatter move operation, will unBALANCE delete the files that have been successfully moved so far (415 GB out of 3.17 TB set to transfer), or do I have to go back and manually delete all the files on the source? (Edit, in answer to my own question: I have pressed stop, and I now have 500 GB of duplicate files; as far as I can tell it didn't delete any source files.) Is there an easy way to go back and delete the source files that I was copying, or how do I go about doing this? I am reading that it first copies, then deletes. Does the delete normally happen at the end of each completed rsync command, or at the end of all of the rsync commands queued?
  13. Thanks Johnnie. I am going to try the DiskSpeed docker first, at 10% increments on the disks and then smaller; perhaps that will highlight a problem disk. I am also interested in finding new hardware, a storage rack that supports 20 to 30 disks but is newer and faster, to increase the speed of rebuilds etc. Price would be a consideration, within reason. If anyone has suggestions please feel free to post them, or should I post this question in a different part of the forum?
  14. Is it safe to do test rebuilds? I.e., start one for a while, see if the slowdown problem goes away, and if not, stop the rebuild, shut down the array, and put the original disk back in, using process of elimination? I assume turning off the mover schedule would be necessary. I am guessing it is not one of the 4 TB drives, because the problems continued through the 5 TB mark and even after it, but there were no random slowdowns at all from 6 TB to 8 TB.
  15. Latest update: the rebuild will be done in about 10 hours, for a total of a little over 7 days. I can no longer view my log; I get this error: "Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 134037536 bytes) in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(418) : eval()'d code on line 73". It looks like I have filled up my log filesystem's max of 128 MB. The result of df -h /var/log is:
     Filesystem      Size  Used Avail Use% Mounted on
     tmpfs           128M  128M     0 100% /var/log
     I have shut off all dockers, disabled Cache Dirs, and am just hoping that everything is OK after the rebuild finishes late tonight. As far as I can tell I can't reach the shares via Samba at the moment, but the Unraid webGUI is working fine. I'm just waiting for the rebuild to finish, then instructions on what to do after. So far I have: it may all be fine after a reboot; check cables after shutdown before rebooting. Can or should I clear/reset the log in the meantime so that it continues to log (a hedged way to do that is sketched after this list), or is it best just not to touch anything until it's done?
  16. For what it's worth, turning off the Cache Dirs plugin seems to have stopped the errors, and all the drives that aren't currently in use are spun down. I wonder if the continuous scans could have been a factor in the slowdown spikes. It also seems that there were no slowdowns past 6 TB even before I turned off the plugin, while slowdowns continued after 4 TB and to some degree after 5 TB.
  17. Pretty sure I fixed that; the diagnostics posted a few hours ago should have the errors. I'm at the CrossFit gym and will be home to double-check in a few hours. Thanks for your help!
  18. Not sure I understand the last comment. The two disks being rebuilt are well past the sectors that contained data, given that I replaced a 4 TB with an 8 TB, right? And with no errors in the system until almost 6 TB? I didn't repair the disks in another system. I trust that you are right, I just don't understand. About 1 day 12 hours to complete the rebuild.
  19. Also, I just noticed that the 5 TB is showing up in Unassigned Devices. I think I may have updated Unassigned Devices shortly before the problem, which could possibly have caused it to go offline, out of the array? And it appears that data is missing from the array in its current state: a TV show which was there is no longer there when I go to the array over the network. This was likely written to disk 18 just prior to the disk rebuild. Perhaps the increasing errors are from Dynamix Cache Directories refreshing every so often.
  20. OK, this is really weird. I got 37 read errors on disk 18, which is the Toshiba 5 TB, and the count is increasing: 1797, having started 15 minutes ago. But I am almost to 6 TB in the disk rebuild now. There is no temperature reading on disk 18 either; perhaps it has shut down? Since I have two 8 TB parity drives and have replaced two 4 TB drives with 8 TB ones and am rebuilding them: if I were to pull the shut-down 5 TB out of its hot-swap bay and reinsert it, would I jeopardize my data? I would assume so. This is really confusing. I am rebuilding zeros from 4.01 TB to 8 TB on the two new drives, so the rebuild is technically done? And they are all zeros because I precleared them beforehand, right? I have attached another diagnostics file that is a few minutes old; the problem starts at 10:05 on 1/22/19. I also attached a screenshot, if that helps show what is happening. I have 16 TB of parity drives (2x8), but if I had 3 drives fail, say a 4 TB and two 5s, I would lose data, right? It would seem that normally I would have lost data, if it weren't for the fact that all this is happening past the points on the drives that hold data. I.e., if I am at 6 TB on a 4 TB-to-8 TB rebuild, is my array safe with the drives that are currently being rebuilt with zeros on the last half? tower-diagnostics-20190122-1047.zip
  21. The average is about 1 day and 4 hours, and a little over 6 days for changing the parity drives to 8 TB. It might be useful to know what your situation is. Ideally, the number of disks won't affect parity syncs etc., and the time would primarily depend on the size and speed of parity, but in the real world there can be controller bottlenecks, for example. I have two of these 16-bay racks https://www.servethehome.com/sgi-rackable-3u-16-bay-se3016-sas-expander-chassis-forum-deal/ hooked up to an LSI SAS 9207-8e 6 Gb/s 8-port HBA, one cable each directly to the card, not daisy-chained. The computer it's hooked up to is a Gigabyte Z97X-SLI-CF mobo, i5-4590 @ 3.3 GHz, 16 GB RAM. I know that there are some speed limitations on the expander (see the back-of-envelope numbers after this list), but I would assume that shouldn't be causing the intermittent slowdowns in the parity rebuild? I.e., is it normal for an Unraid server to have severe slowdowns for periods of time during a data rebuild?
  22. How many drives are in your array, and what type of connection to the PC: backplane, caddy, PCIe card? Maybe I need to upgrade my equipment (although I am not sure there is an affordable 20-30 drive solution?).
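
A rough throughput estimate for the parity-check numbers in post 1, back-of-envelope only. A parity check reads all disks in parallel, so its duration is bounded by the largest disk rather than total array capacity; 128 TB finishing in 1 day 7 hours means each 8 TB drive averaged roughly:

```bash
# 8 TB read end-to-end in 31 hours:
# 8,000,000 MB / (31 h * 3600 s/h) ~= 72 MB/s average per drive
echo $((8000000 / (31 * 3600)))   # prints 71
```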
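For post 3, a minimal sketch of the flash rebuild that worked, assuming the stock Unraid flash layout; /path/to/flash-backup is a placeholder for wherever the backup of the old stick lives:

```bash
# After recreating a bootable stick (Unraid USB Creator or make_bootable),
# copy the saved config (disk assignments, shares, plugin settings) onto it.
# /path/to/flash-backup is hypothetical; point it at your actual backup.
cp -r /path/to/flash-backup/config/* /boot/config/
# super.dat inside config/ holds the disk assignments; the license .key file
# is tied to the flash drive's GUID, so reuse the same physical stick if possible.
```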
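For post 8, a quick way to confirm the excluded disk really is staying empty while waiting to shrink the array; disk4 is my guess at the slot, based on post 7:

```bash
# Anything the mover (or anything else) drops onto the disk shows up here.
du -sh /mnt/disk4
find /mnt/disk4 -type f | head
```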
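For post 9, roughly the permission reset attempted, plus the ownership reset that, as I understand it, the Unraid New Permissions tool also applies; /mnt/user/drone is a placeholder for the actual Drone Factory path:

```bash
# Blunt-instrument version from the post: world-readable/writable everything.
chmod -R 0777 /mnt/user/drone
# The Unraid permissions tools also reset ownership to the standard share
# owner (nobody:users), which Docker containers typically run as:
chown -R nobody:users /mnt/user/drone
```

Note that the "An empty file name is not valid" exception in the linked GitHub issue points at a path-handling bug rather than permissions, so this may not be the whole story.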
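For post 12, one possible way the duplicate cleanup could have been done without clicking through Krusader. This is a sketch, assuming the standard /mnt/diskN mounts and that a relative path present on more than one data disk means an interrupted copy-then-delete move left a duplicate behind:

```bash
cd /mnt
# Strip the diskN/ prefix so identical share paths compare equal, then
# print any relative path that appears on more than one disk.
find disk1 disk2 disk4 -type f | sed 's|^disk[0-9]*/||' | sort | uniq -d
# Compare sizes (or checksums) of each pair before deleting the incomplete copy.
```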
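For post 15, a hedged way to free /var/log without rebooting, assuming the syslog is what filled the tmpfs (check first):

```bash
# See what is actually eating the 128 MB tmpfs:
du -sh /var/log/* 2>/dev/null | sort -h | tail
# Truncate in place rather than deleting, so processes holding the file
# open keep logging instead of writing to an unlinked inode:
truncate -s 0 /var/log/syslog
```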
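For post 21, the back-of-envelope numbers on the expander limit. This assumes the SE3016 is a 3 Gb/s (SAS1) expander chassis with a single x4 wide link back to the HBA, which is my reading of the ServeTheHome article linked above:

```bash
# 4 lanes * 3 Gb/s ~= 12 Gb/s raw, or about 1200 MB/s usable per chassis
# after encoding overhead, shared by all 16 drives during a check/rebuild:
echo $((1200 / 16))   # ~75 MB/s per drive
```

A shared-link ceiling like that would explain a flat ~70-75 MB/s average, though not by itself the intermittent dips.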