FrozenGamer

Members
  • Content Count

    320
  • Joined

  • Last visited

Community Reputation

2 Neutral

About FrozenGamer

  • Rank
    Advanced Member

  • Gender
    Male

  1. So I thought I had it working, but it didn't. Is there a good tutorial for setting this up on Unraid? I assume there is something that I overlooked?
  2. I installed the Docker container and simply changed the default DNS on a few computers: the first entry to the default 192.168.1.202 and the second to 8.8.8.8. It appears to be working. Some questions; forgive them if they are completely ignorant. I didn't see a guide, so I winged it.
     1. Will non-gaming network traffic pass through the lancache server without being cached?
     2. I see some references to complications when the lancache share spans more than one drive. Does that matter much?
     3. When I can afford another SSD (beyond the Unraid cache drive), it seems like a good idea to dedicate an SSD in the Unraid array. I assume I would limit the share to that drive and it would work fine? But data would be written to the cache drive, then moved at night, with parity, to the lancache share limited to that new SSD. Or am I on the wrong path here? Perhaps it would be worth just getting another old machine, throwing a drive in it, installing Linux, and making it a dedicated lancache machine?
     4. Would I want to set the DNS in the router instead, so all traffic goes through the cache, rather than configuring each computer?
     5. Did I miss something in the setup and only think it's working? I only tested with a few smaller games. Or is it really as easy as setting your DNS? What about the gateway; does that need to be set?
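     On question 4: many consumer routers run dnsmasq for DHCP/DNS, and the per-computer change can be made once at the router by handing out the lancache address to every DHCP client. A minimal sketch, assuming the lancache DNS answers at 192.168.1.202 (the address from the post) and that the router exposes a dnsmasq config; whether yours does, and where the file lives, varies by firmware.

```
# dnsmasq configuration on the router (hypothetical placement, e.g. /etc/dnsmasq.conf)
# Hand every DHCP client the lancache DNS first, a public resolver second
# (DHCP option 6, "dns-server"):
dhcp-option=option:dns-server,192.168.1.202,8.8.8.8
```

     The gateway does not need to change for this; lancache only intercepts DNS lookups for CDN hostnames, and all other traffic routes normally.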
  3. 1 - I did my monthly parity check and it took an extra day to complete. It started at normal speed, then got gradually slower and slower, down to 17 MB/s at the end. Here are my last several parity checks; I added another drive between the 2020-02-29 and 2020-03-03 checks.

     Date                  Duration                      Avg speed  Result  Errors
     2020-03-03, 18:34:46  2 day, 18 hr, 34 min, 45 sec  33.4 MB/s  OK      0
     2020-02-29, 00:33:11  1 day, 14 hr, 42 min, 33 sec  57.4 MB/s  OK      0
     2020-01-31, 20:04:17  1 day, 10 hr, 36 min, 38 sec  64.2 MB/s  OK      0
     2020-01-02, 11:04:37  1 day, 11 hr, 4 min, 36 sec   63.4 MB/s  OK      0
     2019-12-02, 10:31:48  1 day, 10 hr, 31 min, 47 sec  64.4 MB/s  OK      0

     2 - Fix Common Problems says I need Unassigned Devices Plus (I have the non-Plus version installed). When I went to check plugins, the Installed Plugins page failed to load over and over, including after a reboot. Now it says at the top of the page that I have a backup server or server settings running, and that some settings/plugins may be different. And now it is failing to load again a few minutes later. (It should be noted that my internet provider is having problems today and connections to some pages are not working at the moment.) I assume that is the reason for the page failures, but I'm not so sure about the backup-settings message. EDIT - it appears that the plugins page is working now that the internet is back. I am attaching diagnostics from before and after the reboot and a one-minute power-down of all hardware. pipe-diagnostics-20200304-0825.zip pipe-diagnostics-20200303-2034.zip
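     A quick sanity check on those numbers: duration multiplied by average speed recovers how much data each run checked, which should match the parity disk size no matter how slow the run was. A minimal sketch, plain arithmetic only, using the two most recent rows:

```python
def seconds(days, hours, minutes, secs):
    """Convert a parity-check duration to total seconds."""
    return days * 86400 + hours * 3600 + minutes * 60 + secs

# (duration, average MB/s) copied from the parity-check history above
checks = {
    "2020-03-03": (seconds(2, 18, 34, 45), 33.4),
    "2020-02-29": (seconds(1, 14, 42, 33), 57.4),
}

for date, (duration, mbps) in checks.items():
    terabytes = duration * mbps / 1e6  # MB -> TB
    print(f"{date}: ~{terabytes:.1f} TB checked")  # both print ~8.0 TB
```

     Both runs work out to roughly 8 TB, which suggests an ~8 TB parity disk: the March check did the same amount of work as February's, just at much lower throughput, so the slowdown is real and not an artifact of the array growing.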
  4. OK, so I have put another drive in slot 14. It was formatted XFS and pulled from the 2nd Unraid system, and it showed as unmountable, so I formatted it. At this point it is formatting as far as I know, but in doing so it is reading from all drives, which is more like a parity check, right? Is that normal in this case? See screenshot; if so, I will mark this thread solved. I assumed that a format would not require reading the rest of the drives, just writing zeros to the new drive. Thanks again everyone for your help.
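     For what it's worth, the read activity is consistent with how single parity works: parity is the bitwise XOR across all data disks, so any write, even writing a filesystem to the new drive, has to update parity, and the other disks may be read to do it. A toy sketch with made-up one-byte "disks" (illustrating the XOR math only, not Unraid's actual implementation):

```python
from functools import reduce
from operator import xor

# Hypothetical one-byte disks; real single parity applies the same XOR per bit
disks = [0b1010_1100, 0b0110_0011, 0b0001_1111]
parity = reduce(xor, disks)

# "Formatting" disk 0 writes new data (here: zeros); parity must change too.
# Incremental update rule: new_parity = parity ^ old_value ^ new_value
old, new = disks[0], 0b0000_0000
parity = parity ^ old ^ new
disks[0] = new

# Parity is again the XOR of all disks, so a lost disk stays recoverable:
missing = reduce(xor, disks[1:]) ^ parity  # reconstruct disk 0 from the rest
assert missing == disks[0]
```

     That is why parity stays valid through a format, at the cost of touching every disk while it happens.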
  5. Thanks Johnnie. Just out of curiosity: if I had formatted, but the mover had not been run (I turned it off for now), would parity still, in theory, be good? I have 21 hours until I have access to a free 8 TB hard drive, so I am just going to add it in slot 14, format it, and then run a parity check.
  6. This screenshot (showing the prompt to format) would be in line with the theory that it would be OK to do the New Config and say parity is valid? Being very cautious here, since I am not 100% sure I would be doing the right thing.
  7. I am a little confused at this point. My assumption is that if nothing was written to the drive, I should be able to mount it and look, right?
  8. I will put the drive in another machine and SMART test it; it may be a while until I can safely do that.
  9. I am not sure if it was unmountable. I think it might have been asking me to format (which I didn't do), but I don't remember. This is my smaller array; I am waiting for an 8 TB drive to be freed up from my 25-disk, 170 TB array. That one still has 5 TB and a lot of 6 TB drives, and I am pulling those and rebuilding to 10/12 TB, so the next drive I pull will be an 8 TB and I will move it to the problem array.
  10. Thanks. Wouldn't the shrink-array method leave the array with no parity protection, versus keeping the ability to lose one drive with the second method? I.e., less risk to data than a full rebuild of parity?
  11. Can I run it for a few days emulating the missing disk, and then add an 8 TB drive when I get one freed up from the other machine?
  12. I shut down the array, added one slot (#14), then added an 8 TB drive which was labeled as precleared. I received SMART errors right off the bat on the precleared drive; my assumption is that this drive is bad. So I stopped the array, and now it treats the new slot (#14) as if it were a missing disk. Is this normal behavior because the drive was labeled precleared, even though the array only ran for a few minutes? Should I just run the array as-is for days, or just remove disk 14? I am assuming that would force a data rebuild. Is there an easy way to go back to the array as it was until I can free up an 8 TB drive (which will probably take 4 days to get)? I have attached a screenshot and diagnostics; look at Feb 25, 11:00 AM for when the problems start. Thanks in advance. pipe-diagnostics-20200225-1137.zip
  13. I updated to 10.15.3 without any problems that I can see.