Traxxus

Posts posted by Traxxus

  1. Change your settings at https://www.google.com/settings/security/lesssecureapps so that your account is no longer protected by modern security standards.
    Not saying this is an appropriate way of dealing with it, but I guarantee it will solve the immediate issue.

     

    Perhaps set up a "throwaway" Google account to auto-forward unraid messages, and reduce the security settings on it to allow unraid to use it.

     

    This is what I did; surely you guys aren't using your real email for things like this?
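
    If your notification emails go out through ssmtp (setups vary, yours may use something else entirely), the throwaway-account approach boils down to something like this, with the account name and password as placeholders:

    # /etc/ssmtp/ssmtp.conf (only the relevant lines; account is a placeholder)
    mailhub=smtp.gmail.com:587
    AuthUser=my-throwaway@gmail.com
    AuthPass=the-throwaway-password
    UseSTARTTLS=YES
    FromLineOverride=YES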

  2. Looks good to me.

     

    Couldn't tell you exactly why it slows down on post-read, but that's normal (remember I told you post-read would take twice as long?).

     

    Here are my results from a couple of 6 TB drives, which show something similar.

     

    Preclear Successful

    ... Total time 54:43:12

    ... Pre-Read time 13:55:07 (119 MB/s)

    ... Zeroing time 12:46:33 (130 MB/s)

    ... Post-Read time 28:00:32 (59 MB/s)

     

    Preclear Successful

    ... Total time 54:50:58

    ... Pre-Read time 14:02:36 (118 MB/s)

    ... Zeroing time 12:46:51 (130 MB/s)

    ... Post-Read time 28:00:32 (59 MB/s)
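
    If you want to sanity-check those averages yourself, it's just drive size divided by elapsed time; for example, the post-read above (rough math, nominal 6 TB = 6e12 bytes):

    awk 'BEGIN { secs = 28*3600 + 0*60 + 32; printf "%.1f MB/s\n", 6e12 / secs / 1e6 }'
    # prints 59.5, which lines up with the 59 MB/s in the report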

  3. I've been having a similar issue: no red balls, but I do have constant sync errors since I added a 6 TB drive. Xenboot and disabling spindown did not help. I'll try moving to all motherboard ports; I did remove some 2 TB drives, so this is possible for now, though I'm in the middle of consolidating/moving data to finish converting to XFS.

    Here is my thread http://lime-technology.com/forum/index.php?topic=36465

  4. Finally completed preclearing the spare 500GB WD Blue (not Black as I thought) drive I found in the drawer. The plan is to use this as my cache, since I am not sure using the SSD as a cache is a good idea; it would most likely shorten its life.

     

    Anyway, here are the logs for the 500GB; is it safe to use?

     

    I guess the duplicated HDD display in unmenu isn't something anyone else sees, so I should just ignore it, right?

    http://i161.photobucket.com/albums/t214/Kostiz/unmenumain_zps1ba49511.png

     

    Cheers

    Kosti

     

    Yeah, just ignore it, it's normal; it's just the physical drive vs. the mounted volume or something.

  5. It's going to take around 46-50 hours for a single pass on a 4TB drive. 5 hours of 9 is not the entire preclear, just the first stage: pre-read, then writing zeros, then post-read (which takes twice as long as the pre-read, FYI).

    Some people do multiple passes, and some have reported drives that pass the first pass and fail the second or third, but I think it's a little overkill, especially with the large drive sizes we have and how long they take. Running 3 passes on a 6TB drive would take over a week (one pass is ~61 hours IIRC), with constant random seeking; that's a lot of (perhaps unnecessary) wear. Someone here once made an analogy along the lines of: it's akin to driving a car cross country just to see if it will make the 20-minute commute to work. For anything smaller than 4 TB I would do multiple passes; above that I do one. Up to you really.

     

    Also, it's not a bad idea to do a long SMART test after the array is up and running, just to have a good baseline recorded. I also try to run long SMART tests once a month, just before the scheduled parity check.

    Here is a script Joe provided to generate SMART reports automatically with dates in the filenames.
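
    If you can't find it, the gist is something like this (a rough sketch, not Joe's actual script; the output folder and device glob are placeholders to adjust for your system):

    #!/bin/bash
    # Dump a full SMART report for every sd* device into a dated file.
    out=/boot/smart_reports            # assumed location; put it wherever you like
    mkdir -p "$out"
    for dev in /dev/sd?; do
        smartctl -a "$dev" > "$out/$(date +%Y%m%d)_$(basename "$dev").txt"
    done
    # To start a long self-test by hand: smartctl -t long /dev/sdX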

  6. Is there any way to change the reiser timeouts or force it to do housekeeping in the background?

     

    Does unraid support BTRFS, and any idea whether XFS is better than BTRFS?

     

    FWIW, Unraid currently defaults to XFS, and I've decided to stick with it since it will probably have the most support/knowledge available.

     

    Consensus seems to be that BTRFS can potentially be great/better if promises about future features are met, so it depends on whether you want to rely on that or not.
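
    If you're partway through converting and want to see which array disks are still ReiserFS and which are already XFS, something like this should show it (assuming the usual /mnt/diskN mount points):

    df -T /mnt/disk*    # the Type column shows reiserfs vs xfs for each data disk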

  7. Something I'd like to add, since I'm sure I'm not the only one who had trouble with it: when adding a docker such as NZBDrone, after you're done with the docker part and you're setting up NZBDrone within its webgui, /mnt/user/TV Shows will not work. The correct path to use is whatever the container volume is, which in that particular case was /tv.

    Same for CouchPotato: the movies folder is just /movies.

     

    It makes sense, in the context of it being a docker container, that the original path wouldn't work, but I wasn't really thinking about it that way.
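
    In docker terms, the mapping that makes this work looks roughly like the lines below; the image and container names are just placeholders for whatever template you're actually running:

    # Host path on the left, container path on the right; the app only ever sees /tv.
    docker run -d --name nzbdrone \
        -v "/mnt/user/TV Shows":/tv \
        -v /mnt/user/appdata/nzbdrone:/config \
        some/nzbdrone-image
    # (the /config line is just an example of an appdata mapping; check your template)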

  8. Any luck on that updated guide? :D

     

    I threw together a quick one. 

     

    http://seandion.info/unraid/unraid-dockers/

     

    Thanks, this helped a lot.  :)

     

    The only hiccup is Container Port and Host Port.

     

    For NZBDrone I use 8084; the default is 8989 or something. I changed just the host port and I couldn't access it; I changed both the container port and the host port, and it does work. But Sabnzbd seems to work differently: if I change both I can't access it at all, only if I change just the host port to 8081 and leave the container port at the default of 8080.

     

    Edit: I think I figured it out; it's because when I restored the settings, it also restored the port settings, which was making the container port wrong.
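
    For what it's worth, underneath it all this is just docker's -p host:container flag, and the container side has to match whatever port the app actually listens on inside the container; normally you only change the left-hand number. With the ports above, and assuming the apps are still on their internal defaults, it would look roughly like this (image names are placeholders):

    docker run -d --name nzbdrone -p 8084:8989 some/nzbdrone-image   # browse to :8084, app listens on 8989 inside
    docker run -d --name sabnzbd -p 8081:8080 some/sabnzbd-image     # browse to :8081, app listens on 8080 inside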

     

     

     

  9. trurl,

     

    I had one curiosity question for you. I noticed while unloading some addons that, after rebooting, the 'sdX' drive assignments changed. The drive locations did not change, and I was very careful to ensure the drives were still where they should be, parity in particular.

     

    Why do the drive letter assignments change like that?  Since my system is running fine, it clearly is not an issue.  But, I can see where someone who is going to run preclear on new drives after install might end up clearing the wrong drive.

     

    Everything can change and the disk assignments should stay the same, assuming your config file is present and valid. I changed all my hardware before (including the motherboard) and the drives still came up in the right configuration. You can change SATA ports, drive locations, all that stuff, and Unraid won't care, because it uses serial numbers for drive assignments.

     

    You can also change which drive is disk1/disk2, etc. with no problem; I changed mine a couple of times to match the physical location of a drive so I could easily find it in the event I need to.
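
    On the "clearing the wrong drive" worry specifically: before preclearing anything you can double-check which /dev/sdX is which physical drive by serial number, e.g.:

    ls -l /dev/disk/by-id/ | grep -v part    # each serial-bearing id links to its sdX node
    smartctl -i /dev/sdX | grep -i serial    # or check a single drive directly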

  10. @Traxxus

    Not trying to hijack your thread, just trying to keep the data about the 6TB reds together as they are so new.

     

    Np, it's in the right place, no need to have a bunch of threads about it.

     

    I disabled spindown on both the 6 TB Red drives before the parity check I was talking about in the OP completed, so they haven't spun down since then. No errors on the scheduled parity check yesterday. I would like to see if it's related to spindown, so I am going to leave it as is and do another parity check in 2 weeks; if all is good, I will then enable normal spindown (2 hours for me) for those drives and do another parity check 2 weeks after that.

     

    I've also noticed slow parity checks: they take a little over 18 hours, and initial speeds are only ~80 MB/s. I understand having various sizes of drives can slow things down, since the check slows as it gets to the slowest part of each different-sized disk, but it still doesn't make sense that adding 2 larger, denser/faster drives would slow things down in that way. I'm fairly sure the 6TB Reds are on the SASLP, just going by memory on how I ran everything. I may move them to the motherboard, but I don't want to change anything else just yet; I'd like to see if it's really the spindown causing the errors.
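
    If I want to confirm which controller the 6TB Reds are actually on rather than going by memory, the sysfs link for each disk shows the PCI device it hangs off (motherboard SATA vs the SASLP); exact layout varies a bit by kernel:

    ls -l /sys/block/sd*    # the symlink target includes the controller's PCI address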

     

    I completed the parity check with no errors. 

     

    I have migrated all the data from the three 1.5TB drives to the new 6TB and rebuilt the array without them. I am in the process of rebuilding parity now. After about 3-4 minutes it jumped from ~50MB/s to ~110MB/s.

     

    Minus what is already on the 6TB Reds, I have 8 TB of data left (out of ~13 TB total), and that is all on the 2 TB drives. I've been thinking about phasing them out, and have been consolidating data onto as few drives as possible with that in mind, though I'm going to wait for a v6 RC at least so I can format them using one of the new filesystems, maybe, I dunno. I don't remember exactly how old they are, but most have ~50,000 hours, or almost 6 years, on them according to power-on hours in SMART.

     

     

  11. Started a parity check (correcting) yesterday and came in to find it at 46% and 454 errors. I got 4 errors a week or so ago, but I wrote it off as an anomaly; my reading here seemed to indicate it happens and is usually the parity drive.

     

    I ran long SMART tests after that, found no issues, so I forgot about it, and then this. I have made some hardware changes lately: I added 2 more SS-500s, added another breakout cable to my SASLP, and added two 6TB Reds, with a third precleared and on standby.

     

    Specs are in my sig, drives are:

    6TB Red Parity

    6TB Red Data

    5x 2TB WD EARS/EADS

    1x 2TB Hitachi 5k3000

    1x 3TB WD Red

    Cache is 1 TB WD Blue

    Spare 6 TB Red

     

    Included the syslog and SMART reports from today (each file contains the SMART reports for all drives), and also from a week ago (right after the long SMART tests, I think).

     

    20141119_smart_summary.txt

    20141127_smart_summary.txt

    syslog1127.zip
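
    For anyone skimming the attachments, the quickest way to see where the errors landed is to grep the syslog; the exact wording of the md driver's messages varies by version, so cast a wide net, something like:

    grep -iE 'parity|sync|error' /var/log/syslog | less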

  12. I have three 6 TB Reds, precleared and running fine for a month now.

     

    Where are the 8TB and 10TB options?

    Way too expensive, and I don't know if I would even shell out the cash for a 6TB. I don't accumulate data that quickly, so the 3TB I gain from moving my parity to a data drive will last me a while. Also, anything larger than 3TB will be wasted space till I add/change out another data drive to the same size as the new parity.

     

    It's not wasted, it's long-term planning. You only have room for one more drive; if you put a 3 TB drive in there, you either need to make a hardware investment to fit more drives (cages, docks, or a new case + controller card/cables, etc.), or start replacing and removing those drives with bigger ones, at which point the money you spent on that 3 TB drive is wasted. It's a short-term solution, and it will cost you more in the long run. You need to think ahead: you are going to have to start replacing drives soon if you are full on space with 18 TB of storage and only one slot left. Adding another smaller drive now isn't very efficient. Like gary said, you will be paying $110 for a 3 TB drive now, then paying ~$250 later for only 3 TB more, and that will be just your parity drive; you will have a wasted hard drive that has no use in your array, and you'll need to replace yet another just to add more space.
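
    Rough numbers, using the same ballpark prices from this thread (illustration only, not quotes):

    awk 'BEGIN {
        printf "3 TB drive now:          $%d for 3 TB gained  (~$%.0f/TB)\n", 110, 110/3
        printf "6 TB parity swap later:  $%d for 3 TB gained  (~$%.0f/TB, plus a stranded 3 TB drive)\n", 250, 250/3
    }'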

     

    Take me, for example: I had seven 2 TB data drives, plus parity, cache, and a spare, which leaves me with 5 open slots. I added three 6 TB drives, which almost doubled my array size, let me keep all my current drives, and still leaves a couple of slots for the future. I see my data acquisition (at least size-wise) increasing in the future, with moving to HD only, automating a lot of my acquisitions with CP, Sab, and NZBDrone, backing up my and my SO's computers regularly, etc.

     

    Really you need a bigger case though.
