yitzi

Posts posted by yitzi

  1. 11 hours ago, Frank1940 said:

    If I understand what you are doing, I would suggest this approach. Set up the HDD share (call it Long_Term) and restrict it to the HDDs. Set up the SSD share (call it Short_Term) and restrict it to the SSDs. By using the shell script, you can easily tailor it to copy from the Short_Term share to the Long_Term share. You just have to determine at what age you want to move files from one to the other, to minimize having to hit the long-term storage side for work-in-progress files.

    Hi Frank1940, appreciate your responses to this. Great idea, but that means I can't pass a single share to a docker or VM and have unRAID handle the capacity and performance tiering itself. The logic in the disk-fill allocation would be pretty simple: similar to how the cache setting has "prefer, yes, only", there could be disk groups like "slower, faster" or HDD/SSD. I think with larger 4TB SSDs becoming much cheaper, and HDDs sort of getting left behind at some point, it only makes sense to accommodate SSD array users.

  2. 22 hours ago, Frank1940 said:

    This can probably be done via a BASH shell script that you would have to write. To learn how, try googling something like "how to write a BASH script". Your script would copy 'aged' files to the other 'side'.

     

    By the way, it is my impression that using SSDs as array drives can be a long-term performance issue, as you can't run the TRIM operation on them. In any case, you could set up your 2x 2TB SSD cache drives as a cache pool with data protection. Of course, you would only have 2TB of cache storage in that case. But it would give you the speed you are looking for along with data protection.

    Hi, thanks for the response. As for SSDs in the array: they can be always on and have extremely fast performance for millions of tiny files. We're using this for an NVR; the video files are tiny and stitched together on the NVR side. The benefit over HDDs is huge for random reads.

     

    As for cache, I'm already doing the 2x2TB in a RAID1 for redundancy. But if I add all the SSDs to the cache in RAID10, I lose half the capacity.

     

    I think long term I'm fine with SSDs in the array. We'll replace them as performance is impacted, or remove them from the array, run a TRIM, and return them.

     

    A bash script is likely the best way to do this, but I was hoping for a simpler approach. The issue I have is with disk exclusion on the share. If I exclude the disk and then move old files there, the share won't see them. If I include the HDD in the share, unRAID will try writing to that disk once I hit the fill-up or high-water point.
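
    A rough sketch of what I mean (the share name "nvr", the disk numbers, and the 30-day cutoff are all just assumptions for illustration): move aged files between disks directly at the /mnt/diskX level, so the share's include/exclude settings don't get in the way of the copy itself.

        #!/bin/bash
        # Move files older than 30 days from the SSD array disks (disk2/disk3)
        # to the HDD (disk1), preserving the share's directory structure.
        SHARE=nvr
        for src in /mnt/disk2 /mnt/disk3; do
            cd "$src/$SHARE" || continue
            find . -type f -mtime +30 -print0 | \
                rsync -a --from0 --files-from=- --remove-source-files . "/mnt/disk1/$SHARE/"
        done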

  3. Hi all, so say my unRAID server looks like the following:

    Parity1: 6TB HDD

    Disk1: 6TB HDD
    Disk2: 2TB SSD

    Disk3: 2TB SSD

     

    Cache1: 2TB SSD

    Cache2: 2TB SSD

     

    What I'm looking to achieve is to have the most recent files write to cache for speed, then once a week or so move those files to the array but only to the SSDs, and after a month move the files from the array SSDs to the array HDDs. This way, when reading the files, I get lots of speed, and as files age out they'll be stored on longer-term, cheaper storage. Essentially creating a performance tier and a capacity tier.

     

    As far as I know, reads aren't affected by parity, so reads from the array SSDs should still get good performance.

     

    If I set the share to exclude the slower disk, it'll never write to it, even as files age out. Perhaps I can script moving files from disk to disk? But if the share is set to exclude that disk, it won't see the files, correct? (A quick way to check what lives where is sketched below.)

     

    Any suggestions would be helpful.
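
    To make the layout concrete: as I understand it, a user share is just the union of the same-named top-level folder on each disk, so a quick listing shows which disks actually hold a share's files (the share name "Media" is only an example):

        # Each disk's piece of the share is a top-level folder; the user share
        # is the union of these across the array disks and the cache
        ls -d /mnt/disk*/Media /mnt/cache/Media 2>/dev/null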

  4. 4 hours ago, Leifgg said:

     

    Do you get out-of-memory errors when using Plex? If that is the case, it might be related to transcoding and how the folder mapping is set for the /transcode folder.

    1 hour ago, trurl said:

     

    Default is for Plex to transcode to a folder inside appdata. It is possible to change that in the Plex application, but since he hasn't mapped a folder for transcoding, even if he told Plex to use something other than default, that unmapped folder would be inside docker.img, not in RAM.

     

    Thanks for the responses. I tackled this by using the Open Files plugin and seeing a process for "mono" consuming lots of RAM. I shut down each docker one at a time while monitoring RAM usage. Looks like Jackett was using about 60% of my 16GB. No idea why.
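
    For anyone hitting the same thing, plain ps was enough to spot the hog from the shell (nothing unRAID-specific here):

        # List the top memory consumers, sorted by resident memory usage
        ps aux --sort=-%mem | head -n 10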

     

    I'll reach out on LSIO support page for this. 

     

    Thanks again, this is solved. 

  5. 3 minutes ago, trurl said:

    My guess is a docker has misconfigured volume mappings and so is writing into RAM instead of to an actual storage device. Only /mnt/user, /mnt/disk#, and /mnt/cache are actual storage. /boot is the flash drive, and any other path is in the RAMfs.

     

    Thanks for that. I went over my dockers and there's nothing that isn't pointing to /mnt/user.

     

    See screenshot

     

    [screenshot of docker volume mappings]
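
    In case it helps, the mappings can also be dumped from the command line instead of eyeballing the GUI (standard docker CLI, nothing unRAID-specific):

        # Print host -> container volume mappings for every running container;
        # host paths outside /mnt/... and /boot land in RAM or docker.img
        for c in $(docker ps --format '{{.Names}}'); do
            echo "== $c"
            docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' "$c"
        done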

  6. Hi all, I'm getting out-of-memory errors on my server; it says it's killing off processes but I'm not sure why. I have 16GB of RAM on my server with only a handful of dockers and 1 VM. I followed the instructions in Tips and Tweaks and reduced Disk Cache 'vm.dirty_background_ratio' (%) to 1 and Disk Cache 'vm.dirty_ratio' (%) to 2.
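
    (For reference, this is the equivalent from the command line; these just mirror what the Tips and Tweaks plugin sets, and they revert on reboot:)

        # Flush dirty pages to disk sooner so the write cache stays small
        sysctl -w vm.dirty_background_ratio=1
        sysctl -w vm.dirty_ratio=2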

     

    Diagnostics are attached. Thanks all for the help.

     

     

    unraid-diagnostics-20180129-1550.zip

  7. So it's been running 4 days without issue in safe mode. 

     

    I'll see how it runs for another week or two before concluding it's almost definitely a plugin (dockers and the VM still run in safe mode).

     

    If it's a plugin, what would be the suggestion? Just starting fresh?
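
    In the meantime I can at least see what would need reinstalling. If I remember right, each plugin lives as a .plg file on the flash drive (path from memory, so treat it as an assumption):

        # List installed plugins; removing one .plg at a time and rebooting
        # would be the piecemeal alternative to starting completely fresh
        ls /boot/config/plugins/*.plg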

  8. Hey Frank1940, thanks for your detailed response. I captured the diag file right after reboot. Right now I'm in safe mode with no plugins running to see how well it works.

     

    Dockers:

    Crashplan

    Deluge

    Radarr

    Sonarr

    Plex

     

    Attached is the log file that FCP keeps in its logs folder, along with the recent zips from 10/1 when it last crashed. It takes logs every 30 mins, I think. I've run a memtest for an hour or so. I guess I'll go the 24-hour route.

     

    Thanks,

     

     

     

    FCPsyslog_tail-backup1.txt

    logs.zip

  9. Hey all, I can't pin this on a time frame or anything, but randomly the server becomes completely unresponsive. WebGUI: no go. Can't SSH in either. Even using IPMI, where I can see the console, I type in "root" and it just hangs there without doing anything.

     

    Attached are the diagnostics. I've also been running Troubleshooting Mode with the Fix Common Problems plugin, but don't see anything in there that would cause this.

     

    Any help would be awesome, as I just can't seem to pin this issue down and I'm forced to hard reset the server. 

    unraid-diagnostics-20171002-1846.zip

  10. 11 minutes ago, Squid said:

    A couple of them in there.

     

    Plex ran out of memory, so it started killing off other processes

     

    This one I've never seen before: 

    
    Feb 28 07:45:14 unRAID kernel: Freezing user space processes ... 

     

    Thanks, appreciate the response. That's odd that Plex can do that from a docker. I have an ESXi VM, but unRAID still has 8GB of RAM for itself. Can't imagine Plex using more than that.
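
    If Plex really is the one blowing up, one mitigation I've seen suggested is capping the container's memory in Docker (the container name "plex" and the 4g figure are just examples):

        # Hard-cap the container so a runaway process is killed inside it
        # instead of triggering the host's OOM killer
        docker update --memory=4g --memory-swap=4g plex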

  11. Though there are no pending sectors, this looks like a disk error; this is disk9 (WDC_WD20EFRX-68EUZN0_WD-WCC4M0NV1LTS):

     

    Sep 30 09:55:22 unRAID kernel: ata2.00: failed command: READ DMA EXT

    Sep 30 09:55:22 unRAID kernel: ata2.00: cmd 25/00:08:b0:22:38/00:00:3a:00:00/e0 tag 12 dma 4096 in

    Sep 30 09:55:22 unRAID kernel:        res 51/40:08:b0:22:38/00:00:3a:00:00/e0 Emask 0x9 (media error)

    Sep 30 09:55:22 unRAID kernel: ata2.00: status: { DRDY ERR }

    Sep 30 09:55:22 unRAID kernel: ata2.00: error: { UNC }

     

    Thanks, I'm going to RMA the drive and see if there are still problems.

     

    Does anything seem to indicate a PSU issue or LSI card problem?
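
    Before it goes back I'll grab a full SMART report and run an extended self-test, just to have the evidence in hand (standard smartctl; /dev/sdX is a placeholder for the disk9 device):

        # Dump SMART attributes and the drive's error log, then start a long
        # self-test; re-run the first command later to see the result
        smartctl -a /dev/sdX
        smartctl -t long /dev/sdX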

  12. This disk has been causing a few issues for me. Randomly, unRAID will report errors on my dashboard but not redball the disk.

     

    What's strange is that my parity is getting disabled randomly as well. I've been dealing with this for the past month and can't figure out which disk is the issue, or if something else is.

     

    Right now I'm rebuilding my parity and seeing what logs I can pull up. Attached are my latest syslogs; tons of errors, and I can't make out what the cause is.

    syslog.txt

  13. I agree. I at least want to know what's going on and which drive is the issue. I've already reseated all my drives with new SATA cables. Could it be a power issue? My PSU is 430W, I believe (rough math after my specs below).

     

    Here's my specs:

     

    M/B: ASRock - B85 Killer

    CPU: Intel® Xeon® CPU E3-1241 v3 @ 3.50GHz

    HVM: Enabled

    IOMMU: Disabled

    Cache: 256 kB, 1024 kB, 8192 kB

    Memory: 14 GB (max. installable capacity 32 GB)

    01:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 03)
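
    Back-of-the-envelope on the power question (the per-device numbers are rough assumptions, not measurements, and I'm guessing ~10 drives since there's a disk9):

        # ~25W per 3.5" HDD at spin-up, 80W CPU TDP, ~40W for board/RAM/fans/LSI
        echo "peak ~$(( 10 * 25 + 80 + 40 ))W vs a 430W PSU"   # peak ~370W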

  14. Thanks johnnie ... that's good to know.

     

    yitzi ==> Given Johnnie's results, that means you could do either of the scenarios you asked about ... i.e. use the 2nd parity as a replacement for a failed drive (which I don't see any reason to do, as I outlined earlier), or add it as a new drive (which could be useful if you've failed to notice your storage getting too low and need to add more space quickly).

     

    Thanks garycase, the first wouldn't make sense. But in the second case, I have an additional 4TB drive that I can add to my array instead, and keep the additional protection of parity2.

     

    Thanks again for the help guys. Appreciate the quick responses.