
Everything posted by yitzi

  1. Thanks!! That basic switch gets me sometimes.
  2. Hey, great work here! Does anyone know how to change the hostname? I'm not sure where I'd add the extra parameters. Thanks!
  3. Hi Frank1940, appreciate your responses to this. It's a great idea, but that means I can't pass a single share to a Docker or VM and have unRAID handle the capacity-and-performance tier operations. The disk-fill allocation logic is pretty simple: similar to the cache's "prefer, yes, only" settings, there could be disk groups like "slower, faster" or HDD, SSD. With large 4TB SSDs becoming much cheaper and HDDs getting left behind at some point, it only makes sense to accommodate SSD array users.
  4. Hi, thanks for the response. As for SSDs in the array: they can stay always on and deliver extremely fast performance for millions of tiny files. We're using this for an NVR; the video files are tiny and stitched together on the NVR side, so the benefit over HDD for random reads is huge. As for cache, I'm already running 2x2TB in RAID1 for redundancy, but if I add all the SSDs to the cache in RAID10 I lose half the capacity. Long term I think I'm fine with SSDs in the array; we'll replace them as performance degrades, or remove one from the array, run a TRIM, and return it. A bash script is likely the best place to do this, but I was hoping for a simpler approach. The issue I have is disk exclusion on the share: if I exclude the disk, then after moving old files there the share won't see them, and if I include the HDD in the share, it'll try writing to that disk once I hit the fill-up or high-water threshold.
  5. Hi all, say my unRAID server looks like the following: Parity1: 6TB HDD; Disk1: 6TB HDD; Disk2: 2TB SSD; Disk3: 2TB SSD; Cache1: 2TB SSD; Cache2: 2TB SSD. What I'm looking to achieve is to have the most recent files write to cache for speed, then once a week or so move those files to the array, but only to the SSDs; after a month, move the files from the array SSDs to the array HDDs. That way reads are fast while files are fresh, and as files age out they're stored on cheaper long-term storage: essentially a performance tier and a capacity tier. As far as I know, read speeds wouldn't be affected, so reads from the array SSDs should perform well. If I set the share to exclude the slower disk, it'll never write to it, even as files age out. Perhaps I can script moving files from disk to disk? But if the share is set to exclude that disk, it won't see the files, correct? Any suggestions would be helpful.
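Since the built-in mover only handles cache-to-array, the disk-to-disk age-out described above can be scripted. A minimal sketch under stated assumptions: the share folder name, the specific `/mnt/diskX` paths, and the 30-day cutoff are all hypothetical examples, and while moving a file between array disks under the same share folder generally keeps it visible at `/mnt/user`, that behavior should be verified on your unRAID version first.

```shell
#!/bin/bash
# age_out SRC DST DAYS -- move files older than DAYS from one array disk's
# share folder to another's, preserving the relative directory layout.
# Example paths like /mnt/disk2/media below are assumptions; adjust to
# your own disks and share name.
age_out() {
    local src=$1 dst=$2 days=$3
    cd "$src" || return 1
    find . -type f -mtime +"$days" -print0 |
    while IFS= read -r -d '' f; do
        mkdir -p "$dst/$(dirname "$f")"   # recreate the folder on the target disk
        mv -n "$f" "$dst/$f"              # -n: never overwrite an existing file
    done
}

# Run weekly from cron or the User Scripts plugin, e.g.:
# age_out /mnt/disk2/media /mnt/disk1/media 30
```

Because both paths live under the same top-level share folder, the file stays at the same `/mnt/user/...` path after the move; only the backing disk changes.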
  6. Thanks for the responses. I tackled this by using the Open Files plugin and spotting a "mono" process consuming lots of RAM. I shut down each Docker one at a time while monitoring RAM usage. It looks like Jackett was using about 60% of my 16GB; no idea why. I'll reach out on the LSIO support page about this. Thanks again, this is solved.
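For anyone hitting this later without the plugin, the same hunt can be done from the shell by grouping resident memory by command name. A sketch using standard `ps` and `awk` (column behavior can vary slightly between systems, and commands with spaces in their names are grouped by first word only):

```shell
# mem_by_cmd: total resident memory (RSS, shown in MiB) per command name,
# largest first -- handy for spotting a runaway process like "mono".
mem_by_cmd() {
    ps -eo rss=,comm= |
        awk '{ rss[$2] += $1 }
             END { for (c in rss) printf "%.1f MiB\t%s\n", rss[c]/1024, c }' |
        sort -nr
}

# mem_by_cmd | head
```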
  7. Thanks for that. I went over my Dockers and there's nothing that isn't pointing to /mnt/user. See screenshot.
  8. Hi all, I'm getting out-of-memory errors on my server; it says it's killing off processes, but I'm not sure why. I have 16GB of RAM with only a handful of Dockers and 1 VM. Following the instructions in Tips and Tweaks, I reduced Disk Cache 'vm.dirty_background_ratio' (%) to 1 and Disk Cache 'vm.dirty_ratio' (%) to 2. Diagnostics are attached. Thanks all for the help. unraid-diagnostics-20180129-1550.zip
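For reference, those two Tips and Tweaks fields map directly to kernel sysctl knobs, so the same tuning can be applied by hand (as root) without the plugin. The values are percentages of total RAM and revert at reboot, which is why the plugin re-applies them at boot. A sketch:

```shell
# Lower the dirty-page thresholds so writes are flushed to disk sooner
# and less RAM is tied up holding unwritten page-cache data.
sysctl -w vm.dirty_background_ratio=1
sysctl -w vm.dirty_ratio=2

# Check the current values:
sysctl vm.dirty_background_ratio vm.dirty_ratio
```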
  9. Well, I gutted and started from fresh. Been running flawlessly for a week now. I guess something was up with the OS. Thanks for the awesome help Frank1940. Appreciate it.
  10. So it's been running 4 days without issue in safe mode. I'll see how it runs for another week or two before concluding it's almost definitely a plugin (Dockers and the VM run in safe mode). If it's a plugin, what would be the suggestion? Just starting fresh?
  11. Hey Frank1940, thanks for your detailed response. I captured the diag file right after reboot. Right now I'm in safe mode with no plugins running to see how well it works. Dockers: Crashplan, Deluge, Radarr, Sonarr, Plex. Attached is the log file that FCP keeps in the logs folder, along with the recent zips from 10/1 when it last crashed; it takes logs every 30 minutes, I think. I've run a memtest for an hour or so; I guess I'll go the 24-hour route. Thanks. FCPsyslog_tail-backup1.txt logs.zip
  12. Hey all, I can't pin this on a time frame or anything, but randomly, the server will become completely unresponsive. WebGUI = no go. Can't SSH in either. Even using IPMI, where I can see the console, I type in "root" and it just hangs there without doing anything. Attached are the diagnostics, I've also been running Troubleshooting Mode with the Common problems plugin, but don't see anything in there that would cause this. Any help would be awesome, as I just can't seem to pin this issue down and I'm forced to hard reset the server. unraid-diagnostics-20171002-1846.zip
  13. Is there a suggested number for "Disks Allowed To Be Spun Down Before Invoking Turbo Mode"? I have 3 cache SSDs, 6 data HDDs, and 1 parity HDD. The current spindown time is set to 30 minutes. Thanks.
  14. Thanks. Appreciate the response. That's odd that Plex can do that from a Docker. I have an ESXi VM, but unRAID still has 8GB of RAM for itself; I can't imagine Plex using more than that.
  15. Hi, all. I'm getting call traces on my server. Diagnostic is attached. Thanks for any help! unraid-diagnostics-20170305-0838.zip
  16. Hi all, I'm getting a ton of segfaults and was hoping someone could take a quick look. The full diagnostic is attached. Thanks in advance for any assistance. unraid-diagnostics-20170301-2102.zip
  17. What's the difference between this and PlexRequests?
  18. So everything is running, and I'm just curious what setting is needed to add a datastore. I've tried a few options, but it doesn't look like ESXi is seeing anything. Has anyone got this worked out? Thanks.
  19. Hi all, my parity drive is getting random errors and I can't determine what exactly is happening. It's currently happening to my parity drive, but it was happening on different drives over the past week. Please see the attached diagnostic report. Thanks, and let me know if there's anything else I can provide. unraid-diagnostics-20170123-1422.zip
  20. Thanks, I'm going to RMA the drive and see if there are still problems. Does anything seem to indicate a PSU issue or an LSI card problem?
  21. This disk has been causing a few issues for me. Randomly, unRAID will report errors on my dashboard but not red-ball the disk. What's strange is that my parity is getting disabled randomly as well. I've been dealing with this for the past month and can't figure out which disk is the issue, or if it's something else entirely. Right now I'm rebuilding my parity and seeing what logs I can pull up. Attached is my latest syslog: tons of errors, and I can't make out what they are. syslog.txt
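For narrowing down which drive a syslog like this is complaining about, filtering for the kernel's ATA/SCSI messages and md-layer (parity) events usually surfaces the culprit quickly. A sketch; the pattern list is an assumption, not exhaustive:

```shell
# disk_errors: pull the lines that usually matter from an unRAID syslog --
# ATA link exceptions, SCSI/block device errors, I/O errors, and md events.
disk_errors() {
    grep -E 'ata[0-9]+(\.[0-9]+)?:|sd[a-z]+:|I/O error|md:.*disk|UDMA CRC' "$1"
}

# disk_errors /var/log/syslog
```

Repeated hits against the same `ataN` or `sdX` device point at one drive, cable, or port; errors scattered across many devices on the same controller tend to implicate the HBA or power instead.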
  22. Confirmed 430W after looking through my purchases. It's a CORSAIR CX-series CX430.
  23. I agree. I at least want to know what's going on and which drive is the issue. I've already reseated all my drives with new SATA cables. Could it be a power issue? My PSU is 430W, I believe. Here are my specs: M/B: ASRock B85 Killer; CPU: Intel Xeon E3-1241 v3 @ 3.50GHz; HVM: Enabled; IOMMU: Disabled; Cache: 256 kB, 1024 kB, 8192 kB; Memory: 14 GB (max. installable capacity 32 GB); 01:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 03)
  24. Hi all, I'm experiencing an issue with my unRAID array. For some reason I cannot identify, my parity disk, and occasionally a data disk, is being red-balled as failed when there are no SMART errors. My log is showing errors, but I can't narrow down what's wrong. Please see my attached diagnostics report. Thank you to whoever can assist with this problem. unraid-diagnostics-20160930-1002.zip
  25. Thanks garycase, the first option wouldn't make sense. But in the second case, I have an additional 4TB drive that I can add to my array without needing it, or use for the additional protection of parity2. Thanks again for the help, guys. Appreciate the quick responses.