yitzi

Members

  • Posts: 30
  • Joined: Converted
  • Gender: Undisclosed

yitzi's Achievements

Noob (1/14)

Reputation: 0

  1. Thanks!! That basic switch gets me sometimes.
  2. Hey, great work here! Does anyone know how to change the hostname? I'm not sure where I'd add the extra parameters (see the hostname sketch after this list). Thanks!
  3. Hi Frank1940, I appreciate your responses to this. Great idea, but that means I can't pass a single share to a docker or VM and have Unraid handle the capacity and performance tier operations itself. The logic would be a fairly simple extension of the disk fill allocation: just as the cache has its "prefer, yes, only" settings, there could be disk groups such as "slower, faster" or HDD, SSD. With large 4TB SSDs getting much cheaper and HDDs likely to be left behind at some point, it only makes sense to accommodate SSD array users.
  4. Hi, thanks for the response. As for SSDs in the array, they can stay always on and deliver extremely fast performance for millions of tiny files. We're using this for an NVR; the video files are tiny and stitched together on the NVR side, so the benefit over HDD for random reads is huge. As for the cache, I'm already running the 2x2TB in RAID1 for redundancy, but if I add all the SSDs to the cache in RAID10 I lose half the capacity. Long term I think I'm fine with SSDs in the array; we'll replace them as performance degrades, or pull one from the array, run a TRIM, and put it back. A bash script is probably the best place to do this (see the mover sketch after this list), but I was hoping for a simpler approach. The issue I have is disk exclusion on the share: if I exclude the HDD and then move old files to it, the share won't see them; if I include the HDD in the share, Unraid will try writing to it once I hit the fill-up or high-water threshold.
  5. Hi all, say my Unraid server looks like the following:
     Parity1: 6TB HDD
     Disk1: 6TB HDD
     Disk2: 2TB SSD
     Disk3: 2TB SSD
     Cache1: 2TB SSD
     Cache2: 2TB SSD
     What I'm looking to achieve is to have the most recent files written to cache for speed, then moved once a week or so to the array but only to the SSDs, and after a month moved from the array SSDs to the array HDD. That way reads are fast while files are fresh, and as files age out they land on cheaper long-term storage: essentially a performance tier and a capacity tier. As far as I know, read speeds wouldn't be affected, so reads from the array SSDs should still perform well. If I set the share to exclude the slower disk, it'll never write to it, even as files age out. Perhaps I can script the disk-to-disk move (see the mover sketch after this list)? But if the share is set to exclude that disk, it won't see the files, correct? Any suggestions would be helpful.
  6. Thanks for the responses. I tackled this with the Open Files plugin, which showed a "mono" process consuming lots of RAM. I shut down each docker one at a time while monitoring RAM usage (see the snapshot one-liner after this list); it turned out Jackett was using about 60% of my 16GB. No idea why. I'll reach out on the LSIO support page for this. Thanks again, this is solved.
  7. Thanks for that. I went over my dockers and there's nothing that isn't pointing to /mnt/user. See screenshot.
  8. Hi all, I'm getting out-of-memory errors on my server; it says it's killing off processes, but I'm not sure why. I have 16GB of RAM with only a handful of dockers and 1 VM. I followed the instructions in Tips and Tweaks and reduced Disk Cache 'vm.dirty_background_ratio' (%) to 1 and Disk Cache 'vm.dirty_ratio' (%) to 2 (see the sysctl snippet after this list). Diagnostics are attached. Thanks all for the help. unraid-diagnostics-20180129-1550.zip
  9. Well, I gutted it and started fresh, and it's been running flawlessly for a week now. I guess something was up with the OS. Thanks for the awesome help, Frank1940. Appreciate it.
  10. So it's been running 4 days without issue in safe mode. I'll see how it runs for another week or two before concluding it's almost definitely a plugin (dockers and the VM run in safe mode). If it is a plugin, what would be the suggestion? Just starting fresh?
  11. Hey Frank1940, thanks for your detailed response. I captured the diag file right after reboot. Right now I'm in safe mode with no plugins running to see how well it works. Dockers: Crashplan, Deluge, Radarr, Sonarr, Plex. Attached is the log file that FCP keeps in the logs folder, along with the recent zips from 10/1 when it last crashed; it takes logs every 30 minutes, I think. I've run a memtest for an hour or so; I guess I'll go the 24-hour route. Thanks. FCPsyslog_tail-backup1.txt logs.zip
  12. Hey all, I can't pin this to a time frame or anything, but randomly the server becomes completely unresponsive. WebGUI = no go, and I can't SSH in either. Even over IPMI, where I can see the console, I type in "root" and it just hangs there without doing anything. Attached are the diagnostics. I've also been running Troubleshooting Mode with the Fix Common Problems plugin, but I don't see anything in there that would cause this. Any help would be awesome, as I just can't seem to pin this issue down and I'm forced to hard-reset the server. unraid-diagnostics-20171002-1846.zip
  13. Is there a suggested number for "Disks Allowed To Be Spun Down Before Invoking Turbo Mode"? I have 3 cache SSDs, 6 data HDDs, and 1 parity HDD, and the current spindown time is set to 30 minutes (see the turbo-write note after this list). Thanks.
  14. Thanks, appreciate the response. It's odd that Plex can do that from a docker. I have an ESXi VM, but Unraid still has 8GB of RAM for itself; I can't imagine Plex needing more than that (see the memory-cap snippet after this list).
  15. Hi, all. I'm getting call traces on my server. Diagnostic is attached. Thanks for any help! unraid-diagnostics-20170305-0838.zip
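
Sketches referenced in the posts above, in post order.

For the hostname question in post 2, assuming the container runs from a standard Unraid docker template: the template's Extra Parameters field (shown under Advanced View) is appended to the docker run command, so docker's standard --hostname flag should work there. The container and image names below are placeholders.

```bash
# In the Unraid template's "Extra Parameters" field (Advanced View):
--hostname=my-container

# Equivalent docker run invocation from a shell; image name is a placeholder:
docker run -d --name=my-container --hostname=my-container some/image
```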
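
For the tiering idea in posts 4 and 5, a minimal bash sketch of the disk-to-disk mover, assuming the layout described there (disk2 and disk3 as the array SSD tier, disk1 as the HDD tier); the share name "media" and the 30-day cutoff are placeholders. It works at the /mnt/diskN level, and if I recall correctly a share's include/exclude settings only steer where new writes are allocated, so files moved this way should still appear under /mnt/user.

```bash
#!/bin/bash
# Hypothetical age-out mover for the layout in post 5:
# disk2/disk3 = array SSDs (performance tier), disk1 = array HDD (capacity tier).
SHARE="media"            # placeholder share name
DEST_DISK="/mnt/disk1"   # HDD capacity tier

for SRC_DISK in /mnt/disk2 /mnt/disk3; do
  # Find files older than 30 days on the SSD tier
  find "$SRC_DISK/$SHARE" -type f -mtime +30 -print0 |
    while IFS= read -r -d '' f; do
      dest="$DEST_DISK/${f#"$SRC_DISK"/}"
      mkdir -p "$(dirname "$dest")"
      # Copy with attributes preserved; delete the source on success.
      # Stay on /mnt/diskN paths throughout: mixing /mnt/user and
      # /mnt/diskN in one copy command is a known way to lose data.
      rsync -a --remove-source-files "$f" "$dest"
    done
done
```

Scheduled weekly or monthly (the User Scripts plugin works for this), this gives the "age out to cheaper storage" behavior without touching the share's include/exclude settings.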
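
For the memory hunt in post 6, a one-shot snapshot of per-container memory would have pointed at Jackett without shutting dockers down one at a time:

```bash
# Print each container's current memory usage, then exit
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}"
```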
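
For post 8, the same Tips and Tweaks values applied directly with sysctl; they take effect immediately but reset on reboot, so on Unraid it's the plugin (or the go file) that makes them stick:

```bash
# Start background writeback earlier and cap dirty pages sooner
sysctl -w vm.dirty_background_ratio=1
sysctl -w vm.dirty_ratio=2

# Verify the running values
sysctl vm.dirty_background_ratio vm.dirty_ratio
```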
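
On the turbo-mode question in post 13: the plugin setting decides when to flip Unraid's write method automatically, and if I remember right the underlying toggle it drives is this (values per my recollection, worth verifying):

```bash
/root/mdcmd set md_write_method 1   # turbo / reconstruct write
/root/mdcmd set md_write_method 0   # default read/modify/write
```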
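
For post 14, if a container really is eating RAM, docker can impose a hard memory cap; on Unraid this would also go in the template's Extra Parameters field. The 4g figure is only an example, not a recommendation:

```bash
# Limit the container to 4 GB of RAM
--memory=4g
```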