descoladan Posted May 25, 2017

On 2/22/2017 at 8:47 AM, hermy65 said: "Yep, I'm more looking to split it out so I can have better-quality SSDs running my VMs/Docker containers and regular SSDs handling tasks like downloads, etc."

This right here describes my thoughts exactly. Please make this a feature!
DireWolf Posted June 13, 2017

+1. Said feature would make my life easier. Perhaps each cache pool could have an independent mover schedule?
BobPhoenix Posted July 4, 2017

I would love multiple cache pools.
andyps Posted July 4, 2017

+1 Would love to see this feature.
tjb_altf4 Posted July 5, 2017

On 22/02/2017 at 10:47 PM, hermy65 said: "Yep, I'm more looking to split it out so I can have better-quality SSDs running my VMs/Docker containers and regular SSDs handling tasks like downloads, etc."

+1, same use case I'm interested in.
1812 (Author) Posted July 5, 2017

I'd also add: I moved from my older setup of a single cache drive with VMs mounted on unassigned SSDs to a RAID 0 cache of two Samsung EVO 250GB SSDs, just to see how performance compared. Running only Plex, Krusader, and CrashPlan, the VMs have seen more spinning-pinwheel icons and other 1-3 second pauses where the VM is unresponsive than when they were on their own independent drive. I can't imagine what it would be like with downloaders hammering it as well. I'm sure there are those doing it with no problems, but the ability to separate running Dockers/VMs/etc. would be a nice performance bump.
Nickglott Posted July 6, 2017 (edited)

+1, I too would love to see this. I have been trying to fix my cache I/O performance: my Dockers/VMs/web interface time out for 20-30 seconds, sometimes longer, when my Windows VM is downloading at 75+ MB/s. I think the issue is the SSD being unable to keep up with background tasks plus thousands of connections and processing the data inside an image file.

My old setup had every SATA port full (6x 4TB HDDs in my array and 1 SSD as cache). This is an ITX build, so space in my Node 304 is very valuable. The new setup is as follows:

- 4x 8TB HDD array via onboard SATA ports
- 1x 512GB M.2 NVMe SSD cache (PCIe 3.0 x4, in the x16 slot)
- 2x 4TB for backups (hardware RAID1 via an ASMedia card in the mini-PCIe WiFi slot, PCIe 2.0 x1)
- 2x 240GB SSDs via onboard SATA ports for VMs

I am currently waiting on my new 8TB drives to preclear, so this setup is untested so far. My only issue is that I would like the two 240GB SSDs in RAID0 for my VMs/domains share. My motherboard uses Intel RST, which is not true hardware RAID, so Unraid cannot see it as a single device. I could use my 2-port ASMedia card for this, but since these are SSDs and that is a Gen2 x1 card, I would lose a lot of performance; plus I would like my backup drives to stay redundant in RAID1. My current workaround idea is to run my Windows VM on one drive and use the second for my other VMs/domains folder. The only other option I see is to move the NVMe to the onboard M.2 slot, lose 2 SATA ports, and use a PCIe RAID card that can support the bandwidth. I don't really want to do that because the M.2 slot is on the back of the motherboard, and I expect heat issues along with the inability to service a drive failure. In my case, a separate "cache" pool in RAID0 is really my only other option.

By the way: yes, I have 9 drives running inside a Node 304 ITX build.

Edited July 6, 2017 by Nickglott
1812 (Author) Posted July 9, 2017

I was copying 160GB of video-editing files from a VM hosted on the RAID 0 cache drives to a share set to cache=yes... it locked up the VM until the transfer completed. Not sure why I'm having these disk performance issues, but I wouldn't be if we could have separate pools.
-Daedalus Posted July 9, 2017

I have a very similar issue every time something gets copied from user/downloads to user/media, both set to use cache. For some reason it isn't just an index change, it's a complete copy, and performance goes to hell when this happens.
DarkKnight Posted July 11, 2017

I suggested a similar feature a while ago. Basically, given how Unraid has grown in v6, we really need the ability to define and run multiple tier-2 pools (using BTRFS RAID): T1 being the main Unraid array, and several T2 pools like Apps, Cache, and VMs, with optional mover support on each pool. Unassigned Devices is a really nice plugin, but it would be depended on a lot less with definable T2 pools. As a matter of course, I don't even understand why Unassigned Devices isn't already integrated instead of being a plugin, given how integral it is to seriously increased functionality. LT took the first step past being simply a storage OS with v6; now they need to really embrace it and add the requisite storage options to make good use of the new features.
JorgeB Posted July 11, 2017

2 hours ago, DarkKnight said: "I suggested a similar feature a while ago. ..."

LT just posted their intention to include UD as part of the webGUI; maybe they will also make it support extra pools.
1812 (Author) Posted July 11, 2017

9 hours ago, johnnie.black said: "LT just posted their intention to include UD as part of the webGUI, maybe they will also make it support extra pools."

Maybe a response from @limetech? I don't think we're asking for the moon here, but I don't code, so...?
1812 (Author) Posted July 25, 2017

I just got a 25-disk MSA70. It would be great if Unraid could manage it as a super-fast cache pool, or multiple pools. I guess I'll have to have hardware RAID take care of it instead...
Lebowski Posted October 4, 2017

Multiple cache pools would be great. I love the idea of an NVMe drive as the primary tier that spills over to SATA SSDs.
Joseph Posted October 9, 2017

On 2/14/2017 at 8:41 AM, 1812 said: "Why don't I just add more drives to my current cache pool? Separation. I don't want the Dockers that are running, or the mover, or anything else to interfere with performance. TL;DR: Essentially I'm suggesting that we be able to have more than one pool of drives in a specifiable RAID setup (0 and 10, please!)"

+1 for multiple, redundant SSD cache pools! I would like one RAID1 pool dedicated to written data that is then moved to the protected array nightly, and one RAID1 pool for VMs, appdata, etc. that is never moved. Other RAID levels would be cool too.
-Daedalus Posted October 9, 2017

Yup. 2x RAID1 pools are my use case as well: one for VMs and user-facing Docker stuff, the other for cache and "backend" containers.
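(For anyone who wants to experiment before this lands natively, a second BTRFS RAID1 pool of the kind described above can be built by hand today. A minimal sketch, assuming two spare devices; /dev/sdX, /dev/sdY, and the /mnt/vmpool mount point are placeholders, not taken from anyone's posted setup:)

```shell
# Create a two-device BTRFS pool with both data and metadata mirrored.
# WARNING: mkfs destroys the existing contents of both devices.
mkfs.btrfs -f -d raid1 -m raid1 /dev/sdX /dev/sdY
mkdir -p /mnt/vmpool
mount /dev/sdX /mnt/vmpool   # either member device can be named here

# Confirm the data and metadata profiles really are RAID1:
btrfs filesystem df /mnt/vmpool
```

The catch, and the reason for this feature request, is that unRAID itself knows nothing about such a pool: no mover, no GUI monitoring, no share integration.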
steve1977 Posted November 5, 2017

+1
Lev Posted November 5, 2017

Reading this thread... I get it, but wow, I feel like we've come full circle: we're back to the complexity unRAID was created to get away from! All this sounds like an IOPS constraint. Why not pass through an SSD to the VM you care about? They're cheap enough these days.
-Daedalus Posted November 5, 2017

Because then you can't run your VM on redundant storage, requiring downtime if you want to create a backup image. That, or you have to pass through a hardware RAID1 config, which seems a bit silly within an OS like unRAID.
Dephcon Posted February 2, 2018 (edited)

+1. Having the standard cache pool plus additional user-definable pools would be awesome. I'm running into horrible iowait issues when downloading/par-checking/extracting/moving lots of data on my SSD cache; it's causing all my containers to be slow. The possibility of a cache pool plus a Docker/VM pool, or separate Docker and VM pools, would be great.

I found a crappy old 120GB SSD and moved my docker.img and some of my appdata contents to it. My iowait has decreased substantially, and even when the cache drive gets backed up, it doesn't affect my app performance. The only option currently is to scale up and buy a diamond-encrusted NVMe SSD and hope it can take the load. Scale-out is the way to go!

Edited February 2, 2018 by Dephcon
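(For what it's worth, the iowait Dephcon describes can be watched directly from the kernel's counters; a small sketch using the standard /proc/stat layout, where field 6 of the `cpu` line is cumulative iowait jiffies:)

```shell
# Sample aggregate iowait twice and print the delta over one second.
a=$(awk '/^cpu /{print $6}' /proc/stat)
sleep 1
b=$(awk '/^cpu /{print $6}' /proc/stat)
echo "iowait jiffies in the last second: $((b - a))"
```

`iostat -x 1` from the sysstat package shows the same picture per device, which makes it easy to confirm that a single overloaded SSD is the culprit.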
pwm Posted February 4, 2018

On 11/5/2017 at 3:14 PM, -Daedalus said: "Because then you can't run your VM on redundant storage, requiring downtime if you want to create a backup image. ..."

BTRFS snapshots can be an alternative when you don't want downtime. But in general, I would prefer if unRAID could handle multiple mirrors. My main storage server is not unRAID for that very reason: it has multiple RAID volumes, most of which are two-disk mirrors.
-Daedalus Posted February 4, 2018

6 hours ago, pwm said: "BTRFS snapshots can be an alternative, when you don't want downtime. ..."

Yes, although (AFAIK) snapshots aren't implemented in the GUI yet, and the whole idea of this is to not have the VMs on the same storage as all the Docker images constantly reading/writing all over the place.

I think I might have to look at other solutions, to be honest. unRAID does lots of things pretty well, but nothing amazingly. ESXi has much better VM management; ZFS has (arguably) much better storage. If Limetech were in the habit of giving even a rough roadmap of the direction they're thinking of going, that might help, but we don't really hear about features until they show up in snapshot builds, and for something like this, which is typically more of a longer-term investment, I don't think that serves the community well.
pwm Posted February 4, 2018

1 hour ago, -Daedalus said: "Yes, although (AFAIK) snapshots aren't implemented in the GUI yet... ..."

You don't have snapshot support in the GUI, but you can still create a snapshot and mount it separately for the backup to read from.

ZFS and BTRFS have lots in common. BTRFS doesn't have ZFS's deduplication, which on the other hand is a feature plenty of users have locked themselves out of their data with: they filled a deduplicated pool beyond what the motherboard's maximum RAM capacity can support, which isn't obvious until they reboot and find they can no longer mount the ZFS array without building a brand-new system.

The marketing point of unRAID as a storage system is for users who don't want to spin up all drives of the array on every access, i.e. parity without striping. If you want the bandwidth of a striped RAID and are OK with its recovery trade-offs, the obvious route is to select a system that stripes the data.
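(The snapshot-then-backup approach pwm describes can be sketched roughly as follows, assuming the VM images live in a BTRFS subvolume; the paths are illustrative, and whether /mnt/cache/domains actually is a subvolume depends on how the share was created:)

```shell
# Take a read-only snapshot so the backup reads a frozen, consistent
# view while the VM keeps running, then remove the snapshot afterwards.
btrfs subvolume snapshot -r /mnt/cache/domains /mnt/cache/domains_snap
rsync -a /mnt/cache/domains_snap/ /mnt/disk1/backups/domains/
btrfs subvolume delete /mnt/cache/domains_snap
```

Note this is crash-consistent rather than application-consistent: the guest would see the restored image as if power had been cut at the snapshot instant, so pairing it with a guest-side flush (e.g. qemu-guest-agent fs-freeze) gives a cleaner backup.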