ReneP

Members · 4 posts

  1. @johnnie.black Your experience seems to conflict with other tips I have seen so far. @itimpi said that adding more devices to the cache pool would not change anything performance-wise. My initial feeling was that if I assign more than one device to the cache pool, I should expect btrfs RAID 1 to deliver better read speed without any improvement in write speed. I am still confused about how running multiple processes translates into read operations from multiple devices at the same time. Does running a single VM require one process or multiple processes? I guess I need to do more tests to find out. The last thing I want is to go back to FreeNAS for storing my VMs. My VM lab is a bunch of databases for BI projects; some do heavy reads, others are more write-oriented, so spreading my I/O efficiently is important to me.
     @John_M If TRIM can be disabled, would this make it safe to use SSDs in the data array?
     @tjb_altf4 Your setup with multiple 960s looks so cool; that is what I had in mind too. Too bad you have not run thorough benchmarks yet; if speed improvements for btrfs RAID 1 and 10 are indeed in the pipeline, it would be great to have some kind of baseline. In theory your RAID 0 with two 960s should deliver twice the speed for both reads and writes; I am just curious to know if that is what you are getting now. Let me know if you take the plunge on a RAID 10 of Samsung 960s; I want to know if it is worth the money.
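One way to settle the single-reader vs. multi-reader question empirically is a quick concurrent-read test. A minimal sketch, assuming a bash-compatible shell; `POOL` and the test-file names are placeholders I made up, so point `POOL` at your actual cache pool mount (and for a real measurement use files larger than RAM, or drop the page cache between runs with `echo 3 > /proc/sys/vm/drop_caches` as root):

```shell
# Concurrent-read sketch: compare one reader vs. two readers on the pool.
# POOL is an assumption -- point it at your btrfs cache pool mount.
POOL="${POOL:-$(mktemp -d)}"

# Create two test files (small here so the sketch runs anywhere;
# size them >= RAM for a real test so the page cache can't serve reads).
dd if=/dev/zero of="$POOL/t1.bin" bs=1M count=64 2>/dev/null
dd if=/dev/zero of="$POOL/t2.bin" bs=1M count=64 2>/dev/null

echo "--- one reader ---"
time dd if="$POOL/t1.bin" of=/dev/null bs=1M 2>/dev/null

echo "--- two concurrent readers ---"
time sh -c '
  dd if="$1/t1.bin" of=/dev/null bs=1M 2>/dev/null &
  dd if="$1/t2.bin" of=/dev/null bs=1M 2>/dev/null &
  wait
' sh "$POOL"
```

If the two-reader wall time stays close to the one-reader time, the pool is serving the readers from different mirrors in parallel; if it roughly doubles, both readers are hitting the same device.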
  2. Thanks itimpi for your feedback! So you are confirming my fear that adding more SSDs to the cache pool will not provide any performance benefit. That is a major disappointment, but I can live with it. To recap, my options for storing my VMs are:
     1) Have unRAID create a cache pool with two SSDs. Adding additional SSDs to the pool won't improve performance. Compared to using a hard drive from my array I would still see a performance gain, but I won't be able to scale performance up if ever needed; I will be limited to the I/O of a single SSD. This is still worth a try, because a Samsung 960 EVO used as a cache would at least provide several times the performance of a single hard drive in my array.
     2) Unassigned SSDs. By manually placing my VM files on different unassigned SSDs I would get the full I/O (read/write) performance inherent to those respective SSDs. This would come at the expense of my VMs not being protected, so I would need to set up a manual backup solution. This really sounds like a simple way to scale up and maximize I/O with parallel processing, so I will go for this option.
     3) Create a hardware RAID 1 pool of SSDs and have unRAID use that pool as a cache. I am not even sure this is recommended or possible; my feeling is that unRAID is not really friendly to hardware RAID.
     4) Run a FreeNAS/Openfiler guest on my unRAID host and pass through the SSDs. With ZFS I would in theory have the freedom to explore all possible RAID configurations and run some benchmarks. This is not a recommended setup for production, but for my home lab it might be worth at least running some tests.
     I still need to factor cost into my decision. If there is interest, I will publish my test results once I upgrade my server with SSDs.
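For option 1, it is worth verifying which btrfs profile unRAID actually applied to the pool (as I understand it, a two-device pool defaults to RAID 1, but seeing it confirmed removes the guesswork). A sketch, assuming `/mnt/cache` (unRAID's usual cache mount point) as the location:

```shell
# /mnt/cache is an assumption -- substitute your pool's mount point.
MNT="${MNT:-/mnt/cache}"
if command -v btrfs >/dev/null 2>&1 && [ -d "$MNT" ]; then
    # Shows the allocation profile per block-group type,
    # e.g. "Data, RAID1" / "Metadata, RAID1" for a mirrored pool.
    btrfs filesystem df "$MNT"
    # Lists the member devices and their usage.
    btrfs filesystem show "$MNT"
else
    echo "btrfs tools not available or $MNT not mounted; skipping"
fi
```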
  3. Thanks trurl for the reply! I did get the concept that unRAID is not RAID and that nothing is striped; in fact, that was the reason I chose unRAID, because I have a lot of archive data that just needs to sleep on non-spinning disks! I have solved the problem for my huge media archives, so that is not my concern anymore. Now I am dealing with the problem of running VMs with decent performance. My 5 VMs are stored on a single WD Red and they are limited to very slow read speeds. When I do a lot of parallel read operations across all my VMs, like starting all of them at once, it is not smooth at all.
     I am still not sure about my upgrade path. A single SATA SSD in the array should in theory deliver read speeds of 500 MB/s, and an NVMe PCIe SSD may deliver more than 2000 MB/s from that device alone. I am aware of the lag caused by the parity that needs to be calculated and then written to the parity drive, but that should not be a read bottleneck for an SSD in the array; I have not tested that, but that is my understanding. Like I said, I would be pleased to have an SSD in the data array if I would at least gain some read performance for my VMs, knowing that data on those SSDs won't be striped like in normal RAID. I could do some manual file placement across several SSDs; I assume this would give me some real read-speed benefits. As for improving write speed, a cache pool is what I am looking for. My main concern about assigning SSDs to my array is that SSDs are not officially supported in the data array, so I may have to forget about that option.
     Talking about the cache pool, knowing about the other supported RAID configurations on btrfs is interesting, but like I said, all the conversations focus on levels of protection or the size of the pool, with no details about expected performance. I would like to see some numbers on how btrfs RAID 1 or RAID 0 performs.
What is troubling me is that I have seen in a post that btrfs RAID 1 does not systematically read from two mirrored devices at the same time; instead, multiple concurrent OS processes requesting read operations push the RAID 1 pool to read from multiple devices at the same time. I am not sure how btrfs behaves. A perfect RAID 1 implementation, say with two devices, would deliver the read speed of the two devices combined while leaving write speed unimproved, but my question remains: can we really achieve RAID 1 read performance with btrfs?
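For what it's worth, btrfs RAID 1 has historically chosen which mirror to read from based on the reading process's PID (`pid % 2` for a two-device mirror), so a single-threaded reader stays pinned to one device; only multiple concurrent readers (ideally a mix of even and odd PIDs) spread reads across both copies. A way to probe this empirically is an fio sweep over job counts; a sketch, assuming fio is installed and the hypothetical mount point `/mnt/cache`:

```shell
# fio sketch: compare 1 vs. 2 vs. 4 concurrent sequential readers.
# DIR is an assumption -- substitute your btrfs pool's mount point.
DIR="${DIR:-/mnt/cache}"
if command -v fio >/dev/null 2>&1 && [ -d "$DIR" ]; then
    for jobs in 1 2 4; do
        echo "=== numjobs=$jobs ==="
        # --direct=1 bypasses the page cache so the drives are measured,
        # not RAM; read the aggregate "READ: bw=..." line per run.
        fio --name=seqread --directory="$DIR" --rw=read --bs=1M \
            --size=1G --numjobs="$jobs" --direct=1 --group_reporting
    done
else
    echo "fio not installed or $DIR not mounted; skipping benchmark"
fi
```

If the PID-based balancing holds, numjobs=1 should report roughly single-device bandwidth, while numjobs=4 can approach the two devices combined.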
  4. I am developing and testing multiple VMs from my array, all mechanical drives and no cache drive assigned so far. I really need to add some SSDs to my system because current performance is abysmal. My upgrade path seems to be going in the direction of adding SSDs in a cache pool, but I still have some doubts about going down that path. The official statement from Limetech about cache pools says this about SSDs in a cache pool:
     That sounds fabulous, but after doing a lot of research I found that most people are concerned with the protection or usable storage size of the cache pool. I cannot find any information about the actual performance we should expect from btrfs RAID 1. I have seen somewhere that btrfs RAID 1 does not behave like conventional RAID, so I do not know what to expect here in terms of performance. Would running a single VM benefit from using multiple SSDs in a cache pool? I really wish that btrfs RAID 1 read speed were systematically multiplied by the number of devices, so that two SSDs with respective read speeds of 500 MB/s would deliver a read speed of 1000 MB/s out of the mirror. Has anyone tested btrfs RAID 1 performance and has benchmarks to share? I am ready to give btrfs RAID 1 a shot with two SATA SSDs, but I really wonder if there are better options out there.
     My other option would be to store my VM domains on SSDs that are part of my data array, but the official recommendation says:
     I do not understand what could possibly go wrong, but I guess I had better not go against this advice. This is too bad, because a single high-end M.2 PCIe drive such as the Samsung 960 EVO would probably deliver the kind of performance I am looking for, at least for read requests. Let's face it, SATA SSDs will eventually become history because the SATA interface caps transfer speeds far below what the fastest NVMe SSDs can deliver.
I feel that a Samsung 960 EVO is a better future-proof investment than pooling a bunch of SATA SSDs. This is just for development and testing, so I am ready to sacrifice some level of protection for better performance. I am backing up my VMs to an unassigned external hard drive in case the whole array goes south. I might also consider testing VM performance with passed-through SSDs, possibly running FreeNAS/Openfiler as a guest OS using my SSDs with a proper RAID configuration. Not running FreeNAS/Openfiler on bare metal is not officially approved for production environments, so I am not so sure this is a great idea. I have been running FreeNAS on my unRAID server for a while without a glitch, so I am still considering this option for my home lab. Has anyone faced the same situation?
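Since VMs on unassigned SSDs are unprotected, the manual backup can be as simple as a scheduled rsync to the external drive. A sketch with hypothetical paths (`/mnt/disks/vm_ssd` for the unassigned SSD and `/mnt/disks/backup` for the external drive are made-up names; adjust to your mounts):

```shell
# Hypothetical paths -- adjust to your unassigned-devices mount points.
SRC="${SRC:-/mnt/disks/vm_ssd/domains}"
DST="${DST:-/mnt/disks/backup/domains}"

# --sparse keeps sparse vdisk images sparse at the destination;
# --archive preserves permissions, ownership, and timestamps;
# --delete mirrors removals. Run while the VMs are shut down,
# or the copied images may be internally inconsistent.
if [ -d "$SRC" ] && [ -d "$DST" ]; then
    rsync --archive --sparse --delete "$SRC/" "$DST/"
else
    echo "source or destination not mounted; skipping backup"
fi
```

On unRAID this could be wired to a cron entry or the User Scripts plugin to run nightly.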