andyps

Posts posted by andyps

  1. On 10/28/2018 at 4:23 PM, wgstarks said:

    This really makes it sound like a problem with the app rather than the container. I would suggest you raise this issue on the Plex forum.

    I have posted the issue on the plex forums, so hopefully I get some insight there. However, even after downgrading, the issues I am having are still quite bad. Playback locks up and the server becomes unresponsive regularly no matter the version I downgrade to.

  2. Hi there, my plex server had started becoming unresponsive, plex apps couldn't discover the server, etc. In the logs I was getting a bunch of "connect: Connection timed out" errors. I tried reinstalling the docker container, deleting the docker image and starting over, but nothing worked until I downgraded plex.

     

    I downgraded to 1.13.2.5154-fd05be322 and now the server is discovered much more quickly and media plays, etc., but every so often it dies again and requires a docker container restart. I am also still getting a huge number of these errors (like pages and pages):

    connect: Connection timed out
    connect: Connection timed out
    connect: Connection timed out
    connect: Connection timed out
    connect: Connection timed out
    connect: Connection timed out
    connect: Connection timed out

    I'm running unraid 6.6.1 and I have attached my diagnostics to this message. 

     

    Also, when I reinstall the docker container or make changes, here is the output:

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='plex' --net='host' -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -e 'VERSION'='1.13.2.5154-fd05be322' -v '/mnt/user/Media/':'/media':'rw' -v '/tmp/':'/transcode':'rw' -v '/mnt/user/appdata/plex':'/config':'rw' 'linuxserver/plex' 
    0790f41fd9ad67b65bb35622056278db63031673dbb1bfec20b5c4fdfcce97cd
    
    The command finished successfully!
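
    If it helps anyone hitting the same thing, here's a quick sketch of how I watch for those timeout errors live from the host. The container name 'plex' matches the docker run line above; note the errors may also live in Plex's own logs under /config rather than in the container output:

    docker logs -f plex 2>&1 | grep 'Connection timed out'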

     

    home-server-diagnostics-20181028-1612.zip

  3. On 2/6/2018 at 1:40 PM, jjjman321 said:

    In your day-to-day Blue Iris monitoring, do you all use Remote Desktop to access the actual Windows interface, or do you just remote in with <IPAddress>:81 web viewer? I believe that has less functionality than the actual desktop interface. Just curious what you all are using and the relative performance. 

     

    Oh sorry, I missed this awhile back. I use the web viewer occasionally, but I do 95% of my monitoring with the phone app. I also have geofencing set up with the app so that our indoor camera is auto-disabled when we are home.

  4. 23 hours ago, jjjman321 said:

    I know this thread is about 4 months old but my question is directly related to the topic. 

     

    I’ve been running Blue Iris on a dedicated server and it’s been running well. In an effort to consolidate, I’ve created a Windows 10 VM on my UnRAID Server and installed Blue Iris on that. I’ve got a pretty small/simple camera system that easily runs within my resources. 

     

    As for the data drive, it seems like recording outside the array is the best bet, so I moved my 4TB WD Purple surveillance drive into the UnRAID Server and mounted it using Unassigned devices.

     

    This brings me to my question. What's the best way to have the Windows 10 VM see this drive? Right now I set up a second virtual file system on this drive, but I'm curious if some other solution is better practice, sharing the disk, etc (the virtual file system only shows in Windows partition manager as 2TB max also - but that's another issue). I record continuously and am fine with the surveillance drive staying spun up 24/7 (after all, that's what it's designed for).

     

    Thanks!

     

     

    Yes, I'd definitely recommend keeping the drive outside the array using unassigned devices. I've been running that config since this post and have had no issues.

     

    To set up the second drive I simply edited my VM in the VM manager of unRaid and added a 2nd vDisk Location. I selected the option to manually select it and used my drive's name identifier. Basically that field looks like this: /dev/disk/by-id/ata-DRIVE-NAME

     

    You can just replace the "DRIVE-NAME" part, hit update, and be golden. I am running a 4TB WD Red with no partition issues.
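
    If you're not sure what your drive's identifier is, a quick sketch of how I'd look it up from the unRaid terminal (the greps just narrow it to whole-disk ata entries; your drive's name will obviously differ):

    ls -l /dev/disk/by-id/ | grep ata | grep -v part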

     

    Hope that's helpful

  5. I have a new install of unRAID. When I try to install an app (Krusader or Plex for example) unRAID freezes for a while. Basically, I click the install button below the app in community applications. I set up the config, and then when I accept it goes to a white screen (with the unRAID banner still at the top) and it stays there for 5-8 mins. Chrome asks if I want to kill the tab process because it's not responding. I hit wait and after the 5-8 mins, it finally shows the add container screen and pulls the container and presents the "done" button. After this the app functions more or less normally. Though for Krusader it has completely locked up twice when trying to transfer several terabytes of data.

     

    This setup was working flawlessly, but then I wanted to try windows + snapraid/drivepool out. I wasn't happy with that setup, so I have now purchased unRAID and moved back. The only thing I have changed in my setup is the cache drive. I upgraded the Crucial M4 240GB SSD to a WD Blue 1TB. Other than that, the setup is identical, but now I have these issues. My WD SSD is new, but I bought it on ebay. When I first installed it, it showed 0 power-on hours, but I didn't look at reads and writes. After moving about 20TB to the server it shows 141,797 reads and 772,168 writes. Is that normal? Just wondering if it is indeed new. Screenshots attached.
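
    For anyone else checking whether a used-market SSD is really new, a minimal sketch of pulling those counters from the terminal (the device path is a placeholder; yours may differ):

    smartctl -A /dev/sdb    # look at Power_On_Hours and the total host reads/writes attributes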

     

    Any insight? Here are the specs:

     

    E3-1245v3

    32GB DDR3 ECC

    HP Z230 motherboard

    LSI SAS9201-16i

    SUPERMICRO CSE-M35T-1B Hot swaps

    2 x WD 8TB Reds (Parity)

    8 x WD 6TB Reds

    1 x 1TB WD Blue SSD cache drive

    Sandisk Cruzer Fit boot drive

    Seasonic G-Series 650w

    home-server-diagnostics-20171010-1148.zip

    Screen Shot 2017-10-10 at 11.48.47 AM.jpg

    Screen Shot 2017-10-10 at 11.49.18 AM.jpg

  6. 14 hours ago, bjp999 said:

    I do not run this type of software, but I would carefully consider where you decide to keep your video files. Putting them on the array would cause constant parity I/O, and really drag performance of writing to the array for other purposes. I wouldn't do it unless my server were largely dedicated to this use.

     

    Writing to the cache or VM image file (which is probably on the cache drive) means heavy writes to an SSD. SSDs have a limited lifetime based on the amount of writes, and I would not use up that life on video surveillance footage. But SSDs would perform well on the parallel I/O, so it would be your decision whether it were worth the cost of SSD wear and tear.

     

    The UD option seems most appropriate to me. Purples are optimized for this type of application, and having one off to the side storing video would not impact the array performance.

     

    Just my $0.02.

    Yep! This is the conclusion I'd come to as well. I'm writing to a 4TB drive in UD right now. Plan to do backups of it via rclone to google drive.
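
    For anyone following along, a rough sketch of what that rclone backup might look like. The remote name "gdrive" assumes you've already set one up with rclone config, and the paths are just placeholders for my setup:

    rclone sync /mnt/disks/WD_Purple/BlueIris gdrive:blueiris-backup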

     

    BTW, thanks for all your help early in this process. I'll be sure to share pics and the build once it's done so I can show off the supermicro hotswaps all stacked up in the tower case!

     

    3 hours ago, Lev said:

    @andyps nice work on finding the BI optimizations, the ones already discussed in this thread are the big wins. I'll round it out with the remaining tweaks I'm aware of...

     

    Don't have BI do the overlays for time-stamps or other text; set that in the camera settings. Same with things like privacy zones. Any post-processing you can source at the camera and not off-load onto BI makes a difference. Granted, the trade-off is convenience; BI may make it easier to do it all in one place.

     

    Lower the frame-rate and use VBR on your cameras; go CBR if you really want to fine tune it. For my purpose, 30 fps @ full bit-rate is overkill. I use 15 fps and don't care enough to mess with bitrate, so I just leave it on VBR, still at full 1080p resolution though.

     

    Make sure to run BI as a Windows service, rather than leaving the desktop app open inside the VM. Close the app before you disconnect your VNC or RemoteDesktop connection to the VM.

     

    Review your triggers and alerts. The disk writes from triggers and the recordings they generate can be largely mitigated if you selectively pick only important cameras or masked zones within the frame. As for alerts, I used to get images and video clips from all 3 cameras in the front of my house, but in time realized it was overkill. A few images from just the single most critical camera was all I needed to know whether the alert was worth concern or not. If I need video I can easily access where it's uploaded on the cloud to take a look from my phone.

     

    Not really BI related, but I wish someone would have told me this sooner... if the IR on the camera isn't serving a purpose, then disable it by setting the camera appropriately. I have a camera in a dark windowless room in my basement pointed at the door to enter the room. If anyone comes in, I'm betting on the fact they will turn on the lights. No IR needed for that scenario. That camera is simply to make sure the wife can't come snooping around my server rack unnoticed! Any thief would have been caught on the outside cameras, hopefully.

     

    Ya, I've made some of those tweaks as well. Made a big difference for sure. Good info on the IR. I'm going to play with those options, thanks.

  7. On 9/13/2017 at 9:20 AM, Lev said:

    That's more detail, thanks.

     

    BI can be tweaked, if you haven't already, to run those 4 cameras and the VM on 2 cores and 4GB RAM.

     

    Ask yourself how long you need to keep your recordings. Maybe <7 days, maybe only on events. Maybe then you'd be able to just keep your recordings cached locally within your VM's vdisk (assuming that's on your cache drive) and then also use an off-site cloud backup to sync all your recordings as they are created.

     

    For 99% of people, cam recordings are throwaway data. Unless something happens, which you usually know within hours or days, the recordings are worthless. The paradigm though is very heavy on I/O writes, which makes putting it on the array a poor choice. As for UD? If you don't mind keeping that WD Purple constantly spun up for writes, then go for it. Odds are your VM vdisk size can be enough, especially when paired with what should be absolutely required for your setup: an off-site sync to a cloud server. If you think you're OK getting by with emailed attachments, that's your choice, but in the event something unfortunate happens, you're going to want the best data your cameras recorded at the time.

     

    Good luck. And yeah, stay with BlueIris. I've used it for years; I also thought the grass might be greener, but it's not. I just wish there was a docker container for unRAID for it so we could ditch the dedicated VM.

    I didn't realize I was running BI with such heavy CPU usage. I did a search for "reduce blue iris CPU usage" and followed a number of suggestions. BI went from 40% to 15-18%.

     

    Got the VM with BI up and running. I'm using a single core + 1 virtual core & 4GB of ram. Running great so far. Thanks for the suggestions.

     

    On 9/13/2017 at 10:09 AM, Tuftuf said:

    I run Blueiris in a VM... I used to run it with 2 cores and 4GB ram, this was with 4 cameras mixture of 720 and 1080.

    I also ran the same system taking in another 8 feeds remotely, but these were only 5 FPS / 20 KB/sec each, with limited internet causing issues even at that bandwidth!

    D2D recording.

     

    It now has 4 cores but still often reports high CPU usage (80-90%), although Windows itself reports it as around 40% less. This has not caused me any issues.

     

    I do write it to the array. I initially dedicated a disk to it by not allowing other shares on it; now that disk is shared with other media.

     

    I guess my system isn't under enough load for this really to be an issue at the moment, but I'll be honest, I hadn't really thought about it keeping that disk + parity spun up constantly, or maybe I did and have forgotten! I might change things a little, but for the sake of saying it works... it works.

     

    Do you run it with 2 physical cores? I'm running it right now with 1 physical and 1 virtual/hyperthreaded and it seems to be running great.

     

    My BI was running around 40% with 2 1080p cams, 1 720p cam, and 1 480p cam. After tweaking BI that dropped to 15-18%. Running it with the same tweaks on the VM I'm seeing right around 20-25% with unRAID reporting 25-28% going to the VM. I'm happy with that I think. Just wasn't sure what core mixture to run with.

     

    Thanks for info.

     

    On 9/13/2017 at 2:47 PM, NotYetRated said:

    For reference to a couple of your questions, I am also running a Windows VM for my BI setup. 6 POE cameras, 4 always recording to a WD purple drive set outside my array. I do send my alert snapshots to an array protected share, as BI lets you configure what goes where and when.

     

    I am running 6 cores of an e5-2670V1 with 4 gigs of ram. It has been plenty of power for me so far. All in all, it has been incredibly reliable. I pull up the stream via the blue iris android app from home, work, tablet, phone, on the road, etc. with no problem. I also run a Homeseer home automation system on a Windows VM, and have a few tablets set with Imperihome constantly showing a few of the camera feeds. Have yet to have any real hiccups, apart from some older cameras that didn't play nice and one poor cable termination (my fault) that gave me headaches for a couple of days before the root cause was discovered.

     

    A key to managing my CPU usage was to set each camera to blue iris DVR format, and direct to disk recording. This required me to set my overlays on each camera, but other than that no big trade-offs that have affected me anyways.

     

    Good info, thanks. Ya, I found that the "direct to disk" setting for recording was the biggest win for CPU usage. I had no idea I wasn't running it that way and that I was spending so much CPU power encoding the clips.

  8. 17 hours ago, Lev said:

    I've had some experience along the same path you're about to travel...

     

     

    Too many variables here given what little is known about your setup. A question like this is better suited for the Blue Iris message boards. All I can suggest is to take a look at what you're running Blue Iris on today, and make a determination from there what's the equivalent to try in UnRAID. With a VM it's pretty easy to add/subtract cores or memory to fine tune it.

     

     

    Yes, you can use UD; yes, you could also add a drive to the array so it's only used for this singular function. But IMHO both are poor choices, and this question makes me even more skeptical; I can't assume anything about your setup or your level of knowledge, because honestly I'd take neither of the paths you asked about. My advice is simply to stay where you're running Blue Iris now; don't migrate it to unRAID until you're far more experienced. It'll save you a lot of hassle early on, and give you a much less stressful environment to learn in.

     

     

    I have lots of experience, enough to know that it's a very narrow configuration in which it makes sense to run it on unRAID. I'm sure many will chime in telling me I'm wrong in response to this; if so, I dare them to post their requirements and configuration and justify it. As for ZoneMinder and the others, it's an easy answer... if you already purchased a BlueIris license, then my advice to you is simply don't bother with the others; the grass isn't greener. If you don't own a BI license and have the time and patience to explore, then go for it.

     

    To sum it up, don't dive head first into running critical applications on unRAID, whether it be your security cameras, your router software, your VPN, etc...

     

     

    I'll ask it over in the Blue Iris community. I was hoping to hear from other Blue Iris VM users as to what their setups looked like so I could make some better estimations, but I can also just fine tune mine as I go.

     

    If not UD and if not a disk as part of the array, then what?

     

    Here are my specs:

     

    E3-1245v3

    32GB DDR3 ECC

    HP Z230 motherboard

    LSI SAS9201-16i

    SUPERMICRO CSE-M35T-1B Hot swaps

    8 x WD 6TB Reds

    1 x 256 Crucial M4 SSD cache drive

    Sandisk Cruzer Fit boot drive

    Seasonic G-Series 750w

     

    Here are my current dockers:

     

    deluge

    Krusader

    Sonarr

    Radarr

    PlexMediaServer

    Sabnzbd

     

    For Blue Iris I am currently running 4 1080p Hikvision POE cams.

     

    I am constantly reading, asking, trying, and learning with unRAID, is that not the point? I think unRAID can be a good option for me, so I am exploring that. Only way I will be "far more experienced" is by learning and doing.

     

    I'm happy with Blue Iris. I have been using it for years, which is why I am asking about a VM for it. I am, however, open to other options should someone make a convincing argument for an alternative that may work better for me/my setup. If not, I'm going to run Blue Iris, which I have been happy with.

     

    I appreciate you taking the time to comment, but the sentiment of "don't try, stick with what you've got because I'm skeptical of you/your abilities" is weird to me. I'm not staying where I'm at with my current setup; it doesn't fulfill all my needs. So I am learning about unRAID to see if it can meet my needs, and I am here asking questions.

  9. I'm in the process of setting up my server (e3-1245v3 w/32GB ram). On my current home server (just a windows box) I run Blue Iris connected to some POE security cameras. So for my unRAID server, I'll need a Win10 VM for this function. I have a couple of questions:

     

    1) How many cores and how much memory should I allocate for this? I don't want to over- or under-allocate.

     

    2) I was originally planning to set up a single drive (WD Purple) in Unassigned Devices dedicated to just my security cam recordings, but it would be nice to have the protection of the array. Is there a way to add my drive to the array so it's protected, but then only allow it to be used for this singular function?

     

    As a side note, I also noticed there is a ZoneMinder docker for unRAID now, so if anyone has any security cam experience with unRAID, I'm open to other suggestions.

     

    Thanks!

  10. Hi there, I'm still setting up my unRAID server. It's a E3-1245v3 w/32GB of ram. I'm currently running the following dockers:

     

    deluge

    Krusader

    Sonarr

    Radarr

    PlexMediaServer

    Sabnzbd

     

    When I started to bring a bit of my data over to the new server, I plugged in a drive from my current server and did a copy of the data to the new array via Krusader. During the copy, pretty much everything else on the server was running slow or hanging. 

     

    Now that I have completed that transfer everything is up and moving, but I am experiencing similar issues when I run other tasks. For example, when I add 1-3 new items in Radarr/Sonarr and it activates Sabnzbd, the rest of the system is slow. I tried launching other dockers and they were very slow and unresponsive.

     

    This is concerning because I still need to get my Windows VM up and running with my BlueIris security camera software. I need this server to do all these things.

     

    My current server is just a windows 10 box running everything my new unRAID server is running, but it's doing it all flawlessly with a slower E3-1225v3 chip and less ram. What am I doing wrong?

     

    Happy to give more info where needed. I have attached my diagnostics files to this post. Thanks

    house-server-diagnostics-20170911-1336.zip

  11. I'm closing in on my final components for my unRAID build and I need to select my 10GbE networking. My server will be in a separate room from my main desktop (45ft away), so I need distance to be supported. I'm really new to 10GbE, but I've been doing lots of reading. As I understand it, SFP+ is limited to 10m cabling with standard DAC, but with transceivers connected to fiber, it can go much farther. Is this correct?

     

    10BASE-T ($400-450):

    Dell W605R or Intel X540-T1

    ASUS XG-U2008

    There is also this 2 port card (not sure what advantage this gives me): AOC-STG-I2T

     

    SFP+ ($475ish):

    Mellanox Connectx-2

    Cisco SFP-10G-SR Transceivers

    OM3 Fiber Cabling

    TRENDnet TEG-30284

     

    Is there an advantage to one or the other? I definitely like the switch I have selected for the SFP+ setup over the RJ45 setup. 2 extra 10GbE ports for future expandability. That said, I do already have several rooms with CAT6 runs, so that makes 10BASE-T attractive too.

     

    Has anyone found one to be faster/better than the other with unRAID?

     

    To save money, I might just do two cards and a direct connect at first, then add in the switch later down the road.
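
    In case anyone else goes the direct-connect route first, my understanding is it's just a matter of putting both ends on their own little subnet. A rough sketch of what that might look like from the unRAID console (the interface name and addresses are placeholders; the same thing can be done through the network settings GUI):

    ip addr add 10.10.10.1/30 dev eth1   # desktop side would get 10.10.10.2/30
    ip link set eth1 up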

     

    Love any feedback and personal experience with setups similar to this. Hoping to run a Raid 10 cache with at least 4 SSDs, so I would like to maximize over-network data rates. Thanks!

  12. 2 hours ago, bjp999 said:

    I am running the stock fans. They are not silent. But they are terrific ball bearing fans that will last forever. And they move a lot of air. They are 92mm, not 80mm. The sound they make is the sound of air, not squeaks or whines.

     

    In a work environment, they'd be fine IMO. If they are not, you can always look at options (quieter fans, fan controller, etc.). In a pristine listening environment where you want the sound of a pin drop to have you jumping out of your seat, they are going to be too loud (probably anything short of locating the server remotely is going to be too loud IMO, which is what I have).

     

    The 10GbE network makes a lot of sense for your application.

     

    I thought you had said that your media files were about 5G each. I'd consider 250G SSDs (50 x 5G files) on the desktops and a 500G SSD (100 x 5G files) on the server. If you have space issues, you'd be able to 2x those into RAID0 configs without much trouble. But this is your business and you know a lot more about the numbers of work items in queues and the amount of space you need on the workstations vs the server. I can only give an amateur's perspective on what your business needs.

     

    But I think you are looking at a good server case. I have the 900 Antec for my backup server (with 2 extra 5in3s outside the server). The case is not quite as deep (front to back) as my Sharkoon. If you keep the 5in3s slid in about 3/4 of the way, you can get everything connected, and once done, slide them into place and screw them in. (BTW, you will need a DEEP THROATED C-CLAMP to bend back the little ledges in the case.) I had a little trouble with a deep controller card in my 900 server. Making matters worse, it has SATA ports on the south end, further increasing the depth it needs. It is a tight fit with that controller in place. The -16i is quite a bit shorter, and you shouldn't have much trouble. Can't speak for the 10GbE cards, but they should be fine. Worst comes to worst, you leave the 5in3s pushed in 90% of the way and have a little extra depth, which is my current setup. If (when) I had a different controller, I'd be able to push them all the way in.

     

    Good luck! Seems you're off on an adventure getting it all assembled. Take your time! Take some pictures and post them. Also, I'm interested in the 10GbE network option, and will be very interested in the hardware you get and your experiences.

     

    Oh, one other thing. The used CSE-M35Ts don't always come with the tiny little flat head screws you will need. You can buy them on Amazon cheap.

     

    At the moment I have two active video projects. Both are between 100-150GB with individual video files sized between 4-6GB each (plus a large assortment of smaller files). I also have two active photo projects. One is 300GB (with over 8,000 raw images sized around 20-25MB each) and the other is 150GB. It's fairly common for me to have 2-4 active photo projects and 2-3 active video projects. So I'm thinking, minimum, I need 1-2TB of fast "working" storage. I'm currently leaning towards running the 2TB Raid 10 SSD cache on the server option. It gives me twice the space as the M.2 NVMe local drive while still maintaining very fast speeds (not as good as NVMe, but more than good enough). I also like that I can expand that in the future.

     

    I'm glad you pointed out needing the C-Clamp. I just figured out today that the Antec 1200 has the tabs in the 5.25" bays. I was scratching my head about how to handle those. Great suggestion.

     

    Figuring out my 10GbE solution is definitely next on the list. Looking forward to having that set up. 

     

    Good note on the screws for the hot swap cages. I figured they wouldn't come with any hardware given the discounted price I paid.

     

    I'll definitely be taking my time! I plan to do the build in stages so I don't get overwhelmed. Thankfully, last week when I was looking to purchase the E3-1245 V3 CPU to upgrade my 1225, I found an auction for a full system (HP Z230) for the price of the CPU. Figured it wouldn't hurt to bid and actually won it. So now I have a second system to use for the unRAID build, and I don't have to take my TS140 out of rotation until the unRAID build is up and running. Then I can either sell the TS140 or hang onto it for another project. Should take the stress off the build progress. Thanks!

     

  13. 28 minutes ago, bjp999 said:

     

    With that TS140 motherboard you get 5 SATA ports, leaving you needing 15 more.

     

    The LSI SAS9201-16i would give you 16 ports. That is enough for your 15 drives + 1 SSD.

     

    If you need more (I don't think you should need to), you could buy a cheaper 2/4 port card. Or an LSI SAS9201-8i. You may even have something in your extra parts bin.

     

    That's one I've been looking at. That would cover my minimum requirements. I was considering more ports if I did a multiple-SSD cache pool: 4 x 1TB in Raid 10, for example, to connect to via 10GbE, mount on my desktop, and use as my high-speed working/editing drive. That would be $1100-ish. If they are 500ish MB/s SSD drives, that would potentially saturate the 10GbE on reads and nearly on writes.
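
    Rough back-of-the-envelope math behind that claim (assuming ~500 MB/s per SATA SSD and Raid 10 as two striped mirror pairs):

    10GbE line rate: 10 Gbit/s / 8 = 1250 MB/s (before protocol overhead)
    Raid 10 reads:   up to all 4 drives at once, ~4 x 500 = 2000 MB/s
    Raid 10 writes:  limited to the 2 mirror pairs, ~2 x 500 = 1000 MB/s

    So reads should saturate the link and writes land just under it.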

     

    Another option would be a 1TB M.2 NVMe drive in my desktop ($450) and a 2TB single SSD in the server as the cache disk ($550). I would have less working drive space, but it would be super super fast. I would lose transfer speeds, but 400-500MB/s is more than enough for just transferring back and forth.

     

    Seems like the first option would be my best compromise. More usable working space and really fast transfers/working speed. Any other options I should consider?

     

    Also on the CSE-M35T-1B cages, are you running the stock fans? If so, how loud are they?

  14. @bjp999 I ended up with an Antec Twelve Hundred case. I snagged it on ebay last night for less than $100 with shipping. Also grabbed 4 of the SuperMicro hot swaps. Did a best offer on the set and he accepted this morning.

     

    Now the only big purchases remaining are the HBA, the 10GbE cards, and whatever SSD config I decide on. I think I need to go with an HBA with a lot of ports. I only have the following slots to work with:

     

    1 PCIe Gen3 x16 slot

    1 PCIe Gen2 x4 slot / x16 connector

    1 PCIe Gen2 x1 slot / x4 connector

    1 PCIe Gen2 x1 slot

     

     

  15. 19 hours ago, johnnie.black said:

     

    One

     

     

    Yes, it just won't show up in the GUI

     

    So if it's not the unRAID array or the cache drive, the only way for it to show up in the GUI is with the unassigned devices plugin? I could utilize a hardware raid as a volume or have multiple SSDs through that plugin. I will spend more time trying to find people who have been successful with hardware raids. Thanks for the info.

     

    15 hours ago, 1812 said:

    Post in this thread and maybe @limetech will consider implementing multiple cache pools someday!

     

     

     

    Very cool! I will post in there as well. That would be a great feature. Thanks for the heads up.

  16. 44 minutes ago, johnnie.black said:

     

    The cache pool supports RAID5, but btrfs RAID5 is still experimental and not stable for production use.

     

     

    A ZFS pool is not controlled by unRAID; you'd need to use the CLI.

     

     

    So I'd want to go with Raid 10 for my cache pool then. Am I able to have multiple cache pools, or am I limited to the one?

     

    To clarify on the ZFS pool: it would not be controlled/created by unRAID, but would it be seen by unRAID, and accessible through the network and to the various VMs and dockers, etc.? Would it just show up to unRAID as a volume? Is there an advantage to doing it that way vs. setting up a hardware RAID 5 and bringing that in as an unassigned volume/drive?

     

    Thanks for the info and patience with the questions!

  17. 5 hours ago, johnnie.black said:

    An HDD RAID10 pool will perform better than a single HDD; it should perform somewhat below twice the performance of a single disk.

     

    The ZFS plugin allows you to have one (or more) ZFS pools outside the array or cache; they can't be used there.

     

    For the 4 7200rpm drives that I have (Toshiba X300), is Raid 10 my best bet then? Previously I had them in a G-Speed Studio Thunderbolt 2 enclosure in Raid 5. That gave me 450-500MB/s read and write speeds, which would be my ideal performance.

     

    So basically, if I used the plugin, I would see the ZFS array as a separate but accessible volume within UnRaid (and managed by the plugin)? Ideally I'd like to have my UnRaid array that I keep most data on and could keep growing. Then I could also have a high-performance array that I could work from (6-12TB). A bonus would be an SSD cache intermediary. I am 100% okay with and understand that it won't be a part of the UnRaid array.

     

    Am I understanding this correctly?
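
    If I am, then from the reading I've done, a pool like that would be created at the CLI as striped mirrors (ZFS's Raid 10 equivalent). A rough sketch, with the pool name and device paths just placeholders for my 4 Toshibas:

    zpool create scratch mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
    zpool status scratch    # confirm the two mirror vdevs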

     

     

  18. As I'm researching my new build, I am still trying to wrap my head around how I will have a "working drive" or scratch disk. Ideally I'd have multiple terabytes for this purpose, and I'd love to no longer have an external attached to my desktop.

     

    I have been searching and reading a ton of threads on the forums, but I still don't have a clear picture of what's possible. I know most people use an SSD (or multiple) as a cache drive and that technically multiple raid levels are possible. So at first I thought I might have a few SSDs as my cache, access this on the server through 10GbE, and just make do with less space. However, I am now wondering if I could use 4 7200rpm HDDs in Raid 10 as my "scratch disk" that I access on the server via 10GbE (a sketch of the conversion command I keep seeing is below). Does anyone run something like this? Is there a reason not to do this?
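
    From the threads I've read, a cache pool starts out as btrfs RAID1 and then gets converted with a balance. A hedged sketch, assuming the pool is mounted at the standard /mnt/cache:

    btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache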

     

    Also, if that is possible, is it also possible to run a second cache pool of an SSD (or multiple) to serve as an intermediary?

     

    At the very minimum, I'll install an additional 1-2TB SSD in my desktop and use that as a scratch disk, and keep my unraid config fairly standard, but I'd love more space that's also fairly speedy, so I'm looking at my options.

     

    Thanks!