Everything posted by andyps

  1. I have posted the issue on the Plex forums, so hopefully I get some insight there. However, even with the downgrade, the issues I am having are still quite bad: playback locks up and the server becomes unresponsive regularly, no matter which version I downgrade to.
  2. Hi there, my Plex server had started becoming unresponsive; Plex apps couldn't discover the server, etc. In the logs I was getting a bunch of "connect: Connection timed out" errors. I tried reinstalling the Docker container and deleting the Docker image and starting over, but nothing worked until I downgraded Plex. After downgrading, the server is discovered much more quickly and media plays, but every so often it dies again and requires a Docker container restart. I am also still getting a huge number of these errors (like pages and pages):

     connect: Connection timed out
     connect: Connection timed out
     connect: Connection timed out

     I'm running unRAID 6.6.1 and I have attached my diagnostics to this message. Also, when I reinstall the Docker container or make changes, here is the output:

     root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='plex' --net='host' -e TZ="America/Chicago" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -e 'VERSION'='' -v '/mnt/user/Media/':'/media':'rw' -v '/tmp/':'/transcode':'rw' -v '/mnt/user/appdata/plex':'/config':'rw' 'linuxserver/plex'
     0790f41fd9ad67b65bb35622056278db63031673dbb1bfec20b5c4fdfcce97cd
     The command finished successfully!
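With the linuxserver/plex image, the downgrade described above is done by pinning the VERSION environment variable instead of leaving it empty. A minimal sketch of the same run command with a pin (the version string below is hypothetical — substitute whichever build proved stable for you):

```shell
# Sketch: pin Plex to a specific build so the container stops
# auto-updating on restart. The VERSION value here is a made-up
# example, not a recommendation.
docker run -d --name='plex' --net='host' \
  -e TZ="America/Chicago" \
  -e PUID='99' -e PGID='100' \
  -e VERSION='1.13.9.5456-ecd600442' \
  -v '/mnt/user/Media/':'/media':'rw' \
  -v '/tmp/':'/transcode':'rw' \
  -v '/mnt/user/appdata/plex':'/config':'rw' \
  'linuxserver/plex'
```

In the unRAID Docker template this corresponds to putting the build string in the VERSION field rather than leaving it blank (blank/latest tracks the newest release).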
  3. Oh sorry, I missed this a while back. I use the web viewer occasionally, but I do 95% of my monitoring with the phone app. I also have geofencing set up with the app so that our indoor camera is auto-disabled when we are home.
  4. Yes, I'd definitely recommend running the drive outside the array using Unassigned Devices. I've been running that config since this post and have had no issues. To set up the second drive, I simply edited my VM in the VM manager of unRAID and added a 2nd vDisk location. I selected the option to specify it manually and used my drive's name identifier. Basically that field looks like this:

     /dev/disk/by-id/ata-DRIVE-NAME

     You can just replace the "DRIVE-NAME" part, hit update, and be golden. I am running a 4TB WD Red with no partition issues. Hope that's helpful!
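To find the exact identifier to paste into that field, listing the by-id symlinks works well; a quick sketch (the device name in the comment is a made-up example):

```shell
# Stable device identifiers: the ata-* symlinks encode model and serial,
# so they survive reboots and controller/port changes.
ls -l /dev/disk/by-id/ | grep 'ata-'
# A vDisk path then looks something like (hypothetical name):
# /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1234567
```

Avoid the plain /dev/sdX names for this, since those can change between boots.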
  5. I have a new install of unRAID. When I try to install an app (Krusader or Plex, for example), unRAID freezes for a while. Basically, I click the install button below the app in Community Applications, I set up the config, and then when I accept it goes to a white screen (with the unRAID banner still at the top) and stays there for 5-8 minutes. Chrome asks if I want to kill the tab process because it's not responding. I hit wait, and after the 5-8 minutes it finally shows the add container screen, pulls the container, and presents the "done" button. After this the app functions more or less normally, though Krusader has completely locked up twice when trying to transfer several terabytes of data.

     This setup was working flawlessly, but then I wanted to try Windows + SnapRAID/DrivePool out. I wasn't happy with that setup, so I have now purchased unRAID and moved back. The only thing I have changed in my setup is the cache drive: I upgraded the Crucial M4 240GB SSD to a WD Blue 1TB. Other than that the setup is identical, but now I have these issues.

     My WD SSD is new, but I bought it on eBay. When I first installed it, it showed 0 power-on hours, but I didn't look at reads and writes. After moving about 20TB to the server it shows 141,797 reads and 772,168 writes. Is that normal? Just wondering if it is indeed new. Screenshots attached. Any insight?

     Here are the specs:
     E3-1245v3
     32GB DDR3 ECC
     HP Z230 motherboard
     LSI SAS9201-16i
     SUPERMICRO CSE-M35T-1B hot swaps
     2 x WD 8TB Reds (parity)
     8 x WD 6TB Reds
     1 x 1TB WD Blue SSD cache drive
     SanDisk Cruzer Fit boot drive
     Seasonic G-Series 650W
  6. Yep! This is the conclusion I'd come to as well. I'm writing to a 4TB drive in UD right now and plan to do backups of it via rclone to Google Drive. BTW, thanks for all your help early in this process. I'll be sure to share pics of the build once it's done so I can show off the Supermicro hot swaps all stacked up in the tower case!

     Ya, I've made some of those tweaks as well. Made a big difference for sure. Good info on the IR. I'm going to play with those options, thanks.
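The rclone backup mentioned above can be sketched roughly like this. The remote name "gdrive" and the mount path are assumptions for illustration — a Google Drive remote would first be created interactively with `rclone config`:

```shell
# Sketch: one-way backup of the UD drive to Google Drive.
# "gdrive" is a hypothetical remote name; /mnt/disks/backup_4tb is a
# hypothetical Unassigned Devices mount point.
rclone sync /mnt/disks/backup_4tb gdrive:unraid-backup \
  --transfers 4 \
  --checkers 8 \
  --log-file /var/log/rclone-backup.log
```

Note that `sync` makes the destination match the source, deleting remote files that no longer exist locally; `rclone copy` is the non-destructive alternative.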
  7. I didn't realize that I was running BI so CPU-heavily. I did a search for "reduce Blue Iris CPU usage" and followed a number of suggestions. BI went from 40% to 15-18%.

     Got the VM with BI up and running. I'm using a single core + 1 virtual core and 4GB of RAM. Running great so far. Thanks for the suggestions.

     Do you run it with 2 physical cores? I'm running it right now with 1 physical and 1 virtual/hyperthreaded and it seems to be running great. My BI was running around 40% with 2 1080p cams, 1 720p cam, and 1 480p cam. After tweaking BI that dropped to 15-18%. Running it with the same tweaks on the VM I'm seeing right around 20-25%, with unRAID reporting 25-28% going to the VM. I'm happy with that, I think. Just wasn't sure what core mixture to run with. Thanks for the info.

     Good info, thanks. Ya, I found that the "direct to disk" setting for recording was the biggest win for CPU usage. I had no idea I wasn't running it that way and that I was spending so much CPU power encoding the clips.
  8. Disk speeds are all 138-142MB/s. Parity check speed is around 162MB/s.
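Those figures imply a parity check of roughly half a day; a back-of-envelope sketch (assumes 8 TB parity drives, as listed elsewhere in the thread, and that the 162 MB/s average holds — real checks slow toward the inner tracks, so actual runs take longer):

```shell
# Rough parity-check duration estimate.
size_mb=$((8 * 1000 * 1000))   # 8 TB in MB (decimal, as vendors rate drives)
speed_mb_s=162                 # average check speed from the post
secs=$((size_mb / speed_mb_s))
printf '~%d hours\n' $((secs / 3600))
```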
  9. I'll ask over in the Blue Iris community. I was hoping to hear from other Blue Iris VM users what their setups looked like so I could make some better estimations, but I can also just fine-tune mine as I go.

     If not UD, and if not a disk as part of the array, then what? Here are my specs:
     E3-1245v3
     32GB DDR3 ECC
     HP Z230 motherboard
     LSI SAS9201-16i
     SUPERMICRO CSE-M35T-1B hot swaps
     8 x WD 6TB Reds
     1 x 256GB Crucial M4 SSD cache drive
     SanDisk Cruzer Fit boot drive
     Seasonic G-Series 750W

     Here are my current dockers:
     deluge
     Krusader
     Sonarr
     Radarr
     PlexMediaServer
     Sabnzbd

     For Blue Iris I am currently running 4 1080p Hikvision PoE cams.

     I am constantly reading, asking, trying, and learning with unRAID; is that not the point? I think unRAID can be a good option for me, so I am exploring that. The only way I will be "far more experienced" is by learning and doing. I'm happy with Blue Iris. I have been using it for years, which is why I am asking about a VM for it. I am, however, open to other options should someone make a convincing argument for an alternative that may work better for me and my setup. If not, I'm going to run Blue Iris, which I have been happy with. I appreciate you taking the time to comment, but the sentiment of "don't try, stick with what you've got because I'm skeptical of you/your abilities" is weird to me. I'm not staying where I'm at with my current setup; it doesn't fulfill all my needs. So I am learning about unRAID and whether it can meet my needs, which is why I am here asking questions.
  10. I'm in the process of setting up my server (E3-1245v3 w/32GB RAM). On my current home server (just a Windows box) I run Blue Iris connected to some PoE security cameras, so for my unRAID server I'll need a Win10 VM for this function. I have a couple of questions:

     1) How many cores and how much memory should I allocate for this? I don't want to over- or under-allocate.
     2) I was originally planning to set up a single drive (WD Purple) in Unassigned Devices dedicated to just my security cam recordings, but it would be nice to have the protection of the array. Is there a way to add my drive to the array so it's protected, but then only allow it to be used for this singular function?

     Also, as a side note, I noticed there is a ZoneMinder docker for unRAID now, so if anyone has security cam experience with unRAID, I'm open to other suggestions. Thanks!
  11. Also, if helpful, here are the full server specs:
     E3-1245v3
     32GB DDR3 ECC
     HP Z230 motherboard
     LSI SAS9201-16i
     SUPERMICRO CSE-M35T-1B hot swaps
     8 x WD 6TB Reds
     1 x 256GB Crucial M4 SSD cache drive
     SanDisk Cruzer Fit boot drive
     Seasonic G-Series 750W
  12. Hi there, I'm still setting up my unRAID server. It's an E3-1245v3 w/32GB of RAM. I'm currently running the following dockers:
     deluge
     Krusader
     Sonarr
     Radarr
     PlexMediaServer
     Sabnzbd

     When I started to bring a bit of my data over to the new server, I plugged in a drive from my current server and copied the data to the new array via Krusader. During the copy, pretty much everything else on the server was running slow or hanging. Now that I have completed that transfer everything is up and moving, but I am experiencing similar issues when I run other tasks. For example, when I add 1-3 new items in Radarr/Sonarr and it activates Sabnzbd, the rest of the system is slow. I tried launching other dockers and they were very slow and unresponsive.

     This is concerning because I still need to get my Windows VM up and running with my Blue Iris security camera software. I need this server to do all these things. My current server is just a Windows 10 box running everything my new unRAID server is running, but it's doing it all flawlessly with a slower E3-1225v3 chip and less RAM. What am I doing wrong? Happy to give more info where needed. I have attached my diagnostics files to this post. Thanks!
  13. Ya, the one I linked to is $88. Those prices I put in my post are for all the components together.
  14. I'm closing in on my final components for my unRAID build and I need to select my 10GbE networking. My server will be in a separate room from my main desktop (45ft away), so I need distance to be supported. I'm really new to 10GbE, but I've been doing lots of reading. As I understand it, SFP+ is limited to roughly 10m of cabling with standard DAC, but with transceivers connected to fiber it can go much farther. Is this correct?

     10GBASE-T ($400-450):
     Dell W605R or Intel X540-T1
     ASUS XG-U2008
     There is also this 2-port card (not sure what advantage this gives me): AOC-STG-I2T

     SFP+ ($475ish):
     Mellanox ConnectX-2
     Cisco SFP-10G-SR transceivers
     OM3 fiber cabling
     TRENDnet TEG-30284

     Is there an advantage to one or the other? I definitely like the switch I have selected for the SFP+ setup over the RJ45 setup: 2 extra 10GbE ports for future expandability. That said, I do already have several rooms with CAT6 runs, so that makes 10GBASE-T attractive too. Has anyone found one to be faster/better than the other with unRAID? To save money, I might just do two cards and a direct connection at first, then add in the switch later down the road. Would love any feedback and personal experience with setups similar to this. I'm hoping to run a RAID 10 cache with at least 4 SSDs, so I would like to maximize over-the-network data rates. Thanks!
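Whichever option wins out, the usual way to verify a direct-connect 10GbE link actually delivers is iperf3 between the two machines; a minimal sketch (the hostname "tower" is a placeholder):

```shell
# On the server:
iperf3 -s

# On the desktop (replace "tower" with the server's hostname or IP):
iperf3 -c tower -P 4 -t 30
# -P 4 runs four parallel streams, -t 30 runs the test for 30 seconds.
# A healthy 10GbE link typically reports around 9.4 Gbits/sec aggregate.
```

Testing with parallel streams matters because a single TCP stream often can't fill a 10Gb pipe without tuning.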
  15. At the moment I have two active video projects. Both are between 100-150GB, with individual video files sized between 4-6GB each (plus a large assortment of smaller files). I also have two active photo projects. One is 300GB (with over 8,000 raw images sized around 20-25MB each) and the other is 150GB. It's fairly common for me to have 2-4 active photo projects and 2-3 active video projects. So I'm thinking, minimum, I need 1-2TB of fast "working" storage. I'm currently leaning towards the option of running the 2TB RAID 10 SSD cache on the server. It gives me twice the space of the M.2 NVMe local drive while still maintaining very fast speeds (not as good as NVMe, but more than good enough). I also like that I can expand it in the future.

     I'm glad you pointed out needing the C-clamp. I just figured out today that the Antec 1200 has the tabs in the 5.25" bays; I was scratching my head about how to handle those. Great suggestion. Figuring out my 10GbE solution is definitely next on the list. Looking forward to having that set up. Good note on the screws for the hot swap cages. I figured they wouldn't come with any hardware given the discounted price I paid.

     I'll definitely be taking my time! I plan to do the build in stages so I don't get overwhelmed. Thankfully, last week when I was looking to purchase the E3-1245 V3 CPU to upgrade my 1225, I found an auction for a full system (HP Z230) for the price of the CPU. I figured it wouldn't hurt to bid and actually won it. So now I have a second system to use for the unRAID build, and I don't have to take my TS140 out of rotation until the unRAID box is up and running. Then I can either sell the TS140 or hang onto it for another project. Should take the stress off the build progress. Thanks!
  16. That's one I've been looking at. It would cover my minimum requirements; I was considering more ports in case I did a multiple-SSD cache pool: 4 x 1TB in RAID 10, for example, to connect to via 10GbE, mount on my desktop, and use as my high-speed working/editing drive. That would be $1100-ish. If they are 500ish MB/s SSDs, that would potentially saturate the 10GbE on reads and nearly saturate it on writes.

     Another option would be a 1TB M.2 NVMe drive in my desktop ($450) and a 2TB single SSD in the server as the cache disk ($550). I would have less working drive space, but it would be super, super fast. I would lose transfer speeds, but 400-500MB/s is more than enough for just transferring back and forth.

     Seems like the first option would be my best compromise: more usable working space and really fast transfer/working speeds. Any other options I should consider? Also, on the CSE-M35T-1B cages, are you running the stock fans? If so, how loud are they?
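The saturation estimate in that first option checks out arithmetically; a quick sketch (assumes 4 SSDs at ~500 MB/s each, RAID 10 reading from all four drives and writing through two mirror pairs, and 10GbE at ~1250 MB/s before protocol overhead):

```shell
# Back-of-envelope RAID 10 throughput vs. 10GbE.
ssd_mb_s=500
nic_mb_s=1250                   # 10 Gbit/s ~= 1250 MB/s raw
read_mb_s=$((4 * ssd_mb_s))     # reads can stripe across all four SSDs
write_mb_s=$((2 * ssd_mb_s))    # writes land on both mirrors: ~2 drives' worth
echo "reads:  ${read_mb_s} MB/s (NIC-limited at ${nic_mb_s})"
echo "writes: ${write_mb_s} MB/s (just under the NIC's ${nic_mb_s})"
```

So reads would be capped by the network, and writes would sit just below the link rate, matching the "saturate on reads, nearly on writes" reasoning above.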
  17. @bjp999 I ended up with an Antec Twelve Hundred case. I snagged it on eBay last night for less than $100 with shipping. Also grabbed 4 of the Supermicro hot swaps; I did a best offer on the set and he accepted this morning. Now the only big purchases remaining are the HBA, the 10GbE cards, and whatever SSD config I decide on. I think I need to go with an HBA with a lot of ports. I only have the following slots to work with:
     1 PCIe Gen3 x16 slot
     1 PCIe Gen2 x4 slot / x16 connector
     1 PCIe Gen2 x1 slot / x4 connector
     1 PCIe Gen2 x1 slot
  18. Sorry, yes, I understand that now about ZFS. I was referring to a hardware RAID 5 array or a RAID 10 array passed through via the UD plugin: not part of the array or managed by unRAID, but able to be seen and shared.
  19. +1 Would love to see this feature.
  20. So if it's not the unRAID array or the cache drive, the only way for it to show up in the GUI is with the Unassigned Devices plugin? I could utilize a hardware RAID as a volume or have multiple SSDs through that plugin. I will spend more time trying to find people who have been successful with hardware RAIDs. Thanks for the info.

     Very cool! I will post in there as well. That would be a great feature. Thanks for the heads up.
  21. So I'd want to go with RAID 10 for my cache pool then. Am I able to have multiple cache pools, or am I limited to the one?

     To clarify on the ZFS pool: it would not be controlled/created by unRAID, but would it be seen by unRAID and accessible through the network and to the various VMs, dockers, etc.? Would it just show up to unRAID as a volume? Is there an advantage to doing it that way vs. setting up a hardware RAID 5 and bringing that in as an unassigned volume/drive? Thanks for the info and patience with the questions!
  22. For the 4 7200rpm drives that I have (Toshiba X300), is RAID 10 my best bet then? Previously I had them in a G-SPEED Studio Thunderbolt 2 enclosure in RAID 5, which gave me 450-500MB/s read and write speeds. That would be my ideal performance.

     So basically, if I used the plugin, I would see the ZFS array as a separate but accessible volume within unRAID (and managed by the plugin)? Ideally I'd like to have my unRAID array that I keep most data on and can keep growing, plus a high-performance array that I can work from (6-12TB). A bonus would be an SSD cache intermediary. I am 100% okay with, and understand, that it won't be a part of the unRAID array. Am I understanding this correctly?
  23. Also, I found this while searching: is this just for ZFS within a VM, or does it allow for a ZFS array within unRAID (alongside the standard unRAID array)? Sorry, I'm very new to a lot of this stuff and trying to figure it out.
  24. As I'm researching my new build, I am still trying to wrap my head around how I will have a "working drive" or scratch disk. Ideally I'd have multiple terabytes for this purpose, and I'd love to no longer have an external drive attached to my desktop. I have been searching and reading a ton of threads on the forums, but I still don't have a clear picture of what's possible. I know most people use an SSD (or several) as a cache drive and that, technically, multiple RAID levels are possible. So at first I thought I might have a few SSDs as my cache, access this on the server through 10GbE, and just make do with less space. However, I am now wondering if I could use 4 7200rpm HDDs in RAID 10 as my "scratch disk" that I access on the server via 10GbE. Does anyone run something like this? Is there a reason not to do this? Also, if that is possible, is it also possible to run a second cache pool of an SSD (or several) to serve as an intermediary?

     At the very minimum, I'll install an additional 1-2TB SSD in my desktop, use that as a scratch disk, and keep my unRAID config fairly standard, but I'd love more space that's also fairly speedy, so I'm looking at my options. Thanks!