Lev

Members
  • Posts: 369
  • Days Won: 3

Everything posted by Lev

  1. Confirmed, this makes Windows 10 now report the expected 4K size (it previously reported 1MB). UnRaid is version 6.4-rc7a, smbd v4.6.6.
  2. I love this app, thank you @Squid, many times in the past I wished I had set this up to run automatically. It would have saved me a lot of grief! The script works great, and I could use some advice on the best way to make it more efficient. As has been noted many times already in this thread, metadata folders from Plex, Sonarr, Radarr, etc. contain a lot of small files, and working with or storing such a large number of small files is inefficient for both disk operations and file system block size. After reading this thread, there seem to be a few ways to solve it: (a) add a feature to tar or compress the metadata, which seems like the right solution, but the feature doesn't exist today (a rough sketch of the idea is at the end of this post list); (b) exclude the sub-directories with the metadata causing this inefficiency from the backup, which isn't much of a solution IMO since that data then isn't backed up at all, though one could argue whether it's even worth backing up in the first place given it can easily be re-downloaded; or (c) skip the dated backup option, which would let rsync compare much more quickly and replace only the files that have changed, and since all this media metadata is pretty static, this may be the best option to pick. What are everyone's thoughts?
  3. Looks like the deal is over, price is back up. Amazed how long it lasted.
  4. @coppit thanks! looking forward to trying this!
  5. Agree with OP, we need more options for containers, like DNS, for the reasons he cited. Docker keeps improving on Unraid, thanks I hear to bonienl; hopefully this request will make the list. The 6.4-rc's are awesome, and this feature would be another step forward.
  6. With 8TB and larger drives available now, I can't see why anyone would put that many drives in a single system with only 2 parity drives. Allow multiple Pro keys per install: two keys would allow two arrays of 28 data + 2 parity. If we were allowed to run multiple license keys, I would most likely run three keys per install, giving three arrays of 18 data + 2 parity for a total of 60 drives in my JBOD chassis. LOVE IT. Thank you for the reply! Genius. Tom, do this and please take my money already.
  7. jonathanm, thank you very much for the clarification! That was exactly what I needed to know. Tom, if you're watching, start thinking about that next license level above Pro. It's been a few years since I bought my license, and I would gladly put down some new license money next year for a higher device limit for the array. I'm thinking 72+ devices. A hot-swap parity drive to go with it would be dope.
  8. Need some clarification. I've read in other posts that the Pro license is unlimited, yet I also see the same 30-slot limit in the GUI. How do I get that limit to go higher, or be unlimited?
  9. I'm trying to improve the performance of NZBGet's unpack, which is slow when the mover is running. Unpack times go from 1 minute to 10 minutes when the mover runs while NZBGet is unpacking to the cache drive, and the PP-Queued items start stacking up quickly in NZBGet. At all other times, when the mover is not running, this setup runs fine. I thought about getting a larger cache drive, but the mover would still have to run just as much to move the data. Here's my config, in the sequence the data flows: AppData for NZBGet is on Unassigned Device 1 (256GB SSD), where NZBGet's cache, history and nzbs are stored; the intermediate directory (Inter) is on Unassigned Device 2 (2TB SSD), where all the news articles are assembled into their complete form, .rar files or whatever the content is; and the download destination is the array cache drive (512GB SSD), where all downloads end up and are post-processed by Sonarr, CouchPotato or other containers, and from where the mover runs and transfers to the array. Any thoughts? Should I try unpacking straight to the array and skip the cache drive?
  10. Yes, those rear wall fans are the loudest in the case, IMO louder than the middle ones in the fan wall. If I were to swap out anything, I'd start in the back.
  11. I ran the same setup you're asking about for over two years, an i7-920 and X58-UD4P. It works flawlessly, but I never used the onboard SATA controllers; I used an M1015 instead to get SATA-III support, so I can't speak to whether the HBA issue is still present on the Gigabyte boards. Over the years, as I got more into VMs and Docker with UnRaid adding support for those, I upgraded the DDR3 RAM from 8GB to 48GB. I also swapped out the i7-920 and put in a Xeon X5660. Fantastic platform, so I say stick with it, since it's worked for you and it's worked for me.
  12. The only way I was ever able to work around this was by using a USB add-in card and passing it through to the UnRaid VM. I gave that up long ago and just went to running UnRaid on bare metal and then running VMs inside of it. The Docker functionality and VM support in UnRaid's current releases have removed 80% of why I ever wanted VMware in the first place. Life is much better, and I don't miss the other 20% now that VMware is gone.
  13. That's a pretty solid recommendation. I like that Rosewill case.
  14. I like this case a lot and am thinking about buying it, but I need help with one question I can't seem to find the answer to in all the documentation. There appear to be two backplanes ('expanders') in it, each supporting 24 drives, and each backplane has 3 SAS connectors on it. I'm confused about those three SAS connectors: does each connector only go to 8 of the 24 disks? Or is it something else? Many of the photos I found online show only one SAS cable going into each backplane, which makes me think it operates as a 24-port expander, but then that doesn't make sense, because why have 3 SAS connectors on the backplane? It would only need one, or two for dual-linking. Does anyone who owns this case know how it works and can get me sorted? I'm trying to figure this out so I know what performance to expect. Cheers!
  15. I was lucky for 3 years, my luck ran out today. 5 bad drives due to a Norco backplane. You might want to consider switching.
  16. Interesting product, and inexpensive. Wouldn't the performance on a parity check be very slow using those multipliers? If my math is right (a quick sanity check is at the end of this post list): SATA III 600MB/s / 5 drives per multiplier = 120MB/s max per drive, best case.
  17. I'd recommend going with a 4-post rack; then you can get shelves and everything for it and customize it as you see fit. I bought this one off Amazon many years ago and am very happy with it. Then as your equipment grows, it's easy to throw it in the rack. I also recommend getting a rack-mount UPS to put in the bottom slots; check out the CyberPower models. https://www.amazon.com/gp/product/B00O6GNLQE/ As for cases, you have a lot of options if you only need 12 drive slots.
  18. Bringing up this old thread as a warning... I performed all this when I first got my Norco 4224 a couple of years back, and all the backplanes and ports worked fine when testing with an individual disk. Now, years later, I populated it with 3 drives, powered up, and could smell the drives burning up. Lost 2 more drives confirming it. Thankfully no data drives. I thought I was safe from the Norco backplane issues, but it's really just a ticking time bomb. I have a second new Norco case here I could swap backplanes from, but my trust in their product is now gone. Plus the chips on the backplane look unchanged on this new 2015 revision.
  19. Just did an upgrade from 6.1.9 using the plug-in method. No issues to report. VMs and Docker containers all work fine. File transfers are all at expected speeds. Nice work!
  20. Would it be possible to diff/compare the last reported working version (b15?) against the current release? If it is isolated to a change in the kernel, as some suspect, would that allow Tom and others to review the best way to fix it? I'm still on v5 and planning to stay there until this is resolved.
  21. Same issue here, constant resets. I've tried USB 3.0 and USB 2.0 sticks, and nothing works to get around it. VMware ESXi v5.5, UnRaid v6 Final. I've tested bare metal in 3 different machines, and they all boot fine. I'm going to experiment with directly passing through a PCIe x1 USB 3.0 add-in card and see if Plop can boot from it.
  22. I'm having a similar problem, but not sure if it's exactly the same. My GUI is incredibly slow just going from page to page in Unraid; each page takes about 1 minute to load. Is yours doing this? It seems like you're running Unraid within a VM, correct? Have you tried skipping ESXi and booting directly into Unraid? Did it make any difference? When I do that, I have no issues and Unraid is super fast in the browser. If any of this sounds familiar, please attach your logs so I can compare. Thanks.
  23. 2.5 years ago, I built my first unraid server: a Dell Vostro 200 (dual-core Pentium 1.6GHz, 4GB RAM, 4 SATA ports) with two PCI SATA-I cards (4 ports each, 8 in total). I started with just 6 drives, and now I've physically maxed out the case and ports at 12 drives. Hard drives: 1x 500GB, 1x 1TB, 2x 1.5TB, 2x 2TB, 6x 3TB. Parity checks are becoming a worry; they now take 2-3 days to complete (a rough estimate of why is at the end of this post list). So I purchased an M1015 in an attempt to move off the PCI bus completely. I flashed the M1015 to IT mode in a separate computer, but it fails to work in the Vostro's PCIe 1.0 x16 slot. I've tried various things; the Vostro will post and detect the drives, but it will never boot. So I'm back on the PCI cards. While this Vostro server has treated me very well, it's got just enough CPU and RAM to handle running unraid and the plugins I run. However, I worry it's reaching the end of its life, simply because the parity checks are taking longer and longer as I swap out the smaller drives for larger ones. What are your thoughts? Anything else I should consider before resigning myself to an upgrade?
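
On post 2's tar/compress idea: here's a minimal sketch of what a pre-backup bundling step could look like, assuming a typical /mnt/cache/appdata layout. The paths, the staging folder, and the folder names (plex/Library, sonarr/MediaCover, radarr/MediaCover) are hypothetical examples, not something the backup plugin provides today.

```python
#!/usr/bin/env python3
"""Hypothetical pre-backup helper for the tar/compress idea in post 2.
Bundles each metadata folder into one .tar.gz so the backup copies a
single large file instead of thousands of small ones. All paths are
assumptions -- adjust to your own appdata layout."""

import tarfile
from pathlib import Path

APPDATA = Path("/mnt/cache/appdata")            # assumed appdata share
STAGING = Path("/mnt/cache/appdata-bundles")    # assumed staging folder for archives

# Example metadata folders known to hold huge numbers of small files.
METADATA_DIRS = ["plex/Library", "sonarr/MediaCover", "radarr/MediaCover"]

def bundle(subdir: str) -> None:
    """Create (or overwrite) one gzip'd tarball for a metadata folder."""
    src = APPDATA / subdir
    if not src.is_dir():
        return
    STAGING.mkdir(parents=True, exist_ok=True)
    archive = STAGING / (subdir.replace("/", "_") + ".tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=subdir)

if __name__ == "__main__":
    for d in METADATA_DIRS:
        bundle(d)
```

The backup job would then point at the staging folder and exclude the raw metadata folders; the trade-off is that every run rewrites whole archives even when only a few files inside have changed.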
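On the math in post 16: a quick sanity check of the per-drive bandwidth behind a port multiplier during a parity check. The 600 MB/s link figure and the 8TB drive size are illustrative assumptions, and real-world SATA throughput is somewhat lower.

```python
# Rough worst-case estimate for post 16: five drives sharing one SATA III link.
LINK_MB_S = 600      # approximate usable SATA III bandwidth, MB/s (assumed)
DRIVES = 5           # drives behind one port multiplier
DRIVE_TB = 8         # illustrative drive size

per_drive = LINK_MB_S / DRIVES                        # ~120 MB/s per drive, best case
hours = (DRIVE_TB * 1_000_000 / per_drive) / 3600     # 1 TB ~= 1e6 MB
print(f"~{per_drive:.0f} MB/s per drive -> parity check of an {DRIVE_TB}TB drive "
      f"takes at least {hours:.0f} hours")
```

So even in the best case, each drive only gets about a fifth of the link, and a full read of an 8TB drive at that rate takes most of a day.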
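And on post 23: a similar back-of-the-envelope look at why the PCI SATA cards are the likely bottleneck. Conventional 32-bit/33MHz PCI offers roughly 133 MB/s shared by everything on the bus, so during a parity check the eight drives on the two PCI cards split that between them (the figures below are assumptions, and real throughput is lower still).

```python
# Rough estimate for post 23: eight drives sharing one ~133 MB/s PCI bus.
PCI_MB_S = 133       # theoretical 32-bit/33MHz PCI bandwidth, shared (assumed)
PCI_DRIVES = 8       # drives on the two 4-port PCI SATA cards
DRIVE_TB = 3         # largest drive sets the parity check length

per_drive = PCI_MB_S / PCI_DRIVES                     # ~17 MB/s per drive
days = (DRIVE_TB * 1_000_000 / per_drive) / 86400     # seconds -> days
print(f"~{per_drive:.0f} MB/s per drive -> roughly {days:.1f} days per parity check")
```

That works out to roughly 2 days, which lines up with the 2-3 day parity checks reported, and is why getting those drives onto PCIe (e.g. the M1015) should help so much.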