Leaderboard

Popular Content

Showing content with the highest reputation on 10/04/18 in all areas

  1. Request to have the current 30 data drive (28+2) limit increased. My specific application is a media server, and with the advent of 4K video the storage needed per video has almost tripled. I've hit the maximum of 30 drives, and as I replace my smaller 4TB and 6TB models with 8TB or 10TB drives, I cannot reuse those 4TB drives within the same data drive pool; cache drives serve no real purpose for me on this server. Speaking for myself, I am willing to pay an upgrade or higher-tier license fee for the ability to go beyond 30 data drives.
    1 point
  2. There have been some reports of trim no longer working on the LSI 9207, IIRC since around v6.5, possibly due to a change in the LSI driver, but I don't know if it's a general issue. I do have one and trim worked fine before, but I haven't been using it with SSDs for a while; I'll try to run some tests when I can (see the trim-check sketch after this list).
    1 point
  3. I have had this motherboard since 9-1-2013 and was one of the first to post about this problem in the Newegg comments; my post was under the user James G. I spent a few months going around with ASRock about it, and as always their response was that if it were the board, everyone would have the problem. What they fail to see is that not everyone monitors IPMI issues. I use mine with vSphere, so I do monitor these events (a sensor-polling sketch follows this list). So far the board has not died, but I sure would have liked to be able to fully monitor the health of my server. But yeah, mine works fine after a reboot, and then after an hour or two the CPU temps on both go completely nuts.
    1 point
  4. The fix should cover both. Are you seeing the issue with the GUI?
    1 point
  5. Because they use some multicast thing, you can't fire up the container with all those port forwards in the normal mode and have it work. You should be able to use the adopt token and such. Or you can fire it up in --network="host" mode to add them, then drop back to normal (see the sketch after this list). https://github.com/pducharme/UniFi-Video-Controller/issues/110 ^ This issue talks about another way where the container gets its own MAC and IP; you could try it and see how it goes.
    1 point
  6. ... and here I thought I was helping you debug your fine product. As requested: tower-diagnostics-20181003-2113.zip
    1 point
  7. I, too, would like to see the 30 data drive (28+2) limit increased. My specific application is a media server, so data integrity is not the highest concern, as I could always reload any lost media files; I agree with Ashman that the responsibility rests on the end user, and so should the option to incorporate a larger data pool beyond 30 drives. With the advent of 4K video, the storage needed per video has almost tripled. As the OP, I'm about to purchase a SuperMicro 36-bay chassis, since in my current Norco 4224 I've begun placing drives loosely inside the enclosure in any free space around the mobo, as well as on PCI backplane brackets. Since the chassis is installed in a rack with other enclosures, even with sliding rails, accessing these drives requires removing the enclosure immediately above it in order to get the cover off, because it only slides out about two-thirds of its length; a big PITA. I did a quick search in the feature request thread but didn't see whether increasing the data drive limit had been requested. If it hasn't, I will post a new feature request.
    1 point
  8. This is honestly something I have been asking for for years. As @ashman70 stated, if it's a technical limit that Unraid has, then so be it. I can't say that I've ever seen other OSes struggle with this limit, so I'm not sure it's a technical limit vs. an LT-imposed limit. I've also said for years that I would have been happy to buy a Pro Plus or Ultra or whatever license to get rid of the limit. While I'm now running 12TB drives, it would have been a hell of a lot cheaper to buy an upgraded license than expensive larger drives. I'm not talking hundreds of dollars cheaper, I'm talking thousands. As for not wanting that much data (or that many drives) protected by 1 or 2 parity drives, that's just personal preference. I no longer even run parity drives on my array. At $400/drive, and having a main and a backup server, dual parity on those machines would be a $1,600 expense. If a drive dies, I replace that drive and copy the missing data from my backups. So if that's a real concern, give us multiple arrays with each array protected by its own set of parity disks. Cost issue solved, scaling issue solved. We get what we need, LT makes money, everyone wins.
    1 point
  9. I had the same issue. It has to do with Docker log file sizes. Just run this and your utilization will come down a lot: truncate -s 0 /var/lib/docker/containers/*/*-json.log It's safe to use while the container is still running (a sketch for capping log growth follows this list).
    1 point
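
Re item 2: a minimal sketch for checking whether trim/discard is actually reaching an SSD behind an HBA like the LSI 9207. The device name and mount point are placeholders, not taken from the posts above.

    # Non-zero DISC-GRAN / DISC-MAX values mean the kernel sees discard
    # support on the device.
    lsblk --discard /dev/sdX

    # Manually trim a mounted filesystem and report how much was trimmed;
    # an "operation not supported" error suggests trim is not being passed
    # through by the controller/driver.
    fstrim -v /mnt/your-ssd-mount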
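
Re item 3: a hedged sketch of polling the board's BMC with ipmitool, so the bogus CPU temperature readings can be watched over time. The host, user, and password below are placeholders.

    # Dump all sensor data records over the network.
    ipmitool -I lanplus -H bmc-host -U admin -P secret sdr list

    # Or only the temperature sensors.
    ipmitool -I lanplus -H bmc-host -U admin -P secret sdr type temperature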
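
Re item 5: a sketch of the two-step approach described there, assuming the pducharme/unifi-video-controller image from the linked issue; the volume path and port list are illustrative, not confirmed by the post.

    # Step 1: run on the host network so multicast camera discovery works
    # during adoption.
    docker run -d --name unifi-video --network=host \
      -v /mnt/user/appdata/unifi-video:/var/lib/unifi-video \
      pducharme/unifi-video-controller

    # Step 2: once the cameras are adopted, recreate the container in normal
    # bridge mode with explicit port forwards (check the image's documentation
    # for the full port set).
    docker rm -f unifi-video
    docker run -d --name unifi-video \
      -p 7080:7080 -p 7443:7443 -p 7445:7445 -p 7446:7446 -p 7447:7447 \
      -v /mnt/user/appdata/unifi-video:/var/lib/unifi-video \
      pducharme/unifi-video-controller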
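
Re item 9: the truncate command clears logs that have already grown; to keep them from filling the image again, Docker's json-file log driver can be capped. The size values and image name below are illustrative.

    # Per container: rotate the json-file log at 10 MB, keeping 3 files.
    docker run -d --log-opt max-size=10m --log-opt max-file=3 your-image

    # Or globally, in /etc/docker/daemon.json (restart Docker afterwards):
    {
      "log-driver": "json-file",
      "log-opts": { "max-size": "10m", "max-file": "3" }
    }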