Tybio

Members • 610 posts
Everything posted by Tybio

  1. Been a while since I've been around, but it is humming along nicely. The only issue I've had is the iStar cages rattling (the front doors don't latch as snugly as needed). To fix that I've used some electrical tape inside the unused bays. I'm nearing 100TB of capacity in the server now, but still have a lot of room to grow. I've had 2 drives needing to be RMAed...well, one drive, and then the RMA replacement for that one had to be replaced as well. Parity rebuilds were seamless, temps never got that high, and I was able to finish a 12TB parity rebuild in just (barely) under a day. I'm currently debating a Quadro in a VM for transcoding so I can finally get rid of having two libraries...but I'm holding off until late fall when I see what Intel and AMD are bringing to the table this year.
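For context, a 12TB parity rebuild finishing in just under 24 hours works out to roughly 140 MB/s average throughput. A quick sketch of that arithmetic (the drive size and duration are from the post; the helper name is made up):

```python
def avg_rebuild_speed_mbps(capacity_tb: float, hours: float) -> float:
    """Average rebuild throughput in MB/s (decimal units, as drive vendors quote)."""
    bytes_total = capacity_tb * 1e12
    seconds = hours * 3600
    return bytes_total / seconds / 1e6

# 12TB in just under a day:
print(round(avg_rebuild_speed_mbps(12, 24)))  # → 139
```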
  2. Very cool! Thanks for sharing, that would likely be a lot simpler than my solution...though both would have similar end results
  3. I can confirm, I get the same result as @rinseaid; not ideal, but as this build is plugging a hole in the product lines until 7nm solutions come out, I can deal with it for a few years :). If this was intended to be a 10-year build, I'd have a different opinion.
  4. I'll put it in, but stick to using the script for now
  5. Hey, quick question. I just added the apcupsd exporter and I'm getting an error. Not sure if this is because I'm using a CyberPower UPS or what:

     Traceback (most recent call last):
       File "/src/apcupsd-influxdb-exporter.py", line 36, in <module>
         'BATTV': float(ups['BATTV']),
     KeyError: 'BATTV'

     This is what I have in my apcaccess output, if that matters:

     root@Storage:/mnt/user/AppData/Config/scripts# apcaccess
     APC      : 001,033,0821
     DATE     : 2019-02-01 15:40:38 -0800
     HOSTNAME : Storage
     VERSION  : 3.14.14 (31 May 2016) slackware
     UPSNAME  : Storage
     CABLE    : USB Cable
     DRIVER   : USB UPS Driver
     UPSMODE  : Stand Alone
     STARTTIME: 2019-01-27 04:40:04 -0800
     MODEL    : OR1500PFCLCD
     STATUS   : ONLINE
     LINEV    : 120.0 Volts
     LOADPCT  : 11.0 Percent
     BCHARGE  : 100.0 Percent
     TIMELEFT : 68.0 Minutes
     MBATTCHG : 10 Percent
     MINTIMEL : 10 Minutes
     MAXTIME  : 600 Seconds
     OUTPUTV  : 120.0 Volts
     DWAKE    : -1 Seconds
     LOTRANS  : 100.0 Volts
     HITRANS  : 132.0 Volts
     ALARMDEL : No alarm
     NUMXFERS : 1
     XONBATT  : 2019-01-30 05:02:27 -0800
     TONBATT  : 0 Seconds
     CUMONBATT: 8 Seconds
     XOFFBATT : 2019-01-30 05:02:35 -0800
     SELFTEST : NO
     STATFLAG : 0x05000008
     SERIALNO : GX1HS2000054
     NOMINV   : 120 Volts
     NOMPOWER : 1050 Watts
     END APC  : 2019-02-01 15:40:46 -0800
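The crash above is the exporter doing `float(ups['BATTV'])` on a key the CyberPower unit never reports (apcaccess only emits fields the UPS actually provides, and BATTV is absent from the output shown). A minimal sketch of the kind of guard that avoids it — the helper name and field list are illustrative, not the exporter's actual code:

```python
def safe_float(ups: dict, key: str, default: float = 0.0) -> float:
    """Return ups[key] as a float, falling back when the UPS omits the field."""
    try:
        # apcaccess values look like "120.0 Volts"; keep only the number
        return float(ups[key].split()[0])
    except (KeyError, ValueError, IndexError, AttributeError):
        return default

# Fields modeled on the apcaccess output above; no BATTV, like the CyberPower
ups = {"LINEV": "120.0 Volts", "LOADPCT": "11.0 Percent"}
print(safe_float(ups, "LINEV"))   # → 120.0
print(safe_float(ups, "BATTV"))   # → 0.0
```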
  6. That's true, but I believe that you can only /encode/ under Linux in Plex at the moment...it's all sorts of early days on this, and it's not getting better quickly. The only reliable, non-hackish ways to do more than 2 streams are with an iGPU or a P2000. It can work in other situations, but the fragility and potential for down-stream breakage is a factor.
  7. Amazon has quite a few options: https://www.amazon.com/Express-Riser-Extender-Ribbon-Cable/dp/B06Y14G61W I don't think you will find a riser card, as the bracket on the back wouldn't work properly, and 16x cards can't support themselves in a vertical install without the bracket screw or the "tail" of the bracket seating solidly against the case. I think you're getting into custom work here, and a ribbon would make more sense as you can then mount the card elsewhere...sort of like the vertical mounts for the Define R6.
  8. The VMs are what makes this interesting...you want to reserve cores for those, but a 10-stream Plex server can quickly munch them even in a 1080p world. So the first decision: Plex in a Docker with an iGPU for transcoding, OR Plex in a VM with a P2000 for transcoding. That will really dictate where you go from here IMHO; if you need an iGPU for a Plex docker, then team Blue is your only option. If you want to do transcoding in a VM then you can use either, but team Red would get more bang for the buck (IMHO). You could go with a really beefy i7 or i9 with an iGPU, but Intel is managing which processors have GPUs really tightly...so it is a very limited set. If you aren't going to go for hardware transcoding, then team Red is likely the default, as they have much higher core counts for the investment and you will ALWAYS have PCI-E lanes free to drop in a video card later. Anyway, that's just my thought on a clear distinction between the two.
  9. Word of warning, the GTX/RTX lines are limited to 2 streams at a time, and you would have to run it in a windows VM to get that. If you want more, you have to look at a P2000 or similar professional grade GPU. Just making sure you know the limitations before you go too far down the path, if you are fine with 2 streams and a windows VM then great! I don't think a 16x card can slot into an 8x slot, unless it is a special slot that has the back "opened". PCI-E will use the available lanes if you can seat it, so there is no electrical issue with running a 16x card in that slot...but you have to get the physical ability to plug it in!
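On the lane-negotiation point above: if you do get a 16x card seated in an open-ended 8x slot, you can confirm it trained down to 8 lanes by looking at the `LnkSta:` line in `lspci -vv` output. A small illustrative parser for that line (the sample string below is made up, but matches lspci's general format):

```python
import re

def link_width(lnksta_line: str) -> int:
    """Pull the negotiated lane count out of an lspci 'LnkSta:' line."""
    m = re.search(r"Width x(\d+)", lnksta_line)
    if not m:
        raise ValueError("no link width found in line")
    return int(m.group(1))

# A 16x GPU in an open-ended 8x slot negotiates down to 8 lanes:
print(link_width("LnkSta: Speed 8GT/s, Width x8, TrErr- Train-"))  # → 8
```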
  10. Well, the E-21xxG processors are actually cheaper than their i7/i9 counterparts...but if I'm reading your post correctly you have a MB that will support some generation of those...that's a really hard call to make, man. On the cooling, those processors use grease for the TIM...thus the delidding madness of the past few years. With a good air cooler, and not over-clocking them, you should be fine...I have a D-15 and the E-2176G (which is basically an i7-8700k re-branded and tweaked) and I never get much above 50C. If I hammer it (go out of my way to do so) then I spike up into the low 60s; I'm sure a stress test would push it into the 70s with ease...but I can't see it going higher than that. Just go for quality air cooling and you should be fine with the i7. I've no experience with the i9, but believe it should be the same...however, if someone who's run one contradicts me...believe them!
  11. The B series boards will be fine as long as you know their limitations going in...heck, any board will be fine as long as you know what you want and understand the limitations. I also haven't seen an AMD board yet with a PCI slot, then again I haven't really looked ;). I do know my SM C246 board has one, but that's for a whole different level of processor so not relevant to this...other than I can't believe in 2019 I got a new motherboard on a brand new chipset with a freaking PCI slot...Really? Another option is to spend $40 on a passive PCI-E video card and open your world of options up a lot wider...I'm going to guess that would be cheaper and net you a lot better end results. Edit for link: https://www.amazon.com/ZOTAC-GeForce-Profile-Graphic-ZT-71304-20L/dp/B01E9Z2D60/ref=sr_1_3?ie=UTF8&qid=1548968560&sr=8-3&keywords=pcie+1x+video+card
  12. Generally ECC/non-ECC is a personal decision, there is no "Right" answer...however the recommendation seems to be if you aren't going to use ECC then memtest the heck out of your RAM before you rely on it...sort of like a pre-clear, a good chance any issues will be found during the test...not a lock, but it increases the level of confidence you can have if you get 24/48 hours of clean memtest :).
  13. You didn't grab a diags before the reboot by chance? Odds are something jacked up the "upgrade" which in docker terms is a "Reinstall the docker using the template" and not actually an upgrade. You could try to re-add the docker from your custom template (at the bottom of the list) and that "should" mirror what happened during the upgrade process last night...other than removing the existing container I mean.
  14. A very reasonable plan; as I said above, I've done the upgrade and I'm having to do the same thing. You do need to keep separate libraries, by the way; you can't just use Plex "Versions" of the movie, as Plex is /stupid/ about picking which one to play. If there isn't a direct play option and it is going to transcode...the selection of which version to transcode is lacking any brains. Some people say it is the "first" version added to Plex, and others say it always transcodes the best file...either way, you can't get predictable behavior at the moment when you have multiple versions in the same library. For example, I have a movie with a 1080p and a 4k HDR version. Transcoding down to iPhone-over-LTE levels (so the lowest 720p option or lower), my server started transcoding the 4k HDR version......
  15. Very interesting, Going to do some testing today, I have a small monitor I can connect for the moment as I don't have a display emulator plug handy...but I should be able to replicate the setup. Will report back.
  16. This, if you are looking for transfer TO/From the server...if you are talking about mover putting stuff in the protected array then that's a very different question ;).
  17. Nope, the only thing you lose is remote console...all other functions of IPMI seem to work fine.
  18. Some clarifications that I've found:
      • If you are using an IPMI board that has console redirection, odds are that you will lose remote console to enable this.
      • You /can/ use an NVidia card, but it has to be passed through to a VM. If you use a GTX/RTX you can only get 2 streams; if you use a P2000 you can do 15+ with ease.
      • The VM must be Windows, as Linux doesn't support both encode and decode in hardware with Nvidia (as stated above in thread).
      • Even with hardware transcoding there are issues: Plex does not properly deal with HDR, so if you are transcoding 4k TV you should be fine, but 4k movies with HDR will look like /crap/ when transcoded (colors totally washed out, not even worth watching IMHO). No update from Plex on if this is even going to be addressed.
      • A lot of devices have issues with profiles at this point. For instance, I can transcode a 4k HDR movie to 720p at 8x+ speeds, UNLESS it is transcoding for my kids' Firestick, and then it is stuck at .8x (so buffering, with one core on the CPU at 100%). This is something to do with the available profiles Plex sees for that client. FTR: Shield and Roku all seem to work cleanly...TV apps are hit and miss.
      I just wanted to toss this up here, as it seems a lot of people find this thread when looking into HW transcoding...you can find more details scattered about the forums on each of these issues with a search!
  19. Could also spend a little more and just get a 1x card: https://www.amazon.com/ZOTAC-GeForce-Profile-Graphic-ZT-71304-20L/dp/B01E9Z2D60/ref=sr_1_3?ie=UTF8&qid=1548294846&sr=8-3&keywords=pcie+1x+video+card
  20. I'm not an expert by any means, as I use ECC ram so don't overclock it. But my understanding is that most all DDR4 RAM over 2666 is "Overclocked", even if you get a 3200 kit you have to turn on XMP in the BIOS to "try" to run at that speed. In other words, we are at another one of those transitional times in the tech world, AMD processors are hungry for fast ram, but fast ram is just now coming into the market (and even then, at absurd prices). If you are just doing some dockers and NAS functions, then it likely will not matter. But if you ever want to do VMs or a desktop replacement then it may well be a factor. At this point, I'm going to hope one of the Ryzen experts jumps in and takes it from here, as I'd be slightly out of my depth past this point ;).
  21. The 2018 report is out: https://www.backblaze.com/blog/hard-drive-stats-for-2018/ Looks like 10TB is the sweet spot in terms of reliability and size (Not cost tho ;).
  22. Only comment is to think about upping the RAM speed; Ryzen is very, very sensitive to that. When looking at benchmarks, make sure you consider the RAM speed used...it can have a large impact. Not as much as on TR from what I understand, but AMD is a lot more sensitive to RAM speed than Intel. Granted, in your use-case that might not be important, but wanted to put it out there for consideration!
  23. Yea, the P2000 would be in a Windows VM and passed through, so it has no impact at the Unraid level. It looks like with an iGPU and BMC, you have either/or. With a PCI-E vid card and BMC you can have both, but it is driver dependent...I've not tested this and have no intention to. If I can't get iGPU and BMC then I only have two paths: 1> accept IPMI is mostly pointless for me 2> move away from the iGPU
  24. Ok, thanks for testing. I'll re-seat the card this weekend and try the RC again.