brainbone

Members
  • Posts: 272
  • Days Won: 1

Everything posted by brainbone

  1. I understand that port multipliers are generally discouraged for use with Unraid -- most discussion I can find about them ends in recommendations for alternatives rather than using them. I'd like to add some more SSDs to an Unraid box, but all PCIe/NVMe/SAS/SATA ports are currently exhausted, and I'd like to avoid replacing the motherboard (ASRock X470 Taichi w/ 8 onboard SATA and an IBM M1015 / SAS2008 SAS HBA). Unfortunately, this puts me in the position of looking at a SAS/SATA port multiplier as a possible option, and I'm hoping there's been some success in finding something that works. I'm thinking that if I could find a good multiplier/expander, I could move some of my 15 array drives to it, opening up some non-expanded SATA ports for more SSDs. I've used IBM 46M0997 SAS expansion cards with IBM M1015 HBAs in the past (not on Unraid) with good results, but they unfortunately require a PCIe slot, which I don't have available. Does something like that exist that's known to work well with Unraid and doesn't require a PCIe slot, or am I looking for a unicorn? Perhaps replacing the M1015 with a 9201-16i would be a better option? Edit: After thinking this through, I think the obvious answer is something like the 9201-16i -- not sure why I got hung up on a port expander, probably because I have some 46M0997s sitting around. Anyone know if the 9201-16i has trouble at PCIe x4?
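     For what it's worth, a back-of-the-envelope check on the x4 question -- a sketch assuming the 9201-16i's PCIe 2.0 interface at roughly 500 MB/s usable per lane and roughly 200 MB/s sequential per HDD (both figures are rough assumptions):

         # Rough PCIe bandwidth check for a 16-port HBA running at x4.
         # Assumed: ~500 MB/s usable per PCIe 2.0 lane, ~200 MB/s per HDD.
         PCIE2_MB_PER_LANE = 500
         LANES = 4
         DRIVES = 16
         HDD_SEQ_MB = 200

         slot_bw = PCIE2_MB_PER_LANE * LANES   # ~2000 MB/s total
         per_drive = slot_bw / DRIVES          # share if all drives stream at once

         print(f"Slot bandwidth: ~{slot_bw} MB/s")
         print(f"Per-drive share with all {DRIVES} active: ~{per_drive:.0f} MB/s")
         print(f"Bottleneck during parity checks: {per_drive < HDD_SEQ_MB}")

     By that math, an x4 link only pinches when most of the 16 drives stream at once (e.g., during parity checks); day-to-day access to a handful of drives shouldn't notice.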
  2. What's the correct way to rebuild a drive marked as disabled that passes an extended SMART test, without losing the emulated data on it? I'd ideally like to preclear the drive as a test before re-adding it. Are these the correct steps? 1. Stop the array. 2. Set the device to "No device" (this is the step that concerns me). 3. Start the array (hopefully my disk 13 will still be emulated even after marking it no-device?). 4. Preclear the now-unassigned disk. 5. Stop the array. 6. Assign the precleared disk to disk 13. 7. Start the array and let Unraid rebuild it. Edit: Here's my "PASSED" SMART report for the drive:
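     For reference, the extended test itself can also be driven from the console; a minimal sketch using smartmontools (assuming smartctl is installed; /dev/sdX is a placeholder for the actual device):

         # Start a long (extended) SMART self-test, then read the results.
         # Assumes smartmontools is installed; /dev/sdX is a placeholder.
         import subprocess

         DEV = "/dev/sdX"  # substitute the disabled disk's device

         # The test runs on the drive itself and can take hours.
         subprocess.run(["smartctl", "-t", "long", DEV], check=True)

         # After it completes, the self-test log shows pass/fail:
         subprocess.run(["smartctl", "-l", "selftest", DEV], check=True)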
  3. I haven't tried, so I may be way off base, but wouldn't it be possible, and far more flexible, to pass the WiFi adapter through to a VM in Unraid (like we do for GPUs, USB controllers, etc.), and then have the VM run, say, OpenWRT or whatever else to handle WiFi?
  4. A touch too close to the fresh air supply for the furnace -- and no easy way to relocate it.
  5. After taping the vents up, the lowest HDD temp is currently around 14C. Still a bit too cold, but above the Seagate minimum of 5C, and well above the insanely low -19C I hit in 2019. (I lost a couple of drives to that, and I still blame it for the older drives that died recently.)
  6. Polar vortex time again, and my HDD temps are already falling. (Monday will be even colder.) I'm throwing some masking tape over the vents in front of the drives as a temporary fix. It really would be nice if Unraid could give notifications for low drive temperatures, seeing that HDD temps that are too low can be as bad as, or worse than, temps that are too high.
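     In the meantime, a minimal low-temperature watchdog sketch that could run from cron (assuming smartmontools is installed; the threshold and device list are illustrative, not recommendations):

         # Warn when any drive's SMART temperature falls below a threshold.
         # Assumes smartmontools; threshold/devices are illustrative only.
         import subprocess

         MIN_TEMP_C = 10
         DEVICES = ["/dev/sda", "/dev/sdb"]  # adjust to your drives

         def drive_temp_c(dev):
             out = subprocess.run(["smartctl", "-A", dev],
                                  capture_output=True, text=True).stdout
             for line in out.splitlines():
                 if "Temperature_Celsius" in line:
                     return int(line.split()[9])  # RAW_VALUE column
             return None

         for dev in DEVICES:
             temp = drive_temp_c(dev)
             if temp is not None and temp < MIN_TEMP_C:
                 print(f"WARNING: {dev} at {temp}C, below {MIN_TEMP_C}C")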
  7. One of my 4TB data drives failed while I was waiting for my two new 10TB drives to arrive. (All existing drives were 4TB.) No problem, I figured: Unraid would let me rebuild the failed 4TB drive onto one 10TB drive, then I could replace the 4TB parity drive with the other 10TB drive... but Unraid doesn't seem to like this idea. Please don't tell me I need to purchase a 4TB drive just to get back up and running before I can install my 10TB drives? Unraid 6.8.3.
  8. Crap. I see that now for devices under the array. Unfortunately, some of my NVMe devices are Unassigned Devices, and it doesn't look like it's supported there. Guess this is a question for the Unassigned Devices plugin thread. Thanks.
  9. Is there a way to have certain devices, like NVMe drives, use a higher threshold for temperature warnings? NVMe drives can and do run at much hotter temperatures than HDDs. When I set the threshold to warn me when HDD temps are getting too high, I end up getting constant warnings whenever my NVMe devices see heavy write activity.
  10. But I currently have no use for multiple array support. I'd much rather have HA and replication as core features, and above that, I'd like GPU drivers baked in and officially supported. May I suggest cheering in the multiple array thread instead of jeering in the GPU one?
  11. Yeah, let's just get rid of docker, VMs, the whole plugin system ... all that stuff that's not "genuine NAS", whatever that means.
  12. That could be said about any feature request: you may not find it useful, but many others would. For this specific feature, I think some of you may be conflating the burden on Linuxserver.io to produce each Unraid Nvidia release with what it would be for Limetech. The burden on Limetech would be orders of magnitude less, which is why this feature would be beneficial.
  13. Please add GPU drivers to Unraid builds. Doesn't Limetech already do this for some NICs, SAS controllers, etc.? If so, I see no reason why GPU drivers (specifically for Nvidia, in my case) shouldn't be added as well. There's no need to keep up with the latest driver unless there are serious bugs/exploits that need to be patched, just like with any other driver Unraid uses. I really don't get all the "sky is falling" negativity surrounding this request. The work involved for Limetech to include these drivers is far less than for the Linuxserver.io team (a huge thanks to them for that effort!) to add them after the fact.
  14. Until there's ubiquitous, unmetered, gigabit internet just about everywhere, there's going to be a need for transcoding. Transcoding to lower bit rates for streaming and syncing while on the road is one of my main uses of Unraid Nvidia. I don't see that going away any time soon.
  15. Having an ASRock board and multiple Nvidia GPUs, I was a little gun-shy about updating to 6.8.1 based on a few past comments. Updated to Nvidia 6.8.1 without issue. @CHBMB Thank you!
  16. Letting the GPU handle decoding/encoding leaves your CPU open for other work. Generally, the encodes from a GPU (hardware encoder) will be lower quality than the encodes from the CPU (when using x264), though newer-generation hardware encoders are starting to close this gap. (The K10 does not have a newer-generation hardware encoder.) If you have a powerful enough CPU to keep up with the transcodes being requested and everything else being asked of your Unraid server, you'll likely see no real benefit from a hardware encoder. "Unraid Nvidia" lets you use the GPU with Plex/Emby/etc. dockers. See this list for what you can expect in Plex. The K10 isn't included in that list, but if it works at all, I'd expect it to perform something like the other GK104s. I have no experience with any of the Tesla cards, but how useful it is will depend on your specific use case. I use a Quadro P400 for NVENC/NVDEC in my Plex docker, leaving my CPU (and GTX 1080) open for use with a gaming VM. For me, it was worth the time involved to pass the GTX 1080 through to my gaming VM and use Unraid Nvidia to offload Plex transcoding to the P400; it allows me to dedicate more CPU cores to the gaming VM.
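     As a concrete picture of what the offload looks like (not what Plex runs internally -- just a hand-rolled equivalent), a sketch that shells out to ffmpeg with NVDEC for decode and NVENC for encode; it assumes an ffmpeg build with Nvidia support, and the file paths are placeholders:

         # GPU-accelerated transcode via ffmpeg's NVDEC/NVENC support.
         # Assumes ffmpeg built with Nvidia support; paths are placeholders.
         import subprocess

         src = "/mnt/user/media/input.mkv"
         dst = "/mnt/user/transcode/output.mp4"

         subprocess.run([
             "ffmpeg",
             "-hwaccel", "cuda",      # decode on the GPU (NVDEC)
             "-i", src,
             "-c:v", "h264_nvenc",    # encode on the GPU (NVENC)
             "-b:v", "4M",            # lower bitrate for remote streaming
             "-c:a", "copy",          # pass audio through untouched
             dst,
         ], check=True)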
  17. I just use a second NVMe drive for transcoding and other stuff that makes more sense on a scratch disk. This saves some endurance and bandwidth on the main cache SSD, where I'd rather not have the appdata and domains mounts go down, and leaves RAM open for more useful endeavors like VMs, dockers, read cache, etc. If you're confident you don't need a huge amount of space for transcodes / scratch, a 64GB (or even 32GB, if you don't have that many Plex clients) Intel Optane will have much higher endurance than a typical NVMe drive. However, I just use a pair of 500GB 970 EVOs -- one as a standard cache drive + docker/VM storage, the other for transcode/scratch. I'll likely need to replace them every 3 to 5 years.
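     The 3-to-5-year guess is simple endurance arithmetic. A sketch, using Samsung's published 300 TBW rating for the 500GB 970 EVO and an assumed (not measured) write volume:

         # SSD endurance estimate: rated TBW vs. assumed daily writes.
         TBW_RATING = 300         # published rating, 500GB 970 EVO
         DAILY_WRITES_GB = 200    # assumed transcode/scratch workload

         years = TBW_RATING * 1000 / DAILY_WRITES_GB / 365
         print(f"Estimated lifespan: ~{years:.1f} years")  # ~4.1 years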
  18. Yikes! I'm behind the curve here. I updated to 6.7.2 from 6.6.6 a few days ago, before seeing this information. I've not yet had any corruption, but I'm a little worried. Glancing over this and related threads, it seems so far that only those storing SQLite DBs directly on array disks, rather than on SSD/cache, are seeing this issue. Is that correct, or are there reports of people storing appdata on cache with this issue? Having all my dockers in appdata, and having appdata set to cache-only, am I immune to this issue, or should I roll back to 6.6.7?
  19. Please edit or delete that last post. The first post in this thread forbids talking about that!
  20. Make sure you pay attention to what's highlighted in green ("YES") in that chart. The 1030, for example, is a GPU you should NOT get. This list has more detailed information on GPUs with Plex.
  21. I'm not exactly sure how much extra power my GTX 1050 is chewing sitting idle without being in a power-saving mode, but my guess (based on readings from a "Kill A Watt" with the GTX installed vs. not) is around 15 to 20 watts. 15 to 20 W at $0.15 per kWh comes to roughly $20 to $26 per year. I'd certainly be willing to donate at least that much towards getting this done.
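     The math behind that estimate, for anyone who wants to plug in their own wattage and electric rate:

         # Annual cost of idle GPU draw: watts -> kWh/year -> dollars.
         RATE_PER_KWH = 0.15  # $/kWh

         for watts in (15, 20):
             kwh_year = watts * 24 * 365 / 1000
             cost = kwh_year * RATE_PER_KWH
             print(f"{watts} W idle = {kwh_year:.0f} kWh/yr = ${cost:.2f}/yr")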
  22. I think I'm seeing this same issue. When the mover is running, Plex is for the most part frozen. It happened both before and after installing an old GeForce GTX 1050 (using Unraid Nvidia) for transcoding, so it seems unrelated to Unraid Nvidia. I'm not certain whether it happened before I upgraded to 6.7.0, as I seldom ran the mover manually on 6.5.x -- though I don't recall it happening before the upgrade.
  23. Is there a way to throttle the mover so it doesn't saturate the Unraid array while moving data from the SSD cache to the array?
  24. Seeing negative/wrong MB/CPU temps on the dashboard with Unraid 6.7.0 / Dynamix System Temperature 2019.01.12a. Any idea what I'm doing wrong? (See attached.)
  25. I'd like an option for critical and warning thresholds for low HDD temperature, in addition to high. Due to server location (by an air-intake vent), I noticed my HDD temps had dropped to ~ -19°C as outdoor temperatures fell to -33°C. Also, it would be helpful to have an option to keep the array spun up if any single HDD or SSD is below a specified "keep spun up" temperature threshold.