Leaderboard

Popular Content

Showing content with the highest reputation on 07/08/19 in all areas

  1. I recently built an unRaid server with almost this exact setup. Same motherboard, same CPU; I have the 1660 Ti (Gigabyte 1660 Ti mini-ITX) set up in the primary PCIe x16 slot, but I have an old Radeon 5770 in the second slot. The only caveat is, I need to run the F5 version of the BIOS. I had upgraded to F32 and F40, and both gave me serious issues that would not allow Windows VMs to start with those BIOS revisions. I did a full post in the VM Troubleshooting section. I hope this helps. You can see my IOMMU groups here: Post with IOMMU groups
    2 points
  2. To the developers, and to everyone in this community involved with getting unRAID where it stands today! I guess unRAID is written differently these days. I remember starting with unRAID Server Plus almost 10 years ago, in the days when Tom Mortensen thanked you in person for buying a license (no criticism here, of course). For reasons still very obscure to my mind (getting old), I installed one of the first versions of Windows Home Server. After that got killed by MS I went with an Apple Server, great idea (NOT), and switched to Windows 10 with Stablebit Drivepool for a couple of ye
    1 point
  3. For media workflows it would be nice to be able to keep all of your projects, clips, samples, etc. in their own user share, but have all the recently used projects, clips, etc. intelligently mirrored to the SSD cache for acceleration. When writing a file to the user share, it is written to the cache. In the background the cache is written to the array so that the data has parity protection. Recently read files would be mirrored onto the cache for accelerated access but still stored on the device array. The reason for this is not having to deal with local storage copies and conflicts. Pr
    1 point
  4. Correct the share settings for the disk shares, or don't enable disk shares at all.
    1 point
  5. Okay, so. Just as I was giving up, I did it. The newest drivers (19.7.1) work, and both cards are passed through on their own VMs. I played around again with different combinations, with way too little sleep and too many hours in this forum, and then I found the one below: Modifying this for my system, using the correct identifier for my first Radeon VII, I managed to pass it through. With OVMF and Q35 3.1 I successfully installed the current AMD drivers. Just display and audio, no Radeon software yet; I wanted to play it safe first. I have minor audio issues, but I guess I am past the worst problems. I hope. Thanks @bastlanyw
    1 point
  6. While not as easy as Handbrake, you can decode and encode with your GTX 750 on unraid with ffmpeg:

     docker run --rm --runtime=nvidia \
       -v "/unraid/your/videos:/input:rw" \
       -e 'NVIDIA_DRIVER_CAPABILITIES'='all' \
       -e 'NVIDIA_VISIBLE_DEVICES'='GPU-UUID-from-nvidia-pluginXXXX' \
       djaydev/ffmpeg-ccextractor \
       ffmpeg -hwaccel nvdec -i "/input/oldvideo.ts" -c:v hevc_nvenc -c:a copy "/input/newvideo.mp4"

     Edit: I don't know if the GTX 750 has NVDEC components or not.
    1 point
  7. Not saying those won't work, but they're not really recommended: the 12 V rail on a SATA plug is designed for 4.5 A maximum, and 4 disks can easily require 8 to 10 A during spin-up.
    1 point
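The warning above is simple arithmetic; a quick budget check under the figures quoted in the post (4.5 A per SATA 12 V plug, and an assumed worst-case spin-up draw of ~2.5 A per disk) can be sketched as:

```python
# Rough 12 V power-budget check for one SATA power plug feeding a
# 4-way splitter. Both figures are assumptions taken from the post:
# the SATA connector's 12 V pins are rated ~4.5 A, and a 3.5" disk
# can draw roughly 2-2.5 A at 12 V during spin-up.
PLUG_LIMIT_A = 4.5          # assumed SATA 12 V rating per plug
SPINUP_PER_DISK_A = 2.5     # assumed worst-case spin-up draw

def spinup_current(disks: int, per_disk: float = SPINUP_PER_DISK_A) -> float:
    """Total 12 V current if all disks spin up at once."""
    return disks * per_disk

demand = spinup_current(4)
print(f"4 disks need ~{demand:.1f} A at spin-up; plug limit is {PLUG_LIMIT_A} A")
print("over budget" if demand > PLUG_LIMIT_A else "within budget")
```

With staggered spin-up the simultaneous draw would be lower, which is why some setups get away with it anyway.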
  8. This may well be fixable by putting the array into maintenance mode and running a file system check/repair.
    1 point
  9. Samsung may still be the best; most NVMe drives work well with Unraid, and only some specific cases run into trouble. I tried a Plextor M8Pe as simple cache storage and had no problems at all, but in the end I took it out for a Windows machine, because that utilizes it better.
    1 point
  10. 😆 There is no difference between an add-in card and the motherboard's built-in slot. Please plan well and do your research.
    1 point
  11. All C246 downstream devices (HDD, LAN, USB, ...) share the upstream x4 link. There are many reports that, whether with a single or dual NVMe, even drives connected directly to CPU PCIe lanes can't reach benchmark speeds. Different NVMe drives also have different issues; please search the forum. I don't mean NVMe is not good, it is still the king of speed. Just don't expect the result from a simple calculation.
    1 point
  12. Whatever the board is doing, a single x4 lane should easily be able to handle your 2 SSDs at maximum speed.
    1 point
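The claim above checks out on paper; a back-of-envelope comparison (assuming round figures of ~985 MB/s usable per PCIe 3.0 lane and ~550 MB/s per SATA III SSD, neither stated in the post) looks like this:

```python
# Bandwidth sanity check: can a PCIe 3.0 x4 uplink carry two SATA
# SSDs at full speed? Both numbers are assumed round figures:
# ~985 MB/s usable per PCIe 3.0 lane, ~550 MB/s per SATA III SSD.
LANE_MBPS = 985
SSD_MBPS = 550

def link_headroom(lanes: int, ssds: int) -> int:
    """Spare bandwidth (MB/s) on the link after the SSDs max out."""
    return lanes * LANE_MBPS - ssds * SSD_MBPS

print(f"x4 link:  {4 * LANE_MBPS} MB/s total")
print(f"2 SSDs:   {2 * SSD_MBPS} MB/s demand")
print(f"headroom: {link_headroom(4, 2)} MB/s")
```

Even a single lane would cover one SATA SSD with room to spare, so the x4 link is nowhere near the bottleneck here.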
  13. FWIW, instead of entering the command, which might scare people off, you can also get the info via Tools - System Devices. And via the various ACS override options, you can also potentially separate devices that are natively in a single IOMMU group into their own groups.
    1 point
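The grouping information shown by Tools - System Devices ultimately comes from sysfs. A minimal sketch that walks the standard Linux path (an empty result simply means no IOMMU is active or the path doesn't exist):

```python
import os

def iommu_groups(root: str = "/sys/kernel/iommu_groups") -> dict:
    """Map IOMMU group number -> sorted list of PCI addresses in it.

    Returns an empty dict when the kernel exposes no groups (e.g.
    IOMMU disabled, or not running on Linux)."""
    groups = {}
    if not os.path.isdir(root):
        return groups
    for name in sorted(os.listdir(root), key=int):
        devdir = os.path.join(root, name, "devices")
        groups[int(name)] = sorted(os.listdir(devdir))
    return groups

for num, devices in iommu_groups().items():
    print(f"IOMMU group {num}: {' '.join(devices)}")
```

Devices that share a group number here must be passed through to a VM together, which is what the ACS override options work around.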
  14. Apologies, I messed up then... I'll get some sleep and come back to it.
    1 point
  15. Or if you only use the card for passthrough to a VM.
    1 point
  16. Not useless at all when you don't own any nvidia hardware.
    1 point
  17. I was under the impression that Emby doesn't have this behavior? To see if it does, and part of the "why", run:

      watch nvidia-smi

      Check that the power state is not latched to P0. Start a stream and verify the power state latches to P0. Stop the stream, verify the process has ended, and check whether the power state is stuck in P0. Then run:

      fuser -v /dev/nvidia*

      to see what processes are still using the driver.
    1 point
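When eyeballing `watch nvidia-smi` gets tedious, the Pn state can be pulled out of the text output programmatically. A small sketch (the sample line below is illustrative, not captured from a real session):

```python
import re

# nvidia-smi's plain-text table shows the per-GPU performance state
# in a column like "P0" (busy) or "P8" (idle). This extracts it.
SAMPLE = (
    "| 30%   45C    P8    10W / 120W |    512MiB /  6144MiB "
    "|      0%      Default |"
)

def power_states(smi_output: str) -> list:
    """Return the digits of every Pn performance state found."""
    return re.findall(r"\bP(\d+)\b", smi_output)

states = power_states(SAMPLE)
print(states)
print("idle" if all(s != "0" for s in states) else "busy or stuck at P0")
```

If the state stays at "0" after the stream has stopped, the `fuser` step in the post tells you which process is still holding the driver open.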
  18. https://lmgtfy.com/?q=How+to+use+docker+tags
    1 point
  19. On a casual note, do you have a favorite beer recipe, "Tom's Brew," or do you tend to experiment, change it up from batch to batch?
    1 point
  20. Sounds like you are attending a meeting. Stay strong.
    1 point
  21. If you have a cache device and "appdata" is set to "cache only" then it is preferred to use "/mnt/cache/appdata". When no cache device is present then use "/mnt/user/appdata" instead. "/mnt/disks" refers to "Unassigned Devices". Some users prefer to use UD to make a dedicated disk for Docker instead of the standard cache device.
    1 point
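The rule of thumb above can be written down as a tiny helper (the function name and flags are hypothetical; the paths are the ones quoted in the post):

```python
def appdata_path(has_cache: bool, cache_only: bool) -> str:
    """Pick the appdata host path per the advice above: bypass the
    user-share layer (/mnt/cache) when appdata lives only on the
    cache device; otherwise go through /mnt/user."""
    if has_cache and cache_only:
        return "/mnt/cache/appdata"
    return "/mnt/user/appdata"

print(appdata_path(has_cache=True, cache_only=True))    # /mnt/cache/appdata
print(appdata_path(has_cache=False, cache_only=False))  # /mnt/user/appdata
```

Going straight to /mnt/cache skips the user-share (FUSE) layer, which is why it is preferred when the share is pinned to the cache anyway.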
  22. @kizer thanks for your input. I'll give you the tl;dr. I have two main storage needs: 1) sample libraries, 2) a project library. A sample library is massive and needs maxed-out RAM and ultra-fast storage. If I open a patch, let's say a piano, the front of each note is loaded into RAM. Some of these can be 200+ MB in size each. When I play a note, the rest of the sample is streamed from storage as you play. When doing 60+ layers of instrumentation/libraries you're filling up 64+ GB of RAM. Add to that multiple workstations. So if you pooled storage you need lots of
    1 point
  23. My docker.img file is filling up due to excessive logging from various apps. Some applications will log almost everything that they do. If your docker.img file is filling up due to excessive logging, you should look at the application's settings within its GUI and try to limit the logging that it performs. Additionally, you can limit any docker's logging to a set size by adding the following to the Extra Parameters when you edit the application template (switch to advanced view via the button in the top right corner):

      --log-opt max-size=50m --log-opt max
    1 point
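To see what a cap like `max-size=50m` means on disk, here is a quick converter for Docker's size suffixes (the k/m/g handling and 1k = 1024 convention are assumptions based on Docker's usual byte-size notation, not stated in the post):

```python
# Convert Docker log-opt size strings ("50m", "1g", "512k") to bytes,
# assuming k/m/g suffixes with 1k = 1024 (Docker's usual convention).
SUFFIX = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}

def logopt_bytes(size: str) -> int:
    """'50m' -> 52428800; a bare number is taken as bytes."""
    size = size.strip().lower()
    if size and size[-1] in SUFFIX:
        return int(size[:-1]) * SUFFIX[size[-1]]
    return int(size)

print(logopt_bytes("50m"))
```

So the 50m cap in the post limits each container's current log file to about 52 MB inside docker.img.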
  24. All versions since v5 have kept track of disk assignments using the drive serial numbers, so yes, you can move them and it will work fine.
    1 point