Leaderboard

Popular Content

Showing content with the highest reputation on 07/08/19 in all areas

  1. I recently built an unRaid server with almost this exact setup. Same motherboard, same CPU; I have a 1660 Ti (Gigabyte 1660 Ti mini-ITX) set up in the primary PCIe x16 slot, but I have an old Radeon 5770 in the second slot. The only caveat is that I need to run the F5 version of the BIOS. I had upgraded to F32 and F40, and both gave me serious issues that would not allow Windows VMs to start with those BIOS revisions. I did a full post in the VM Troubleshooting section. I hope this helps. You can see my IOMMU groups here: Post with IOMMU groups
    2 points
  2. To the developers, and to everyone in this community involved with getting unRAID where it stands today! I guess unRAID is written differently these days. I remember starting with unRAID Server Plus almost 10 years ago, back in the days when Tom Mortensen thanked you in person for buying a license (no criticism here, of course). For some reason, still very obscure to my mind (getting old), I installed one of the first versions of Windows Home Server. After that got killed by MS I went with an Apple server, great idea (NOT), and then switched to Windows 10 with Stablebit Drivepool, which I've been running for a couple of years now. Somehow I never looked back, although I never really liked it a lot; way too clean and lean for an ex-Unix admin... Until recently. I rediscovered unRAID, and I must say I was blown away by the software, the plugins, the dockers, the whole setup, and especially by the supporting community that has grown around this beauty. Moved 15 TB of media through the Unassigned Devices plugin to a newly built array over the weekend; the parity build is now ongoing. Happy! Thanks! Mike
    1 point
  3. For media workflows it would be nice to be able to keep all of your projects, clips, samples, etc. in their own user share, but have the recently used projects, clips, etc. intelligently mirrored to the SSD cache for acceleration. When a file is written to a user share, it is written to the cache; in the background the cache is flushed to the array so that the data has parity protection. Recently read files would be mirrored onto the cache for accelerated access but still stored on the array. The reason for this is not having to deal with local storage copies and conflicts: projects remain in their respective user share, and an automated backup system would not end up with redundant copies, because today users need to manually copy data to and from SSDs to work. All that said, I'm very new to unRAID, so perhaps this is something that can be implemented manually, but thus far I've not found it in the forums.
    1 point
  4. Correct the share settings for the disk shares, or don't enable disk shares at all.
    1 point
  5. Okay, so just as I was giving up, I did it. The newest drivers (19.7.1) work, and both cards are passed through to their own VMs. I played around again with different combinations, with way too little sleep and too many hours in this forum, and then I found the post below. Modifying it for my system, using the correct identifier for my first Radeon VII, I managed to pass it through. With OVMF and Q35 3.1 I successfully installed the current AMD drivers, just display and audio, no Radeon software yet; I wanted to play it safe first. I have minor audio issues, but I guess I am past the worst problems. I hope. Thanks @bastl anyway for replying, and thanks @Siwat2545, wherever you are. On to new misadventures!
    1 point
  6. While not as easy as Handbrake, you can decode and encode with your GTX 750 on unRAID with ffmpeg:
       docker run --rm --runtime=nvidia -v "/unraid/your/videos:/input:rw" \
         -e 'NVIDIA_DRIVER_CAPABILITIES'='all' \
         -e 'NVIDIA_VISIBLE_DEVICES'='GPU-UUID-from-nvidia-pluginXXXX' \
         djaydev/ffmpeg-ccextractor \
         ffmpeg -hwaccel nvdec -i "/input/oldvideo.ts" -c:v hevc_nvenc -c:a copy "/input/newvideo.mp4"
     Edit: I don't know if the GTX 750 has NVDEC components or not (a quick way to check what the bundled ffmpeg supports is sketched after this item).
    1 point
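     A quick way to check what that particular ffmpeg build can use for hardware decoding (this reflects the build inside the image, not whether the GTX 750's own decode block supports it) is to reuse the same image and GPU UUID placeholder as above:
       docker run --rm --runtime=nvidia \
         -e 'NVIDIA_DRIVER_CAPABILITIES'='all' \
         -e 'NVIDIA_VISIBLE_DEVICES'='GPU-UUID-from-nvidia-pluginXXXX' \
         djaydev/ffmpeg-ccextractor \
         ffmpeg -hide_banner -hwaccels
     Swapping -hwaccels for -decoders lists every decoder the build was compiled with; the NVDEC-backed ones have "cuvid" in their names.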
  7. Not saying those won't work, but they're not really recommended: the 12 V pins on a SATA power plug are designed for 4.5 A maximum, and 4 disks can easily require 8 to 10 A during spin-up (rough math below).
    1 point
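     As rough arithmetic, assuming a typical 3.5" drive pulls roughly 2 to 2.5 A on the 12 V rail while spinning up: 4 drives x 2 to 2.5 A gives the 8 to 10 A quoted above, about double the 4.5 A that a single SATA power connector's 12 V pins are rated for.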
  8. This may well be fixable by putting the array into maintenance mode and running a file system check/repair.
    1 point
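     If the disk in question is XFS, the check/repair mentioned above can also be run from the terminal once the array is started in maintenance mode. A minimal sketch, assuming the affected drive is disk1 (adjust the md device number to match):
       xfs_repair -n /dev/md1   # dry run: report problems without changing anything
       xfs_repair /dev/md1      # actually repair the filesystem
     Running it against the /dev/mdX device rather than the raw disk keeps parity in sync; the webGUI's filesystem check on the drive's settings page wraps the same tool.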
  9. Maybe Samsung is still the best. Most NVMe drives work well with Unraid; only some specific cases run into trouble. I tried a Plextor M8PeG as simple cache storage and had no problems at all, but in the end I took it out and put it in a Windows machine, because that machine makes better use of it.
    1 point
  10. 😆 There is no difference between an add-in card and the mainboard's built-in one. Please plan well and do your research.
    1 point
  11. All C246 downstream devices (HDD, LAN, USB, etc.) share the upstream x4 link. There are many reports that, whether with single or dual NVMe, and even when connected directly to the CPU's PCIe lanes, drives can't reach their benchmark speeds. Different NVMe drives also have different issues; please search the forum. I don't mean NVMe isn't good, it's still the king of speed, just don't expect the result you'd get from a simple calculation.
    1 point
  12. Whatever the board is doing, a single x4 link should easily be able to handle your 2 SSDs at maximum speed.
    1 point
  13. FWIW, instead of entering the command, which might scare people off, you can also get the info via Tools - System Devices. And via the various ACS override options you can potentially separate devices that are natively in a single IOMMU group into their own groups.
    1 point
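     For reference, the command being alluded to is presumably the usual shell loop over /sys/kernel/iommu_groups; for anyone who does want the terminal version, a sketch:
       for d in /sys/kernel/iommu_groups/*/devices/*; do
         g=${d%/devices/*}; g=${g##*/}     # IOMMU group number
         printf 'IOMMU group %s: ' "$g"
         lspci -nns "${d##*/}"             # PCI address, device name and IDs
       done
     Tools - System Devices presents the same information without touching the console.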
  14. Apologies, I messed up then... I'll get some sleep and come back to it.
    1 point
  15. Or if you only use the card for passthrough to a VM.
    1 point
  16. Not useless at all when you don't own any nvidia hardware.
    1 point
  17. I was under the impression that Emby doesn't have this behavior? To see if it does, and part of "why", run "watch nvidia-smi" and check that the power state is not latched to P0. Start a stream and verify the power state latches to P0. Stop the stream, verify the process has ended, and check whether the power state is stuck in P0. Then run "fuser -v /dev/nvidia*" to see what processes are still using the driver (both commands are collected below).
    1 point
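     Collected for copy and paste, exactly as described above:
       watch nvidia-smi        # watch the GPU's power state (look for it sticking at P0)
       fuser -v /dev/nvidia*   # list the processes still holding the NVIDIA devices open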
  18. https://lmgtfy.com/?q=How+to+use+docker+tags
    1 point
  19. On a casual note, do you have a favorite beer recipe, "Tom's Brew," or do you tend to experiment, change it up from batch to batch?
    1 point
  20. Sounds like you are attending a meeting. Stay strong.
    1 point
  21. If you have a cache device and "appdata" is set to "cache only" then it is preferred to use "/mnt/cache/appdata". When no cache device is present then use "/mnt/user/appdata" instead. "/mnt/disks" refers to "Unassigned Devices". Some users prefer to use UD to make a dedicated disk for Docker instead of the standard cache device.
    1 point
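     As a concrete illustration of those paths (the container name "myapp" and image "example/myapp" are placeholders, not a real template):
       # with a cache device and the appdata share set to cache-only:
       docker run -d --name=myapp -v /mnt/cache/appdata/myapp:/config example/myapp
       # with no cache device, fall back to the user share path:
       docker run -d --name=myapp -v /mnt/user/appdata/myapp:/config example/myapp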
  22. @kizer thanks for your input. I'll give you the tl;dr. I have two main storage needs: 1) sample libraries and 2) a project library.
     A sample library is massive and needs maxed-out RAM and ultra-fast storage. If I open a patch, let's say a piano, the front of each note is loaded into RAM; some of these can be 200+ MB in size each. When I play a note, the rest of the sample is streamed from storage as you play. When doing 60+ layers of instrumentation/libraries you're filling up 64+ GB of RAM. Add to that multiple workstations. So if you pooled storage you'd need lots of space that's ultra fast and cannot be moved, because each library has a directory path.
     The projects folder will be streaming video and dozens, if not more than a hundred, uncompressed audio files. These files can sometimes point to/reference files from prior projects, so it's best to keep all projects in the same pool; otherwise you end up redefining hundreds of paths or maintaining multiple copies of the same thing.
     What I need is a parity-protected JBOD with an NVMe read cache. 40+ TB of SSD is a solution, but it's overkill since 90% of it will lie dormant, and it's also not my best solution. The best solution is an expandable 40+ TB of HDD and 4 TB of NVMe with an intelligent read cache/tier, networked over 10GbE. So that's what I started building, but then I quickly discovered the cache was not what I expected. It's entirely my fault for not reading enough of the literature, but I think an intelligent read cache/tier would be a very nice feature for people trying to work with media, so that's why I was suggesting it as a feature for future versions of unRAID.
     I'm still exploring solutions. I think I'm just going to stick with local NVMe storage and SATA SSDs and back up to the unRAID system, but that generates tons of file redundancies I was hoping to avoid. I am considering a 10GbE RAID 10 using PCIe cards, but it sure would be nice to have an NVMe cache for the low latency. To do an NVMe tier I'd need to run Windows Server for Storage Spaces Direct or some Linux variant. I just can't seem to figure out a way to do this with unRAID as I'd planned.
    1 point
  23. My docker.img file is filling up due to excessive logging from various apps. Some applications will log almost everything that they do. If your docker.img file is filling up due to excessive logging, you should look at the application's settings within its GUI and try to limit the logging that it performs. Additionally, you can limit any docker's logging to a set size by adding the following to the Extra Parameters when you edit the application template (switch to advanced view via the button in the top right corner):
       --log-opt max-size=50m --log-opt max-file=1
     This will limit the log size to a very reasonable 50 MB. (A quick way to see which container logs are actually eating the space is sketched after this item.)
     EDIT: DO NOT USE THE LOG ROTATION OPTION IN UNRAID LISTED BELOW. IT IS NOT OBVIOUS WHAT STEPS ARE REQUIRED TO MAKE IT WORK PROPERLY. For the log rotation settings to take effect, you MUST remove and reinstall your containers (or delete the docker image). It is advised to use the Extra Parameters section instead. Under unRaid 6.7.0+ the logging options now insert the appropriate entries into the Extra Parameters, but you still basically have to remove and then re-add the container.
     EDIT: As of unRaid 6.4.0-rc7 you can set this globally for all applications: go to Settings - Docker, stop the service, switch to advanced view, enable the logging rotation option accordingly, and restart the service.
    1 point
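     To see which container logs are actually consuming the space, a sketch along these lines can be run from the unRAID console (paths assume Docker's default json-file logging driver):
       du -ah /var/lib/docker/containers/ 2>/dev/null | grep -- '-json.log' | sort -rh | head
     Matching a log file's long container ID back to a name can be done with "docker ps --no-trunc".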
  24. All versions since v5 have kept track of disk assignments using the drive serial numbers, so yes, you can move them and it will work fine.
    1 point