eatoff

Members
  • Posts: 42
  • Joined
  • Last visited

eatoff's Achievements

Rookie (2/14)

Reputation: 5
  1. So, I've just made the switch from AMD to Intel, and am having trouble getting the Intel QSV plugin working. I was previously using VAAPI with the AMD iGPU and it worked great. It gets stuck like this: That is the entire log too; it just says "infinity hours" and doesn't seem to do anything. I cannot terminate the worker either, I have to reboot the container. Here are my settings: The new CPU is an Intel 11th-gen 11700 and the iGPU is the UHD 750, if any of that matters (see the iGPU sanity-check sketch after this list). EDIT: Tried VAAPI with the Intel iGPU, and it failed instantly, with the log full of:
     [h264 @ 0x5572ddb63940] Failed to end picture decode issue: 23 (internal decoding error).
     [h264 @ 0x5572ddb63940] hardware accelerator failed to decode picture
     [h264 @ 0x5572ddb80440] Failed to end picture decode issue: 23 (internal decoding error).
     [h264 @ 0x5572ddb80440] hardware accelerator failed to decode picture
  2. Would love to know this. I have an 11th gen on the way, and it would be awesome to be able to use the HDMI output for a VM but still use the GPU for hardware transcoding in Plex. I'm running RC3 currently, but on an AMD system.
  3. I always forget about the help buttons, sorry. Thank you for checking for me though. FYI, there is a grammar/spelling issue in the help text: "Specifies how accumulation periods are executed. Daily means every subsequent day the parity check continuous until finished. Weekly means every subsequent week the parity check continuous until finished." "Continuous" should be "continues" or "resumes".
  4. Good to see RC3 released. Looking at the dividing up of the parity check, it's not quite clear how you can split it up. Say, for example, I want to run a parity check once a quarter, on the first Monday, starting at 1AM but stopping again at 6AM and then repeating until it's complete; is the screenshot how that's done?
  5. Which 2 devices are you referring to? The 2 devices in the unassigned devices list, or the 2 devices that were in the original pool? Or should I just nuke all 3 SSDs and start from scratch (after moving appdata etc. off the cache pool)? EDIT: I'm moving everything off the cache now, and will run blkdiscard on the 3 SSDs (see the sketch after this list) and start from scratch.
  6. So, the result of deleting the pool and starting fresh is exactly the same. Writing to the array still sees data being written to both the NVMe SSD and the SATA SSD in the unassigned devices section.
  7. This didn't work, unfortunately. I followed the steps: added all three devices to the same pool and started the array, let it copy everything between the devices across the pool, stopped the array, removed the SATA SSD from the pool, and started the array again. Unraid complained the cache was unmountable and that I should format it (which I declined to do, since that would wipe the data I wanted to keep). So, reverting back, I made the pool only have the 1 device (the original NVMe I wanted to keep) and restarted the array. Now when writing to the array I can see the old SATA SSD also being written to in the unassigned devices. I have attached the diagnostics, but I think I'm going to have to just delete the pool entirely and start it fresh. unraid-diagnostics-20220304-0853.zip
  8. I mean having 2 SSDs in a pool for redundancy, so yes, a RAID 1 pool. So I put the drive with the data I want to keep in slot 1 of the pool, and then whatever drive I put in the second slot will be overwritten to make the RAID 1?
  9. Awesome, thanks for the effort writing this up. Will I need to move all the data off the cache before unassigning all pool devices? Edit: or do I just put the drive I want to keep the data from in slot 1 of the cache pool, and the parity will then be built from it, preserving that data?
  10. EDIT: I have removed the second cache pool in an effort to resolve this, but it made no difference. The SATA SSD is now an unassigned device (not mounted) and it's still being written to/read from as if it were still in the original cache pool. Done. unraid-diagnostics-20220303-1501.zip
  11. My situation: I had a 1TB NVMe and a 1TB SATA SSD as a cache pool. I bought an additional 1TB NVMe so that I could swap out the SATA SSD to make a separate unprotected cache pool. So I stopped the array, removed the SATA SSD from the cache pool, and swapped the new NVMe in its place. Started the array and it complained that the pool could not be started due to too many changes. OK, so I dropped the pool down to just 1 device (the original NVMe) and it started up fine. I then set the old SATA SSD up as its own cache pool (called cache_unprotected) and started the array. I saw that the cache_unprotected pool had data in it, which must have been from when it was being used as parity, so I went through with MC and deleted the appdata folder and a couple of others. What actually happened, though, was that these folders were deleted from BOTH cache pools! Cue panic and a server reboot. Nope, the appdata folders have been deleted from both pools. So I'm now restoring my appdata folder from a backup, and I can see it's also restoring that folder to BOTH cache pools (screenshot). So, what is the right way to do what I was doing, and how can I "unlink" these devices (see the pool-membership check sketch after this list)? Am I going to have to move all the data to the array, delete all the cache pools, and then move it all back again (as if I only had the one cache device and was replacing it)? I have gone through that once before already.
  12. I just tried this, and no luck for me. All my game files were on the one disk in the array already. Edit: I just re-read that; I hadn't changed the docker mapping. I'll have a go at that over the weekend and report back. Update: tried it and still the same error for Left 4 Dead 2.
  13. Does this work with Unraid 6.10 RC2? The current status of drives spun down (as of last poll) sits at 3 all the time (I have 3 drives plus a parity in my array), even when all drives are spun up and the polling time is set very short. I can get it to activate turbo writes by setting "Disks Allowed To Be Spun Down Before Invoking Turbo Mode" to 3, but I want to be able to set this to 2 or 1 so that turbo writes will only kick in if most of my drives are spun up anyway. EDIT: had a read back about smartctl and hdparm, and it looks like my USB enclosure for my drive isn't passing this info through correctly (see the smartctl/hdparm sketch after this list).
  14. OK, so I think I've found an issue. When I run the mover with the following setting in the share: It then moves all the data to Disk 1 only, as per: When I specifically exclude Disk 1 and run the mover for just that share, it still pushes all data to Disk 1 only. I have done a full server reboot and still get the same behaviour. I am using the mover tuner plugin, if that makes a difference (see the share config check after this list).
  15. I've hit this error - https://github.com/ValveSoftware/Source-1-Games/issues/1685 Looks like Source games don't like XFS filesystems and there is a missing file, so Source games won't load. BioShock Infinite worked just fine though. Very impressed with how this is going.
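
iGPU sanity-check sketch (for post 1): a minimal way to test the UHD 750 outside the plugin, assuming the container has the Intel render node passed through at /dev/dri/renderD128 and that vainfo and ffmpeg are installed inside it. The device path, driver name, and file names here are assumptions, not taken from the post.

    # Confirm the render node is visible inside the container (path assumed).
    ls -l /dev/dri

    # Check which VAAPI driver loads; Rocket Lake's UHD 750 generally wants the
    # newer iHD (intel-media-driver) rather than the legacy i965 driver.
    LIBVA_DRIVER_NAME=iHD vainfo

    # Minimal 30-second VAAPI encode of a sample file, independent of the plugin.
    ffmpeg -vaapi_device /dev/dri/renderD128 -i sample.mkv \
           -vf 'format=nv12,hwupload' -c:v h264_vaapi -t 30 /tmp/vaapi_test.mkv

    # Same idea with the QSV encoder (software decode, hardware encode).
    ffmpeg -i sample.mkv -c:v h264_qsv -t 30 /tmp/qsv_test.mkv

If these fail with the same decode/encode errors, the problem is at the driver or device-passthrough level rather than in the plugin settings.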
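
blkdiscard sketch (for posts 5-7): a minimal sequence for wiping the three SSDs before rebuilding the pool. The device names below are placeholders; check yours with lsblk first, and note that blkdiscard irreversibly erases the whole device.

    # Identify the SSDs first; the names below are examples only.
    lsblk -o NAME,MODEL,SIZE

    # Discard (wipe) each SSD before recreating the pool. Destroys all data.
    blkdiscard /dev/nvme0n1
    blkdiscard /dev/nvme1n1
    blkdiscard /dev/sdX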
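
Pool-membership check sketch (for posts 6, 7, 10 and 11): one likely explanation for deletes and writes hitting both "pools" is that the old SATA SSD is still a member of the original btrfs filesystem, so btrfs keeps using it even though the GUI shows it as unassigned. A quick check, assuming the pool is mounted at the usual /mnt/cache and the device name below is a placeholder:

    # List every btrfs filesystem and the devices that belong to it.
    btrfs filesystem show

    # Or inspect just the mounted cache pool.
    btrfs filesystem show /mnt/cache
    btrfs filesystem usage /mnt/cache

    # If the old SSD is still listed, it has to be removed from the filesystem
    # itself, not just unassigned in the GUI (device name assumed).
    btrfs device remove /dev/sdX1 /mnt/cache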
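
smartctl/hdparm sketch (for post 13): a small test of whether the USB enclosure's bridge passes power-state and SMART queries through. The device name is a placeholder, and the -d sat passthrough hint is an assumption that often, but not always, applies to USB-SATA bridges.

    # Report the drive's current power state (active/idle vs standby).
    hdparm -C /dev/sdX

    # Query SMART info without waking a spun-down drive; many USB-SATA bridges
    # only answer when the SAT passthrough option is given.
    smartctl -n standby -i /dev/sdX
    smartctl -d sat -n standby -i /dev/sdX

If neither command returns sensible data, the plugin has no way to know the real spin-down state of that drive.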
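
Share config check sketch (for post 14): it can help to compare what the share is actually configured with against what the GUI shows, and where its files really sit. This assumes Unraid stores per-share settings in /boot/config/shares/<sharename>.cfg with include/exclude keys; the file path, key names, and share name are assumptions to verify on your own system.

    # Show the include/exclude and cache settings the share is really using
    # (path and key names assumed as described above).
    cat /boot/config/shares/YourShare.cfg
    grep -i 'include\|exclude\|usecache' /boot/config/shares/YourShare.cfg

    # See which array disks currently hold the share's files.
    ls -d /mnt/disk*/YourShare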