eatoff

Members · 71 posts
Everything posted by eatoff

  1. First up, great plugin. It has definitely reduced the power usage on my server by letting the drives spin down. Feature request: settings per cache pool. Use case: I currently have 2 cache pools, one for general data ingestion, VMs, Docker etc., and a second for surveillance footage. I would like the mover criteria to differ between the two caches. For the surveillance cache I just want the mover to run when it's at, say, 80% full regardless of file age, but on my data cache I want to keep files for up to 20 days before moving them off. I get into this odd scenario where the drives spin up every hour whenever there is footage more than 20 days old but the cache is under 80% full. Then the drive slowly fills up, all the data comes off the surveillance cache, and it's fine again.
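     In the meantime, a fullness-only trigger for one pool can be approximated with a small cron script (e.g. via the User Scripts plugin). This is only a sketch under assumptions: the pool mount point, the 80% threshold, and the `should_run_mover` helper are all hypothetical; `/usr/local/sbin/mover` is where the stock mover lives on current Unraid releases, but verify on your system.

     ```shell
     #!/bin/bash
     # Hedged sketch: run the mover for a pool only once it passes a
     # fullness threshold, ignoring file age entirely.
     POOL=/mnt/surveillance_cache   # hypothetical mount point -- adjust
     THRESHOLD=80                   # start moving at 80% full

     # Decide whether the mover should run: usage >= threshold.
     should_run_mover() {
         [ "$1" -ge "$2" ]
     }

     # df prints usage like " 43%"; strip the spaces and the % sign.
     usage=$(df --output=pcent "$POOL" 2>/dev/null | tail -1 | tr -d ' %')

     if should_run_mover "${usage:-0}" "$THRESHOLD"; then
         [ -x /usr/local/sbin/mover ] && /usr/local/sbin/mover start
     fi
     ```

     Note this invokes the global mover, so it only really works as intended if the age-based rules for the other pool are handled separately (e.g. by the Mover Tuning plugin).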
  2. I think people are scared to test; I know I am. I have 1 Unraid server, and if it goes down, then that's my 1 server gone. Still want SR-IOV to work though.
  3. I'm not that knowledgeable about Linux, but is the SR-IOV compatibility built into the new 6.11 RC1?
  4. I'm getting the same issue. Nothing in the logs until I try to access the webUI, then I get: Error: incorrect HTTP headers line. The ports are all available when checking the Docker allocations. I've tried network mode as host and as bridge, same result. Any ideas? UPDATE: It appears to have been a cookie issue of some sort; when attempting to access the webUI from my phone, it works fine.
  5. So, I've just made the switch from AMD to Intel, and am having trouble getting the Intel QSV plugin working. I was previously using VAAPI for AMD and it worked great with the iGPU. It gets stuck like this: That is the entire log too. It just says "infinity hours" and doesn't seem to do anything. I cannot terminate the worker either; I have to restart the container. Here are my settings: The new CPU is an Intel 11th gen i7-11700 and the iGPU is the UHD 750, if any of that matters. EDIT: Tried VAAPI with the Intel iGPU, and it failed instantly, with the log full of:
     [h264 @ 0x5572ddb63940] Failed to end picture decode issue: 23 (internal decoding error).
     [h264 @ 0x5572ddb63940] hardware accelerator failed to decode picture
     [h264 @ 0x5572ddb80440] Failed to end picture decode issue: 23 (internal decoding error).
     [h264 @ 0x5572ddb80440] hardware accelerator failed to decode picture
  6. Would love to know this. I have an 11th gen on the way, and it would be awesome to be able to use the HDMI output for a VM but still use the GPU for hardware transcoding in Plex. I'm running RC3 currently, but on an AMD system.
  7. I always forget about the help buttons, sorry. Thank you for checking for me though. FYI, there are some grammar/spelling issues in the help: "Specifies how accumulation periods are executed. Daily means every subsequent day the parity check continuous until finished. Weekly means every subsequent week the parity check continuous until finished." Continuous should be continues or resumes.
  8. Good to see RC3 released. Looking at the dividing up of the parity check, it's not quite clear how you can split it up. Say, for example, I want to run a parity check once a quarter on the first Monday, starting at 1AM but stopping again at 6AM, and then repeating until it's complete. Is this how it's done in the screenshot?
  9. Which 2 devices are you referring to? The 2 devices in the unassigned devices list, or the 2 devices that were in the original pool? Or just nuke all 3 SSDs and start from scratch (after moving appdata etc. off the cache pool)? EDIT: I'm just moving everything off the cache now, and will run blkdiscard on the 3 SSDs and start from scratch.
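     For reference, blkdiscard (from util-linux) irreversibly discards every block on the device, so the device names have to be triple-checked against the Main tab or `lsblk` first. This is a hedged sketch: the `wipe_ssd` helper and the `/dev/*` names are placeholders, not anything Unraid ships.

     ```shell
     #!/bin/bash
     # Hedged sketch: wipe an SSD with blkdiscard, with a guard against
     # typo'd device names. DESTROYS ALL DATA on the target device.
     wipe_ssd() {
         dev="$1"
         # Refuse to run against anything that is not a block device.
         [ -b "$dev" ] || { echo "not a block device: $dev" >&2; return 1; }
         blkdiscard "$dev"
     }

     # Example (array stopped, pool removed, data already moved off):
     # wipe_ssd /dev/sdX        # the SATA SSD -- substitute your device
     # wipe_ssd /dev/nvme0n1    # each NVMe SSD in turn
     ```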
  10. So, the result of deleting the pool and starting fresh is exactly the same. Writing to the array sees data being written to both the NVMe SSD and the SATA SSD in the unassigned devices section.
  11. This didn't work unfortunately. I followed the steps: added all three devices to the same pool and started up the array, let it copy everything between the devices across the pool, stopped the array, removed the SATA SSD from the pool and started the array. Unraid complained the cache was unmountable and that I should format the cache (which I declined to do, since that would wipe all the data I wanted to keep). So then, reverting back, I made the pool only have the 1 device (the original NVMe I wanted to keep) and restarted the array. Now when writing to the array I can see the old SATA SSD also being written to in the unassigned devices. I have attached the diagnostics, but I think I'm going to have to just delete the pool entirely and start fresh. unraid-diagnostics-20220304-0853.zip
  12. I mean when having 2 SSDs in a pool for redundancy, so yes, a RAID 1 pool. So I put the drive with the data I want to keep in slot 1 of the pool, and then whichever drive I put in the second slot will be overwritten to make the RAID 1?
  13. Awesome, thanks for the effort writing this up. Will I need to move all the data off the cache before unassigning all pool devices? Edit: or do I just put the drive I want to keep the data from in slot 1 of the cache pool, and then parity will be built from it, thus preserving that data?
  14. EDIT: I have removed the second cache pool in an effort to resolve this, but it made no difference. The SATA SSD is now an unassigned device (not mounted) and it's still being written to/read from as if it were still in the original cache pool. Done. unraid-diagnostics-20220303-1501.zip
  15. My situation: I had a 1TB NVMe and a 1TB SATA SSD as a cache pool. I bought an additional 1TB NVMe so that I could swap out the SATA SSD to make a separate unprotected cache pool. So I stopped the array, removed the SATA SSD from the cache pool, and swapped the new NVMe in its place. Started the array, and it complained that the pool could not be started up due to too many changes. OK, so I dropped the pool down to just 1 device (the original NVMe) and it started up fine. I then set the old SATA SSD up as its own cache (called cache_unprotected) and started the array. I saw that the cache_unprotected pool had data in it, which must have been from when it was being used for redundancy, so I went through with MC and deleted the appdata folder and a couple of others. What actually happened, though, was that these folders were deleted from BOTH cache pools! Cue panic and a server reboot. Nope, the appdata folders have been deleted from both pools. So I'm now restoring my appdata folder from a backup, and I can see it's also restoring that folder to BOTH cache pools (screenshot). So, what is the right way to do what I was doing? And how can I "unlink" these devices? Am I going to have to move all the data to the array, then delete all cache pools and move it all back again (as if I only had the one cache device and was replacing it)? I have gone through that once before already.
  16. I just tried this, and no luck for me. All my game files were on the one disk in the array already. Edit: I just re-read that; I hadn't changed the Docker mapping. I'll have a go at that over the weekend and report back. Update: tried it and still the same error for Left 4 Dead 2.
  17. Does this work with Unraid 6.10 RC2? The current status "drives spun down (as of last poll)" sits at 3 all the time (I have 3 drives plus a parity in my array), even when all drives are spun up and the polling time is set very short. I can get it to activate turbo writes by setting "Disks Allowed To Be Spun Down Before Invoking Turbo Mode" to 3, but I want to be able to set this to 2 or 1 so that turbo writes will only kick in if most of my drives are spun up anyway. EDIT: had a read back about smartctl and hdparm, and it looks like the USB enclosure for my drive isn't passing this info through correctly.
  18. OK, so I think I've found an issue. When I run the mover with the following setting in the share: It then moves all the data to Disk 1 only, as per: When I specifically exclude Disk 1 and run the mover for just that share, it still pushes all the data to Disk 1 only. I have done a full server reboot and still get the same behaviour. I am using the mover tuner plugin, if that makes a difference.
  19. I've hit this error - https://github.com/ValveSoftware/Source-1-Games/issues/1685 Looks like Source games don't like XFS filesystems and there is a missing file. Source games won't load. BioShock Infinite worked just fine though. Very impressed with how this is going.
  20. So did you get it all working? You can use the GPU in windows? And can output via HDMI?
  21. An HDMI dummy plug solved all my issues. VNC is now responsive, and controller support works. Pretty good performance, very impressed. I did run into an issue though: I ran Docker Safe New Permissions to fix some permission issues, and then couldn't launch any games. I had to manually change the permissions in my games folder. Is there a way to get the permissions working without having to change them back? I run that New Permissions script more often than I'd care to admit.
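     My understanding is the permissions reset chmods files to rw-rw-rw-, which strips the execute bit that Linux game binaries and launch scripts need. This is a hedged sketch of the manual fix, not anything official: `restore_exec`, the name patterns, and the share path are all assumptions; point it at your own games share.

     ```shell
     #!/bin/bash
     # Hedged sketch: re-add the owner execute bit on the files a Steam
     # library typically needs to be executable after a permissions reset.
     restore_exec() {
         dir="$1"
         # Shell scripts and Linux game binaries; these name patterns are
         # a heuristic, not exhaustive.
         find "$dir" -type f \( -name '*.sh' -o -name '*.x86_64' \) \
             -exec chmod u+x {} +
     }

     GAMES=/mnt/user/games          # hypothetical games share -- adjust
     if [ -d "$GAMES" ]; then
         restore_exec "$GAMES"
     fi
     ```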
  22. Have you seen the beta BIOS for the X300? https://botflakes.de/asrockwiki/docs/bios/deskmini/ It says "It contains a fix for VMWare ESXi". Not sure exactly what that means, but it could be a fix for virtualisation? According to this, it also resolved his Linux gaming issues -
  23. OK, so I just tried again and it came up fine. I updated it to the latest version. The problem I was experiencing before was this - My problem now is that my Xbox controller doesn't work. It's paired to my Google TV via Bluetooth. My phone with on-screen controls doesn't work either. My PC running keyboard and mouse streams just fine. Running 6.10 RC2, and the plugin is installed too. The server has been rebooted with no effect.
  24. How can we get controller support for instances where we can't use network: host? If you're running 2 instances, then you can't be using host networking for both, so you can't get controller support for both at the same time, yeah? I think that's all I'm missing: I need to be able to use a controller with a custom network address due to the number of other containers I'm already running.
  25. Edit: NVM, a full server reboot and it's working. Apart from controller support; that's still not working, but that's because I'm not using host networking due to port conflicts.