Everything posted by TheDragon

  1. I'm trying to set up custom permissions on disk and user shares which will be mounted via NFSv4 on a Debian server. I need them to match a UID/GID I use on that Debian server. I've set permissions for the local folders (via SSH as root) using chown -R UID:GID and chmod -R xxx, then set up NFS for those user/disk shares and mounted them on Debian - everything works perfectly at this point. However, if I stop the array and restart it, all the custom permissions on the local files on unRAID are reset back to nobody:users 777. Is there somewhere I can either prevent this resetting of my custom permissions, or change the user/group/permissions that are being set automatically? Or is there a way to have all files/folders appear as owned by a specific UID/GID on my Debian server once mounted, without having to change the ownership/permissions locally on unRAID?
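     One way to sidestep the ownership reset is to squash all NFS access to the UID/GID used on the Debian side, so nothing needs changing on unRAID at all. A sketch of such an export rule, assuming a hypothetical share, UID/GID 1000:1000, and a 192.168.1.0/24 LAN - adjust to your values:

     ```
     # exports(5)-style rule (on unRAID this goes in the share's NFS "Rule" field)
     # all_squash maps every client user to anonuid/anongid, so files appear
     # as 1000:1000 on the Debian mount even while unRAID keeps nobody:users 777.
     "/mnt/user/media" 192.168.1.0/24(sec=sys,rw,all_squash,anonuid=1000,anongid=1000)
     ```

     With this approach the periodic permission reset on the unRAID side becomes harmless, since ownership seen by the client is decided by the export rule rather than the on-disk metadata.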
  2. Aah thanks for sharing that. Doesn't seem logical but at least seems isolated to that attribute! Thanks for your help @JorgeB
  3. If we focus on just the power-on hours for now, how could it be a device issue if it's showing a value above 0? That just doesn't seem to make sense logically.
  4. Does anyone have any other ideas, or know more about how SMART works with NVMe drives?
  5. It's the power-on hours specifically that caught my eye, as it's clearly not updating, which in turn made me doubt whether the percentage-used attribute or other attributes were being updated either. Do only some of the SMART attributes update on this type of drive? Is there anything else I could try?
  6. Sorry for the slow reply, diagnostics attached.
  7. I'm on the current unRAID version, and generally update within a week or so of a new release. So yeah, it's certainly possible, but I'm not sure if this is correlation or causation!
  8. I have an NVMe drive which was showing updates to its SMART attributes page until about 10 days after installation - when I looked today, it hadn't updated since (that's months, not days or weeks). Is there something obvious I'm overlooking, or does this indicate a problem with the drive?
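     To rule out a stale web-UI view versus the drive itself, the health log can be read straight from the unRAID console. The device name /dev/nvme0 is an assumption; check yours first:

     ```shell
     # Query the NVMe health/SMART log directly, bypassing the GUI's cached page.
     smartctl -A /dev/nvme0      # health attributes via smartmontools (ships with unRAID)
     nvme smart-log /dev/nvme0   # same data via nvme-cli, if it's installed
     ```

     If power-on hours increments here but not in the GUI, the drive is fine and it's a display/caching issue; if it's frozen here too, that points at the device or its firmware.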
  9. Are there any updated installation steps for LinkAce? I've tried various methods - following the steps in the Docker description, and also the simple and advanced guides on the official LinkAce website - sadly no joy whatsoever!
  10. I'm not sure if all the steps I took are strictly needed - I just figured if it ain't broke, don't try and fix it! Also, to clarify my settings in the Plex Transcoder section, I have: "Enable HDR tone mapping" disabled, "Use hardware acceleration when available" enabled, and "Use hardware-accelerated video encoding" enabled. With those selected it uses HW. If I enable HDR tone mapping as well, everything is done by the CPU.
  11. Just to share my experience: using an i3-12100 on a Z690 motherboard with unRAID 6.11.0-rc2, I now have HW transcoding working perfectly - the only thing that doesn't work is tone mapping. For this I used echo "blacklist i915" > /boot/config/modprobe.d/i915.conf, the Intel-GPU-TOP and GPU Statistics plugins, and /dev/dri passed to the official Plex docker container. No other changes from the default configuration. I've played a 4K home movie in Plex, transcoded down to 720p without issues. I've also converted it to a lower quality and downloaded it to an Android phone with no problem either. Both were definitely using the GPU, as I could see low CPU usage and high GPU usage throughout. I've had no crashes or errors in either my system or container logs. If I enable tone mapping, I don't get any errors or crashes, but it uses the CPU instead of the GPU. Hope this is useful to others!
  12. Quick question: I have a new build with an i3-12100 on 6.10.0 where I've left a monitor plugged in and turned on, but there's no /dev/dri folder. I've tried the Intel-GPU-TOP plugin with this version of unRAID, which from my reading I think covers the only requirements - am I missing something stupid, or is this expected behaviour without the custom kernel?
  13. I've run across an issue where, several times, mover has failed to complete transfers from cache to array disks due to lack of space. The files on the array disks show up in MC as .filename.flac, which isn't the full file based on its size. After making space I was able to get it to move over, and ended up with filename.flac as well as .filename.flac. Is there a way to search my array disks for other partial files so I can delete them all? I only noticed the partials while trying to free up space, and they could be littered all over the array disks wasting space. Any suggestions would be much appreciated!
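     A minimal sketch of a safe search: list dot-prefixed files only where a completed counterpart sits next to them (those are the clear leftovers); lone dot-files are worth reviewing by hand before deleting, since they may be the only copy. The /mnt/disk* paths are assumptions for a typical unRAID array:

     ```shell
     #!/bin/sh
     # List dot-prefixed files whose non-dotted counterpart exists in the same
     # directory -- i.e. likely partial copies left behind by an interrupted move.
     find_partials() {
         find "$1" -type f -name '.*' | while IFS= read -r f; do
             dir=$(dirname "$f")
             base=$(basename "$f")
             # The completed copy would be the same name without the leading dot.
             [ -e "$dir/${base#.}" ] && printf '%s\n' "$f"
         done
     }

     # Example (hypothetical array paths) -- review the list before piping to rm:
     # for d in /mnt/disk*/; do find_partials "$d"; done
     ```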
  14. Do you think you'll have a chance to share info on setting up alerts? I appreciated that when you replied you said you had a lot on your plate - not hassling you whatsoever, just checking it wasn't forgotten 👍
  15. Awesome info you've shared - thank you so much for taking the time to do so!! Could you possibly cover how to set up alerts? I'm set up with this and it's working great; I just can't quite get my head around setting alerts, especially with things that are true/false... For example, I have a dashboard for my router and want to set alerts to notify me of VPN or WAN connections going up or down.
  16. Thanks Roxedus I didn’t realise that was possible!
  17. I'm trying to follow the rule about matching names - I already have a Discord account for another forum with the same rule. I tried to change my username on here but there's no option to do so. I also tried setting up a new account so I can get on Discord that way, but I keep getting this message: You are not permitted to register a user account with this site. Error code: 2S129/1. Are you able to help me either change my username or set up another forum account?
  18. I’ve upgraded to 6.9.1 and I’m trying to create a backup flash drive. I’ve been a user for many years, but the most recent instructions for creating the flash drive don’t have details on how to do so on Linux. I tried using old instructions, and the script on my current flash drive (as the Linux script no longer seems to be included if you download the latest version off the website). The old version of the script fails though, saying ‘unable to locate unRAID install files’. I also don’t have access to a Windows or Mac machine, so would appreciate if someone can give me pointers on how to create it on Linux in 6.9.1. Thanks in advance.
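     For what it's worth, a sketch of the legacy-BIOS route on Linux, based on what the old make_bootable_linux script did (install syslinux and write an MBR). Device names, paths, and the zip filename here are illustrative - double-check the device with lsblk before writing anything, and assume the syslinux package is installed:

     ```shell
     # Hypothetical device /dev/sdX -- verify with lsblk first, this is destructive!
     mkfs.vfat -F 32 -n UNRAID /dev/sdX1        # unRAID needs a FAT32 volume labelled UNRAID
     mount /dev/sdX1 /mnt/usb
     unzip unRAIDServer-6.9.1-x86_64.zip -d /mnt/usb   # zip from the download page
     cp -r /path/to/flash-backup/config /mnt/usb/      # restore your key, shares, settings
     umount /mnt/usb
     syslinux /dev/sdX1                          # install the bootloader onto the partition
     dd if=/usr/lib/syslinux/mbr/mbr.bin of=/dev/sdX   # write the MBR; path varies by distro
     ```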
  19. Hi there, I'm trying to get a wildcard cert using Cloudflare but it keeps giving this error - I've checked the API key, and even regenerated a new one, but it just keeps giving the same error every time. Is there anything you can suggest trying? 👍

      Variables set:
      PUID=99
      PGID=100
      TZ=Europe/London
      URL=jaxnet.uk
      SUBDOMAINS=wildcard
      EXTRA_DOMAINS=
      ONLY_SUBDOMAINS=true
      DHLEVEL=4096
      VALIDATION=dns
      DNSPLUGIN=cloudflare
      [email protected]
      STAGING=

      4096 bit DH parameters present
      SUBDOMAINS entered, processing
      Wildcard cert for only the subdomains of domain.net will be requested
      E-mail address entered: [email protected]
      dns validation via cloudflare plugin is selected
      Generating new certificate
      Saving debug log to /var/log/letsencrypt/letsencrypt.log
      Plugins selected: Authenticator dns-cloudflare, Installer None
      Obtaining a new certificate
      Performing the following challenges:
      dns-01 challenge for domain.net
      Cleaning up challenges
      Error determining zone_id: 0 connection failed.. Please confirm that you have supplied valid Cloudflare API credentials.
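     In case it helps: with the dns-cloudflare plugin the credentials live in a file inside the container, not in the container variables. A sketch, assuming the linuxserver image's usual config path - the "zone_id" error commonly means the key/email pair is mismatched, or a scoped token is missing the Zone:Read permission:

     ```
     # /config/dns-conf/cloudflare.ini (in appdata on the host)
     # Either a global API key plus account email:
     dns_cloudflare_email = [email protected]
     dns_cloudflare_api_key = <global API key>
     # ...or, on newer certbot versions, a scoped token with Zone:Read + DNS:Edit:
     # dns_cloudflare_api_token = <token>
     ```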
  20. I have all my dockers set up to back up docker data using CA at 9am on a Monday - that has finished now, but I've found myself in a bit of a jam as my broadband has died and I'm told it might be a while before it's fixed by an engineer. As the docker is set to download the latest version of Plex on startup, this is obviously failing due to the lack of broadband. Is there any way I can bypass the attempt to upgrade Plex so I can use it in the meantime? Hope that makes sense, as I've not had much sleep!!
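     If it's the linuxserver Plex container (an assumption on my part), the startup updater is driven by the VERSION environment variable, and setting it to docker makes the container use the Plex version baked into the image instead of downloading one:

     ```
     # Container environment variable (edit the template in the unRAID Docker tab)
     VERSION=docker   # skip the startup download; run the version shipped in the image
     ```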
  21. Is anyone able to point me in the right direction for running the 'occ' command inside the container? I can't seem to work out where the occ file is once I've used docker exec to get inside the container.
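     In case it helps others landing here: occ is a PHP script sitting in the Nextcloud web root, and that path differs by image. Both lines below are sketches assuming a container named nextcloud - adjust the name and pick the line matching your image:

     ```shell
     # linuxserver.io image -- the web root lives under /config
     docker exec -it nextcloud php /config/www/nextcloud/occ status
     # official nextcloud image -- run as www-data from /var/www/html
     docker exec -u www-data -it nextcloud php occ status
     ```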
  22. Ignore that, just noticed you’ve already tried that!