The Engineer's Workshop: https://engineerworkshop.com


  1. We're really only watching for close_write, so the fact that you're seeing this implies that VLC is opening the file as if it intended to write to it. My bet is that it's related to this: VLC drop linux FS-Cache when playing (#27848) · Issues · VideoLAN / VLC · GitLab. Specifically this line: Try updating VLC and let us know how that goes. Update (5/23/23): It doesn't appear this will be fixed until 3.0.19 (the current release is 3.0.18), so you may have to wait for VLC to release 3.0.19 or seek out an alpha/beta build.
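The open-for-write behavior described above can be demonstrated directly. This is a minimal, Linux-only sketch (it calls inotify through ctypes against glibc, since Python's stdlib has no inotify binding; the watched directory and file name are throwaway temp paths, not real Unraid shares):

```python
# Merely opening a file with write intent and then closing it fires
# IN_CLOSE_WRITE, even if no bytes are ever written -- which is why a
# player that opens media O_RDWR trips a close_write watch.
import ctypes
import os
import struct
import tempfile

libc = ctypes.CDLL("libc.so.6", use_errno=True)
IN_CLOSE_WRITE = 0x00000008

watch_dir = tempfile.mkdtemp()
path = os.path.join(watch_dir, "movie.mkv")
open(path, "w").close()                        # create the file before watching

ifd = libc.inotify_init1(0)
libc.inotify_add_watch(ifd, watch_dir.encode(), IN_CLOSE_WRITE)

fd = os.open(path, os.O_RDWR)                  # open writable, like VLC does...
os.close(fd)                                   # ...then close without writing

buf = os.read(ifd, 4096)                       # the event is already queued
wd, mask, cookie, name_len = struct.unpack_from("iIII", buf)
os.close(ifd)
print(bool(mask & IN_CLOSE_WRITE))             # True
```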
  2. Pull requests in:
     Implement auto-hashing for cache-to-disk moves by Torqu3Wr3nch · Pull Request #79 · bergware/dynamix (github.com)
     Fix Automatic Checksums - Don't exclude everything when nothing is set to be excluded. by Torqu3Wr3nch · Pull Request #81 · bergware/dynamix (github.com)
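For context, the idea behind the auto-hashing PR can be sketched like this. This is a simplified illustration of hash-verified moves, not the plugin's actual code; the file names and the copy step are made up for the demo:

```python
# Checksum the source, copy it, re-checksum the destination, and only
# delete the original once the two hashes match.
import hashlib
import os
import shutil
import tempfile

def sha256(path, chunk=1 << 20):
    """Stream a file through SHA-256 in fixed-size chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

src_dir, dst_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
src = os.path.join(src_dir, "file.bin")
with open(src, "wb") as f:
    f.write(os.urandom(1 << 16))

before = sha256(src)
dst = shutil.copy2(src, dst_dir)               # the "cache-to-disk" move
if sha256(dst) == before:                      # verify before deleting
    os.remove(src)
print(os.path.exists(dst) and not os.path.exists(src))   # True
```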
  3. @bonienl I ran into the same problem when I attempted to implement this plugin with the "automatic" functionality today. I believe I have figured out the problem (I actually found two problems; I've already submitted the PR for one), and I have the automatic functionality working as expected on my Unraid development server as we speak. I just wanted to do some further testing before I submit the additional PR. tl;dr: Focus on your 6.12 release. You can take this off your mind. I'll submit the PR for your review. We appreciate your work!
  4. You're right:

         # Perform the clearing
         logger -tclear_array_drive "Clear an unRAID array data drive v$version"
         echo -e "\rUnmounting Disk ${d:9} ..."
         logger -tclear_array_drive "Unmounting Disk ${d:9} (command: umount $d ) ..."
         umount $d
         echo -e "Clearing Disk ${d:9} ..."

     Thank you!
  5. While preparing to shrink my array, I had a question that I didn't see addressed on the wiki or in @RobJ's zeroing-script post: if we are zeroing a data disk, shouldn't we disable the mover? Otherwise, don't we run the risk of data loss if the mover runs and moves data onto the disk being zeroed?
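One cheap safeguard before kicking off a zeroing script is to confirm the disk really is empty, i.e. that the mover hasn't deposited anything on it in the meantime. A hypothetical check (the helper name is mine, and the demo uses a throwaway temp directory rather than an actual Unraid disk mount):

```python
# Walk the mount point and refuse to proceed if any regular file exists.
import os
import tempfile

def is_empty(mount):
    """Return True if no regular files exist anywhere under mount."""
    for _, _, files in os.walk(mount):
        if files:
            return False
    return True

# quick self-check with a throwaway directory standing in for /mnt/diskN
demo = tempfile.mkdtemp()
print(is_empty(demo))                          # True
open(os.path.join(demo, "moved-by-mover"), "w").close()
print(is_empty(demo))                          # False: don't zero this disk
```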
  6. Can you help me run through the following scenarios?
     1) Parity 1 fails: This one seems obvious. The data drives are still intact, so let the parity 2 build run through to completion.
     2) Parity 2 fails: Parity never existed on it in the first place, so who cares.
     But the loss of a data drive seems less obvious to me: What happens if a data drive fails? Does the parity 2 rebuild continue to run, or does it stop automatically? If it doesn't stop automatically, aren't we now calculating parity 2 from a corrupt disk, thus corrupting parity 2? Do we stop the parity 2 calculation? Or do we go back into New Config, remove Parity 2 from circulation, check the "Parity is already valid" box, restart the array, stop the array again, power down, pull the failed data drive, install a new data drive, power on, assign the new data drive, and rebuild the data drive? Sorry for the barrage of questions; I really value your input.
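The "corrupting parity 2" worry can be made concrete with simple XOR parity (the scheme Unraid uses for its first parity drive; parity 2 uses a different calculation, but the principle carries over). This one-byte toy example, with integers standing in for disks, shows that recomputing parity from a bad disk bakes the bad data into the new parity, while the old parity could still have recovered the disk:

```python
# Three one-byte "disks" plus XOR parity.
d1, d2, d3 = 0x10, 0x20, 0x30
parity = d1 ^ d2 ^ d3                          # original, good parity

d2_bad = 0x21                                  # disk 2 silently goes bad
parity_rebuilt = d1 ^ d2_bad ^ d3              # recomputed from the bad disk

print(parity != parity_rebuilt)                # True: corruption is now in parity
print(parity ^ d1 ^ d3 == d2)                  # True: the OLD parity still recovers d2
```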
  7. That's a good suggestion. So split step 3 into two phases: 1) After zeroing, save the new config with the zeroed data drive removed and "Parity is valid" checked, and then 2) save New Config again with the removed drive set as parity 2. Parity 2's contents (and only parity 2's) rebuild. Do I have that right? Now let's talk emergency/recovery procedures: in the event that I have a drive failure while parity 2 is rebuilding, what would be the best procedure from there? I like having contingency plans in place. That's why I have already taken two backups, one offsite and one rsync'd to an external drive, but the answers to these questions help me build a mental model of how Unraid works so I can respond without a flowchart (and it may be helpful for other users in the future). Thanks again!
  8. So I have a large surplus of storage space that I will likely never use (<1.5% utilization). Because of this, I'd like to remove one data disk from my array and use it as a second parity drive. My plan is to do the following:
     1) Migrate the drive's contents to the other data disks using unBalance.
     2) Zero the drive with the script found here: Additional Scripts For User.Scripts Plugin - Plugin Support - Unraid
     3) Stop the array and create a new array configuration. The (now zeroed) data drive will be removed from the data pool and added as the second parity drive in the same step.
     Main request:
     1. Can someone please review this for accuracy/safety?
     Secondary questions:
     2. Keeping in mind that I am adding a second parity drive whose parity has to be built, does the zeroing step help me at all? Would I still be protected by the first parity drive while parity 2 is being built?
     3. I assume I cannot check the "Parity is already valid" box because, even though parity is valid for the first parity disk, it isn't for the new second parity disk, correct?
     4. Has anyone done this before? Thanks in advance!
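For what it's worth, the reason zeroing matters in a plan like this is that an all-zero disk contributes nothing to XOR parity, so parity 1 stays valid when the zeroed disk is dropped from the array. A one-byte sketch (integers standing in for disks; this illustrates XOR parity only, i.e. the first parity drive):

```python
# Removing an all-zero "disk" leaves XOR parity unchanged, which is why
# "Parity is already valid" can be honored for parity 1 after zeroing.
from functools import reduce
from operator import xor

disks = [0x5A, 0x3C, 0x00]                     # last disk has been fully zeroed
parity_with = reduce(xor, disks)
parity_without = reduce(xor, disks[:-1])       # array after dropping the zeroed disk
print(parity_with == parity_without)           # True
```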
  9. OS updates (IIRC) come from AWS. Best of luck keeping track of those. Do you have a specific concern with allowing Unraid to initiate the outgoing connection and then allowing the established return traffic?
  10. Hey everyone, I recently had a permissions issue where an rsync backup to an NFS share in Unraid failed. It ended up being due to an ACL on that particular directory (in fact, it's on the whole share). I can't rule out that something outside of Unraid itself that accesses that directory set it, but is anyone else seeing something similar? It seems to be new since the latest RC, but that could just be a coincidence. Thanks in advance!
  11. It's an excellent point. Unfortunately, I don't think there's going to be any way around checking in layers, since none of these checks is absolutely foolproof. There are just too many ways the library can be hidden/obfuscated. That's why I recommend following it all up with the scan at the end, even if the other tests come back negative. If it's positive, then you know you're affected; if it's negative, you can at least be reasonably confident that you aren't. And then you're also not relying entirely on a developer/Docker maintainer who may not even be aware of the dependencies they're using themselves. Can I get your opinion on something? I am considering replacing step 1 ("the quick and easy way") with an even quicker and easier way that has a few more automated checks. The problem is that it uses a remote script:
      wget https://raw.githubusercontent.com/rubo77/log4j_checker_beta/main/log4j_checker_beta.sh -q -O - | bash
      I trust the remote script: it's clearly visible what it's doing, and it's basically doing the same checks (but automated, and it includes your Java install check). But what is your opinion on recommending this to other Unraid users? Would you yourself be comfortable with it?
  12. It's a good starting point, so I should probably at least mention it, thanks. By itself, though, it isn't completely accurate and thus not sufficient to check for the vulnerability:
  13. For what it's worth, I wrote a quick guide on testing your Unraid server + Docker containers against the log4j/log4shell vulnerability if you want to independently verify: Log4j for Dummies: How to Determine if Your Server (or Docker Container) Is Affected by the Log4Shell Vulnerability. The guide is still very much a work in progress as the situation is evolving, but when tested against a container with known vulnerabilities, it flagged it as such. In the interest of getting a guide into y'all's hands as soon as possible, I prioritized testing and writing the guide. As a result, I have not yet created any templates for Community Apps (let me know if anyone would like to collaborate with me), so you'll have to deploy the scanner container manually. Any feedback or additions anyone would recommend for the guide would be very much welcome. -TorqueWrench
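One layer of that kind of check can be sketched as a directory walk that flags archives bundling JndiLookup.class (the class present in vulnerable log4j 2.x builds). This is a simplified, hedged version of the idea, not the guide's actual scanner; notably, it does not recurse into jars nested inside other jars, so a clean result here is not proof you're unaffected:

```python
# Flag .jar/.war/.ear files that contain JndiLookup.class anywhere in
# their entry list. Unreadable or corrupt archives are skipped.
import os
import zipfile

def find_jndilookup(root):
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith((".jar", ".war", ".ear")):
                path = os.path.join(dirpath, name)
                try:
                    with zipfile.ZipFile(path) as z:
                        if any(n.endswith("JndiLookup.class") for n in z.namelist()):
                            hits.append(path)
                except zipfile.BadZipFile:
                    pass                        # not a readable zip; skip it
    return hits
```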
  14. Nope, I haven't noticed any ill effects so far. Unraid's umask seems to be wide open (checked by running umask, which returns 000), so I really don't understand what could've changed. Reviewing this stuff does make me wonder again whether I should be doing more for ransomware protection...
  15. That's what I ended up doing right after I posted that! Glad to have the independent verification. Thanks! Honestly, I'm not sure why this hasn't always been a problem- rclone says its default umask is "2", but other sources say this is really 0022, which is what I'm seeing. I wonder what changed with rc2. Could it be related to this "Fixed issue in User Share file system where permissions were not being honored."? Unraid OS version 6.10.0-rc2 available - Prereleases - Unraid While troubleshooting this, I also used this as an opportunity to update how I mount rclone. I passed the uid and gid arguments to mount as "nobody" and "users" (UID 99; GID 100). I might go back to just mounting as root again. Thoughts?
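On the umask question above, a quick way to see what a 0022 umask actually does to newly created files (this uses a throwaway temp directory, not an rclone mount):

```python
# A 0022 umask clears the group/other write bits from whatever mode a
# program requests at create time: a requested 0666 lands on disk as 0644.
import os
import stat
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "f")

old = os.umask(0o022)                          # the 0022 default reported above
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
os.close(fd)
os.umask(old)                                  # restore the previous umask

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                               # 0o644
```

A 000 umask, by contrast, masks nothing, which matches the "wide open" behavior described in the earlier post.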