falconexe

  1. I have updated my ALL Disk scripts. There are 2 copies attached: one does not suppress errors, and the other one does.

     Updates:
       • Audit Timestamps Now in YYYY-MM-DD Format (Was YYYY-DD-MM in Error Before)
       • Output Results Folder Now in YYYY-MM-DD Format, With Dashes (Was in YYYYMMDD Format Before, Without Dashes)

     Attached: UNRAID_Audit_All_Disks_NullErrors.bat, UNRAID_Audit_All_Disks.bat
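For anyone adapting the scripts outside of batch, the dated output-folder naming described above can be sketched in bash. The path below is a placeholder, not one of the variables from the .bat files:

```shell
# Build a results folder named for today's date in YYYY-MM-DD form,
# creating it only if it does not already exist (mirrors the batch logic).
current_date=$(date +%Y-%m-%d)    # e.g. 2020-01-19
output_path="/tmp/unraid_audit"   # placeholder path for the demo
mkdir -p "$output_path/$current_date"
echo "Audit results will land in: $output_path/$current_date"
```

`mkdir -p` is the bash counterpart of the "only create it if it does not already exist" check: it succeeds silently when the folder is already there.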
  2. I've been using the "Integrity" plugin for over a year. No bit rot yet...
  3. Yep that worked. Thanks.

     ```
     Echo Performing Audit on Disk: 1...
     DIR /s /b "\\%ServerName%\disk1" 2>NUL 1>"%OutputPath%\%CurrentDate%\%FileNameRaw%"
     DIR /s "\\%ServerName%\disk1" 2>NUL 1>"%OutputPath%\%CurrentDate%\%FileNameMeta%"

     FOR /L %%i IN (2,1,%DataDisks%) DO (
         Echo Performing Audit on Disk: %%i...
         DIR /s /b "\\%ServerName%\disk%%i" 2>NUL 1>>"%OutputPath%\%CurrentDate%\%FileNameRaw%"
         DIR /s "\\%ServerName%\disk%%i" 2>NUL 1>>"%OutputPath%\%CurrentDate%\%FileNameMeta%"
     )
     ```
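The `2>NUL` redirection is what silences the error lines: stream 2 (stderr) is discarded while stream 1 (stdout) still goes to the output file. A minimal bash analogue of the same idea, with throwaway paths invented for the demo:

```shell
# stdout (stream 1) goes to the results file; stderr (stream 2) is discarded,
# just like "2>NUL 1>file" in the batch script. One of the two paths listed
# here deliberately does not exist, so ls emits an error on stderr.
ls /etc /no/such/dir 1>/tmp/audit_raw.txt 2>/dev/null || true
echo "Listing saved to /tmp/audit_raw.txt; the 'No such file' error never printed."
```

The command still returns a nonzero exit status when it hits a bad entry, so a wrapper script can detect the failure even though nothing reaches the screen.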
  4. So I forgot to mention that the "type" parameter can be set to the following:

     File:
     ```
     find . -type f -ls | grep ' 1969 '
     ```

     Directory:
     ```
     find . -type d -ls | grep ' 1969 '
     ```

     I have seen both files and folders have the missing creation date attribute, where it defaults to 1969. Here is some more info on the subject: https://www.a2hosting.com/blog/whats-the-deal-with-12-31-1969/
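A quick way to see the `-type` switch in action without touching real array data: build a scratch directory containing one file and one folder with pre-epoch timestamps (the names below are invented for the demo; GNU `touch -d` accepts pre-1970 dates on Linux):

```shell
# Scratch area with one file and one directory whose mtimes are set back
# to 1969, so the audit commands have something to flag.
demo=$(mktemp -d)
mkdir "$demo/old_dir"
touch "$demo/old_file"
touch -d '1969-12-31 12:00:00' "$demo/old_dir" "$demo/old_file"

cd "$demo"
echo "--- files stuck in 1969 ---"
find . -type f -ls | grep ' 1969 '      # should flag ./old_file
echo "--- directories stuck in 1969 ---"
find . -type d -ls | grep ' 1969 '      # should flag ./old_dir
```

`-type f` restricts the walk to regular files and `-type d` to directories, which is why both variants are needed to catch everything.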
  5. No worries. I have been having some weird stuff with my scripts too. The Meta audit sometimes runs into 1969 files that are missing the creation date on the folder. When this is encountered, it outputs "The parameter is incorrect." into my screen output. I spent many hours last night touching files to reapply these attributes.

     I was able to find the erroneous files and folders by going to each disk share and running the following command in the UNRAID terminal:

     ```
     find . -type f -ls | grep ' 1969 '
     ```

     Then you "touch" the files and it should fix the issue. Here is what it looks like when it happens in the script. You can see this exact output by just running the command in CMD natively, so it is not script related. Super freakin' annoying...

     I had this completely fixed last night after fixing all of the files in question. I then re-ran the audit and had clean screen output. Then this morning I ran it again and I have more of these lines on different disks, so I'm not sure it is always the "1969" issue. This does not happen on the RAW "DIR /s /b" command, only the META "DIR /s" command. Anywho, I'll keep trying to track the issue down. I hate intermittent issues! If anyone else is seeing this on their server, I would be very curious.

     Again, the scripts work perfectly. It is the actual data throwing the exception. Worst-case scenario, I would love to just suppress the error line in my output screen. Any thoughts on how to do this?
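The find-then-touch fix described above can be reproduced end to end in a scratch directory. Everything here (directory, filename) is invented for the demo:

```shell
# Create a file with a broken (pre-epoch) timestamp, confirm the audit
# command flags it, then "touch" it and confirm it comes back clean.
scratch=$(mktemp -d)
cd "$scratch"
touch -d '1969-12-31 12:00:00' rotten.txt

echo "Before fix:"
find . -type f -ls | grep ' 1969 '          # rotten.txt should show up here

touch rotten.txt                            # reset the mtime to now

remaining=$(find . -type f -ls | grep -c ' 1969 ' || true)
echo "After fix: $remaining file(s) still stuck in 1969"
```

A plain `touch` with no arguments rewrites the file's modification time to the current clock, which is exactly the "reapply these attributes" step from the post.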
  6. Dude, this is impressive. It is way more crazy and complex than I could have imagined. You definitely taught me some new techniques. And I noticed your PowerShell injection. I just finished running the audits using your script and got the EXACT same results down to number of lines in each file and the exact same size of output files down to the byte. So both of our scripts match up output-wise exactly. Peer Review Complete ha ha. Nice work!
  7. No thanks HA HA. I also dabble in PowerShell development and these work fine 😂
  8. No worries. Glad they were of some use! I'll probably go back and get the Single Disk and Share scripts updated to be fully dynamic and repost those as well. Stay tuned.
  9. Freaking Sweet. So after about 10 minutes of Google searching, I was able to solve the dynamic loop question. I can now dynamically tell the script the total number of data drives I have, and it will loop through a single block of code until completed. Work smarter, not harder! 😅

     Variable:
     ```
     SET DataDisks=28
     ```

     Code:
     ```
     Echo Performing Audit on Disk: 1...
     DIR /s /b \\%ServerName%\disk1>"%OutputPath%%FileNameRaw%"
     DIR /s \\%ServerName%\disk1>"%OutputPath%%FileNameMeta%"

     FOR /L %%i IN (2,1,%DataDisks%) DO (
         Echo Performing Audit on Disk: %%i...
         DIR /s /b \\%ServerName%\disk%%i>>"%OutputPath%%FileNameRaw%"
         DIR /s \\%ServerName%\disk%%i>>"%OutputPath%%FileNameMeta%"
     )
     ```

     Note: The IN (#,#,#) DO loop parameters map out to (Start, Step, End). So in this case, we start at disk2 since the script is now appending to the files disk1 created. We then step by 1 disk at a time. And finally, we end at the variable value of 28 (my last disk).

     I Also Added Some New Bells and Whistles:
       • Dynamic Audit Output Folder Creation Based on Current Date in YYYYMMDD Format (Only Creates This Folder If It Does Not Already Exist)
       • Audit Start/End Timestamps During Run-Time

     I have attached the FULLY DYNAMIC version of the UNRAID DISK AUDIT script. I have also fully set all text contents to variables, which will make @Keexrean happy. Anyone in the community should now be able to run this (on Windows) for any UNRAID server by simply changing out the variables at the top of the script.

     Prerequisites/Instructions:
       • Set Your Windows hosts File (Make a Backup First!)
         • Allows You to Call the UNRAID Server by Name Instead of IP (For the NET USE Command)
         • Location of the hosts File in Windows 10: C:\Windows\System32\drivers\etc\hosts
         • Open the hosts File with Notepad
         • Enter This at the Bottom of the hosts File (Change IP Address and Host Name Accordingly):
           #Custom Defined Hosts
           192.168.#.# YOURSERVERNAME
         • Save the New hosts File
       • Turn on Per-Disk Shares in UNRAID (Do This for Every Data Drive)
         • I Set My Disk Share Type To: Export: Yes (hidden); Security: Secure
         • I Set My Disk Share Access To: User: Read-only
       • Right-Click and Edit the Batch Script
       • Set Variables Accordingly
       • Sit Back and Relax, This Will Take a While...

     Use Cases:
       • Run Regularly to Have an Exact Mapping of Every File on the Array
       • Run Prior to Parity Checks
       • Run Prior to Reboots/Shutdowns
       • If You Ever Experience Data Loss, At Least You Will Know Which Files Were on That Drive!

     ENJOY!

     Updated Code 2020-01-19: UNRAID_Audit_All_Disks.bat
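The (Start, Step, End) loop logic is not batch-specific. Here is the same create-then-append disk-numbering pattern sketched in bash; the disk count, output path, and `echo` stand-ins for the `DIR` listings are all placeholders for the demo:

```shell
# Disk 1 creates the output file (>); disks 2..N append to it (>>),
# mirroring FOR /L %%i IN (2,1,%DataDisks%) in the batch script.
data_disks=4                        # assumption: 4 data disks for the demo
out="/tmp/audit_demo.txt"

echo "Performing Audit on Disk: 1..."
echo "listing of disk1" > "$out"            # stand-in for DIR /s /b \\server\disk1

for ((i = 2; i <= data_disks; i++)); do     # (start=2, step=1, end=$data_disks)
  echo "Performing Audit on Disk: $i..."
  echo "listing of disk$i" >> "$out"        # stand-in for the appending DIR calls
done

wc -l < "$out"                              # one line per disk audited
```

The key detail carried over from the batch version is the `>` versus `>>` split: only the first disk truncates the file, so a stale result from a previous run can never leak into a fresh audit.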
  10. Yeah, I just performed a full audit on my appdata share without any issues. Natively my appdata folder is about 30GB in size. For the audit results, it produced txt files with hundreds of MBs and millions of records/lines inside each (Both Raw and Meta). I have different dockers than you, so I must have ones that work without throwing exceptions. Let me know if you figure it out. Sorry you are running into issues...🤷‍♂️ I am very interested to see how you loop through a set number of disks via a single variable though. If you figure out that snippet of code, please send it my way. I can incorporate it back into my script and send it out to the masses if you are cool with that.
  11. Hmm. I don't use that docker so I am stumped. Any chance you could skip the cache share for now and get the main disk script working fully dynamically?
  12. Yeah I can see both sides. Yes, when it stops it most likely is a permissions issue. I've seen the same thing on my cache share script. Certain appdata folders can cause issues. For data disks, I usually run the "Docker Safe New Perms" script to reset permissions across the array. It is safe...and resolves similar issues, especially when uploading files to the array from OSX. You also might be running into issues if certain dockers are running and have file locks. I usually run my cache share script when all dockers are disabled and the array is stopped for that reason.
  13. Sweet. Take your time. Looking forward to seeing the finished product.