Everything posted by falconexe

  1. Thanks for sharing. Welcome to the UNRAID community!
  2. Thanks @johnnie.black @jonathanm. I ended up going with a file called ".moverignore" (much like the syntax of .plexignore). There is nothing inside the file (unless you want to put a sentence or two about what the file is for, in case you lose your memory for some reason 😂); I simply created a blank txt file, removed the name, and changed the extension. The period in front keeps it hidden in Windows *if* you choose to hide hidden files (I don't...).

     First, I created the files in the source share folders at the second split level:

        \\UNRAID\Media\Subfolder1\.moverignore
        \\UNRAID\Media\Subfolder2\.moverignore
        \\UNRAID\Projects\Subfolder1\.moverignore

     I noticed that once I did this, they immediately showed up on the cache drive share (that's fine and expected). So I ran the MOVER, which then deleted the files from the cache drive share ONLY (again, as expected). Then I manually recreated the folder structure on ONLY the cache drive share:

        \\UNRAID\cache\Media\Subfolder1\.moverignore
        \\UNRAID\cache\Media\Subfolder2\.moverignore
        \\UNRAID\cache\Projects\Subfolder1\.moverignore

     Finally, I ran the mover again, and voila, the structure remained! Nothing happened, as expected. If I put a new file in any of these folders on the cache drive share, parallel to the dummy file, the mover still moves only the net new files, and the directory structure persists. So far so good. I'll continue to monitor.

     It ain't pretty, but it WORKS! Now I can write to my cache drive at the full 1 GB/s, bypassing the shfs overhead, and still have the MOVER write the files to the array/parity normally later. A server-side shell sketch of these steps follows this post. Thanks so much everyone. I'll mark this as solved with a workaround.
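     The same steps from the UNRAID terminal, as a minimal sketch. It assumes the standard /mnt/user and /mnt/cache mount points and the example share/subfolder names above; substitute your own:

        # Create the dummy files through the user shares (new files land on cache
        # first, since the shares are set to Cache: YES):
        touch /mnt/user/Media/Subfolder1/.moverignore \
              /mnt/user/Media/Subfolder2/.moverignore \
              /mnt/user/Projects/Subfolder1/.moverignore

        # After the mover has pushed them to the array, recreate the skeleton
        # directly on the cache disk so the structure persists between runs:
        mkdir -p /mnt/cache/Media/Subfolder1 \
                 /mnt/cache/Media/Subfolder2 \
                 /mnt/cache/Projects/Subfolder1
        touch /mnt/cache/Media/Subfolder1/.moverignore \
              /mnt/cache/Media/Subfolder2/.moverignore \
              /mnt/cache/Projects/Subfolder1/.moverignore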
  3. Thanks everyone. What you describe is EXACTLY what I did per my first post in order to hit a solid 1 GB/s sustained upload: I was writing files directly to the cache disk share instead of the Media share. Though this works, it is not ideal for one reason. Once the Mover completes (mine runs nightly), it deletes the folder structure (schema) of the Media share and all sub-folders of anything that is successfully FULLY moved to the array. Therefore, it would be a huge pain in the butt to have to recreate the folder structure (top 2 levels) on the cache drive direct share DAILY, and then copy the content over.

     I run a 2-folder-deep split level on this share. For instance, if I create the following directory structure on the cache disk share, I can keep dropping files into it UNTIL the mover runs. Once the mover completes, it destroys this architecture.

        Before Mover: \\UNRAID\cache\Media\RawFootage
        After Mover:  \\UNRAID\cache\DELETED_BY_MOVER\DELETED_BY_MOVER

     I know this is working as designed, but in my opinion it is not a REAL workaround for the 2/3 loss in network throughput described above, because it is not a perpetual solution. Maybe I need to just get over this and move on. Let me know if I have missed something, or if I am correct.

     @testdasi Is this what you deal with too? Or have you solved this part? If the Mover moved all files to the array BUT DID NOT DELETE the top 2 levels of the folder structure (on the cache disk share), this WOULD work for me as an accepted solution. Any thoughts?
  4. So I have X2 actual socketed CPUs, each having 8 Physical Cores with HyperThreading (HT). That gives me 16 Physical Cores and 16 Virtual Cores (HT) for a total of 32 "Cores". What I see on your screen appears normal, and UNRAID's GUI is working as designed. If you had a total of 64 Physical Cores (X4 Socketed CPUs, Each With X16 Physical Cores), then I would expect a total of 128 entries including HT on your CPU screen. You can double-check the topology from the terminal; see the sketch after this post.

     Also, I am not even sure what motherboard supports X4 CPUs, so let us know your exact overall specs including motherboard, processors, and chassis. I am very interested. I run a very ROBUST and near top-of-the-line (expensive) server, and I am always interested in other people's builds. Sounds like you have a very sweet setup!

     Here is My UNRAID CPU Screen (Notice it Only Calls Out X1 Processor by Name Even Though There are 2):
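     A minimal sketch of that terminal check (lscpu comes from util-linux and should be available on UNRAID):

        # Total logical CPUs should equal Sockets x Cores-per-socket x Threads-per-core:
        lscpu | grep -E '^(CPU\(s\)|Thread\(s\)|Core\(s\)|Socket\(s\))'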
  5. @testdasi So I "share" my actual Cache drive directly (\\UNRAID\Cache) and this is how I was able to get true 1 GB/s transfers. However, this gets dicey with writing directly to the cache disk, the mover handling, and the actual native share schema itself. Sure, I could create the entire folder path for the content I want to write directly to cache, but this is a ton of work and, from what I have read over the years, UNSAFE. I'd rather just go to my share and also see the parallel files in that share (the entire share contents).

     Is there a way to integrate proper Share Handling while "exposing" the Cache drive? For instance, can I open my "Media" share, write to it normally (to Cache), and obtain the speeds I'm looking for? Can you elaborate on exactly how you accomplished this? What exactly is in your custom SMB config that allows this? I have no issues with exposing my Cache in a different way to solve this issue. Thanks so much for the info!

     Here is my current SMB config:

        store dos attributes = yes

        #unassigned_devices_start
        #Unassigned devices share includes
        include = /tmp/unassigned.devices/smb-settings.conf
        #unassigned_devices_end

        #vfs_recycle_start
        #Recycle bin configuration
        [global]
        syslog only = No
        log level = 0 vfs:0
        #vfs_recycle_end

        #Prevent OSX Files From Showing Up
        veto files = /._*/.DS_Store/.AppleDouble/.Trashes/.TemporaryItems/.Spotlight-V100/
        delete veto files = yes

        #Added Security Settings
        min protocol = SMB2
        client min protocol = SMB2
        client max protocol = SMB3
        guest ok = no
        null passwords = no

        #Speed Up Windows File Browsing
        case sensitive = yes
  6. Thanks for responding @johnnie.black. Why is there any overhead at all when writing to CACHE (a cached share in this case) when parity is not being written on the array until later via the MOVER? What can be costing me almost 700 MB/s of throughput? I have tried this same test with DOCKER disabled and minimal services running. As you can see from my signature, we are running a seriously ROBUST server. If this is just "known" overhead, I may return the roughly $400 I put into this NVMe cache setup, including the PCIe adapters, etc. In my case, there is literally ZERO benefit over a standard SATA6 SSD if this is expected behavior. I have seen PLEX run MUCH more quickly with appdata on this NVMe, so there's that. Overall, I am super disappointed if this is par for the course. We run a media production company, and quickly offloading TBs of raw footage is critical to our workflow.

     @limetech Tom, this is the first time I have reached out to you directly in the 6 years that I have owned UNRAID. Do you have any thoughts or suggestions to navigate this issue? Is there any technical reason why UNRAID cannot support what I am asking? It appears that I have the proper hardware (Professional IT guy), and that this is a software issue or a limitation of UNRAID itself. Perhaps I have missed something? Thanks so much for your help. This is 1 of 2 massive servers we operate. We seriously LOVE UNRAID, and we have one of the largest single arrays out there.
  7. You guys ever figure this out? I noticed that in most of your pics, you top out around ~300 MB/s uploading to UNRAID. I just installed an NVMe PCIe X4 SSD cache drive and there is NO IMPROVEMENT: I am stuck at the same speeds I saw coming from a standard SATA6 SSD cache drive. However, I am able to fully saturate my 10GbE NIC with sustained 1 GB/s writes under one very specific scenario. Please let me know. I would like to get this figured out once and for all. My post is below. Thanks!
  8. You guys ever figure this out? I am in the same boat. I just posted a new topic regarding this issue, and I may have found the CAUSE. However, I am able to fully saturate my 10GbE NIC with sustained 1 GB/s writes under one very specific scenario. Please see below, and feel free to stop by my post and saturate that LOL. I REALLY want to get this fixed, the correct way. Thanks everyone!
  9. So here is the full cache drive disk log. It does appear that DISK CACHING IS ON at the tail end of the log. Sorry about all of the posts in a row... I'm just trying to present as much info as possible so everyone can assist. In the meantime, I am going to fully power down and test again to see if there is any change. I'm guessing not, but you never know LOL.

     EDIT: Reboot had NO effect. Welp...
  10. Here is what I can find in the WIKI regarding speed and how it works. Keep in mind I have been using UNRAID avidly since 2014. I consider myself a pretty advanced user, and I have had a cache disk for a very long time. I have to imagine that what is causing this issue is either something really dumb and basic, or something really technical and beyond my level of expertise. I'm looking forward to anyone's feedback. Thanks again in advance! Cache Wiki Page: https://wiki.unraid.net/Cache_disk
  11. Also, I have Direct IO set to "YES". I have also been reading a lot about WRITE CACHE. Since this drive is PCIe and not SATA/SAS, I am unable to check whether it is on with the usual tools. I am also not sure it would even matter in the test I did, since writing DOES work at full speed in one scenario. (A possible way to check it on NVMe is sketched after this post.)
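      For NVMe devices, the volatile write cache is exposed through the NVMe feature set rather than hdparm. A hedged sketch using nvme-cli, assuming it is available on the server and the cache drive is /dev/nvme0:

         # Feature ID 0x06 is Volatile Write Cache; -H prints a human-readable decode:
         nvme get-feature /dev/nvme0 -f 0x06 -H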
  12. Former Title: "1TB NVMe PCIe Cache & 10 GBe NIC = Very Odd Network Issue". Renamed for Better Searchability and Tutorial Purposes.

      Hello, I have a 10GbE peer-to-peer network connection with a dedicated Windows 10 PC (CAT6A at around 75 feet). Both machines have the exact same ASUS XG-C100C NIC. I had been running a 1TB SSD as my primary Cache drive until recently. Back then I was peaking around 300 MB/s on writes to the Cache (not the array). Though these drives peak at 500 MB/s writes, I was not too concerned about losing 200 MB/s; I figured it was some kind of bottleneck or overhead. I have my MTU set to 9014 on both ends, and I have fine-tuned my NIC like crazy on the Windows side; it is fully optimized for this work.

      Today, I installed a Samsung 970 EVO Plus 1TB NVMe PCIe X4 SSD as my primary cache drive. I have 2 of these in my desktop and I peak at 3350 MB/s writes from disk to disk. That being said, with my 10GbE NIC, which should peak around 1250 MB/s fully saturated, and my new NVMe cache drive having a higher throughput than my NIC (it is not a bottleneck), I assumed I would be hitting 1 GB/s transfers uploading files to my UNRAID server. Well... there is something VERY ODD about my outcome. I do and I don't. 😆

      If I write to the share, which really lands on Cache (Cache is set to "YES" on this Share), I lose 2/3 of my throughput and peak at a sustained ~300 MB/s:

         \\UNRAID\Share

      If I write directly to the Cache drive (only did this as a test), to the same exact folder path, I suddenly hit a rock-solid 1 GB/s sustained write speed as expected:

         \\UNRAID\Cache\Share

      WHAT AM I MISSING? How is this even happening? 🙄 I would certainly expect some kind of performance drop if I were writing directly to the array, but I am writing to CACHE. This is freaking killing me. I just spent a crap ton of money to upgrade this server, and I am basically in the same place as a standard SSD. And now I'm thinking that if I had written to Cache directly using the old SSD cache drive as a test, I would have hit its peak 500 MB/s after all, not to mention I saw the same ~300 MB/s max write speeds on that drive.

      This is clearly a software issue. The same hardware is being used writing the same file in both instances; *HOW* the file is being written is the only difference. Technically, SMB/UNRAID is agnostic in the sense that it presents the folder/file on the SMB share path and does not delineate whether it is actually sitting on the cache drive or the array; it just serves up the file. However, WINDOWS or UNRAID certainly behaves differently via SMB if I explicitly direct the file to just the share (via cache) or to the cache\share directly. (A local benchmark that isolates this difference is sketched after this post.) I know you SHOULD NOT write to Cache directly or it can mess stuff up. So my question is: HOW DO I GET FULL THROUGHPUT while writing to Cache the proper way? Please help! Thanks everyone!
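      One way to confirm the bottleneck is the user-share (shfs/FUSE) layer rather than the network or the NVMe itself is to benchmark both paths locally on the server, taking SMB and the NIC out of the picture entirely. A minimal sketch from the UNRAID terminal, assuming a cache-enabled share named Media; clean up the test file afterwards:

         # Write through the user share (goes through the shfs/FUSE layer):
         dd if=/dev/zero of=/mnt/user/Media/ddtest.bin bs=1M count=8192 conv=fdatasync

         # Write the same data via the cache disk directly (bypasses shfs):
         dd if=/dev/zero of=/mnt/cache/Media/ddtest.bin bs=1M count=8192 conv=fdatasync
         rm /mnt/cache/Media/ddtest.bin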
  13. All good here too! 6.8.1 > 6.8.2 😉
  14. I have updated my ALL Disk scripts. There are 2 copies attached: one does not suppress errors, and the other one does.

      Updates:
        • Audit Timestamps Now in YYYY-MM-DD Format (Was YYYY-DD-MM in Error Before)
        • Output Results Folder Now in YYYY-MM-DD Format, With Dashes (Was in YYYYMMDD Format, Without Dashes)

      UNRAID_Audit_All_Disks_NullErrors.bat
      UNRAID_Audit_All_Disks.bat
  15. I've been using the "Integrity" plugin for over a year. No bit rot yet...
  16. Yep that worked. Thanks.

         Echo Performing Audit on Disk: 1...
         DIR /s /b "\\%ServerName%\disk1" 2>NUL 1>"%OutputPath%\%CurrentDate%\%FileNameRaw%"
         DIR /s "\\%ServerName%\disk1" 2>NUL 1>"%OutputPath%\%CurrentDate%\%FileNameMeta%"

         FOR /L %%i IN (2,1,%DataDisks%) DO (
             Echo Performing Audit on Disk: %%i...
             DIR /s /b "\\%ServerName%\disk%%i" 2>NUL 1>>"%OutputPath%\%CurrentDate%\%FileNameRaw%"
             DIR /s "\\%ServerName%\disk%%i" 2>NUL 1>>"%OutputPath%\%CurrentDate%\%FileNameMeta%"
         )
  17. So I forgot to mention that the "type" parameter can be set to the following:

         File:      find . -type f -ls | grep ' 1969 '
         Directory: find . -type d -ls | grep ' 1969 '

      I have seen both files and folders have the missing creation date attribute where it defaults to 1969. Here is some more info on the subject: https://www.a2hosting.com/blog/whats-the-deal-with-12-31-1969/
  18. No worries. I have been having some weird stuff with my scripts too. The Meta audit sometimes runs into "1969" files that are missing the creation date attribute on the folder. When this is encountered, it outputs "The parameter is incorrect." into my screen output. I spent many hours last night touching files to reapply these attributes. I was able to find the erroneous files and folders by going to each disk share and running the following command in the UNRAID terminal:

         find . -type f -ls | grep ' 1969 '

      Then you "touch" the files, and that should fix the issue (see the sketch after this post). Here is what it looks like when it happens in the script. You can see this exact output by just running the command in CMD natively, so it is not script related. Super freakin annoying...

      I had this completely fixed last night after fixing all of the files in question. I then re-ran the audit and had clean screen output. Then this morning I ran it again and I have more of these lines on different disks, so I am not sure it is always the "1969" issue. This does not happen with the RAW "DIR /s /b" command, only the META "DIR /s" command. Anywho, I'll keep trying to track the issue down. I hate intermittent issues! If anyone else is seeing this on their server, I would be very curious.

      Again, the scripts work perfectly; it is the actual data throwing the exception. Worst case scenario, I would love to just suppress the error line in my output screen. Any thoughts on how to do this?
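      A hedged sketch of the touch step, run from the root of a disk share in the UNRAID terminal. It assumes GNU find's -newermt extension (UNRAID ships GNU findutils); touch re-stamps each match with the current time:

         # Re-stamp files and folders whose mtime predates the epoch
         # (these are the entries that list as 1969), in one pass each:
         find . -type f ! -newermt '1970-01-01 00:00:00' -exec touch {} +
         find . -type d ! -newermt '1970-01-01 00:00:00' -exec touch {} +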
  19. Dude, this is impressive. It is way more crazy and complex than I could have imagined. You definitely taught me some new techniques. And I noticed your PowerShell injection. I just finished running the audits using your script and got the EXACT same results down to number of lines in each file and the exact same size of output files down to the byte. So both of our scripts match up output-wise exactly. Peer Review Complete ha ha. Nice work!
  20. No thanks HA HA. I also dabble in PowerShell development and these work fine 😂
  21. No worries. Glad they were of some use! I'll probably go back and get the Single Disk and Share scripts updated to be fully dynamic and repost those as well. Stay tuned.
  22. Freaking Sweet. So after about 10 minutes of Google searching, I was able to solve the dynamic loop question. I can now dynamically tell the script the total number of data drives I have and it will loop through a single block of code until completed. Work smarter, not harder! 😅

      Variable:

         SET DataDisks=28

      Code:

         Echo Performing Audit on Disk: 1...
         DIR /s /b \\%ServerName%\disk1>"%OutputPath%%FileNameRaw%"
         DIR /s \\%ServerName%\disk1>"%OutputPath%%FileNameMeta%"

         FOR /L %%i IN (2,1,%DataDisks%) DO (
             Echo Performing Audit on Disk: %%i...
             DIR /s /b \\%ServerName%\disk%%i>>"%OutputPath%%FileNameRaw%"
             DIR /s \\%ServerName%\disk%%i>>"%OutputPath%%FileNameMeta%"
         )

      Note: The IN (#,#,#) DO loop parameters map out to (Start, Step, End). So in this case, we start at disk2 since the script is now appending to the files disk1 created. We then step by 1 disk at a time. And finally, we end at the variable value of 28 (my last disk).

      I Also Added Some New Bells and Whistles:
        • Dynamic Audit Output Folder Creation Based on Current Date in YYYYMMDD Format (Only Creates This Folder If It Does Not Already Exist)
        • Audit Start/End Timestamps During Run-Time

      I have attached the FULLY DYNAMIC version of the UNRAID DISK AUDIT script. I have also fully set all text contents to variables, which will make @Keexrean happy. Anyone in the community should now be able to run this (on Windows) for any UNRAID server by simply changing out the variables at the top of the script.

      Prerequisites/Instructions:
        • Set Your Windows hosts File (Make a Backup First!)
            Allows You to Call the UNRAID Server by Name Instead of IP (For the NET USE Command)
            Location of hosts File in Windows 10: C:\Windows\System32\drivers\etc\hosts
            Open the hosts File with Notepad and Enter This at the Bottom (Change IP Address and Host Name Accordingly):
               #Custom Defined Hosts
               192.168.#.# YOURSERVERNAME
            Save the New hosts File
        • Turn on Per-Disk Shares in UNRAID (Do This For Every Data Drive)
            I Set My Disk Share Type To: Export: Yes (hidden), Security: Secure
            I Set My Disk Share Access To: User: Read-only
        • Right-Click and Edit the Batch Script, Then Set Variables Accordingly
        • Sit Back and Relax, This Will Take a While...

      Use Cases:
        • Run Regularly to Have an Exact Mapping of Every File on the Array
        • Run Prior to Parity Checks
        • Run Prior to Reboots/Shutdowns
        • If You Ever Experience Data Loss, At Least You Will Know Which Files Were On That Drive!

      ENJOY!

      Updated Code 2020-01-19
      UNRAID_Audit_All_Disks.bat
  23. Yeah, I just performed a full audit on my appdata share without any issues. Natively my appdata folder is about 30GB in size. For the audit results, it produced txt files with hundreds of MBs and millions of records/lines inside each (Both Raw and Meta). I have different dockers than you, so I must have ones that work without throwing exceptions. Let me know if you figure it out. Sorry you are running into issues...🤷‍♂️ I am very interested to see how you loop through a set number of disks via a single variable though. If you figure out that snippet of code, please send it my way. I can incorporate it back into my script and send it out to the masses if you are cool with that.