Posts posted by falconexe

  1. 29 minutes ago, johnnie.black said:

    Those 14TB Exos are CMR.

    That's what I thought when I bought them, but these lines from that article are making me question that. I do believe you though...

     

    "The Archive drives are for archiving and Exos drives are optimised for maximum storage capacity and the highest rack-space efficiency. Seagate documentation for the Exos and Archive HDDs explicitly spells out that they use SMR."

     

     

  2. 9 minutes ago, falconexe said:

    I've since been buying 14TB Exos (ST14000NM0018-2H4101). These are doing really well!

    Well crap. According to this website, these ARE SMR. Nowhere in the product documentation is this mentioned, either. Welp, I guess I'll go back to WD or IronWolf next time. 🤷‍♂️

     

    https://blocksandfiles.com/2020/04/15/seagate-2-4-and-8tb-barracuda-and-desktop-hdd-smr/

     

    https://www.seagate.com/www-content/datasheets/pdfs/exos-x-14-channel-DS1974-4-1812US-en_US.pdf

  3. On 1/23/2019 at 6:21 PM, wheel said:

    So last month, I started having some issues writing to certain 8TB drives, but never took note of which specifically since no parity errors ever popped up before or after checks.  I was looking into adding more 8TBs to a 19-disk array this week when I realized I’ve been adding ST8000DM004-2CX188s left and right over the past couple of years.

     

    SIX disks in a nineteen-disk array (not counting parity) are SMR drives (one is ST8000AS).

    Ha ha. I've got 10 out of 30 of these. They've been solid for 2-3 years so far. SMR is trash, though. At the time, it was the best TB/$ out there, and I've been very lucky. I've since been buying 14TB Exos (ST14000NM0018-2H4101). These are doing really well! Eventually I'll be upgrading 1/3 of my server just to get rid of these SMR liabilities. Good luck!

  4. Hey everyone. I have been able to get true 10GbE read/write speeds directly to the cache drive to/from a Windows 10 client. However, if you are trying to write to the array shares (on cache), you may be running into "known" SHFS overhead. I dealt with this in a very unique way a few months back. I will try some of these additional SMB settings you guys mentioned (my config is also in the below thread) and report back.

     

    If one of us figures this out, I would LOVE to know. My workaround works, but it is kind of a PITA. The gold standard for me is true 10GbE writes to the cache using the native shares, not the cache disk share itself.

     

    I also have Ubiquiti equipment, but I am actually running a direct CAT7 NIC-to-NIC 10GbE setup for this purpose. I built a new house and put in a second dedicated network drop from my main client to UNRAID. I also have a fully dedicated server room for my network equipment, servers, and Control4 stuff. I also added a dedicated Carrier HVAC ceiling unit, so the room is a cool 68°F at all times. This massive server is running at about 20°C across all 30 of my drives, even during parity checks 🥶😅.

     

    Anyway, hopefully the below helps! I'll keep checking back for any news.

     

     

  5. Let us know how it goes and report back.

     

    Also, can you provide the make/model number/year of that specific MacBook, and the same for the drive? I actually have a MacBook Pro 15" with Touch Bar that died recently, and I am about to do the same. Any issues ripping out the drive? Mine is actually soldered to the motherboard, so I'm not even sure I can get it out safely without a ton of work.

     

    As for hooking up drives to UNRAID's Unassigned Devices via available motherboard interfaces/USB, I have done this many times successfully, including with NVMe PCIe.

  6. On 5/12/2020 at 5:12 AM, NotSoAlien said:

     

    Exactly my thoughts. I have 4 socketed CPUs.

     

    My CPUs are AMD 6380s at 2.5GHz per Unraid.

     

    I am going to reboot the server in a bit and go into the BIOS to see if I can find any more information. I don't want to tear the case apart to look at part numbers, etc.

    EDIT: I went into the BIOS and grabbed these pictures; they show my 4 CPUs, and they all say 16 cores.

    [BIOS screenshots attached]

    Thanks for sharing. Welcome to the UNRAID community!

  7. Thanks @johnnie.black @jonathanm.

     

    I ended up going with a file called ".moverignore" (much like the syntax of .plexignore). There is nothing inside the file (unless you want to put in a sentence or two about what the crap this file is for, in case you lose your memory for some reason 😂). I simply created it from a blank txt file, changed the extension, and removed the name. The period in front keeps it hidden from Windows *if* you choose to hide hidden files (I don't...).

     

    First, I had to create all the files in the source share folders at the second split level.

     

    \\UNRAID\Media\Subfolder1\.moverignore

    \\UNRAID\Media\Subfolder2\.moverignore

    \\UNRAID\Projects\Subfolder1\.moverignore

     

     

    I noticed that once I did this, they immediately showed up on the cache drive share (that's fine and expected). So I ran the MOVER, which then deleted the files from the cache drive share ONLY (again, as expected). Then I manually recreated the folder structure on ONLY the cache drive share.

     

    \\UNRAID\cache\Media\Subfolder1\.moverignore

    \\UNRAID\cache\Media\Subfolder2\.moverignore

    \\UNRAID\cache\Projects\Subfolder1\.moverignore

     

    Finally, I ran the mover again, and voila, the structure remained! Nothing happened, as expected. If I put a new file in any of these folders on the cache drive share, parallel to the dummy file, the mover still works and moves only the net-new files, and the directory structure persists.

     

    So far so good. I'll continue to monitor. It Ain't Pretty, but it WORKS! Now I can write to my cache drive at full 1 GB/s speed and bypass the shfs overhead, but still have the MOVER do its job and write the files to the array/parity normally later!
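     

    If you'd rather script this than click around in Windows, here's roughly what it looks like from the UNRAID console (untested sketch, using the same share/subfolder names from my example above):

    #!/bin/bash
    # Recreate the dummy .moverignore files directly on the cache disk
    # so the mover leaves the folder structure alone.
    for d in Media/Subfolder1 Media/Subfolder2 Projects/Subfolder1; do
        mkdir -p "/mnt/cache/$d"           # make sure the folder exists on cache
        touch "/mnt/cache/$d/.moverignore" # empty placeholder file
    done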

     

    Thanks so much everyone. I'll mark this as solved with a workaround.

  8. 19 hours ago, johnnie.black said:

    You can still write and use the user shares normally while avoiding the overhead by writing directly to a disk share; just enable disk shares and then write to:

     

    \\tower\cache\your_share

     

    instead of

     

    \\tower\your_share

     

    Thanks everyone. So what you describe is EXACTLY what I did, per my first post, in order to hit a solid 1 GB/s sustained upload. I was writing files directly to the cache disk share instead of the Media share.

     

    Though this works, it is not ideal for one reason. Once the Mover completes (mine runs nightly), it deletes the folder structure (schema) of the Media share and all sub-folders for anything that has been successfully and FULLY moved to the array. Therefore, it would be a huge pain in the butt to have to recreate the folder structure (top 2 levels) on the cache drive's direct share DAILY, and then copy the content over.

     

    I Run a 2 Folder Deep Split Level on this Share:

     

    [Screenshot: share split-level setting]

     

     

    For instance, if I create the following directory structure on the cache disk share, I can keep dropping files into it UNTIL the mover runs. Once the mover completes, it destroys this architecture.

     

    Before Mover:

    \\UNRAID\cache\Media\RawFootage

     

    After Mover:

    \\UNRAID\cache\DELETED_BY_MOVER\DELETED_BY_MOVER

     

    I know this is working as designed, but in my opinion it is not a REAL workaround for the 2/3 loss in network throughput that I am experiencing as described above, because it is not a perpetual solution.

     

    Maybe I need to just get over this and move on. Let me know if I have missed something, or if I am correct.

     

     

    @testdasi Is this what you deal with too? Or have you solved this part?

     

    If the Mover moved all files to the array, BUT DID NOT DELETE THE TOP 2 LEVEL FOLDER STRUCTURE (on the cache disk share), this WOULD work for me as an accepted solution. Any thoughts?
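     

    In the meantime, one idea I may try: a small script scheduled to run right after the Mover that mirrors the top two directory levels of the share back onto the cache disk (untested sketch; it assumes the share is named Media and that /mnt/user0 is the array-only view of the shares):

    #!/bin/bash
    # Rebuild the top 2 levels of the Media share on the cache disk
    # after the mover has wiped them. /mnt/user0 shows only what is
    # on the array, so this copies the "real" structure back to cache.
    cd /mnt/user0/Media || exit 1
    find . -mindepth 1 -maxdepth 2 -type d | while read -r d; do
        mkdir -p "/mnt/cache/Media/$d"
    done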

     

     

  9. So I have X2 actual socketed CPUs, each having 8 Physical Cores with HyperThreading (HT). That gives me 16 Physical Cores and 16 Virtual Cores (HT) for a total of 32 "Cores". What I see on your screen appears normal and UNRAID's GUI is working as designed.

     

    If you had a total of 64 Physical Cores (X4 Socketed CPUs, Each With X16 Physical Cores), then I would expect a total of 128 entries including HT on your CPU screen.

     

    Also, I'm not even sure which motherboards support X4 CPUs, so let us know your exact overall specs, including motherboard, processors, and chassis. I am very interested. I run a very ROBUST and near top-of-the-line (expensive) server, and I am always interested in other people's builds. Sounds like you have a very sweet setup!

     

    Here is My UNRAID CPU Screen (Notice it Only Calls Out X1 Processor by Name Even Though There are 2):

     

    [Screenshot: UNRAID CPU screen]

  10. 37 minutes ago, testdasi said:

    This is shfs overhead (the Unraid share functionality). I worked around it by creating a custom smb config to expose my /mnt/cache/share.

     

    @testdasi

     

    So I "share" my actual Cache drive directly (\\UNRAID\Cache) and this is how I was able to get true 1GB/s transfers.

     

    [Screenshot attached]

     

    However, this gets dicey when it comes to writing directly to the cache disk, the mover's handling, and the actual native share schema itself. Sure, I could create the entire folder path for the content I want to write directly to cache, but this is a ton of work and, from what I have read over the years, UNSAFE. I'd rather just go to my share and also see the parallel files in that share (the entire share contents).

     

    Is there a way to integrate proper share handling while "exposing" the Cache drive? For instance, can I open my "Media" share, write to it normally (to Cache), and obtain the speeds I'm looking for?

     

    Can you elaborate on exactly how you accomplished this? What exactly is in your custom SMB config that allows this? I have no issues with exposing my Cache in a different way to solve this issue. Thanks so much for the info!
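     

    I'm guessing it's something along these lines added to the SMB extras (pure speculation on my part; the share name, path, and user are just placeholders):

    [cachemedia]
       path = /mnt/cache/Media
       browseable = yes
       writeable = yes
       valid users = youruser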

     

    Here is my current SMB config:

     

    store dos attributes = yes

    #unassigned_devices_start
    #Unassigned devices share includes

       include = /tmp/unassigned.devices/smb-settings.conf
    #unassigned_devices_end

    #vfs_recycle_start
    #Recycle bin configuration

    [global]
       syslog only = No
       log level = 0 vfs:0

    #vfs_recycle_end

    #Prevent OSX Files From Showing Up
    veto files = /._*/.DS_Store/.AppleDouble/.Trashes/.TemporaryItems/.Spotlight-V100/
    delete veto files = yes

    #Added Security Settings
    min protocol = SMB2
    client min protocol = SMB2
    client max protocol = SMB3
    guest ok = no
    null passwords = no

    #Speed Up Windows File Browsing
    case sensitive = yes

  11. Thanks for responding @johnnie.black.

     

    Why is there any overhead at all when writing to CACHE (a cache share in this case) when parity is not being written on the array until later via the MOVER? What can possibly be costing me almost 700 MB/s in overhead? I have tried this same test with DOCKER disabled and minimal services running. As you can see from my signature, we are running a seriously ROBUST server.

     

    If this is just "known" overhead, I may return the roughly $400 I put into this NVMe cache setup, including the PCIe adapters, etc. In my case, there is literally ZERO benefit over a standard SATA6 SSD if this is expected behavior.

     

    I have seen PLEX run MUCH more quickly with appdata on this NVMe, so there's that. Overall, I am super disappointed if this is par for the course. We run a media production company and quickly offloading TBs of raw footage is critical to our workflow.

     

    @limetech Tom, this is the first time I have reached out to you directly in the 6 years that I have owned UNRAID. Do you have any thoughts or suggestions for navigating this issue? Is there any technical reason why UNRAID cannot support what I am asking? It appears that I have the proper hardware (professional IT guy here), and that this is a software issue or limitation of UNRAID itself. Perhaps I have missed something? Thanks so much for your help. This is 1 of 2 massive servers we operate. We seriously LOVE UNRAID, and we have one of the largest single arrays out there.

  12. You guys ever figure this out?

     

    I noticed that in most of your pics, you top out around ~300 MB/s uploading to UNRAID. I just installed an NVMe PCIe x4 SSD cache drive and there is NO IMPROVEMENT; I am stuck at the same speeds I was getting from a standard SATA6 SSD cache drive. However, I am able to fully saturate my 10GbE NIC with sustained 1 GB/s writes under a very specific scenario. Please let me know. I would like to get this figured out once and for all. My post is below. Thanks!

     

     

  13. On 10/5/2017 at 9:09 AM, greg2895 said:

    I am having the same issues here. Can't saturate 10GbE; I top out at about 350 MB/s. Direct I/O is giving me call traces, and all Docker apps had to be changed from /mnt/user/appdata to /mnt/cache/appdata for them to be able to read/write. To top it off, I am still only getting 350 MB/s over 10GbE! I am out of ideas.

     

    You guys ever figure this out? I am in the same boat.

     

    I just posted a new topic regarding this issue, and I have possibly found the CAUSE. However, I am able to fully saturate my 10GbE NIC with sustained 1 GB/s writes under a very specific scenario. Please see below, and feel free to stop by my post and saturate that LOL. I REALLY want to get this fixed, the correct way. Thanks everyone!

     

     

  14. So here is the full cache drive disk log. It does appear that DISK CACHING IS ON at the tail end of the log.

     

    [Screenshot: cache drive disk log]
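     

    For anyone else checking this, the write cache state can also be queried straight from the console without digging through the log (device names are placeholders, and nvme-cli may need to be installed):

    hdparm -W /dev/sdX                      # SATA/SAS: "write-caching = 1" means on
    nvme get-feature /dev/nvme0 -f 0x6 -H   # NVMe: volatile write cache feature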

     

    Sorry about all of the posts in a row... I'm just trying to present as much info as possible for everyone to assist.

     

    In the meantime, I am going to fully Power Down and test again to see if there is any change. I'm guessing not, but you never know LOL.

     

    EDIT: Reboot had NO effect. Welp...

  15. Here is what I can find in the WIKI regarding speed and how it works. Keep in mind I have been using UNRAID avidly since 2014. I consider myself a pretty advanced user, and I have had a cache disk for a very long time. I have to imagine that what is causing this issue is either something really dumb and basic, or something really technical and beyond my level of expertise.

     

    I'm looking forward to anyone's feedback. Thanks again in advance!

     

    [Screenshot: excerpt from the Cache disk wiki page]

     

    Cache Wiki Page:

    https://wiki.unraid.net/Cache_disk

     

     

  16. Former Title: "1TB NVMe PCIe Cache & 10 GBe NIC = Very Odd Network Issue"

     

    Renamed for Better Searchability and Tutorial Purposes.

     

    Hello, I have a 10GbE peer-to-peer network connection with a dedicated Windows 10 PC (CAT6A at around 75 feet). Both machines have the exact same ASUS XG-C100C NIC. I had been running a 1TB SSD as my primary Cache drive until recently. Back then, I was peaking around 300 MB/s on writes to the Cache (not the array). Though these drives peak at 500 MB/s writes, I was not too concerned about losing 200 MB/s. I figured it was some kind of bottleneck or overhead. I have my MTU set to 9014 on both ends, and I have fine-tuned my NIC like crazy on the Windows side; it is fully optimized for this work.
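     

    (Side note for anyone replicating this: Windows counts the 14-byte Ethernet header in its "Jumbo Packet" setting, so 9014 on the Windows side pairs with an MTU of 9000 on the Linux side. A quick sanity check from the UNRAID console, with the interface name as a placeholder:)

    ip link show eth0 | grep -o 'mtu [0-9]*'   # confirm the current MTU
    ip link set eth0 mtu 9000                  # jumbo frames; must match the peer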

     

    Today, I installed a Samsung 970 EVO Plus 1TB NVMe PCIe x4 SSD as my primary cache drive. I have 2 of these in my desktop, and I peak at 3350 MB/s on disk-to-disk writes. That being said, with my 10GbE NIC, which should peak around 1250 MB/s fully saturated, and my new NVMe cache drive having a higher throughput than my NIC (it is not a bottleneck), I assumed I would be hitting 1 GB/s transfers uploading files to my UNRAID server.

     

    Well...there is something VERY ODD about my outcome. I do and I don't. 😆

     

    If I write directly to the share, which really lands on Cache (Cache is set to "YES" on this share), I lose 2/3 of my throughput and peak at a sustained ~300 MB/s.

     

    \\UNRAID\Share

     

    [Screenshot: ~300 MB/s sustained transfer]

     

     

    If I write directly to the Cache drive (only did this as a test) to the same exact folder path, I suddenly hit a rock solid 1 GB/s sustained write speed as expected.

     

    \\UNRAID\Cache\Share

     

    [Screenshot: ~1 GB/s sustained transfer]

     

     

    WHAT AM I MISSING? How is this even happening? 🙄

     

    I would certainly expect some kind of performance drop if I were writing directly to the array, but I am writing to CACHE. This is freaking killing me. I just spent a crap ton of money to upgrade this server, and I am basically in the same place as with a standard SSD. And now I'm thinking that if I had tested writing to Cache directly on the old SSD cache drive, I would have hit its 500 MB/s peak after all, given that I saw the same ~300 MB/s max write speeds on that drive.

     

    This is clearly a software issue. The same hardware is being used to write the same file in both instances; *HOW* the file is being written is the only difference. Technically, SMB/UNRAID is agnostic in that it presents the folder/file on the SMB share path and does not delineate whether it is actually sitting on the cache drive or the array. It just serves up the file. However, WINDOWS or UNRAID certainly behaves differently over SMB depending on whether I direct the file to the share (backed by cache) or to the cache\share path directly.
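     

    One way to pin this on shfs rather than SMB or Windows is to write the same amount of data locally from the UNRAID console to both paths and compare (rough sketch; the share name is a placeholder, and conv=fdatasync forces a flush so the timing is honest):

    # through the user share (goes through the FUSE/shfs layer)
    dd if=/dev/zero of=/mnt/user/Share/ddtest1 bs=1M count=4096 conv=fdatasync
    # straight to the cache disk (bypasses shfs entirely)
    dd if=/dev/zero of=/mnt/cache/Share/ddtest2 bs=1M count=4096 conv=fdatasync
    rm /mnt/user/Share/ddtest1 /mnt/cache/Share/ddtest2

    If the user share path is slow even locally, the bottleneck is shfs itself and not the network or Windows.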

     

    I know you SHOULD NOT write to Cache directly or it can mess stuff up. So my question is, HOW DO I GET FULL THROUGHPUT while writing to Cache the proper way?

     

    Please help! Thanks everyone!

  17. I have updated my ALL Disk scripts. There are 2 copies attached: one does not suppress errors, and the other one does.

     

    Updates:

    • Audit Timestamps Now in YYYY-MM-DD Format
      • Was erroneously YYYY-DD-MM before
    • Output Results Folder Now in YYYY-MM-DD Format (With Dashes)
      • Was in YYYYMMDD Format Before (Without Dashes)

    UNRAID_Audit_All_Disks_NullErrors.bat UNRAID_Audit_All_Disks.bat
