TexasUnraid

Members
  • Content Count

    144
  • Joined

  • Last visited

Community Reputation

17 Good

About TexasUnraid

  • Rank
    Advanced Member

  1. Ah, I see, so it just shares the whole disk. I was thinking it was like user shares but with a direct path to the disk. This might work: the main reason I'd use disk shares is during backups, so I could set up the backup program to use a disk share and keep the mapped drives as user shares.
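     One way this could look if the backup job ran on the server itself (the paths here are placeholders, not my actual shares):

         # read straight from the disk path instead of /mnt/user so the copy skips the
         # FUSE layer; both paths below are placeholders
         rsync -a --delete /mnt/disk1/documents/ /mnt/disks/backup/documents/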
  2. I reverted back to 6.8.3 when I saw my trial was drawing to a close. I'm pretty sure I saw this same issue with 6.9, but since I didn't figure out what was causing it until today, I can't be sure. I didn't know to test for it specifically; I thought it was just a permissions issue or something.
  3. I can't stop the array right now: my trial expired this morning, I read somewhere that the extension can take a little time to activate, and people are using the server at the moment. I will try changing that when I can afford some downtime. I was looking at this setting and I'm not sure I understand how to actually create a disk share:
     "If set to No, disk shares are unconditionally not exported. If set to Yes, disk shares may be exported. WARNING: Do not copy data from a disk share to a user share unless you know what you are doing. This may result in the loss of data and is not supported. If set to Auto, only disk shares not participating in User Shares may be exported."
     I don't see any options for disk shares when creating a share. Most of my shares right now are set to use only one disk, so that must not be the answer?
  4. Was doing some testing on the possibility of the FUSE file system causing the slowdowns. I'm not sure how to mount a disk share via SMB, but I did finally figure out why my Krusader copies are stupidly slow sometimes and super fast other times. It turns out that if I use the /user path, my speed is limited to ~50-60 MB/s. The exact same copy using the direct drive path, on the other hand, ran at over 1 GB/s until the memory buffer filled, lol, then it dropped to the expected ~120 MB/s. So yes, the FUSE file system is definitely a bottleneck, although I have not been able to test it over SMB since I have no idea how to mount things manually in Linux.
     If I could set up a manual share that mounted directly to the disk and get full performance, that would be an acceptable workaround for my use case. I don't really see myself letting most shares span multiple disks anyway; there is simply no need, and it makes backups much simpler since I have the shares split up to fit on individual disks right now.
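     A command-line version of the same comparison would look roughly like this (the share and file names are placeholders):

         # time the same copy straight to the disk and then through the FUSE user share;
         # the drop_caches step just keeps the page cache from skewing the second run
         sync && echo 3 > /proc/sys/vm/drop_caches
         time cp /mnt/disk1/testshare/big.bin /mnt/disk1/testshare/copy_direct.bin
         sync && echo 3 > /proc/sys/vm/drop_caches
         time cp /mnt/user/testshare/big.bin /mnt/user/testshare/copy_fuse.bin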
  5. Been playing around with the settings. I found that reducing the cache depth to 5 and the cache pressure to 1 drastically reduced the CPU usage; now it just spikes to ~7-10% every few seconds instead of sitting at 20-30% for 50-60% of the time. I just need to figure out the ideal depth. I figure that if I am going more than 5-6 folders deep on a drive, it will most likely need to be spun up anyway. Interestingly, excluding SSD shares that I know have a lot of folders in them does not seem to reduce the CPU usage much, if at all.
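     As far as I can tell, the folder caching plugin is basically doing the equivalent of this in a loop (a rough sketch, not its actual code), which is why capping the depth cuts the work per pass:

         # depth-limited walk of every array disk so the directory entries stay in RAM;
         # depth 5 matches the setting mentioned above
         for d in /mnt/disk*; do
             find "$d" -maxdepth 5 >/dev/null 2>&1
         done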
  6. I am guessing you are saying that it is scanning the cached version of those folders, since the drives are spun down and it can't read the actual folders? How fast does Linux drop the cache, and is there a way to extend it? I am not worried about memory usage since I have 32 GB and only use about 8 GB unless I'm running VMs. It seems like it should be able to go more than ~10-20 seconds between scans.
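     For what it's worth, the plugin's cache pressure setting appears to map to the kernel's vfs_cache_pressure knob, which can also be checked and set by hand (1 tells the kernel to hang on to cached dentries/inodes much longer, 100 is the default):

         # check and lower how aggressively the kernel evicts cached directory entries
         cat /proc/sys/vm/vfs_cache_pressure      # default is 100
         sysctl vm.vfs_cache_pressure=1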
  7. Curious about this as well. My CPU normally sits around ~3-5%, but with the folder cache enabled it spends almost half the time at ~20-30% usage. I'm not using the /user option and am otherwise on mostly default settings. It seems really excessive when it should not be doing anything once the folders are cached.
  8. Yep, basically the same results as what I am seeing. Now try that with ~1 million small files and watch your day go bye-bye, lol. When testing against a Windows machine, try using a Windows client as well; I only use Windows clients, so I'm not sure how a Linux client would fare talking to a Windows host.
  9. Yes, the HBA setting is just set once. I don't remember the exact settings offhand, but it was pretty straightforward: you basically disable all RAID functions of the card, and it has a setting to turn it into an HBA setup. The online documentation explains it pretty well. SSD trim is outside the scope of a forum post, but basically it clears old data from the SSD's NAND so that future writes will be faster. Modern drives have garbage collection that makes it far less important than it used to be, but trimming the drive from time to time can still help write performance. In the month I have been using the system, though, I have not seen any performance degradation.
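     For anyone who wants to trim by hand, it's a one-liner, assuming the drive sits on a controller that passes TRIM through (the mount point is just an example):

         # manually trim a mounted SSD filesystem
         fstrim -v /mnt/cache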
  10. Yes, I already tested this. It seemed to make a minor difference, but that doesn't mean much when it is still 10-20x slower than Windows, lol.
  11. Interesting, how would I mount an SMB share to the disk and not the user folder? I find it interesting that if I use NFS to connect to the user share, everything works fine; that would seem to suggest it is not a FUSE issue. Also, I currently have all my shares set to use only one drive each, although I doubt that makes a difference.
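     (For context, what I'm picturing is something along these lines from another Linux machine, assuming disk shares were actually exported over SMB; the server name, mount point, and credentials are placeholders:)

         # mount the exported disk share directly instead of the user share;
         # "tower" and the credentials below are placeholders
         mkdir -p /mnt/disk1share
         mount -t cifs //tower/disk1 /mnt/disk1share -o username=me,password=secret,vers=3.0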
  12. Here is the first test, directly on the drive. It created the files at 22 MB/s and deleted them in:
     real 0m0.116s, user 0m0.008s, sys 0m0.076s
     Using the same folder through the user share, it created them at 10.4 MB/s and deleted them in:
     real 0m0.606s, user 0m0.014s, sys 0m0.112s
     So slower, but nothing that would explain the SMB performance; pretty much everything I do directly on Unraid runs at the expected speeds, except that Krusader will sometimes be locked to ~60 MB/s for reasons I can't explain. If I copy in the console, or copy large files over SMB, it runs at full speed, so that must be something to do with Krusader. The only time I have slowdown issues is when using SMB, and then mostly only with small files (although if it copies a large file mixed in with a bunch of small files, it will sometimes slow down too for some reason).
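     For anyone wanting to reproduce it, something along these lines produces the same kind of numbers (the counts and sizes are arbitrary, and the path is a placeholder):

         # create a pile of small files, then time deleting them
         mkdir -p /mnt/disk1/testshare/smallfiles
         for i in $(seq 1 1000); do
             dd if=/dev/zero of=/mnt/disk1/testshare/smallfiles/f$i bs=4k count=8 status=none
         done
         time rm -rf /mnt/disk1/testshare/smallfiles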
  13. I got my card on eBay. I got a good deal, but you should be able to find them under $40 without a lot of hassle, at least when I was searching. Like all HBA cards, it needs some airflow over the heatsink to stay happy. I ended up building a funnel out of poster board that covers half of the side-panel-mounted 120mm fan and funnels the air down to the heatsink. I run the fan at its minimum speed, somewhere around ~500 RPM; it can't even be heard and the card stays nice and cool.
  14. I was updating my CoreELEC media box today and had to delete ~40k thumbnails over SMB (100 Mbit, not even gigabit) from the device's SD card. Even though it was working off an SD card, it still deleted the files almost 3x as fast as Unraid manages on a RAID0 cache pool during its first ~1000 files, before Unraid grinds to a halt. Both use Linux Samba implementations. I just found it interesting.
  15. I ended up getting an Adaptec ASR-71605 16-port 6Gb/s SAS PCIe 3.0 card for $30, and I have to say I am very happy with it. There's no need to flash it to put it into HBA mode; you simply change the setting in the card's BIOS at boot. Performance has been great as well, and it has 16 ports for less money than the LSI 8-port cards. It also works great in Unraid: the drivers installed automatically and have given me no issues. The only downside is that, like the LSI cards, it can't trim consumer SSDs, but so far I have not noticed this to be an issue, and I can connect the SSDs to the onboard SATA to trim them every now and then if needed.