Everything posted by JunctionRunner

  1. From a quick 17 GB test of multiple files: ~530 MB/s writes, ~650 MB/s reads. My old server actually has more of a slow ramp-up at the beginning, but noticeably higher peak speeds. It does fluctuate more, but those speeds still make it faster overall.
  2. Alright, so, more iperf testing with four streams (roughly the commands in the sketch below). I did single-stream testing with the Quanta server (the Unraid box) in Windows and got similarly poor iperf results, but for some reason that doesn't affect transfer speeds. Writing this a bit as I go.

     From my desktop to my main server, then from the main server to the desktop: more like what I should be getting. Now, from the Quanta server running Windows Server to my desktop. I did notice the second 10 gig port being a little flaky for some reason, so I disconnected that and need to investigate. Then desktop to the Quanta server running Windows Server. That all looks fine; my old server actually performs a little worse, but it's not noticeable in use.

     So, now with Unraid booted: from my desktop to the Quanta server, and from the Quanta/Unraid server back to my desktop. That all looks fine and dandy, so back to testing with the fresh Unraid install. Interestingly, this time I saw more than 400 MB/s in the Unraid UI, but my write speed is still trash and reads are back to where they were.

     So, I enabled exclusive access and NetBIOS again and shared the cache with no secondary storage, and things are looking much better. I saw up to 1.6 GB/s in Unraid, and the final results are 1,022.43 MB/s reads and 810.9 MB/s writes. It seems like the flaky network connection was causing some sort of issue; I don't know if it's a transceiver problem or something, but this is a massive step forward. That said, I'm still a bit under the performance of my other server, but I think this is at least what I'd consider usable now. I'll run some tests with real-world file transfers next, but I need a screen break.
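     For reference, a minimal sketch of the multi-stream runs, assuming iperf3 and a placeholder address for the box under test:

         # on the machine being tested (the Unraid/Quanta box here)
         iperf3 -s

         # from the desktop: four parallel TCP streams for 30 seconds
         iperf3 -c 10.0.0.50 -P 4 -t 30

     Then swap roles (server on the desktop, client on the Quanta box) to measure the opposite direction as well.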
  3. Yes, I'm aware of that. Now I just need to track down why it's struggling. Considering Windows Server worked fine, I'm guessing it's a network setting in Unraid somewhere, maybe with link bonding (something like the check below).
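     If it is the bond, a quick way to inspect it from the Unraid console, a sketch assuming the bond is named bond0 and the 10 gig ports show up as eth0/eth1:

         # bonding mode, active slave and per-link state
         cat /proc/net/bonding/bond0

         # confirm the negotiated speed on each physical port
         ethtool eth0 | grep -i speed
         ethtool eth1 | grep -i speed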
  4. Yeah, I'm not sure why it's reading kind of low. That being said, according to Task Manager I do hit 10 gig speeds transferring to my other machine, and diskmark shows this, which is far more expected; it will also hold those speeds for a 64 GiB run. Interestingly though, iperf in the other direction shows a significant difference (the reverse-mode sketch below is what I mean). I thought I had run it both ways yesterday and saw matching figures, but I was fairly tired and on a mix of caffeine and allergy meds.
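     A minimal sketch of testing both directions from one side with iperf3, assuming a placeholder server address:

         # normal direction: this client sends, the server receives
         iperf3 -c 10.0.0.50 -t 30

         # reverse mode: the server sends, this client receives
         iperf3 -c 10.0.0.50 -t 30 -R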
  5. That improved things a little bit, but I'm still well under the potential performance of the drives... I also tried enabling direct I/O and that didn't change much of anything. A small bump, but still far under.
  6. Well, I am only sharing the cache with no secondary storage for this testing, so.... This still makes no sense. I guess I'll try fully wiping and redoing the USB from scratch? I don't see why that would change anything, but it seems like the last option as a hail mary. At least I basically have nothing configured yet.
  7. I see matching speeds on the Unraid server as well. I haven't tried running multiple tests; I'll try doing that when I get back home in a few hours. I did try using the DiskSpeed docker plugin, but it doesn't seem to allow me to benchmark a single SSD regardless. Is there a particularly good way within Unraid to benchmark the cache itself locally (something along the lines of the fio sketch below)? It's making zero sense, especially since the exact same hardware works perfectly with Windows Server. I can boot back into that, configure the drives, and immediately have the expected performance.
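     This is roughly what I mean by benchmarking the pool locally: a sketch assuming fio is available and the pool is mounted at /mnt/cache (on ZFS, drop --direct=1 if it errors, since O_DIRECT may not be honored):

         # sequential write, 1 MiB blocks, 16 GiB test file, bypassing the page cache
         fio --name=seqwrite --filename=/mnt/cache/fio_testfile --rw=write \
             --bs=1M --size=16G --ioengine=libaio --direct=1 --iodepth=8

         # sequential read of the same file
         fio --name=seqread --filename=/mnt/cache/fio_testfile --rw=read \
             --bs=1M --size=16G --ioengine=libaio --direct=1 --iodepth=8

         # clean up afterwards
         rm /mnt/cache/fio_testfile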
  8. Yes, those are the ones I'm going to use as the cache. Regardless of how I configure it, though, I still have this issue. I even tried a single drive and got basically the same results. It's like it's not properly striping the data across the drives for some reason (I'll sanity-check the pool layout with something like the commands below).
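     A rough way to confirm the striping from the console, assuming the pool is named cache:

         # ZFS: all four SSDs should appear as separate top-level (striped) vdevs
         zpool status cache

         # BTRFS: the data profile should be raid0 across all the devices
         btrfs filesystem df /mnt/cache
         btrfs filesystem show /mnt/cache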
  9. A little more testing. I forgot to include diagnostics as well, but after creating a separate share on two drives in RAID 0, sharing those still only shows 2 TB of capacity according to Windows, so it's like it can only see one of the drives even though it's in an array (comparing against the local view below)... quantaserver-diagnostics-20231204-1017.zip
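     For comparison with what Windows reports, this is the pool's own view of its capacity, a sketch assuming the two-drive pool is mounted at /mnt/cache (substitute the actual pool name):

         # filesystem-level size as the OS sees it
         df -h /mnt/cache

         # ZFS view of raw pool size and allocation
         zpool list cache

         # BTRFS view, including usable space under the current profile
         btrfs filesystem usage /mnt/cache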
  10. Ok, that's what I figured, since you'd want the cache to be as snappy as possible. It's honestly probably a minor thing once I configure mover tuning. I had originally wanted tiered storage, like I currently have set up in my mess of a server, which does 500 GB NVMe > 500 GB SATA SSD > RAID array. It's entirely possible now that I'll be quickly ingesting a terabyte of raw video at a time, so I needed to upgrade, and I basically want files to stay on the SSD tier as long as possible before being shoved off. I think mover tuning will get me close, but I can't really start testing it properly until I get this speed issue figured out.

      It will be a shame to lose the at-a-glance visible capacity of the SSD tier, but it seems like I just have to suck it up. I was also thinking, hmm, maybe I run a Windows Server VM on top of Unraid with DrivePool to get that functionality, passing the array and cache through to it, but that's clunky. Maybe if mover tuning doesn't work how I want. Pinning specific folders would be nice after all, but that's low priority since I can't even properly use the server currently.
  11. A slight bit of further testing: I enabled NetBIOS and disabled enhanced macOS compatibility. With BTRFS RAID 0 I get 906 MB/s reads, a jump upwards, but only 440 MB/s writes yet again. ZFS provided similar results to the first test as well. I also enabled reconstruct write, but I don't think that affects the cache?
  12. One curious thing I just noticed: I shared the main array, but also created a share only on the cache to ensure it was only writing to these drives. However, it seems to report the overall capacity of the server, which is interesting. I had originally wanted the cache to be non-transparent and behave more like DrivePool, where it showed the capacity of the cache as well as the spinning drives. I'm wondering if this points to some sort of issue, though. I would ideally like to have a single share like this, but the quantassd share doesn't show files off the array, so that isn't going to work how I want at all, and it would even be misleading if I wanted a share that stayed on the SSDs, it seems... edit: After changing secondary storage to none, the capacity changed, though it's higher than it should be. Not a huge issue right now, though, as I want to fix the speed issue first. I had the share set up with no secondary storage before and wanted to test again now; this did not affect speeds.
  13. Hi all, I recently picked up a pretty cool 1U storage server to upgrade my homelab, fix some bad practices, and explore new software options for my NAS, so I'm quite new to Unraid. Unfortunately, due to budget reasons TrueNAS is off the list of options, since I can't afford to buy a bunch of high-capacity hard drives at this time. It took a few weeks, but I was finally able to afford four 2 TB SSDs and another 8 TB drive for parity, so I finally started doing some proper testing after picking up a Pro license during the Black Friday sale, since I was near the end of my trial.

      However, my testing immediately showed a significant problem: the cache performance is, frankly, awful. My old server has a 500 GB NVMe drive as tiered storage with DrivePool, and I'm able to easily saturate my 10 gig network. I was at first concerned that the SAS expander and backplane layout might be limiting the available bandwidth to my drives, so I fired up Windows Server as a sanity check. With a Disk Management RAID 0 of the four SSDs I get 1,054 MB/s reads and 1,164 MB/s writes over the network in CrystalDiskMark Q8T1, fantastic! Matching my other server. But with Unraid and a RAID 0 pool using ZFS, I'm limited to 893 MB/s reads and only 434 MB/s writes. With BTRFS RAID 0 I'm seeing 703 MB/s reads and 449 MB/s writes. I've enabled SMB multichannel on Unraid (checked roughly as in the sketch below) and manually spun down the HDD array to ensure the only devices using any real bandwidth on the SAS channels are the SSDs, but these results are consistent and, unfortunately, kind of deal-breaking. The other sections of CrystalDiskMark are also significantly worse than Windows Server across the board.

      900 MB/s reads isn't bad, honestly, but only being able to write at ~400 MB/s is a substantial and noticeable downgrade from my current setup, and unfortunately this server doesn't have a way to add NVMe storage, since the model that did was out of my price range. That is seeming very regrettable now. I should actually be able to add a custom 5 V supply internally and add another ghetto-rigged set of four SSDs, perhaps even eight off the Intel HBA chipset by removing the shells from the SSDs, but that's another several hundred and I'm unsure if it would even help. It would be nice for capacity, but speed-wise, since four drives in RAID 0 or RAID 10 seem so limited, I really doubt it. I was hoping to have my four SSDs in a pseudo RAID 5 array with single parity, so I'd have the storage space of three with the ability to saturate 10 gig, but it seems Unraid can't do that, so I might need to do RAID 10 and lose half my SSD capacity, if this speed issue can be helped at all.
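      For anyone curious how I checked the multichannel setting, a rough sketch from the Unraid console; the extra Samba line went in under Settings > SMB > SMB Extras (which, as far as I can tell, ends up in /boot/config/smb-extra.conf):

          # confirm the effective Samba config includes multichannel
          testparm -s -v 2>/dev/null | grep -i "multi channel"

          # the line I added via SMB Extras:
          #   server multi channel support = yes

      On the Windows client side, the PowerShell cmdlet Get-SmbMultichannelConnection shows whether multichannel is actually in use for an open session.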
  14. Oh, really? Where do you go to do this? I saw a comment somewhere talking about clicking and holding, but that seemed to do nothing, and I figured that was related to UDD, which I heard about. I tried that and looked through the display settings, but didn't spot it. Is there an option hidden somewhere for four columns as well? I don't think I'd need it as much as I do on something like OPNsense, but it would be good to know. edit: Oh, dammit. There's a little lock icon I just spotted...
  15. Found this while looking for the same thing. Especially on an ultrawide, the middle panel has a huge gap at the bottom with nothing useful, and the disk identify plugin is on the left side, but I have to scroll down to even see it. Kind of shocked we can't change this stuff around.
  16. Hi, I'm currently evaluating Unraid against other options for a new storage server in a Quanta 1U chassis, and this plugin is pretty cool. While I certainly have easy access to drive serial numbers, the locate function is how I actually found this, as that's a feature that appeals to me. Unfortunately, it doesn't seem to work on my machine. My drives are all connected through an LSI 3008 in IT mode to some Quanta backplanes with single lights per bay. Oddly, they don't actually flash for drive activity, just purple when initializing and then blue once a drive is connected. Windows Storage Spaces is able to blink the backplane lights to locate a disk despite this. I'm wondering if there's any sort of diagnostics or info I can provide so that you can try to increase compatibility with more backplanes (maybe the enclosure info from the commands below)?

      It also seems like any SSD, even if it's not in the cache pool, is marked as a cache. I'll likely end up with an SSD-only share, and marking those drives with their actual data/parity roles would be nice, but I'm guessing that's more a limitation of how Unraid interprets drive types. This sort of thing has probably been asked before, but I need to sleep, so I figured I'd just post. I've already ruined my chance at a good amount of rest, again, with projects lol.
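      In case it's useful, this is the sort of enclosure info I could pull and post, a rough sketch assuming lsscsi and sg3_utils are available and /dev/sg4 is the backplane's SES device (placeholder):

          # list SCSI devices along with their SES enclosure device nodes
          lsscsi -g

          # dump what the enclosure/backplane reports about its slots and LEDs
          sg_ses /dev/sg4
          sg_ses --page=es /dev/sg4

          # try toggling the locate LED for a given slot (and clear it again)
          sg_ses --index=3 --set=locate /dev/sg4
          sg_ses --index=3 --clear=locate /dev/sg4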