JunctionRunner

Everything posted by JunctionRunner

  1. Hi, couldn't really find anything about ArchiveBox throwing this error, but I'm trying to get the Docker container set up, and it basically stops immediately after starting with the error "NameError: name 'auth' is not defined". I noticed that the port it wanted to use in the config was taken, so I completely removed and reinstalled it with a different one, but that still didn't work. Probably some simple fix, but that error line didn't show any super helpful results in Google, I'm new to Docker, and it's been a long, shitty, sleep-deprived day.
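     Since the port clash is a separate problem from the NameError itself, here is a minimal sketch of remapping only the host side of the port, assuming the stock archivebox/archivebox image and its default internal port 8000 (the data path and host port 8001 are placeholders):

     ```
     # Map a free host port (8001 here, a placeholder) to ArchiveBox's default
     # internal port 8000; the data directory path is also a placeholder.
     docker run -d --name archivebox \
       -v /path/to/archivebox-data:/data \
       -p 8001:8000 \
       archivebox/archivebox server 0.0.0.0:8000
     ```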
  2. Yeah... I probably have a little too much to index everything using this tool, sadly. I changed the cache pressure setting to 1 but saw basically no change in RAM usage, so I'll try 20 in case that actually works instead of just setting it to never reclaim. I have 128GB of RAM right now and only 25% usage. I don't mind it using up a good amount if it makes File Explorer on Windows more responsive; as it is, it still takes a bit to load everything up, appearing in chunks, unlike when I was running Windows Server. It's too bad it's just stored in RAM and can't run off the cache or use a special metadata device.
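     Assuming the "cache pressure setting" here is the kernel's vm.vfs_cache_pressure sysctl, a quick sketch of checking and lowering it; lower values tell the kernel to hold onto the dentry/inode caches (directory metadata) longer, which is what affects listing responsiveness, not file contents:

     ```
     # Check the current value (the kernel default is 100), then try a gentler setting.
     sysctl vm.vfs_cache_pressure
     sysctl -w vm.vfs_cache_pressure=20   # keep directory/inode metadata cached longer
     # A value of 0 means "never reclaim" and can exhaust memory under pressure.
     ```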
  3. Oooh, yeah, that's what's doing it. I guess it indexes files a lot slower than Everything search does. Thanks, hopefully it doesn't take too much longer.
  4. Hi, a little odd quirk I'm having that's not hugely important, but it seems like the "find" function likes to grab some files and keep them active, which in turn means one of my disks and both parity drives are always active. I've rebooted, made sure hashes were up to date, and killed the process, but it still seems to latch onto these particular files. Any idea how to fix this? I can't find any other examples of people with this issue so far. It does seem to periodically switch to a different group of files, but then hangs onto those for quite a while. I did only run the initial hash of all files a few days ago, so perhaps it's still doing stuff.
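     For tracking down what is actually holding those files open, a sketch using standard tools; /mnt/disk1 stands in for whichever disk refuses to spin down:

     ```
     # List open files under a specific mount and the processes holding them
     # (/mnt/disk1 is a placeholder; +D can take a while on large trees).
     lsof +D /mnt/disk1 2>/dev/null

     # Or just ask which PIDs have anything open on that filesystem:
     fuser -vm /mnt/disk1
     ```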
  5. Tiny half update: I'm trying a couple of things from a support request thread, but this just happened and I'm not too sure what's going on. This is really, really weird, but it seems like I briefly got an SMB multichannel connection, but not... fully? I set my two 10GbE connections to x.x.1.225 and x.x.1.226 (so the same subnet) and added the interfaces capability=RSS line from the SMB performance tuning material. Then I wasn't able to connect using the hostname, but I was able to connect through the IP directly, and to the cache I got over 1GB/s a couple of times. I mapped the drive using the IP address and got a much slower speed. I still couldn't see it in PowerShell, but in the terminal I did see a third connection during that transfer, and now it won't establish that again, so it's like it kicked on for a second somehow... Anyways, that gave me a massive migraine and confused the hell out of me, but I wanted to share it. I think maybe I'm at least kind of closer today? Testing with different subnets did not work.
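     For reference, a sketch of the kind of SMB Extras lines being described, keeping the redacted x.x addresses as placeholders; on Unraid these normally go in Settings > SMB > SMB Extras (stored in /boot/config/smb-extra.conf), shown here as a shell append only to keep the example self-contained:

     ```
     # Advertise both 10GbE interfaces to SMB clients with the RSS capability hint.
     # The x.x addresses and the speed value are placeholders for this setup.
     cat <<'EOF' >> /boot/config/smb-extra.conf
     server multi channel support = yes
     interfaces = "x.x.1.225;capability=RSS,speed=10000000000" "x.x.1.226;capability=RSS,speed=10000000000"
     EOF
     ```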
  6. @JorgeB I just stumbled across an old comment of yours stating that Unraid needs each NIC to have a different IP address for SMB multichannel to work. Is that still the case? That's a) something that seems stupid and b) a potentially good explanation: I only enabled bonding, I didn't try to set a second static IP or anything for it. I think it mentioned a different subnet being required too? Not sure if that means on Windows I would have to run a second OM3 cable and also set up an IP on a different subnet for that. That would definitely be annoying and stupid... I think I do have a spare cable long enough, though.
  7. That's fair, but this still definitely falls under the issue of SMB multichannel not working, since I know 100% I can fire up Windows on this machine and get full 10-gig speeds. I haven't tried that for hundreds of GB yet, no, but for the amount I'm testing with, definitely; obviously with more data it will choke eventually. It looks like there is an iSCSI plugin that lets you create shares for Unraid, so in theory that would let me put the array and cache into the VM as two "physical" drives and bypass SMB entirely to feed them into DrivePool. There are also some 7mm SLC drives out there, but only up to 800GB; I was looking at those and who knows, maybe still sometime.
  8. Need to try and hit the hay earlier so I get more than 5 hours of sleep today. Fresh diagnostics attached; maybe they'll reveal something. I haven't made any progress trying various other things like buffer and IO tuning, or disabling things to test single-channel speed. I could set up teaming on my switch, but I can't imagine that's necessary for multichannel. Kind of thinking maybe I should do a fresh start on the USB again? quantaserver-diagnostics-20231221-2040.zip
  9. Hi all, popping in here as I have a thread where I was troubleshooting over here, which led to me discovering that SMB multichannel doesn't work with a single NIC unless you make the smb-extras changes. Unfortunately, with both of those changes, and now with the new transceivers having arrived and been installed, I still don't get a multichannel SMB connection to Unraid; the server doesn't even show up in PowerShell under that command at all. I mapped it using \\servername\sharename through This PC, but I can't imagine that would cause it to not work and that I'd have to map it through the CLI to get a multichannel connection, right? I've verified that I get a multichannel connection on the server if I fire up Windows Server. I've mapped and unmapped the share, rebooted, and turned SMB multichannel on and off on the Unraid box, but it seems to just not want to establish a multichannel link at all for some reason. So, as the start of this thread is fairly old now, I'm wondering what the "SMB multichannel connection for dummies" steps are to break it down to basics, as I still have a whoooole mess of troubleshooting stuck in my head, and a fresh starting point from a known-good setup would be helpful. Perhaps I should just redo my USB and have an all-new start.
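     One low-level way to see from the Unraid console whether a client has actually opened more than one SMB connection (which is what multichannel should produce); the client IP is a placeholder:

     ```
     # Count established TCP connections on the SMB port (445) from the Windows client;
     # more than one line usually means multichannel is actually in use.
     # 192.168.1.50 is a placeholder for the client's IP.
     ss -tn state established '( sport = :445 )' | grep 192.168.1.50
     ```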
  10. Updated iperf results as well. Still slow in that one direction with a single stream, which annoys me, but good with multiple streams.
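     A sketch of reproducing that comparison with iperf3, assuming it is installed on both ends; the hostname is a placeholder:

     ```
     # On the server:
     iperf3 -s

     # From the client: single stream, then 4 parallel streams, each in both directions
     # (-R reverses so the server sends). "tower.local" is a placeholder hostname.
     iperf3 -c tower.local
     iperf3 -c tower.local -R
     iperf3 -c tower.local -P 4
     iperf3 -c tower.local -P 4 -R
     ```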
  11. Well, the transceivers arrived, so that's good, though this weekend I have to leave to visit family to make them happy instead of working on projects, lol, ugh. Still testing with the direct share. I enabled bonding (active-backup) and got ~700 with dips to the mid-400s with the SMB extras cleared out. I added back the SMB extras I had before and rebooted my client; I still see no multichannel connection and speeds varied wildly, getting up to 900 briefly. Read speeds look fantastic, though. Unfortunately, after poking at this for a little bit, no luck getting SMB multichannel to enable and show in PowerShell, and I'm getting similar results with it turned off completely in Unraid, both copying to and from the server, so once it actually turns on it should work great. The other thing I'm thinking of, and maybe it's best suited to another thread, is using a virtual machine with StableBit DrivePool on it and sharing out my storage from there. I need to test with mover tuning more, but if I were able to just map my cache and my array as separate physical disks, that means I could still do tiered storage with DrivePool, which means I would be able to pin folders to the faster tier, and unlike mover tuning it seems more sophisticated in how it handles thresholds, properly moves the oldest files first, etc. One thing at a time though: first I want to get multichannel actually working, then I'll see if there's a way to get these drives into a VM at full speed; so far my testing got 200MB/s at best.
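     Worth noting that active-backup bonding only passes traffic over one slave at a time, so a quick sanity check of the bond state may help here; a sketch, assuming the bond is named bond0:

     ```
     # Show the bonding mode, the currently active slave, and each link's status.
     cat /proc/net/bonding/bond0
     ```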
  12. Oh right, so I tried those tweaks, and they dropped performance by about 50%, so I reverted it all back for now. I figure I'll adjust and play with those once multichannel is working.
  13. Well, I've followed the steps in that guide, and unfortunately it seems to be a no-go. On the Unraid box it only shows these two entries for my machine. On Windows, for some reason, the Unraid box doesn't show up at all in my list... My card does support RSS, though. I've rebooted both machines multiple times with no luck. My egrep output looks messier than the example, but probably because I have more cores/threads. I am wondering why the Unraid server doesn't show up in there... I have drives mapped, and I checked again while running a file transfer.
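     To double-check the RSS side on the Unraid box itself (separate from the Windows check in the guide), a sketch assuming the 10GbE interface is eth0:

     ```
     # How many combined (RSS) queues does the driver expose? (eth0 is a placeholder)
     ethtool -l eth0

     # Are the per-queue interrupts actually spread across CPU cores?
     grep eth0 /proc/interrupts
     ```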
  14. Edited my post, I have two single gig ports so I'll try that when I get home and report back.
  15. Ah, I see, so Windows SMB multichannel works with only a single interface, but Linux, or at least Unraid, doesn't for some reason. That seems odd, but it does explain why this has been a complete no-go from the very start and why it works immediately and well in Windows. Unfortunately that flaky transceiver I have was my last one (I'll dig around and quadruple-check later today), so I'll have to wait for some others I ordered on eBay to arrive. I was going to do two bonded anyhow, but didn't know it was a requirement for multichannel. Not sure how many sessions Windows opens by default, but hopefully I don't need more than two to achieve the same speeds, since I can't add any more 10-gig into this server. Unfortunately it will take a while to arrive. Edit: I will give it a shot with a 1-gig port bonded, for the hell of it, when I'm done work today, but I have a feeling that won't be very good.
  16. Here you go, it ended up being fairly short so I just copied it from the terminal, and I don't see anything in it that needs to be anonymized. testparmesean output.txt
  17. Where would I go about running that? The command isn't found in the terminal. I also tried testmem in case it was a typo. I can't find any reference to that command searching Google or the Unraid manual.
  18. Sorry, brain farted and forgot to attach. quantaserver-diagnostics-20231208-0408.zip
  19. I guess I did post my prior reply, but it does seem like multichannel isn't enabled. Here are the diagnostics. I just ran another copy test to the SSD disk share and actually hit 1GB/s briefly, but then it dropped off very fast. For the standard share with cache enabled, same cap as before. And copying from the server, just so there's a current reference for each. Crap, I did just notice I had the four SSDs in a two-group, two-device ZFS mirror, so I switched back to RAID 0, saw a worse peak on the SSD disk share, but higher speeds overall. I have switched back to two groups of two devices though, because that is the end use, unless I need to buy another four SSDs and have two groups of four devices to be able to maximize speed? I definitely don't want to be running four in RAID 0, and buying another four is at least two, likely four, weeks away. It still should be able to get at least the above speed with only two, though, so I think the multichannel issue is where to look. If I have to wait longer and buy another four SSDs, I guess at least I get more high-performance storage capacity; I was thinking of maybe doing it eventually, but I'd like to not _have_ to. I did at least get more than five hours of sleep today, so I shouldn't miss a stupid obvious change like that again.
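     For comparing the two pool layouts during a copy test, a sketch using standard ZFS tooling; "cache" is a placeholder for whatever name Unraid gave the pool:

     ```
     # Confirm the vdev layout (striped mirrors vs a plain stripe) and watch
     # per-vdev throughput every 5 seconds while a transfer runs.
     zpool status cache
     zpool iostat -v cache 5
     ```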
  20. Well, support pointed me towards the smbstatus command, but I can't seem to find a multichannel option like in this example here. I enabled jumbo frames on my router, switch, and workstation, set them all to maximum, and saw single-stream performance jump up quite a lot, but it's inconsistent. These two tests were done, and then a few minutes later it was slow again in one direction. Still better, but slow. With a direct disk share I've now seen it peak near 800MB/s, but then it drops quite a lot, often ending back down around 500MB/s, and if I use the Unraid array share with the cache enabled, I get basically 500 max reading and writing... So, I guess I'll see if there's anything else I can do tuning-wise, but still, the fact that Windows just magically works perfectly in comparison... I thought the Windows network stack was supposed to be worse than Linux for performance. Maybe SMB multichannel isn't actually working.
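     Since jumbo frames only help if every hop actually carries them, a sketch of verifying the MTU end to end; the interface name and client IP are placeholders:

     ```
     # Confirm the interface MTU (eth0 is a placeholder).
     ip link show eth0 | grep mtu

     # Send 9000-byte frames with "don't fragment" set; failures mean some hop
     # still has a smaller MTU. 8972 = 9000 minus 28 bytes of IP/ICMP headers.
     ping -M do -s 8972 -c 4 192.168.1.50
     ```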
  21. Yes, I did enable it. What would be the way to test that it's actually working and enabled in Unraid? If enabling it doesn't actually turn it on, then I guess that would explain why the performance is so much worse. There is that "SMB Extras" section, maybe that needs some stuff added too, but you'd think the yes/no toggle would, y'know, work. And I have done multiple reboots since enabling it. I found this article, not sure when it was posted or if it's outdated, but those SMB extras didn't help. https://unraid.net/blog/how-to-beta-test-smb-multi-channel-support
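     One way to test whether the toggle (or the SMB Extras) actually reached the running Samba config is to dump the effective configuration on the Unraid console; a sketch using Samba's own testparm:

     ```
     # Print the effective Samba config, including defaulted values, and check
     # whether "server multi channel support" is set to Yes.
     testparm -s -v 2>/dev/null | grep -i "multi channel"
     ```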
  22. But Windows on this machine shows similar single-stream iperf results in the same direction as Unraid, yet it transfers fine... And wouldn't SMB multichannel use multiple streams anyway? Also, this is not one giant file, it's multiple files.
  23. Hmm, maybe that's it? I'm using an Intel 82599 network card, not Mellanox, but you'd think that would be supported pretty well too. I know Intel stuff is preferred for TrueNAS for some reason, though Mellanox is more common for hobbyist setups because used parts are way more available and cheaper. But it can saturate the link in iperf... I don't know. I guess this might be beyond forum support.
  24. Disabling bonding, as expected, didn't help, so much for proper sleep. Unfortunately I can't think of anything else to try currently. I don't know if I can pass the array and cache through to a Windows Server VM directly, not as a share, to test that out. It seems like right now the issue is Unraid, so if I do the network share through a VM and that can saturate the network, maybe that will fix this issue, but that seems like a stupid workaround to have to do...
  25. Sigh, this really feels like I wasted $160 right now. There's still a bunch of other cool stuff I want to check out with Unraid, but there's no point when, at this point, it looks like Unraid for some reason just sucks for network storage. Fuck, this is frustrating. Exact same hardware config, I've tried everything I can find searching, and I still get massively worse performance on Unraid than Windows Server, even with a disk share. SMB is supposed to be as good on Unraid as anything else. I don't know, is there some sort of additional tweak to make to SMB multichannel in a config file or something? Windows Server, yet again with the flaky transceiver pulled, easily saturates the network with a 17GB folder of files. But Unraid? Half that, even with the disk share. And I'm still using ZFS RAID 0; if anything you'd think ZFS would be faster than NTFS. Testing with ~200GB of files on an external NVMe SSD, there's a four-to-five-minute longer transfer time on Unraid than Windows, even with Windows doing its classic dipping-then-speeding-up behavior. Change that to a terabyte and that's an even bigger difference.