BKTEK

Members
  • Posts: 29

Everything posted by BKTEK

  1. After trying so many things for so long, I decided to simply wait for a very long time - and in fact I was able to boot in. One of the many problems (and ultimately the primary one) was that the IPMI on my Supermicro board seems to conflict with the video card I have in the box, so the video output simply stops at random places during boot. That's why I had so many random "hangs" - it would randomly stop updating the video on my monitor during the boot process. Strange. Either way, I ultimately got it to work.
  2. Some errors (just this morning):
     Dirty bit is set - Fs not properly unmounted and data may be corrupt. Automatically removing the dirty bit.
     There are differences between the boot sector and backup (mostly harmless)
     Hangs:
     ata4: SATA max UDMA/133 abar m2048@0xdd11d000 port 0xdd11d280 irq 31
     [drm] Initialized ast 0.1.0 20120228 for 0000:06:00.0 on minor 0
     mpt2sas_cm0: 0 1 1
     (the three hangs above were simple reboots with no hardware changes...demonstrating the randomness of the hangs)
     mount: /dev/loop1 mounted on /lib/modules.
     mpt2sas_cm0: CurrentHostPageSize is 0: Setting default page size to 4k
     (removed SAS): usb 1-5: new high-speed USB device number 2 using xhci_hcd
     (no SAS, 2 SSDs attached): ast 0000:06:00.0: [drm] Analog VGA only
     This is really crazy. Any help/insight would be great. I'm at a loss.
  3. Last error (with no drives attached): mount: /dev/loop1 mounted on /lib/modules.
  4. I just tested this while you must have been writing your reply (serendipity) and can see that it is definitely writing to the USB. I put a new install of 6.10.3 in the server and booted it up. It wrote lots of things to config:
     folders: modprobe.d, plugins-error, pools, shares, ssh, ssl, wireguard
     files: network-rules.cfg (various others)
     This is with no drives attached, so I can't see how it's seeing this stuff, including "wireguard." Weird. Thanks for responding.
  5. On another note, which might help me diagnose things more...the first time a new Unraid install boots, what files are written to the USB drive? Every time I put the drive back in my computer to reformat it and reinstall Unraid, it has errors, so I believe the broken install/boot is corrupting the USB drive. That might prove informative. A sketch of how I could check this is below.
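     One way to check, assuming I can mount the used stick and a freshly extracted copy of the same Unraid release side by side (both mount points below are hypothetical - adjust to wherever the copies actually land):

        # Show everything the first boot added or changed on the flash
        # by comparing it against a pristine copy of the same release.
        diff -rq /mnt/fresh-unraid /mnt/used-flash | sort

        # Or, on a booted Unraid box (where the flash is mounted at /boot):
        # list files written in roughly the last hour.
        find /boot -type f -mmin -60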
  6. It stopped in at least 30 different places, literally. It always gets to the blue Unraid screen, where I have tried normal boot, safe mode, GUI, and GUI safe mode. It always continues past that for about 20 seconds or so and then is totally unresponsive. It appears to draw its last line of text but not necessarily the entire "idea" - for example, there's a BMC error I've seen many times that mentions the word "compensating," but that word is split between two lines, and sometimes it stops drawing text in the middle, ending with "compe" and never finishing the sentence. Sometimes it stops around the Nvidia driver install and complains about kernel taint - this only happens when I try to use my old build, and it often freezes right after. It seems like it's usually probing hardware when it freezes (BMC, video card, sda, sdb, etc.). After unplugging all of the drives it still errors. It almost certainly has to be the flash drives or RAM, and I've tried every permutation of those I can think of. Thank you for your response.
  7. Hello! I decided to rebuild my server's OS, which was bogging down like crazy. So I installed 6.11 to a USB device and fired it up. Things seemed fine - I installed a few essential plugins, notably Nvidia Drivers. Then later in the day I rebooted the server and it wouldn't properly boot. It hasn't been able to boot since. The motherboard is a Supermicro X11SSM-F. I read up a lot on possibilities about why it might not boot. I thought maybe it was hanging trying to install the Nvidia driver, but since I can't get into the system anymore and can't SSH in, I'm not able to fix it. Safe mode doesn't help, etc. I have since tried booting the system with:
     • different RAM (ECC and non-ECC)
     • different DIMM slots
     • different USB devices (identical devices which used to work fine)
     • different USB ports (USB3 and USB2)
     • a backup of my OS from before the crash
     • various other Unraid builds
     • ethernet plugged in and unplugged
     • the video card removed
     • various UEFI options, including EFI and Legacy boot
     • hard drives attached and no hard drives
     EDIT (additional info): It's worth noting about the RAM...the server originally had two 16GB ECC DIMMs in it (some years ago). At some point in the distant past one of the sticks just stopped showing up, so I could only see 16GB total, even in the UEFI. While troubleshooting all of this today I was able to get the UEFI to see 32GB in a lot of different slot combinations, and I also tried each stick in each slot separately - to no avail. In every case the system hangs, usually on random stuff. I worked on this for about 10 hours today and made no progress. Any help would be appreciated. Thank you.
  8. Well, wouldn't you know it. I just started syncing some files via my newly minted SMB rootshare, and it is clearly using the cache drive. This is via the Unassigned Devices plugin method. Any clue why? Either way, I'm pretty pleased.
  9. Thank you all for your responses and for taking the time to answer. I understand the structure of rootshares better now. But like I said, the weird part is that I was able to get cache drives working with the manual rootshare method detailed in SpaceInvaderOne's video. The Unassigned Devices plugin method never used the cache, but SpaceInvaderOne's did. I still don't know why. I just did a complete rebuild of Unraid and will experiment with these two methods to see why they differ. Also, apologies for the long pause.
  10. I've never been clear on the distinction between rootshares and user shares. I see in the GUI that user shares are what I've created - but I'm confused about what exactly a rootshare is, I guess. From a layman's point of view (mine), the rootshare created by the plugin seems like a one-stop solution to see all of my shares in one place, which is much better than mapping 8 shares and 2 servers from Unraid to multiple Windows boxes. That would be probably 10-16 network drives mapped per machine and would make me want to die. So when you ask if I'm trying to write to the rootshare: what I did was map the rootshare to Windows as drive Z, then go into it and sync some files to one of the shares that appears within it. But it won't use the cache that way. I'm pretty sure that the manual rootshare technique outlined by SpaceInvaderOne did use the user share settings (cache drives). Either way, the two methods were implemented very differently, since the SpaceInvaderOne method seemed to use my user share settings but didn't accurately reflect free space across the array, while the Unassigned Devices plugin shows accurate space across the array but bypasses the user share settings. If you meant writing files to the root of the rootshare - no. Thanks for the help.
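      For reference, my understanding is that the manual method amounts to a small Samba stanza pointed at /mnt/user, added via Settings > SMB > Samba extra configuration. The sketch below is a hypothetical reconstruction, not the exact config from the video, and "youruser" is a placeholder. Since /mnt/user is the user-share (FUSE) layer, writes through it should honor each share's cache setting - which would explain why that method used the cache:

         # Hypothetical rootshare stanza for the Samba extra configuration.
         # /mnt/user is the user-share layer, so per-share cache settings apply.
         [rootshare]
             path = /mnt/user
             comment = All user shares in one place
             browseable = yes
             valid users = youruser
             write list = youruser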
  11. Update - I removed the Unassigned Devices root share and did a simple SMB share. Then I copied a test file over and VOILA - the cache did exactly what it was supposed to do. So I'm now thinking about starting a new thread to figure out what's going on with the plugin's implementation of root shares, because it definitely broke my setup.
  12. I had another thought. Is it possible that the root share available in the Unassigned Devices plugin bypasses SMB cache usage? Previously I was using the method detailed by SpaceInvaderOne to create a root share, but found the plugin simpler to set up. Is it interfering? Another reason I abandoned the old method is that it would incorrectly report array space in Windows. The reason I thought of this is that along the way I've heard a lot of people warning to be careful with root shares when copying files across shares, drives, arrays, etc. I wonder if I'm somehow doing something wrong with that. Finally, can someone detail a few simple tests I can perform to see if my cache drives are indeed being used correctly? Something like the sketch below is what I have in mind. Thanks. Sorry for the spammed messages - I'm just thinking out loud at this point.
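      For reference, the kind of simple test I have in mind (the share name "Storage" and pool name "cache" are from my setup - substitute your own):

         # Write a ~1GB test file through the user-share layer.
         dd if=/dev/zero of=/mnt/user/Storage/cache_test bs=1M count=1024

         # Then check which backing device actually received it:
         ls -lh /mnt/cache/Storage/cache_test   # exists -> landed on the cache pool
         ls -lh /mnt/disk*/Storage/cache_test   # exists -> went straight to the array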
  13. Also worth noting that my server's writes seem better after a reboot, at least for a while - though it's still not using the cache. Every time I reboot, it feels "unclogged" for a little while.
  14. Thank you for the suggestions. I'll work on implementing them today (I have to work and my girlfriend and I are both a little sick, too).
  15. Okay, one more odd observation - on my main server I decided to do the whole "mark shares as cache=yes" thing and run Mover so it bounces the files over to the array. I have about 100GB on a cache drive and Mover is indeed moving the files (the little bars are going down). I know the array is writing parity while it does this, but Mover has been bouncing files over for maybe 4 hours now and has only moved 10GB or less. At this rate it will take 40 hours to move 100GB from the cache to the array, which seems absurd. Anyone have any thoughts?
  16. I also went in and checked the cache drives in Unraid to see what's on them, and they remain at 0 folders, 0 files. Both while the write is happening and after it finishes, there is nothing - no space taken on the drives, no changes. Is the only/best solution to start over from scratch, which I probably should have done in the first place? Thank you again to anyone helping.
  17. Also worth noting that I just tried a bunch of different things and none of them made the cache drives work (I have two):
      • marked all shares as cache=yes (after turning off Dockers and VMs), then ran Mover, then rebooted
      • detached/deleted the cache pools and rebuilt the pools as new (with the same names)
      • detached/deleted AGAIN and rebuilt using new names - interestingly, the old cache pool names were still referenced in the shares, but I went in and changed them to the new names. No luck, though; it didn't change anything.
      I observed that Windows believes it's copying the files over SMB at about 112MB/s (saturation) for the first 4GB of the file, then drops off dramatically to 40MB/s (and yes, I checked the server and the Windows box, and both are at full 1Gb duplex). I don't know what else to try, but both servers are having this problem. Again, I swapped hardware between them (see first post) and maybe that's the problem. I wonder if the problem would be solved if I started over from scratch. I could do that on my backup server Monolith and see what happens.
  18. So I wonder why my system isn't using the cache drives for writes over SMB. It's devastatingly slow.
  19. /mnt/user/Storage/cache_test
      /mnt/user0/Storage/cache_test
      /mnt/disk4/Storage/cache_test
      /mnt/rootshare/user.Monolith/Storage/cache_test
      Didn't realize it hadn't completed its run. This is the completed output. Thanks for the tip.
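      (For context: a listing like this is what you get by searching the /mnt tree for the test file - something along the lines of the command below, though the exact command from the tip isn't shown here:)

         # Search every mount point for the test file; errors from
         # unreadable paths are discarded.
         find /mnt -name cache_test 2>/dev/null

      Reading the output: the file shows up under /mnt/disk4 (an array disk) and /mnt/user0 (the user-share view that excludes the pool), but under no pool mount - so this write bypassed the cache entirely.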
  20. Here's what the output is:
      /mnt/user/Storage/cache_test
      What do you think? Thanks, as always, for your help.
  21. By the way, is there a way to check hardware ID associations? What should be my next steps?
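      For anyone checking the same thing: as far as I can tell, Unraid keys drive assignments on the device identification (model + serial), which can be listed from a console. Both commands below are read-only:

         # Identifiers as the kernel reports them (model_serial symlinks);
         # filter out the per-partition entries.
         ls -l /dev/disk/by-id/ | grep -v part

         # The same information in table form.
         lsblk -o NAME,MODEL,SERIAL,SIZE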
  22. Thank you for the suggestion - I created a new test share (TESTTEST) and dumped the same 10GB file to it via Windows Explorer. Writes averaged about 45MB/s and didn't use the cache at all. I knew this from looking at the cache drives right afterward: there was still no change on the drives themselves; it wrote the file directly to the array. I'm wondering if Unraid associates cache drives with hardware IDs or something and is seeing the old IDs even though I've tried to update them. It's like Unraid sees cache drives on the Main tab but doesn't have them correctly associated, so it doesn't use them. One more thing I can check is sketched below. Thanks again for the help.
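      Since I rebuilt the pools under new names, it's worth checking the per-share settings directly: they live as plain-text .cfg files on the flash. The key names below (shareUseCache / shareCachePool) are the usual ones in those files, as far as I know:

         # Dump each share's cache mode and the pool name it references;
         # a stale pool name left over from a rebuild would show up here.
         grep -H -e shareUseCache -e shareCachePool /boot/config/shares/*.cfg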
  23. Sorry, I don't understand your reply - could you elaborate? Thanks. If you're referring to the 10GB file I dumped, I dumped it to the root of the share (it didn't exist there before). Have my logs revealed anything useful? For more info: my Windows writes to the server are around 35MB/s (since there's no cache). When the cache drives were working, I could get close to saturating the link - around 100MB/s.
  24. I was also thinking that maybe I need to uninstall and reinstall the cache drives so the system is 'reintroduced' to them. Thoughts?
  25. Hello folks, I've attached diagnostics. I have two Unraid servers - one named Node and another named Monolith, after their appearance/case designs. Node is my main server and Monolith is acting as my backup server. I'm administering this stuff from a Windows machine. Once upon a time the builds of these servers were different, but I recently swapped the cases between the two, along with the arrays, cache drives, etc. - kind of like swapping the USB sticks between servers, where the USB install expects to see different hardware than what it finds. Anyway, things are humming along reasonably well after having to fix some small things, but I don't believe either server is using the cache drives now. The diagnostics I've attached are for Monolith, as I know it's not using the cache drive for writes to the "Storage" share (I just bounced a 10GB file to the share while it's running a parity check to see if it would write to cache, and it didn't). Any advice on what to fix? Thanks, and I'll be eagerly watching this topic for any help. monolith-diagnostics-20220805-1103.zip