bamhm182

  1. Going to hop on the revive-an-ancient-thread bandwagon. I finally got around to learning more about Docker networks, and I'm having a hard time believing that such a useful Docker feature (multiple networks) can't be reliably maintained in Unraid... EDIT: Hello sarcasm, my old friend.
  2. No problem. FWIW, I haven't had any issues since I started it manually. I haven't rebooted either.
  3. I just noticed my Docker page was reporting the same issues. I haven't rebooted my server; it has been up for nearly 20 days and was working fine last night. Going to Settings => Docker and disabling, then re-enabling, didn't work. I couldn't find the `service` command (I don't know if that's a nuance of Slackware or Unraid or what), but running `/etc/rc.d/rc.docker status` reported it was stopped, so I ran `/etc/rc.d/rc.docker start` and it said it was started. Running `/etc/rc.d/rc.docker status` again then reported running, and everything seems to be working fine now. Not really sure what that was about... I attached my diagnostics in case anyone cares to compare them to the ones Pillendreher uploaded. If it happens again, I'll probably spend more time looking into it, but for now I'm good with leaving it at "computers are weird". trminator-diagnostics-20210123-1518.zip
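For anyone who wants to script that check-then-start dance, here's a minimal sketch. The rc script path is the one from my system, and `restart_if_stopped` is just a helper name I made up, not an Unraid command:

```shell
#!/bin/sh
# Sketch: start the Docker rc service only if "status" reports it stopped.
# /etc/rc.d/rc.docker is the Slackware-style rc script path on my Unraid box;
# adjust DOCKER_RC if yours differs. restart_if_stopped is a hypothetical name.
DOCKER_RC="/etc/rc.d/rc.docker"

restart_if_stopped() {
    rc="$1"
    # Grep the script's own "status" output rather than relying on `service`,
    # which doesn't exist here.
    if "$rc" status 2>/dev/null | grep -qi "stopped"; then
        "$rc" start
    fi
    "$rc" status
}

# Usage (on the actual server): restart_if_stopped "$DOCKER_RC"
```

The call is left commented out so the sketch is safe to paste on a machine without that rc script.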
  4. Fair enough. I'll take your word for it. Thanks for the additional information!
  5. I wasn't insinuating it would be a long-term fix; I was just thinking maybe a little JavaScript to clean it up a bit. I honestly don't know anything about Unraid plugins or what they're capable of.
  6. Thanks for the info, JorgeB! Though it makes me wonder: why include the free space at all if it's known to be buggy and there's nothing Unraid can do about it? The cache always has at least a good idea of how much space is actually being used, so it seems like it would look much better if only the used metric were shown, with the bar filling as a percentage of used space over calculated space (2TB/2.3TB in my case). Even if the total drive space fluctuates a bit, 3MB/2.3TB vs 3MB/2TB, or 200GB/2.3TB vs 200GB/2TB, would look much better than an arbitrary 3MB ~= 50%. While I haven't dug into creating plugins personally, does this seem like the kind of thing that could easily be done with a plugin? Seems likely to me. If so, perhaps I'll have a go at creating one.
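To make the arithmetic concrete, here's the used-over-calculated percentage for my own numbers (3.56MB used of 2.3TB, decimal units as the GUI displays them), which shows just how far off a roughly half-full bar is:

```shell
# Bar fill if it were simply used/total: 3.56 MB of 2.3 TB (decimal units).
awk 'BEGIN {
    used_mb  = 3.56
    total_mb = 2.3 * 1000 * 1000   # TB -> MB
    printf "%.6f%%\n", (used_mb / total_mb) * 100
}'
# prints 0.000155% -- nowhere near the ~50% the current bar shows
```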
  7. Is anyone else having trouble installing perl? When I go to add it, it jumps straight to "Installing", whereas something like python will go to "Downloading", then "Installing". Even though the toggle switch is set to "yes" for perl, the status is uninstalled.
  8. Hello, I'm hoping someone can sanity-check whether these numbers make sense. As can be seen in the screenshot, I have 7 cache disks of varying sizes. Sometimes the cache pool says 2TB total, but sometimes it says 2.3TB, and the USED and FREE numbers don't add up to anywhere close to 2 or 2.3TB. Most recently, I deleted my cache pool and completely reformatted all the drives, thinking that might solve the problem; however, it currently shows that of 2.3TB, I have 3.56MB used and 1.27TB available. Another interesting note is that the bars seem to indicate that 3.56MB ~= 1.27TB. From looking into this, I understand there's an issue with free space calculations in btrfs, but this all seems a little off considering I don't recall having an issue like this before, and I wanted to get some other eyes on it. Thanks in advance! trminator-diagnostics-20201230-1158.zip
  9. I just built a box with the following specs: * CPU: ThreadRipper 3970X * Mobo: ASUS Zenith II Extreme Alpha * GPU: ASUS Dual OC 3070 I was having a lot of issues until I updated to the 6.9.0-beta/rc releases, which resolved most of them, but I was still unable to pass through the GPU. I finally got around to moving all my cards around, throwing in another GPU so I could have the 3070 in the secondary slot, and followed SpaceInvader One's video on dumping the vBIOS. It went flawlessly and I was able to dump the vBIOS from both positions on this dual-BIOS card. Now passthrough works perfectly, but I somehow borked the bare-metal install I was trying to pass through. Re-installing it now. If you have ANY ability to get a second GPU, I would definitely go the route of dumping your own BIOS. It is a pain, but it should only need to be done once AFAIK, and it gives you the best chance of success. FWIW, my BIOS was only like 153KB in both positions vs the 976KB from TechPowerUp. I don't know where that difference comes from, since the card seemed to indicate that the BIOS chip was only 512KB, if I'm not mistaken.
  10. I've been putting my head through my desk all weekend trying to get these little oddities with my new ThreadRipper 3970X build worked out. I was on 6.8.3 stable, which is on kernel 4.19, with my last build and everything was working fine. Stupid brain didn't put 2 and 2 together that new chips like new kernels. 6.9.0-beta35 is on 5.8.18, and it looks like 5.8+ fixes the issues I've been having, so I'm absolutely going to upgrade to the latest beta. Standard beta warnings apply, in that "You may lose data/things may explode and make you :(", however, I've used the Unraid betas in the past with little to no issue. I know that others have had issues, though. If you do go to beta, make sure your backup strategy is rock solid and there will be nothing to worry about. EDIT: This appears to have been a very good thing for me to do. I am now able to pass through my USB controller and it works exactly as expected, without totally wrecking my server and forcing me to hold down the power button... If only I had come across the combination of the following reddit thread and this thread before wasting my entire weekend... >_< https://www.reddit.com/r/VFIO/comments/eba5mh/workaround_patch_for_passing_through_usb_and/
  11. I didn't want to bump a thread that has been dormant for a while, but it turns out I didn't have to. Haha. With more info about the 3080 available, it sounds like it is very well within the realm of possibility that we will see SR-IOV support on the 30-series. I don't know if it works unofficially or not, but I would LOVE to see some official support for this feature. +1
  12. Sorry for the late response. I thought things were slowing down and I'd get a second to really dig into this problem. Boy, was I wrong... The power supply is a Corsair CS55M. The backplanes I have provide power over Molex, so all my drives (aside from my M.2) are powered from the Molex connectors. It appears to be a single-rail PSU. Just to see roughly what my max power consumption was, I started up a few hundred `yes` streams and made sure all of my disks were spun up; my UPS said I was pulling around 200W. At roughly idle, I'm around 110W. As far as disks go, I have: 7x 3.5" spinning disks (Molex power), 1x 2.5" spinning disk (Molex power), 2x NVMe (PCIe power), 1x SATA M.2 (SATA power), and 3x 2.5" SSDs (Molex power). I haven't done a memtest yet and the server is usually in use; I'll try to remember to start one before bed tonight. I've enabled logging to my USB, but I can never find any sort of crash information there either. I'll do it again and post some information from around the time of the crash. It just kind of instantly craps out, then works when I reboot it, which makes me think it's something like the PSU going out rather than software. That said, I did run into an issue recently where it just REFUSED to boot. It was giving me exit_boot() and efi_main() failures after GRUB. I had to try like 10 times before it would finally boot. I don't know whether that's related to this, though.
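For reference, the `yes` load test above is just something like the following, scaled way down here to 4 workers (I used a few hundred and watched the UPS wattage while they ran):

```shell
#!/bin/sh
# Scaled-down sketch of the max-power test: spawn N CPU-spinning `yes`
# processes, hold the load briefly, then clean them up.
N=4
pids=""
i=0
while [ "$i" -lt "$N" ]; do
    yes > /dev/null &       # each worker pegs one core
    pids="$pids $!"
    i=$((i + 1))
done
sleep 1                     # read your wattmeter here (I watched much longer)
kill $pids
wait 2>/dev/null            # reap the killed workers
echo "spawned and reaped $N yes workers"
```

Bump `N` toward your core count (or beyond) to approximate a worst-case CPU draw; spin up all disks separately, since `yes` only loads the CPU.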
  13. Hello, I have been using Unraid for a long time on an R710 and a custom-built server. I recently sold the R710 and moved everything over to the custom-built server, and for some reason I have had random crashes ever since. The only thing I can think of is that the PSU isn't powerful enough. I have looked through the logs several times and cannot seem to pinpoint the issue, but I'm hoping someone else can before I go dump a bunch of money into a new PSU. It isn't ever doing anything insane when it crashes; I just have a couple of VMs and Docker containers always running in the background, and the only one that ever really uses a ton of juice is Plex. Thank you in advance to anyone willing to help me look into this! tardis-diagnostics-20200901-0036.zip
  14. I would recommend against it just because of how loud it's going to be. It's also likely to only accept 2TB drives at most. You MIGHT be able to flash an HBA into IT mode to bypass this, but I don't know if it would be worth it.
  15. Would you be open to it being made with Python? Seems to me that Python would be a good choice, since it's easily extensible, cross-platform, and easier to maintain. Last I looked into it, you could easily build for Linux, Windows, and OS X; the only stipulation is that the OS X executable needs to be built on OS X.