bamhm182

Members
  • Content Count: 65
  • Joined
  • Last visited

Community Reputation: 3 Neutral

About bamhm182
  • Rank: Newbie


  1. Going to hop on the revive-an-ancient-thread bandwagon. I finally got around to learning more about Docker networks, and I am having a hard time believing that such a useful feature of Docker (multiple networks) cannot be reliably maintained in Unraid... EDIT: Hello sarcasm, my old friend.
  2. No problem. FWIW, I haven't had any issues since I started it manually. I haven't rebooted either.
  3. I just noticed my Docker page was reporting the same issues. I haven't tried rebooting my server; it has been up for nearly 20 days and was working fine last night. Going to Settings => Docker and disabling, then re-enabling, didn't work. I couldn't find the `service` command. I don't know if that's a nuance of Slackware or Unraid or what, but running `/etc/rc.d/rc.docker status` reported it was stopped, so I ran `/etc/rc.d/rc.docker start` and it said it was started. I then ran `/etc/rc.d/rc.docker status` and it reported running. Everything seems to be working fine now. Not really
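The manual fix above could be scripted. A minimal sketch, assuming the rc script's status output mentions "running" when the daemon is up (the exact wording may differ by Unraid version, so treat the parsing as a guess):

```python
import os
import subprocess

RC_DOCKER = "/etc/rc.d/rc.docker"  # Slackware-style rc script used by Unraid

def looks_running(status_output: str) -> bool:
    """Heuristic: the status text mentions 'running' but not 'not running'."""
    text = status_output.lower()
    return "running" in text and "not running" not in text

def ensure_docker_running() -> None:
    """Check the rc script's status and start Docker if it looks stopped."""
    status = subprocess.run([RC_DOCKER, "status"],
                            capture_output=True, text=True).stdout
    if not looks_running(status):
        subprocess.run([RC_DOCKER, "start"], check=True)

if __name__ == "__main__" and os.path.exists(RC_DOCKER):
    ensure_docker_running()
```

This only automates the same two commands from the post; it won't help if the daemon is wedged rather than cleanly stopped.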
  4. Fair enough. I'll take your word for it. Thanks for the additional information!
  5. I wasn't insinuating it would be a long-term fix; I was just thinking maybe a little JavaScript could clean it up a bit. I honestly don't know anything about Unraid plugins and what they're capable of.
  6. Thanks for the info, JorgeB! Though it makes me wonder: why include the Free Space at all if it's known to be buggy and there's nothing Unraid can do about it? Seems to me that the cache always has at least a good idea of how much space is actually being used, so it would look much better if there were only the used metric there, and if the bar filled up as a percentage of the used space over the calculated space (2TB/2.3TB in my case). Even if the total drive space does fluctuate a bit, 3MB/2.3TB vs 3MB/2TB or 200GB/2.3TB vs 200GB/2TB seems like it would look much better than an
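For illustration, the bar math suggested above is just used space over the calculated pool size; a toy sketch (numbers from the post, helper name made up):

```python
def bar_fill_percent(used_tb: float, total_tb: float) -> float:
    """Percent of the usage bar to fill: used space over the calculated
    pool size. Illustrative only; not part of Unraid."""
    if total_tb <= 0:
        raise ValueError("total must be positive")
    return 100.0 * used_tb / total_tb

# 2 TB used of a 2.3 TB pool, as in the post:
print(round(bar_fill_percent(2.0, 2.3), 1))  # → 87.0
```

The used number is the trustworthy one, so the bar stays meaningful even while the calculated total wobbles between 2 TB and 2.3 TB.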
  7. Is anyone else having trouble installing perl? When I go to add it, it jumps straight to "Installing", whereas something like python will go to "Downloading", then "Installing". Even though the toggle switch is set to "yes" for perl, the status is uninstalled.
  8. Hello, I am hoping to have someone sanity-check whether or not these numbers make sense. As can be seen in the screenshot, I have 7 cache disks of varying size. Sometimes the cache pool will say 2TB total, but sometimes it will say 2.3TB. The USED and FREE numbers don't add up to anywhere close to 2 or 2.3 TB. Most recently, I deleted my cache pool and completely reformatted all the drives, thinking that might solve the problem; however, it currently shows that of 2.3TB, I have 3.56MB used and 1.27TB available. Another interesting note is that the bars seem to indicat
  9. I just built a box with the following specs:
     * CPU: ThreadRipper 3970X
     * Mobo: ASUS Zenith II Extreme Alpha
     * GPU: ASUS Dual OC 3070
     I was having a lot of issues until I updated to the 6.9.0-beta/rc releases and a lot of them were resolved, but I was still unable to pass through the GPU. I finally got around to moving around all my cards, throwing in another GPU so I could have the 3070 in the secondary slot and followed SpaceInvader One's video on dumping the vBIOS. It went flawlessly and I was able to dump the vBIOS from both positions on this dual-bios card
  10. I've been putting my head through my desk all weekend trying to get these little oddities with my new ThreadRipper 3970X build worked out. I was on 6.8.3 stable, which is on kernel 4.19, with my last build, and everything was working fine. Stupid brain didn't put 2 and 2 together that new chips like new kernels. 6.9.0-beta35 is on 5.8.18, and it looks like 5.8+ fixes the issues I've been having, so I'm absolutely going to upgrade to the latest beta. Standard beta warnings apply in that "You may lose data/things may explode and make you :(" however, I've used the Unraid bet
  11. I didn't want to bump a thread that has been dormant for a while, then it turns out I didn't have to. Haha. With the 3080 having more info about it, it sounds like it is very well within the realm of possibility that we will see SR-IOV support on the 30-series. I don't know if it works unofficially or not, but I would LOVE to see some official support for this feature. +1
  12. Sorry for the late response. I thought things were slowing down and I'd get a second to really dig into this problem. Boy was I wrong... The power supply is a Corsair CS55M. The backplanes I have provide power over molex, so all my drives (aside from my m.2) are powered from the molex connectors on it. It appears to be a single-rail PSU. Just to see what my max-ish power consumption was, I started up a few hundred `yes` streams and made sure all of my disks were spun up. My UPS said I was pulling around 200W. At ~idle, I'm around 110W. As far as disks go, I have the fol
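To put those measurements in perspective, one rough check is how much of the PSU's rated capacity the peak draw uses. A toy helper (the 200 W figure is from the post; the 550 W rating below is purely hypothetical since I'm not certain of the model's spec, and wall draw measured at the UPS isn't the same as DC load on the PSU):

```python
def psu_headroom_percent(rated_w: float, peak_draw_w: float) -> float:
    """Percent of rated capacity left over at peak draw (rough sanity check)."""
    if rated_w <= 0:
        raise ValueError("rated wattage must be positive")
    return 100.0 * (rated_w - peak_draw_w) / rated_w

# ~200 W peak measured at the UPS; hypothetical 550 W rating:
print(round(psu_headroom_percent(550, 200)))  # → 64
```

Plenty of headroom by that math, which is why a crash under such modest load points away from raw capacity and toward something like a rail, connector, or failing unit.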
  13. Hello, I have been using Unraid for a long time on an R710 and a custom-built server. I recently sold the R710 and moved everything over to the custom-built server, and for some reason, I have had random crashes ever since. The only thing I can think of is that the PSU isn't powerful enough. I have looked through the logs several times and I cannot seem to pinpoint the issue from there, but I'm hoping maybe someone else can before I go dump a bunch of money into a new PSU. It isn't ever doing anything insane when it crashes. I just have a couple VMs and Docker containers th
  14. I would recommend against it just because of how loud it's going to be. It's likely to only accept 2TB drives at most. You MIGHT be able to flash an HBA into IT mode to bypass this, but I don't know if it would be worth it.
  15. Would you be open to it being made with Python? Seems to me that Python would be a good choice since it is easily extensible, cross-platform, and easier to maintain. Last I looked into it, you could easily build for Linux, Windows, and OS X. The only stipulation is that the OS X executable needs to be made on OS X.