1812
Everything posted by 1812

  1. so to make sure I understand, if I want to use 3 disks using xfs, I'll need to create 3 xfs pools each with a single disk, and then create a share that specifies those pools and Unraid will do the rest, correct?
  2. maybe I missed this, but can different cache pools be mixed formats, as in main cache btrfs, second pool xfs? and if so, when pooled with xfs does it span data or stripe? Looking at using a pool for a backup copy of data on a few drives, with the accessibility xfs provides for recovery. thanks
  3. and https://www.youtube.com/channel/UCZDfnUn74N0WeAPvMqTOrtA
  4. I had this issue with this container and the other mineos container. The only sort of solution was to log out and log back in, which resolved the webgui not responding about 50% of the time. I tried several browsers, still the same. I got tired of that and now have mineos installed as a vm; it has zero issues and I've been frustration free.
  5. check and make sure you didn't over provision the disk that the backup is located on (as in, make sure the physical disk has space available.) I ran into this issue by telling TM that I wanted the backup to be XXX in size. And as it wrote to the disk (along with other shares) it filled the disk to the point where TM had no more space to write, even though the share/setup said otherwise.
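
     A quick way to check what's actually left on the physical disk holding the backup (a sketch; the disk number and share name here are made up, adjust for your setup):

     ```
     # free space remaining on the physical disk that holds the TM share
     df -h /mnt/disk3

     # how much the backup share itself has grown on that disk
     du -sh /mnt/disk3/timemachine
     ```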
  6. are you using logical disks or passing through via an hba? also, I wouldn't recommend usb disks. regarding the single pegged cpu core, next time it happens, open the terminal and type htop, then see what process is causing the issue. regarding the trial, you can request one or two extensions, or at least you could last time I checked (which was a while ago). If the usb creator isn't working for you, and you're running something other than macOS Catalina, then that points to a problem with the system running the creator.
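
     Something like this from the Unraid terminal; htop is interactive, and the ps line is just a one-shot alternative if you prefer that:

     ```
     htop                                        # interactive view, F6 to sort by CPU%
     ps -eo pid,pcpu,comm --sort=-pcpu | head    # quick one-shot list of the top CPU hogs
     ```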
  7. required until HP fixes it. but don't hold your breath. my hp workstations all complain about it but I just ignore them as it doesn't seem mission critical for what I do.
  8. I can confirm this behavior with at least 2 computers on Catalina. I thought it was something I had somehow borked, even though I hadn't changed anything in Unraid. On my server with the TM backups I attempted rolling back from 6.8.3 to .2, to .1, to .0 and rc 9 (in which I know it worked), and still the same.

     Additionally, I created a TM backup share on my main server and TM did see that disk. I did not connect via the Finder but used command+k, added the server location and share name, then clicked connect. That share then populated in the Time Machine disk selection.

     So then I went back to the server hosting the TM backups and tried the same method, but backups failed. Then I noticed the disk had 5 GB free on it, the same amount I had specified as the minimum free space. I made more room (166 GB actually) and told TM to back up now. It's currently preparing. So it appears I over-provisioned the disk.

     This doesn't necessarily address your issue, unless TM is not seeing the share because of the connection method. I've read that if you mount it as a read/write share via network browsing, TM won't always want to use it. So try the command+k method and see if that helps.
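
     If TM still won't take the share after mounting it, you can also set the destination from Terminal on the Mac (a sketch; the server and share names are placeholders, and the tmutil step needs admin rights):

     ```
     # mount the share the same way command+k does
     open 'smb://tower/TimeMachine'

     # point Time Machine at it explicitly (prompts for the share's password)
     sudo tmutil setdestination -p 'smb://youruser@tower/TimeMachine'
     ```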
  9. it isn't intended to be virtualized, so it assumes it is running on hardware and some sort of hardware GPU is present, either card/soldered or graphics via cpu. Yes, you can run it in VMware officially from apple, but it's not really built to do that in the same manner as windows.
  10. Maybe a driver issue. Are you running the Windows driver or Nvidia's?
  11. you are trying awfully hard to justify a really niche request, since it has never had any real majority of voices behind it in all the time that dockers and vm's have been incorporated into unRaid. By your logic, FreeNAS doesn't have a cappuccino-making program written into the code, so if unRaid did that, they'd gain more users because of this feature. I'm being facetious, but it illustrates the point: uniqueness doesn't always directly equate to value proposition.

      yes, I recognize that you thanked me. But you also said this wasn't the place for alternative suggestions to the problem, which it actually has been. at this point there is no further need for discussion. it was requested, and a workaround was presented as either a permanent alternative or something usable in the duration before inclusion. it's up to everyone else to ask for it too.

      As a side note, I asked for multiple cache pools in February 2017. there is at least one request predating it in October 2015. we are now at the point that it is thankfully being released in the next version of unRaid. I recognize it was a major request, and I was happy to have 4 pages of support and have the devs work hard to implement it. This further demonstrates the need and appropriateness of having workarounds while things are considered.
  12. so, 2 thoughts: 1: why would they create this when other solutions are possible? Especially when there are other features that are more important to develop? This is a low priority, which leads to the next thought. 2: Even if they decided to implement this, it could be 6 months or more until it arrives. yes, this is the "feature request" subforum, but that doesn't mean a valid workaround can't be posted to help you and other users out in the meantime. If you take the time to look at a few other threads, this type of discourse is commonly found.
  13. then create a pfsense vm, route the dockers through it, and problem solved at zero additional cost.
  14. I have not done this, but I believe it would be possible this way:
      • setup the docker containers on their own IP address
      • manage the bandwidth using QoS settings on your router
      I do not believe unRaid will do this natively.
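
      A minimal sketch of the first part from the Unraid command line, assuming the usual br0 bridge and a 192.168.1.0/24 LAN (the network name, IPs, and container image are only examples; the docker template UI's "Custom: br0" option does the same thing):

      ```
      # create a macvlan network that puts containers directly on the LAN
      docker network create -d macvlan \
        --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
        -o parent=br0 lan_net

      # give the container its own LAN IP, which the router's QoS rules can then target
      docker run -d --network=lan_net --ip=192.168.1.210 --name=sab linuxserver/sabnzbd
      ```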
  15. 5xx series is not recommended. Use 7xx and higher because these issues happen due to architecture and bios.
  16. try this before booting the vm. I run two 580's in one server on separate vm's and 2 in another in the same vm and this seems to sort things out.
  17. is there a reason you want a different ip range? if there is no real reason other than "just because", I'd just leave it all in the same network. I used to get fancy, but simplicity is where I ended up these days.

      for a time I used the onboard gigabit for lan access for the rest of my network to my server and it worked well. No collisions or other issues changing from 1gbe to 10gbe. (I've since purchased another mikrotik switch for my house and now run a 10gbe trunk line from this switch to that one.)

      whether or not the new pathway from your computer - 10gbe - CRS305 - 1gbe - gigabit switch will impact you negatively depends on how much bandwidth the rest of the house is pushing to the servers over the 1gbe line connecting them. If there is no traffic then you should see no slowdown.

      I run mine in swos. there seem to be no improvements in performance vs routeros from my early playing around with it, and I'm not using vlans/layer3. it's a simpler interface that just works for me.
  18. noooo, don't format the drive. install this: then once installed (and possibly after rebooting) go to Settings > Unraid Nvidia. once there, look for the stock Unraid builds, pick one, let it install, and reboot. then you'll need to go back to the github site for the right patch that aligns with the Unraid version you picked, copy that bzimage to the flash drive, reboot, and then report back.
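
      That last copy is just swapping the kernel image on the flash drive (a sketch, assuming the flash is mounted at /boot and the patched file is named bzimage; keep a backup of the stock one):

      ```
      # back up the stock kernel so you can roll back
      cp /boot/bzimage /boot/bzimage.stock

      # drop in the patched bzimage for your exact Unraid version, then reboot
      cp /path/to/patched/bzimage /boot/bzimage
      reboot
      ```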
  19. how interesting. you typically don't need to update bios for the patch to work but in this case I would consider it. But before that, I would also consider dropping back an Unraid version or two, maybe 6.8.2 or 6.8.1 and try with those patches. I don't have a need for the patch anymore so I have no way to test if it is the patch or not. But that will eliminate a variable.
  20. your log shows: vfio-pci 0000:04:00.0: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. you need the RMRR patch for that.
  21. Left Zoho for G Suite. I work at a non-profit so it was free, with the added bonus of just being an all-around better service.