cyberspectre

Everything posted by cyberspectre

  1. I have a question regarding PCIe lanes, IOMMU groups, etc. My board is a Gigabyte X370 K7. Currently, I have two Nvidia GTX cards that I pass through to two separate VMs running concurrently. I want to add a third graphics card — a very basic one — to use as a dedicated display for the UnRaid shell. VGA output would be sufficient. Is this possible on X370?
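Whether that third card can stay with the host or has to follow a VM depends on how the X370 chipset splits its IOMMU groups, which you can inspect from the UnRaid shell. A minimal sketch that walks the standard Linux sysfs layout (the function name and output format are my own):

```python
from pathlib import Path

def list_iommu_groups(root="/sys/kernel/iommu_groups"):
    """Map each IOMMU group number to the PCI addresses inside it."""
    groups = {}
    rootp = Path(root)
    if not rootp.is_dir():  # IOMMU disabled, or not a Linux host
        return groups
    for grp in sorted(rootp.iterdir(), key=lambda p: int(p.name)):
        groups[grp.name] = sorted(d.name for d in (grp / "devices").iterdir())
    return groups

# Example: print(list_iommu_groups()) shows which devices share a group.
```

Devices that land in the same group generally have to be passed through together, so which physical slot the third card occupies matters as much as the lane count.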
  2. I was doing exactly that until a few days ago. Now, it only works if I have the gaming graphics card (in my case a 970) set as the secondary VGA in the BIOS. When it's set to the default VGA, it doesn't work. If you experience the same problem, roll back to UnRaid 6.6.6 and that will likely fix it.
  3. Anybody else unable to install any applications in Catalina? Even after disabling "Gatekeeper," I still get nothing but errors when trying to install Vivaldi from a DMG. And since attempting it the first time, macOS now only boots successfully every other time, and the desktop environment loads in bits and pieces. Is it just a steaming pile of an OS?
  4. Since I upgraded from 6.6.6 to 6.7.2, passthrough of my GTX-760 no longer works right. However, I can still pass through my GTX-970 with no issues. The 760 is the default VGA selected in the BIOS, the one that UnRaid displays on while booting. In 6.6.6, when starting a VM that used the 760, the display would simply switch from the UnRaid shell to the VM. But in 6.7.2, it switches from UnRaid to a black screen, and stays black. Is this a known issue? Should I roll back?
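Before rolling back, it can help to confirm which card the firmware actually handed off as the primary VGA device; Linux exposes this through the `boot_vga` sysfs attribute on PCI devices. A small sketch (the attribute is standard; the helper name is mine):

```python
from pathlib import Path

def find_boot_vga(root="/sys/bus/pci/devices"):
    """Return the PCI address of the device flagged as boot VGA, or None."""
    rootp = Path(root)
    if not rootp.is_dir():
        return None
    for dev in sorted(rootp.iterdir()):
        flag = dev / "boot_vga"
        if flag.is_file() and flag.read_text().strip() == "1":
            return dev.name
    return None
```

If the address returned here is the 760, that matches the symptom: the card UnRaid itself is displaying on is the one that goes black when handed to a VM.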
  5. It wasn't this. Actually, I found that it happened whenever another VM was running. After disabling the other VMs, I got it to work.
  6. How did you get this fixed? I'm experiencing the same issue. The VM goes into a boot loop after installing high sierra. CPU is a Ryzen 1600x.
  7. What a shame. Really wanted to use this. InvoiceNinja billed me the other day and I thought, why am I paying them when I can host this on my own server? Eagerly awaiting a new implementation for UnRaid. But I don't have the chops to do it myself.
  8. Could you elaborate? Is that some setting in UnRaid? My memory is rated at 3000 but the default profile in the BIOS is 2133, so I leave it at that. By the way, if I set "use cache disk" on a share to "yes," why does the mover move everything OFF the cache disk?
  9. This is the first time it's happened to me, but I'll run memtest86 tonight just to make sure nothing's wrong. Regardless, perhaps XFS is a better option for me, since it's more resilient. The only BTRFS advantage I'd actually use is TRIM, and if I'm not mistaken, my SSD does garbage collection in its firmware.
  10. Thank you. The disk was mountable, and I was able to recover some of the data stored on it. Luckily, my daily use VM image was intact (though, it's the backup image, which is a few days out of date -- this means I have to start over in The Witcher 2 😩). I was also able to recover the appdata for syncthing. Sadly, the rest is cooked. I'm wondering if it's an issue with the disk at all, or if it's an(other) issue with BTRFS. Based on some threads I'm reading, people tend to choose XFS for SSDs after suffering repeated failures similar to this one. Once I was finished with all the recovery options that worked, I was able to reformat the disk as XFS with no errors. Not going to put anything on it without testing it extensively, though.
  11. Suddenly I'm having this error repeated in the log, causing my main VM to stutter and behave badly:

      ANDRAS4 kernel: BTRFS critical (device nvme0n1p1): corrupt leaf: root=5 block=113983488 slot=109 ino=1122227 file_offset=413696, invalid type for file extent, have 129 expect range [0, 2]

      Log is attached. The vdisks (domains) are on the cache drive, as is the appdata folder for docker. It's an NVMe SSD that I bought new about 2 months ago. Is it failing? If so, is there a way to safely move domains + appdata to the array? andras4-syslog-20191014-1851.zip
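On the second question: moving domains and appdata off a suspect cache drive is mostly a matter of stopping the VMs and the Docker service first, then copying with verification before deleting the originals. A hedged sketch of the copy-and-verify step (the function name is mine, and this is generic, not UnRaid-specific):

```python
import filecmp
import shutil
from pathlib import Path

def copy_tree_verified(src, dst):
    """Copy src to dst, then byte-compare every file before trusting it."""
    shutil.copytree(src, dst)  # dst must not already exist
    src, dst = Path(src), Path(dst)
    mismatches = []
    for f in src.rglob("*"):
        if f.is_file():
            twin = dst / f.relative_to(src)
            if not filecmp.cmp(f, twin, shallow=False):
                mismatches.append(str(f.relative_to(src)))
    return mismatches  # an empty list means the copy verified clean
```

Only remove the source copies once this returns an empty list; on a drive throwing corrupt-leaf errors, a read can succeed one minute and fail the next.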
  12. Thanks! You know, I just now discovered Plex, which can stream music directly from your server to any device. So now, I'm not sure I need Google Music anymore. I might still use it because I have some concerns about opening a port to the public, though. Either way, I'll set it up. Good to have.
  13. I have a website with a paid hosting plan and a server instance managed with cpanel. I want to set up 2-way syncing between its public_html directory and a share on my UnRaid server, simply named "web." Using cpanel, I can configure access via WebDAV or SSH. I don't want to write a script for scheduled scp push / pull, I want syncing in semi-real-time. What's the best way to do this?
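For semi-real-time syncing, purpose-built tools like Syncthing or lsyncd over SSH are the usual answers, but under the hood they all reduce to detecting changes and shipping them. A toy sketch of the change-detection half (names are mine; this is not a full two-way sync):

```python
from pathlib import Path

def snapshot(root):
    """Map each file under root to its (size, mtime) pair."""
    return {
        str(p.relative_to(root)): (p.stat().st_size, p.stat().st_mtime)
        for p in Path(root).rglob("*") if p.is_file()
    }

def diff(old, new):
    """Return (added_or_changed, deleted) between two snapshots."""
    changed = [p for p, meta in new.items() if old.get(p) != meta]
    deleted = [p for p in old if p not in new]
    return changed, deleted
```

Polling a snapshot like this every few seconds and pushing the diff gets you "semi-real-time"; true two-way sync also needs conflict handling, which is exactly why a dedicated tool is worth it here.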
  14. Could someone provide a brief tutorial on how to use this?
  15. Anyone using the InvoiceNinja docker successfully? After installing it, I get an error when trying to access the web UI. Thanks!
  16. It isn't the cheapest service, but SpiderOak is once again working perfectly for me, and I can safely suggest it. The docker container runs a native Linux client that watches for changes. Any time I modify or add a new file to my array, it's backed up instantly.
  17. Maybe so, but it's the principle of the thing, if you ask me. Regardless, you might be able to use OpenWRT to accomplish something like that if you're determined. I figured out that the issue I was having with SpiderOak was due to one specific file. Deleted the file and now it's all good again. Easy, clean, headless backup.
  18. Specs in signature. It does everything. Daily general stuff, work, gaming, etc. It's also my HTPC. Next one I build is going to be a "whole-house PC."
  19. 😂 "Hey best buddy, I don't want to pay a company to do my backup. Instead, I'd like you to pay for the electricity and bandwidth needed to do it."
  20. There have been a number of threads about this, but the latest ones date back a year or more, so I'm wondering what solutions people are employing at this moment in 2019. Some background: my UnRaid machine is not a 16-disk, data-hoarding closet server. It's a multi-role computer that's used for work, daily browsing, gaming, and everything in between, in addition to being a file server. The array has 2 HDDs, so there is some redundancy. But as there are only 2 HDD slots in the tower, I also have paid offsite backup for peace of mind. Until recently, I've used SpiderOak ONE in a docker container. But lately, I've been experiencing issues with the command-line client that make the service no longer usable. The docker container also frequently segfaults for some reason. Suffice it to say, it's time to look at other options. I experimented with rclone to very little avail, and I don't enjoy the idea of sinking more time into that. This method looks promising, and may be my next route. But I'd like to hear what others are doing first. A simple docker container running a first-party, native Linux client from a reputable service would be ideal. Anybody have something like that?
  21. Good to know they're not completely stupid. Does this mean the solution above won't be needed soon?
  22. Thanks trurl. I've got it figured out now. Larger disk set as parity, which, in a 2-disk array, is actually a mirrored setup. Exactly what I needed.
  23. Oh, I see. I think I was looking at it the wrong way around. I was under the impression the parity was the "master" disk and all data disks mirror what's on that.
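The mirror behaviour falls straight out of how single parity is computed: parity is the XOR of all data disks, and the XOR of one disk alone is just that disk. A quick demonstration:

```python
def compute_parity(data_disks):
    """XOR corresponding bytes across all data disks (single parity)."""
    parity = bytearray(len(data_disks[0]))
    for disk in data_disks:
        for i, b in enumerate(disk):
            parity[i] ^= b
    return bytes(parity)

one_disk = b"\x0f\xaa\x55"
# With a single data disk, the parity disk holds an exact copy: a mirror.
assert compute_parity([one_disk]) == one_disk

# With two or more data disks, parity lets you rebuild any one of them:
d1, d2 = b"\x01\x02", b"\xf0\x0f"
p = compute_parity([d1, d2])
assert compute_parity([d2, p]) == d1
```

So the parity disk isn't a "master" copy of anything; it's derived from the data disks, and the 2-disk case just happens to make the derivation an identity.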
  24. Should I copy everything over from the other drive first? If I leave the new disk empty and set it as parity, won't it erase the data on the current disk?