About DarkKnight

  1. For the second week in a row, the vast majority of my containers that are set to update late Sunday night using this plugin are just missing entirely on Monday morning. What steps can I take to track down why this is happening?
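Until someone spots the cause in the diagnostics, a couple of quick checks can narrow down whether the containers failed to restart or were removed outright. This is a minimal sketch using only the stock docker CLI (no plugin-specific tooling is assumed), guarded so it is a no-op on a machine without docker:

```shell
# Minimal sketch using only the standard docker CLI; guarded so it does nothing
# on a host where docker is not installed.
if command -v docker >/dev/null 2>&1; then
  # Containers that failed to restart after the update show up here as "Exited";
  # containers the updater removed won't appear in the list at all.
  containers=$(docker ps -a --format '{{.Names}}\t{{.Status}}')
  # If the containers are gone, check whether their images survived: an update
  # interrupted mid-pull can leave neither the old image nor the new one.
  images=$(docker images --format '{{.Repository}}:{{.Tag}}')
else
  containers="docker not available on this host"
  images="$containers"
fi
printf '%s\n' "$containers" "$images"
```

If the containers are gone but the images remain, the update likely failed at the re-create step; if the images are gone too, the pull itself was interrupted.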
  2. There's a lot going on here. You should set the system date and time in the BIOS. It should have a current time unless you reset the CMOS. If you did not reset the CMOS, check the motherboard battery to see if it needs replacement. There are a lot of errors in your logs that you seem to have ignored, like QEMU being unable to find your vdisk images. As a matter of best practice, I would not use an external hard drive to host anything that requires realtime interaction, like a VM. External drives, unless connected via eSATA or SAS, should be used for storage only. Pu…
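A quick way to test the battery theory is to compare the hardware (CMOS) clock against the system clock. A sketch assuming a Linux console with util-linux's hwclock available (reading the RTC generally needs root, so it falls back gracefully):

```shell
# Compare the hardware (CMOS) clock to the system clock. hwclock needs root,
# so fall back to a note when the RTC can't be read.
if command -v hwclock >/dev/null 2>&1; then
  rtc=$(hwclock --show 2>/dev/null) || rtc="hwclock could not read the RTC (needs root?)"
else
  rtc="hwclock not installed"
fi
sys=$(date)
echo "RTC:    $rtc"
echo "System: $sys"
```

A CMOS clock that resets to the BIOS default date on every cold boot, while the system clock looks correct (NTP fixes it once the network is up), is the classic dead-battery symptom.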
  3. Can we please get some fixes to the VM GUI to support more of the various QEMU/KVM options that are available? I just moved some VMs over from ESXi, and it was a pain to get the XML straight. KVM can apparently handle VMDKs natively, but the GUI doesn't support it, so I had to find some examples online and get it sorted out. The Docker GUI is really nice; the VM GUI feels like an afterthought in comparison. It would be nice if it could get the same kind of attention.
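For anyone else moving VMDKs over, the XML edit itself is small. A sketch of the disk stanza (the source path and target device here are hypothetical examples; edit via the VM's XML view or virsh edit):

```xml
<disk type='file' device='disk'>
  <!-- type='vmdk' tells QEMU to read the VMware image format directly -->
  <driver name='qemu' type='vmdk'/>
  <source file='/mnt/user/domains/myvm/disk1.vmdk'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Alternatively, `qemu-img convert -f vmdk -O qcow2 in.vmdk out.qcow2` converts the image to a native format the GUI already understands.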
  4. This is a 24-port SAS/SATA PCIe 8x RAID card. Please note that it does work well with Unraid, but only if IOMMU/VT-d is turned off in your BIOS; otherwise you get a bunch of errors. It has been my daily driver for many years, powering an 18-drive 48TB array, along with 4 extra drives. It's currently running the full gamut of Docker services fine: full Plex, NZBGet, Sonarr, Radarr, qBittorrent, etc. on my array. Very stable, with great performance versus cards that use port expanders to reach 24 ports. What you cannot do with this card on Unraid is also have hardware pass-through for VMs, which is som…
  5. Yeah, that was it. I made some changes to my firewall, and it's pretty sensitive about port 443. I swapped to another port in the ovpn conf and it worked immediately. Really impressed you spotted that with so little info. Thanks, man.
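For reference, the change is a one-liner in the client config. A sketch with a hypothetical hostname and port (your provider's alternate ports will differ — check their docs):

```
# client.ovpn excerpt — 'remote' is a standard OpenVPN directive
# remote vpn.example.com 443     # original: blocked by the new firewall rules
remote vpn.example.com 1194      # alternate port the firewall permits
```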
  6. Updated the container, and when I restarted it, QB isn't running anymore. I can't tell from the log what's wrong. After the initial startup, this just loops endlessly every couple of minutes:

     2018-12-29 12:05:14,449 DEBG 'start-script' stdout output: Sat Dec 29 12:05:14 2018 [UNDEF] Inactivity timeout (--ping-restart), restarting
     2018-12-29 12:05:14,450 DEBG 'start-script' stdout output: Sat Dec 29 12:05:14 2018 SIGHUP[soft,ping-restart] received, process restarting
     2018-12-29 12:05:14,450 DEBG 'start-script' stdout output: Sat Dec 29 12:05:14 2018 WARNING: --keysize is DEPRECATED and…
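For context, the `[UNDEF] Inactivity timeout (--ping-restart)` loop is OpenVPN's standard keepalive behavior: the client restarts the tunnel whenever it stops hearing from the server, which is usually a symptom of the connection never fully establishing rather than the root cause. The directives involved look like this (the values here are illustrative, not taken from this container's config):

```
# Standard OpenVPN directives behind the restart loop in the log:
ping 15            # send a keepalive ping every 15 seconds
ping-restart 60    # restart the tunnel if nothing is heard for 60 seconds
# 'keepalive 15 60' is client-side shorthand for both
```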
  7. I have dual parity. My concern was the warning message that data corruption could get worse from using -L in the repair. If that is not the case in this instance, then I have nothing to worry about. I'm running a non-corrective parity check. I also noticed that after 18 consecutive months of error-free checks, I got 394 errors on my last monthly check. No new SMART warnings, but I did have to shut Unraid off a couple of times in the past month while I was doing work on my servers. I suppose I could have had an unclean shutdown then. In terms of backups of *really* important data lik…
  8. In my case, my spare board was mATX, so I have more options. I sold off my old 3u server case, so I'll need to pick up something. Until then it'll just have to sit in an old tower on the floor. Your ITX board has a single PCIe 8x slot, right? Get a dual nic card.
  9. It's like $25 for a CPU that fits my board and supports AES-NI. After Christmas, I'll scrape up the cash for it. If I can keep the larger server off for about 40 days next year, it'll pay for itself in energy savings.
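The 40-day payback estimate checks out against the ~225 W difference mentioned in the pfSense post (~250 W vs ~25 W); a back-of-the-envelope sketch assuming a hypothetical $0.12/kWh electricity rate:

```shell
# Back-of-the-envelope payback check. The 225 W saving comes from the ~250 W vs
# ~25 W figures in the earlier post; the electricity rate is an assumption.
watts_saved=225
days=40
rate_cents_per_kwh=12                          # assumed utility rate
kwh=$(( watts_saved * 24 * days / 1000 ))      # energy not burned over 40 days
dollars=$(( kwh * rate_cents_per_kwh / 100 ))
echo "${kwh} kWh saved, about \$${dollars}"    # roughly the cost of the $25 CPU
```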
  10. md4 & md15 both had log errors. Edit: I believe it was related to an unclean shutdown caused by too short a default shutdown timer for the disks. I set it to 7 min per the recommendation today.
  11. I was down two disks. I did not want to take the chance of a problem occurring during rebuild that would lose all of that data. I don't have 4TB of space available outside the array for backup of the emulated contents either.
  12. The server is at about 30 of 50 TB used. There's no other backup. Unraid is capable of emulating missing disks from parity, provided enough other disks are available. If it can do that, why can't we choose to have the data corrected rather than the parity?
  13. I never considered the case where you'd want to run two instances of OVPN inside the same network. I do run pfSense on a 2nd, larger server, and I'm actually in the process of migrating to Untangle on its own box so I can shut down the larger server when it's not needed, to save on power (~25W vs ~250W). The box I'm migrating to should hopefully support decent speeds. Edit: Ugh, now you've got me looking at getting a new CPU that supports AES-NI for the 'low power' box. Way to help me save money @jonathanm. 😂
  14. I turned off my Unraid server via the GUI a couple of times this week, and when restarting it yesterday it came back up with two unmountable disks showing 'Corruption warning: Metadata has LSN (1:83814) ahead of current LSN (1:80338).' I restarted the array in maintenance mode and ran xfs_repair -v for both devices, which indicated -L was needed. I reran it with -L and the output looked good:

      Phase 1 - find and verify superblock...
              - block cache size set to 2292464 entries
      Phase 2 - using internal log
              - zero log...
      zero_log: head block 451270 tail block 451266
      A…
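For anyone hitting the same LSN warning, the sequence described above looks roughly like this. It's a sketch only: the device name is a hypothetical example, the array must be started in maintenance mode, and -L should be the last resort, since zeroing the log discards the last few in-flight metadata transactions (which is exactly the corruption risk the warning is about):

```shell
# DEV is a hypothetical example device; substitute your own md device.
# Deliberately gated behind CONFIRM=yes because the write steps are destructive.
DEV=${DEV:-/dev/md4}

if [ -b "$DEV" ] && [ "${CONFIRM:-no}" = "yes" ]; then
  xfs_repair -n "$DEV"       # 1. dry run: report problems, write nothing
  xfs_repair -v "$DEV"       # 2. normal repair; aborts with the -L suggestion on a dirty log
  # xfs_repair -L -v "$DEV"  # 3. last resort: zero the log, accepting loss of in-flight metadata
else
  echo "refusing to touch $DEV (set CONFIRM=yes with the array in maintenance mode)"
fi
```

On Unraid, repairing via the md device (rather than the raw sdX partition) keeps parity in sync, and a non-corrective parity check afterwards is a sensible follow-up.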
  15. A $150 UPS has saved me endless aggravation and headaches. It's easily worth putting off an extra drive purchase for. I always highly recommend one.