DarkKnight

Members
  • Posts: 92

Converted

  • Gender: Undisclosed

DarkKnight's Achievements

Apprentice (3/14)

Reputation: 6

  1. For the second week in a row, the vast majority of my containers that are set to update late Sunday night using this plugin are simply missing on Monday morning. What steps can I take to track down why this is happening? (A command sketch for gathering clues is included after this list.)
  2. There's a lot going on here. You should set the system date and time in the BIOS. It should have the current time unless you reset the CMOS. If you did not reset the CMOS, check the motherboard battery to see if it needs replacement. There are a lot of errors in your logs that you seem to have ignored, like QEMU being unable to find your vdisk images. As a matter of best practice, I would not use an external hard drive to host anything that requires realtime interaction like a VM. External drives, unless connected via eSATA or SAS, should be used for storage only. Put your VMs either on the cache drive or on an internal drive outside the array using the Unassigned Devices app.
  3. Can we please get some fixes to the VM GUI to support more of the various QEMU/KVM options that are available? I just moved some VMs over from ESXi, and it was a pain to get the XML straight. KVM can apparently handle VMDKs natively, but the GUI doesn't support it, so I had to find some examples online and sort it out (a rough sketch of the workaround is included after this list). The Docker GUI is really nice; the VM GUI feels like an afterthought in comparison. It would be nice if it could get the same kind of attention.
  4. This is a 24-port SAS/SATA PCIe 8x RAID card. Please note that it does work well with Unraid, but only if IOMMU/VT-d is turned off in your BIOS; otherwise you get a bunch of errors. It has been my daily driver for many years, powering an 18-drive 48TB array along with 4 extra drives. It's currently running the full gamut of Docker services on my array without issue: Plex, NZBGet, Sonarr, Radarr, qBittorrent, etc. Very stable, with great performance compared to cards that use port expanders to reach 24 ports. What you cannot do with this card on Unraid is also have hardware pass-through for VMs, which is something I currently need, so I will have to replace this card with 3x 8-drive HBAs from other servers I have that work with VT-d. This card also works wonderfully in Windows/Windows Server (with IOMMU/VT-d enabled), which I used it with for years before switching to Unraid. If anyone is running Unraid under Hyper-V, this is the perfect card for you. I paid ~$700 for it new; I'd like to get $400 shipped for it.
  5. Yeah, that was it. I made some changes to my firewall, and it's pretty sensitive about port 443. I swapped to another port in the ovpn conf and it worked immediately (an example of that one-line change is included after this list). Really impressed you spotted that with so little info. Thanks man.
  6. Updated the container, and when I restarted it, QB isn't running any more. Can't tell from the log what's wrong. After the initial startup, this just loops endlessly every couple minutes:

     2018-12-29 12:05:14,449 DEBG 'start-script' stdout output: Sat Dec 29 12:05:14 2018 [UNDEF] Inactivity timeout (--ping-restart), restarting
     2018-12-29 12:05:14,450 DEBG 'start-script' stdout output: Sat Dec 29 12:05:14 2018 SIGHUP[soft,ping-restart] received, process restarting
     2018-12-29 12:05:14,450 DEBG 'start-script' stdout output: Sat Dec 29 12:05:14 2018 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
     Sat Dec 29 12:05:14 2018 WARNING: --keysize is DEPRECATED and will be removed in OpenVPN 2.6
     2018-12-29 12:05:14,451 DEBG 'start-script' stdout output: Sat Dec 29 12:05:14 2018 WARNING: file 'credentials.conf' is group or others accessible
     Sat Dec 29 12:05:14 2018 OpenVPN 2.4.6 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Apr 24 2018
     Sat Dec 29 12:05:14 2018 library versions: OpenSSL 1.1.1a 20 Nov 2018, LZO 2.10
     Sat Dec 29 12:05:14 2018 Restart pause, 5 second(s)
     2018-12-29 12:05:19,451 DEBG 'start-script' stdout output: Sat Dec 29 12:05:19 2018 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
     2018-12-29 12:05:19,452 DEBG 'start-script' stdout output: Sat Dec 29 12:05:19 2018 TCP/UDP: Preserving recently used remote address: [AF_INET]xxx.xxx.xxx.xxx:443
     Sat Dec 29 12:05:19 2018 Socket Buffers: R=[212992->212992] S=[212992->212992]
     Sat Dec 29 12:05:19 2018 UDP link local: (not bound)
     Sat Dec 29 12:05:19 2018 UDP link remote: [AF_INET]xxx.xxx.xxx.xxx:443
  7. I have dual parity. My concern was the warning message that data corruption could get worse due to using -L in the repair. If that is not the case in this instance, then I have nothing to worry about. I'm running a non-corrective parity check. I also noticed that after 18 consecutive months of error-free checks, I got 394 errors on my last monthly check. No new SMART warnings, but I did have to shut Unraid off a couple of times in the past month while I was doing work on my servers, so I suppose I could have had an unclean shutdown then. In terms of backups of *really* important data like photos, I do have those on multiple machines. I don't have an off-site backup configured for older photos, but it's on the list; B2 is looking pretty cheap for that. Newer photos are covered by iCloud. I have a few TB of project files for old VHS home movies that I'd be pretty pissed to lose, but uncompressed they are around 30GB/hr and I have something like 100-200 hours of footage, though not all of it has been digitized yet. I can't imagine what my ISP would do if I tried to push 10+TB of upload in a month on top of my already high usage to back up that plus my existing irreproducible data. It would raise a red flag I don't want. Besides, I don't want the monthly sub. It would make more sense to me to get a 10-14TB drive, back everything up locally, and store that drive off-site. Just not in the budget. 😒
  8. In my case, my spare board was mATX, so I have more options. I sold off my old 3U server case, so I'll need to pick up something; until then it'll just have to sit in an old tower on the floor. Your ITX board has a single PCIe 8x slot, right? Get a dual-NIC card.
  9. It's like $25 for a CPU that fits my board and supports AES-NI. After Christmas, I'll scrape up the cash for it. If I can keep the larger server off for about 40 days next year, it'll pay for itself in energy savings (rough math after this list).
  10. md4 & md15 both had log errors. Edit: I believe it was related to an unclean shutdown caused by the default shutdown timeout being too short for the disks. I set it to 7 minutes today, per the recommendation.
  11. I was down two disks. I did not want to take the chance of a problem occurring during rebuild that would lose all of that data. I don't have 4TB of space available outside the array for backup of the emulated contents either.
  12. The server is at about 30TB used out of 50TB. There's no other backup. Unraid is capable of emulating missing disks using parity, provided enough other disks are available. If it can do that, why can't we choose to have the data corrected rather than the parity?
  13. I never considered the case where you'd want to run two instances of OVPN inside the same network. I do run pfSense on a 2nd, larger server, and I'm actually in the process of migrating to Untangle on its own box so I can shut down the larger server when it's not needed to save on power (~25W vs ~250W). The box I'm migrating to should hopefully support decent speeds. Edit: Ugh, now you've got me looking at getting a new CPU that supports AES-NI for the 'low power' box. Way to help me save money @jonathanm. 😂
  14. I turned off my Unraid server via the GUI a couple of times this week, and when restarting it yesterday it came back up with two unmountable disks showing 'Corruption warning: Metadata has LSN (1:83814) ahead of current LSN (1:80338).' I restarted the array in maintenance mode and ran xfs_repair -v for both devices, which indicated -L was needed (the general sequence is sketched after this list). I reran it with -L and the output looked good:

      Phase 1 - find and verify superblock...
              - block cache size set to 2292464 entries
      Phase 2 - using internal log
              - zero log...
      zero_log: head block 451270 tail block 451266
      ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
      Phase 5 - rebuild AG headers and trees...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      Maximum metadata LSN (1:452526) is ahead of log (1:2).
      Format log to cycle 4.

      XFS_REPAIR Summary    Sat Dec 8 08:49:37 2018

      Phase     Start           End             Duration
      Phase 1:  12/08 08:44:23  12/08 08:44:23
      Phase 2:  12/08 08:44:23  12/08 08:45:44  1 minute, 21 seconds
      Phase 3:  12/08 08:45:44  12/08 08:45:45  1 second
      Phase 4:  12/08 08:45:45  12/08 08:45:45
      Phase 5:  12/08 08:45:45  12/08 08:45:45
      Phase 6:  12/08 08:45:45  12/08 08:45:45
      Phase 7:  12/08 08:45:45  12/08 08:45:45

      Total run time: 1 minute, 22 seconds
      done

      xfs_repair -v -L /dev/md15

      Phase 1 - find and verify superblock...
              - block cache size set to 2292464 entries
      Phase 2 - using internal log
              - zero log...
      zero_log: head block 80338 tail block 80334
      ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
      Phase 5 - rebuild AG headers and trees...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      Maximum metadata LSN (1:83814) is ahead of log (1:2).
      Format log to cycle 4.

      XFS_REPAIR Summary    Sat Dec 8 08:50:28 2018

      Phase     Start           End             Duration
      Phase 1:  12/08 08:45:19  12/08 08:45:19
      Phase 2:  12/08 08:45:19  12/08 08:47:15  1 minute, 56 seconds
      Phase 3:  12/08 08:47:15  12/08 08:47:15
      Phase 4:  12/08 08:47:15  12/08 08:47:15
      Phase 5:  12/08 08:47:15  12/08 08:47:15
      Phase 6:  12/08 08:47:15  12/08 08:47:15
      Phase 7:  12/08 08:47:15  12/08 08:47:15

      Total run time: 1 minute, 56 seconds
      done

      I restarted the array and it detected the disks normally and everything 'looks' okay. Now I need to run a consistency check, but I'd like the check to consider the parity authoritative rather than the data disks in case there are differences. How can I do this? diagnostics-20181208-0926.zip
  15. A $150 UPS has saved me endless aggravation and headaches. It's easily worth putting off purchasing extra drives for one. I always highly recommend it.
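
A minimal troubleshooting sketch for item 1 (containers that vanish after the scheduled auto-update), assuming shell access to the Unraid box; the container name and the /var/log/docker.log path are assumptions and may differ between releases:

    # List every container, including stopped ones that never got recreated
    docker ps -a

    # If a container still exists but is stopped, its own log may show why it exited
    docker logs --tail 50 binhex-qbittorrentvpn    # container name is only an example

    # The Docker daemon log (path assumed) records pulls, removals, and errors
    # around the scheduled update window
    grep -iE 'pull|remov|error' /var/log/docker.log | less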
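
For item 3, a common workaround for the GUI's missing VMDK support is to convert the ESXi disk into a format the VM manager handles natively; a rough sketch, assuming qemu-img is present on the host and using illustrative paths:

    # Inspect the disk copied over from ESXi (path is illustrative)
    qemu-img info /mnt/user/domains/win10/win10.vmdk

    # Convert it to a raw image the stock VM GUI can use directly;
    # use -O qcow2 instead if a sparse, snapshot-capable format is preferred
    qemu-img convert -p -f vmdk -O raw \
        /mnt/user/domains/win10/win10.vmdk \
        /mnt/user/domains/win10/vdisk1.img

Keeping the VMDK and pointing hand-edited libvirt XML at it directly also works, which is presumably the route described in the post; converting simply avoids the custom XML.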
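
The "swapped to another port" fix in item 5 boils down to editing the remote line of the OpenVPN config; a sketch with an assumed file path, container name, and replacement port (1194):

    # Change the remote port from 443 to 1194 in the .ovpn file
    # (path, container name, and port are assumptions)
    sed -i 's/^\(remote [^ ]*\) 443/\1 1194/' /mnt/user/appdata/binhex-qbittorrentvpn/openvpn/provider.ovpn

    # Restart the container so the start script picks up the change
    docker restart binhex-qbittorrentvpn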
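
Rough math behind the roughly 40-day payback claim in item 9, using the ~250W vs ~25W figures from item 13 and an assumed electricity rate of $0.12/kWh:

    awk 'BEGIN {
      watts_saved = 250 - 25          # larger server off, low-power box running instead
      rate        = 0.12              # assumed $/kWh
      cpu_cost    = 25                # the $25 CPU from the post
      kwh_per_day = watts_saved * 24 / 1000
      printf "%.2f kWh/day saved -> $%.2f/day; payback in about %.0f days\n",
             kwh_per_day, kwh_per_day * rate, cpu_cost / (kwh_per_day * rate)
    }'

With those assumptions it works out to about 39 days, so the 40-day estimate is in the right ballpark.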
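
For item 14, the general check-then-repair sequence on an Unraid array disk in maintenance mode looks roughly like this; a sketch only, with md4 as the example device, and -L used only after a plain run insists on it (zeroing the log discards the most recent metadata updates, which is what the corruption warning mentioned in item 7 is about):

    # 1. Dry run: -n reports problems without modifying anything
    xfs_repair -n /dev/md4

    # 2. Normal repair; if the log is dirty this will stop and ask for -L
    xfs_repair -v /dev/md4

    # 3. Only if step 2 requires it: zero the log and repair
    xfs_repair -v -L /dev/md4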