Caduceus

Members

  • Posts: 9
  • Joined
  • Last visited


Caduceus's Achievements

Noob (1/14)

Reputation: 3

  1. I tried to follow these instructions to the letter... unfortunately, every time I restart the container or just restart splunkd, all of my indexed data seems to no longer be searchable after the restart. The index data can still be seen in the persistent shares (i.e. /splunkdata and /splunkcold). If I stop Splunk, delete everything in /splunkdata and /splunkcold, and then start Splunk again, everything re-indexes. If I don't delete the data, I cannot search it whether I restart or not, so some record of the files being indexed remains persistent in the fishbucket db. I have no idea why my other data seems to disappear on restart. I was doing some Splunk Labs for a Sales Engineer 2 accreditation and I spent half my time wondering why my files weren't being indexed... it turned out they had been indexed but disappeared on restart, and because there was still data in the fishbucket they wouldn't be re-parsed and re-indexed with any changes I made.

     Here are my docker volume mappings:

        /splunkcold <> /mnt/user/splunk-warm-cold/                                            # an unRaid cached array share for cold buckets
        /opt/splunk/etc/licenses <> /mnt/user/appdata/splunkenterprise/license                # I persist a developer license here
        /test <> /mnt/user/appdata/splunkenterprise/test/                                     # ingesting test files from here
        /opt/splunk/etc/system/local <> /mnt/user/appdata/splunkenterprise/etc/system/local   # indexes.conf lives here
        /splunkdata <> /mnt/user/appdata/splunkenterprise/splunkdata                          # hot/warm data is persisted here on SSD cache
        /opt/splunk/etc/apps <> /mnt/user/appdata/splunkenterprise/etc/apps
        /opt/splunk/etc/auth <> /mnt/user/appdata/splunkenterprise/etc/auth

     Here is my indexes.conf, stored in /opt/splunk/etc/system/local, which is persisted to a share (/mnt/user/appdata/splunkenterprise/etc/system/local):

        [volume:hotwarm]
        path = /splunkdata
        maxVolumeDataSizeMB = 3072

        [volume:cold]
        path = /splunkcold
        maxVolumeDataSizeMB = 51200

        [default]
        # 90 days in seconds
        frozenTimePeriodInSecs = 7884000
        homePath = volume:hotwarm/$_index_name/db
        coldPath = volume:cold/cold/$_index_name/colddb
        thawedPath = /splunkcold/$_index_name/thaweddb
        tstatsHomePath = volume:hotwarm/$_index_name/datamodel_summary

        # Splunk Internal Indexes
        [_internal]
        frozenTimePeriodInSecs = 7884000
        homePath = volume:hotwarm/_internaldb/db
        coldPath = volume:cold/cold/_internaldb/colddb
        thawedPath = /splunkcold/_internaldb/thaweddb
        tstatsHomePath = volume:hotwarm/_internaldb/datamodel

        [_audit]
        frozenTimePeriodInSecs = 7884000
        homePath = volume:hotwarm/audit/db
        coldPath = volume:cold/cold/audit/colddb
        thawedPath = /splunkcold/audit/thaweddb
        tstatsHomePath = volume:hotwarm/audit/datamodel

        [_introspection]
        frozenTimePeriodInSecs = 7884000
        homePath = volume:hotwarm/_introspection/db
        coldPath = volume:cold/cold/_introspection/colddb
        thawedPath = /splunkcold/_introspection/thaweddb
        tstatsHomePath = volume:hotwarm/_introspection/datamodel

        [_metrics]
        frozenTimePeriodInSecs = 7884000
        homePath = volume:hotwarm/_metricsdb/db
        coldPath = volume:cold/cold/_metricsdb/colddb
        thawedPath = /splunkcold/_metricsdb/thaweddb
        tstatsHomePath = volume:hotwarm/_metricsdb/datamodel

        [_metrics_rollup]
        frozenTimePeriodInSecs = 7884000
        homePath = volume:hotwarm/_metrics_rollup/db
        coldPath = volume:cold/cold/_metrics_rollup/colddb
        thawedPath = /splunkcold/_metrics_rollup/thaweddb
        tstatsHomePath = volume:hotwarm/_metrics_rollup/datamodel

        [_telemetry]
        frozenTimePeriodInSecs = 7884000
        homePath = volume:hotwarm/_telemetry/db
        coldPath = volume:cold/cold/_telemetry/colddb
        thawedPath = /splunkcold/_telemetry/thaweddb
        tstatsHomePath = volume:hotwarm/_telemetry/datamodel

        [_thefishbucket]
        frozenTimePeriodInSecs = 7884000
        homePath = volume:hotwarm/fishbucket/db
        coldPath = volume:cold/cold/fishbucket/colddb
        thawedPath = /splunkcold/fishbucket/thaweddb
        tstatsHomePath = volume:hotwarm/fishbucket/datamodel

        [history]
        frozenTimePeriodInSecs = 7884000
        homePath = volume:hotwarm/historydb/db
        coldPath = volume:cold/cold/historydb/colddb
        thawedPath = /splunkcold/historydb/thaweddb
        tstatsHomePath = volume:hotwarm/historydb/datamodel

        [summary]
        frozenTimePeriodInSecs = 7884000
        homePath = volume:hotwarm/summarydb/db
        coldPath = volume:cold/cold/summarydb/colddb
        thawedPath = /splunkcold/summarydb/thaweddb
        tstatsHomePath = volume:hotwarm/summarydb/datamodel

        [main]
        frozenTimePeriodInSecs = 7884000
        homePath = volume:hotwarm/defaultdb/db
        coldPath = volume:cold/cold/defaultdb/colddb
        thawedPath = /splunkcold/defaultdb/thaweddb
        tstatsHomePath = volume:hotwarm/defaultdb/datamodel

        # Begin Custom Indexes
        [splunk_labs]
        homePath = volume:hotwarm/splunk_labs/db
        coldPath = volume:cold/cold/splunk_labs/colddb
        thawedPath = /splunkcold/splunk_labs/thaweddb
        tstatsHomePath = volume:hotwarm/splunk_labs/datamodel

        [win_logs]
        homePath = volume:hotwarm/win_logs/db
        coldPath = volume:cold/cold/win_logs/colddb
        thawedPath = /splunkcold/win_logs/thaweddb
        tstatsHomePath = volume:hotwarm/win_logs/datamodel

     (Screenshot attached: Monitoring Console / Data / Indexes before a restart.)

     Any ideas how I can fix this? I love the idea of having a container, but I can't live with it as is. Thanks! I also attached the container logs to see if that gives any insight.

     ****UPDATE****

     I think I discovered the answer in the logs:

        11-17-2020 09:07:31.535 +0000 INFO BucketMover - will attempt to freeze: candidate='/splunkdata/splunk_labs/db/db_1505895227_1388693545_13' because frozenTimePeriodInSecs=7884000 is exceeded by the difference between now=1605604051 and latest=1505895227
        11-17-2020 09:07:31.535 +0000 INFO IndexerService - adjusting tb licenses
        11-17-2020 09:07:31.545 +0000 INFO BucketMover - AsyncFreezer freeze succeeded for bkt='/splunkdata/splunk_labs/db/db_1505895227_1388693545_13'

     (Attachment: splunk_docker_logs.txt)

     It turns out that the Splunk Lab files they give you have event dates going back to 2014... so the event dates in my case, and in most people's cases, are far older than the retention period after which a bucket is rolled to frozen. So on restart Splunk sends my just-indexed data straight to frozen because of the latest event date in that newly indexed bucket, and that is why I can't see the data anymore. The data was persisting just fine and this is expected behavior. For the lab data I will set frozenTimePeriodInSecs to a value greater than the age of the events (the worked numbers and an example per-index override are sketched in the notes after this list). I hope this helps others who are looking to do the same thing, even if they only look at the screenshots and config files.
  2. I was mistaken. This change doesn't seem to fix it. But if we add the following to $SPLUNK_HOME/etc/apps/journald_input/local/app.conf:

        [install]
        state = disabled

     that does seem to fix it by disabling the app (a fuller sketch, with a verification step, is in the notes after this list).
  3. This message is persistent in my Messages. Is this error expected?
  4. unRaid is currently on kernel 4.19.56. I've read about the patches but haven't attempted anything; I was going to wait it out for the next AGESA. I did try both UEFI and legacy boot when I had F32 and F40, and it didn't matter. Plus I read there could be problems with Windows using Nvidia cards if you boot unRaid in UEFI mode. I have no idea. I'm really happy with the way everything is working for the moment. I picked up a G.Skill dual-channel 3200 MHz 32 GB kit for $109 and a bunch of IronWolf 4TB HDDs for $95.
  5. I was able to use the Radeon 5770 as the primary display by configuring the BIOS to use the second card as primary. I was able to pass the 1660 Ti through to a VM, but only with the F5 BIOS. I got the 2700 because I was able to get the motherboard and chip for about $220 total; I planned on upgrading to a 3700X in a year or two when the prices come down. Yes, if you get that chip, unless Gigabyte fixes their BIOS, you won't be able to pass through the 1660 Ti. Also, I was able to pass through both video cards, one to each VM, by running unRaid headless. It's a little more complicated, but you can use a vBIOS ROM with a VM to let it take control of the video card in use by the host (roughly what that looks like in the VM's libvirt XML is sketched in the notes after this list).
  6. I recently built an unRaid server with almost this exact setup: same motherboard, same CPU. I have the 1660 Ti (Gigabyte 1660 Ti mini-ITX) in the primary PCIe x16 slot, but I have an old Radeon 5770 in the second slot. The only caveat is that I need to run the F5 version of the BIOS. I had upgraded to F32 and F40; both gave me serious issues that would not allow Windows VMs to start with those BIOS revisions. I did a full post in the VM Troubleshooting section. I hope this helps. You can see my IOMMU groups here: Post with IOMMU groups
  7. Well, it looks like this was due to the BIOS on my motherboard. I had the latest revision, F40. I tried downgrading to F32 but had the same problem. I then downgraded to F5 and I am now able to pass through successfully. Ugh. I will be reluctant to update my BIOS now.
  8. Hi all, I'm running the latest version of unRaid, 6.7.2. I have two GPUs in my system, an old Radeon 5770 and a Gigabyte GTX 1660 Ti. I have a Ryzen 7 2700 and a Gigabyte B450 Aorus M motherboard. I have tried everything I can think of or have read to get the passthrough working. I watched every SpaceInvader One video on any related subject, but it didn't help. The Nvidia card has all of its devices in the same IOMMU group, and I pinned them all to vfio-pci. Whenever I try to start up a Windows 10 VM, passing the card through, I get:

        -device vfio-pci,host=06:00.0,id=hostdev0,bus=pci.3,addr=0x0,romfile=/mnt/user/system/vbios/Gigabyte.GTX1660Ti.6144.190113_1_no_header.rom \
        -device vfio-pci,host=06:00.1,id=hostdev1,bus=pci.4,addr=0x0 \
        -device vfio-pci,host=06:00.2,id=hostdev2,bus=pci.5,addr=0x0 \
        -device vfio-pci,host=06:00.3,id=hostdev3,bus=pci.6,addr=0x0 \
        -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
        -msg timestamp=on
        2019-07-06 01:33:41.796+0000: Domain id=1 is tainted: high-privileges
        2019-07-06 01:33:41.796+0000: Domain id=1 is tainted: host-cpu
        char device redirected to /dev/pts/0 (label charserial0)
        2019-07-06T01:33:44.345969Z qemu-system-x86_64: vfio: Unable to power on device, stuck in D3
        2019-07-06T01:33:44.349775Z qemu-system-x86_64: vfio: Unable to power on device, stuck in D3
        2019-07-06T01:33:44.354756Z qemu-system-x86_64: vfio: Unable to power on device, stuck in D3
        2019-07-06T01:33:44.358761Z qemu-system-x86_64: vfio: Unable to power on device, stuck in D3

     And then of course I have to kill the VM, or it sits there doing nothing and not booting. If I try to start any other VM that uses the passthrough card, I get the error "internal error: Unknown PCI header type '127'" until I reboot. I have tried running the VM with i440fx-3.1 and Q35-3.1, with SeaBIOS and OVMF, with Hyper-V on and off, and with unRaid booted both with and without UEFI. I set my BIOS to boot with the Radeon card as primary, so I know I shouldn't need the GPU vBIOS for passthrough, but I did grab the vBIOS, edit it to remove the header, and try that as well to see if it helped. I have tried every combination of this for syslinux:

        kernel /bzimage
        append amd_iommu=on pcie_acs_override=downstream vfio-pci.ids=10de:2182,10de:1aeb,10de:1aec,10de:1aed initrd=/bzroot

     PCI Devices and IOMMU Groups:

        IOMMU group 0:  [1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
        IOMMU group 1:  [1022:1453] 00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
        IOMMU group 2:  [1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
        IOMMU group 3:  [1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
        IOMMU group 4:  [1022:1453] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
        IOMMU group 5:  [1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
        IOMMU group 6:  [1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
        IOMMU group 7:  [1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
        IOMMU group 8:  [1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge
        IOMMU group 9:  [1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
        IOMMU group 10: [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)
                        [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
        IOMMU group 11: [1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
                        [1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
                        [1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
                        [1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
                        [1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
                        [1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
                        [1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6
                        [1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
        IOMMU group 12: [1022:43d5] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset USB 3.1 XHCI Controller (rev 01)
                        [1022:43c8] 01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset SATA Controller (rev 01)
                        [1022:43c6] 01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Bridge (rev 01)
                        [1022:43c7] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
                        [1022:43c7] 02:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
                        [1022:43c7] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)
                        [10ec:8168] 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 16)
                        [1002:68b8] 05:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Juniper XT [Radeon HD 5770]
                        [1002:aa58] 05:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Juniper HDMI Audio [Radeon HD 5700 Series]
        IOMMU group 13: [10de:2182] 06:00.0 VGA compatible controller: NVIDIA Corporation TU116 [GeForce GTX 1660 Ti] (rev ff)
                        [10de:1aeb] 06:00.1 Audio device: NVIDIA Corporation Device 1aeb (rev ff)
                        [10de:1aec] 06:00.2 USB controller: NVIDIA Corporation Device 1aec (rev ff)
                        [10de:1aed] 06:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device 1aed (rev ff)
        IOMMU group 14: [1022:145a] 07:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function
        IOMMU group 15: [1022:1456] 07:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
        IOMMU group 16: [1022:145f] 07:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Zeppelin USB 3.0 Host controller
        IOMMU group 17: [1022:1455] 08:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function
        IOMMU group 18: [1022:7901] 08:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
        IOMMU group 19: [1022:1457] 08:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller

     I also get these errors in the system log:

        vfio-pci 0000:06:00.0: not ready 32767ms after FLR; waiting
        vfio-pci 0000:06:00.1: Refused to change power state, currently in D3
        vfio_bar_restore: 0000:06:00.0 reset recovery - restoring bars
        vfio-pci 0000:06:00.0: timed out waiting for pending transaction; performing function level reset anyway
        vfio-pci 0000:06:00.0: not ready 32767ms after FLR; waiting
        vfio-pci 0000:06:00.0: not ready 65535ms after FLR; giving up
        vfio_bar_restore: 0000:06:00.0 reset recovery - restoring bars
        vfio_bar_restore: 0000:06:00.1 reset recovery - restoring bars
        vfio_bar_restore: 0000:06:00.2 reset recovery - restoring bars
        vfio_bar_restore: 0000:06:00.3 reset recovery - restoring bars
        Jul 5 21:33:44 Redshift kernel: vfio_bar_restore: 0000:06:00.3 reset recovery - restoring bars
        Jul 5 21:33:44 Redshift kernel: vfio_bar_restore: 0000:06:00.2 reset recovery - restoring bars
        Jul 5 21:33:44 Redshift kernel: vfio_bar_restore: 0000:06:00.1 reset recovery - restoring bars
        Jul 5 21:33:44 Redshift kernel: vfio_bar_restore: 0000:06:00.0 reset recovery - restoring bars
        Jul 5 21:33:45 Redshift kernel: vfio-pci 0000:06:00.0: timed out waiting for pending transaction; performing function level reset anyway
        Jul 5 21:33:46 Redshift kernel: vfio-pci 0000:06:00.0: not ready 1023ms after FLR; waiting
        Jul 5 21:33:47 Redshift kernel: vfio-pci 0000:06:00.0: not ready 2047ms after FLR; waiting
        Jul 5 21:33:49 Redshift kernel: vfio-pci 0000:06:00.0: not ready 4095ms after FLR; waiting
        Jul 5 21:33:53 Redshift kernel: vfio-pci 0000:06:00.0: not ready 8191ms after FLR; waiting
        Jul 5 21:34:02 Redshift kernel: vfio-pci 0000:06:00.0: not ready 16383ms after FLR; waiting
        Jul 5 21:34:19 Redshift kernel: vfio-pci 0000:06:00.0: not ready 32767ms after FLR; waiting
        Jul 5 21:34:54 Redshift kernel: vfio-pci 0000:06:00.0: not ready 65535ms after FLR; giving up
        Jul 5 21:34:54 Redshift kernel: vfio_bar_restore: 0000:06:00.0 reset recovery - restoring bars
        Jul 5 21:34:54 Redshift kernel: vfio_bar_restore: 0000:06:00.1 reset recovery - restoring bars
        Jul 5 21:34:54 Redshift kernel: vfio_bar_restore: 0000:06:00.2 reset recovery - restoring bars
        Jul 5 21:34:54 Redshift kernel: vfio_bar_restore: 0000:06:00.3 reset recovery - restoring bars
        Jul 5 21:37:19 Redshift emhttpd: req (2): csrf_token=****************&title=Log+for%3AWindows+10+-+Bare+Metal&cmd=%2FwebGui%2Fscripts%2Ftail_log&arg1=libvirt%2Fqemu%2FWindows+10+-+Bare+Metal.log
        Jul 5 21:37:19 Redshift emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log libvirt/qemu/Windows 10 - Bare Metal.log
        Jul 5 21:48:40 Redshift login[12997]: ROOT LOGIN on '/dev/pts/1'
        Jul 5 22:02:04 Redshift kernel: ata6.00: Enabling discard_zeroes_data
        Jul 5 22:02:04 Redshift kernel: sde: sde1 sde2 sde3
        Jul 5 22:02:04 Redshift avahi-daemon[5027]: Interface vnet0.IPv6 no longer relevant for mDNS.
        Jul 5 22:02:04 Redshift avahi-daemon[5027]: Leaving mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:fe88:9fb7.
        Jul 5 22:02:04 Redshift kernel: br0: port 2(vnet0) entered disabled state
        Jul 5 22:02:04 Redshift kernel: device vnet0 left promiscuous mode
        Jul 5 22:02:04 Redshift kernel: br0: port 2(vnet0) entered disabled state
        Jul 5 22:02:04 Redshift avahi-daemon[5027]: Withdrawing address record for fe80::fc54:ff:fe88:9fb7 on vnet0.
        Jul 5 22:02:05 Redshift unassigned.devices: Disk with serial 'Samsung_SSD_840_Series_S14CNSAD200506M', mountpoint 'System_Reserved' is not set to auto mount and will not be mounted...
        Jul 5 22:02:05 Redshift unassigned.devices: Disk with serial 'Samsung_SSD_840_Series_S14CNSAD200506M', mountpoint 'Samsung_SSD_840_Series_S14CNSAD200506M-part3' is not set to auto mount and will not be mounted...
        Jul 5 22:02:05 Redshift unassigned.devices: Disk with serial 'Samsung_SSD_840_Series_S14CNSAD200506M', mountpoint 'Samsung_SSD_840_Series_S14CNSAD200506M-part2' is not set to auto mount and will not be mounted...
        Jul 5 22:02:05 Redshift kernel: vfio-pci 0000:06:00.0: timed out waiting for pending transaction; performing function level reset anyway
        Jul 5 22:02:07 Redshift kernel: vfio-pci 0000:06:00.0: not ready 1023ms after FLR; waiting
        Jul 5 22:02:08 Redshift kernel: vfio-pci 0000:06:00.0: not ready 2047ms after FLR; waiting
        Jul 5 22:02:10 Redshift kernel: vfio-pci 0000:06:00.0: not ready 4095ms after FLR; waiting
        Jul 5 22:02:14 Redshift kernel: vfio-pci 0000:06:00.0: not ready 8191ms after FLR; waiting
        Jul 5 22:02:23 Redshift kernel: vfio-pci 0000:06:00.0: not ready 16383ms after FLR; waiting
        Jul 5 22:02:40 Redshift kernel: vfio-pci 0000:06:00.0: not ready 32767ms after FLR; waiting
        Jul 5 22:02:56 Redshift ool www[14840]: /usr/local/emhttp/plugins/dynamix/scripts/bootmode '1'
        Jul 5 22:03:13 Redshift kernel: vfio-pci 0000:06:00.0: not ready 65535ms after FLR; giving up
        Jul 5 22:03:13 Redshift kernel: vfio-pci 0000:06:00.1: Refused to change power state, currently in D3
        Jul 5 22:03:13 Redshift kernel: vfio-pci 0000:06:00.2: Refused to change power state, currently in D3
        Jul 5 22:03:13 Redshift kernel: vfio-pci 0000:06:00.3: Refused to change power state, currently in D3

     Please help! I don't know if it's my motherboard or video card that's the issue. I'll return either one if I can just fix this. (A few read-only vfio-pci sanity checks are sketched in the notes after this list.)
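
Note on post 1: the BucketMover log line contains everything needed to check the freeze decision by hand. The bucket's latest event time is latest=1505895227 and the restart check ran at now=1605604051, so the newest event in that bucket was 1605604051 - 1505895227 = 99,708,824 seconds (about 1,154 days) old, far beyond the 7,884,000-second (~91 day) frozenTimePeriodInSecs, so the bucket was rolled to frozen immediately. A minimal sketch of the kind of per-index override described in the update is below; the 157,680,000-second value (~5 years) is just an illustrative choice, not something from the original post, and any value larger than the age of the oldest lab events would do:

    # /opt/splunk/etc/system/local/indexes.conf (sketch only, not verbatim from the post)
    [splunk_labs]
    # ~5 years in seconds, comfortably larger than (now - oldest lab event time)
    frozenTimePeriodInSecs = 157680000
    homePath = volume:hotwarm/splunk_labs/db
    coldPath = volume:cold/cold/splunk_labs/colddb
    thawedPath = /splunkcold/splunk_labs/thaweddb
    tstatsHomePath = volume:hotwarm/splunk_labs/datamodel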
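
Note on post 2: a minimal sketch of the disable-and-verify flow, assuming a default $SPLUNK_HOME inside the container; the "splunk display app" check is how I'd confirm the state afterwards, not a step from the original post:

    # $SPLUNK_HOME/etc/apps/journald_input/local/app.conf
    [install]
    state = disabled

    # apply the change and check the app's status (assumption: default $SPLUNK_HOME)
    $SPLUNK_HOME/bin/splunk restart
    $SPLUNK_HOME/bin/splunk display app journald_input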
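
Note on post 5: for the vBIOS ROM approach, a minimal sketch of what the passthrough entry can look like in the VM's libvirt XML. The 06:00.0 address and the ROM path are taken from post 8; treat the snippet as illustrative rather than a copy of a known-working config:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <!-- PCI address of the GPU function being passed through -->
        <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
      </source>
      <!-- point the guest at the dumped, header-stripped vBIOS ROM -->
      <rom file='/mnt/user/system/vbios/Gigabyte.GTX1660Ti.6144.190113_1_no_header.rom'/>
    </hostdev>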
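
Note on post 8: for anyone chasing the same "stuck in D3" symptom, a couple of read-only checks worth running from the unRaid console before blaming hardware. These are generic Linux commands, not steps from the original post; the 06:00.x addresses are the ones from the IOMMU listing above:

    # confirm the GPU and its companion functions are actually bound to vfio-pci
    lspci -nnk -s 06:00.0
    lspci -nnk -s 06:00.1

    # confirm which IOMMU group the card landed in and what else shares it
    readlink -f /sys/bus/pci/devices/0000:06:00.0/iommu_group
    ls /sys/bus/pci/devices/0000:06:00.0/iommu_group/devices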