Caduceus

Posts posted by Caduceus

  1. On 7/11/2020 at 12:08 AM, andrew207 said:

    Hey @4554551n thanks for your interest, here are some answers to your questions.

    > Resetting trial license

    Yeah sure, you can set it to the free license if you want. Whenever you upgrade the container you'll just need to set it to the free license again.

     

    > Splunk index data location / splitting hotwarm and cold

    You can't split hot and warm, but you can split hot/warm and cold. With Splunk there are a lot of ways to split cold data off into its own location; I'd use "volumes". Here's a link to the spec for the config file we'll be editing: https://docs.splunk.com/Documentation/Splunk/8.0.4/Admin/Indexesconf

     

    In my docker startup script I run code to change the default SPLUNK_DB location to the /splunkdata mount the container uses. SPLUNK_DB contains both hot/warm and cold. OPTIMISTIC_ABOUT_FILE_LOCKING fixes an unrelated bug. We set this in splunk-launch.conf, meaning the SPLUNK_DB variable is set at startup and is persistent through the whole Splunk ecosystem. As you correctly identified from the Splunk docs, SPLUNK_DB is used as the storage location for all index data and all buckets by default; this config was made to split it off into a volume.

    
    printf "\nOPTIMISTIC_ABOUT_FILE_LOCKING = 1\nSPLUNK_DB=/splunkdata" >> $SPLUNK_HOME/etc/splunk-launch.conf

    1. Create a new volume in your docker config to store your cold data (e.g. /mynewcoldlocation)
    2. Create an indexes.conf file, preferably in a persistent location such as $SPLUNK_HOME/etc/apps/<app>/default/indexes.conf
    3. Define hotwarm / cold volumes in your new indexes.conf; here's an example:

    
    [volume:hotwarm]
    path = /splunkdata
    # Roughly 3GB in MB
    maxVolumeDataSizeMB = 3072
    
    [volume:cold]
    path = /mynewcoldlocation
    # Roughly 50GB in MB
    maxVolumeDataSizeMB = 51200

    It would be up to you to ensure /splunkdata is stored on your cache disk and /mynewcoldlocation is in your array as defined in your docker config for this container.
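
    If you want to sanity-check that, something like this from the host should show the two paths as separate mounts inside the container (swap in your own container name):

    docker exec -it <your_splunk_container> df -h /splunkdata /mynewcoldlocation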

    4. Configure your indexes to utilise those volumes by default by updating the same indexes.conf file:

    
    [default]
    # 365 days in seconds
    frozenTimePeriodInSecs = 31536000
    homePath = volume:hotwarm/$_index_name/db
    coldPath = volume:cold/$_index_name/colddb
    # Unfortunately we can't use volumes for thawed path, so we need to hardcode the directory.
    # Chances are you won't need this anyway unless you "freeze" data to an offline disk.
    thawedPath = /mynewFROZENlocation/$_index_name/thaweddb
    # Tstats should reside on fastest disk for maximum performance
    tstatsHomePath = volume:hotwarm/$_index_name/datamodel_summary

    5. Remember that Splunk's internal indexes won't follow config in [default], so if we want Splunk's own self-logging to follow these rules we need to hard-code it:

    
    [_internal]
    # 90 days in seconds
    frozenTimePeriodInSecs = 7776000
    # Override defaults set in $SPLUNK_HOME/etc/system/default/indexes.conf
    homePath = volume:hotwarm/_internaldb/db
    coldPath = volume:cold/_internaldb/colddb

    [_audit]
    # 90 days in seconds
    frozenTimePeriodInSecs = 7776000
    # Override defaults set in $SPLUNK_HOME/etc/system/default/indexes.conf
    homePath = volume:hotwarm/audit/db
    coldPath = volume:cold/audit/colddb

    # ... etc etc for any other indexes you want to honour these rules.
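
    If you want to confirm what Splunk will actually use once these overrides are merged with the defaults, btool should show it (the --debug flag prints which .conf file each value came from):

    $SPLUNK_HOME/bin/splunk btool indexes list _internal --debug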

    > Freezing buckets

    When data freezes it is deleted unless you tell it where to go.

    You'll see in my config above I set the config items "maxVolumeDataSizeMB" and "frozenTimePeriodInSecs". For our volumes, once the entire volume hits that size it'll start moving buckets to the next tier (hotwarm --> cold --> frozen). Additionally, each of our individual indexes also has a similar max-size config that controls how quickly they freeze off; iirc the default is 500GB.

     

    Each individual index can also have a "frozenTimePeriodInSecs", which will freeze data once it hits a certain age. If you have this set, data will freeze either when it is the oldest bucket and you've hit your maxVolumeDataSizeMB, or when it's older than frozenTimePeriodInSecs.

     

    As mentioned, when data freezes it is deleted unless you tell it where to go. The easiest way to do that is by setting a coldToFrozenDir in your indexes.conf for every index. For example, if our same indexes.conf has an index called "web", it might look like the below (here's some doco to explain further: https://docs.splunk.com/Documentation/Splunk/8.0.4/Indexer/Automatearchiving):

    
    [web]
    # 90 days in seconds
    frozenTimePeriodInSecs = 7776000
    coldToFrozenDir = /myfrozenlocation
    # alternatively,
    coldToFrozenScript = /mover.sh
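
    If you go the coldToFrozenScript route instead, here's a rough sketch of what something like /mover.sh could look like. Splunk calls the script with the path of the bucket being frozen as its only argument and deletes the bucket once the script exits 0, so the script has to copy it somewhere safe first (the archive destination below is just an example):

    #!/bin/bash
    set -euo pipefail
    bucket="$1"                                            # e.g. /splunkcold/web/colddb/db_...
    index=$(basename "$(dirname "$(dirname "$bucket")")")  # path is .../<index>/<db|colddb>/<bucket>
    mkdir -p "/myfrozenlocation/$index"
    # copy the raw bucket into a per-index archive folder before Splunk deletes it
    cp -a "$bucket" "/myfrozenlocation/$index/"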

    Hope this helps.

     

    I tried to follow these instructions to the letter... unfortunately, every time I restart the container (or just restart splunkd), none of my indexed data is searchable any more. The index data can still be seen in the persistent shares (i.e. /splunkdata and /splunkcold). If I stop Splunk, delete everything in /splunkdata and /splunkcold, then start Splunk again, everything re-indexes. If I don't delete the data, I cannot search it whether I restart or not, so some record of the files being indexed remains persistent in the fishbucket db.

    I have no idea why my other data seems to disappear on restart. I was doing some Splunk Labs for a Sales Engineer 2 accreditation and I spent half my time wondering why my files weren't being indexed... it turned out they had been indexed, but they disappeared on restart, and because there was still data in the fishbucket they wouldn't be re-parsed and re-indexed with any changes I made.
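
    For what it's worth, one workaround I've seen for the fishbucket problem is to salt the CRC so already-seen files look new and get indexed again (the monitor path and index name below are from my own setup, so adjust to suit):

    printf "\n[monitor:///test]\nindex = splunk_labs\ncrcSalt = <SOURCE>\n" >> $SPLUNK_HOME/etc/system/local/inputs.conf
    # note: this re-indexes every file the stanza matches, so expect duplicate events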

     

    Here are my docker volume mappings:

    /splunkcold <> /mnt/user/splunk-warm-cold/  #this is an unRAID cached array share for cold buckets
    /opt/splunk/etc/licenses <> /mnt/user/appdata/splunkenterprise/license #I persist a developer license here
    /test <> /mnt/user/appdata/splunkenterprise/test/  #ingesting test files from here
    /opt/splunk/etc/system/local <> /mnt/user/appdata/splunkenterprise/etc/system/local   #indexes.conf lives here
    /splunkdata <> /mnt/user/appdata/splunkenterprise/splunkdata #hot/warm data is persisted here on SSD cache
    /opt/splunk/etc/apps <> /mnt/user/appdata/splunkenterprise/etc/apps
    /opt/splunk/etc/auth <> /mnt/user/appdata/splunkenterprise/etc/auth
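
    For reference, the same mappings expressed as docker run flags (the container and image names here are just placeholders):

    docker run -d --name splunkenterprise \
      -v /mnt/user/splunk-warm-cold:/splunkcold \
      -v /mnt/user/appdata/splunkenterprise/license:/opt/splunk/etc/licenses \
      -v /mnt/user/appdata/splunkenterprise/test:/test \
      -v /mnt/user/appdata/splunkenterprise/etc/system/local:/opt/splunk/etc/system/local \
      -v /mnt/user/appdata/splunkenterprise/splunkdata:/splunkdata \
      -v /mnt/user/appdata/splunkenterprise/etc/apps:/opt/splunk/etc/apps \
      -v /mnt/user/appdata/splunkenterprise/etc/auth:/opt/splunk/etc/auth \
      <splunk-image>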

     

    Here is my indexes.conf stored in /opt/splunk/etc/system/local which is persisted to a share (/mnt/user/appdata/splunkenterprise/etc/system/local):

    [volume:hotwarm]
    path = /splunkdata
    maxVolumeDataSizeMB = 3072
    
    [volume:cold]
    path = /splunkcold
    maxVolumeDataSizeMB = 51200
    
    [default]
    # ~91 days (3 months) in seconds
    frozenTimePeriodInSecs = 7884000
    homePath = volume:hotwarm/$_index_name/db
    coldPath = volume:cold/cold/$_index_name/colddb
    thawedPath = /splunkcold/$_index_name/thaweddb
    tstatsHomePath = volume:hotwarm/$_index_name/datamodel_summary
    
    
    # Splunk Internal Indexes
    [_internal]
    frozenTimePeriodInSecs = 7884000
    homePath = volume:hotwarm/_internaldb/db
    coldPath = volume:cold/cold/_internaldb/colddb
    thawedPath = /splunkcold/_internaldb/thaweddb
    tstatsHomePath = volume:hotwarm/_internaldb/datamodel
    
    
    [_audit]
    frozenTimePeriodInSecs = 7884000
    homePath = volume:hotwarm/audit/db
    coldPath = volume:cold/cold/audit/colddb
    thawedPath = /splunkcold/audit/thaweddb
    tstatsHomePath = volume:hotwarm/audit/datamodel
    
    [_introspection]
    frozenTimePeriodInSecs = 7884000
    homePath = volume:hotwarm/_introspection/db
    coldPath = volume:cold/cold/_introspection/colddb
    thawedPath = /splunkcold/_introspection/thaweddb
    tstatsHomePath = volume:hotwarm/_introspection/datamodel
    
    [_metrics]
    frozenTimePeriodInSecs = 7884000
    homePath = volume:hotwarm/_metricsdb/db
    coldPath = volume:cold/cold/_metricsdb/colddb
    thawedPath = /splunkcold/_metricsdb/thaweddb
    tstatsHomePath = volume:hotwarm/_metricsdb/datamodel
    
    [_metrics_rollup]
    frozenTimePeriodInSecs = 7884000
    homePath = volume:hotwarm/_metrics_rollup/db
    coldPath = volume:cold/cold/_metrics_rollup/colddb
    thawedPath = /splunkcold/_metrics_rollup/thaweddb
    tstatsHomePath = volume:hotwarm/_metrics_rollup/datamodel
    
    [_telemetry]
    frozenTimePeriodInSecs = 7884000
    homePath = volume:hotwarm/_telemetry/db
    coldPath = volume:cold/cold/_telemetry/colddb
    thawedPath = /splunkcold/_telemetry/thaweddb
    tstatsHomePath = volume:hotwarm/_telemetry/datamodel
    
    [_thefishbucket]
    frozenTimePeriodInSecs = 7884000
    homePath = volume:hotwarm/fishbucket/db
    coldPath = volume:cold/cold/fishbucket/colddb
    thawedPath = /splunkcold/fishbucket/thaweddb
    tstatsHomePath = volume:hotwarm/fishbucket/datamodel
    
    [history]
    frozenTimePeriodInSecs = 7884000
    homePath = volume:hotwarm/historydb/db
    coldPath = volume:cold/cold/historydb/colddb
    thawedPath = /splunkcold/historydb/thaweddb
    tstatsHomePath = volume:hotwarm/historydb/datamodel
    
    [summary]
    frozenTimePeriodInSecs = 7884000
    homePath = volume:hotwarm/summarydb/db
    coldPath = volume:cold/cold/summarydb/colddb
    thawedPath = /splunkcold/summarydb/thaweddb
    tstatsHomePath = volume:hotwarm/summarydb/datamodel
    
    [main]
    frozenTimePeriodInSecs = 7884000
    homePath = volume:hotwarm/defaultdb/db
    coldPath = volume:cold/cold/defaultdb/colddb
    thawedPath = /splunkcold/defaultdb/thaweddb
    tstatsHomePath = volume:hotwarm/defaultdb/datamodel
    
    # Begin Custom Indexes
    [splunk_labs]
    homePath = volume:hotwarm/splunk_labs/db
    coldPath = volume:cold/cold/splunk_labs/colddb
    thawedPath = /splunkcold/splunk_labs/thaweddb
    tstatsHomePath = volume:hotwarm/splunk_labs/datamodel
    
    [win_logs]
    homePath = volume:hotwarm/win_logs/db
    coldPath = volume:cold/cold/win_logs/colddb
    thawedPath = /splunkcold/win_logs/thaweddb
    tstatsHomePath = volume:hotwarm/win_logs/datamodel

    Monitoring Console / Data / Indexes Before a Restart

    (screenshot attached)

     

    Any ideas how I can fix this? I love the idea of having a container but I can't live with it as is :P  Thanks!  I also attached the container logs to see if that gives any insight.

     

     

    ****UPDATE****

     

    I think I discovered the answer in the logs:

    11-17-2020 09:07:31.535 +0000 INFO BucketMover - will attempt to freeze: candidate='/splunkdata/splunk_labs/db/db_1505895227_1388693545_13' because frozenTimePeriodInSecs=7884000 is exceeded by the difference between now=1605604051 and latest=1505895227
    11-17-2020 09:07:31.535 +0000 INFO IndexerService - adjusting tb licenses
    11-17-2020 09:07:31.545 +0000 INFO BucketMover - AsyncFreezer freeze succeeded for bkt='/splunkdata/splunk_labs/db/db_1505895227_1388693545_13'

    (attached: splunk_docker_logs.txt)

    It turns out that the Splunk Lab files they give you have event dates going back to 2014... so the event dates in my case (and probably most people's) far exceed the age at which a bucket of events gets sent to frozen. So on restart Splunk sends all of my just-indexed data straight to frozen, because of the age of the latest event in that newly indexed bucket, and therefore I can't see the data anymore. It seems the data was persisting just fine and this is expected behavior. For the lab data I will change frozenTimePeriodInSecs to a value greater than the age of the events. I hope this helps others, even if they're just looking at the screenshots and config files to do the same thing.
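
    If anyone wants to verify this themselves, the epochs in that BucketMover line tell the whole story:

    date -d @1505895227                    # latest event in the bucket -> Sep 2017
    date -d @1605604051                    # "now" at restart -> Nov 2020
    echo $(( 1605604051 - 1505895227 ))    # ~99.7 million seconds, far past frozenTimePeriodInSecs=7884000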

  2. On 11/10/2020 at 8:03 AM, andrew207 said:

    Hey @Caduceus, this seems to be something new Splunk added in the latest update that will only work on systemd based systems.

     

    I've just committed a change to disable this in the container.

     

    Commit is here: https://github.com/andrew207/splunk/commit/ebf5f696c1458fd9ca0be3402f0fc930d2cfd1a2

     

    Such is life living on the bleeding edge! :) Thanks for pointing this out. You can switch to the 8.1.0 branch/dockerhub tag if you want to try the fix now, otherwise I'll push it to master in the next couple of days.

    I was mistaken; this change doesn't seem to fix it. But in $SPLUNK_HOME/etc/apps/journald_input/local/app.conf,

     

    if we add:

    [install]
    state = disabled

    That seems to fix it by disabling the app.
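
    If I remember right, the same thing can also be done through the Splunk CLI instead of editing app.conf by hand (swap in your own admin credentials):

    $SPLUNK_HOME/bin/splunk disable app journald_input -auth admin:yourpassword
    $SPLUNK_HOME/bin/splunk restart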

  3. Quote

    Unable to initialize modular input "journald" defined in the app "journald_input": Introspecting scheme=journald: Unable to run "/opt/splunk/etc/apps/journald_input/bin/journald.sh --scheme": child failed to start: No such file or directory.

    This message is persistent in my Messages. Is this error expected?

  4. 11 hours ago, belliash said:

    Yeah, that's how I passed through the Optimus-based Nvidia card in my laptop.

    I wonder if this problem is really related to the BIOS alone, or to some combination of BIOS, BIOS settings and the Linux kernel. I found several threads around the net; some contained kernel patches, and in most cases UEFI was being used. I wonder if the problem would still exist when booted in legacy mode.

     

    I finally came across this patch: https://clbin.com/VCiYJ which most probably fixes the problems introduced by AGESA 0.0.7.2 (BIOS version F30). According to information found around the net it works at least with Linux kernel 5.1.3. Maybe you could try it?

     

    There is also a chance AGESA 1.0.0.3AB will bring some bugfixes.

    unRAID is currently on kernel 4.19.56. I've read about the patches but haven't attempted anything; I was going to wait it out for the next AGESA. I did try both UEFI and legacy when I had F32 and F40, and it didn't matter. Plus I read there could be problems with Windows using Nvidia cards if you booted unRAID in UEFI mode. I have no idea.

     

    I'm really happy with the way everything is working for the moment. I picked up a G.Skill dual-channel 3200 MHz 32 GB kit for $109 and a bunch of IronWolf 4 TB HDDs for $95.

  5. 4 hours ago, belliash said:

    And were you able to use the Radeon 5770 as the primary display for the host, and to pass the 1660 Ti to a VM?

     

    Your problem with the BIOS doesn't look good, especially as I am considering buying a different CPU, a Ryzen 7 3700X, and it needs F40.

    I was able to use the Radeon 5770 as the primary display by configuring the BIOS to use card two as primary. I was able to pass the 1660 Ti to a VM, but only with the F5 BIOS. I got the 2700 because I was able to get the motherboard and chip for about $220 total; I planned on upgrading to a 3700X in a year or two when prices come down. Yes, if you get that chip, then unless Gigabyte fixes their BIOS you won't be able to pass through the 1660 Ti. Also, I was able to pass through both video cards, one to each VM, by running unRAID headless. It's a little more complicated, but you can use a vBIOS ROM with a VM to let it take control of the video card in use by the host.
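
    For reference, dumping a vBIOS from sysfs generally looks something like this (the PCI address is the 1660 Ti in my system, the output filename is arbitrary, and it tends to work best while nothing is actively driving the card):

    echo 1 > /sys/bus/pci/devices/0000:06:00.0/rom
    cat /sys/bus/pci/devices/0000:06:00.0/rom > /mnt/user/system/vbios/1660ti.rom
    echo 0 > /sys/bus/pci/devices/0000:06:00.0/rom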

    I recently built an unRAID server with almost this exact setup.

    Same motherboard, same CPU. I have the 1660 Ti (a Gigabyte 1660 Ti mini-ITX) set up in the primary PCIe x16 slot, but I have an old Radeon 5770 in the second slot.

    The only caveat is that I need to run the F5 version of the BIOS. I had upgraded to F32 and F40; both gave me serious issues that would not allow Windows VMs to start with those BIOS revisions. I did a full post in the VM Troubleshooting section. I hope this helps.

     

    You can see my IOMMU groups here: Post with IOMMU groups

  7. Hi all,

    I'm running the latest version of unRAID, 6.7.2.

    I have 2 GPUs in my system: an old Radeon 5770 and a Gigabyte GTX 1660 Ti.

    I have a Ryzen 7 2700 and a Gigabyte B450 Aorus M motherboard.

     

    I have tried everything I can think of or have read about to get the passthrough working. I watched every SpaceInvader One video on any related subject, but it didn't help.

     

    The Nvidia card has all of its devices in the same IOMMU group. I pinned them all to vfio-pci.

     

    Whenever I try to start up a Windows 10 VM, passing the card through, I get:

    -device vfio-pci,host=06:00.0,id=hostdev0,bus=pci.3,addr=0x0,romfile=/mnt/user/system/vbios/Gigabyte.GTX1660Ti.6144.190113_1_no_header.rom \
    -device vfio-pci,host=06:00.1,id=hostdev1,bus=pci.4,addr=0x0 \
    -device vfio-pci,host=06:00.2,id=hostdev2,bus=pci.5,addr=0x0 \
    -device vfio-pci,host=06:00.3,id=hostdev3,bus=pci.6,addr=0x0 \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on
    2019-07-06 01:33:41.796+0000: Domain id=1 is tainted: high-privileges
    2019-07-06 01:33:41.796+0000: Domain id=1 is tainted: host-cpu
    char device redirected to /dev/pts/0 (label charserial0)
    2019-07-06T01:33:44.345969Z qemu-system-x86_64: vfio: Unable to power on device, stuck in D3
    2019-07-06T01:33:44.349775Z qemu-system-x86_64: vfio: Unable to power on device, stuck in D3
    2019-07-06T01:33:44.354756Z qemu-system-x86_64: vfio: Unable to power on device, stuck in D3
    2019-07-06T01:33:44.358761Z qemu-system-x86_64: vfio: Unable to power on device, stuck in D3

     

    And then of course I have to kill the VM, or it sits there doing nothing and not booting. If I try to start any other VM again with the card used for passthrough, I'll get this error until I reboot:

    internal error: Unknown PCI header type '127'

     

    I have tried running the VM with i440fx-3.1, Q35-3.1, SeaBIOS, OVMF, with Hyper-V on/off, and booted unRAID both with and without UEFI.

     

    I set my BIOS to boot with the Radeon card as primary, so I know I don't need the GPU BIOS to pass through, but I did grab the BIOS and edit it to remove the headers as well, and tried that to see if it helped.

     

    I have tried every combo of this for syslinux :

    kernel /bzimage
    append amd_iommu=on pcie_acs_override=downstream vfio-pci.ids=10de:2182,10de:1aeb,10de:1aec,10de:1aed initrd=/bzroot
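
    A sanity check after boot (in case it's useful) is to confirm the override made it onto the active command line and that vfio-pci owns all four functions of the card:

    cat /proc/cmdline
    lspci -nnk -s 06:00
    # each function should report "Kernel driver in use: vfio-pci"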

     

    PCI Devices and IOMMU Groups

    IOMMU group 0:[1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

    IOMMU group 1:[1022:1453] 00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge

    IOMMU group 2:[1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

    IOMMU group 3:[1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

    IOMMU group 4:[1022:1453] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge

    IOMMU group 5:[1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

    IOMMU group 6:[1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

    IOMMU group 7:[1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B

    IOMMU group 8:[1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

    IOMMU group 9:[1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B

    IOMMU group 10:[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)

    [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)

    IOMMU group 11:[1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0

    [1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1

    [1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2

    [1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3

    [1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4

    [1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5

    [1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6

    [1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7

    IOMMU group 12:[1022:43d5] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset USB 3.1 XHCI Controller (rev 01)

    [1022:43c8] 01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset SATA Controller (rev 01)

    [1022:43c6] 01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Bridge (rev 01)

    [1022:43c7] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)

    [1022:43c7] 02:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)

    [1022:43c7] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port (rev 01)

    [10ec:8168] 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 16)

    [1002:68b8] 05:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Juniper XT [Radeon HD 5770]

    [1002:aa58] 05:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Juniper HDMI Audio [Radeon HD 5700 Series]

    IOMMU group 13:[10de:2182] 06:00.0 VGA compatible controller: NVIDIA Corporation TU116 [GeForce GTX 1660 Ti] (rev ff)

    [10de:1aeb] 06:00.1 Audio device: NVIDIA Corporation Device 1aeb (rev ff)

    [10de:1aec] 06:00.2 USB controller: NVIDIA Corporation Device 1aec (rev ff)

    [10de:1aed] 06:00.3 Serial bus controller [0c80]: NVIDIA Corporation Device 1aed (rev ff)

    IOMMU group 14:[1022:145a] 07:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function

    IOMMU group 15:[1022:1456] 07:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor

    IOMMU group 16:[1022:145f] 07:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Zeppelin USB 3.0 Host controller

    IOMMU group 17:[1022:1455] 08:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function

    IOMMU group 18:[1022:7901] 08:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)

    IOMMU group 19:[1022:1457] 08:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller

     

     

     

    I also get these errors in the system logs:

    vfio-pci 0000:06:00.0: not ready 32767ms after FLR; waiting

    vfio-pci 0000:06:00.1: Refused to change power state, currently in D3

    vfio_bar_restore: 0000:06:00.0 reset recovery - restoring bars

    vfio-pci 0000:06:00.0: timed out waiting for pending transaction; performing function level reset anyway

    vfio-pci 0000:06:00.0: not ready 32767ms after FLR; waiting
    vfio-pci 0000:06:00.0: not ready 65535ms after FLR; giving up

    vfio_bar_restore: 0000:06:00.0 reset recovery - restoring bars

    vfio_bar_restore: 0000:06:00.1 reset recovery - restoring bars
    vfio_bar_restore: 0000:06:00.2 reset recovery - restoring bars
    vfio_bar_restore: 0000:06:00.3 reset recovery - restoring bars

     

    Jul 5 21:33:44 Redshift kernel: vfio_bar_restore: 0000:06:00.3 reset recovery - restoring bars
    Jul 5 21:33:44 Redshift kernel: vfio_bar_restore: 0000:06:00.2 reset recovery - restoring bars
    Jul 5 21:33:44 Redshift kernel: vfio_bar_restore: 0000:06:00.1 reset recovery - restoring bars
    Jul 5 21:33:44 Redshift kernel: vfio_bar_restore: 0000:06:00.0 reset recovery - restoring bars
    Jul 5 21:33:45 Redshift kernel: vfio-pci 0000:06:00.0: timed out waiting for pending transaction; performing function level reset anyway
    Jul 5 21:33:46 Redshift kernel: vfio-pci 0000:06:00.0: not ready 1023ms after FLR; waiting
    Jul 5 21:33:47 Redshift kernel: vfio-pci 0000:06:00.0: not ready 2047ms after FLR; waiting
    Jul 5 21:33:49 Redshift kernel: vfio-pci 0000:06:00.0: not ready 4095ms after FLR; waiting
    Jul 5 21:33:53 Redshift kernel: vfio-pci 0000:06:00.0: not ready 8191ms after FLR; waiting
    Jul 5 21:34:02 Redshift kernel: vfio-pci 0000:06:00.0: not ready 16383ms after FLR; waiting
    Jul 5 21:34:19 Redshift kernel: vfio-pci 0000:06:00.0: not ready 32767ms after FLR; waiting
    Jul 5 21:34:54 Redshift kernel: vfio-pci 0000:06:00.0: not ready 65535ms after FLR; giving up
    Jul 5 21:34:54 Redshift kernel: vfio_bar_restore: 0000:06:00.0 reset recovery - restoring bars
    Jul 5 21:34:54 Redshift kernel: vfio_bar_restore: 0000:06:00.1 reset recovery - restoring bars
    Jul 5 21:34:54 Redshift kernel: vfio_bar_restore: 0000:06:00.2 reset recovery - restoring bars
    Jul 5 21:34:54 Redshift kernel: vfio_bar_restore: 0000:06:00.3 reset recovery - restoring bars
    Jul 5 21:37:19 Redshift emhttpd: req (2): csrf_token=****************&title=Log+for%3AWindows+10+-+Bare+Metal&cmd=%2FwebGui%2Fscripts%2Ftail_log&arg1=libvirt%2Fqemu%2FWindows+10+-+Bare+Metal.log
    Jul 5 21:37:19 Redshift emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log libvirt/qemu/Windows 10 - Bare Metal.log
    Jul 5 21:48:40 Redshift login[12997]: ROOT LOGIN on '/dev/pts/1'
    Jul 5 22:02:04 Redshift kernel: ata6.00: Enabling discard_zeroes_data
    Jul 5 22:02:04 Redshift kernel: sde: sde1 sde2 sde3
    Jul 5 22:02:04 Redshift avahi-daemon[5027]: Interface vnet0.IPv6 no longer relevant for mDNS.
    Jul 5 22:02:04 Redshift avahi-daemon[5027]: Leaving mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:fe88:9fb7.
    Jul 5 22:02:04 Redshift kernel: br0: port 2(vnet0) entered disabled state
    Jul 5 22:02:04 Redshift kernel: device vnet0 left promiscuous mode
    Jul 5 22:02:04 Redshift kernel: br0: port 2(vnet0) entered disabled state
    Jul 5 22:02:04 Redshift avahi-daemon[5027]: Withdrawing address record for fe80::fc54:ff:fe88:9fb7 on vnet0.
    Jul 5 22:02:05 Redshift unassigned.devices: Disk with serial 'Samsung_SSD_840_Series_S14CNSAD200506M', mountpoint 'System_Reserved' is not set to auto mount and will not be mounted...
    Jul 5 22:02:05 Redshift unassigned.devices: Disk with serial 'Samsung_SSD_840_Series_S14CNSAD200506M', mountpoint 'Samsung_SSD_840_Series_S14CNSAD200506M-part3' is not set to auto mount and will not be mounted...
    Jul 5 22:02:05 Redshift unassigned.devices: Disk with serial 'Samsung_SSD_840_Series_S14CNSAD200506M', mountpoint 'Samsung_SSD_840_Series_S14CNSAD200506M-part2' is not set to auto mount and will not be mounted...
    Jul 5 22:02:05 Redshift kernel: vfio-pci 0000:06:00.0: timed out waiting for pending transaction; performing function level reset anyway
    Jul 5 22:02:07 Redshift kernel: vfio-pci 0000:06:00.0: not ready 1023ms after FLR; waiting
    Jul 5 22:02:08 Redshift kernel: vfio-pci 0000:06:00.0: not ready 2047ms after FLR; waiting
    Jul 5 22:02:10 Redshift kernel: vfio-pci 0000:06:00.0: not ready 4095ms after FLR; waiting
    Jul 5 22:02:14 Redshift kernel: vfio-pci 0000:06:00.0: not ready 8191ms after FLR; waiting
    Jul 5 22:02:23 Redshift kernel: vfio-pci 0000:06:00.0: not ready 16383ms after FLR; waiting
    Jul 5 22:02:40 Redshift kernel: vfio-pci 0000:06:00.0: not ready 32767ms after FLR; waiting
    Jul 5 22:02:56 Redshift ool www[14840]: /usr/local/emhttp/plugins/dynamix/scripts/bootmode '1'
    Jul 5 22:03:13 Redshift kernel: vfio-pci 0000:06:00.0: not ready 65535ms after FLR; giving up
    Jul 5 22:03:13 Redshift kernel: vfio-pci 0000:06:00.1: Refused to change power state, currently in D3
    Jul 5 22:03:13 Redshift kernel: vfio-pci 0000:06:00.2: Refused to change power state, currently in D3
    Jul 5 22:03:13 Redshift kernel: vfio-pci 0000:06:00.3: Refused to change power state, currently in D3

     

    Please help!  I don't know if it's my motherboard or video card that's the issue. I'll return either one if I can just fix this.