civic95man

Everything posted by civic95man

  1. Just attach the entire zip file, as that contains everything needed to troubleshoot your issue. That is a good first step toward fixing the issue, since it won't wipe your win10 install. You need to embed the text/xml with the [</>] button at the top of the comment box. That will format and highlight the text based on the chosen syntax.
  2. Parity can only emulate a disk at the disk level. If the disk itself has file system corruption, whether or not it was due to a write error, then chances are that the emulated disk will have that corruption too. It's not the end of the world though; the file system just needs to be repaired, and many times that can be accomplished with little or no loss of data (see the sketch below). What check are you referring to? A repair of the file system? It sounds like unraid is writing that 'emulated' disk back to the physical disk, which will not in itself fix the unmountable issue; that requires a repair of the file system. Your data rebuild may fail if there are continued errors with that Marvell card and port multiplier. It's best to replace those ASAP rather than waiting. As for Marvell: their linux drivers seem flaky at best, which is the reason those controllers are not recommended for unraid. Pretty much any LSI card will work, just make sure it is flashed to IT mode and has the latest firmware - you don't want to use unraid with a RAID card. EDIT: -and avoid 'new' LSI cards from China as they are most often counterfeit. Best to get used cards pulled from servers.
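     A minimal sketch of the file system repair, assuming XFS and disk 1 (adjust the md number to your disk); run it from the console with the array started in maintenance mode so the repair goes through the md device and parity stays in sync:

     ```
     xfs_repair -n /dev/md1   # dry run first: report problems, change nothing
     xfs_repair -v /dev/md1   # actual repair; orphaned files land in lost+found
     ```

     If it complains about a dirty log and suggests -L, be aware that zeroing the log can lose the most recent metadata changes.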
  3. Sounds more like the disk image for the windows VM was corrupted. Diagnostics would help determine whether it's a hardware- or software-related issue.
  4. You are using a Marvell sata controller for a majority of your drives. Marvell controllers are not recommended as they can drop drives and cause problems. Best bet would be to find an LSI controller. If it's not currently disabled then the emulated drive will have the same contents as the physical drive. Parity is always in sync with the drives. You may need to read up on how parity works; it's not a backup, just a way to recover from one (or two with dual parity) missing/failed drives (see the illustration below). Parity cannot recover from data loss resulting from writes to disk (i.e. formatting, deleting files, ransomware, etc). You would need a full backup solution in that case. Edit: It also looks like you're using a port multiplier on that Marvell controller. That is also not recommended.
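     A minimal illustration of the idea, using single hypothetical bytes in bash: parity is just the XOR of the data disks, so any one missing byte can be recomputed from parity plus the survivors - but if bad data is written to a disk, parity faithfully tracks the bad data:

     ```
     d1=0xA5; d2=0x3C; d3=0x5A
     p=$(( d1 ^ d2 ^ d3 ))          # parity byte, kept in sync on every write
     rebuilt=$(( p ^ d1 ^ d3 ))     # disk 2 fails: rebuild its byte from the rest
     printf 'parity=0x%02X rebuilt=0x%02X original=0x%02X\n' "$p" "$rebuilt" "$d2"
     ```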
  5. Because you have CA appData Backup running every morning at 5:00. When it backs up all of the containers, it first shuts them down.

     ```
     Sep 30 05:00:01 Tower CA Backup/Restore: #######################################
     Sep 30 05:00:01 Tower CA Backup/Restore: Community Applications appData Backup
     Sep 30 05:00:01 Tower CA Backup/Restore: Applications will be unavailable during
     Sep 30 05:00:01 Tower CA Backup/Restore: this process. They will automatically
     Sep 30 05:00:01 Tower CA Backup/Restore: be restarted upon completion.
     Sep 30 05:00:01 Tower CA Backup/Restore: #######################################
     ```

     Maybe set the backup schedule to something like every 3 days or more, if you can handle running that long without a valid backup.
  6. That's an approach that many people seem to take. Just keep in mind that you *may* run into stability issues since windows is in a VM (not saying you will, and you probably won't), but keep in mind that it may crash/lockup. I personally run 2 windows VMs 24/7 with absolutely no issues (~150 days uptime) - but the potential is always there. That would be a good approach, and the cpu has a good amount of cores/threads to share between the VM and unraid/plex. It also depends on what you want to use the windows VM for. With plex, if you transcode, you may run into issues depending on how many cores you assign to the VM vs leave free for unraid and the content to transcode. Depends on what you want to use it for. Gaming? Surfing the web? You just want to make sure that you keep enough cores available to unraid (maybe 4 threads max for the VM and 8 for unraid; see the sketch below). Too many people assign all of their cores to the VM and leave nothing for unraid to manage the VM with. If you want to test, just pull your windows HD out and try booting that system with your flash. If you feel comfortable, you could try creating a VM and using the HD with the windows install as your disk. I believe there are tutorials out there for converting a baremetal w10 install to a VM. I would recommend a flash drive backup before doing this just in case. EDIT: I forgot the most important thing - you need to make sure that your IOMMU groups are split ideally for VM use. It would probably be a good idea to have a few USB ports in their own group. That would allow you to pass them through and let them function natively within windows (so you can easily plug in a flash drive, etc).
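     A sketch of what "4 threads for the VM" looks like in the libvirt XML that unraid generates when you tick the CPU boxes; the cpuset numbers are hypothetical and depend on your topology (here, two cores plus their hyperthread siblings on an imaginary 8-core/16-thread chip):

     ```
     <vcpu placement='static'>4</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='4'/>
       <vcpupin vcpu='1' cpuset='12'/>
       <vcpupin vcpu='2' cpuset='5'/>
       <vcpupin vcpu='3' cpuset='13'/>
     </cputune>
     ```

     Pinning a core together with its sibling thread generally behaves better than splitting the pairs across the VM/host boundary.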
  7. Your unraid system is on vlan 130; are your docker containers using a different vlan? I believe ALL of your containers that have a static IP must be on a different vlan than that of your unraid system (see the sketch below). My understanding for this is that most routers will not route broadcast packets between networks - with the broadcast packets being the cause of the macvlan call traces.
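     For illustration only - unraid normally creates this network for you when you enable the vlan in Settings; the subnet, gateway, and vlan number here are hypothetical. The equivalent docker command looks something like:

     ```
     docker network create -d macvlan \
       --subnet=192.168.131.0/24 --gateway=192.168.131.1 \
       -o parent=br0.131 br0.131
     ```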
  8. Yes, that would mean that it's at the hardware level. If you are comfortable with used parts, then I would check ebay. I acquired most of my build through parts from ebay. Just test everything really well before any return policy expires, so you are comfortable with it. Otherwise you would need to look into NOS (new old stock), but at that point it may be cheaper to use lower-end modern equipment and buy everything new.
  9. Looks like you are using a Marvell controller for four of your drives, two of which are the ones that you were having issues with in this post. I don't believe Marvell controllers are recommended, and all of your connection issues may be caused by this card. It could also have been the cause of the corrupted file system on disk6. As for formatting disk6: judging by your logs, it looks like disk6 (sdh) drops in and out because of connection issues (presumably). I can only guess that it drops out when it should have been assigned the "disk 6" designation but still maintains an md6 presence, so the format is performed against md6. Just my educated guess; in that case, maybe this is a bug.
  10. The /config folder of your flash drive backup has all of the settings. If you want to restore everything, then just copy it all over to your current flash drive, overwriting everything. Optionally, if you just care about your disk assignments, then just copy over the super.dat file (see the sketch below).
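      A minimal sketch, assuming the backup is unzipped at /path/to/backup (hypothetical) and the flash drive is mounted at /boot:

      ```
      cp -r /path/to/backup/config/* /boot/config/        # full restore
      cp /path/to/backup/config/super.dat /boot/config/   # disk assignments only
      ```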
  11. Yes, remove the unraid USB and insert the bootable flavor of your choice. As long as you do not try to mount any of your disks outside of unraid, you will be fine and parity will stay valid. If you don't trust yourself then maybe disconnect all of your drives. I don't know if the live versions will try to auto-mount disks now - it's been a while since I've used one, but my assumption is that they won't mount until you try to access them (via GUI if you prefer).
  12. Maybe try booting your system with a live version of another flavor of linux to see if you have similar issues (lockup/boot problems). Could also try clearing the bios to defaults and checking the battery as a last resort. I didn't see anything in the manual about not using the pcie x16 slot for anything but graphics.
  13. Have you had a look at this thread? In my case I had to update the firmware of the UPS in windows and then change it to modbus to get full stats. I don't know if that model works with modbus, but it can't hurt to try different settings.
  14. Your syslog shows that your network adapter for eth0 might be having problems. This may or may not be related to this option: pcie_acs_override=downstream. Have you tried running your VM without this option? Is that option necessary for your IOMMU groups to properly split (see the excerpt below)? Failing that, you could try the beta version of unraid, as it has a newer kernel with a newer version of qemu.
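      The option lives on the append line of syslinux.cfg on the flash drive; a sketch of what toggling it looks like (the surrounding lines are illustrative boilerplate):

      ```
      label Unraid OS
        menu default
        kernel /bzimage
        append pcie_acs_override=downstream initrd=/bzroot   # with the override
        # append initrd=/bzroot                              # without it
      ```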
  15. No, you would need to use the nvidia build, and remove the "stub" for the card. The stub is reserving the card for the VM and is preventing unraid from seeing it, which prevents the driver from loading. By removing the stub, I mean remove it and all reference to its related IOMMU entry in the vfio-pci plugin (assuming you are using the plugin; see the sketch below). You do not need to delete the VM. However, if you try starting the VM when the card is set for transcoding, bad things could happen - best case, the VM fails to load and just dies; worst case, the VM rips the card from an active transcode and your system crashes.
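      A sketch, assuming the plugin writes its bindings to /boot/config/vfio-pci.cfg and a hypothetical PCI address for the card:

      ```
      # before: the card (GPU plus its audio function) is stubbed for the VM
      BIND=0000:01:00.0 0000:01:00.1
      # after: remove its entries (or the whole file) and reboot so the
      # nvidia driver can claim the card for transcoding
      ```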
  16. I'm sorry, but I am the wrong person to ask. I was just looking around with google and found a lot of people having issues similar to yours. The uninitialized csrf_token seems related to the rootfs filling up, and a mis-configured rclone seems to eat up a lot of memory when many files are open. I guess you could try that setting and see if a media scan completes without messing everything up. I wonder if you could kill the rclone process if this happens again and reclaim the memory - at least then you might be able to generate diagnostics after it happens (see the sketch below). Yes, I figured; in that case the nvidia build is pointless since the VM (windows) is handling the gpu and drivers. The nvidia build is primarily used if you want to allow the gpu to transcode in plex/emby, in which case it provides the necessary drivers for unraid/linux to see and talk to it. Unfortunately, it's one or the other: either keep it in unraid for transcoding OR pass it through to the VM.
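      A rough sketch of that recovery idea from the console (the process match is a guess at how your mount is named):

      ```
      pkill -f rclone    # kill the runaway mount to reclaim memory
      free -m            # confirm the memory actually came back
      diagnostics        # grab diagnostics while the system is responsive
      ```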
  17. Like I said, I'm no expert on that front. From what I was able to understand, a possible fix would be to reduce the --vfs-cache-max-age to seconds (10s; see the sketch below). In my experience with plex, it will try to scan all of my media and keep multiple files open. And from my understanding of rclone and your configuration, it will try to keep those open files in memory for 336h. There are a few posts out there that describe optimizations for rclone and streaming content in plex/emby. Those might help you. You could also try the rclone support thread. And in regard to the 'nvidia build' of unraid: for troubleshooting, it's best to simplify the process, and that build is not "officially" supported (third party). Also, since you stubbed that video card anyway to pass it to a VM, unraid itself can't see it, so the driver will never load and becomes pointless; it just spams the log.
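      A sketch only - the remote name, mountpoint, and values are hypothetical and would need tuning against the rclone docs and support thread:

      ```
      rclone mount gdrive: /mnt/disks/gdrive \
        --vfs-cache-mode writes \
        --vfs-cache-max-age 10s \
        --buffer-size 32M \
        --allow-other &
      ```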
  18. I am by no means an expert here but my best guess is that rclone is using up all of your memory to cache your media files when plex is performing a scan. I tried to read through the different cache and buffer options and meanings but now my head just hurts.
  19. Judging by other posts, you might be filling up your rootfs (remember this resides in RAM). You already assigned half of your memory to a VM. I assume you have plex - are you transcoding to RAM (/tmp) in there as well? See the quick checks below.
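      Quick ways to check from the console (on unraid these paths live in RAM):

      ```
      df -h /      # rootfs usage
      df -h /tmp   # where in-RAM transcodes usually land
      free -m      # overall memory picture
      ```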
  20. Sorry, I was reviewing the motherboard manual and no, you can't select the primary display. It's usually the card in the first slot, so you could try physically swapping your 2080ti and 1650 in their slots and see if the 1650 comes up as primary. If you do this, it will most likely swap their locations within the IOMMU groups and move them to different addresses, so you will need to change your binding and the VM gpu assignment. But since you have it working, you can decide if you really need to admin it locally or remotely via the web interface. In either case I'm glad you have it working and glad I could help you some.
  21. That is what I expect; I'm assuming that is the output from the 2080ti. So basically, at that point the kernel binds that video card at the specified hardware location (based on the vfio-cfg.cfg file) to the vfio-pci driver. That's basically just a placeholder for when the VM starts and it gets passed through. But once that card binds with that driver, unraid can no longer see or talk to it (see the check below). So I expect that, and your system is still functioning, not hanging. The 2080ti must be checking if the monitor is attached; if not, then it excludes itself from being available. Does the other 1650 take over? So it sounds like you have three options: see if you can specify in the bios which card is the primary, rearrange your gpus on the motherboard so the 1650 appears as the primary, or find a kernel boot parameter to specify which card to treat as the primary (haven't checked if that's possible, but I'm sure it is).
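      You can confirm which driver owns each card from the console (the addresses and card order here are hypothetical):

      ```
      lspci -k | grep -EA3 'VGA|3D'
      #   01:00.0 VGA compatible controller: NVIDIA ... [RTX 2080 Ti]
      #           Kernel driver in use: vfio-pci   <- reserved for the VM
      #   02:00.0 VGA compatible controller: NVIDIA ... [GTX 1650]
      #           Kernel driver in use: nvidia     <- visible to unraid
      ```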
  22. What do you mean by it hangs? Do you get to the boot menu to choose the GUI/non-GUI/safemode and then nothing appears on the screen? Try switching to the other gpu (1650) when that happens. When you isolate the 2080ti, unraid essentially doesn't see it so it can't use it. Therefore it will fall back to its other gpu (1650). Failing that, it will go into "headless" mode and not use any video card, in which case you should still see it on the network.
  23. 1. Trial should be good for more than 3 disks. Do you have an internet connection to your server so that it can "phone home"? It only contacts unraid's servers while under the trial license. If you aren't already on the beta, then you may need to be for the network drivers. Else, maybe connect to a standard 1gb port instead of a 2.5gbe port if your board has those - this is all assuming that your server isn't visible on the network; if it is, then disregard. 2. Yes, that is perfectly fine. You can easily add new drives as you please, including parity. Granted, it will take a good part of a day to create the parity, so be aware of that when the time comes. Not so critical now since you don't have parity, but you want to ensure all of your disk connections are high quality (power and data). You don't want to use questionable sata cables, as they can cause problems; same for power.
  24. Yeah, if you want parity (I recommend it, but remember that parity is not a backup) then that looks good. With parity, you will want to make sure turbo write is on (don't know if that is the default by now, but it most likely is). If you are initially transferring data from another source, some people don't bother with parity until everything has been transferred, to speed things up. If you don't have anything to transfer, or very little, then it doesn't hurt to set up parity now. You can set up the NVMe(s) as your cache and then run docker/VMs from there. It's really your preference. If you decide to use both NVMes as cache then you can choose raid0 or raid1 (raid0 will get you 4tb of storage, whereas raid1 will give you a 2tb mirror which can protect you if you lose one of the devices; raid0 cannot) - see the sketch below. I would recommend the backup plugin on CA, which can back up your docker containers to your array, so if things do go south then you have a backup of your containers. As for the single SSD, it's your call. You could utilize that to run your VM, or, with the beta version of unraid, you can set it up as another pool.
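      For reference, the cache pool is btrfs under the hood, and the profile can be converted from the console as well as from the GUI; a sketch, assuming the pool is mounted at /mnt/cache:

      ```
      btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache   # 2tb mirror
      btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache   # 4tb stripe
      ```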