Ford Prefect

Members
  • Content Count

    1299
  • Joined

  • Last visited

Community Reputation

8 Neutral

About Ford Prefect

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed
  • Personal Text
    Don't Panic!

  1. ...you previously configured passthrough for your external GPU card. When you configured the PCI slot address for it, the NVMe wasn't there. The address scheme/numbering with the slot IDs is rebuilt every time the system boots... If you change your hardware config, e.g. by adding new PCIe devices (like NVMe cards) or reseating existing cards into a different slot, the numbering will/can change. Unfortunately, passthrough is a hard-configured list and will not auto-adjust itself when the configuration/numbering changes. After you added the NVMe, the same number that used to be your GPU card now is your SATA controller (at least some ports of it). So when booting unraid, those disks disappear from the system and are being passed on to the VM. The same can happen when you pull a card: again, the ID just gets assigned to another device. Add everything you want into your server, boot, disable VMs in the VM Manager settings, then re-configure the passthrough assignments with the VFIO-PCI plugin, save and reboot. Then all disks should be there and you can re-enable VMs, too.
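     A quick way to verify this from the CLI (a rough sketch; I am assuming the VFIO-PCI plugin keeps its bindings in /boot/config/vfio-pci.cfg):
       # list all PCI devices with their current slot addresses and vendor/device IDs
       lspci -nn
       # show which addresses are currently hard-bound for passthrough
       cat /boot/config/vfio-pci.cfg
     If an address listed in that file now belongs to your SATA controller instead of the GPU, that is the culprit.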
  2. Well, I am not an expert with AMD based systems. The unraid diagnostics do not reveal anything going wrong, as far as I am concerned. But you are right, only one 8TB drive is present and slot 2 in the array is missing. Also, your MB should support 2 NVMe drives without any problems. So I doubt that this is unraid related; it is rather caused by the BIOS. What I found is this: Gigabyte Technology Co., Ltd. - X570 AORUS PRO, BIOS Vendor: American Megatrends Inc., Version: F20a, Release Date: 06/16/2020 ...but according to this: https://www.gigabyte.com/Motherboard/X570-AORUS-PRO-rev-10/support#support-dl-bios the version F20a and its release date do not even appear in the list. So your installed version is at least behind F20, and from the list, version F21 would improve PCIe compatibility (which could have an impact on PCIe based NVMe drives). So the best I have to offer at this point is the recommendation to upgrade your BIOS to the latest release first.
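     To re-check the installed BIOS version from the unraid CLI after flashing (a small sketch; dmidecode should already be available on the system):
       # print the BIOS vendor, version and release date reported by the firmware
       dmidecode -t bios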
  3. unraid will keep track of the data drive positions in the array based on serial number/GUID, not port, so moving drives to a different port or even controller should be possible. If your diagnostic routine is non-invasive, and as long as you do not format the drive, your data should be safe. But you could always create a backup/dump to another location before running diagnostics.
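     If you want to double-check which serial shows up on which device before and after moving it, something like this should do (a minimal sketch):
       # show block devices with model, serial number and size
       lsblk -o NAME,MODEL,SERIAL,SIZE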
  4. check the specs of your MB in detail. On many boards, adding a second NVMe sacrifices one SATA port. If not all SATA ports are in use at that time, moving the HDD to a different SATA port is the best option. Edit: never mind, I missed the info where you stated that all disks were present in the BIOS
  5. Well, of course the .bash_profile is in the home directory of each user you use/want to connect with into a CLI (shell/bash) session, i.e. via ssh. Normally this is user "root" when you are using the terminal button/icon from the dashboard, so in this case the file location is /root/.bash_profile. You can edit & save it there, but it will be gone after each reboot. In order to persist this setting, you should edit & save the "go" script, which is located in /boot/config. This is what I added to my go script:
       # fix console colors
       echo "alias ls=\"ls --color=never\"" >> /root/.bash_profile
     ...hope this helps.
  6. I am not aware that this kind of use-case will work with an intelligent setting of the Cache drive in the share config. Maybe there are methods or even community apps already available. If there are not, I actually think using the ZFS plugin and creating snapshots (from the CLI) would work best here.
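     A minimal sketch of what that could look like from the CLI, assuming a pool named "tank" with a dataset "shares/photos" (both names are just placeholders):
       # create a point-in-time snapshot of the dataset
       zfs snapshot tank/shares/photos@before-sync
       # list existing snapshots
       zfs list -t snapshot
       # roll the dataset back to that snapshot if needed
       zfs rollback tank/shares/photos@before-sync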
  7. ...what exactly are you missing? The way I described it doesn't work for you? Sent from my SM-G960F using Tapatalk
  8. ...it's a setup in a holiday home. The SSD actually was a spare part at that time. We are using it as a long term cache for movies and music when being on site during vacation (pre-filling it from the main storage server at home), so wear and tear is minimal and data loss is not an issue, should it go down. The main purpose of the box is internet firewall and dockers for all sorts of smart home gizmos. Both cache NVMe disks are identical and therefore form a btrfs pool, raid 1. Mixing models or technologies is something I wouldn't recommend, as you already found out yourself. The reason I used unRAID was that I have a similar setup at home and am using GRE tunnels (along with zerotier as backup) to connect both sites. In your case you should decide if you can live without redundancy. Another option may be to go for the ZFS plugin and use this instead of the cache for VMs and dockers. Sent from my SM-G960F using Tapatalk
  9. Yes, but remember that a parity disk is not required in an array. I had a similar use-case, running a smaller 250GB SSD as the only array disk and 2x1TB NVMe-PCIe as cache pool for Docker/VMs. My box is an embedded/IPU system with an i5-7200u and 6x Intel NIC, running a router VM (pfsense) and dockers.
  10. ...there is a problem with routers from the German brand AVM and the built-in hardware acceleration on their switch, which apparently causes issues when pulling images from docker-hub (https://github.com/docker/for-win/issues/6192#issuecomment-667663545). This bug is 100% reproducible when pulling via a windows host (docker-gui), but I can confirm that on unraid, the behaviour is the same as you describe. It is a long shot in your case, but I see the similarities here.
  11. ...see my edit2 in the first post. That NIC now has 18 revisions, of which unraid 6.8.3 "only" supports the first 10. All newer 10th-gen Intel platforms with onboard Intel NICs are likely to be affected.
  12. ...it worked...your way. A patchfile almost worked, but a single header had been moved out of the tree and had massive changes, so I did not pursue that route. I basically set the build script to sleep for a while after the standard kernel and modules had been created, then built the module from the sourceforge link manually from a second command line in the docker, before the build script continued to run and finally created the image with the new module included. Thanks again for your support and the fine docker!
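     For anyone trying the same trick, it was roughly along these lines (a sketch from memory; the pause duration and the paths are placeholders, and I am assuming the unpacked driver source builds as a standard out-of-tree kernel module):
       # in the build script, right after the stock kernel and modules are built:
       sleep 3600
       # meanwhile, from a second shell inside the container:
       KSRC=/usr/src/linux-<kernel-version>-Unraid   # the kernel tree the script just prepared
       cd /tmp/e1000e-<version>/src                  # unpacked driver source from sourceforge
       make -C "$KSRC" M="$(pwd)" modules
       make -C "$KSRC" M="$(pwd)" modules_install    # installs the new e1000e.ko for the image build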
  13. Yes, I understand. But I first wanted to check what the differences between both versions of the driver are, in terms of config & makefile etc. Also, I cannot be sure that the sourceforge link is actually the driver I have in mind. I know that this is not the proper way, but it is just a first step. I am just curious, nothing can break at this time. I am just gathering info for a future build, based on S1200/10thgen. My workstation has that NIC and I can simply test this way with it...using the docker on my existing, older 6.8.3. Edit: the structure in both versions is the same...although the source files obviously contain changes, the version string is identical in both...so much for doing things the correct way. I might create a patch from both versions and inject this in your docker, then 🙃
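     Creating such a patch from the two driver trees would look roughly like this (directory names are placeholders):
       # build a unified diff between the old in-tree source and the newer one
       diff -ruN e1000e-old/ e1000e-new/ > e1000e.patch
       # later, apply it on top of the in-tree driver inside the kernel source
       cd /usr/src/linux-<kernel-version>-Unraid/drivers/net/ethernet/intel/e1000e
       patch -p1 < /path/to/e1000e.patch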
  14. Thank you very much for your detailed response. I have understood the basic concept so far, I think, but I still cannot get my head around something. Your docker is for building new kernel modules, but in my special case I am talking about an existing (standard kernel) module. So, when your buildscript pulls the kernel source (which should also include standard modules), I'd assume that the e1000e module will be present there in the source tree. With your buildscript, you are building all standard modules anyway, without the need for an additional section in your buildscript. Let's assume I only enable the ZFS module in the docker config: I will end up with a new, complete kernel (re-)build, including all standard modules plus the ZFS module(s). Is this assumption correct, so far? Then, why would I need a new section for my module in the build script? In my mind, I'd use three steps to build a new kernel with my standard module:
     1) basically split your build script into two steps, i.e. by inserting a stop/wait into the buildscript in the section where you just pulled the kernel source, and confirm that my desired driver/module source (in this case the old version) is there, in the container.
     2) then I would replace that with the source of the newer driver version (see the sketch below)...and instead of using the external sourceforge link, I use a tree collected the same way from another instance of your container, running in beta (and custom) mode...because I know that this instance works, including build config (which I assume did not change much between k4.x and k5.x).
     3) third step then is to let the rest of the buildscript continue.
     ...step (2) I am going to verify during this day...fingers crossed
     Edit: I'd make the script wait before starting the (re-)build at the "make oldconfig" section.
     Edit2: OK, the driver source for 6.8.3 is in "/usr/src/linux-4.19.107-Unraid/drivers/net/ethernet/intel/e1000e"
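     Step (2) would then be something like this (a sketch; the location of the driver tree collected from the beta container is a placeholder):
       KSRC=/usr/src/linux-4.19.107-Unraid
       # move the old in-tree driver source out of the way
       mv "$KSRC/drivers/net/ethernet/intel/e1000e" "$KSRC/drivers/net/ethernet/intel/e1000e.orig"
       # drop in the e1000e tree collected from the 6.9-beta container instance
       cp -r /tmp/e1000e-from-beta "$KSRC/drivers/net/ethernet/intel/e1000e"
     ...after which "make oldconfig" and the rest of the buildscript can continue.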
  15. Hmmm, not quite sure if I understand what you are saying. I assumed that your container can be configured to produce a kernel, including modules, for either 6.8.3 or 6.9b25. Yes, I don't want to pull binaries from beta into stable, but rather the source tree, including its build config/makefile - only for this specific driver. I guess I'll have to check how your container is doing that internally...starting with the readme. Many thanks for your support...good night!