Everything posted by bamhm182

  1. My install has been booting just fine, but after I upgraded from 6.6.6 to 6.7.0-rc1 I get the following when I try to boot from any of my options:

Loading /bzimage... ok
Loading /bzroot... ok
Loading /bzroot-gui... ok
exit_boot() failed!
efi_main() failed!

I have moved my vfio-pci arguments over to the new config file, but other than that I haven't touched anything since the last reboot. I'm booting UEFI. Any ideas? I'm going to remove that config file and see if that helps.

EDIT: It didn't change anything. Also, my default syslinux.cfg entry is as follows:

label unRAID...
kernel /bzimage
append pcie_acs_override=downstream initrd=/bzroot,/bzroot-gui

I could probably roll back to 6.6.6 and be fine, but I'm going to use the wait to set up my bare-metal hard drive. Let me know if there is anything you would like me to try.

EDIT 2: I rolled back to 6.6.6 and, as predicted, everything was fine again. I then reinstalled 6.7.0-rc1 and it did the same thing. I made sure not to make any changes to syslinux.cfg or vfio-pci.cfg this time.
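(For reference, the new config file I mean is /boot/config/vfio-pci.cfg; as I understand it, the format is a single BIND= line listing the PCI addresses to hand to vfio-pci, something like the sketch below with placeholder addresses. Treat this as an assumption and double-check the 6.7.0-rc1 release notes before copying it.)

BIND=0000:0b:00.0 0000:0b:00.1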
  2. My install has been booting just fine, but after I upgraded from 6.6.6 to 6.7.0-rc1 I get the following when I try to boot from any of my options:

Loading /bzimage... ok
Loading /bzroot... ok
Loading /bzroot-gui... ok
exit_boot() failed!
efi_main() failed!

I have moved my vfio-pci arguments over to the new config file, but other than that I haven't touched anything since the last reboot. I'm booting UEFI. Any ideas? I'm going to remove that config file and see if that helps.

EDIT: It didn't change anything. Also, my syslinux.cfg entry is as follows:

label unRAID...
kernel /bzimage
append pcie_acs_override=downstream initrd=/bzroot,/bzroot-gui
  3. I didn't even catch that he probably just missed the 0. Haha. I thought he actually meant 6.7.9. Glad to hear it'll be released sooner than that. >_<
  4. Been running my kernel for a few days with no issues, so I'm going to close this. In other news, I still can't get this card to work. I even installed Windows bare metal on a spare disk, got the card working there, then copied the disk to an img. I booted that and it still told me to reboot to finish the driver installation. I feel like the programmers put that message in there instead of "it isn't working and we don't know why." I think I'm going to stop trying to get this to work with unRAID and just dual boot Windows and unRAID. No fault to unRAID, just this crappy card. Keep up the great work!
  5. Thanks for doing that! I'm beginning to think the card is garbage as well. I have a little mSATA drive I'm not using, I may install that and toss Windows on there just to give it one final go before I send it back to Amazon. I've just got one final question since I've been stuck on this for a few days. What's the proper way to extract and compile the bzroot file? All the information I can find suggests piping it from xzcat to cpio, but when I do that, xzcat complains about an invalid format. I was finally able to get it to do something with cpio -id < /boot/bzroot, but all that was extracted was a kernel folder. Is that all that's in there? As for compiling it, I never had any luck.
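(If anyone else gets stuck on the same thing: from what I can tell, bzroot starts with an uncompressed early-microcode cpio archive, which is why plain cpio only yields a kernel/ folder, with the xz-compressed root filesystem appended after it. Below is a sketch of one way to skip past the leading archive; the layout assumption is worth verifying against your own copy.)

mkdir /tmp/bzroot-extract && cd /tmp/bzroot-extract
# the size of the leading archive (in 512-byte blocks) is printed on stderr by cpio
blocks=$(cpio -itv < /boot/bzroot 2>&1 >/dev/null | tail -n1 | awk '{print $1}')
# skip it, then decompress and unpack the actual root filesystem
dd if=/boot/bzroot bs=512 skip="$blocks" | xzcat | cpio -idmv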
  6. That is correct. The person who originally made the patch was using Arch, but he had a virtual machine running on Arch and ran into the same issues we are running into now. Making these changes, plus a few in syslinux, solved the reboot VM crash for him. He modified his syslinux config to look similar to the one I have set up here:

label unRAID OS (Elgato Fix)
kernel /bzimage-new
append pcie_acs_override=downstream pci-stub.ids=<removed>,12ab:0380 vfio-pci.ids=12ab:0380 disable_idle_d3=1 initrd=/bzroot-new

EDIT: Making progress! I just took a moment to step back and think about the steps and realized that this whole time I've been trying to compile both bzroot and bzimage, but I haven't really been doing anything to bzroot other than extracting and repacking it (incorrectly, I might add). I was replacing bzroot for no reason, so I used the bzroot that came with unRAID along with the bzimage I have been compiling and, lo and behold, I CAN REBOOT! There are still some issues with the card, but I think they may not be related. It's just weird: it prompts me to reboot to finish installing the drivers, but when I reboot, that message doesn't go away and the card still doesn't work. I'm wondering if maybe Windows isn't shutting down cleanly...
  7. Haha, that's true. That is the gist where I found the code, yes. I have been working on a script that checks the unRAID, Slackware, and kernel versions, then goes out and downloads the appropriate packages and files and follows steps similar to gfjardim's gist and the one on the wiki. The only problem is they're both pretty old, and this is the first time I've ever tried to compile a kernel. I managed to get rid of most of the negative-sounding output, except for what look like a few warnings during the make. I finally managed to get a bzimage and bzroot that look pretty similar to the ones that come with unRAID, but it was late last night and I was unable to test them. So I haven't been able to verify that it fixes the problem just yet, but it worked for the creator of that patch once he also made a few changes to syslinux.
  8. I just bought an Elgato HD60 Pro, and when I pass the card through to my Windows 10 VM, it crashes the entire server when I shut down or reboot the VM. It appears the issue is resolvable by inserting the following lines into a couple of kernel files.

drivers/pci/quirks.c:

DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_YYE, PCI_DEVICE_ID_YYE_MOZART_395S, quirk_no_bus_reset);

include/linux/pci_ids.h:

#define PCI_VENDOR_ID_YYE 0x12ab
#define PCI_DEVICE_ID_YYE_MOZART_395S 0x0380

I have been working for a while to rebuild the kernel with this code, and I think I've got it building now, but all I really have to show for my work is a pissed-off wife. I also know that all this work will go away as soon as Unraid updates the kernel again, so I figured I would ask if there's any way the code could get added into the next release. Thanks and keep up the great work!

EDIT: I don't know if this should be marked Minor or Urgent, sorry. It crashes the entire server, forcing me to hold down the power button to reboot it, which the sidebar says is urgent, but it only affects people who are trying to use this capture card, so it doesn't seem like an urgent issue. Additionally, I'm trying to generate the diagnostics zip, but for some reason the page just navigates to the link it should be downloading from and doesn't actually download anything. I confirmed that it works on my other Unraid server (both on 6.6.6); it generates the diagnostics zip without any problems. tardis-diagnostics-20190112-2028.zip
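For anyone wanting to try the same patch in the meantime, the build itself follows the standard upstream kernel flow; a rough sketch is below (the source path, kernel version, and the bzimage-new output name are placeholders, and this skips unRAID's own packaging of bzroot/bzmodules):

# standard kernel build steps -- paths and names here are placeholders
cd /usr/src/linux-4.18.20            # patched source tree matching the running unRAID kernel, with its .config in place
make oldconfig                       # accept defaults for any new config options
make -j"$(nproc)" bzImage modules    # build the kernel image and modules
cp arch/x86/boot/bzImage /boot/bzimage-new   # boot it from a separate syslinux entry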
  9. Anyone figure out how to get this working with a separate IP yet?
  10. Upon reading through the rest of the posts, I decided it wasn't NEARLY secure enough. Now I just need to figure out the steps to have my cellphone's presence on the LAN prompt the server to request that I insert and touch my YubiKey U2F. If my phone is not present, the server wouldn't care about the YubiKey. Only after the YubiKey is inserted and verified will the server care about its attached retinal and fingerprint scanners, which are the final key to decrypting the first server. Now the only way to get anything off of there is to... steal me as well? God forbid I need to ask someone to boot my servers while I'm out of town because of a power outage. Haha
  11. Thanks for the info! I have been toying around with running two unRAID servers (one on a Dell R710, one on my primary desktop with VMs getting passed hardware) and I took what you did and built upon it. We will call these servers U1 and U2. Both have a static IP and are plugged into a switch. U2 has a share named U1-key and U1 has a share named U2-key, each with a corresponding secure account. If both are on and I reboot U1, it reaches out to U2 on boot and gets its key; if both are on and I reboot U2, it reaches out to U1 and gets its key. If both are off, I boot both and manually start the array on U2 (much faster boot time), and by the time U1 is up it pulls its key from U2. Seems to be working well for me so far. Best of all, with this setup, if both are off (as would happen if someone walked off with my server rack), both keys sit on encrypted drives, so neither of them should be bootable without my key.
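For anyone wanting to replicate this, here is a rough sketch of what the boot-time fetch can look like in each server's go file. The share and server names match the description above, but the account, password, keyfile name, and the /root/keyfile destination are placeholders/assumptions, and this assumes mount.cifs is available:

# /boot/config/go snippet on U1 (sketch): pull U1's keyfile from U2's U1-key share
mkdir -p /tmp/remotekey
mount -t cifs -o username=keyuser,password=PLACEHOLDER //U2/U1-key /tmp/remotekey \
    && cp /tmp/remotekey/keyfile /root/keyfile
umount /tmp/remotekey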
  12. Good catch, John_M! Let us know if that doesn't fix it.
  13. I've actually been thinking about fixing this script up anyway. I plugged in my hard drive the other day and realized that it currently synchronizes my deletions in addition to my changes. I briefly took a crack at getting it to copy over new files while leaving old ones as they were, but I couldn't get it to work right and then I got busy. For the first error, what happens when you open Terminal from the WebGUI and run that line directly?

/usr/local/emhttp/webGui/scripts/notify -e "Unassigned Devices" -s "External Backup Completed" -d "Your files have been successfully backed up to your external drive.<br/>$finalUpdatedCount/$finalTotalCount files updated.<br/>You may now unmount it." -i "normal"

For the second one, it's hard to see what's going on without having your copy of the code in front of me. I feel like there may be an erroneous line break or stray character somewhere. Can I bother you to copy your entire script and paste it with the < > button at the top of a new message on this forum, so I can see exactly what you have? Feel free to leave off the top lines that pertain only to you.
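One more thought on the first error: it can also help to strip out the variables and fire a bare-bones notification, so we know the notify script itself is happy (same script and flags as above, just literal values):

/usr/local/emhttp/webGui/scripts/notify -e "Unassigned Devices" -s "Test" -d "Just making sure notify works.<br/>Second line." -i "normal"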
  14. I personally added SSH keys to my root account by creating the keys normally (I had to create the ed25519 keys off-system; I forget exactly why creating them on-system wasn't working...) and putting them in /boot/config/ssh/, then I modified my /boot/config/go file (which is run during every boot) to copy the files over and set the permissions by adding the following:

mkdir /root/.ssh
cp /boot/config/ssh/authorized_keys /root/.ssh/authorized_keys
cp /boot/config/ssh/id_ed25519 /root/.ssh/id_ed25519
cp /boot/config/ssh/id_ed25519.pub /root/.ssh/id_ed25519.pub
cp /boot/config/ssh/id_ed25519 /root/.ssh/id_rsa
cp /boot/config/ssh/id_ed25519.pub /root/.ssh/id_rsa.pub
chown root:root /root/.ssh -R
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
chmod 600 /root/.ssh/id_ed25519
chmod 644 /root/.ssh/id_ed25519.pub
chmod 600 /root/.ssh/id_rsa
chmod 644 /root/.ssh/id_rsa.pub

You'll also want to make sure /boot/config/ssh/sshd_config has passwords disabled and your keys set up. Actually, come to think of it, that go bit probably isn't even necessary if you just set up sshd_config to use the files in /boot/config/ssh/.
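The sshd_config side of this is just standard OpenSSH options, something along these lines (the AuthorizedKeysFile line is only needed if you point sshd straight at the flash drive instead of copying into /root/.ssh; sshd can be picky about key file permissions, so the copy-to-/root approach may be the safer of the two):

# /boot/config/ssh/sshd_config excerpt (sketch)
PasswordAuthentication no
PermitRootLogin prohibit-password
AuthorizedKeysFile /boot/config/ssh/authorized_keys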
  15. @sjohnson, for some reason my old script stopped backing things up when I would plug in my hard drive, so I've been working on it. This time I'm using rclone and it seems a lot better. The new script is as follows:

#!/bin/bash
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin

#############
# Variables #
#############
srcDir="/mnt/user"
destDir="/mnt/disks/enc-extHDD"
rcloneRemote="enc-extHDD:"
maxSize="5G"
deleteLogFiles="true"
backupDirs=("/documents/My Important Information/" "/documents/IncomeTaxes/" "/media/Pictures/dogs, cats, and birds/")
exclude=("VirtualBox VMs/" "Random Cats from Town/Terry/" "*.cache")

##########
# Script #
##########
case $ACTION in
  'ADD' )
    # Actions to perform upon mounting
    # Insert break in Logfile
    /usr/bin/echo -e "\\n-----------------------------------------------------------------------------------------------\\n" >> "$LOGFILE"
    excludeFile="/tmp/$(date '+%Y%m%d_%H%M%S')_rclone-excludes.log"
    for ((i=0; i < ${#exclude[@]}; i++)); do
      /usr/bin/echo "${exclude[i]}" >> "$excludeFile"
    done
    if [ -f /usr/sbin/rclone ] && [[ $(pgrep -cf "rclone.*sync.*$rcloneRemote") -eq 0 ]]; then
      finalTotalCount=0
      finalUpdatedCount=0
      # Sync files by looping through $backupDirs Array.
      /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): Started Sync" >> "$LOGFILE"
      for ((i=0; i < ${#backupDirs[@]};i++)); do
        if [ -d "$srcDir${backupDirs[$i]}" ]; then
          /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): Backing up ${backupDirs[$i]}" >> "$LOGFILE"
          currTrackerFile="/tmp/$(date '+%Y%m%d_%H%M%S')_rclone.log"
          /usr/sbin/rclone sync -vc --log-file "$currTrackerFile" --exclude-from "$excludeFile" --max-size $maxSize "$srcDir${backupDirs[$i]}" "$rcloneRemote${backupDirs[$i]}"
          newCount=$(more "$currTrackerFile" | grep -ci "copied (new)")
          replaceCount=$(tail -7 "$currTrackerFile" | grep -ci "replaced existing")
          noChangeCount=$(( $(tail -7 "$currTrackerFile" | grep -i "Checks:" | tr -s " " | cut -d" " -f2) - replaceCount ))
          updatedCount=$(( newCount + replaceCount ))
          totalCount=$(( updatedCount + noChangeCount ))
          finalTotalCount=$(( finalTotalCount + totalCount ))
          finalUpdatedCount=$(( finalUpdatedCount + updatedCount ))
          /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): $updatedCount/$totalCount files updated." >> "$LOGFILE"
          /usr/bin/sleep 1
        else
          /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): $srcDir${backupDirs[$i]} does not exist!" >> "$LOGFILE"
        fi
      done
      /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): Sync Ended" >> "$LOGFILE"
      /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): Totals: $finalUpdatedCount/$finalTotalCount files updated." >> "$LOGFILE"
      /usr/local/emhttp/webGui/scripts/notify -e "Unassigned Devices" -s "External Backup Completed" -d "Your files have been successfully backed up to your external drive.<br/>$finalUpdatedCount/$finalTotalCount files updated.<br/>You may now unmount it." -i "normal"
      if [ "$(pgrep -cf "rclone.*mount.*$rcloneRemote")" -eq 0 ]; then
        # RClone Mount Directory
        /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): Drive Mounted" >> "$LOGFILE"
        /usr/bin/mkdir -p $destDir
        /usr/sbin/rclone mount --max-read-ahead 1024k --allow-other --allow-non-empty $rcloneRemote $destDir &
      else
        /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): Drive Already Mounted" >> "$LOGFILE"
      fi
    else
      if [ ! -f /usr/sbin/rclone ]; then
        /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): /usr/sbin/rclone does not exist. Please install RClone via Community Applications." >> "$LOGFILE"
      else
        /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): It seems like RClone is already running. Not running a second time." >> "$LOGFILE"
      fi
    fi
    ;;
  'REMOVE' )
    # Actions to perform upon unmounting
    loopCount=0
    # Kill mount script
    while [ "$(pgrep -P 1 -cf "bash.*unassigned.devices/$PROG_NAME.sh")" -gt 0 ]; do
      mapfile -t scriptPS < <(pgrep -P 1 -f "bash.*unassigned.devices/$PROG_NAME.*")
      if [[ ${#scriptPS[@]} -gt 0 ]]; then
        /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): Unmounting prior to completion. Killing mount script (${scriptPS[*]})" >> "$LOGFILE"
        /bin/kill -9 "${scriptPS[@]}"
        /usr/bin/sleep 1
        loopCount=$((loopCount + 1))
        if [[ $loopCount -ge 30 ]]; then
          break
        fi
      fi
    done
    loopCount=0
    # Kill rclone sync processes
    while [ "$(pgrep -cf "rclone.*sync.*$rcloneRemote")" -gt 0 ]; do
      mapfile -t rclonePS < <(pgrep -f "rclone.*sync.*$rcloneRemote")
      if [[ ${#rclonePS[@]} -gt 0 ]]; then
        /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): Unmounting prior to completion. Killing rclone sync processes (${rclonePS[*]})" >> "$LOGFILE"
        /bin/kill -9 "${rclonePS[@]}"
      fi
      /usr/bin/sleep 1
      loopCount=$((loopCount + 1))
      if [[ $loopCount -ge 30 ]]; then
        break
      fi
    done
    loopCount=0
    # RClone Unmount Directory
    while [ "$(pgrep -cf "rclone.*mount.*$destDir")" -gt 0 ]; do
      /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): Unmounting Drive" >> "$LOGFILE"
      /bin/fusermount -u $destDir
      /usr/bin/sleep 1
      loopCount=$((loopCount + 1))
      if [[ $loopCount -ge 30 ]]; then
        break
      fi
    done
    /usr/bin/echo "$(date '+%H:%M:%S (%Y%m%d)'): Drive Safe to Remove" >> "$LOGFILE"
    if [ -d $destDir ]; then
      /usr/bin/rm -r $destDir
    fi
    ;;
esac

errorsVGrep="modify window is\\|waiting for checks\\|waiting for transfers\\|waiting for deletions\\|Transferred:\\|Checks:\\|Elapsed time:\\|Errors:\\s*0\\|INFO\\s*:\\s*$\\|^$\\|copied (new)\\|Copied (replaced existing)\\|Transferring\\|*.*% done\\|::::::::::\\|/tmp/.*rclone.log"
nonErrorLogs=()
mapfile -t logFiles < <(find /tmp -maxdepth 1 -type f -name "*_rclone*" | grep -v ".err\\|_rclone-excludes")
for ((i=0; i<${#logFiles[@]}; i++)); do
  mapfile -t errors < <(more "${logFiles[i]}" | grep -iv "$errorsVGrep")
  #errors=""
  #errors=$(more "${logFiles[i]}" | grep -iv "$errorsVGrep" | sed ':a;N;$!ba;s/\n/\t\n/g')
  if [ "${#errors[@]}" -eq 0 ]; then
    nonErrorLogs+=("${logFiles[i]}")
  else
    /usr/bin/echo -e "\\n##### POSSIBLE ERRORS DETECTED #####\\n${logFiles[i]}.err:\\n" >> "$LOGFILE"
    for ((j=0; j<${#errors[@]}; j++)); do
      /usr/bin/echo -e "\\t${errors[$j]}" >> "$LOGFILE"
    done
    /usr/bin/echo -e "\\n##### END OF ERROR LIST #####\\n" >> "$LOGFILE"
    /usr/bin/mv "${logFiles[i]}" "${logFiles[i]}.err"
  fi
done

#Delete non-error log files
if [ "$deleteLogFiles" = "true" ] && [ "${#nonErrorLogs[@]}" -gt 0 ]; then
  /usr/bin/rm "${nonErrorLogs[@]}"
fi

#Delete rclone-excludes
mapfile -t excludesFiles < <(find /tmp -maxdepth 1 -type f -name "*_rclone-excludes.log")
if [ "$deleteLogFiles" = "true" ] && [ "${#excludesFiles[@]}" -gt 0 ]; then
  /usr/bin/find /tmp "$(pwd)" -type f -name "*rclone-excludes.log" -exec rm "{}" \;
fi

sleep 1

if [ "$(find /tmp -maxdepth 1 -type f -name "*_rclone.log.err" | wc -l)" -gt 0 ]; then
  /usr/local/emhttp/webGui/scripts/notify -e "Unassigned Devices" -s "External Backup - Errors Found" -d "Possible error(s) detected.<br/>Check /tmp for .log.err files." -i "warning"
fi

## Available variables:
# AVAIL      : available space
# USED       : used space
# SIZE       : partition size
# SERIAL     : disk serial number
# ACTION     : if mounting, ADD; if unmounting, REMOVE
# MOUNTPOINT : where the partition is mounted
# FSTYPE     : partition filesystem
# LABEL      : partition label (name of mount, e.g. /mnt/disks/myDrive returns "myDrive")
# DEVICE     : partition device, e.g. /dev/sda1
# OWNER      : "udev" if executed by UDEV, otherwise "user"
# PROG_NAME  : program name of this script (if the script is "/boot/config/plugins/unassigned.devices/123.sh", returns "123")
# LOGFILE    : log file for this script

EDIT: Cleaned it up and put in some error checking and other features that weren't in the script I posted last night.

EDIT 2: Important note: if you're trying to exclude a subdirectory of an included directory, you have to specify the folder hierarchy all the way back to the "root" of the included folder. For example, with '/media/Pictures/' in the top list:

  • If you don't want the "Terry" folder from '/media/Pictures/Random Cats from Town/Terry/', you have to specify 'Random Cats from Town/Terry'.
  • If you don't want any random cats, you have to specify 'Random Cats from Town'.
  • If you don't want pictures of Terry from February 13th, and the files are labeled like '20170213_001.png', you could specify something like 'Random Cats from Town/Terry/20170213_*.png'.

This took me a second to figure out, so I figured it would be worth sharing.
  16. I want to create a bridge that is not associated with a physical NIC. Is that possible to do in the GUI? I guess I could just set up a bunch of VLANs on one of the physical ports and then not have anything plugged into it. I should still be able to attach pfSense to each of the VLANs and have it do my routing. I was just thinking the better way to do it would be to create virtual NICs, but for simplicity's sake, maybe just creating a bunch of VLANs on an unused port would be better.
  17. Okay, awesome, thanks for clearing that up for me. I thought it was custom networks all around, not just for Docker containers. In that case, it seems to me like you can go into the command line and run the following three commands:

brctl addbr br9_name
ip addr add 192.168.0.2/24 dev br9_name
ip link set dev br9_name up

Docker will then create its network corresponding to that virtual bridge automatically when it starts, and then you're good to go. Testing it out on my box seems to work as I would expect, however there's an issue when the bridge name has capital letters in it: it shows up as a network you can select, but when the docker container gets recreated, the command fails because it converts br9_Name to br9_name, and since names are case sensitive, br9_name doesn't exist. I don't know if this is something that would be fixed in your Dynamix code or elsewhere, but it seems to me like one of the following things could fix it:

  • Filtering networks so those with capital letters don't get automatically created
  • Making the docker run command use the network's name with capital letters preserved

Either way, thank you for all of your hard work, bonienl! Keep it up!

EDIT: After playing around with this more, it looks like in 6.4 personally created networks like the one above are no longer persistent across reboots. Is it a terrible idea to add the commands to my go file so they're recreated upon reboot?
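For what it's worth, persisting the bridge via the go file would just mean re-running the same three commands at boot; a sketch using the example name and address from above:

# /boot/config/go addition (sketch): recreate the docker-only bridge at boot
brctl addbr br9_name
ip addr add 192.168.0.2/24 dev br9_name
ip link set dev br9_name up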
  18. So I installed 6.4.0_rc10b, and I'm not seeing what exactly you're talking about. Under the Network Settings, I see where you can set up a routing table now, but it doesn't actually let me set up new routes unless the gateway already exists. The gateways seem to be the bridges I was creating with the method I mentioned earlier, and I'm not seeing a way to create new macvlan gateways in the webgui. Is there something I am missing?
  19. Thanks for the info! I've had a few reasons to want to upgrade to 6.4, and it looks like I've got a really good one now. I'll check it out later tonight.
  20. So I've been working on virtualizing my pfSense router for coming up on a week now and I'm starting to get there, but there's one really weird issue I can't seem to nail down. I have a Dell R710 and an Intel E1G44ETBLK Pro/1000 ET PCI-E quad-port NIC that, after much headache, I managed to pass through at a hardware level. If someone finds this thread later and is having similar problems passing through their NIC, I ended up needing to add

vfio-pci.ids=[id of NIC],[id of bridge in the same IOMMU group as the NICs] vfio_iommu_type1.allow_unsafe_interrupts=1

to the append line of the syslinux.cfg file. I don't know if that's the best/only way to get it to work, but it is the only way that it would work for me. Now onto the reason I opened this thread in the first place. After passing through the 4 hardware ports to a pfSense virtual machine, I set up igb0 as my WAN and igb1 as my LAN, and I set up a handful of bridges using the following commands so I could have different networks for different types of VMs/docker containers:

brctl addbr br9_Nickname
ip link set dev br9_Nickname up

All the bridges are named things like br9_TestNet, br9_Tools, br9_Media, etc. This lets me quickly see each bridge's purpose, and the br9 at the beginning makes unRAID show the interfaces in the VM screens. If I left off the br9 part, they wouldn't show up; I feel like I could have still manually added them by editing the XML, but I found this route easier. I have several computers plugged into the switch that my LAN connection is plugged into, and those computers have absolutely no problem connecting to the internet. However, the VMs I have on the br9_ bridges are all doing the same thing. If I run "ping 8.8.8.8" it pings just fine. If I run "ping google.com" it resolves and pings just fine. Same results with ipinfo.io and its IP. However, if I try something like "curl ipinfo.io/ip" or "wget ipinfo.io/ip" it tries for a while but eventually times out. If I try an apt-get update, it tries to connect to the ubuntu.com apt repos, but again, it times out. I'm able to ping us.archive.ubuntu.com just fine, I just can't read from it. Does anyone have any idea what step I'm missing? I'm not sure what other information you might need, so if you need anything else, please ask. Thanks in advance.

EDIT: As it usually works out, the problem is that I hadn't asked someone how to do it. Right after I posted this, I found the missing piece. I needed to also run the following command on each of the bridges:

ip addr add XXX.XXX.XXX.XXX dev br9_Name

Once I gave the bridges an IP address, curl, wget, apt-get, and everything else started working properly.

EDIT 2: Actually, there may be one step I didn't mention that makes a difference and may not be ENTIRELY obvious. You need to set the address to something other than the IP address pfSense uses on that bridge, and that address needs to be the gateway on your devices. So, for example, if your bridge in pfSense uses 192.168.15.1/24, you would run ip addr add 192.168.15.2 dev br9_Name, and then in your DHCP config or in your OS you would set the default gateway to 192.168.15.2.

EDIT 3: I feel like, with as much effort as it has taken me to figure all of this out, I will be writing a guide for pfSense on unRAID in the coming days. For now, just adding a note on how to get Docker containers to work properly. Given the example from EDIT 2, the corresponding docker network create command would look like this:

docker network create -d bridge --subnet 192.168.15.0/24 --gateway 192.168.15.2 -o parent=br9_Name Name

You would then go into the docker container you want to add, click Advanced settings, and add something like this to the Extra parameters box:

--network Name --ip 192.168.15.3
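If a container doesn't come up on the new network, a couple of standard Docker commands make it easy to confirm the network was created the way you expect:

docker network ls              # the new "Name" network should appear in the list
docker network inspect Name    # confirm the 192.168.15.0/24 subnet and 192.168.15.2 gateway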
  21. So you're saying it will move it over if it tries to put it in /mnt/user/domains/myArrayVM/disk1, but I specify to put it on disk 3 in the array and my domains share is set to Prefer? If that's the case, I would still have to make a domains-array share and save the vdisk in /mnt/user/domains-array/myArrayVM/disk1.

EDIT: I ran a few tests. Putting a VM on disk 3 while domains was set to Prefer ended up having the vdisk move to the cache. Putting the VM on domains-array, where domains-array is set to force the use of the array, allowed me to keep the VM disk on the array. I'm just going to manually set the drive directory when I need a VM on the array. It doesn't happen very often, so it shouldn't be a huge issue.
  22. I swear I tried that and it moved my VM to the cache when the mover kicked off. I'll try it again for sanity's sake. Thanks.
  23. I'm looking for a way to make certain VMs use the array as opposed to the cache. I have two use cases for this. One is that I would like to have several VMs with larger hard drives; I don't want them chewing through my cache drive, so I would prefer to have them use a small fraction of my primary array space. The other is that I would like to have a pfSense VM, and from what I've heard, those can be a little rough on SSDs. The only problem is, I would also like to keep most of my VMs on the cache. When choosing where to keep the domains share, we are given four options for the cache drive: No, Yes, Only, and Prefer. If my understanding of these is correct:

  • No won't work because it doesn't allow me to use the cache.
  • Yes won't work because it moves everything from my cache to my array when the mover is invoked.
  • Only won't work because it doesn't allow me to use my array.
  • Prefer won't work because it moves everything from my array to my cache when the mover is invoked.

Is there some option that just says "leave everything where I originally put it"? Thank you in advance.

EDIT: I had an idea right after I posted this. I'm going to try it out, but I'm also posting it here to get an opinion on going about it this way. I created a share named "domains-array" that is set to No for the use-cache option. I'm thinking I should be able to set up new VMs, set the hard drive location to manual, and tell it to save the hard drive there. I have no reason to think the hard drive would get moved after doing it this way. Does anyone have any input on whether or not this is the way it should be done?