About CraigGivant


  1. Thanks for the work and information on this. In my case, disabling the Plex Docker all but eliminated the errors. I didn't let it run a full 24 hours, but the log went from blood red to nothing. I'm on Version. Out of curiosity, how do I get detailed info about the actual Docker container? It must just be a duh moment, but I'll be damned if I can find it. I remember it being the one listed as from Plex, and all updates that have popped up have been installed. I'm going to try disabling music scanning and see what shakes out.
  2. Using Plex. Almost certain that will be the culprit. Using the one directly from the plex repo.
  3. I've been having the same issue for days now. It could be a coincidence, but it seemed to happen right after an upgrade to 6.7.2. The error occurs every 20 minutes in my case and repeats over 100 times per second for several seconds, to the point where the syslog fills up, Dockers crash, and the only corrective action is a reboot. The error is: root: error: /gcsp/2.0: missing csrf_token
  4. I did not want to start a new thread for this, as I'm merely reporting a finding. I found this thread because I too was experiencing VERY slow boot times. In fact, the VM was going into suspend mode and I would need to force-stop it the first time; after that it would boot, but was taking up to 10 minutes to do so. I was assigning 48GB of RAM to a Windows 10 VM running on a Ryzen 1950X on 6.6.6. Dropping this down to 32GB (the host has 80GB), the machine boots in about 15 seconds. I'm not posting diagnostics or looking for any support, and I will retry this once 6.7 reaches stable, but if anybody wants to investigate this, just hit me up and I'll grab some logs.
  5. I understand. The farthest I got with this was trying the Dynamix Temp plugin but even after installing Perl I could not get the drivers to detect. I'm on a Gigabyte X399 with a 1950X. Haven't tried any other plugins yet but would assume that would be what is needed.
  6. There's really no need for an update. It does what it needs to do, which, simply put, is writing, moving, and checking various pieces of data to and from different memory locations.
  7. Not sure if this helps the OP or the follow-up poster, but I use a Corsair Commander Pro to adjust LEDs and fan speeds. This device is plugged into a USB 2 header on my MOBO and shows up as a device eligible to attach to a VM. By doing so, I am able to control everything using iCUE software in a Windows VM which is running almost all the time. My pump is NOT attached to this device; I control its speed as well as the LEDs for the pump through the BIOS. I'm sure they could be hooked into the Commander, but I am out of ports. With that said, I would like to control these devices in UnRaid when the VM is not running, but I have yet to investigate a method to do so. It's on my list of things to do.
  8. You should check the settings you are using for each of your shares. It sounds like you may have them set to "use cache disk", which you may not want for large files such as movies. If you set that option to No for any shares that contain large files, they will get written directly to the array and not the cache drive. I suggest you set the appdata share to "Use Cache Disk: Only" and set cache to No for any other shares that do not need faster transfer speeds. Keep in mind that the cache drive is not part of the array, so any data there, such as appdata, will need to be backed up manually.
  9. Hopefully you made a screenshot or wrote down which drive was in which disk slot. If so, I believe you will need to do a New Config and re-assign the drives to the same slots. This happened to me when I changed controller cards, as the new card reported much shorter names for the drives than the original card did.
  10. I got this to work, but not as elegantly as the original script. Once I figured out the path syntax, nothing I did would pass the backuplocation variable properly to the rest of the script, so I removed the variable and entered the path directly in every instance of $dir. Here is what worked:

```
#!/bin/bash
#backs up
datestamp="_"`date '+%d_%b_%Y'`
mkdir -vp '/mnt/user/'030\ -\ IT\ Support'/'Backup\ Data'/Unraid/'VM\ Settings'/'$datestamp'/nvram'
mkdir -vp '/mnt/user/'030\ -\ IT\ Support'/'Backup\ Data'/Unraid/'VM\ Settings'/'$datestamp'/xml'
echo "Saving vm xml files"
rsync -a --no-o /etc/libvirt/qemu/*xml '/mnt/user/'030\ -\ IT\ Support'/'Backup\ Data'/Unraid/'VM\ Settings'/'$datestamp'/xml'
echo "Saving ovmf nvram"
rsync -a --no-o /etc/libvirt/qemu/nvram/* '/mnt/user/'030\ -\ IT\ Support'/'Backup\ Data'/Unraid/'VM\ Settings'/'$datestamp'/nvram'
chmod -R 777 '/mnt/user/'030\ -\ IT\ Support'/'Backup\ Data'/Unraid/'VM\ Settings'/'$datestamp'/xml'
chmod -R 777 '/mnt/user/'030\ -\ IT\ Support'/'Backup\ Data'/Unraid/'VM\ Settings'/'$datestamp'/nvram'
sleep 5
exit
```

Here is SpaceInvaderOne's original script for comparison:

```
#!/bin/bash
#backs up
#change the location below to your backup location
backuplocation="/mnt/user/test/"
# do not alter below this line
datestamp="_"`date '+%d_%b_%Y'`
dir="$backuplocation"/vmsettings/"$datestamp"
# dont change anything below here
if [ ! -d $dir ] ; then
    echo "making folder for todays date $datestamp"
    # make the directory as it doesnt exist
    mkdir -vp $dir
else
    echo "As $dir exists continuing."
fi
echo "Saving vm xml files"
rsync -a --no-o /etc/libvirt/qemu/*xml $dir/xml/
echo "Saving ovmf nvram"
rsync -a --no-o /etc/libvirt/qemu/nvram/* $dir/nvram/
chmod -R 777 $dir
sleep 5
exit
```

It was quite a bit of experimenting for a rookie, but worth the lessons learned. If anybody sees anything out of whack I'd appreciate hearing it, but the files are saving to my chosen path in a dated folder.
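A side note on why the variable would not pass through: in bash, unquoted variable expansions undergo word splitting, so a path containing spaces reaches mkdir or rsync as several separate arguments. A minimal sketch of the effect, using a hypothetical stand-in path inside a scratch directory (not the real backup script):

```shell
#!/bin/bash
# Demonstrates bash word splitting on an unquoted path with spaces.
cd "$(mktemp -d)"

dir="030 - IT Support/VM Settings"

mkdir -p $dir 2>/dev/null   # unquoted: mkdir receives 5 separate arguments
[ -d "$dir" ] || echo "unquoted: intended path was not created"
ls                          # instead: 030, -, IT, Support, Settings

mkdir -p "$dir"             # quoted: mkdir receives the path as one argument
[ -d "$dir" ] && echo "quoted: intended path created"
```

Double-quoting every expansion (`"$dir"`, `"$backuplocation"`) is what lets the original variable-based script handle spaced paths without backslash escaping.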
  11. I don't think mine did either. Something about when she said "oh yeah sure" led me to believe there was hope for the future. 😀
  12. Hello Everyone… I just wanted to take some time to introduce myself and thank Lime Tech for this resource as well as their software. I would also like to give a shout-out to SpaceInvaderOne for his videos, as they have proven very valuable. I can get pretty wordy, so we can leave it at this or you can continue with the TL;DR portion below. Tech is a hobby for me and I consider myself a weekend warrior. While I owned my first computer at the age of 16 (I'm now 54) and at one time contemplated tech as a career, I took a different path and became a Land Surveyor. I have operated my own Land Surveying firm for almost 15 years and have been in that industry for over 30. I have been the go-to guy at some of the companies I have worked for to "fix" IT woes, simply because I had more knowledge than anybody else. Most of the time this meant turning to the internet to seek out advice from others that "really" knew what they were doing. I am passionate about tinkering and finding solutions to problems when they crop up, and I enjoy passing any knowledge acquired along the way down to others as payback for the help I have received through the years. The land surveying industry can be fairly data intensive, although more so in quantity of files than in size. I can also be categorized as a "data hoarder" and have ISOs and floppy .img files going all the way back to DOS. What can I say, I keep stuff. My UnRaid journey began as a way to consolidate hardware. We had everything stored on a QNAP TS-870 that I upgraded myself (processor and RAM) to the Pro model. I was fine with this box but was running out of space and didn't want to play the swapping game to upgrade the 4TB hard drives. And while that box also ran VMs, the performance was not so great. VMs are important, as the plan is to downsize and eventually live in an RV, so I was also looking for a longer-term solution with a smaller footprint.
I initially looked at the QNAP TS-1685, but maxed out (the only way to do things, lol) it was close to 6 grand. I really liked the case with 16 bays and multiple M.2 slots, but I couldn't swallow the price and the lack of upgrade paths. While I'd like to think that the maxed-out model would do what I need and take me to the end of my years, I wasn't willing to give up the ability to change hardware and replace components as needed. I began researching SFF cases, hardware, etc. and spent several days doing so. Unfortunately I never really found anything I liked and almost pulled the trigger on the QNAP. Then it dawneded on me… my workstation was built last year with a Threadripper 1950X, 32GB of RAM and two 1080 Tis. I also fully water-cooled it with hardline tubing, and it is in a Fractal Design R6 case. I am not a big gamer, but I do use AutoCAD, and I sold it to the wife as the "last computer I would ever need". It was all set up just the way I like it, but I figured what the hell, let's try UnRaid and convert the Windows side to a VM. I was off and running. Due to the water cooling I only had room for two 3.5" drives, so I opted for a Sans Digital 12G 8-bay SAS/SATA enclosure and an LSI 9300 controller. I bought five 10TB WD externals and "shucked" them. I know I am not getting 12G SAS speed, but again, I like to future-proof things whenever I can. Installation and setup were a breeze, and I got Linux, OSX and Windows machines set up as VMs. The Windows machine was my original daily driver, which I converted to a VM with pass-through for a USB 3 controller and one of the 1080 Tis. I also set up Dockers for Plex, Deluge, Sonarr, Krusader and SAB along with various plugins. Everything is working top notch, and I am really impressed by the capabilities of UnRaid. While the footprint I have ended up with is larger than the 1685, I also have a pfSense router that will eventually get incorporated along with some 10GbE network cards.
Baby steps for the time being, but I feel as if I have made the right choice and will be purchasing my Pro license in the next few days. I look forward to getting to know everyone and hopefully being able to contribute to the community.
  13. I can appreciate that, but the horse has already left the barn. I also believe this affects not only the share name but also the folders within the share. I migrated all of my data over from a QNAP and literally have thousands of folders. I certainly don't want to rename them all; even if there were a script to do this easily, we have too many programs that would need to be re-pathed to find their data. I have had no issues with anything (including VMs and Dockers) using paths with spaces. I have only encountered it in this plugin as per above, as well as with SpaceInvaderOne's icon pack downloader. I'm hoping for a simple way to define this path properly to work with the scripts; if not, I'll just have to create a different share.
  14. I'm trying to run the VM Config Backup script and receiving the errors in the code box below. The path that was added to the config file is: /mnt/user/030 - IT Support/Backup Data/Unraid/VM Settings. I'm pretty sure this is an issue with the spaces and/or the dash in the path name, but I have tried a bunch of different syntax, including single quotes, backslashes to escape the spaces, etc. I just can't seem to get it right. Can anybody assist with how this path should be defined in the config? Thanks!

```
Script location: /tmp/user.scripts/tmpScripts/vm settings backup/script
Note that closing this window will abort the execution of this script
/tmp/user.scripts/tmpScripts/vm settings backup/script: line 10: [: too many arguments
As /mnt/user/030 - IT Support/Backup Data/Unraid/VM Settings//vmsettings/_27_Jan_2019 exists continuing.
Saving vm xml files
rsync: link_stat "/mnt/user/030" failed: No such file or directory (2)
rsync: link_stat "/-" failed: No such file or directory (2)
rsync: link_stat "/IT" failed: No such file or directory (2)
rsync: change_dir "/Support" failed: No such file or directory (2)
rsync: change_dir "/Data/Unraid" failed: No such file or directory (2)
rsync: mkdir "/Settings//vmsettings/_27_Jan_2019/xml" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(664) [Receiver=3.1.3]
Saving ovmf nvram
rsync: link_stat "/mnt/user/030" failed: No such file or directory (2)
rsync: link_stat "/-" failed: No such file or directory (2)
rsync: link_stat "/IT" failed: No such file or directory (2)
rsync: change_dir "/Support" failed: No such file or directory (2)
rsync: change_dir "/Data/Unraid" failed: No such file or directory (2)
rsync: mkdir "/Settings//vmsettings/_27_Jan_2019/nvram" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(664) [Receiver=3.1.3]
chmod: cannot access '/mnt/user/030': No such file or directory
chmod: cannot access '-': No such file or directory
chmod: cannot access 'IT': No such file or directory
chmod: cannot access 'Support/Backup': No such file or directory
chmod: cannot access 'Data/Unraid/VM': No such file or directory
chmod: cannot access 'Settings//vmsettings/_27_Jan_2019': No such file or directory
```
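For reference, the "line 10: [: too many arguments" message points at the root cause: the script's directory test expands the path unquoted, so the `[` builtin receives several words instead of one operand. A minimal sketch with a hypothetical stand-in path (not the real script):

```shell
#!/bin/bash
# Stand-in path containing spaces, like the real backup location.
dir="/tmp/030 - IT Support"
mkdir -p "$dir"

# Quoted: [ sees a single operand and the test behaves normally.
if [ ! -d "$dir" ]; then
    echo "making folder"
else
    echo "As $dir exists continuing."
fi

# Unquoted -- if [ ! -d $dir ] -- would hand [ the six words
# "! -d /tmp/030 - IT Support", and [ aborts with "too many arguments"
# instead of testing the directory.
rm -rf "$dir"
```

So quoting the expansion wherever `$dir` appears, rather than changing the path itself, is one plausible way to make the config value work as-is.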
  15. Problem basically solved. See Edit 2 below. Thanks to anyone that takes the time to read this entire post; I'm just hoping to learn something. I was following SpaceInvader's video to install Virt-Manager, and at the step in the video HERE I think I must have screwed something up. I opened the file using nano, made the required changes (listen_addr = "") and wrote out the file. I even went back into the file to ensure the changes stuck, and all looked well. I then stopped the array and restarted it. Unfortunately, after that I get "Libvirt Service failed to start" when clicking the VMS tab. A review of the system logs shows the following error:

```
Jan 26 23:41:39 xxxxxxx root: 2019-01-27 06:41:39.113+0000: 9765: error : main:1165 : Can't load config file: configuration file syntax error: /etc/libvirt/libvirtd.conf:1: expecting a name: /etc/libvirt/libvirtd.conf
```

I have SSH'd into the server, and the libvirtd.conf file is not in the /etc/libvirt folder (it is empty). I did find a master config file in /etc/libvirt-/, but copying that to the /etc/libvirt/ folder and rebooting does not work. In fact, a reboot deletes the file I copied, leaving /etc/libvirt/ empty again. As a disclaimer, I am a weekend warrior, and while I have enough knowledge to get around, I do not understand what is happening and am hoping not to lose the week of time I have spent setting up what had turned into a great server. I had several VMs running flawlessly along with about 6 Dockers, plugins, etc. My only goal was to install virt-manager to keep XML changes to the config file persistent when using a GUI. I have not included diagnostics because my hope is that somebody more knowledgeable than myself can tell what is wrong from the error above. I assume I will need to find and/or download the correct file and put it somewhere, but that's where my ideas end. If this is incorrect, then I'm up for whatever it takes.

The next lines of the log file state:

```
Jan 26 23:41:39 xxxxxxx emhttpd: shcmd (109): exit status: 1
Jan 26 23:41:39 xxxxxxx emhttpd: shcmd (111): umount /etc/libvirt
Jan 26 23:41:39 xxxxxxx emhttpd: nothing to sync
```

I'm thinking that the /etc/libvirt-/ folder is where the file goes and that folder then gets mounted as /etc/libvirt, but as I said, I have no idea. I'm hoping to be able to correct this in some fashion without needing to re-install. I'd also like to figure out what went wrong so I can pick up some knowledge along the way. TIA to anybody that has any suggestions!

EDIT: Well, I was able to successfully get the service restarted by deleting the libvirt image, turning off the service, rebooting and re-enabling the service. Of course, all my VMs were gone, but I thought I was smart by making a copy of the old image. I repeated these steps and rebooted with the service left off. I then copied the old image file back to the image directory, left the service off, and rebooted. When I turned the service on this time, I got the same error. This leads me to believe that the problem is within the image file, but I have no way of knowing if (or how) to edit that file. I really don't want to lose my VM configs, so if anybody knows how to edit it I would appreciate the help. Barring that, I guess I will just need to re-configure all the prior machines.

Edit 2: I am back where I was before encountering this issue. I also have virt-manager running within a Fedora VM, as outlined in SpaceInvader's video. All VMs have been re-created and re-configured properly. Other than losing 12 hours, I'm thrilled, but I'm still miffed and would appreciate any of you that are way more qualified than me chiming in. I believe when using nano the first time I must have fat-fingered something, which was then written to the libvirt.img. Assuming that is in fact the case, I'd really like to know if there is a way to edit that image file, or at least look inside of it.

While I will certainly be more careful with future edits, I'd still like to pick up some knowledge. Thanks.
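On the open question of looking inside libvirt.img: the umount line in the log suggests it is a loopback filesystem image mounted at /etc/libvirt, so in principle it can be mounted and browsed like a disk while the libvirt service is stopped. A rough, untested sketch; the image path here assumes the stock Unraid location (check Settings > VM Manager for the actual path on a given server), and the commands need root:

```shell
# Sketch only -- assumes the default Unraid image location and that the
# libvirt service is stopped; run as root.
mkdir -p /mnt/libvirt-img
mount -o loop /mnt/user/system/libvirt/libvirt.img /mnt/libvirt-img

# If the damage is the bad first line the syslog reported, the config
# should now be reachable directly inside the image:
nano /mnt/libvirt-img/libvirtd.conf

umount /mnt/libvirt-img
```

This would also be a way to rescue the VM XML definitions (under qemu/ inside the image) from a saved copy of a broken image without re-enabling the service against it.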