About tomjrob

  1. Upgraded from 6.8.3 to 6.9.2 a few days ago. The upgrade went smoothly and all Dockers and VMs are running great. The only issue was the cache "prefer" setting not working on a share with a space in its name; renamed the share to remove the space and all is working. Thanks to the entire team for the great work. Next step is to try multiple cache pools.
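For anyone who hits the same space-in-share-name issue, the rename can also be done from the console. A rough sketch only: `rename_share` and the "Media Files"/"MediaFiles" names are made up for illustration, and share folders live under /mnt/disk* (plus /mnt/cache if the share uses the cache).

```shell
# Rename a share's top-level directory on every array disk so the name
# contains no space. "Media Files" -> "MediaFiles" is a made-up example.
rename_share() {
  root=$1; old=$2; new=$3
  for d in "$root"/disk*; do
    if [ -d "$d/$old" ]; then
      mv "$d/$old" "$d/$new"
    fi
  done
}

# On a real server (array started, share idle):
# rename_share /mnt "Media Files" "MediaFiles"
```

The share's settings would then need to be re-applied under the new name in the GUI.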
  2. I have a very similar setup for my wife: a Windows 10 desktop KVM on Unraid, accessed from a Raspberry Pi 3 running an older free version of WTware. Link to the product is here. This has been running flawlessly for a few years now; I haven't had to do anything to it since it was set up. When the RPi is turned on, it goes immediately to the sign-in screen. She enters her thin client password and the desktop loads; enter the password for the desktop and proceed. Sound, mouse, and keyboard all work fine. Her "monitor" is a cheap TV.
  3. Could use some guidance. I have installed and set up the Bazarr docker and have been able to get it working with my series (TV shows) without any issues. This is a great addon and I appreciate the work done on it. Very helpful. I am having a problem with movies, though. I am getting the message about the path being invalid, and I can see why it is happening: it appears that Bazarr assumes each movie will always be in its own folder. However, mine are not. Example below. Actual path: /mnt/Movies/Features/South Pacific 1958 Bluray-1080p.mkv Ex
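If folder-per-movie really is required, the flat layout can be converted in one pass. A sketch, not a tested migration: `make_movie_dirs` is a made-up helper, the path comes from the post above, and I would try it on a copy first.

```shell
# Move each bare .mkv in a directory into its own folder named after the
# file, matching the folder-per-movie layout described above.
make_movie_dirs() {
  src=$1
  for f in "$src"/*.mkv; do
    [ -e "$f" ] || continue            # no .mkv files at all
    name=$(basename "$f" .mkv)
    mkdir -p "$src/$name"
    mv "$f" "$src/$name/"
  done
}

# make_movie_dirs "/mnt/Movies/Features"
```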
  4. I found the issue to be a user script that was creating symlinks for virt-manager persistent snapshots. It was referencing disk locations that were no longer valid after I reintroduced the SSD back into the array. It was set to run at array startup, and the script restarts the libvirt service; however, it was not completing due to the incorrect disk locations. Stopped the script and libvirt starts every time. Moved the specified locations back to the originals, re-ran the script, and all is OK. Sorry for the fire drill, and thanks for the assistance in identifying the root cause.
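Given that root cause, a startup script like this can guard itself so a stale disk path aborts the run instead of leaving libvirt half-restarted. A sketch under assumptions: `targets_exist` is a made-up helper, and the qcow2 path and rc.libvirt lines in the comments are illustrative, not from the actual script.

```shell
# Verify every symlink target before touching libvirt; stop at the first
# path that no longer exists and return non-zero.
targets_exist() {
  for t in "$@"; do
    [ -e "$t" ] || { echo "stale path: $t" >&2; return 1; }
  done
}

# In the startup script, something like:
# if targets_exist /mnt/cache/domains/win10-snap.qcow2; then
#   ln -sfn /mnt/cache/domains/win10-snap.qcow2 /var/lib/libvirt/images/win10-snap.qcow2
#   /etc/rc.d/rc.libvirt restart
# fi
```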
  5. I rebooted the array in safe mode and ran diagnostics prior to starting the array (that is the first file). I then started the array and started the libvirt service with VM Manager. This time it started and the VMs showed up in the dashboard. Ran diags again (that is the second file). I guess there is a plugin or something causing the problem. I hope this helps to identify the issue. I really appreciate the help.
  6. Rebooted 2x; it does not fix the problem. I also noticed that the system gets hung trying to unmount disks when rebooting. I have to force a reboot, which obviously causes an unclean shutdown. Any other ideas? Thanks in advance.
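One way to narrow down an unmount hang like this is to check which processes still hold files open on the array disks before shutting down. A sketch that scans /proc directly, so it works even if lsof isn't installed; `open_under` is a made-up name, and only processes the caller can read are reported.

```shell
# List processes holding open files under a given mount point by reading
# the /proc/<pid>/fd symlinks. Prints "/proc/<pid>: <open file>".
open_under() {
  path=$1
  for fd in /proc/[0-9]*/fd/*; do
    target=$(readlink "$fd" 2>/dev/null) || continue
    case $target in
      "$path"/*) echo "${fd%/fd/*}: $target" ;;
    esac
  done
}

# open_under /mnt/disk1
```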
  7. Hoping someone can help here so that I do not lose all of my VMs. Used mover to empty the cache SSD so I could check it with Crucial's Windows program. It looked like it was empty. Checked it in a Windows machine; all OK. Put it back into Unraid and used mover to move the Docker and KVM (libvirt) files back onto the cache. Restarted Docker and recovered all containers, but now the libvirt service fails to start. Libvirt log: 2020-03-14 14:01:57.797+0000: 23130: info : libvirt version: 5.10.0 2020-03-14 14:01:57.797+0000: 23130: info : hostname: TOWER
  8. Upgraded from 6.8.1 to 6.8.3 today. No issues. All Dockers and VMs working. Kudos to the team.
  9. Upgraded today. All is good. Thanks for the efforts here.
  10. Just took a look at this thread and compared it to the XML I have for my Windows 10 VMs. One difference I see is that I am using a different (older) version of the virtio ISO: virtio-win-0.1.141-1.iso vs virtio-win-0.1.171-1.iso. I have two Windows 10 VMs running, and the latest one was created a couple of weeks ago with 0.1.141-1.iso without issue. Maybe try a different version than virtio-win-0.1.171-1.iso. I only mention this because the error says to ensure that the drivers are on the media. Good luck.
  11. (Solved): BTRFS Cache Pool Errors. The ADATA SSD "worked" when connected to the LSI 9211-8i controller, but did not work and kept dropping offline when connected to the ASRock 970 Extreme4 motherboard SATA controller. However, I noticed that even though it worked when connected to the LSI card, it was running very slowly and also very hot (124 degrees F). So, I decided to take the advice and exchange it for a Samsung 860 EVO SSD. Much better performance, and it runs much cooler (75 degrees F). As always, thanks for the great advice here. Hope this helps some
  12. Thank you for the quick response. As you suggested, I moved the ADATA device to a different port (cable). The new connection is also on a different controller: now connected to the LSI card instead of a motherboard SATA port. Reintroduced it into the cache, and the rebalance completed without error. The ADATA has been running fine for a few hours in this configuration. I will monitor for a while and report back as solved if there are no other errors. I am suspecting now that the new ADATA SSD has an incompatibility with the port on the motherboard, because as mentioned in the initial post t
  13. Could use some help here with cache pool errors. I apologize for the long post, but I believe I should provide as much info as I can if I am going to ask anyone for help here. So, long story, but here goes. The array had been running fine for months with the initial configuration of the cache pool. Initial cache pool prior to any issues: Cache - 256GB SSD, Cache2 - 256GB SSD, Cache3 - 512GB SSD. Goal: remove both 256GB SSDs and replace them with a new ADATA 512GB SSD, ending up with (2) 512GB SSDs in a RAID 1 pool. Used the pro
  14. Just finished setting up and testing WireGuard. Very easy, and all is working great. I can access the Unraid GUI, Unraid shares, and all servers on the LAN from a remote laptop in a different state. Great performance; very impressed so far. Thanks to the entire team. Next step to try is iPad client access.
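For anyone curious what the client side of this looks like, a WireGuard client config for "remote access to LAN" is a small text file of roughly this shape. A hedged sketch only: every key, address, and hostname below is a placeholder, and the real values come from Unraid's WireGuard settings page.

```ini
[Interface]
; Placeholder key; Unraid generates the real one per peer
PrivateKey = <client-private-key>
Address = 10.253.0.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = tower.example.com:51820
; Tunnel subnet plus the LAN subnet, so LAN servers are reachable
AllowedIPs = 10.253.0.0/24, 192.168.1.0/24
PersistentKeepalive = 25
```

Including the LAN subnet in AllowedIPs is what makes the other servers on the network reachable through the tunnel, not just the Unraid box itself.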
  15. Upgraded from 6.7.2 last night. So far, all is good. Dockers and VMs all running. No errors in the system log. Thanks for all of the work to get this released. Great work!