Tuftuf last won the day on June 11 2017


Community Reputation: 18 Good


About Tuftuf


  1. I've gone to the effort of building an 11th-gen NUC system to replace my main Unraid server, partly because I needed to troubleshoot this bug! The 11th-gen build was a bit of a failure, as driver support for 11th-gen CPU Quick Sync is shocking right now. So I found a cheap used 10400T mini PC! Everything has been running from there as of last night. Now it's time to unplug some hard drives and start testing this bug again! It's bad, but good to see others are still seeing the issue.
  2. @DZMM I moved over from PlexGuide to your script over a year ago. Using the old version of the script without cache settings works as expected. If I use the new version with a cache defined, I get an extra folder created within my mount point with the same name as my mount point. Am I missing something, or is the configuration below valid? The paths have all changed as I moved it to a new system. I'm not certain whether I want the cache setting or not, but I dislike that the new script isn't working correctly for me. I've also read that the cache backend is no longer maintained within the rclone code.
  3. I'm setting up another system and changing how my paths are arranged. The main question here is: are people using the cache setting? I'm reading on other forums and places that the cache setting shouldn't be needed, and hasn't been for a long time, since ranged gets were added. Do I need this cache mount? Can I just remove the 3 lines defining it? /mnt/storage is an SSD cache pool. EDIT - I have changed /mnt/remotes/rclonefs to be on the SSD. I was going to place the rclone mount in /mnt/remotes as I expected this to be read-only.
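For what it's worth, a minimal sketch of mounting the remote directly with rclone's VFS layer instead of the deprecated cache backend (the remote name `gdrive:` and the flag values are my assumptions, not taken from the script):

```shell
# Mount the remote straight to the SSD-backed path, no cache remote in between.
# VFS chunked (ranged) reads replace what the cache backend used to do.
rclone mount gdrive: /mnt/remotes/rclonefs \
  --allow-other \
  --dir-cache-time 72h \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off \
  --buffer-size 256M &
```

If the mount behaves with this, the three cache-remote lines in the config should be safe to drop.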
  4. Can you expand on what you mean by sharing the same interface and virtio-net? I'm trying to understand whether this is viewed as a bug or not. I try to keep inter-VLAN routing to a minimum, as Unraid is in my shed and the router is in the house. What I'm doing seems a very simple configuration, and these errors are happening when upgrading to 6.9.x. The solution shouldn't be to not run a VM or Docker instance on br0. I'm running one VM on my system and it's in a separate VLAN (br0.50). *EDIT* Wow, I did not expect to see posts going back to 201
  5. Thanks for confirming you are able to run it on another VLAN. It doesn't fit with how I have things configured, but I can look at moving these to another VLAN.
  6. I believe I've been seeing the same issue since I upgraded to 6.9.0; I have not tried 6.9.1 yet. Following a thread in a Facebook group, someone else has seen this issue on 6.9.0 and 6.9.1. After being pointed to macvlan issues, he found the same errors in his syslog, similar to those attached here. He believes it's related to anything that is assigned a static IP on br0.
  7. Thanks. I have no idea at this point, other than that it's stopping at a vfio stage on a GPU used for passthrough, but that seems completely unrelated and the VM is not set to auto-start. I recently added all the drives from my main server into this box, so I'm trying to get everything working right before I leave it alone again.
  8. I'm trying to find out why this system will not shut down correctly. Any suggestions on where to look for what is keeping the mount active? I need to use fusermount -uz /mnt/user for the stop command to complete. EDIT - Added the Open Files plugin. Also, it stops here at login but otherwise works OK. firefly-diagnostics-20200618-1826.zip
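A sketch of what I'd run before falling back to the lazy unmount, to identify what's holding the mount (the /mnt/user path is from the post above; the rest is a generic approach, not Unraid-specific):

```shell
# List processes with files open under the mount point
lsof +f -- /mnt/user

# Same idea via fuser: -v verbose, -m treat the path as a mounted filesystem
fuser -vm /mnt/user

# Last resort: lazy unmount, which detaches now and cleans up once it's free
fusermount -uz /mnt/user
```

Whatever shows up in the first two commands is what's blocking the array stop.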
  9. The system is working, but boot-up stops here. Just checking: is this normal? If I go the GUI route, the desktop does appear. Thanks.
  10. The issue is that I can no longer boot from USB; I think if I take the NVMe out, everything will be back to normal. I installed Windows as a VM with the NVMe passed through, and this was working great until I rebooted the machine. It rebooted into Windows; after that I rebooted it myself, selected the USB SanDisk, and Unraid started to boot. I wish I'd taken a screenshot, but it locked up, maybe showing vfio errors. After rebooting again, I can't boot from the USB stick and can't see the boot manager anymore. I created a new USB stick, but can't boot from that either. On
  11. I use VLANs at home, and this caused all the traffic to leave via the management address even after binding the rclone upload script to an interface. The fix was to add a second routing table and a route for the IP I assigned it to. The subnet is 192.168.100/24, the gateway is , and the IP assigned to rclone upload is .
      echo "1 rt2" >> /etc/iproute2/rt_tables
      ip route add dev br0.100 src table rt2
      ip route add default via dev br0.100 table rt2
      ip rule add from
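The addresses are missing from the commands above, so here is a filled-in sketch with placeholder addresses (192.168.100.50 for the rclone upload IP and 192.168.100.1 for the gateway are my assumptions; substitute your own):

```shell
# Register a second routing table named rt2 (only needs doing once)
grep -q 'rt2' /etc/iproute2/rt_tables || echo "1 rt2" >> /etc/iproute2/rt_tables

# Link route for the VLAN subnet, sourced from the rclone IP (placeholder)
ip route add 192.168.100.0/24 dev br0.100 src 192.168.100.50 table rt2

# Default route for that table via the VLAN gateway (placeholder)
ip route add default via 192.168.100.1 dev br0.100 table rt2

# Send traffic sourced from the rclone IP through table rt2
ip rule add from 192.168.100.50 table rt2
```

With the rule in place, upload traffic bound to that IP leaves via br0.100 instead of the management interface.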
  12. My array was not stopping, and I blamed this when I couldn't quite work out where the fuser command was. I'll have to see if there is something else causing it not to stop, as it looks to be unrelated. I don't plan on stopping it just yet; it's serving its purpose. The main focus is getting things ready to back it all up.
  13. @watchmeexplode5 It's good to see someone else state that we didn't need all the extra mount points. I also used PlexGuide for a while; Plex left my Unraid system for about a year. I'm not having much luck with the unmount script on array stop, having to manually use the fusermount -uz command each time. I've let people start using Plex again, so I don't plan to stop it again just yet.
  14. @DZMM Great info, thanks. It'll help when I finish this, as I need to move some disks around.
  15. It's time to think. I previously moved my whole Plex and related setup to a hosted dedicated server on 1Gb/1Gb, as with gdrive my upload (400/35) is not good enough to keep up. Cost-wise, it would now work out around the same for me to upgrade to a business connection, which gives me options of 400/200 or 750/375. I recently built a 2-in-1 gaming PC on a 7700, and since I've had an Intel CPU begging me to use Quick Sync, I've been looking at options to bring my Plex setup back home. Right now it only has 1 SSD and 1 NVMe, but that will change soon. Did you place your pool as