whoopn's Posts

  1. I'm seeing some behavior I wasn't expecting. I'm trying to share some folders from Unraid through to a VM. The ones from my actual array I can mount with the fstab entries below, but I cannot write to the one going through to an unassigned device (which is mounted). I've tried all manner of what should be unnecessary chown commands (files are all 777 right now). The exact same thing happened with a VM Unraid share from the cache drive. What am I missing?

     Additional info: there are hardlinks to the folder (no idea from where).

     ```
     cat /etc/fstab | grep virtio
     PS_Library /mnt/PS_Library 9p trans=virtio,version=9p2000.L,_netdev,rw 0 0
     HousePhotos /mnt/HousePhotos 9p trans=virtio,version=9p2000.L,_netdev,ro 0 0
     AppCacheData /mnt/AppDataCache 9p trans=virtio,version=9p2000.L,_netdev,rw 0 0
     ```

     I can write to PS_Library all day long... Thanks for any assistance in advance!
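     One thing worth checking against the fstab lines above: the post doesn't say which share sits on the unassigned device, but the HousePhotos entry mounts with `ro`, and a share mounted read-only rejects writes no matter what chown/chmod reports on the host. A small sketch (the helper function is hypothetical, written just to illustrate where the flag hides):

     ```shell
     # Hypothetical helper: report whether an fstab entry mounts read-only.
     # The 4th whitespace-separated field holds the mount options.
     is_ro() {
       opts=$(echo "$1" | awk '{print $4}')
       case ",$opts," in
         *,ro,*) echo ro ;;
         *)      echo rw ;;
       esac
     }

     is_ro 'HousePhotos /mnt/HousePhotos 9p trans=virtio,version=9p2000.L,_netdev,ro 0 0'
     # prints: ro

     # If a share did mount read-only, it can be flipped without a reboot:
     #   mount -o remount,rw /mnt/HousePhotos
     ```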
  2. The solution seems to be, for the time being, using 6.8. It doesn’t appear Limetech has made this a critical bug yet.
  3. I think the TL;DR is that 6.9.X is borked... 9 days of uptime on 6.8.3 after only getting 2-4 days on 6.9.1. My best is around 140 days before I had to bounce it for something.
  4. That stinks. Perhaps this works?
     1. Make a CA backup; separately back up or protect VMs.
     2. Save those files somewhere other than the array, keeping a copy on the array for convenience.
     3. Reinstall 6.8.3.
     4. Restore the backup.
     5. Cross fingers.
     6. Profit?!?
  5. I just went into Tools >> Update OS and it had the option to restore. It may only allow one version's worth of regression.
  6. Yes, I've reverted to 6.8.3 and have been up and running with no issues for 4 days; 6.9.1 would have crashed by now. Perhaps the Unraid devs could compare what changed in the network stack and revert that change, as it's a regression. I think I speak for most Unraid users when I say that reliability is far more important than feature set. Reliability is the reason I've stayed on Unraid.
  7. Ah, thank you! I'm going to set up syslog as well as disable any host-attached network settings for Docker... I'm seriously considering moving all of my Docker containers to a VM...
  8. To Unraid staff... can we get an official response? It happened again after only being up for one day. 0F1FCC4A-4318-4761-9E18-A00CC550CF9E.bmp
  9. I went through my UniFi setup and made all clients static IP assignments. I'll report back if it stays up for more than a week.
  10. I'm experiencing the exact same issue. On 6.8.3 I had ZERO issues; it was rock solid. I upgraded for NVIDIA GPU support, so that is the only new thing. Yes, I have br0 Docker IP assignments; it's never been a problem, and I even have VMs and Dockers sharing br0 with no issues. Limetech, this is a pretty serious issue. What do you need from me to investigate further? Are there logs I can upload? Thank you in advance!
  11. I had an issue where a drive was corrupted, and the xfs_repair tool was unable to find any suitable superblocks. Now, I'd assume that a "good" disk would report good superblocks and good magic numbers when you initially run the check (xfs_repair -n /dev/sdX). However, all of mine are returning bad superblocks and bad magic numbers. My question is two-fold: 1. Does it matter? 2. Is there some routine maintenance I should be running?
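      A hedged sketch of the usual dry-run check (device names are placeholders): "bad superblock" and "bad magic number" on every disk is very often the symptom of pointing xfs_repair at the whole disk rather than at the partition that holds the filesystem, not a sign that every disk is corrupt. On an Unraid array disk, checking through the md device keeps parity consistent.

      ```shell
      # Dry run: report problems without writing anything to the disk.
      # Pointing this at /dev/sdX (the whole disk) instead of /dev/sdX1
      # (the partition) typically yields bad-superblock/bad-magic errors
      # even on a healthy filesystem.
      xfs_repair -n /dev/sdX1    # sdX1 is a placeholder partition

      # On an Unraid array disk, use the md device so parity stays valid:
      # xfs_repair -n /dev/md1
      ```

      As for routine maintenance: a scheduled parity check covers the array; xfs_repair itself is normally run only when corruption is suspected.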
  12. I'm not sure the data logger does that; it's expecting the database to already exist. Is there a way to create the database manually? Here are the instructions I am following: http://codersaur.com/2016/04/smartthings-data-visualisation-using-influxdb-and-grafana/ I appreciate your help, BTW!
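      The database can be created by hand against InfluxDB 1.x's HTTP API (the version the linked tutorial targets). Host, port, and the database name below are assumptions; match whatever name the logger's config uses:

      ```shell
      # Create the database via the HTTP API ("SmartThings" is an
      # assumed name; --data-urlencode handles the space in the query):
      curl -XPOST 'http://localhost:8086/query' \
        --data-urlencode 'q=CREATE DATABASE "SmartThings"'

      # Or from a shell inside the container, using the bundled CLI:
      # influx -execute 'CREATE DATABASE "SmartThings"'
      ```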
  13. Will it create the database for me? This seems insecure and different from other database technology I've used.
  14. I get the following:

      ```
      { "error": "error parsing query: found AND, expected ; at line 1, char 40" }
      ```

      Do I need Telegraf for this to work? I'm just using it as a database for Grafana; my SmartThings is going to spit data into the database.
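      That parser error often means the query string reached InfluxDB mangled, e.g. unencoded spaces or a stray "&" splitting the URL parameters, so InfluxQL saw tokens it didn't expect. A quick sanity check, letting curl do the URL-encoding (host and port are assumptions):

      ```shell
      # -G sends the query as a GET parameter; --data-urlencode escapes
      # it properly so the full statement survives the trip.
      curl -G 'http://localhost:8086/query' --data-urlencode 'q=SHOW DATABASES'
      ```

      And no, Telegraf isn't required for this setup: Grafana reads straight from InfluxDB, and the SmartThings logger writes straight to it. Telegraf only matters if you also want host/system metrics collected.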