
jowe

Members
  • Content Count

    101
  • Joined

  • Last visited

Community Reputation

0 Neutral

About jowe

  • Rank
    Advanced Member
  • Birthday 11/11/1978

Converted

  • Gender
    Male
  • Location
    Sweden
  • ICQ
    174543
  1. That's strange. I just tried enabling and disabling br0: if I mark the checkbox and start Docker, it instantly shows up as a choice in any container, and it disappears again if I clear the checkbox. br0 is not a VLAN; all the others are.
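     A hedged sketch of verifying the same thing from the shell (standard Docker CLI; only the network name br0 comes from the post):

        docker network ls            # br0 should be listed once it is enabled
                                     # in Settings / Docker and Docker restarts
        docker network inspect br0   # show the subnet/gateway attached to br0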
  2. The best thing is that I have been able to consolidate 3 machines (or 4, really) into 1: a Hyper-V server, a NAS, and an HTPC (and I'm also running a virtual router now). I would like to see snapshots in 2020, so that I can back up my VMs on the fly. It would also be nice to replace the "Lime Tech" sticker on the server with a new UNRAID one. Thanks
  3. I have set up my VLANs in Network Settings (without a static IP for the unRAID server). Then stop Docker, go to Settings / Docker, switch to advanced view, and there you should be able to choose VLANs for Docker. After that you will see them in every container; what this produces at the interface level is sketched below.
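     A minimal sketch, from the shell, of what enabling a VLAN produces (the VLAN ID 10 is hypothetical; Unraid creates these sub-interfaces itself from the GUI settings):

        ip -d link show br0.10   # tagged sub-interface created for VLAN 10
        docker network ls        # after Docker restarts, each enabled VLAN
                                 # appears here as a selectable custom network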
  4. Thank you, you are a lifesaver! Setting the PCI-E slots to "Disabled" instead of "Legacy or EFI" did the trick, and I also have to run my VM with the OVMF BIOS. Supermicro X11SPL-F.
  5. The templates in unRAID are OpenELEC 6.0.3 and LibreELEC 7.0.1; those are from 2016, though they should work with your 730. I would try to upgrade anyway, to be on a more current version.
  6. I think most people are using the official install from OpenELEC / LibreELEC, using this guide. I'm using a LibreELEC install with Kodi 19, and it's working great! I'm passing through a GeForce GT 1030 and a USB controller card. JoWe
  7. I managed to solve this by removing ACLs that had probably been there since I tried out AD permissions years ago... I went slowly and didn't take all folders at once; then I went up one level and dropped the -R so as not to include the subfolders I had already fixed: setfacl -bn -R directory/ The strange thing is that after this command I went into the folder and created a subfolder, and that made me lose permissions to the folder and the ACL came back, so I had to redo the setfacl command. After that it stuck. It was the same for all folders, 10+ of them. Then I reset permissions with chmod -R 777 directory/ (the full sequence is sketched below).
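     Putting the steps together, a sketch of the whole sequence (directory/ is a placeholder, as in the post; the getfacl calls are only for checking the result):

        getfacl directory/          # inspect the stale ACL entries (the '+' in ls -l)
        setfacl -bn -R directory/   # -b removes all extended ACL entries,
                                    # -n skips recalculating the mask, -R recurses
        chmod -R 777 directory/     # then reset the plain Unix permissions
        getfacl directory/          # verify only owner/group/other entries remain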
  8. Hi, I have a problem with permissions on my media share. When I create a folder within that share as user "downloader", I can't read the files as user "kodi". And if I create the folder as "kodi", "downloader" can't enter that folder. The setup is as follows: user downloader is connected from a WS2019 machine with a mapped drive (just reinstalled from WS2016); user kodi is connected both on a virtual LibreELEC HTPC and on a Windows 10 laptop, with the same problem on both. When I first had this problem it only affected the share if I used a cache disk and the mover, so I haven't used a cache disk for years. But now it happens for everything that is saved on the server. I found the link to the post from when I had that problem; it was never solved, though. I have used the Docker Safe New Permissions, and also the old New Permissions, only on that media share. Any hints would be greatly appreciated! (A way to inspect those ACLs is sketched after the listing.) JoWe

        root@Tower:/mnt/user/media/series/The.Station# ls -al
        total 92
        drwxrwxrwx+ 1 nobody     users 4096 Mar 19 20:48 ./
        drwxrwxrwx+ 1 nobody     users   38 Oct  5 07:42 ../
        drwxrwxrwx+ 1 nobody     users 4096 Oct  5 07:44 The.Station.E01
        drwxrwxrwx+ 1 nobody     users 4096 Oct 12 06:52 The.Station.E02
        drwxrwxrwx+ 1 nobody     users 4096 Oct 19 10:13 The.Station.E03
        drwxrwxrwx+ 1 nobody     users 4096 Oct 28 15:17 The.Station.E04
        drwxrwxrwx+ 1 nobody     users 4096 Nov  2 07:32 The.Station.E05
        drwxrwxrwx+ 1 nobody     users 4096 Nov 12 21:19 The.Station.E06
        drwxrwx---+ 1 downloader users 4096 Mar 19 20:20 The.Station.E07
        drwxrwx---+ 1 downloader users 4096 Mar 19 20:16 The.Station.E08
        drwxrwx---+ 1 downloader users 4096 Mar 19 20:18 The.Station.E09
        drwxrwx---+ 1 kodi       users   45 Mar 19 20:48 aaaa/

     tower-diagnostics-20190320-0707.zip
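     The trailing '+' on those directories means extended ACLs are set on top of the normal mode bits; a hedged sketch for comparing a readable and an unreadable directory (names taken from the listing above):

        getfacl The.Station.E06   # world-readable, works for both users
        getfacl The.Station.E07   # expect extra user/group ACL entries or a
                                  # restrictive mask shutting out the kodi user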
  9. I have 16GB, and about 60% of it is used. But I haven't had any more problems since 6.6.4, and now with 6.6.5 all cron jobs work as well. JoWe
  10. Updated to unRAID 6.6.4, and the problem with spinning up disks is no longer present! But it seems to use a lot of CPU: around 10% with the plugin disabled and 30%+ when enabled. I changed to adaptive mode, and it now sleeps for 10s instead of only 1s with fixed, and the CPU is much less spiky. Thanks for your support @Alex R. Berg
  11. No problem! From what I have read, you might want to include user shares and set cache pressure to 1, depending on whether you are having problems or not. (What that setting maps to is sketched below.)
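     A sketch of what the setting corresponds to under the hood, assuming cache_dirs drives the standard kernel tunable vm.vfs_cache_pressure:

        sysctl vm.vfs_cache_pressure        # show the current value (kernel default 100)
        sysctl -w vm.vfs_cache_pressure=1   # 1 = hold on to dentry/inode caches almost
                                            # always; 0 = never reclaim them, which
                                            # risks running out of memory instead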
  12. Try using this link, as @BRiT suggested, from your Plugins tab via "Install Plugin": https://raw.githubusercontent.com/arberg/dynamix/master/unRAIDv6/dynamix.cache.dirs.plg
  13. Hi Alex, thanks for your reply. I tried with the new version this morning, and it has been running for a couple of hours. The disks spun down at around 7:05, after almost 2 hours, but it's still restarting a lot of times before that. Attached a file with the log. I used to have a Windows HTPC with 4GB RAM and am now using LibreELEC with 1GB, so I think I'm actually using less RAM than before. And with cache pressure 0 the server "should" crash before releasing any memory. Sorry, but I can't test the memory script right now; I'm not by my server, so a crash would not be optimal, but I can try later tonight. I had 1.6.9 on my flash already, so I probably used that one! Started the script with "cache_dirs -p 1 -S -i media -u -U 0 -d 6". I'll get back with results! Edit: After running 1.6.9 for a couple of hours, it's acting exactly the same, so the problem is not from within cache_dirs. But is there anything in the newer unRAID versions that could interact with cache_dirs? No times in this log, but it seems to be restarting as well. This log is from when it had been running for more than 1h (the slow finds are where the disks spin up; see the one-liner after the log):

        Executed find in 1.630547 seconds, weighted avg=2.683402 seconds, now sleeping 10 seconds
        Executed find in 1.629726 seconds, weighted avg=2.332453 seconds, now sleeping 10 seconds
        Executed find in 1.620553 seconds, weighted avg=1.980860 seconds, now sleeping 10 seconds
        Executed find in 67.436761 seconds, weighted avg=7.897477 seconds, now sleeping 9 seconds
        Executed find in 1.629236 seconds, weighted avg=7.583565 seconds, now sleeping 10 seconds
        Executed find in 1.702406 seconds, weighted avg=7.276872 seconds, now sleeping 10 seconds
        Executed find in 1.627479 seconds, weighted avg=6.962589 seconds, now sleeping 10 seconds
        Executed find in 1.615260 seconds, weighted avg=6.647173 seconds, now sleeping 10 seconds
        Executed find in 1.600871 seconds, weighted avg=6.330583 seconds, now sleeping 10 seconds
        Executed find in 1.657069 seconds, weighted avg=6.019647 seconds, now sleeping 10 seconds
        Executed find in 1.667682 seconds, weighted avg=5.709729 seconds, now sleeping 10 seconds
        Executed find in 1.620685 seconds, weighted avg=5.395157 seconds, now sleeping 10 seconds
        Executed find in 1.629018 seconds, weighted avg=5.081718 seconds, now sleeping 10 seconds
        Executed find in 1.663094 seconds, weighted avg=4.771544 seconds, now sleeping 10 seconds
        Executed find in 1.621273 seconds, weighted avg=4.457107 seconds, now sleeping 10 seconds
        Executed find in 1.668013 seconds, weighted avg=4.146966 seconds, now sleeping 10 seconds
        Executed find in 1.666618 seconds, weighted avg=3.836455 seconds, now sleeping 10 seconds
        Executed find in 1.632011 seconds, weighted avg=3.522318 seconds, now sleeping 10 seconds
        Executed find in 1.650354 seconds, weighted avg=3.209995 seconds, now sleeping 10 seconds
        Executed find in 1.659944 seconds, weighted avg=2.898463 seconds, now sleeping 10 seconds
        Executed find in 1.621575 seconds, weighted avg=2.583238 seconds, now sleeping 10 seconds
        Executed find in 1.618180 seconds, weighted avg=2.267734 seconds, now sleeping 10 seconds
        Executed find in 1.630193 seconds, weighted avg=1.953428 seconds, now sleeping 10 seconds
        Executed find in 92.141068 seconds, weighted avg=10.259159 seconds, now sleeping 9 seconds
        Executed find in 1.648757 seconds, weighted avg=9.828936 seconds, now sleeping 10 seconds
        Executed find in 1.668640 seconds, weighted avg=9.400513 seconds, now sleeping 10 seconds
        Executed find in 1.641687 seconds, weighted avg=8.969685 seconds, now sleeping 10 seconds
        Executed find in 1.638518 seconds, weighted avg=8.538486 seconds, now sleeping 10 seconds
        Executed find in 1.601272 seconds, weighted avg=8.103630 seconds, now sleeping 10 seconds
        Executed find in 1.643961 seconds, weighted avg=7.672838 seconds, now sleeping 10 seconds
        Executed find in 1.623800 seconds, weighted avg=7.240187 seconds, now sleeping 10 seconds
        Executed find in 1.599030 seconds, weighted avg=6.805387 seconds, now sleeping 10 seconds
        Executed find in 1.657247 seconds, weighted avg=6.376234 seconds, now sleeping 10 seconds
        Executed find in 1.614359 seconds, weighted avg=5.942863 seconds, now sleeping 10 seconds
        Executed find in 1.642858 seconds, weighted avg=5.512437 seconds, now sleeping 10 seconds
        Executed find in 1.615809 seconds, weighted avg=5.079333 seconds, now sleeping 10 seconds
        Executed find in 1.608156 seconds, weighted avg=4.645748 seconds, now sleeping 10 seconds
        Executed find in 1.609557 seconds, weighted avg=4.212576 seconds, now sleeping 10 seconds
        Executed find in 1.675881 seconds, weighted avg=3.785826 seconds, now sleeping 10 seconds
        Executed find in 1.684113 seconds, weighted avg=3.359739 seconds, now sleeping 10 seconds
        Executed find in 1.550842 seconds, weighted avg=2.920845 seconds, now sleeping 10 seconds
        Executed find in 156.650812 seconds, weighted avg=17.253713 seconds, now sleeping 9 seconds
        Executed find in 170.330314 seconds, weighted avg=32.151140 seconds, now sleeping 8 seconds
        Executed find in 176.063238 seconds, weighted avg=46.791227 seconds, now sleeping 7 seconds
        Executed find in 160.003598 seconds, weighted avg=59.502194 seconds, now sleeping 6 seconds
        Executed find in 21.547191 seconds, weighted avg=58.272766 seconds, now sleeping 7 seconds
        Executed find in 1.698216 seconds, weighted avg=55.058299 seconds, now sleeping 8 seconds
        Executed find in 1.725258 seconds, weighted avg=51.846139 seconds, now sleeping 9 seconds
        Executed find in 3.315480 seconds, weighted avg=48.785016 seconds, now sleeping 10 seconds
        Executed find in 3.101892 seconds, weighted avg=45.695388 seconds, now sleeping 10 seconds
        Executed find in 1.761153 seconds, weighted avg=42.471128 seconds, now sleeping 10 seconds
        Executed find in 29.715485 seconds, weighted avg=41.908531 seconds, now sleeping 10 seconds
        Executed find in 25.123730 seconds, weighted avg=40.774737 seconds, now sleeping 10 seconds
        Executed find in 2.516035 seconds, weighted avg=37.376083 seconds, now sleeping 10 seconds
        Executed find in 1.745359 seconds, weighted avg=33.899738 seconds, now sleeping 10 seconds
        Executed find in 1.808898 seconds, weighted avg=30.428957 seconds, now sleeping 10 seconds
        Executed find in 1.869671 seconds, weighted avg=26.963043 seconds, now sleeping 10 seconds
        Executed find in 62.108122 seconds, weighted avg=29.232880 seconds, now sleeping 9 seconds
        Executed find in 9.364115 seconds, weighted avg=26.191390 seconds, now sleeping 10 seconds
        Executed find in 1.932508 seconds, weighted avg=22.405517 seconds, now sleeping 10 seconds

     cache_dirs_diagnostics.zip
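     To pick out the scans that actually hit the disks, a hypothetical one-liner over a saved copy of this log (cache_dirs.log is a placeholder filename):

        # Field 4 is the find duration in seconds; print only the
        # scans slower than 10 seconds, i.e. the likely spin-ups.
        awk '$4+0 > 10' cache_dirs.log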
  14. I'm afraid it's not working. I meant that there is no difference between now, after the reinstall, and before: the disks never go to sleep while cache_dirs is running with 200K files, as I wrote in the reply below. JoWe
  15. I'm on 2.2.2 as well. I have just reinstalled and restarted the server, but after a couple of hours there seems to be no difference at all.