dakipro's Achievements


  1. I have the same need now. One of my Docker containers is killing my array with CPU/RAM usage, and I cannot find which one because they all start running at boot. I would like to start the array but prevent/disable the Docker service from starting immediately. Thanks for the great work otherwise, btw!
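     For anyone else stuck here, one workaround sketch (assuming a stock Unraid install, where the Docker service is managed by a Slackware-style rc script - verify the path on your version before relying on it) is to stop the service from a console session and then start suspect containers one at a time:

     ```
     # Stop the Docker service so no containers run (rc script path is an assumption)
     /etc/rc.d/rc.docker stop

     # After starting containers manually, print per-container CPU/RAM usage once:
     docker stats --no-stream
     ```

     `docker stats` is a standard Docker CLI command, so once the containers come up one by one the resource hog should be visible immediately.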
  2. I am having the same problem. I am trying to back up my Proxmox VMs to Unraid over NFS, but I cannot get past the error "ERROR: Backup of VM 100 failed - job failed with err -116 - Stale file handle".
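     A commonly suggested mitigation for stale NFS handles against Unraid user shares is pinning a stable filesystem id on the export, since the user-share layer can otherwise hand out file handles that change between accesses. A sketch only - the share name and fsid value below are assumptions, and the rule follows the standard exports(5) option syntax, not any one Unraid release:

     ```
     # Example export rule with a pinned fsid (value is arbitrary but must be unique per export)
     "/mnt/user/backups" -fsid=100,async,no_subtree_check *(rw)
     ```

     Using hard NFS mounts on the Proxmox side can also make the client retry instead of failing the backup job outright.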
  3. I applied the Q22 fix and it helped for a few days, but today everything stopped again. Re-downloading and re-applying Q22 did not help in my case; I tried both DE Berlin and DE Frankfurt. Even though I get the error below, the internet does start working and I am behind the VPN (based on my IP), but it is very, very slow (think GPRS). It is so slow that I cannot even run a speed test. I guess that is because of the port forwarding problem? The error I get:

     2020-11-09 23:37:32 DEPRECATED OPTION: --cipher set to 'aes-256-gcm' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'aes-256-gcm' to --data-ciphers or change --cipher 'aes-256-gcm' to --data-ciphers-fallback 'aes-256-gcm' to silence this warning.
     2020-11-09 23:37:32 WARNING: file 'credentials.conf' is group or others accessible

     (this second warning I get from time to time)

     The file DE Frankfurt.ovpn:

     client
     dev tun
     proto udp
     remote de-frankfurt.privacy.network 1198
     resolv-retry infinite
     nobind
     persist-key
     persist-tun
     cipher aes-256-gcm
     ncp-disable
     auth sha1
     tls-client
     remote-cert-tls server
     auth-user-pass
     compress
     verb 1
     reneg-sec 0
     <crl-verify>
     -----BEGIN X509 CRL-----
     .....
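     The DEPRECATED OPTION line itself spells out the fix: newer OpenVPN clients negotiate ciphers via --data-ciphers. A minimal edit to the .ovpn above, following the wording of the warning (this silences the warning; whether it helps the slow speed is a separate question), would be to replace the `cipher aes-256-gcm` and deprecated `ncp-disable` lines with:

     ```
     # Let the client negotiate from this list, falling back for old servers
     data-ciphers AES-256-GCM:AES-128-GCM
     data-ciphers-fallback aes-256-gcm
     ```

     The credentials.conf warning is only about file permissions and goes away after `chmod 600 credentials.conf`.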
  4. Hi, could this container be used to bounce through multiple VPNs via Privoxy? I've been using it for years on Unraid and it works awesome - one of the best containers out there; I couldn't live without it now. However, on my last holiday I was uploading files via my Ubiquiti VPN to my Unraid server, and the IT guy got jealous and blocked the connection between my PC and my home/VPN IP. I was about to boot a VM inside my PC, so that the PC would connect to the PIA VPN and the VM would then connect to my home VPN, but then I found that I could still upload via my phone, so I used that instead. Could this container be used for this scenario, where my PC connects to some PIA server, OpenVPN then connects to my home, and Privoxy is used by some browser/client so that I could upload files over my VPN? Or am I missing some obvious way of doing such a VPN-through-another-VPN setup on Windows?
  5. I like the ease of using Docker and Community Apps. I would like to see secure remote access to the UI and perhaps to data. Happy new year, keep up the great work!
  6. Thank you for answering, @itimpi and @johnnie.black. I will disable IOMMU and try to remember that no hardware pass-through is possible on this PC. I was planning on having a memory card reader station for cameras, but I will see how, and if, that would work without pass-through. The thing is that I purchased the HPE MicroServer Gen10 specifically to use as an Unraid NAS, since people are using it and it is working great. So the PC is pre-built to be a NAS, without options to change pretty much anything on it. I spent so much time choosing the right hardware and software for my NAS, and now a relatively new piece of hardware is, as it seems, no longer supported by Linux, and thus by Unraid as well. A bit odd that everything works fine on 6.6.7 but not on a newer kernel. One could even argue that there IS a solution (running behind me), just that Linus thinks it is not a good one, so here comes a "buy new hardware" solution instead.
  7. Hi, is this something the Unraid team is working on, or is it out of scope for the Unraid team and should be fixed by each individual user? Any recommendation, or more "official" details, as to what is causing this and how best to fix it without risking other problems? I am not using any VMs at the moment, but I would not like to rule out the possibility that one day I might, because of some unresolved bug/issue. I just tried updating from 6.6.7 to the latest stable, but I am still getting this issue, so I am basically stuck on 6.6.7.
  8. I would also like to use NZBGet via VPN if that is somehow possible, meaning to do downloads through the VPN. I found on the internet that NZBGet does not support proxy configuration, so perhaps it is possible to add it via Docker itself, like some parameter or something? I understand that it is not needed since SSL is used, but somehow I would feel much better, not even sure why... Please enjoy a beer, @binhex, as a thank you for another excellent container!
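     On the Docker side this is usually done by attaching the downloader to the VPN container's network namespace rather than via a proxy setting inside NZBGet. A sketch only - the container and image names here are assumptions, not from an actual setup:

     ```
     # Run NZBGet inside the VPN container's network stack, so all its traffic goes through the tunnel
     docker run -d --name nzbget --net=container:binhex-privoxyvpn <nzbget-image>
     ```

     `--net=container:<name>` is a standard Docker option; the joined container shares the VPN container's interfaces, which also means the NZBGet web UI port has to be published on the VPN container instead.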
  9. Do you know perhaps what practical implications disabling IOMMU has? Or is it perhaps a bad thing to have it enabled, since the new kernel/Unraid doesn't work with it? I read a bit on the wiki, but... it is outside my scope of knowledge (/interest).
  10. I am also curious about this; I assume it would? I have an HPE ProLiant MicroServer Gen10, I am not sure which options should be applied to it, and I would honestly much prefer to wait for the next update that doesn't have this problem.
  11. Thanks, that did the trick. At first I didn't get much in the logs:

     Mar 22 20:30:03 hpenas emhttpd: req (17): shareMoverSchedule=40+3+*+*+*&shareMoverLogging=yes&cmdStartMover=Move+now&csrf_token=****************
     Mar 22 20:30:03 hpenas emhttpd: shcmd (63101): /usr/local/sbin/mover |& logger &
     Mar 22 20:30:03 hpenas root: mover: started
     Mar 22 20:30:03 hpenas move: move: skip /mnt/cache/media/movies/movie.mp4
     Mar 22 20:30:03 hpenas root: mover: finished

     But then I stopped all the containers and the mover moved the file properly. I am not sure what could have kept the file in use, but I guess/hope it will resolve itself once the containers are restarted. Could it be that the initial issue was caused by a similar scenario of a file being in use?
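     For the next time a file refuses to move, the process holding it open can usually be identified from the console before stopping containers blindly. A sketch using the path from the log above and the standard lsof/fuser tools (assuming they are present on the system):

     ```
     # List the processes that have the file open, or just their PIDs
     lsof /mnt/cache/media/movies/movie.mp4
     fuser -v /mnt/cache/media/movies/movie.mp4
     ```

     `docker top <container>` can then help map a PID back to the container it belongs to.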
  12. Now I am experiencing an additional problem with this. Both shares are now set to Use cache: Yes, but the mover doesn't move the files from the cache drive to the main drive, leaving the file outside parity protection and almost impossible for me to find unless I look for it manually (and expect it). Here is a short summary:

     My system has one cache SSD, one 4 TB parity drive, and one main 4 TB drive. I have two shares, one called downloads and a second called media. Both are now set to Use cache: Yes (after being advised on this topic). I have a Krusader docker with the mount point /media/ -> mnt (this is how I saw it in a SpaceInvaderOne video).

     Then I ripped a DVD onto the downloads share from my PC, and tried to move it from /root/media/user/downloads/ to /root/media/user/media/movies using Krusader (this is how Krusader maps /mnt/media/user/downloads/ and /mnt/media/user/media/movies). I am very, very sure I used exactly these folders, having had this issue before.

     Now, on the media share I do see my movie just fine. But if I open root/media/cache/movies from Krusader, I still see my movie folder there, with files in it. If I open root/media/disk1/media/movies/, I do see the movie folder, but NO files in it.

     The mover is set to run every night, but even invoking it manually does not move the files. I first thought this might have something to do with a plugin/setting that makes the disks sleep, but I have now tried invoking the mover multiple times; once it said "moving", and checking from Krusader, the file is still on the cache (for three days).
  13. Thanks for testing, that is how things should work, I suppose. It is a mystery, then, how I managed to move files to the cache drive twice in the last two months.
  14. Thanks, @trurl. Since you have such drives set up, could you test a similar thing but using some local (Docker?) tool on the server itself? I am thinking now that, even before I started moving files on Windows over the network, I might have moved them using Krusader and somehow they ended up on the cache drive instead (your hypothesis that this might be a Linux thing of renaming files when they are moved locally?). Then it would look like they were, from the network's point of view, in the no-cache media share, but they were actually already on the cache drive, through a mistake made earlier in the process.