dakipro

Everything posted by dakipro

  1. I am having the exact same issue as you @luhzifer. I do not use a system-level proxy, but Privoxy has its port open, and Firefox on one of my machines is configured to use it. It worked fine, but the problems strangely started a few months ago. Changing the PIA endpoints didn't help, restarting helps from time to time, but... it is very annoying. I think it has something to do with DNS/CDNs, as once all requests are resolved the speed is at max, but to be honest I haven't found time to investigate further. Hoped to just find a solution here, as usual
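     In case it helps anyone else narrow this down, a quick way to compare request timings with and without Privoxy from any Linux box (just a sketch; the 192.168.1.10:8118 proxy address is an example, substitute your own):

         # baseline, no proxy
         curl -o /dev/null -s -w 'total: %{time_total}s  starttransfer: %{time_starttransfer}s\n' https://example.com
         # the same request through Privoxy (address/port are assumptions, adjust)
         curl -x http://192.168.1.10:8118 -o /dev/null -s -w 'total: %{time_total}s  starttransfer: %{time_starttransfer}s\n' https://example.com

     If the proxied request mostly stalls before the first byte arrives, that would point at name resolution on the VPN side rather than raw throughput.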
  2. I have the same need now. One of my dockers is killing my array with CPU/RAM usage, and I cannot find which one because they all start running at boot. I would like to start the array but prevent/disable the docker service from starting immediately. Thanks for the great work otherwise, btw!
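     For anyone who needs a stop-gap, a sketch of how the docker service could be kept off before the array starts (this is based on my understanding that Unraid keeps the setting in /boot/config/docker.cfg; verify on your version and back the file up first):

         # edit the docker settings stored on the flash drive
         nano /boot/config/docker.cfg
         # change DOCKER_ENABLED="yes" to:
         DOCKER_ENABLED="no"
         # then start the array; containers will not auto-start, and the docker
         # service can be re-enabled later from Settings -> Docker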
  3. I am having the same problem. I am trying to back up my Proxmox VMs to Unraid over NFS, but I cannot get past "ERROR: Backup of VM 100 failed - job failed with err -116 - Stale file handle"
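     If anyone else hits the same stale handle, one thing that may be worth trying (untested sketch; the mount point name is an assumption, substitute your own Proxmox storage ID) is force-unmounting the stale NFS mount on the Proxmox host so it gets remounted cleanly:

         # on the Proxmox host; NFS storages are normally mounted under /mnt/pve/<storage-id>
         umount -f -l /mnt/pve/unraid-backup
         # the storage should be remounted automatically shortly afterwards; check with:
         mount | grep unraid-backup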
  4. I did the Q22 fix and it helped for a few days, but today everything stopped again. Re-downloading and re-applying Q22 did not help in my case; I tried using DE Berlin and DE Frankfurt. Even though I get the error below, the internet does start working and I am behind the VPN (based on IP), but it is very, very slow (think GPRS). It is so slow that I cannot even run a speed test. I guess that is because of the port forwarding problem?

     The error I get:

         2020-11-09 23:37:32 DEPRECATED OPTION: --cipher set to 'aes-256-gcm' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'aes-256-gcm' to --data-ciphers or change --cipher 'aes-256-gcm' to --data-ciphers-fallback 'aes-256-gcm' to silence this warning.
         2020-11-09 23:37:32 WARNING: file 'credentials.conf' is group or others accessible

     (the second warning I get from time to time)

     The file DE Frankfurt.ovpn:

         client
         dev tun
         proto udp
         remote de-frankfurt.privacy.network 1198
         resolv-retry infinite
         nobind
         persist-key
         persist-tun
         cipher aes-256-gcm
         ncp-disable
         auth sha1
         tls-client
         remote-cert-tls server
         auth-user-pass
         compress
         verb 1
         reneg-sec 0
         <crl-verify>
         -----BEGIN X509 CRL-----
         .....
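     For what it is worth, the deprecation warning itself spells out a change to try in the .ovpn file (just following the warning text; I have not confirmed whether it has anything to do with the slow speeds):

         cipher aes-256-gcm
         data-ciphers AES-256-GCM:AES-128-GCM
         # or, alternatively, instead of the data-ciphers line:
         # data-ciphers-fallback aes-256-gcm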
  5. Hi, could this container be used to bounce through multiple VPNs via Privoxy? I've been using it for years on Unraid and it works awesome, one of the best containers out there, I couldn't live without it now. However, on my last holiday I was uploading files via my Ubiquiti VPN to my Unraid, and the IT guy got jealous and blocked the connection between my PC and my home/VPN IP. I was about to boot a VM inside my PC, so that my PC would connect to the PIA VPN and my VM would then connect to my home VPN, but then I found that I could still upload via my phone, so I used that instead. Could this container be used for this scenario, where my PC would connect to some PIA server, OpenVPN would then connect to my home, and Privoxy would be used by some browser/client so that I could upload files to my VPN? Or am I missing some obvious way of doing such a VPN-through-another-VPN setup on Windows?
  6. I like the ease of using docker and Community Apps. I would like to see secure remote access to the UI and perhaps to data. Happy New Year, keep up the great work!
  7. Thank you for answering @itimpi and @johnnie.black, I will disable IOMMU and try to remember that no hardware passthrough is possible on this PC. I was planning on having a memory card reader station for cameras, but I will see how and if that would work without passthrough. The thing is that I purchased the HPE MicroServer Gen10 to use it specifically as an Unraid NAS, since people are using it and it is working great. So the PC is pre-built to be a NAS, without options to change pretty much anything on it. I spent so much time choosing the right hardware and software for my NAS, and now a relatively new piece of hardware is, as it seems, not supported anymore by Linux, and thus by Unraid as well. A bit odd that all is working fine on 6.6.7 but not on a newer kernel. One could even argue that there IS a solution (running behind me), just that Linus thinks it is not a good one, so here comes a "buy new hardware" solution instead.
  8. Hi, is this something that the Unraid team is working on, or is it something out of scope for the Unraid team that should be fixed by the individual user? Any recommendation or more "official" details as to what is causing this and how to best fix it without risking other problems? I am not using any VMs at the moment, but would not like to exclude the possibility in case I want to one day, because of some unresolved bug/issue. I just tried updating from 6.6.7 to the latest stable, but I am still getting this issue, so I am basically stuck on 6.6.7
  9. I would also like to use nzbget via VPN if that is somehow possible, meaning doing downloads through the VPN. I found on the internet that nzbget does not support proxy configuration, so perhaps it is possible to add it via docker itself, like some parameter or something? I understand that it is not needed since SSL is used, but somehow I would feel much better, not even sure why... Please enjoy a beer @binhex as a thank you for another excellent container!
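     One approach that might do this at the docker level rather than in nzbget itself (just a sketch, untested; the image and container names here are assumptions, substitute whatever you actually run) is to attach the nzbget container to the VPN container's network namespace, so all of its traffic goes through the tunnel:

         docker run -d --name=nzbget \
           --net=container:binhex-delugevpn \
           -v /mnt/user/appdata/nzbget:/config \
           -v /mnt/user/downloads:/data \
           binhex/arch-nzbget
         # note: with --net=container the nzbget web UI port has to be published
         # on the VPN container instead of on this one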
  10. Do you know perhaps what practical implications disabling IOMMU has? Or is it perhaps a bad thing to have it enabled since the new kernel/Unraid doesn't work with it? I read a bit on the wiki, but... it is outside my scope of knowledge (/interest).
  11. I am also curious about this, I assume it would? I have an HPE ProLiant MicroServer Gen10. I am not sure which options should be applied to it, and I would honestly much prefer to wait for the next update that doesn't have this problem.
  12. Thanks, that did the trick. At first I didn't get much in the logs:

         Mar 22 20:30:03 hpenas emhttpd: req (17): shareMoverSchedule=40+3+*+*+*&shareMoverLogging=yes&cmdStartMover=Move+now&csrf_token=****************
         Mar 22 20:30:03 hpenas emhttpd: shcmd (63101): /usr/local/sbin/mover |& logger &
         Mar 22 20:30:03 hpenas root: mover: started
         Mar 22 20:30:03 hpenas move: move: skip /mnt/cache/media/movies/movie.mp4
         Mar 22 20:30:03 hpenas root: mover: finished

     but then I stopped all the containers and now the mover did move the file properly. I am not sure what kept the file in use, but I guess/hope that sort of thing resolves itself once the containers are restarted. Could it be that the initial issue was caused by a similar scenario of a file being in use?
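     In case someone wants to find out which process is holding a file open without stopping every container, this is the sort of check I mean (a sketch; it assumes lsof/fuser are available on your Unraid console, and the path is just the one from my log):

         # list processes that have the file open
         lsof /mnt/cache/media/movies/movie.mp4
         # or just the PIDs:
         fuser -v /mnt/cache/media/movies/movie.mp4

     The PID can then be matched against "docker top <container>" output to see which container it belongs to.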
  13. Now I am experiencing an additional problem with this. Both shares are now set to Use cache: Yes, but the mover doesn't move the files from the cache drive to the main drive, thus leaving the files unprotected by parity, and it is almost impossible for me to find this out unless I look for it manually (and expect it).

     Here is a short summary: my system has one cache SSD, one 4TB parity drive and one main 4TB drive. I have two shares, one called downloads, the second called media. Both are now set to Use cache: Yes (as advised in this topic). I have a Krusader docker with /media/ mapped to /mnt (this is how I saw it on SpaceinvaderOne's video). Then I ripped a DVD onto the downloads share from my PC and tried to move it from /root/media/user/downloads/ to /root/media/user/media/movies using Krusader (this is how Krusader maps /mnt/media/user/downloads/ and /mnt/media/user/media/movies). I am very sure I used exactly these folders, having had this issue before.

     Now, on the media share I do see my movie just fine. But if I open root/media/cache/movies from Krusader I still see my movie folder there, with files in it. If I open root/media/disk1/media/movies/ I do see the movie folder, but NO files in it.

     The mover is set to run every night, but even manually invoking it does not move the files. I first thought this might have something to do with the plugin/setting that lets disks sleep, but I have now tried to invoke the mover multiple times; once it said "moving", and I am checking now from Krusader, the file is still on the cache (for three days)
  14. Thanks for testing, that is how things should work, I suppose. It is a mystery then how I managed to move files to the cache drive twice in the last two months.
  15. Thanks @trurl, since you have such drives set up, could you test a similar thing but using some local (docker?) tool on the server itself? I am now wondering whether, even before I started moving files from Windows over the network, I might have moved them using Krusader and they somehow ended up on the cache drive instead (your hypothesis that this might be a Linux thing of renaming files when they are moved locally?). Then it would look like they were, from the network point of view, in the no-cache media share, but they were actually already on the cache drive, by a mistake made earlier in the process?
  16. That is what I would have expected; if it was some docker doing things directly, then it is my configuration/user error. In this case I am 99% sure that it was done over the network. I downloaded some music files from my online backup using jDownloader inside its docker, where the container has only these two folders mapped: /mnt/user/appdata/JDownloader2 and /mnt/user/downloads/completed/. Then, over the network on my Windows PC, I used CUETools to split the files (Windows software operating over the network share).

     I have folders organized in steps: 1toSplit, 2toTag and 3tagged, and all the problematic files were in the "3tagged" folder, which is where the software moves them after splitting. From there MusicBrainz Picard moves them to some other folder under the same media share. So it is either CUETools or MusicBrainz Picard that puts files in the 3tagged folder. I never do that manually, especially not on the server itself.

     The network shares are "downloads" with Use cache set to "Yes", and "media" with cache set to "No" (at the moment I have changed it to Yes, in order to have this working correctly). Only one docker has access to all of /mnt and that is Krusader. And I was thinking about it: even if I did move files from /mnt/user/downloads to the wrong place, like /mnt/user0/media or perhaps /mnt/disk1/media, they would not end up on the cache drive (maybe parity would be broken, but the files would not be on the cache). The only way I could have done this by mistake would be to use Krusader and move each individual file to the /mnt/cache/media/music/flac/!NEW/3tagged folder, and in the process of doing that I would have noticed that all these folders were missing and I would have had to create "media", "music", "flac", "!NEW" and "3tagged" manually. Being a developer myself, I would certainly notice that much manual labour and that many missing folders (that is why I split the workflow by folders and use automated tools, to avoid doing things manually). So I am pretty sure I would not make such a mistake. I could have moved folders from downloads to media/music/flac/!new/1toSplit with Krusader, but not on the cache drive, and then certainly not to the 3tagged folder. Perhaps I did some other step manually somewhere that in the end got me into this confused state.

     As I think about it now, I did notice that some files took longer to be tagged and moved to the other share, while some files were moved instantly. But I thought that had to do with the actual file format, as some FLAC files are not compressed so writing to them is instant (or something like that). And at the end I have Plex in docker, which is only bound to /mnt/user/media/, /transcode/tmp and /mnt/user/appdata/binhex-plexpass, and it sees the files properly, as they all appear in /mnt/user/media just fine (while some of the files are actually on the cache drive)
  17. Hi, first I must say that I really enjoy using Unraid, I think it is an amazing piece of software that solves A LOT of my problems, and I had been searching for something like it for years.

     As the title says, the share "media" is set to not use the cache, but files do exist on the cache drive itself. People helped me on this post and explained that this is a Linux thing: if I have some data on a cached share (cache set to Yes) and I move it over the network to some other non-cached share (cache set to "No"), Linux may just rename the file and make the data end up on the cache drive under the new share (even though that share's cache is set to No). I understand why this is happening, but for me this is a pretty common use case, e.g. a downloads share (cached) and a media share (non-cached), and also a cached backup network share where all PCs would back up over the network and another script would then move the files to a non-network backup share, protecting the data from ransomware and viruses that would delete my backups.

     This makes some of the data end up on the cache drive, reducing its space, but even worse, it leaves that data not parity protected. And the worst part is that this is virtually undetectable from Unraid unless you know about it and look for it specifically (I found out about it using the Fix Common Problems community plugin). It makes using "No" for cache practically useless; even worse, it makes the entire Unraid system and server unreliable. I have to use Yes for the cache drive, keeping data even longer on the cache drive until the mover runs.

     Unraid advertises itself as "Unraid is an operating system for personal and small business" and also says "Stop worrying about losing your data due to storage device failure. Unraid saves your data even if one device goes bad", but in this use case of moving data around this is just not correct: some data (in reality, randomly) ends up unprotected. Perhaps make "No" for cache not an option when a cache drive is in use? Or at least show a big red triangle with some explanation as to why this combination is dangerous? Or maybe even make the mover move the files to their expected location? I have come to the conclusion from the mentioned post that this is a rather old and well-known case (from the beginning?), but I would really like to hear an opinion from some people from Limetech about the issue, as it seems like a very common use case and a serious issue of leaving files unprotected.
  18. A cache pool is something I will certainly implement, but it goes around the issue itself; one should not expect to have data accidentally stored on the cache. I did turn on the cache option for the share, and the mover did move the files to their final destination as expected. However, I have also just checked the help text for the cache option, and it says: "Specify whether new files and directories written on the share can be written onto the Cache disk/pool if present. No - prohibits new files and subdirectories from being written onto the Cache disk/pool." I would really like to hear what (other?) people from Limetech think about this, so I will file a bug report; my quick search on the forum didn't give a relevant result for the issue.
  19. Turning the cache for this share from No to Yes is a valid option, and I will certainly do that, thanks. I will also consider moving the data manually just in case (I did this the previous time I noticed this).

     However, I do not agree that it is "just a Linux thing, nothing to do about it ---> your problem for using Linux", as Unraid is not just a script I downloaded from the internet and installed on my server. It is a complete system that Limetech (only?) has full control over and knows the internal workings of. Unraid is advertised on its home page as "Unraid is an operating system for personal and small business use...", so it is not just a script but an operating system whose internals I should not have to know. It also says here https://unraid.net/product/data-storage-users "Stop worrying about losing your data due to storage device failure. Unraid saves your data even if one device goes bad." Is my data in this scenario protected against device failure, even though I have it all configured as Unraid suggests? I could have understood the "that is how it is" argument if I had downloaded FreeNAS and installed it on my own, and thus had to know its internals to maintain it properly.

     This is a pretty serious issue for me, as my next scenario was to have a "backup-network-share" that is cache-only and shared on the network, where all devices would store backups, and then a docker script that would move all of that to a "backup-non-network-share" that should be neither cached nor shared on the network, as protection against malware/ransomware. Now, if I am not mistaken, this is exactly the scenario where I would have thought my data was protected, but it actually is not, because "Linux is free to decide whether to just rename its path."

     Again, I understand that it is a Linux thing, that it is how it works and how the world functions, but I do not agree that it should be ignored by the Unraid team, even if that means disabling the option to not use the cache drive when one is present, or moving the data with the mover even though the share is set to No, or at least showing one big red rectangle saying that data might end up somewhere unprotected "because Linux". Or perhaps check for this scenario when running a parity check, or at least give a link to some page describing the issue, or do anything about it, something... It seems impossible to me that this is an edge case, and that it should be so easily accepted as "everyone should know about it, that is how it works - Linux." I do not feel like I should have to install some community plugin to protect me from Linux itself. What if I hadn't installed it? I would be living in the illusion that my data is protected on the Unraid server. (Btw, thanks for Community Apps, they are awesome and a main reason I purchased Unraid in the first place.)

     The bottom line is that I installed everything as Unraid would want me to: a parity drive, a data drive and a cache drive. No custom tweaking, no hacking, all by the book. And I just moved files from one share to another, for me a real (common) world scenario. Maybe I am under the illusion that data is protected with Unraid, as I do not see what I have done wrong here to compromise my data?
  20. Thanks @itimpi, you perfectly described the issue, as I do move files from downloads (on a cached share) to the media share (not cached). Now the question is how to solve this issue and have all my files in their desired locations? It would be OK if the mover could perhaps fix this automatically, as this doesn't look like an isolated and unlikely scenario to me (it would be even better if it didn't occur in the first place, but I understand that technology has its limitations). This basically means that if one is using the cache drive, data can end up all over the place and some part of it will not be parity protected. Should this perhaps be escalated to the Unraid team via some ticket or something?
  21. Hi. I have gotten an email a few times from the Fix Common Problems plugin saying "Share media set to not use the cache, but files / folders exist on the cache drive", and that is true: I do have the share "media" set to No for Use cache, and when I check the files from Krusader I do see a media folder in /mnt/cache. It contains some files I moved over the network using MusicBrainz Picard, and it also contains some media that I moved through Krusader (there I never access shares outside of the /mnt/user folder, although I do have the entire /mnt folder mounted in Krusader, basically just to keep an eye on things like this; Krusader is set up per SpaceinvaderOne's video). I checked all docker containers, and none has access to the media share except Krusader. That would then be easy to blame, but I do move some files using my Windows machine and MusicBrainz Picard via the network share, and over the network I only expose a few shares, not the cache drive itself. When I run the mover, nothing happens, the files stay split like that for weeks. Where should I start looking, what should I check first? Thanks!
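     For anyone wanting to check the same thing from the console instead of Krusader, a couple of commands that should show it (a sketch; the paths match my setup, adjust the share and disk names):

         # list everything from the media share that is physically on the cache drive
         find /mnt/cache/media -type f -ls
         # compare with what the same share contains on the array disk
         find /mnt/disk1/media -type d | head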
  22. Hi, is anyone familiar with docker who might be able to make/fit/test this service on Unraid? I have it installed on my workstation, but it would be great if it could be on the Unraid server, thus running independently (maybe even automatically). I am pretty new to Unraid and dockers (and Nessus, in fact). Or does anyone have a suggestion for a similar service that can scan the entire network and give some security report and advice? Thanks!
  23. Hi @binhex, is it possible to access/use the clipboard from the browser or client when accessing the Krusader container? Meaning, can I open a file in the Krusader editor and copy/paste it to my PC?
  24. Hi @binhex, great work as always. Any chance of having this container use some proxy for outgoing communication? I am using Privoxy from your delugevpn and it works great; however, in my country some subtitle providers are being blocked, so I would like Plex to use Privoxy when fetching movie/song metadata from the internet. I read online that Plex would use the host configuration, and thus there is no such config in Plex itself, so perhaps you can do some docker magic here? Thanks again for the great containers and, as usual, buying you a beer!
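     To be concrete about what I mean by docker magic, something along these lines (only a sketch; whether Plex actually honors these proxy variables is an assumption on my part, and the address/port are examples from my Privoxy setup):

         # extra environment variables for the Plex container
         -e HTTP_PROXY=http://192.168.1.10:8118 \
         -e HTTPS_PROXY=http://192.168.1.10:8118 \
         -e NO_PROXY=localhost,127.0.0.1,192.168.1.0/24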