About ZekerPixels

  1. I thought both cache drives were on the motherboard, but I just checked: one cache drive uses the motherboard SATA and the other is connected to the LSI 9211. The disk reported is just the disk it tries to write to; the only consistent factor is the cache. I'm sure the cache is messed up, it now reports 2TB (it is 1TB). Anyway, I need to figure out how I can copy everything from the cache to an external drive or something. edit: OK, the cache drive ending in 208 is definitely fucked, but I think I can save most of the data from the other drive. Unfortunat…
  2. I have the parity disks removed from the array, otherwise I need to cancel the parity check every time. And we can also rule out that it has anything to do with generating parity when moving to the array.
  12:38 turn on syslog and reboot
  12:41 start array
  12:43 download something to a cache-only folder using a docker
  12:45 crashed and automatic reboot
  12:48 start array
  12:51 start mover
  12:51 crashed and automatic reboot
  12:55 generate "diagnostics1", disable docker and reboot
  12:58 start array (docker and VMs are disabled)
  12:00 start mover
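A timeline like the one above can also be stamped automatically, which makes it easier to match crash times against the syslog afterwards. A small sketch (the log path and step names are made up):

```shell
# Append each test step with a timestamp so crash times can later be
# compared against the syslog entries.
LOG=/tmp/crashtest.log
: > "$LOG"                 # start with an empty log
step() { printf '%s %s\n' "$(date '+%H:%M')" "$*" >> "$LOG"; }
step "start array"
step "start mover"
cat "$LOG"
```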
  3. As for what the issue could be: it can complete a parity sync without any issues. I would think temperature is good and power is good too, because during the parity check there is more CPU utilization and all disks are doing something, which of course requires more power. I don't have an extra PSU or any spares, so I can't really swap out parts to try something. The syslog that I posted should contain two crashes. Anyway, I will make a new one and this time write down the time of events; give me about an hour.
  4. I had no solution or any clue what the issue could be, so I made a fresh USB with 6.9.2, quickly set up my configuration, shares, etc., and it crashes. So I have a fresh Unraid install with the same issue as before. To me, that points to a hardware issue, but what could cause it? I removed the other files; these are the new diagnostics and syslog. I'm not sure of the time of the first crash, the second one was at 02:20.
  5. I also thought it could be the RAM, so yes, I have run memtest: with single sticks and with both together, resulting in no errors after 8 passes in each configuration. Also, the server can complete a parity check without any issues; if it were the memory, it probably shouldn't be able to do that, because with mover (or any other method moving from cache to array) it crashes every time within a minute. The only weird line in the syslog is line 169, which is also close to the crash, but it doesn't show anything because it's also there when it doesn't crash: "ntpd[1758]: kernel reports…"
  6. The array is an 8TB and two 4TB disks; the cache is two 1TB disks. Eris is two different-sized SSDs, 120GB and 240GB, mirrored, so effectively 120GB, and yes, it is using the default btrfs raid1. Appdata, domains and system are all on this pool.
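For reference, btrfs raid1 keeps two copies of every block, so with unequal devices the usable space is capped by the smaller one. On the server itself you could confirm the allocation with `btrfs filesystem usage /mnt/cache` (path assumed); the arithmetic is simply:

```shell
# btrfs raid1 = two copies of everything, so usable space is limited by
# the smaller device (120GB here, matching the 120GB/240GB Eris pool).
DEV1_GB=120
DEV2_GB=240
USABLE=$(( DEV1_GB < DEV2_GB ? DEV1_GB : DEV2_GB ))
echo "usable: ${USABLE}GB"
```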
  7. Yes, that would have been a great idea. Updated, this time with the array running.
  8. Hi all, the server has a problem: it crashes every time within a short time after running mover. I have been using this system with 6.9.2 from release and it worked fine before, and I have already done the following:
  - parity check
  - docker safe permissions
  - fix common problems
  - disabled VMs
  - disabled Dockers
  - mover, unbalance, krusader
  - memtest86, no issues over a couple of passes
  With VMs and Dockers disabled it still crashed every time within a minute of invoking mover. I hope you guys have an idea what the issue coul…
  9. I have never really done anything with GitHub, but gave it a try with a contribution of the translation for the Recycle Bin. If it doesn't show up or if anything is wrong, let me know.
  10. I set Pi-hole as DNS on my PC, and the domain requests are indeed coming from my PC and not Unraid. I copied all of the indexer domains listed in the Jackett webui and made them into a blocklist (not reliable, I know, just to test something out). Testing it again, all the domains from the Jackett UI get a blocked status in Pi-hole. Testing the indexers still works successfully with the domains blocked, indicating Jackett is using the VPN, and that the request made by my PC is completely unnecessary.
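For anyone repeating this test: Pi-hole's default blocking mode answers a blocked name with 0.0.0.0 (or :: for AAAA) instead of a real address, so you can tell blocked from resolved directly. A sketch — the domain and Pi-hole IP in the comments are placeholders:

```shell
# Query Pi-hole directly for one of the indexer domains, e.g.:
#   dig +short tracker.example @192.168.1.2
# Then interpret the answer:
ANSWER="0.0.0.0"    # substitute the actual dig output here
case "$ANSWER" in
  0.0.0.0|::) echo "blocked by Pi-hole" ;;
  *)          echo "resolved normally" ;;
esac
```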
  11. Quite late here as well, goodnight. One thing to note: it is not only with Firefox. At first I thought I misconfigured something, but you saw those settings, and I'm not doing something stupid in the config. I'm also not doing anything really advanced, just Pi-hole. There are lots of people using it; someone would have noticed if it were a big issue. That's why I suspected the browser at first, and why I tried some different browsers on different devices. I'm a bit out of things to try for now; maybe I can come up with something tomorrow. The first time I noticed th…
  12. It was removed fairly quickly, but I could imagine using a somewhat more difficult password; it sounded a bit like the default welcome123 password company admins tend to use. My Jackett log ends with the following; it's a fresh install with everything on default settings:
  Hosting environment: Production
  Content root path: /usr/lib/jackett/Content
  Now listening on: http://[::]:9117
  Application started. Press Ctrl+C to shut down.
  I think the proxy settings binhex referred to are in the webui (@binhex's previous post). I discovered something: I mentioned using Firefox on m…
  13. I removed the privoxy container by accident, then also removed the Jackett container and deleted the folders in appdata. I reinstalled both using the same settings in the template as before and put the OpenVPN files back. Once installed, I let it run for a bit and restarted, then stopped both containers and made sure privoxyvpn is started first. A quick check in the console returns the VPN IP. I did not change any other settings (by default the proxy is set to disabled in Jackett). Now on my PC, using the Firefox browser, I go to ip:9117 and check Pi-hole, and it doesn't show any of the indexers now.
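The console check mentioned above can be scripted. A sketch, assuming binhex containers — the container name `binhex-jackett`, privoxy's usual port 8118 on the Unraid host, ifconfig.io as the whats-my-ip service, and both IPs below are all assumptions/placeholders:

```shell
# From inside the container that shares privoxyvpn's network:
#   docker exec binhex-jackett curl -s ifconfig.io
# Or through privoxy itself from any machine on the LAN:
#   curl -s -x http://UNRAID_IP:8118 ifconfig.io
# Compare against your real WAN IP -- they should differ:
VPN_IP="203.0.113.7"     # hypothetical output of the curl above
WAN_IP="198.51.100.9"    # hypothetical home WAN IP
if [ "$VPN_IP" != "$WAN_IP" ]; then
  echo "traffic leaves via the VPN"
else
  echo "WARNING: VPN is NOT being used"
fi
```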
  14. I didn't want to post a huge picture collection here, so I uploaded them to https://imgur.com/a/qGysAeM. In Jackett the proxy is set to disabled, so it should take the network from Unraid, which points to the privoxy container. To be sure it started up the right way, I rebooted the server, started privoxyvpn and waited until it said "listening on port" before starting Jackett.