About Videodr0me


  1. I do not use any of these unofficial builds, nor do I know what they are about or what features they provide that are not included in stock Unraid. That being said, I still feel that the devs who release them have a point. I think the main issue is these statements by @limetech: "Finally, we want to discourage "unofficial" builds of the bz* files." This is corroborated by the account of the 2019 PM exchange: "concern regarding the 'proliferation of non-stock Unraid kernels, in particular people reporting bugs against non-stock builds.'" Yes, technically it's t…
  2. The problem persists on 6.9 beta25. It seems to be related to the Docker service. Turning Docker off completely (Settings page -> Docker) solved it. This is strange because all containers were already stopped, so it may be related to the Docker service itself, whether containers are running or not.
  3. Same here. Shutting Docker down (Settings page -> Docker) fixed it here, too. Strange, because all containers were already stopped, so it must be something with the Docker service itself.
  4. Same thing here with beta 25. The drive temperature of the first parity drive is misreported (5588 degrees). Unfortunately the drive spun down before I could take a screenshot. SMART data was normal.
  5. Just another update after 31 days of uptime with 6.9.0-beta22. No page faults. I consider this issue fixed (at least in beta22).
  6. Installed 6.9.0 beta 22 and so far, so good. No page faults yet. Will keep you posted.
  7. I wonder if I should go back to 6.8.3 or wait for a new beta. Is there any rough timeframe for when the next beta will be dropped on us?
  8. As I stated, I did not reboot at all. The system has been up for 44 days, and in this time the page fault occurred twice. I find it highly unlikely that the page faults have anything to do with these plugins, as others without these plugins report exactly the same page faults. The next time I reboot I might try safe mode for a brief period, but as these page faults occur rarely, I would not pin too much hope on reproducing them. I never had any of these with previous versions - not even with the other RC that used a 5.x kernel.
  9. I did not reboot the machine, just to see if more of these page faults show up. And yes, it happened again. Will this be fixed in the next beta?

     May 5 08:46:39 TOWER kernel: kernel tried to execute NX-protected page - exploit attempt? (uid: 0)
     May 5 08:46:39 TOWER kernel: BUG: unable to handle page fault for address: ffffc900016b3e98
     May 5 08:46:39 TOWER kernel: #PF: supervisor instruction fetch in kernel mode
     May 5 08:46:39 TOWER kernel: #PF: error_code(0x0011) - permissions violation
     May 5 08:46:39 TOWER kernel: PGD 276c19067 P4D 276c19067 PUD 276c1a067 PMD 276489067 PTE 800000019db6
  10. Here are the diagnostics.
  11. When checking the current server log I found this:

      Apr 5 16:19:37 TOWER kernel: BUG: unable to handle page fault for address: 000000005cefea34
      Apr 5 16:19:37 TOWER kernel: #PF: supervisor instruction fetch in kernel mode
      Apr 5 16:19:37 TOWER kernel: #PF: error_code(0x0010) - not-present page
      Apr 5 16:19:37 TOWER kernel: PGD 274bb3067 P4D 274bb3067 PUD 0
      Apr 5 16:19:37 TOWER kernel: Oops: 0010 [#1] SMP NOPTI
      Apr 5 16:19:37 TOWER kernel: CPU: 2 PID: 22765 Comm: shfs Not tainted 5.5.8-Unraid #1
      Apr 5 16:19:37 TOWER kernel: Hardware name: Insyde AS Series/Type2 - Board Product Name, BI
  12. I simply pause the parity check whenever I really need full server performance. My main concern with the unnecessarily long parity checks is not only performance, but that resources are not utilized efficiently, leading to higher energy consumption, more wear on drives, more noise, and chassis/drive temperatures that stay higher for longer periods of time. This is awkward, as it seems to be a solvable problem. Granted, it affects fast-CPU systems less, but I always liked Unraid because it worked well on low-power, older-CPU systems. It's annoying that one CPU core is maxed out while the others…
  13. No improvement with 6.9 beta1. Parity speeds are still at about 55-65 MB/s instead of approximately 90 MB/s. I rechecked very early logs (pre-6.x) and back then I had over 100 MB/s. I hope this gets addressed soon, as a parity check now takes 36-48 hours.
  14. Retested with 6.8.0 and it's as slow as the 6.8.0 RCs. This rules out the 5.x kernels as the reason for the slowdown and points to the new multi-stream code as a potential cause. It seems not to play well with slowish CPUs in particular. Also, one CPU core is maxed out at 100% while the others are close to idle during a parity check. Maybe it's possible to distribute the load more evenly.
  15. Same here, only more drastic. With 6.7.x I got 85-90 MB/s; now, with the mq-deadline scheduler, only 55 MB/s, and with the scheduler set to none, 67 MB/s. I only have a slow J1900, and it seems the new code does not play well with slow CPUs. Diagnostics added. The system has eight 8 TB data drives and two 8 TB parity drives. I tried various disk settings (num_stripes, que_limit, NCQ, nr_requests) with no improvement over the defaults.
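The 5588-degree reading reported in post 4 above is almost certainly a decoding artifact rather than a real temperature. A simple plausibility filter when parsing `smartctl -A` output catches values like that; this is an illustrative sketch (the sample lines below are fabricated for demonstration and are not from the affected drive), not Unraid's actual monitoring code:

```python
def parse_temp(smart_output: str):
    """Extract the raw Temperature_Celsius value from `smartctl -A` text.

    In the SMART attribute table the raw value is the tenth column.
    """
    for line in smart_output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == "Temperature_Celsius":
            return int(fields[9])
    return None


def plausible_temp(temp_c):
    # Hard drives report roughly 0-100 C; anything outside that range is
    # almost certainly a decoding artifact, not a real temperature.
    return temp_c is not None and 0 <= temp_c <= 100


# Fabricated sample lines in smartctl's attribute-table format.
normal = "194 Temperature_Celsius 0x0022 036 048 000 Old_age Always - 36"
bogus = "194 Temperature_Celsius 0x0022 036 048 000 Old_age Always - 5588"
```

With these samples, the 36 C reading parses and passes the filter, while the 5588 reading is rejected as implausible rather than being displayed as-is.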
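The 36-48 hour figure in post 13 is easy to sanity-check: a parity check must read every sector of the largest drive, so its duration scales inversely with the average speed. A back-of-the-envelope calculation (assuming decimal terabytes, as drive vendors use):

```python
def parity_check_hours(drive_tb: float, avg_mb_per_s: float) -> float:
    """Hours needed to read one full drive at a sustained average speed."""
    total_mb = drive_tb * 1_000_000  # 1 TB = 1,000,000 MB in vendor (decimal) units
    return total_mb / avg_mb_per_s / 3600


# An 8 TB drive at the observed ~60 MB/s versus the earlier ~90 MB/s:
slow = parity_check_hours(8, 60)  # ~37 hours
fast = parity_check_hours(8, 90)  # ~25 hours
```

So the reported speed drop alone accounts for roughly half a day of extra parity-check time per pass on an 8 TB array.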
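On the load-distribution question raised in posts 14 and 15: single parity is a byte-wise XOR across the data drives, and that computation decomposes naturally into independent stripe ranges that could, in principle, be spread across cores. A toy sketch of that decomposition (illustrative only - Unraid's md driver does this in kernel C, and Python threads would not actually speed up pure-Python XOR because of the GIL):

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce


def xor_parity(stripes):
    """Byte-wise XOR across all drives for one chunk of the byte range."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*stripes))


def parallel_parity(drives, chunk_size, workers=4):
    """Split the byte range into chunks and compute each chunk's parity independently."""
    length = len(drives[0])
    chunks = [tuple(d[off:off + chunk_size] for d in drives)
              for off in range(0, length, chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(xor_parity, chunks))


# Three toy "drives" of 8 bytes each; their parity is the byte-wise XOR.
drives = [bytes([v] * 8) for v in (0x0F, 0xF0, 0xAA)]
parity = parallel_parity(drives, chunk_size=4)  # 0x0F ^ 0xF0 ^ 0xAA = 0x55
```

The point is only that the chunks are independent; in practice the bottleneck may just as well be the synchronous read scheduling as the XOR arithmetic itself, which is something only profiling the actual driver could settle.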