magicmagpie

Members • 3 posts

  1. Thank you @trurl, that was very informative. I spent some time looking into this, and after a few restarts the error went away on its own. I am not sure what changed to fix it; I suspect the warning may have been spurious, since I could not find any data that had failed to be written to storage, which is not too surprising given the change of systems. I did order some additional RAM and reexamined some of my Docker settings (for example, reducing the Docker vDisk size from 60GB to 20GB), so hopefully the error will not return.
  2. I recently moved Unraid from one machine to another. This involved moving the flash drive and the array drives (parity and two data disks), but I did not move the NVMe drive I was using as the exclusive cache pool disk, since the new machine has an SSD I wanted to use instead. This is the basic procedure I followed:
     • Moved everything I could from the cache pool to the array using the mover. This did not move everything (despite having the shares configured to move from cache to array), so I used UnBalance to move the rest.
     • Moved the flash drive and array disks to the new machine.
     • Set the shares to move from array to cache and ran the mover again. As before, this did not move everything, so I used UnBalance for the remainder.
     I am pretty amazed at how smoothly the transition went, but Fix Common Problems is now reporting "Rootfs file is getting full (currently 100 % used)". Following the recommendations in this post, I am posting in General Support along with my diagnostics and the results of the memory storage script (a rough sketch of that kind of rootfs check appears after these posts). Notably, the machine transfer was a downgrade for Unraid: I moved from a machine with 64 GB of DDR4 RAM and a 2TB NVMe cache drive to one with 16 GB of DDR3 RAM and a 256 GB SSD cache drive. The new machine is reporting 74% RAM usage and 104 GB of cache drive usage. I am looking through the logs to see if anything jumps out; in the meantime, any help is greatly appreciated. sanctum-diagnostics-20230213-1420.zip mem.txt
  3. Hi everyone, I have been using HandBrake for a few weeks now and it has been great. However, twice I have checked progress on an overnight job and found that my Unraid server had been put into a semi-nonfunctional state that could only be resolved by restarting the server. The diagnostics point to HandBrake as the culprit. My logs from last night show that the following error occurred at 12:18 AM:

     May 4 00:18:15 Sanctum kernel: BUG: unable to handle page fault for address: fffff8efd4e69688
     May 4 00:18:15 Sanctum kernel: #PF: supervisor read access in kernel mode
     May 4 00:18:15 Sanctum kernel: #PF: error_code(0x0000) - not-present page
     May 4 00:18:15 Sanctum kernel: PGD 0 P4D 0
     May 4 00:18:15 Sanctum kernel: Oops: 0000 [#1] SMP NOPTI
     May 4 00:18:15 Sanctum kernel: CPU: 4 PID: 20499 Comm: HandBrakeCLI Tainted: G W 5.10.28-Unraid #1

     The rest of the error is in the attached diagnostics. Has anyone seen this before? sanctum-diagnostics-20220504-0700.zip
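
The sketch referenced in post 2 follows. It is a minimal, hypothetical illustration of the kind of check the "Rootfs file is getting full" warning calls for: on Unraid the root filesystem is RAM-backed, so 100% rootfs usage means something is writing outside the mounted drives (such as /mnt or /boot) and consuming memory. This is not the memory storage script attached as mem.txt and not part of Unraid or Fix Common Problems; the helper names (same_dev, dir_size) are made up for illustration. It reports overall rootfs usage and totals the top-level directories that live on the root device so the largest consumer stands out.

#!/usr/bin/env python3
# Illustrative sketch only: report rootfs usage and the largest top-level
# directories that actually live on the root filesystem. Assumes a Linux
# host such as Unraid, where rootfs is RAM-backed, so space used here is
# memory used.

import os
import shutil


def same_dev(path, root_dev):
    """True if `path` lives on the same filesystem (device) as /."""
    try:
        return os.lstat(path).st_dev == root_dev
    except OSError:
        return False


def dir_size(path, root_dev):
    """Sum file sizes under `path` without crossing into other mounts."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(path, onerror=lambda err: None):
        # Prune subdirectories that are mount points for other filesystems
        # (e.g. /mnt, /boot, /proc, /sys) so only rootfs usage is counted.
        dirnames[:] = [d for d in dirnames if same_dev(os.path.join(dirpath, d), root_dev)]
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                st = os.lstat(full)
            except OSError:
                continue
            if st.st_dev == root_dev:
                total += st.st_size
    return total


def main():
    usage = shutil.disk_usage("/")
    pct = 100 * usage.used / usage.total
    print(f"rootfs: {usage.used / 2**20:.0f} MiB of {usage.total / 2**20:.0f} MiB used ({pct:.0f}%)")

    root_dev = os.lstat("/").st_dev
    sizes = []
    for entry in os.scandir("/"):
        if entry.is_dir(follow_symlinks=False) and same_dev(entry.path, root_dev):
            sizes.append((dir_size(entry.path, root_dev), entry.path))

    # Largest consumers first; on a healthy system these should all be small.
    for size, path in sorted(sizes, reverse=True)[:10]:
        print(f"{size / 2**20:10.1f} MiB  {path}")


if __name__ == "__main__":
    main()

Run it as root so permission errors do not skew the totals; directories on other filesystems (for example /mnt, /boot, /proc) are excluded by comparing their device IDs with the root device rather than by hard-coding paths.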