techystreamer

Members · 30 posts
Everything posted by techystreamer

  1. I have attached diagnostics and RAM usage. I'm getting an undefined error in the notifications, and RAM usage creeps up over the course of a few hours. The only thing I can think of is the rclone mount. Any indication of it, or something else, in the diagnostics? tower-diagnostics-20240408-0835.zip ramUsage.txt
  2. How do I reverse the echo command, or remove the blacklist on the amdgpu (see the blacklist-removal sketch after this list)? It wasn't the issue. I ended up having to do the parity sync in safe mode, and I rolled the OS back to 6.12.6. It seemed to quit about 80 to 90% of the way through the parity sync. Took a few times to get it done. Made sure to have reconstruct write on to speed it up too. Haven't seen the issue again, though.
  3. So this just goes in the terminal? I guess that would be temporary then?
  4. I use the GPU sometimes to access the bare-metal SSD Windows system I have on the PC. Would this affect that? Also, I see where it is noted in the release notes, but where would that be implemented, and what would I input for my case? Thanks.
  5. I saw that and wondered, but I haven't changed anything GPU-related recently. The crash happened within the last 10 hours of the log, overnight into the morning.
  6. Here is the diagnostics. tower-diagnostics-20240306-0745.zip
  7. I enabled the syslog and it has crashed again. Do I pull the diagnostics the same way? Does it pull the results from the flash, or do I need to access them another way?
  8. It was their update issue. Fixed the next day.
  9. Anyone using MinIO? It looks like an update was applied a day ago, and mine isn't seeing the objects in the pool any longer.
  10. I wasn't aware of a backup of the system config other than the backup of the flash. I am not familiar with wipefs (see the wipefs sketch after this list). I removed the drive since it was mostly empty and didn't contain data I wanted. It was doing a parity rebuild for over a day, then the system was not accessible again. So I restarted, restarted the array and the parity rebuild... and the system was inaccessible again within a couple of hours. I attached diagnostics. Hopefully it shows what is causing the inaccessibility. tower-diagnostics-20240302-1729.zip
  11. I see there is an sdj directory on the system; is that the directory from before I changed over to ZFS? Is this something that I can work with? Not sure how to restore the disk6 pool.
  12. Ok. So is disk6 its own pool? How do I destroy it and then restore?
  13. Would using the zdb command provide any good info? I read about it, but I'm not sure how to use it on my setup (see the zdb/zpool sketch after this list).
  14. Disks 1 and 2 are XFS, as I kept them with the original data.
  15. Had to reboot to get the diagnostics, as it froze at the point above. tower-diagnostics-20240229-1326.zip
  16. Yes, same issue on the original disk 6. Trying to pull diagnostics seems stuck here.
  17. Upon accessing the server, Unraid was not running processes, so I rebooted. After rebooting, a ZFS disk in the array was stuck on mounting. I checked all connections, rebooted, and got the same result. I tried to replace the drive, thinking there was an issue with it, since I had an uncorrectable I/O failure. How do I get the drive to mount, and should it be the original drive, since the drive I tried to replace it with won't mount either?
  18. It seems to reach the max and stay within the original settings, from what I can see in that screenshot. Is there a way to determine from that diagnostics report what could be causing the RAM increase problem?
  19. I know this. But yesterday I noticed that the continued rise in RAM happens after it hits 100%: it drops from 100, rises again to 100, and then the RAM incrementally rises a bit. This continues until the RAM is maxed out and the system becomes unstable. I have used ZFS for about 4 months without issue and understand that ZFS uses up to 100%. What's unusual to me is that it seems to translate into extra RAM use in the RAM bar, which then keeps rising until the system is unstable.
  20. Not sure where to look further. I have seen the RAM usage increase to the point of system instability after the ZFS usage rises and then falls. I see the ZFS usage rise to max, then the RAM increases in small increments after each ZFS 100% rise and fall, once some of the Docker apps are started (see the ARC sketch after this list). I have attached a diagnostics file. tower-diagnostics-20240131-0751.zip
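
A minimal sketch for the amdgpu blacklist question (item 2), assuming the original echo command wrote a "blacklist amdgpu" line into a file under /boot/config/modprobe.d/ on the flash drive (the folder Unraid applies at boot); the filename amdgpu.conf below is a guess, so check what is actually present first:

    # See which modprobe config files are stored on the flash drive
    ls -l /boot/config/modprobe.d/

    # Inspect the file that mentions amdgpu (filename is hypothetical)
    cat /boot/config/modprobe.d/amdgpu.conf

    # If it only contains the blacklist line, remove the file and reboot
    rm /boot/config/modprobe.d/amdgpu.conf
    reboot

If the blacklist was instead added as a kernel parameter, it would appear as something like modprobe.blacklist=amdgpu in the syslinux configuration on the flash drive and would need to be removed there.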
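A minimal sketch for the wipefs question (item 10), in case leftover filesystem signatures on the removed drive are what need clearing: run without options, wipefs only lists signatures, and --no-act previews what --all would erase. The device name sdX1 is a placeholder, not taken from the diagnostics:

    # List filesystem/RAID signatures on a partition without changing anything
    wipefs /dev/sdX1

    # Dry run: show what --all would erase, without erasing it
    wipefs --no-act --all /dev/sdX1

    # Only after double-checking this is the right device:
    # wipefs --all /dev/sdX1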
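A minimal sketch for the zdb/disk6 questions (items 11-13): before destroying anything, it is worth checking whether ZFS still sees an importable pool and whether the on-disk labels are intact. The device name is a placeholder, and the pool name disk6 should be confirmed against the zpool import output:

    # Scan for pools ZFS considers importable (read-only, makes no changes)
    zpool import

    # Dump the ZFS labels from the partition that held the pool
    zdb -l /dev/sdX1

    # If the pool is listed by name, try a read-only import first
    zpool import -o readonly=on disk6

If the labels come back intact but the import still fails, the zdb output is the kind of detail worth posting back in the thread.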
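A minimal sketch for the RAM-creep questions (items 18-20): one way to separate normal ZFS ARC growth from a real leak is to watch the ARC's current size and ceiling against total memory over a few hours. The paths below are the standard OpenZFS ones on Linux; the 8 GiB value is only an example cap, and the setting lasts until reboot:

    # Current ARC size and its ceiling, in bytes
    awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats

    # Current zfs_arc_max module parameter (0 means the built-in default, roughly half of RAM)
    cat /sys/module/zfs/parameters/zfs_arc_max

    # Temporarily cap the ARC at 8 GiB to see whether the creep continues past it
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

If overall RAM keeps climbing well past the reported c_max while the ARC itself stays flat, the leak is likely outside ZFS, and the rclone mount from item 1 would be a reasonable next suspect.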