deienache

Members
  • Posts

    4
  • Joined

  • Last visited

  1. Hi @onno204, Indeed, it looks like you did way more research into this issue than I did. Personally, my problems started a few months back, I think in February (not exactly sure), when I first upgraded to 6.12.2. At that point I had neither Tdarr (the only -arr ever installed on this server) nor Grafana or Prometheus; those two were only installed later to try to debug the slowness. I agree with you that the -arr suite can cause this issue, but only because of the high I/O they require. I have personally never used SABnzbd, so my issue is definitely not coming from there.

     In my testing, this is what I did:
     - A full appdata/docker/VM backup, then deleted everything, disabled Docker, disabled VMs. Issues persisted when doing high I/O operations.
     - After this, decided to uninstall ALL plugins, so I was left with a server that was as clean as it gets without a fresh install. Issues persisted when doing high I/O operations.
     - Disabled bond0 and removed the extra NIC (an old Intel PRO/1000 PT set up in transmit load balancing). Left the server only on eth0 (Realtek, but meh, consumer board). Issues persisted when doing high I/O operations.
     - Since ZFS is a new implementation, replaced my 3 M.2 NVMe cache drives with a single XFS-formatted M.2 NVMe drive. Issues persisted when doing high I/O operations.
     - Recreated the Unraid USB with a fresh 6.12.10 install using the USB Creator tool and only replaced the config file. Issues persisted when doing high I/O operations.

     Considering all this, I decided to roll back to 6.11.5. Installed a fresh copy with the Unraid USB Creator and replaced the config folder with the old one. Everything went smoothly and the issue is gone, now with all previous plugins and dockers running. I don't have Community Apps working, but that's not an issue for me at this time. The only thing that changed was the Unraid version, so it's clearly an issue in the OS. How and why not everyone is affected beats me, but it may have something to do with kernel drivers for different hardware platforms. I'm not that technical, unfortunately.

     I agree with you that a rollback is not the best option, but I personally decided not to continue with Unraid. I have an order for a 730xd and some drives coming, which will be running a normal raidz2 pool using another solution. This is not because I have not loved Unraid, I absolutely did, and considering how few people are encountering this issue, I am positive it will keep growing. It is, after all, a perfect solution for a home server. However, my use case has changed since I deployed this box, and now data integrity and IOPS are way more important. Not a flaw of Unraid, it's just not built for that.

     I am unable to keep testing this, as I mentioned before I already went back to 6.11.5. What I did not test is high I/O directly against the disk shares, bypassing /mnt/user, as I didn't have the time; a rough way to try that is sketched just below. Might be worth looking into. I hope you will be able to find the issue!
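     A minimal sketch of that untested check (the share name "Media", the disk number and the file name are placeholders for whatever actually exists on the server): read the same large file once through the direct disk path and once through /mnt/user, with the inotifywait watch on /boot mentioned earlier in this thread running in another shell, and compare how much the flash read counter climbs in each case.

         # Read via the direct disk path, drop the page cache so the second run
         # hits the disks again, then read the same file via the user share.
         dd if=/mnt/disk1/Media/bigfile.mkv of=/dev/null bs=1M status=progress
         sync; echo 3 > /proc/sys/vm/drop_caches
         dd if=/mnt/user/Media/bigfile.mkv of=/dev/null bs=1M status=progress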
  2. Ok, never mind, this thread can be closed/archived, whatever needs to happen. I downgraded to 6.11.5 and all the problems from the last months went away: the one described above, not being able to open another GUI page while a plugin or Docker update was running, and a few other little things. In my personal opinion (I know others might not agree), 6.11.5 is the last true version of Unraid. The rest is just a rushed-out-the-door mess... Sad to see Limetech go downhill with 6.12, and then the horrible April Fools' joke in Community Apps messing with MY piece of software that I BOUGHT, not rented, not subscribed to... anyway, that's a discussion for another day. I will have to go somewhere else, as this piece of software is clearly no longer a safe home for my data... Anyway, rant over, have a great day everyone!
  3. So what I did was try to eliminate ZFS from the equation: I replaced the three M.2 ZFS cache drives with a single M.2 drive using XFS, and it made no difference... Does anyone have any clues on where to start troubleshooting? Thanks for any tips.
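     One rough starting point (a sketch only, not something verified on this exact box, and the device name sda below is just an assumption): watch the read counter of the flash device while generating heavy array I/O, to confirm the extra flash reads really line up with the load rather than with something running on a schedule.

         # Find which block device the Unraid flash drive is (it is normally
         # labelled UNRAID), then sample its line in /proc/diskstats every 5 s
         # while a heavy copy or scan is running. The first counter after the
         # device name is "reads completed".
         ls -l /dev/disk/by-label/UNRAID
         watch -n 5 "grep -w sda /proc/diskstats"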
  4. Hello, this is my first post here, as until now I was always able to find the solution in other people's threads. This time seems to be different. I am suffering from exactly the same issue described in these 2 other threads:

     I am currently on Unraid 6.12.10, but have had this issue since I upgraded from 6.11.5 to 6.12.2 some time ago. It only becomes a problem when I have high disk I/O, for example when scanning some files or, right now, reading from multiple disks. Using the same command as in the other topics:

         inotifywait --timefmt %c --format '%T %_e %w %f' -mr /boot

     I see some accesses to the disk share configs and then a lot of accesses to /boot/bzfirmware (Thu May 9 10:04:27 2024 ACCESS /boot/ bzfirmware). At the moment the server only has 16 hours of uptime and already 500k reads from the flash drive. Fortunately, no writes are caused by this issue. Has anyone figured out what is causing this behaviour? I am at a loss trying to figure this one out... Attached diagnostics. Thank you in advance!

     titan-diagnostics-20240509_1257.zip
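     For anyone who wants to put a number on those events, a rough way to count them over a fixed window (the 10-minute window below is arbitrary, and the counting is just a convenience wrapped around the same inotifywait call, not something taken from the other threads):

         # Watch /boot recursively for 10 minutes during high disk I/O and
         # count how many of the reported events touch bzfirmware.
         timeout 600 inotifywait --timefmt %c --format '%T %_e %w %f' -mr /boot | grep -c bzfirmware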