Community Reputation

0 Neutral

About Meph

  • Rank
    Advanced Member


  1. Just want to add my experience to this. I had the same problem with lots of writes to the cache via loop2, and I can confirm it was the official Plex docker that caused the issues on my system. I changed to linuxserver/plex and the issues went away.
  2. I ended up changing these variables by hand. I set dirty_background_ratio to 5 (to start write-back earlier) and dirty_ratio to 90. That means I can copy around 10 GB of data before a slowdown is noticed, which is perfect for my setup. Before, I could only copy around 2 GB or so before the transfer speed dropped from 100 MB/s to around 35 MB/s. Now I can transfer at full speed for about 1.5 minutes, or around 10-14 GB of data. vm.dirty_background_ratio = 5 vm.dirty_ratio = 90 Is there any downside to using this high a dirty ratio? I guess the system caches fewer files when reading?
  3. OK, everything is back up again now. Thanks all for the help!
  4. Ahh, thanks for that. Rebooting now... Is powerdown -r the safe way to reboot, or would a plain reboot work fine too?
  5. Hmm, I have no /flash directory. Not sure how the file structure of unRAID is set up... Found it: /boot/config/plugins.
  6. Well, I cannot access the web page, so how do I remove it via SSH? Or check what changes the plugin made?
  7. No, that's not what those settings do... Anyway, I tried a powerdown -r; after googling a bit, it seemed like that was OK to do. After the reboot the web interface seemed fine until I chose to start the array; now it's back to not working again.
  8. Tried setting things back; after googling I found how to do it: sysctl -w vm.dirty_ratio=20 sysctl -w vm.dirty_background_ratio=10 But this did not help... any more suggestions? I have no idea what happened. Can I see somewhere what changes the plugin made and then change them back? The system seems really slow, and I cannot really access the file shares anymore. I can log in via SSH though; no high CPU load, and memory seems fine.
  9. Hmm... I just tried this plugin, changed vm.dirty_background_ratio to 5 and vm.dirty_ratio to 80, clicked Apply, and got presented with a black page. Now I cannot access the web interface anymore... The picture looks like this... Any tips on how to restore the default settings again?
  10. Found some forum posts about the same issue I had. It turned out I had a log file growing out of control, over 18 GB... I have now cleaned that file, and everything is sorted.
  11. Thank you! I had the same issue here. I did not know how this Docker stuff works, but I searched for the same error message you had on Google and found this article. My docker image size was already at 20; I set it to 25 and now it starts up! What is stored in that image file, and how do I access that data? Hmm, I can see that my dockers (Plex and Resilio Sync) write to the /mnt/user/appdata/ folder, but that isn't the image file, is it?
  12. @Frank1940 Yes, I agree. I would never use a disk with that many errors. It's in the pile marked "Defective" and is going to the recycling center next time I'm going there. One thing is strange though: the report for this disk tried to upload to the plugin developer as usual, but the upload failed. I tried 3-4 times. All my other reports (10-15 of them) have uploaded just fine. This is the only report where the preclear plugin did not run to the end.
  13. Squid is the leader so far with 75027 hours on a Seagate Barracuda Green 2TB (5900rpm) SATA 3 (6Gb/s) disk! That's 8.5 years of runtime! Wow.
  14. What is the oldest disk you are running/have seen in working condition, going by S.M.A.R.T. attribute number 9 = Power_On_Hours? I have a 300GB Seagate Barracuda 7200rpm (SATA) disk here with power-on hours = 57711. That's over 6.5 years of runtime! This disk ran in a file server in a RAID5 setup (4 disks) for quite some time. It has now been decommissioned by me.
  15. OK, this is the infamous Seagate 3TB disk. I bought 4 of them and now 3 have failed... but they are old now. 1 of the disks still tests fault-free, but I will keep an eye on it. I also changed from QNAP to unRAID because it offers more security in case the QNAP hardware or the RAID5 fails.
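The dirty-ratio tuning in post 2 can be sanity-checked with a bit of arithmetic: the kernel starts background write-back at dirty_background_ratio percent of RAM and throttles writers at dirty_ratio percent. A minimal sketch, assuming a hypothetical 16 GiB machine (the actual RAM size is not stated in the posts):

```shell
# Sketch: estimate the write-back thresholds implied by
# vm.dirty_background_ratio=5 and vm.dirty_ratio=90.
# The 16 GiB RAM figure is an assumption, not from the posts.
RAM_BYTES=$((16 * 1024 * 1024 * 1024))
BG_RATIO=5      # background write-back starts here
HARD_RATIO=90   # writers are throttled here
echo "background threshold: $((RAM_BYTES * BG_RATIO / 100)) bytes"
echo "hard threshold:       $((RAM_BYTES * HARD_RATIO / 100)) bytes"
```

On such a box that works out to roughly 0.8 GiB before write-back starts and about 14.4 GiB before writers stall, which lines up with the reported 10-14 GB of full-speed copying.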
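The runaway 18 GB log file from post 10 is the kind of thing a quick size scan catches early. A minimal sketch: a scratch directory and a dummy 2 MB "log" stand in for the real /var/log, and the 1 MB threshold is arbitrary:

```shell
# Sketch: flag oversized files, as one might do for a runaway log.
# mktemp scratch dir and a dummy file stand in for the real system.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/syslog" bs=1024 count=2048 2>/dev/null  # 2 MiB dummy log
find "$dir" -type f -size +1M -printf '%s %p\n'                  # prints size and path
rm -r "$dir"
```

Pointing the same `find` at /var/log (with a threshold like +100M) would have flagged the 18 GB file long before it filled the share.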
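The runtime figures in posts 13 and 14 follow directly from SMART attribute 9, which counts hours. A small sketch of the conversion, using 8760 hours per non-leap year:

```shell
# Sketch: convert SMART Power_On_Hours (attribute 9) into years and days.
hours=57711   # value reported for the 300GB Barracuda in post 14
years=$((hours / 8760))
days=$(( (hours % 8760) / 24 ))
echo "$hours hours = $years years, $days days"
```

The same arithmetic puts Squid's 75027 hours at a little over 8.5 years.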