NewDisplayName

Everything posted by NewDisplayName

  1. Hey, I am calm. Please chill. I'm just a bit sarcastic, and English isn't my native language, okay? I always set it to a fixed value. What happens if that version vanishes from the list?
  2. Great plugin, but... I don't know, I reported this bug (or is it a feature?) ages ago and it still does it. Why does it keep updating my graphics drivers on its own? Why doesn't it keep the setting I've chosen? Every time I check that setting, it's back to "use the latest"... so why can we change it at all when it reverts to "latest version" on its own anyway?
  3. Yesterday I had my first "ipvlan freeze". I think it was the iGPU; I lost the exact error, but it was something like "i915"... I had just played back a movie in Plex, but the fun part is, it wasn't even a movie where the iGPU should have done anything (direct playback...).
  4. To give some feedback: after switching to ipvlan, no crashes so far, and the parity sync also completed without a problem.
  5. Array drive configuration = where the disks are 1, 2, 3, 4, 5, 6, 7 and so on. THIS!? I don't know what you mean. I did numerous parity checks with my method, not a single problem, so it is working perfectly fine. Sorry, what do you mean?
  6. It's pretty easy: if it's intended, ADD IT TO THE BIG TEXT BOX SO AT LEAST USERS KNOW ABOUT IT. If not, change it. Do you even know which setting I'm referring to? That's exactly what I've been doing for years without a problem.
  7. Could you please add this as a suggestion to make Unraid better?
  8. Yes, but why only with macvlan? Maybe it just triggers it faster. I'm currently rebuilding with ipvlan.

       root@Unraid-Server:~# docker network ls
       WARNING: Error loading config file: /root/.docker/config.json: read /root/.docker/config.json: is a directory
       NETWORK ID     NAME                   DRIVER   SCOPE
       9c02e6451544   br0                    ipvlan   local
       ffa2b65b637b   bridge                 bridge   local
       ba7d287b5243   filesharing            bridge   local
       c426a08021c6   host                   host     local
       d2b93c4c6a03   none                   null     local
       a6339c7cf36b   watchtower_default     bridge   local
       514fa62878e3   watchtower_webserver   bridge   local
       04fb1877a554   webserver_webserver    bridge   local

     Does that look correct?
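A quick way to double-check this over SSH (a sketch; it assumes the custom network is named `br0`, as in the output above):

```shell
#!/bin/sh
# Filter the "docker network ls" output down to the custom network
# and its driver, to confirm br0 really switched to ipvlan.
docker network ls | awk '$2 == "br0" { print $2, $3 }'

# The config.json warning above just means /root/.docker/config.json
# is a directory instead of a regular file; if that directory is
# empty, removing it silences the warning:
# rmdir /root/.docker/config.json
```

The same check works after switching back to macvlan; the DRIVER column should then read `macvlan` instead.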
  9. Yeah, we'll see. If you want, I'll report back in a week or so? I don't think we'll see another crash for quite some time without macvlan. But do you know what I noticed? It didn't crash while I wasn't running a parity SYNC; it crashed only hours after I pressed sync. Without a sync it ran for about 2 days. Does macvlan have anything to do with that? Unraid was rock solid until 6.10 or so.
  10. Yes, it could be. Also, Plex wasn't being used when it crashed. Maybe it accessed some files for a refresh or something, but nothing was playing. But how do you explain not a single crash for months and years? I mean not a single crash or bluescreen or anything for a long time. Same Dockers, plugins, and VMs; the only difference is macvlan. Different CPU, RAM, motherboard, and power supply, but the same problem with macvlan. Anyway, it crashed again without any log; it just displayed an empty line. So I'm going back to ipvlan. Very sad that you couldn't figure this out (and I don't feel like you guys can fix it).
  11. Just tell me what you think. It's just files being accessed by Dockers, I know.
  12. Why do you think I've cut it away? Nothing unusual gets opened... You just need to know which drive it is.
  13. They are, but you can still clearly see which drive it is.
  14. I'll give up. It froze again. Interestingly, it DIDN'T crash while doing NO parity sync (I just let it sit, because I figured if it crashes again, there's no need to try a parity sync). Then after 2 days I thought maybe it's working this time, so yesterday evening I enabled the sync, and... crash. Oo (No logs at this point, but I'll add them later.) If there is again NOTHING in the logs, I'll go back to ipvlan (I just set up some port forwardings I can use later).
  15. Are you on drugs? Please take another look at the photos. 🙂
  16. I also noticed the drives don't spin down. It was set to 1h (since 9am; around 1.5h ago I changed it to the 15-minute setting). I enabled the File Activity plugin, and it says there was activity on disks 1, 2, 5, 9, 10. Wasn't that fixed years ago? As I started my experiment I spun down all drives, and Unraid is like "yeah, now's a good time for some SMART data reading..."?

       Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdm
       Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdj
       Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdk
       Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdh
       Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdg
       Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdd
       Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sde
       Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdf
       Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdc
       Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdl
       Jan 20 09:08:34 Unraid-Server emhttpd: spinning down /dev/sdi
       Jan 20 09:08:50 Unraid-Server kernel: mdcmd (154): set md_num_stripes 1280
       Jan 20 09:08:50 Unraid-Server kernel: mdcmd (155): set md_queue_limit 80
       Jan 20 09:08:50 Unraid-Server kernel: mdcmd (156): set md_sync_limit 5
       Jan 20 09:08:50 Unraid-Server kernel: mdcmd (157): set md_write_method
       Jan 20 09:09:18 Unraid-Server emhttpd: read SMART /dev/sdk
       Jan 20 09:09:18 Unraid-Server emhttpd: read SMART /dev/sdh
       Jan 20 09:09:18 Unraid-Server emhttpd: read SMART /dev/sdd
       Jan 20 09:09:18 Unraid-Server emhttpd: read SMART /dev/sdi
       Jan 20 09:09:30 Unraid-Server emhttpd: read SMART /dev/sdg
       Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sdm
       Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sdj
       Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sde
       Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sdf
       Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sdc
       Jan 20 09:10:00 Unraid-Server emhttpd: read SMART /dev/sdl

     edit: another hour later, still not a single drive has spun down; per File Activity it's still disks 1, 2, 9, 10.
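To see whether the disks are really spun down, independent of what the UI claims, something like this over SSH should work (a sketch; the device names sdc–sdm are taken from the syslog above and will differ per system):

```shell
#!/bin/sh
# Query the power state of each array disk with hdparm.
# "standby" means the drive is spun down; "active/idle" means it is not.
for dev in /dev/sd[c-m]; do
    printf '%s: ' "$dev"
    hdparm -C "$dev" | awk '/drive state/ { print $4 }'
done
```

Note that `hdparm -C` itself does not wake a spun-down drive, so it is safe to run while testing spin-down behavior.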
  17. I think my crashes started with https://forums.unraid.net/bug-reports/stable-releases/since-612-hard-freezes-cought-a-call-trace-r2518/page/2/?tab=comments#comment-25171 Btw, I'm currently on my second "server" (i.e. a completely upgraded mainboard, CPU, RAM, and power supply), and the problem persists. Is there a way to enable some sort of debug logging? Maybe we can find out what the problem is. If all goes to hell, I will switch to macvlan, redirect some ports, then switch back to ipvlan, because I can't add new port forwardings while on ipvlan, but ones added before still work fine... oO Since there is such a big thread about this whole issue in the German section, maybe it is even a problem between macvlan and the FRITZ!Box?
  18. To give you an example: the new update process is exactly what I mean, this sort of step-by-step guide. Now just carry that over to everything...
  19. But can't you simply test which part of the Linux kernel or which driver causes the problem, and just keep the old one until it's fixed...?
  20. Interesting... I didn't try macvlan on 6.12.4. Did you? Why is it so hard to find out what changed once these macvlan complaints started?
  21. I find it hard to understand that you suggest there is an error in a VM, Docker, plugin, or hardware when it works perfectly fine with ipvlan, or even with macvlan before that update (6.10 or so). Is there something I can downgrade to test? Some sort of network driver or whatever? Or can I enable "more logs", like a debug level? It's very likely to happen at least once every 2 days, so that probably wouldn't produce that much log data.
  22. Btw, I've got a new error I normally don't see:

       Jan 19 01:00:01 Unraid-Server kernel: mdcmd (60): set md_write_method 1
       Jan 19 01:00:01 Unraid-Server kernel: mdcmd (61): set md_write_method auto
       Jan 19 01:11:48 Unraid-Server kernel: traps: lsof[31000] general protection fault ip:1546c44a4c6e sp:2221140b89765036 error:0 in libc-2.37.so[1546c448c000+169000]
       Jan 19 02:00:01 Unraid-Server kernel: mdcmd (62): set md_write_method 1
  23. Okay, can you tell me which commands I can run over SSH to check whether it's using macvlan correctly? Would that help?
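For what it's worth, a couple of commands that show the network driver in use (a sketch; `br0` is the custom network name from earlier in the thread and may differ on another setup):

```shell
#!/bin/sh
# Kernel view: interfaces created by the macvlan/ipvlan drivers show
# the driver name in the detailed link output.
ip -d link show | grep -E 'macvlan|ipvlan'

# Docker view: ask Docker directly which driver backs the network.
docker network inspect br0 --format '{{.Driver}}'
```

If the `ip -d link show` output mentions `macvlan mode bridge` on the container interfaces and the inspect command prints `macvlan`, the network is actually using the macvlan driver.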
  24. I don't understand why you're telling me this. "Reducing the number of drives in the array." If I have a failed drive and can't replace it immediately, I can (and want to) simply remove that drive. That's exactly what I'm doing: I'm reducing the number of drives in the array...? And I can do that without rebuilding parity. "Not sure why you want this, as zeroing new drives is a lot less stress on the system. If you do it via the UD plugin, then when you get around to adding it to the array it adds immediately, so downtime is minimal." That's exactly what Unraid does if you start the array with new drives, lol? And yeah, I even used the UD plugin to format it beforehand. Let me translate it for you: instead of waiting 19h to zero all the new drives, I just take 24h to rebuild parity. Do you understand? I have the feeling you don't understand me. Why are we even talking about this? BTW: that documentation sucks. PS: after reading it, there is no indication that it will remove all the changes I have made under "Settings/Disk Settings". What do you not understand? What do I not understand? Is that not correct?