Richard Aarnink

Everything posted by Richard Aarnink

  1. @halorrr problem solved? My problem may have been the ipvlan setting for Docker; I always used macvlan. I installed 6.11.5 (which ran fine in the past) and at first experienced the same crashes. I learned that new installs from 6.11.5 onward have ipvlan as the default. I changed it to macvlan and it has been stable so far (a CLI sketch of the macvlan setup follows below).
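      For anyone who wants to compare, a minimal sketch of what an equivalent macvlan network looks like on the Docker CLI; the subnet, gateway, and parent interface below are assumptions, and on Unraid this is normally changed under Settings > Docker rather than by hand:

        # a minimal sketch, assuming LAN subnet 192.168.1.0/24, gateway 192.168.1.1,
        # and eth0 as the parent NIC - adjust to your own network before using
        docker network create -d macvlan \
          --subnet=192.168.1.0/24 \
          --gateway=192.168.1.1 \
          -o parent=eth0 \
          br0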
  2. Hi @halorrr, just wanted to check: are you aware of the iGPU issue on 11th gen Intel CPUs on 6.12? https://docs.unraid.net/unraid-os/release-notes/6.12.0/#crashes-related-to-i915-driver I am not sure whether it also applies to 12th gen; I cannot find any additional info.
  3. Hi, I am still struggling. The server ran 17 days without issues on 6.12.3. As I was on vacation, the WebGUI of Unraid was not used; when I got back and started using the WebGUI, the server died in less than 2 days. I upgraded to 6.12.4-rc18. It ran for 2.5 days and just died an hour ago. There are no nginx crashes in the syslog, only the “Stop running nchan processes” message, which appears a couple of times. I am reverting back to 6.11.
  4. @halorrr with the new USB flash drive I have been running for 4 days now without problems. I will keep the system as-is for now, as I am off on vacation in a couple of days. Not all plugins are running, but the Dockers and the VM for Home Assistant are. I can provide more details if you need more info. In the logs I see nginx crashes, but this seems to be a known issue; the system is stable for now. So a fresh new USB stick and a clean installation (no copying of files from the old USB stick) seem to have solved the issue for me. Regards, Richard
  5. @halorrr thanks for letting me know. I created a new USB flash drive using a Samsung Bar Pro 32 GB and did not copy any files from the old USB device. Today I “downgraded” the cache drive from zfs back to btrfs; this should enable me to downgrade the OS if needed. The system is up for 1 day and 23 hrs now, still on 6.12.3. I will keep you informed. In your case, if the downgrade to 6.11.5, which ran fine in the past, still does not solve the problem, consider what is left to look at: 1. Any changes or upgrades to the BIOS? 2. The USB flash drive: is it still the same stick? Is it in a USB 2.0 slot (USB 3.0 causes problems in some cases)? Maybe do an extensive test using the tools from the Ultimate Boot CD: https://www.pcwdld.com/ultimate-boot-cd-ubcd-review/ (or see the shell-only sketch below).
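      If you would rather test the stick from a Linux shell than boot UBCD, a minimal sketch; /dev/sdX and the /mnt/usb mountpoint are assumptions, so verify the device with lsblk before running anything:

        # destructive read-write surface test - ALL DATA ON THE STICK IS LOST
        badblocks -wsv /dev/sdX
        # alternatively, f3 checks for fake/undersized flash via the mounted filesystem
        f3write /mnt/usb && f3read /mnt/usb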
  6. Hi, please provide additional information; you can start here. This post provides help in troubleshooting, and if you cannot solve your issue with these tips, it also explains how to provide sufficient info to get help. Hopefully you are able to resolve your issues quickly.
  7. I am starting to see a pattern: @halorrr, @Hibiki Houjou, and I are experiencing similar issues. The system freezes randomly, there is no change in the hardware, and we are all on 6.12.2 or 6.12.3.
  8. It seems you are having similar issues as @halorrr and myself. We are all on 6.12.3 and we experience system crashes without anything helpful in the log. I am investigating whether the USB flash drive is to blame (as the hardware has not changed). Hopefully someone can help pinpoint the issue and find the root cause. I might return to 6.11 this weekend if we cannot make any progress.
  9. I am still not out of the woods myself (if that is a saying). I replaced my flash drive with a Transcend JetFlash 600 8 GB; the system crashed within a couple of hours, so I am unsure if the flash drive is the issue. Now I am trying the Samsung Bar Pro 32 GB. I use a USB 2.0 port, and no other devices are connected via USB 2.0. I have created a completely new install and did not copy anything over from the original USB flash drive. Currently there are no plugins active. The system is rebuilding parity now; I want to keep it running a couple of days before I add plugins. I am curious whether this has solved it, but I am unsure. It certainly is not a good feeling that there is nothing in the logs.
  10. Yesterday evening I noticed that Unraid was reporting that the USB flash drive was disconnected, and I found errors using dosfsck. I had replaced the USB flash drive earlier this month with a new Philips 32 GB one when I upgraded to 6.12, as the previous one was rather old. Now I know to stay away from the Philips-branded 32 GB USB flash drives. @halorrr you might want to investigate the USB flash drive as well, if the memory is not the culprit (see the check sketched below). The server is still running, no crashes; all plugins are enabled and uptime is 23 hrs.
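      For anyone who wants to run the same check, a minimal sketch; it assumes the Unraid flash is /dev/sda1, which you should verify first:

        # find the partition labeled UNRAID before touching anything
        lsblk -o NAME,LABEL,SIZE
        # read-only check of the FAT filesystem (-n reports problems, changes nothing)
        dosfsck -n /dev/sda1
        # to actually repair (-a), unmount /boot first and remount it afterwards
        umount /boot
        dosfsck -a /dev/sda1
        mount /dev/sda1 /boot   # or simply reboot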
  11. @halorrr After 13 hrs the system crashed again, so it seems I am in the same boat and we both are still having similar issues. I just started in safe mode. Now I am planning to wait at least 24 hrs before I call it a victory and assume that the system is stable in safe mode. Currently a minimal set of Docker containers (MQTT and Plex) and one VM (Home Assistant) are running. Do you have any updates?
  12. Update 3: Currently running without these plugins: appdata.backup.plg, dynamix.file.manager.plg, dynamix.unraid.net.plg, unassigned.devices.preclear.plg. It is too early to tell if one of these is causing the crashes; the system has been running for 2.5 hrs now. @halorrr do you have any updates?
  13. @halorrr thank you, when we combine our efforts we might find the root cause faster. I have just rebooted the server with all plugins enabled except user.scripts and zfs.master, as these are the plugins I installed last. I also updated NerdTools to 2023.07.18, as this update is now offered. I will keep you informed.
      Update 1: running with all plugins except user.scripts and zfs.master for 5:30 hrs now. I will keep you informed.
      Update 2: after almost 9 hrs the system crashed, so it seems user.scripts and zfs.master are not to blame. Now back to safe mode first; tomorrow I will continue the search.
  14. I am having the same problems over the last couple of days: no changes in hardware, and it ran stable for months (so I do not suspect anything being wrong with the hardware). The system crashes to a point where it needs a hard reboot. My system is on 6.12.3, and yesterday it crashed 3 times. The Num Lock key on the attached keyboard does not work and the monitor produces a "no signal" message; it is clear that it is not only the WebGUI of Unraid that has crashed. The only 2 lines of concern in the syslog (to me) are:
      docker0: port 1(vethc08d6dd) entered disabled state --> it occurs a lot; I do not know if I need to be concerned and, if so, how to find the responsible Docker container
      nginx: 2023/07/19 14:01:54 [alert] 6707#6707: worker process xxxx exited on signal 6 --> which, as far as I have learned from other posts, only concerns the WebGUI of Unraid
      Yesterday evening I booted in safe mode, with VMs and Dockers enabled (and a minimal set running). This morning the system is still up, so at this point it is most likely a plugin issue. Below is the list of plugins installed on my system. Of these, ZFS Master and User Scripts were installed last; the other 2 were updated recently. As there is nothing in the log files, I think I need to find the root cause by disabling the yellow-marked plugins one by one. Is there a smarter (less time-consuming) way? (See the sketch below for one idea.)
      -rw------- 1 root root 3415 Feb 12 04:40 unbalance.plg
      -rw------- 1 root root 4861 Feb 16 20:26 intel-gpu-top.plg
      -rw------- 1 root root 5629 Mar 3 04:40 dynamix.system.temp.plg
      -rw------- 1 root root 7287 Mar 3 04:40 dynamix.system.stats.plg
      -rw------- 1 root root 4572 Mar 3 04:40 dynamix.system.info.plg
      -rw------- 1 root root 4765 Mar 3 04:40 dynamix.password.validator.plg
      -rw------- 1 root root 6376 Mar 3 04:40 dynamix.cache.dirs.plg
      -rw------- 1 root root 5349 Apr 16 07:21 unassigned.devices-plus.plg
      -rw------- 1 root root 24691 May 6 04:40 dynamix.unraid.net.plg
      -rw------- 1 root root 9094 May 20 13:27 unassigned.devices.preclear.plg
      -rw------- 1 root root 7486 Jun 1 20:30 dynamix.file.manager.plg
      -rw------- 1 root root 6302 Jun 8 12:13 NerdTools.plg
      -rw------- 1 root root 4520 Jun 12 19:07 open.files.plg
      -rw------- 1 root root 5129 Jul 4 06:40 ca.mover.tuning.plg
      -rw------- 1 root root 103495 Jul 4 12:41 community.applications.plg
      -rw------- 1 root root 9814 Jul 5 21:51 tips.and.tweaks.plg
      -rw------- 1 root root 7165 Jul 8 13:29 appdata.backup.plg
      -rw------- 1 root root 3974 Jul 16 15:05 zfs.master.plg
      -rw------- 1 root root 7360 Jul 16 15:12 user.scripts.plg
      -rw------- 1 root root 20839 Jul 16 21:55 fix.common.problems.plg
      -rw------- 1 root root 90705 Jul 16 21:55 unassigned.devices.plg
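      One way to speed this up (a sketch of a common approach, not an official procedure; back up the flash first): Unraid installs every .plg file found in /boot/config/plugins at boot, so you can bisect by moving half of them to a holding folder and rebooting, which halves the list of suspects each round instead of testing one plugin at a time:

        # move half of the plugins aside so they are not installed on the next boot
        mkdir -p /boot/config/plugins-disabled
        cd /boot/config/plugins
        mv zfs.master.plg user.scripts.plg appdata.backup.plg dynamix.file.manager.plg \
           ../plugins-disabled/
        # reboot; if the crash is gone, the culprit is among the moved files -
        # move half of them back and repeat until one plugin remains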
  15. I have similar questions, and I think other users may also have questions about whether switching to zfs is the way forward; therefore I can imagine having another topic/discussion group for these kinds of questions. For now, I think you need to do a lot of reading and watch some videos on the topic. Some tips:
      - https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/
      - a starting point: https://docs.danube.cloud/user-guide/storage/zfs-best-practices.html
      - you could use https://www.raidz-calculator.com/default.aspx (the rule of thumb behind it is sketched below)
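      The arithmetic such calculators apply is roughly this: a RAIDZ1 vdev of N equal drives of size S yields about (N - 1) x S of usable space, since one drive's worth goes to parity (minus some metadata overhead). For example, 4 x 4 TB in RAIDZ1 gives roughly 12 TB usable. A sketch of creating such a pool, where the pool name "tank" and the ata-DRIVE* ids are placeholders:

        # create a RAIDZ1 pool from four equal-size drives
        # "tank" and the ata-DRIVE* ids are placeholders - use your own by-id paths
        zpool create tank raidz1 \
          /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
          /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4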
  16. Hi, I know it is a bit off topic, but I do not know a better place to ask my question. I am in the process of reorganizing my server, now running 6.11.5. As I am following the development of the 6.12 RC series, I know ZFS will be available. I also watched Spaceinvader One's YouTube video which explains all the new features. After watching this video I was left with some questions on the capabilities of the Unraid array and zfs. I tried to learn the basics of zfs and learned that a RAIDZ1 vdev pool requires all drives to be the same size; however, this is not the case in my server. Creating multiple RAIDZ1 vdevs in which drives of the same size are clustered is certainly possible, but then I have less storage capacity remaining. ZFS seems compelling for its capability to create snapshots on datasets. The question: is it possible to have the best of both worlds with Unraid? For example: I create single-disk vdevs and combine these in the Unraid array with a parity drive, then in ZFS create a pool combining all (or some) of the vdevs, create the datasets, and use the ZFS snapshot capabilities. The downside would be that if the rebuild (after a drive fails) also fails, all data is lost, whilst with an xfs or btrfs filesystem one would lose only the data that was on the failed drive, right? Thanks, Richard
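      For the snapshot part of the question, the day-to-day ZFS commands are short; a minimal sketch, where the pool/dataset name tank/appdata is an assumption:

        # create a snapshot of a dataset, list snapshots, and roll back to one
        zfs snapshot tank/appdata@before-upgrade
        zfs list -t snapshot
        zfs rollback tank/appdata@before-upgrade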
  17. I understand the reasons, though I do think you should also rename the section; it is still under “Bug Reports”, which is clearly no longer a logical fit. I also like the idea of creating quick feedback as mentioned above; it would be nice to see a small graph listing the counters of the categories, per version. I would also suggest adding the new release at the top, as it is the newest and most relevant. Lastly, it would be nice to add a shortcut (link) to the page where bugs can be reported but also viewed. Regards, Richard
  18. Jun 6 19:05:13 RichTech emhttpd: Device inventory:
      Jun 6 19:05:13 RichTech emhttpd: TOSHIBA_MG08ACA14TE_61S0A02TFVJG (sdj) 512 27344764928
      Jun 6 19:05:13 RichTech emhttpd: WDC_WD30EFRX-68AX9N0_WD-WMC1T0376287 (sdk) 512 5860533168
      Jun 6 19:05:13 RichTech emhttpd: WDC_WD30EFRX-68AX9N0_WD-WMC1T0597412 (sdh) 512 5860533168
      Jun 6 19:05:13 RichTech emhttpd: WDC_WD50EZRZ-00GZ5B1_WD-WX11D278Z7E7 (sdg) 512 9767541168
      Jun 6 19:05:13 RichTech emhttpd: WDC_WD40EFZX-68AWUN0_WD-WX32D51EL0S9 (sdd) 512 7814037168
      Jun 6 19:05:13 RichTech emhttpd: WDC_WD30EFRX-68EUZN0_WD-WCC4N4KZTCZ7 (sde) 512 5860533168
      Jun 6 19:05:13 RichTech emhttpd: TOSHIBA_MG08ACA14TE_61S0A02FFVJG (sdb) 512 27344764928
      Jun 6 19:05:13 RichTech emhttpd: SanDisk_SDSSDA120G_171548451903 (sdf) 512 234441648
      Oh my god, you are right, it is a very small difference (61S0A02TFVJG vs 61S0A02FFVJG) which I had overlooked completely.
  19. Hi, I purchased 2 Toshiba MG08 enterprise capacity drives (14 TB, model MG08ACA14TE, SATA). One is used as parity, the other as a data disk (replacing an older disk) in the array. The prep went fine (pre-cleared), no issues. I tried to follow the @spaceinvaders tutorial for shrinking the array to the letter, which was ok until the point where I had to create a new config. To remember which disk is in which slot, and which disk is the parity drive, I took a picture, as was also advised in the tutorial. When creating the new config I was confused to see the 2 Toshiba drives showing the same serial number in the selector, so I cannot tell which disk is the parity and which is the data disk. No issue for now, as both drives were new, but certainly a potential problem in the future. I checked the disk identity and noticed that the "LU WWN device id" is different for the two drives; however, this is not shown in the selector when creating a new config. How should I select the proper parity disk in the future and tell both drives apart? Would it be an idea to use the LU WWN device id field instead of the serial number in the drive selector, or am I doing something wrong? I run Unraid 6.10.2. Regards, Richard
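      In the meantime, telling near-identical drives apart from the shell is straightforward; a small sketch (read-only commands):

        # show model, serial, and WWN side by side for every disk
        lsblk -o NAME,MODEL,SERIAL,WWN
        # or list the persistent ids, which include both serial- and wwn-based names
        ls -l /dev/disk/by-id/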
  20. Upgraded from 6.9.2 -> 6.10 without issues. Only the Network Stats plugin has issues; this is a known problem.
  21. Looks similar to https://access.redhat.com/solutions/5068871
  22. I am worried; the slow progress makes me nervous. It is common knowledge that stagnation means decline.
  23. Upgraded from rc1 when the release came out, without any problems; now stable for 1 day and 11 hrs. The system is an ASRock H410M-ITX/ac with an Intel 10400 CPU. I am running several Dockers, but no VMs. I have 2 SSDs, of which one is dedicated to Plex and outside the array. All drive temps are shown correctly, and drives are also spinning down correctly. I also installed the plugin Dynamix System Temperature v2020-06-20, which detects the coretemp and nct6775 drivers. The only remark is that the motherboard temp is not showing (it did not work in previous versions either); the CPU temp is shown. (A quick sensor check is sketched below.)
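      For anyone wanting to verify what the plugin sees, a generic lm-sensors check from the shell; it assumes the same coretemp/nct6775 chips the plugin detected:

        # load the sensor drivers and dump all readings, including motherboard temps
        modprobe coretemp
        modprobe nct6775
        sensors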
  24. Updated from beta 35. Everything seems to work; a parity check was triggered after updating to rc1. In the system log, however, I have several wrong csrf_token messages:
      Dec 10 20:37:24 RichTech nginx: 2020/12/10 20:37:24 [error] 11802#11802: *5432 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.11.1, server: , request: "POST /plugins/community.applications/scripts/notices.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.10.210", referrer: "http://192.168.10.210/Dashboard"
      Dec 10 20:37:24 RichTech nginx: 2020/12/10 20:37:24 [error] 11802#11802: *5425 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.11.1, server: , request: "POST /webGui/include/DashboardApps.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.10.210", referrer: "http://192.168.10.210/Dashboard"
      Dec 10 20:37:39 RichTech rc.docker: webserver-intern: started succesfully!
      Dec 10 20:37:39 RichTech rc.docker: webserver-intern: wait 4 seconds
      Dec 10 20:50:17 RichTech kernel: i2c /dev entries driver
      Dec 10 21:22:34 RichTech root: error: /plugins/preclear.disk/Preclear.php: wrong csrf_token
      Dec 10 21:22:34 RichTech root: error: /plugins/preclear.disk/Preclear.php: wrong csrf_token
      Dec 10 21:22:55 RichTech root: error: /webGui/include/DashUpdate.php: wrong csrf_token
      Dec 10 21:23:56 RichTech root: error: /webGui/include/DashUpdate.php: wrong csrf_token
      Dec 10 21:24:56 RichTech root: error: /webGui/include/DashUpdate.php: wrong csrf_token
      Dec 10 21:25:57 RichTech root: error: /webGui/include/DashUpdate.php: wrong csrf_token
  25. Yes, that is a solution, thanks! Still hoping 6.9 will be released soon.