greenflash24

Members
  • Posts

    17
  • Joined

  • Last visited

About greenflash24

  • Birthday November 25

Converted

  • Gender
    Male
  • Location
    Germany

greenflash24's Achievements

Newbie (1/14)

3

Reputation

  1. I am also experiencing this issue in 6.9.2 stable. Is there any news on this?
  2. I have created a fork of the linuxserver container, and I will try to maintain this container in the future (at least until the builds break again). That way I and others can keep getting security fixes for a bit longer. You can find a mirror of my fork here: https://github.com/fabianbees/docker-openvpn-as Docker images will be pushed here: https://hub.docker.com/r/fabianbees/openvpn-as I have not yet found time to update the Readme.md and change the Docker image references to point to my image instead of the linuxserver one, but this will be done in the future. As I don't use Jenkins in my homelab, I just added some build scripts so I can build this image with GitLab CI on my private GitLab instance. -> UPDATE: I have added a disclaimer at the top of the readme.md and removed all references to the linuxserver team I have found so far, so that it is clear that these builds are not related to them. For now my image has the latest version of openvpn-as, which is version 2.9.2 at this time (linuxserver has 2.9.0). If you want to switch to my Docker image, you can do so by changing the image repository in Unraid from linuxserver/openvpn-as to fabianbees/openvpn-as. But please make a backup of your appdata first, before changing to the new image, just in case something goes wrong (a rough command-line sketch of the switch-over is included further below).
  3. I have observed that the layout in the "Advanced View" is not what it is supposed to be when a container with a very long repository name is installed. When this is the case, the Container ID is no longer displayed below the container name and picture; instead it shows up on the right. (See screenshots below.) This issue was tested with Google Chrome and Microsoft Edge in version 90.x.x. To recreate this issue, simply run my hello-world container, which I have given a very long name, and then have a look at the Docker page. docker run --rm fabianbees/a-simple-hello-world-example-with-an-extra-ultra-super-duper-long-name-for-showing-an-unraid-ui-layout-error:latest This is how it should look: (Container ID below the container name and picture) When a container with a very long image name is present, it looks as follows: (Container ID on the right-hand side of the container name and picture) A simple fix could be to remove the display: inline; CSS from the class="advanced" element. This can be replicated with the dev tools in the browser; then everything is displayed normally, but the long image name isn't visible any more (still a better outcome than the current behavior). The best solution would be to truncate a long image name after a specified width and show three dots at the end (e.g. long-name...); a small CSS sketch of that idea is included further below.
  4. Does anybody know how to get the new so-called "High Performance Backend" to work, which is one of Nextcloud 21's features? After some research I found that this plugin needs to be installed and set up: https://github.com/nextcloud/notify_push But I was not able to get it to work inside of this LinuxServer Docker container, because it said that this installation is not using systemd, which makes sense as this is a Docker container. What is the process for setting up this plugin in a Docker container? (See the rough sketch further below for one possible approach.)
  5. Of course, here you go: { "period": { "duration": 0.030292, "unit": "ms" }, "frequency": { "requested": 0.000000, "actual": 0.000000, "unit": "MHz" }, "interrupts": { "count": 0.000000, "unit": "irq/s" }, "rc6": { "value": 95.507065, "unit": "%" }, "power": { "value": 0.000000, "unit": "W" }, "imc-bandwidth": { "reads": 34.253191, "writes": 12.089361, "unit": "MiB/s" }, "engines": { "Render/3D/0": { "busy": 0.000000, "sema": 0.000000, "wait": 0.000000, "unit": "%" }, "Blitter/0": { "busy": 0.000000, "sema": 0.000000, "wait": 0.000000, "unit": "%" }, "Video/0": { "busy": 0.000000, "sema": 0.000000, "wait": 0.000000, "unit": "%" }, "VideoEnhance/0": { "busy": 0.000000, "sema": 0.000000, "wait": 0.000000, "unit": "%" } } }, { "period": { "duration": 250.577540, "unit": "ms" }, "frequency": { "requested": 0.000000, "actual": 0.000000, "unit": "MHz" }, "interrupts": { "count": 0.000000, "unit": "irq/s" }, "rc6": { "value": 100.000000, "unit": "%" }, "power": { "value": 0.000000, "unit": "W" }, "imc-bandwidth": { "reads": 183.204454, "writes": 33.909700, "unit": "MiB/s" }, "engines": { "Render/3D/0": { "busy": 0.000000, "sema": 0.000000, "wait": 0.000000, "unit": "%" }, "Blitter/0": { "busy": 0.000000, "sema": 0.000000, "wait": 0.000000, "unit": "%" }, "Video/0": { "busy": 0.000000, "sema": 0.000000, "wait": 0.000000, "unit": "%" }, "VideoEnhance/0": { "busy": 0.000000, "sema": 0.000000, "wait": 0.000000, "unit": "%" } } }
  6. @b3rs3rk I have just tried your new Intel version on my i9 9900K with UHD 630 graphics and everything is working just fine. Thanks a lot for this update.
  7. I am also using multiple Docker containers which have a custom static IP assigned to them, and I am facing crashes of my Unraid server. None of the three solutions proposed in the linked thread above is viable for me. I cannot put my VMs and containers on a separate VLAN, because my router and switch do not support this. I also cannot put my VMs and containers on a separate NIC, because I only have one network cable running to my server and I don't want to put a switch in front of it. And I also need separate IPs for some containers so I can prioritize traffic between them in my router. The interesting part for me is that I have been running my containers with static IPs for almost three years without any problems on all stable versions of Unraid, including the latest stable 6.8.3. These system lockups first happened to me after upgrading to 6.9.0-beta35 (the first beta I tested), so I would guess that this issue was introduced with the beta versions, and it should be resolved sooner rather than later.
  8. This is probably related to the issues other people are facing in this thread as well: https://forums.unraid.net/bug-reports/prereleases/690-beta-30-server-hard-lock-up-r1083/. For me personally this started right away with the first beta I tested (which was beta35) and persists all the way through -rc2.
  9. I can also confirm this issue while downloading the latest 6.9.0-rc2 from Germany, using Deutsche Telekom as my provider. Normally I get speeds around 255 MBit/s downstream. When downloading Unraid updates I am only getting around 30-40 KB/s.
  10. @limetech, are there any updates from the investigation into the server crashes, which are still present in 6.9.0-rc1? (See also: https://forums.unraid.net/bug-reports/prereleases/690-beta-30-server-hard-lock-up-r1083/page/2/)
  11. Hi, I also want to share my experiences with this issue, because I am affected as well. Normally I only run stable releases on my main server, but I am running a Samsung 970 Evo SSD and I was seeing excessive writes on the latest stable release (> 1 TB per day). So I switched to 6.9.0-beta35, reformatted my SSD, and my writes are fine now. When 6.9.0-rc1 was released, I switched to that. I suffered from the lockups on both pre-releases. I had been running an Intel system (i7 6700K and MSI Z170 MB with 64 GB DDR4). I run the system without any OC, so the CPU was running at stock speed and the RAM was running at 2133 MHz. Because my hardware was getting a bit old and I wanted to upgrade anyway, I purchased new hardware so I could exclude any hardware issues. I am using Intel QuickSync in my Plex Docker container, so I purchased another Intel system: an i5 10600K in combination with an ASUS Z490 Creator MB. When I got the new components, I ran memtest86 for 4 days without any issues. Then I switched my Unraid server over to the new platform, without any success: just after 2 days came the first lockup (running Unraid 6.9.0-rc1). Then I wanted to make further hardware tests, so I switched my Unraid server over to the hardware of my main workstation, which is an Intel i9 9900K on an ASUS WS Z390 PRO (also memtest-stable and no OC). With this hardware I also faced a lockup after just 18 hours of uptime. I have attached the diagnostics after my latest crash on -rc1, but they are probably not that helpful, because they were captured after the lockup and a hard reset. Also, I am running my Unraid server headless down in the basement, so I unfortunately cannot look at the monitor output at the time of the lockup. After my own diagnosis I would definitely exclude a hardware error, because I have now tested 3 completely different systems with the same results. I would appreciate any help I can get on this.
  12. I'm having exactly the same issue. I upgraded from 6.9.0-beta35 to 6.9.0-rc1 and now I am unable to boot. I made a backup of the flash drive recently, which I am restoring now.
  13. I have also experienced this exact issue and was able to resolve it by mounting the above-mentioned paths as well. The only difference for me is that I pointed the host paths at the Unraid /tmp dir, so I am using RAM and not the cache drive for the temp files. Therefore my writes on the cache drive are not as high. Host Path: /tmp/nextcloud/ and Host Path: /tmp/swag/ (a quick check that /tmp really is RAM-backed is shown further below).
  14. Yesterday I noticed this warning from the Unassigned Devices plugin: Warning: syntax error, unexpected '=' in Unknown on line 1 in /usr/local/emhttp/plugins/unassigned.devices/include/lib.php on line 1404 See screenshot: Is this normal (or even dangerous)? Has anyone had this issue before? EDIT: This warning is probably related to this entry in the disk log: Jan 19 19:39:43 Home-Server unassigned.devices: Disk '/dev/sdd' does not have a serial number and cannot be mounted. But this is absolutely not true; this drive does in fact have a serial number (which I have redacted in the upper screenshot -> red box). The SMART log is attached. 35000cca26a34e868-diagnostics-20200121 (sdd).txt
  15. @BrianIsSecond Could you please update the title of this post to something like this: [6.6.6] Can't update VMs after they have been renamed
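
Regarding the image switch in post 2 above: a minimal command-line sketch of the backup-and-switch steps, assuming the default Unraid appdata location /mnt/user/appdata/openvpn-as, a backup folder /mnt/user/backups, and a container named openvpn-as (all three are assumptions; adjust to your setup, and note the repository change itself is done on the container's edit page in the Unraid webGui):

    # stop the running container before backing up (container name is an assumption)
    docker stop openvpn-as

    # back up the appdata folder first, just in case something goes wrong
    # (source and destination paths are assumptions; use your own locations)
    tar czf /mnt/user/backups/openvpn-as-appdata-$(date +%F).tar.gz -C /mnt/user/appdata openvpn-as

    # pull the new image; then change the Repository field in the Unraid webGui
    # from linuxserver/openvpn-as to fabianbees/openvpn-as and apply
    docker pull fabianbees/openvpn-as:latest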
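Regarding the truncation idea in post 3 above: a small CSS sketch of what that could look like. The selector and the width are assumptions; I have not checked the actual webGui stylesheet, only tried similar rules in the browser dev tools:

    .advanced {
        display: block;           /* instead of inline, so the Container ID wraps below again */
        max-width: 20em;          /* assumed width; whatever fits the column */
        overflow: hidden;
        white-space: nowrap;
        text-overflow: ellipsis;  /* shows "long-name..." instead of breaking the layout */
    }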
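Regarding the notify_push question in post 4 above: a rough, unverified sketch of one way to run the binary that ships with the app by hand instead of via a systemd unit. The container name, the user abc, and the paths are assumptions based on the typical linuxserver/nextcloud layout; the /push location still has to be proxied to port 7867 by the reverse proxy, and a process started like this will not survive a container restart:

    # start the bundled push daemon, pointing it at Nextcloud's config.php
    # (paths inside the container are assumptions)
    docker exec -d nextcloud /config/www/nextcloud/apps/notify_push/bin/x86_64/notify_push \
        /config/www/nextcloud/config/config.php

    # tell Nextcloud where the push endpoint is reachable from the outside
    # (replace the URL with your own domain)
    docker exec -u abc nextcloud php /config/www/nextcloud/occ notify_push:setup https://your-nextcloud-domain/push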
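Regarding post 13 above: a quick way to confirm that /tmp on Unraid is RAM-backed (and therefore keeps those temp writes off the cache drive) is to check which filesystem it lives on from the server's console:

    # /tmp is part of the root filesystem, which Unraid keeps in RAM
    df -h /tmp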