wambo

Members · Posts: 73
Everything posted by wambo

  1. As far as I know, hardware RAID is largely obsolete these days because you hardly need a dedicated external chip anymore to do the parity calculations / RAID layout. I'm not sure the device really replaces a NAS rather than just "offloading" it - it simply can't hold any HDDs itself. My similar device only ships with an adapter (mSATA, or possibly proprietary, to SATA including power; I don't know whether that's enough for HDDs), and I'm not even sure it has 2 SATA ports. On Amazon only one is listed for the RJ42, plus the punch-through hole. Keeping the NAS of course means more power draw (and possibly no resale value for the NAS).
     If there is only one SATA port, solution 1 is off the table (SATA natively supports only 1 device per port) - otherwise you'd have to go through port multiplier chips, which presumably also exist in such external enclosures, but speed would suffer (possibly not an issue with HDDs anyway).
     That's why I would consider leaving the HDDs in the NAS and running the new device with SSDs only --> 2x NVMe. Those fit inside; everything like VMs, Docker and services can run there, and the large data sets get mounted from the NAS via SMB (see the sketch below). That does make Unraid's cache <-> array system somewhat obsolete, but maybe you can work around it, e.g. somehow attach the Synology's HDDs as devices (and build the array from them), or somehow convince Unraid to move data from the cache to a share and back. Also: the cache should run with parity as well, especially since that's where the recent changes live, which presumably aren't backed up yet!
     I don't like solution 2, moving the RAID into an external enclosure - what happens when a drive dies? How is the resilvering triggered? You also lose the Unraid advantage of simply being able to read the other drive on its own. What would still work is attaching each drive via USB; then you could keep using the Unraid array, and the thing has enough USB ports (ahhh, only 1x 3.0, so maybe a small hub in between?). If there are USB HDD enclosures that expose both drives directly, that would be an option too.
     My ranking: 3D-printed enclosure if there are 2 SATA ports > external HDD bay with SATA multiplier = external USB enclosure (no RAID, individual devices) > NAS for HDDs >> everything else
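     A minimal sketch of what that SMB mount could look like from the Unraid console (IP, share name and credentials are placeholders; the Unassigned Devices plugin offers the same thing through the UI):
        # Sketch only - host, share and credentials are placeholders for your setup.
        # uid=99/gid=100 map the files to Unraid's nobody:users.
        mkdir -p /mnt/remotes/synology_data
        mount -t cifs //192.168.2.50/data /mnt/remotes/synology_data \
          -o username=nasuser,password=changeme,uid=99,gid=100,vers=3.0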
  2. I got redirected here from the changelog and tried to follow the steps in the first post. They are not accurate enough (anymore? maybe something changed). Under Tools I have several items with "System" in the name, but none called just "System". Under "System devices" I can see the ethernet controller, so maybe the instructions should be adjusted to that? My entry says "RTL8111/8168/8211/8411" - from my mainboard's datasheet I can tell it is an RTL8111F. I'd also suggest clarifying whether this plugin is recommended as soon as one of the supported chipsets is listed, or whether the user still has to figure out which specific chipset from that list is actually in use.
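     For reference, a quick way to see the controller from the console (it shows the same family string, plus the numeric PCI IDs and the hardware revision, which differ per board):
        # Lists ethernet controllers with their PCI vendor/device IDs and revision.
        lspci -nn | grep -i ethernet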
  3. I just checked my bookmarks for the port I was using to access the VPN before and changed the port options back to that; I believe the configs in JDownloader should've kept the old values (which I probably adjusted months ago after the initial setup). About the run command: it finishes successfully. I think I should've stated more clearly in my edit that I managed to fix my problem - although I don't know why the port allocations changed. That is what I wanted to leave here (along with how I fixed my problem), because maybe there was some rare issue behind it 🤷‍♂️
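     To double-check which host port a container is actually mapped to at runtime (container name as in the template):
        # Shows the current port mappings of the running container.
        docker port jDownloader2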
  4. I guess hardware issues are one of the downsides of reusing outdated hardware 😕 I'm going to try relocating some important services, then run the memtest (maybe also other hardware tests from a live OS I have lying around) and then do the "slow, one service after another" approach.
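     Before taking the box down for the full memtest, it can also be worth grepping the kernel log for hardware error reports - a clean result proves little, but a hit would save a lot of guessing:
        # Look for machine-check / memory (EDAC) errors the kernel may have logged.
        dmesg | grep -iE 'mce|edac|hardware error'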
  5. I've had the occasional crash / *something that makes the server stop working and responding to anything* a few times now, with some months in between, and once I could identify a plugin (I think it was some GPU statistics one?) when it happened more often. But now I've had to leave, and the server went unresponsive twice in 2 weeks (crashed Mar 28th, restarted on Apr 3rd, crashed on the 5th and I only managed to bring it back after reseating the USB drive...), and it became unresponsive again after less than 3 days. -> The API lost connection, no pings were answered, Docker containers and the VM were inaccessible. I don't know whether it was still doing anything; it was still powered on (fans running), but there were no new syslog entries.
     I - again - can't find anything telling me what caused it. In the syslog I could find the Parity Check Tuning plugin spamming errors - because I had to reset the server to start it again, and it then had the flags for both an automatic and a manual parity check set (although I didn't start one manually? Must be from the "crash"). I'm attaching the syslog from after the restart on Apr 3rd up to the end of Apr 5th; I only changed the hostname (now I know that is useless because it appears in the anonymized diagnostics) and collapsed the warning "Parity Check Tuning: ERROR: marker file found for both automatic and manual check P", which happened multiple times, so that it only appears once.
     Since I haven't really made any changes apart from updating Docker containers and Unraid, this is really bothering me. Steps I have planned: run a full memtest (although the server has quite high RAM/CPU utilization, so I can't see why defective memory wouldn't have shown up earlier...). If I can't figure out why this is happening (and I don't have IPMI to reset it remotely), I'm thinking of running Unraid as a VM on Proxmox (migration might be easy or impossible...) and offloading the most important services to a micro server (Nextcloud, Home Assistant).
     In terms of resetting, the only easy solution I can come up with at the moment is power cycling with a remotely controllable power plug... (could hurt the hardware eventually, but I can't see a cheap alternative if I don't want to go for a PiKVM or similar; see the watchdog sketch below).
     syslog-192.168.2.101-shortened.log unraid-diagnostics-20240417-1828.zip
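     Since the remote power plug is the current fallback, a small watchdog on another always-on device (a Pi, for example) could automate the power cycle; everything below - server IP, plug URL and its API - is a placeholder for whatever plug ends up being used:
        #!/bin/bash
        # Hypothetical sketch: ping the server; after repeated failures, power-cycle it via a smart plug.
        SERVER=192.168.2.101
        PLUG_URL="http://192.168.2.60/relay?state"   # hypothetical smart plug endpoint
        FAILS=0
        while true; do
          if ping -c1 -W5 "$SERVER" >/dev/null; then
            FAILS=0
          else
            FAILS=$((FAILS+1))
          fi
          if [ "$FAILS" -ge 10 ]; then               # roughly 10 minutes without a response
            curl -s "$PLUG_URL=off"; sleep 10; curl -s "$PLUG_URL=on"
            FAILS=0
          fi
          sleep 60
        done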
  6. docker run -d --name='jDownloader2' --net='bridge' \
        -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e HOST_HOSTNAME="nas-mhmx" -e HOST_CONTAINERNAME="jDownloader2" \
        -e 'UMASK'='000' -e 'UID'='99' -e 'GID'='100' -e 'CUSTOM_RES_W'='1600' -e 'CUSTOM_RES_H'='900' \
        -l net.unraid.docker.managed=dockerman \
        -l net.unraid.docker.webui='http://[IP]:[PORT:7808]/vnc.html?autoconnect=true' \
        -l net.unraid.docker.icon='https://raw.githubusercontent.com/ich777/docker-templates/master/ich777/images/jdownloader.png' \
        -p '7808:8080/tcp' \
        -v '/mnt/user/appdata/jdownloader2/':'/jDownloader2':'rw' \
        -v '/mnt/user/downloads/jdown':'/mnt/jDownloader':'rw' \
        -v '/mnt/user/downloads/jdown':'/output':'rw,shared' \
        --restart=unless-stopped 'ich777/jdownloader2'
     Sorry for the late reply, but I had no access to my server for a while -.- I don't think this will help you, because I did change the values after I noticed...
  7. I'm stuck with my JDownloader 2 container. I noticed it stopped running / did not start, so I followed the pinned troubleshooting steps of removing Core.jar, JDownloader2.jar and the tmp and update folders - to no avail. When I try to start it, I get a "Server execution error" - nothing in the logs (the ones accessible via the Unraid UI). Then I uninstalled the app and reinstalled it from the Community Applications store - it downloaded the image fresh, but still the same error: Execution error - Server error. Edit: Something must've messed with multiple containers, since they're all set to the same port o_O I noticed it wasn't the default port in the template when reinstalling, so I assumed it was the custom port I had set last time - but nope.
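     To spot which containers ended up on the same host port, listing all mappings side by side helps:
        # Lists every container with its published ports so duplicate host ports stand out.
        docker ps -a --format 'table {{.Names}}\t{{.Ports}}'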
  8. For a while now I keep getting connection timeouts when trying to access the Deluge web UI - or the web UIs of containers that I route through this container (their network is this container, and their web UI ports have been added to this container). After restarting each container a few times, checking logs, pinging from inside and checking the port from outside with telnet, it sometimes works again - but I have no clue why, and no clue why it doesn't work from the start... In the logs I can see that the Deluge web UI has started - still: "ERR_CONNECTION_TIMED_OUT". And since the other web UIs are also unreachable (but the containers work), I think it's some network problem inside this container. While fiddling around to fix it, I also updated the container (30 min ago), and I do not get the error from the posts above:
     2024-04-04 20:54:56,211 DEBG 'watchdog-script' stdout output: [info] Deluge process started [info] Waiting for Deluge process to start listening on port 58846...
     2024-04-04 20:55:05,365 DEBG 'watchdog-script' stdout output: [info] Deluge process listening on port 58846
     And this is with the latest tag, image from 4 days ago - same as 2.1.1-4-09. Edit2: I seem to have the same issue as Dazedv3 a few posts above, but in my case it's also the Deluge web UI that is inaccessible (and also after a container restart...)
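     To narrow down whether it's the container's networking or the web UIs themselves, the ports can be probed from inside the VPN container (container name and the 8112 Deluge web UI port are assumptions based on a default setup):
        # If the web UI answers here but not from outside, the problem is the port mapping / VPN rules.
        docker exec -it binhex-delugevpn curl -sI http://localhost:8112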
  9. This forum section is quite well populated, so there should be plenty of experience around, but I haven't found this mentioned yet: what is the performance impact of virtualizing Unraid (assuming nothing else is added, just slapping Proxmox / ESXi / Hyper-V in front)? CPU / RAM / I/O performance? Is there maybe a cut-off point where the losses are too big (only 4 cores, or PCIe 2, I don't know)? My primary motivation to virtualize Unraid is to have a means to "reset" it while I can't physically access the machine, i.e. to add IPMI-like capabilities (compare here). I might decide to offload some services to other hardware instead of replacing the existing machine.
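     From what I've read so far, the usual way to keep the disk I/O overhead negligible is to pass the whole SATA controller (and the Unraid flash drive) through to the VM; a sketch of the Proxmox side, where the PCI address, USB ID and VM ID are placeholders for the actual hardware:
        # Find the SATA controller's PCI address and the flash drive's USB vendor:product ID.
        lspci -nn | grep -i sata
        lsusb
        # Pass both through to the Unraid VM (VM ID 100 is a placeholder).
        qm set 100 -hostpci0 0000:00:17.0
        qm set 100 -usb0 host=0781:5571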
  10. Hi there, I'm currently visiting my family, and of course while I'm away my home server "crashed" - and I have no real means to access it (I get into my network, I can ping it, but that's it). So I'm thinking about adding IPMI functionality. There's PiKVM, which is interesting but quite expensive. I'd rather
     - swap my mainboard for one that can be fitted with an IPMI module (2nd hand is fine)
     - or just plainly upgrade a tier (or more): MB + CPU + RAM
     My current hardware: https://geizhals.de/wishlists/2202699
     MB: GIGABYTE GA-H97-HD3 -> I'm reaching the end of its SATA ports already, and PCIe as well
     CPU: Intel Xeon E3-1230 v3
     RAM: 4x 8GB DDR3
     GPU: Nvidia Quadro P400
     Midi tower case with a bunch of fans on a manual fan control (Cooltek Antiphon)
     4x 3.5" HDDs + 2x 2.5" SSDs running, and I have a PCIe-to-SATA expansion card where my DVD drive is currently attached.
     I am running a bunch of containers (currently 34 "active") and 1 VM on it, and sometimes it's already struggling (could be 1-2 rogue Docker containers though, like Jellyfin grabbing lots of resources sometimes). I want to test and try out more stuff in the future. So another idea was moving Unraid into a VM so I can test other OSes on the same machine - also since Unraid has proven to not be super stable 😕
     So the options:
     - PiKVM v4 - 350€ - quite expensive... PiKVM v3 300€, or maybe DIY with my existing Pi 3B (no mass storage emulation) ~100-150€? -> I could reuse this in the future
     - find a fitting MB for LGA 1150 with 4 RAM slots, 2+ PCIe slots, 6+ SATA ports AND IPMI onboard, or at least extension pins like the ASUS P10S-M - but for LGA 1150... I could only find boards for 1151
     - upgrade my machine with a new MB + CPU + RAM; here I'd need quite a lot of support. I'd want to gain performance, and I suppose with newer generations I can also gain a lot of efficiency? Some modern i3? Or some i5 from around 11th gen? I'd aim at around 300-350€, and I doubt I can sell the old parts for a noteworthy amount...
     Sooo, what would you do? Anybody know an MB that would fit? And if not, what hardware would you recommend? Or maybe even a completely different fix?
  11. Yes, I was deleting stuff in the wrong directory (which I have got rid of now as well). I put the edit at the top so people would immediately see it's resolved - I should've written that explicitly.
  12. Edit: Nvm, I noticed I had both a "JDownloader2" and a "jdownloader2" folder in appdata (from a different old package I thought I had cleaned up). So my JDownloader2 container stopped working: the logs were empty (the ones I access from the Unraid UI's Docker tab) and the web UI rejected connections. I restarted the container, the logs were still empty, and the console closed every few seconds. I could, however, see that start-server.sh and start.sh (or similar?) were running. So I followed the steps from the pinned post and restarted. The web UI is still not answering, and the logs window (also the console) closes after a few seconds. I can see a repeating cycle in the logs. JDownloader does not seem to have started/rebuilt yet. It notices that JDownloader.jar is missing (invalid or corrupt) but gives no further error...
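     For anyone hitting the same thing, a quick way to spot such case-duplicated appdata folders from the console:
        # A case-insensitive match shows both 'JDownloader2' and 'jdownloader2' if they exist.
        ls -d /mnt/user/appdata/* | grep -i jdownloader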
  13. Ngl, that project in Docker is a frigging mess. You fix the permissions, and with the next minuscule commits they screw it all up again... How do people handle this "outside setup" that is described in the manual? Did you just run the docker-setup script?
     *** Running /etc/my_init.d/10_syslog-ng.init...
     *** Running /etc/my_init.d/arm_user_files_setup.sh...
     rm: cannot remove '/home/arm/music': Is a directory
     *** /etc/my_init.d/arm_user_files_setup.sh failed with status 1
     *** Killing all processes...
     Dec 9 06:49:13 3be1443b0b1c syslog-ng[19]: syslog-ng starting up; version='3.25.1'
     Updating arm user id from 1000 to 99...
     Updating arm group id from 1000 to 100...
     Adding arm user to 'render' group
     Creating dir: /home/arm/media/completed
     Creating dir: /home/arm/media/raw
     Creating dir: /home/arm/media/movies
     Creating dir: /home/arm/media/transcode
     I'm getting this error since the update. It tries to set everything up again, and while at it, it tries to delete /home/arm/music - but that is one of the mount points. At least the user id updates seem to fit the existing permissions (same ID but a different user name in Unraid though?). Looking at https://github.com/automatic-ripping-machine/automatic-ripping-machine/blob/main/scripts/docker/runit/arm_user_files_setup.sh they are "handling" the mistake in the Docker files by deleting the music folder and symlinking the Music folder there... I changed the mount point to "Music" again and made sure there was no directory "music" inside the folder I mounted to /home/arm... The log seems to append at both the top and the bottom. I might try to completely reinstall it from scratch. That did it. Privileged mode also seems necessary to give access to the drives.
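     To verify which host paths actually end up mounted inside the container (the container name "arm" is an assumption; use whatever the template named it):
        # Prints the configured bind mounts of the container as JSON.
        docker inspect -f '{{ json .Mounts }}' arm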
  14. Oh, I wish I had found this post 4 hrs ago. These "combined" one-for-all threads are a huge mess. Just so others can find it when they're searching for the error:
     caller=query_logger.go:93 level=error component=activeQueryTracker msg="Error opening query log file" file=/prometheus/queries.active err="open /prometheus/queries.active: permission denied"
     panic: Unable to create mmap-ed active query log
     @Roxedus I think this actually is an issue with the template, or could at least be addressed by it - although when I searched for the problem, people using Docker on other platforms had the issue as well. So Prometheus changed this at some point and the template did not follow. Ideally, though, the container would somehow translate its internal user 65534 to the user 99 on Unraid...
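     The workaround usually suggested for this is to hand the data directory to the UID the container runs as (65534, "nobody" inside the official image); the appdata path below is the template default and may differ:
        # Give Prometheus' data directory to the container's internal user.
        chown -R 65534:65534 /mnt/user/appdata/prometheus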
  15. Yes, I'm free of errors now, with the old SSD freshly formatted as the 2nd device in the old pool (main device being the new SSD). While I was at it, I removed a smaller SSD along with a 2nd pool and moved all of this to the big pool with parity (probably also not the best idea looking back; I should've checked it more and run all backups again before I changed another thing). I found the errors in the syslog before, but I just convinced myself too early that it was the drive failing rather than the filesystem having an error (possibly only in Unraid). Memo: bring the whole problem to support. But thanks for the quick answers. I consider this solved.
  16. OK. For some reason I thought I could access the cache pool's disks just like I can access the array's disks - which would've made it complicated. I'm still not sure what the original problem is... the SSD is fine outside the server, and probably would be fine inside if I formatted it again, but Unraid couldn't mount/recognize its old filesystem (the original pool).
  17. Onto the array? Or only onto the new pool disk, once I have it?
  18. Yes, that part seems easy. My real problem is how to get the "old"/"rescued" data back into the system - since the original drive won't be recognized anymore - even though I can mount it with a USB-SATA adapter (that makes me wonder whether one of my internal cables is broken - can't test that before I get a new SATA drive though). Should I copy the files onto the new drives once they're installed, or can I just move them onto the HDDs? Or is there a way to make the system "take" the "stand-in" NVMe disk and just "continue" business (and then I add a SATA drive for redundancy and replace the NVMe in a 3rd step)?
  19. Hey there, after I noticed some problems with Syncthing, it turned out the SSD of my (bigger) cache pool has given out (lots of read errors, couldn't be mounted, filesystem not recognized in Unraid...). I managed to copy the partition on my desktop; the drive didn't actually seem faulty there, but I'm going to replace it with 2 drives now so I have parity... Until those drives arrive, I was thinking of using an NVMe in a USB enclosure as a replacement - I mirrored the faulty disk onto it, but it turns out I can't just add this device to the pool and keep using it. So I can still mount the "stand-in" drive and copy over the files manually. But this is where I'm not sure: where should I copy the files to, so Unraid won't be confused? Onto the pool (as if the mover had already moved them)? Onto the new SSDs (once they're here)? Or is Unraid agnostic to which disk the files are lying on, as long as they're still in the proper share?
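     For the manual copy itself, rsync from the mounted stand-in drive into the target keeps permissions and timestamps intact (the source mount point and share name below are placeholders):
        # -a preserves permissions/ownership/timestamps; trailing slashes matter for rsync.
        rsync -avh /mnt/disks/standin_nvme/sharename/ /mnt/user/sharename/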
  20. When trying to update a specific Docker container's config, I was presented with an error about failing to stop the container, and thus failing to start it up with the new config. I tried to stop and then kill this container via the CLI (docker kill <containername>), which also failed (cannot stop container: <container-id>: tried to kill container, but did not receive an exit event). I tried to stop the Docker engine via the web interface - which seemed to succeed, but upon starting it again the Docker daemon was unresponsive. I then tried to reboot from the web interface - which sent a message to all SSH connections, but didn't seem to do anything else. I tried again with shutdown (repeatedly) and in the end with stopping the array manually. But this also failed and stayed forever (~20h+) on a "Stopping Array - stopping services" (or similar) status. I'll add diagnostics from that point, but as far as I can see there is nothing out of the ordinary in the syslog. In the end I resorted to a hard reset, lacking other options. After this hard reset and starting Docker, the container (Cloudflare DDNS) reacts without issues, so I still suspect some other system service being the hidden cause. This might be related to the linked issue below, but if it is, then that thread also did not reach the point of identifying the failing service. unraid-diagnostics-20230221-2135.zip
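     Next time a container refuses to die, checking for processes stuck in uninterruptible I/O wait (state "D") might point at the hidden culprit before resorting to a hard reset, since those usually indicate a hung mount or device rather than Docker itself:
        # Processes in 'D' state indicate hung I/O; wchan shows where in the kernel they are waiting.
        ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'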
  21. For now it seems as if uninstalling GPUstat solved the issue. It has been running fine ever since. --- Edit: Spoke too early (maybe?). I noticed I couldn't stop a Docker container, tried to stop the Docker service, which did not start back up. Trying to reboot failed and stopping the array is not coming to an end. Might be related or might not be related...
     Feb 21 20:37:10 <SERVER> nginx: 2023/02/21 20:37:10 [error] 22153#22153: *3456794 upstream timed out (110: Connection timed out) while reading response header from upstream, client: <client.ip>, server: , request: "POST /webGui/include/Boot.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "<serverip:sslport>", referrer: "https://<server:port>/Settings/DockerSettings"
     Found another one in the syslog 😕 Although this is probably connected to Docker not responding.
  22. This is not a permanently reoccurring error, and it doesn't seem to start right after boot but only after quite some time of running, so I don't see any reason that justifies setting everything up again (or does "redo the flash drive" refer to copy & pasting onto a new drive? - Seems even less likely to improve anything). "Just format and install again" is so Windows-like, I really didn't expect to hear that. Until now we've established that it does not affect many areas of the system, but it does affect the Unraid UI - not nginx itself but its "upstream" - and I was hoping someone could shed some light on what that would be, and whether I could test for its health next time it happens (if it happens). It's possible that it was GPUstat, or at least connected with my GPU (Nvidia P400), because that is a rather new component and I did not use it much previously.
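     Going by the socket path in that nginx error, the "upstream" would be the PHP-FPM process serving the web UI over the fastcgi socket, so it could be checked directly from SSH next time:
        # Is php-fpm still running, and does its socket exist?
        ps aux | grep '[p]hp-fpm'
        ls -l /var/run/php5-fpm.sock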
  23. I tried the `powerdown -r` after getting a hint about the GPUstat plugin causing issues. But it looks like this script did not fully go through. Shares are still accessible, SSH is unresponsive (although not getting refused or anything, just nothing happening after typing that command). I had to hard-reset the system. About the failed powerdown, the guess (from Discord) is that a service was hanging. Still no further clue about the underlying issue.
  24. AFAIK already quite a while ago. Not sure whether I can control that via the CLI. Is there anything that makes you believe I did not? Or is that just the first reaction to anything web-UI related? But following that link showed me how to disable SSL just to verify - this shows the same behaviour (on a different port now, of course) --> So, same behaviour on HTTP. So I am going to turn SSL back on. ..... after we fix this issue. Why can I turn it off via the CLI but not on?
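     For reference, the CLI route usually cited for toggling this is editing the flash config and restarting the web server (field name and script path as on recent Unraid versions - worth verifying against your own):
        # Set USE_SSL="no" (or "yes"/"auto") in the flash config, then restart the web server.
        nano /boot/config/ident.cfg
        /etc/rc.d/rc.nginx restart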