D1G1TALJEDI

Everything posted by D1G1TALJEDI

  1. Are there any plans for your Pihole-DoT-DoH container to support running Gravity Sync? Use case: I have a physical Raspberry Pi running Pi-hole (with DoH support and PiVPN), but when it goes down it'd be nice to fail over to your container on my unRAID server. Could it really be as simple as fulfilling the install script's requirements? (A rough prerequisite check is sketched below.)
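     For anyone curious, a minimal sketch of checking whether the container already has what the Gravity Sync install script typically needs (ssh, rsync, sudo). The container name 'pihole-dot-doh' is an assumption; substitute your own:

         # list any prerequisites missing inside the running container
         docker exec pihole-dot-doh sh -c \
           'for bin in ssh rsync sudo; do command -v "$bin" >/dev/null || echo "missing: $bin"; done'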
  2. Turns out this is my PostgreSQL container. Why in the world does it care at all about `NVIDIAContainerCLIConfig`?
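     In case it helps anyone else narrow this down, a sketch of checking which runtime a container was created with (the container name 'postgresql' is an assumption; use your own):

         # a container created with the nvidia runtime will print "nvidia" here,
         # which would explain why it logs the nvidia-container-runtime config
         docker inspect --format '{{.HostConfig.Runtime}}' postgresql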
  3. I'm running into an issue that brings my unRAID dockers to their knees: one of my containers keeps dumping this into its log.json:

         {"level":"info","msg":"Running with config:
         {
           "AcceptEnvvarUnprivileged": true,
           "NVIDIAContainerCLIConfig": {
             "Root": ""
           },
           "NVIDIACTKConfig": {
             "Path": "nvidia-ctk"
           },
           "NVIDIAContainerRuntimeConfig": {
             "DebugFilePath": "/dev/null",
             "LogLevel": "info",
             "Runtimes": [
               "docker-runc",
               "runc"
             ],
             "Mode": "auto",
             "Modes": {
               "CSV": {
                 "MountSpecPath": "/etc/nvidia-container-runtime/host-files-for-container.d"
               },
               "CDI": {
                 "SpecDirs": null,
                 "DefaultKind": "nvidia.com/gpu",
                 "AnnotationPrefixes": [
                   "cdi.k8s.io/"
                 ]
               }
             }
           },
           "NVIDIAContainerRuntimeHookConfig": {
             "SkipModeDetection": false
           }
         }","time":"2023-05-31T14:08:12-07:00"}
         {"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-31T14:08:12-07:00"}

     Once it consumes all 32 MB (that's right, megabytes), /run is full and I start losing dockers; today's casualty is my Ghost CMS running a non-profit's blog. Docker issues are the hardest for me to track down: unRAID makes containers trivially easy to set up, but they're complex behind the scenes, and unRAID gives little insight into that complexity. Anyone have any pointers?
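     For anyone hitting the same wall, a sketch of seeing what's eating /run (standard tools, run from the unRAID host shell; nothing unRAID-specific):

         # /run is a small tmpfs -- confirm how full it is
         df -h /run
         # biggest offenders, e.g. a runaway log.json
         du -ah /run 2>/dev/null | sort -rh | head -20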
  4. I know this isn't comprehensive because I've spent a whopping 5 minutes looking at the syslog you uploaded, but there are some issues between your CPU and the Linux kernel used in this version of unRAID. Note this message:

         ACPI BIOS Error (bug): Could not resolve symbol [\_SB._OSC.CDW1], AE_NOT_FOUND (20220331/psargs-330)

     Reference: https://bugzilla.kernel.org/show_bug.cgi?id=213023

     There may be further issues that would be easier to see outside of unRAID, such as running a memory test to check for RAM instability, etc. While I love how Linux stuff can run on anything, it can also... run on anything. Garbage in, garbage out, right? Still, you might try a newer version of unRAID (like 6.12 RC6, released yesterday or the day before) to see if it's happier with your CPU. But again, this smacks of hardware (with a sprinkling of kernel-hardware incompatibility). I'll keep perusing and see if I find anything else specific you can look at.
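     If you want to scan for more of these yourself, a rough one-liner against the syslog pulled from the diagnostics zip (the filename 'syslog.txt' is my assumption):

         # surface the firmware/ACPI complaints the kernel logged at boot
         grep -iE 'acpi (bios )?error|ae_not_found' syslog.txt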
  5. For the record, these are the commands the GUI sent:

         /sbin/xfs_repair -v /dev/md5p1
         /sbin/xfs_repair -v /dev/md6p1

     Obviously, this was for my configuration, and it's best to let the GUI do the work. I ran the first one with -nv to see what it looked like before going for it as above (again, via the GUI; I'm only showing the commands I found it ran). As you can see from the output, it took a while (just over an hour). Thanks for stopping me from doing something really stupid @JorgeB and @JonathanM. And thank you @itimpi for getting me to the GUI to make my life a lot easier 😁 I was still curious about the commands, so I just did a "ps -ef | grep xfs_repair" from the command line after I kicked it off. I love this community!
  6. Ok, here's where I ended up:

         Phase 7 - verify and correct link counts...
         Note - stripe unit (0) and width (0) were copied from a backup superblock.
         Please reset with mount -o sunit=,swidth= if necessary

                 XFS_REPAIR Summary    Fri May 19 14:04:08 2023

         Phase           Start           End             Duration
         Phase 1:        05/19 13:03:52  05/19 14:04:03  1 hour, 11 seconds
         Phase 2:        05/19 14:04:03  05/19 14:04:03
         Phase 3:        05/19 14:04:03  05/19 14:04:05  2 seconds
         Phase 4:        05/19 14:04:05  05/19 14:04:05
         Phase 5:        05/19 14:04:05  05/19 14:04:06  1 second
         Phase 6:        05/19 14:04:06  05/19 14:04:07  1 second
         Phase 7:        05/19 14:04:07  05/19 14:04:07

         Total run time: 1 hour, 15 seconds
         done

     This is just for the first disk. Starting on disk #2.
  7. Ah, that's good to know. Since I did it from the GUI, though, I assume it's okay as-is? This is the result that came back, which is encouraging:

         verified secondary superblock...
         would write modified primary superblock
         Primary superblock would have been modified.
         Cannot proceed further in no_modify mode.
         Exiting now.
  8. I started with the instructions you linked to; I thought I was copying the command correctly, but now I see I missed the -n and the number. For simplicity, I went with the GUI. As far as I know, it's still looking for the secondary superblock.
  9. Thanks for the link; it looked like a few people ran with the -L option, so I jumped the gun on that. However, the instructions gave me the confidence to go ahead and try it with just -v:

         xfs_repair -v /dev/sde
         xfs_repair -v /dev/sdf

     I started the first one, and it's currently looking for the secondary superblock:

         Phase 1 - find and verify superblock...
         bad primary superblock - bad magic number !!!
         attempting to find secondary superblock...

     I'll report back on how it goes. Thanks again @JorgeB for the direction.
  10. Last week I installed two new 16TB disks in my array, replacing two 14TB disks. I replaced them one at a time, and it took the usual 24-ish hours to rebuild my array. Everything worked fine and I used the disks as normal for several days. Yesterday I decided to upgrade to an unstable build of unRAID, 6.12 RC6, thinking I'd like to play with the new dashboard and help test. I don't know whether it was the new version or I'd just missed it before, but my array was reporting much fuller. That's when I noticed my two new drives were reporting as not mounted. Specifically:

         Unmountable: Unsupported or no file system

     If it were just one drive, my knee-jerk reaction would be to format and let parity save me once again. However, with two drives down, I don't even know if I could recover. I've seen similar issues that seemed to be resolved with xfs_repair, but I'm no expert on XFS and figured I should check with the community before potentially doing something stupid. I'm attaching my complete diagnostics zip in the hope that someone sees what I need to do. Without adult supervision, I may just attempt something like this after starting the array in maintenance mode:

         xfs_repair -Lv /dev/sde
         xfs_repair -Lv /dev/sdf

     The natives are getting restless without their shows... send help!

     collective-diagnostics-20230519-0716.zip
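     (With hindsight from the rest of this thread: the sketch below would have been the safer opening move. On unRAID you want the md devices, not sdX, so parity stays in sync, and -n makes a check-only pass first. The device names are from my system, per the GUI commands above; adjust to yours.)

         # check-only pass in maintenance mode -- makes no changes to the disks
         xfs_repair -nv /dev/md5p1
         xfs_repair -nv /dev/md6p1
         # review the output, then let the GUI run the real repair (it used -v, no -L)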
  11. I couldn't find the original request in here, but you've already added the Thermaltake Core WP200. You may or may not know that this case is actually a Core W200 sitting atop a Core P200. I see you have the wheels, which are perfect but I was wondering if you could do a variation on the icon but remove the bottom Core P200? Please and thank you!
  12. Just a quick note to new downloaders: there is no default password. It is generated when the docker is created. I just checked my log and found it near the bottom:

         2021-05-19 06:43:21,280 INFO success: shutdown-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
         2021-05-19 06:43:21,281 INFO success: start-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
         2021-05-19 06:43:23,247 DEBG 'start-script' stdout output:
         [+] Crafty: 2021-05-19 06:43:23 AM - INFO: Starting Scheduler Daemon
         2021-05-19 06:43:23,248 DEBG 'start-script' stdout output:
         [+] Crafty: 2021-05-19 06:43:23 AM - INFO: Generating a self signed SSL
         2021-05-19 06:43:23,249 DEBG 'start-script' stdout output:
         [+] Crafty: 2021-05-19 06:43:23 AM - INFO: Generating a key pair. This might take a moment.
         2021-05-19 06:43:23,249 DEBG 'start-script' stdout output:
         [+] Crafty: 2021-05-19 06:43:23 AM - INFO: Starting Tornado HTTPS Server https://00bdd8da2b71:8000
         2021-05-19 06:43:23,249 DEBG 'start-script' stdout output:
         [+] Crafty: 2021-05-19 06:43:23 AM - INFO: Please connect to https://00bdd8da2b71:8000 to continue the install:
         [+] Crafty: 2021-05-19 06:43:23 AM - INFO: Your Username is: Admin
         [+] Crafty: 2021-05-19 06:43:23 AM - INFO: Your Password is: ######
         [+] Crafty: 2021-05-19 06:43:23 AM - INFO: Your Admin token is: ###########################################
         2021-05-19 06:43:23,951 DEBG 'start-script' stdout output:
         [+] Crafty: 2021-05-19 06:43:23 AM - INFO: Crafty Startup Procedure Complete
         2021-05-19 06:43:23,952 DEBG 'start-script' stdout output:
         [+] Crafty: 2021-05-19 06:43:23 AM - HELP: Type 'stop' or 'exit' to shutdown Crafty

     Thanks for another excellent image @binhex!
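     If it has scrolled out of the unRAID log window, you can pull the credentials back out of the container's full log (the container name 'crafty' is an assumption; use whatever yours is called):

         # search the container's complete log for the generated credentials
         docker logs crafty 2>&1 | grep -iE 'username|password|admin token'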
  13. I apologize if this was already answered, but I just haven't been able to find it (I'm not exactly sure what I should be searching for). Here's my issue: once I had Pihole-DoT-DoH installed and configured, I couldn't resolve any of my local network hosts, including my unRAID box. The pihole configuration looks fine, but I'm not sure how to configure the underlying DNS server (dnsmasq?) without moving my DHCP server off my home router. I do have the host names showing up in the pihole next to the IPs, but I can't get any of them to resolve. Here's my current configuration:

         Router/DHCP server: 192.168.50.1 (/24 subnet)
         unRAID:             192.168.50.2
         Pihole-DoT-DoH:     br0 on unRAID Docker at 192.168.50.254

     My ISP configuration has the DNS server set to 192.168.50.254. The DHCP server is set to give out only 192.168.50.254 for DNS, but every client I check shows this as well as 192.168.50.1 (I was hoping that'd allow it to resolve the local network host names). This may not matter, but I did switch the Cloudflare servers to 1.1.1.3 and 1.0.0.3 in cloudflared.yml since I have a ton of young kids in the home. Can you point me in the right direction? I have a local domain suffix in case that helps.
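     From reading around, I believe the piece I was missing is dnsmasq's "conditional forwarding": hand local names and reverse lookups to the router, which knows the DHCP leases. A sketch, run from a shell inside the container ('home.lan' stands in for your local domain suffix):

         # forward the local domain and the 192.168.50.0/24 reverse zone to the router
         cat <<'EOF' > /etc/dnsmasq.d/02-local-forward.conf
         server=/home.lan/192.168.50.1
         rev-server=192.168.50.0/24,192.168.50.1
         EOF
         pihole restartdns

     Pi-hole's web UI also exposes this under Settings > DNS > Conditional Forwarding, which may be the simpler route.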
  14. @binhex, I'm an idiot. It was my map. As soon as I tried moving my map into place and restarted the service, it disappeared. Because this world originally came from my realm, there are a few flags that need to be changed in the level.dat file. I used Universal Minecraft Editor, and after switching on the LAN Broadcast flag and a few others, it worked just fine. Since the solution has decidedly nothing to do with your image, I'll find the appropriate place to document it fully. Thanks again for the excellent image and new interface. You rock!
  15. The problem I was reporting was showing up in Windows as well as on Android and Xbox One; otherwise, I would have assumed it was something happening on the Windows side of things. While I thought it was most likely your docker image, it certainly could be something I inadvertently changed on my router. Just to be safe, I went ahead and added a vanilla installation of BDS on my Ubuntu VM running on unRAID and, sure enough, it worked the way it did before and shows up in Windows, Android and Xbox just fine. So now I'm left with the question: why is BDS showing up on my LAN when running from my Ubuntu VM but not when running in your docker? I do have my VM running its own virtual network adapter, so I tried the same thing with your docker image (latest), but no dice. No combination of networking seems to affect whether or not it shows up in Windows, Android or Xbox, though I'm decidedly a docker novice. If you -- or anyone else -- knows how this is being broadcast in BDS, perhaps we can figure out what specific network configuration I'd need to get it working for me. It seems like I'm not the only one experiencing this issue.
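     For anyone else chasing this: Bedrock's LAN discovery appears to rely on a UDP broadcast on port 19132, and broadcasts don't cross docker's bridged networks. A quick experiment to test that theory (the image name below is my assumption; use whatever your template actually pulls):

         # run the container on the host's network stack so the LAN announce
         # goes out on the real interface instead of the docker bridge
         docker run -d --name bds-test --network host binhex/arch-minecraftbedrockserver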
  16. I love the web interface to attach to the console... I'm not sure who helped you figure it out but thank you! Only problem I have now is that I can't see the server under 'Friends' like I used to when running bedrock_server under Ubuntu. Isn't this supposed to show up like any other LAN game?
  17. While this mostly works pretty well (unless you don't detach properly, in which case you have to force a detach first with `screen -dr`), it'd be nice if there were a direct option like a console or WebUI.
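     Until then, the screen incantations for reference (standard screen flags; the session name 'minecraft' is just an example):

         screen -ls             # list sessions and whether they're attached
         screen -r minecraft    # reattach to a detached session
         screen -dr minecraft   # detach it from wherever it's stuck, then reattach here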