Raiever

Everything posted by Raiever

  1. Bump. Not sure if anyone can assist with this, but I still cannot get new VMs added to Unraid, sadly. The same issue occurs on every OS that I attempt to use. 😞
  2. Hey all, this is a slight rant after a frustrating day attempting to get a single Linux virtual machine to install. I recently rebuilt my Unraid box, upgrading from an older Xeon to a much newer i9-13900 that I got on sale for Black Friday. This came with many changes, of course (motherboard, RAM, PSU, case, etc., because who updates just one part on Black Friday?), listed below. I installed Unraid on a brand-new flash drive and got everything except my Cloudflare tunnels set up: all VMs transferred, containers, etc. Everything works phenomenally... outside of this one issue.

     Motherboard: ASRock Z690 Extreme
     CPU: Intel i9-13900 (stock)
     RAM: 128GB DDR5 (stock)
     GPU: N/A

     The problem is I can't get a single Linux VM to load properly, no matter the settings or the changes I make to the XML data when starting. The VM boots, starts to launch (or I choose "Test/Install"), and then black-screens. I have tried every setting swap known to man, and the only configuration I could get to work temporarily was installing Manjaro with the CPU and the GPU virtualized. After it installed and ran updates, it refused to boot past the black screen. This was accompanied by one of the CPU cores assigned to the VM being pegged at 100% and never dropping.

     The most annoying part is that my previously set-up VMs that were transferred between servers work just fine, while the new ones don't work even when configured with the exact same ISO image and settings in Unraid. I am sure this is just me being dumb, but I am at my wits' end trying to figure this out. What can I test, what else do I need to provide for further information, and what suggestions do y'all have to help me keep a bit of my sanity? Lol. Thank you!
  3. It does, and apologies; I am thinking and typing simultaneously while also experimenting, lol! I believe we are technically on the same page in two different ways. It looks like the structure of /boot/config contains all the informational portions of the OS, and if I want to set up my shares + drive assignments on the new USB, transfer only those .cfg files to the old USB, and then move the old USB to the new system, I should, in theory, be able to:

     1. Set up the new USB with Unraid + start a trial key
     2. Assign my drives and shares on the new system
     3. RClone data from the old system to the new system
     4. Shut down the old system, remove the USB, and copy only the shares/drive data from new to old
     5. Plug the old USB into the new system; assign the old static IP to the new system
     6. Boot!

     This would let me leave everything alone except for the shares/drive assignments and my data, which means no reconfiguring of Dockers/applications/plugins/VMs. It comes down to transferring the data + transferring 4-5 .cfg files from new to old + assigning the static IP, and everything is ready.
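Steps 4-5 above can be sketched as a small shell snippet. The layout is an assumption based on a typical Unraid stick (share configs under config/shares/, disk settings in config/disk.cfg); the paths below are demo stand-ins so the sketch can be dry-run anywhere — verify the real file names on your own flash drive before copying anything.

```shell
# Sketch of steps 4-5: copy only the share/drive settings from the NEW
# stick's config to the OLD one. Paths are demo stand-ins; on a real
# Unraid stick the files live under /boot/config.
NEW_FLASH=/tmp/new-usb-demo
OLD_FLASH=/tmp/old-usb-demo

# -- demo setup only: fake a "new" stick holding two share configs --
mkdir -p "$NEW_FLASH/config/shares" "$OLD_FLASH/config"
echo 'shareName="media"'  > "$NEW_FLASH/config/shares/media.cfg"
echo 'shareName="backup"' > "$NEW_FLASH/config/shares/backup.cfg"
echo 'demo disk settings' > "$NEW_FLASH/config/disk.cfg"

# -- the actual transfer: shares + disk assignments, nothing else --
mkdir -p "$OLD_FLASH/config/shares"
cp "$NEW_FLASH"/config/shares/*.cfg "$OLD_FLASH/config/shares/"
cp "$NEW_FLASH/config/disk.cfg"     "$OLD_FLASH/config/"
```

Everything else on the old stick (license key, Docker templates, plugin settings) is left untouched, which is the point of the plan.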
  4. Sounds like me every Monday morning, lol. This makes sense; I don't know why I didn't think to just copy everything but the license back over... All of the VM data, etc. is stored on the drives themselves. Now I am curious whether I can just plug in the new USB, get the arrays/shares set up, copy all the current data to the new arrays/shares, then overwrite the share settings *only* on the old drive. That way the VMs, Docker containers, system modifications, etc. all remain intact. That might need someone with more knowledge of the file structure to answer, though.
  5. Bump! Hoping someone has the answer to this. ❤️
  6. Howdy Darrell! No worries at all, trust me, I know (already a few ciders deep lol!). I know I can use the old USB, which holds the current license. But I want to keep the *new* configuration on the *old* USB once I set the system up. As in, I want to turn on both machines, software-RAID the new box, copy the data over... then pop the contents of the new system's USB (excluding the license) onto the old USB, so that all the configs, drive allocations, etc. are on the old USB, now being used on the new system. If that... makes sense. lmao

     The plan:
     1. Turn on both servers. Set up the RAID array in Unraid on the new server with its new, empty SSDs.
     2. Copy the data from the old server to the new one (virtual machines, Docker settings, data files, IP settings, everything) OR only set up the software RAID and copy the data, then move that portion of the settings over to the old USB (whichever is easier).
     3. Plug the old USB into the new system, which will load Unraid with the new RAID + all the data + the new RAID array settings, etc.
     4. Get rid of the old server because that son of a gun is loud. 😂

     Hopefully this helps explain what I meant. If not, let me know; I'll get ChatGPT to explain it for me 😉
  7. Howdy folks, this has probably been asked a million times, but none of the answers matched what I was looking for, so please bear with me. I am building a new Unraid server with all new drives. I have already taken one CPU and half my RAM from the old server and put them into the new one, so I have both machines running side by side (both can ping each other); one is on my Pro license, and one is on a temp license. The problem is that the old server uses a hardware RAID, and the new one will use a software RAID inside Unraid. I need to move all the data from the old server to the new one, and I want to keep my current flash drive for the license. I think I can create the setup for the software RAID on the new drives/system, copy the data over (keeping the original copies of the data as a backup until I am fully up and running), then copy everything on the trial flash drive over to the licensed flash drive? That way the license would remain on the old USB but be transferred to the new system. My general use case is a handful of Docker apps (Jellyfin, a couple of games, CyberChef, nothing crazy) and two virtual machines, which I am just fine setting back up if it's a hassle to copy them over. Outside of that, not much besides cloudflared. Let me know your thoughts, and if I can provide any further info, or if you can provide any tips for how to best set up a full-SSD RAID system, that'd be awesome! 😘
  8. Hey boss! Just checking to see if there have been any changes on this bad boy? I understand if not.
  9. To report back a second time: I ran the "git config --global --add safe.directory /home/docker/leon" command again, which again removed the "insecure" error, then gave all users on the server rwx permissions and re-ran everything. Still no luck; same error below:

     error Command failed with exit code 243.
     node:internal/modules/cjs/loader:959
       throw err;
       ^
     Error: Cannot find module '/home/docker/leon/server/dist/index.js'
         at Function.Module._resolveFilename (node:internal/modules/cjs/loader:956:15)
         at Function.Module._load (node:internal/modules/cjs/loader:804:27)
         at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
         at node:internal/main/run_main_module:17:47 {
       code: 'MODULE_NOT_FOUND',
       requireStack: []
     }
     error Command failed with exit code 1.
  10. Hey mate! No dice. Removed all files from all locations on disk (consoled in after removing via the GUI and did a "grep -r leon ." from the root folder; nothing was found), re-downloaded a fresh image, and changed no settings outside of renaming the container. Same error, sadly:

     fatal: detected dubious ownership in repository at '/home/docker/leon'
     To add an exception for this directory, call:
         git config --global --add safe.directory /home/docker/leon
     Babel could not write cache to file: /home/docker/leon/node_modules/.cache/@babel/register/.babel.7.21.4.development.json
     due to a permission issue. Cache is disabled.
     error Command failed with exit code 243.
     node:internal/modules/cjs/loader:959
       throw err;
       ^
     Error: Cannot find module '/home/docker/leon/server/dist/index.js'
         at Function.Module._resolveFilename (node:internal/modules/cjs/loader:956:15)
         at Function.Module._load (node:internal/modules/cjs/loader:804:27)
         at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
         at node:internal/main/run_main_module:17:47 {
       code: 'MODULE_NOT_FOUND',
       requireStack: []
     }
     error Command failed with exit code 1.

     I can provide the entire log like you posted if you let me know how; never had to pull an entire log before lol!
  11. Just a heads up that it seems to have fixed the error, and possibly introduced another! Unless I am doing something dumb 😁 For this one, I did as the exception states in the Docker container, and it seems to have fixed it:

     fatal: detected dubious ownership in repository at '/home/docker/leon'
     To add an exception for this directory, call:
         git config --global --add safe.directory /home/docker/leon
     Babel could not write cache to file: /home/docker/leon/node_modules/.cache/@babel/register/.babel.7.21.4.development.json
     due to a permission issue. Cache is disabled.
     error Command failed with exit code 243.
     node:internal/modules/cjs/loader:959
       throw err;
       ^
     Error: Cannot find module '/home/docker/leon/server/dist/index.js'
         at Function.Module._resolveFilename (node:internal/modules/cjs/loader:956:15)
         at Function.Module._load (node:internal/modules/cjs/loader:804:27)
         at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
         at node:internal/main/run_main_module:17:47 {
       code: 'MODULE_NOT_FOUND',
       requireStack: []
     }
     error Command failed with exit code 1.

     But then that left this, which, if I read correctly, is the same error the others were having?:

     error Command failed with exit code 243.
     node:internal/modules/cjs/loader:959
       throw err;
       ^
     Error: Cannot find module '/home/docker/leon/server/dist/index.js'
         at Function.Module._resolveFilename (node:internal/modules/cjs/loader:956:15)
         at Function.Module._load (node:internal/modules/cjs/loader:804:27)
         at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
         at node:internal/main/run_main_module:17:47 {
       code: 'MODULE_NOT_FOUND',
       requireStack: []
     }
     error Command failed with exit code 1.
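The "dubious ownership" workaround applied across these posts, run from a shell inside the container, can be sketched as below. The repo path /home/docker/leon comes straight from the log; the final check just makes the follow-on MODULE_NOT_FOUND symptom visible (the built entry point never got produced) rather than fixing it.

```shell
# Mark the repo as safe for git, exactly as the error message suggests.
# /home/docker/leon is the path taken from the container's log output.
REPO=/home/docker/leon
git config --global --add safe.directory "$REPO"

# Confirm the exception was actually recorded.
git config --global --get-all safe.directory

# The remaining MODULE_NOT_FOUND error means the built entry point is
# missing; this check only surfaces that before digging further.
if [ ! -f "$REPO/server/dist/index.js" ]; then
    echo "server/dist/index.js missing -- the build step never ran or failed"
fi
```

Note the git fix and the missing dist/index.js are two separate problems: the first is an ownership check added in newer git versions, the second is whatever build step this image runs failing silently.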
  12. @ljm42 isn't incorrect; that is the *typical* way this functions. However, for me, the OS refused to pick up the new NIC even after reboots, and you can see in my original post that there was no MAC address tied to eth0. After deleting the rules file and rebooting, it was recreated, and the MAC address was available for assigning.
  13. Yep, doing what Jorge said had me back up and running in about 8 minutes.

     1. Back up your flash drive, JUST in case of issues.
     2. Navigate to the Main tab -> Flash -> set Export to "Yes" | set Security to "Public".
     3. Open the flash drive in a file explorer like a share drive.
     4. Navigate to the /config/ folder and delete network-rules.cfg.
     5. Shut down the server, install the new card, and turn the server back on.
     6. Navigate to the Main tab -> Flash -> set Export to "No" | set Security to "Private".
     7. Configure your new network card as you did the first time!
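The delete in step 4 above can also be done from a console session instead of the exported share. On a real Unraid box the flash drive is mounted at /boot; the snippet below uses a demo path so it can be dry-run safely, and keeps a backup copy before removing the stale rules file.

```shell
# Same fix from a console session instead of the exported share.
# On a live Unraid system, set FLASH=/boot; the default below is a
# demo stand-in so this sketch can be dry-run anywhere.
FLASH=${FLASH:-/tmp/flash-demo}

# -- demo setup only: fake the flash config folder --
mkdir -p "$FLASH/config"
echo 'old NIC rules' > "$FLASH/config/network-rules.cfg"

# Keep a backup first, then remove the stale rules file; Unraid
# regenerates it on the next boot with the new card's MAC address.
cp "$FLASH/config/network-rules.cfg" "$FLASH/network-rules.cfg.bak"
rm "$FLASH/config/network-rules.cfg"
```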
  14. Makes sense that some things break with weird settings; sadly, there is never a "cookie cutter" perfect solution lol. I can confirm that after reinstalling the game with both those directories and then re-uploading my save file, everything updated and is back up and running! I am sorry to bug you with such an easy fix; just a simple oversight on my part during the install that caused way more of a headache than needed. Thank you so much for your patience and assistance. It is greatly appreciated. ❤️
  15. I thought it was interesting too; took forever to grab that line lol! I had SteamCMD in a different directory than the server files, but I must have screwed up using it in a separate directory when I re-installed the server (whoops). I have read up on the FUSE bypass and have directly linked SteamCMD and the Satisfactory server to the following directories:

     /mnt/disk1/GameData/SteamCMD
     /mnt/disk1/GameData/Satisfactory

     I don't think it will make *that* much of a difference in performance, however, as I am running this on dual Xeon 24-core/58-thread CPUs and 1.6TB of RAM, with all-SSD storage for my disks, but I will trust in the wisdom of folks who know a lot more about Unraid than I do lmao. I am attempting a reinstall + copying my save files back over and will let you know how it goes!
  16. 1. No sir, not on experimental; just the latest Steam update (not opted into experimental).
     2. Not running anything crazy custom for DNS, just Google and Cloudflare with no ad-blocking functionality.
     3. Screenshots attached, plus a code block from the top of the log (not sure how to export the entire startup log in Unraid?). The start does this, then instantly spams the 10,000-line startup process:

     usermod: no changes
     ---Ensuring GID: 100 matches user---
     usermod: no changes
     ---Setting umask to 0000---
     ---Checking for optional scripts---
     ---No optional script found, continuing---
     ---Taking ownership of data...---
     ---Starting...---
     ---Update SteamCMD---
     /serverdata/steamcmd/steamcmd.sh: line 39:    21 Segmentation fault      $DEBUGGER "$STEAMEXE" "$@"
     ---Update Server---
     /serverdata/steamcmd/steamcmd.sh: line 39:    39 Segmentation fault      $DEBUGGER "$STEAMEXE" "$@"
     ---Prepare Server---
  17. Yes sir! Twice, as a matter of fact. Looks like everything is good to go, but even after restarting + reinstalling, the server version is mismatched with the game version. The server needs to be bumped to v21116, but I am unsure as to how lol
  18. Hey @Goobaroo, looks like FTB Direwolf20 1.19 has been released, and it comes with updated versions of a lot of the plugins to fix bugs such as the FTBBackups2 memory leak issue causing servers to crash. (How fun it has been diagnosing that bug... 3000+ ticks behind after a few days of running the server, and I have 600GB of RAM available. lol) I see that there is a way to force the latest version, but I am unsure where to place the -latest in the configuration page of the container. Any assistance is appreciated!
  19. Hey @ich777, I am having some issues with updating the Satisfactory container. The latest game update is v21116 and the server is on v209846. I have attempted to reboot the container a few times and haven't had any luck updating the files. I did attempt adding my Steam account as well, but there is no update at the moment. Is there any way to check the container to see if there is any conflict with the Steam appID, etc., when updating? Thank you in advance!
  20. Well, I will be damned. I have redone this entire process half a dozen times: removed every file, removed single files, changed drives, changed permissions, renamed files, manually downloaded files and dropped them in the folder... just about everything I can name or think of. Then today I delete the installer, start the server... and it works. Black magic; I don't understand it, but I appreciate it! 😁
  21. It is indeed missing the Forge .jar files; the only things created in the directory are:

     Eula.txt (1KB, text file)
     log4j2_112-116.xml (2KB, XML file)
     serverinstall_95_2291 (5.3MB, no file extension)

     I thought perhaps this was because it was on a cache drive and failing, but I went ahead and tested it on a secondary drive that isn't cached, and that also only pulled those three files. I also made sure that all permissions are correct on the entire share and for all folders, so there are no read/write issues. Thoughts?
  22. Hey @Goobaroo, first off, thank you for the hard work on all your containers and support! I have been having some issues with the Direwolf Docker container, however, where no matter what I do, it refuses to install itself. I have changed permissions, accepted the EULA, verified the "serverinstall_95_2291" file is 5,356KB, checked logs... but no matter what, it always errors. The error from the log is below; let me know if I am just dumb. The only settings I have changed are the initial RAM, and the server port from 25565 to 25566, as I have another service running on 25565.

     ++ ls 'forge-*.jar'
     ls: cannot access 'forge-*.jar': No such file or directory
     + FORGE_JAR=
     + curl -Lo log4j2_112-116.xml https://launcher.mojang.com/v1/objects/02937d122c86ce73319ef9975b58896fc1b491d1/log4j2_112-116.xml
       % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                      Dload  Upload   Total   Spent    Left  Speed
     100  1131  100  1131    0     0  13305      0 --:--:-- --:--:-- --:--:-- 13464
     + java -server -XX:+UseG1GC -XX:+UnlockExperimentalVMOptions -Dfml.queryResult=confirm -Dlog4j.configurationFile=log4j2_112-116.xml -Xms8192m -Xmx16384m -jar nogui
     Error: Unable to access jarfile nogui
     + ID=95
     + VER=2291
     + cd /data
     + [[ true = \f\a\l\s\e ]]
     + echo eula=true
     + [[ -f serverinstall_95_2291 ]]
     + [[ -n FTB Presents Direwolf20 1.18 v1.10.1 Server Powered by Docker ]]
     + sed -i '/motd\s*=/ c motd=FTB Presents Direwolf20 1.18 v1.10.1 Server Powered by Docker' /data/server.properties
     sed: can't read /data/server.properties: No such file or directory
     + [[ -n world ]]
     + sed -i '/level-name\s*=/ c level-name=world' /data/server.properties
     sed: can't read /data/server.properties: No such file or directory
     + [[ -n '' ]]
     + [[ -n '' ]]
     + sed -i 's/server-port.*/server-port=25565/g' server.properties
     sed: can't read server.properties: No such file or directory
     + [[ -f run.sh ]]
     + [[ -f start.sh ]]
     + [[ -f run.sh ]]
     + [[ -f start.sh ]]
     + rm -f 'forge-*-installer.jar'
     ++ ls 'forge-*.jar'
     ls: cannot access 'forge-*.jar': No such file or directory
     + FORGE_JAR=
     + curl -Lo log4j2_112-116.xml https://launcher.mojang.com/v1/objects/02937d122c86ce73319ef9975b58896fc1b491d1/log4j2_112-116.xml
       % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                      Dload  Upload   Total   Spent    Left  Speed
     100  1131  100  1131    0     0  14500      0 --:--:-- --:--:-- --:--:-- 14500
     + java -server -XX:+UseG1GC -XX:+UnlockExperimentalVMOptions -Dfml.queryResult=confirm -Dlog4j.configurationFile=log4j2_112-116.xml -Xms8192m -Xmx16384m -jar nogui
     Error: Unable to access jarfile nogui
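The failure chain is visible in the trace above: no forge-*.jar exists, so FORGE_JAR ends up empty and the launch line effectively collapses to `java ... -jar nogui`, which is why Java complains about a jarfile named "nogui". A tiny sketch of that expansion (demo directory, nothing container-specific):

```shell
# Reproduce the expansion bug from the trace: with no forge-*.jar present,
# FORGE_JAR is empty, so "nogui" is treated as the jar file name.
cd "$(mktemp -d)"

FORGE_JAR=$(ls forge-*.jar 2>/dev/null)   # empty -- no jar was installed

# What the container effectively runs (echoed here instead of executed):
echo java -jar $FORGE_JAR nogui
# prints: java -jar nogui
```

So the real problem is upstream: the serverinstall_95_2291 installer never produced the Forge jar, and everything after that is a cascade.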
  23. @JorgeB You are a wizard; that seems to have done it! Everything appears to be working flawlessly now. I have to ask, what do you think went wrong here? Did I not prep this correctly, or did something strange happen on one of the reboots to cause it to wig out?
  24. Hey JorgeB! Thanks for the tip, let me give that a shot and see what happens. Will rename for now, and if it works I will delete the old copy.
  25. First and foremost, I am sorry to even be asking this. Coming from a networking background, I feel dumb after spending a few hours on this project.

     Backstory: I upgraded my internal network with new 10G SFP Ethernet modules and Cat7 cabling. The new setup is simple: Ubiquiti Dream Machine Pro to unmanaged switch via SFP, unmanaged switch to HP DL360 Gen9 via SFP. I then swapped from 100Mb Ethernet to the SFP port by plugging in the new SFP, hooked up an Ethernet cable to my router, and swapped the 192.168.1.3 IP reservation from the old Ethernet to the SFP. Rebooted, logged right in, no issues. Once I verified the SFP module and port were working and assigned 192.168.1.3 in DHCP, I rebooted and disabled the original internal 4x Ethernet ports via the BIOS; again, logged right back in with no issues. After all this, I had my system running with Eth0, Eth4, and Eth5 (SFP ports). I shut Eth5 down, leaving me with Eth0 and Eth4.

     Issue: Now this is where it gets weird. "ifconfig" shows no Eth0, and "ifconfig eth0" shows "eth0: error fetching interface information: Device not found." I have to manually assign the 192.168.1.3 IP to Eth0 for it not to pull a 169.x.x.x address. I believe I have everything set up correctly; attached are pictures of the Network Configuration page. Any help would be appreciated regarding what I may be doing wrong, or if I am way overthinking this lol 🙃🙃🙃
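A quick way to separate "the kernel doesn't see the interface" from "the interface just has no address" is to check /sys/class/net directly, then apply the manual assignment as a stopgap. The snippet below is a diagnostic sketch: IFACE and ADDR come from the post, and the actual `ip` commands are printed as a dry run since they need root and a live NIC.

```shell
# Diagnostic sketch for the missing-eth0 symptom. IFACE/ADDR are the
# values from the post; adjust for your own setup.
IFACE=eth0
ADDR=192.168.1.3/24

# Does the kernel know this interface at all?
if [ -d "/sys/class/net/$IFACE" ]; then
    echo "$IFACE exists; kernel state: $(cat /sys/class/net/$IFACE/operstate)"
else
    echo "$IFACE not present -- matches the 'Device not found' error"
fi

# The manual stopgap from the post, shown as a dry run
# (needs root to actually apply):
echo "would run: ip addr add $ADDR dev $IFACE && ip link set $IFACE up"
```

If the /sys/class/net entry is missing entirely, no amount of address assignment will help; that points at driver binding or the udev rules file covered in the earlier NIC-swap posts.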