Phillycj

Members · 10 posts

Everything posted by Phillycj

  1. Unfortunately, changing the passphrase to 123456789 didn't work. The passphrase I use only has alphanumeric characters and hyphens, so no particularly troublesome characters that would cause issues. (I'm deleting both appdatas after every new fix attempt, btw.)
     EDIT: Fixed it! In Anope's conf/services.conf, I changed services.example.com to services.fakedomain.com, and then it started working. On a related note, could you please update InspIRCd/Anope to use the same casemapping by default? It isn't the biggest issue, and it's easy to spot when looking at the logs, but it is annoying having to force InspIRCd to use rfc1459. Thanks!
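For anyone hitting the same "invalid link credentials" error, the Anope blocks involved in that fix look roughly like this. This is a sketch, not the container's generated config: the fakedomain.com names come from the posts in this thread, and the password is a placeholder.

```
# conf/services.conf (Anope 2.0) - sketch; names and password are examples
serverinfo
{
    # Must match the server name InspIRCd expects for the services link
    name = "services.fakedomain.com"
    description = "IRC Services"
}

uplink
{
    host = "10.0.1.10"
    port = 7000
    password = "link-password-here"
}
```

The key point of the fix above is that serverinfo:name has to agree with what the IRCd side is configured to accept; the stock example.com value will be rejected.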
  2. I had changed the casemap from rfc1459 to ascii because I was getting this error in the logs:

     Anope 2.0.11, build #2, compiled 12:05:52 Apr 22 2022
     Using configuration file conf/services.conf
     Attempting to connect to uplink #1 10.0.1.10 (10.0.1.10/7000) with protocol InspIRCd 3
     Successfully connected to uplink #1 10.0.1.10:7000
     ERROR: CAPAB negotiation failed: The casemapping of the remote server differs to that of the local server. Local casemapping: ascii Remote casemapping: rfc1459
     [Jun 29 16:46:38 2022] Anope 2.0.11 starting up
     [Jun 29 16:46:38 2022] Loading modules...
     [Jun 29 16:46:38 2022] Using IRCd protocol inspircd3
     [Jun 29 16:46:38 2022] Loading databases...
     [Jun 29 16:46:38 2022] DB_FLATFILE: Unable to open data/anope.db for reading!
     [Jun 29 16:46:38 2022] Databases loaded
     [Jun 29 16:46:38 2022] Attempting to connect to uplink #1 10.0.1.10 (10.0.1.10/7000) with protocol InspIRCd 3
     [Jun 29 16:46:38 2022] Successfully connected to uplink #1 10.0.1.10:7000
     [Jun 29 16:46:38 2022] ERROR: CAPAB negotiation failed: The casemapping of the remote server differs to that of the local server. Local casemapping: ascii Remote casemapping: rfc1459
     [Jun 29 16:46:38 2022] Received ERROR from uplink: CAPAB negotiation failed: The casemapping of the remote server differs to that of the local server. Local casemapping: ascii Remote casemapping: rfc1459

     I had never set the InspIRCd casemapping to ascii; I don't know where it picked that up from.
     EDIT: I checked conf/inspircd.conf, and casemapping is hardcoded to ascii there by default. Swapped both InspIRCd and Anope to rfc1459, same "invalid link credentials" error. The previous value for INSP_NET_NAME was "Firstname Lastname's Server", so I changed it to fakedomain.com and I'm still encountering the "invalid link credentials" error. Deleted both appdatas and only modified the docker.motd, and the errors persist.
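If it helps anyone else with the CAPAB error: the two sides each set their casemapping independently, and they have to agree. A sketch of aligning both on rfc1459 (the option names are from the stock InspIRCd 3 and Anope 2.0 example configs; file paths are the container defaults mentioned in this thread):

```
# conf/inspircd.conf (InspIRCd 3)
<options casemapping="rfc1459">

# conf/services.conf (Anope 2.0)
options
{
    casemap = "rfc1459"
}
```

Either value (ascii or rfc1459) works as long as both ends match, which is why the logs flip between "Local: ascii" and "Remote: rfc1459" depending on which side was last edited.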
  3. Is that what I currently have in the screenshots? 10.0.1.10 is the IP of the unRAID server, and I have updated Anope's "Hostname from Anope" to simply "irc-services".
     EDIT: Anope logs:

     ---Ensuring UID: 99 matches user---
     usermod: no changes
     ---Ensuring GID: 100 matches user---
     usermod: no changes
     ---Setting umask to 000---
     ---Checking for optional scripts---
     ---No optional script found, continuing---
     ---Taking ownership of data...---
     ---Starting...---
     ---Version Check---
     ---Anope v2.0.11 up-to-date---
     ---Preparing Server---
     ---Checking if configuration is in place---
     ---Configuration found!---
     ---Starting Anope---
     Anope 2.0.11, build #2, compiled 12:05:52 Apr 22 2022
     Using configuration file conf/services.conf
     Attempting to connect to uplink #1 10.0.1.10 (10.0.1.10/7000) with protocol InspIRCd 3
     Successfully connected to uplink #1 10.0.1.10:7000
     ERROR: Mismatched server name or password (check the other server's snomask output for details - e.g. user mode +s +Ll)

     And the /mode oper +s +Ll output is:

     * *** LINK: Server connection from services.example.com denied, invalid link credentials
     * *** LINK: Connection to 'inbound from 172.17.0.1' failed with error: Mismatched server name or password (check the other server's snomask output for details - e.g. user mode +s +Ll)
     * *** LINK: Connection to 'inbound from 172.17.0.1' failed.

     That example.com is as-is, I didn't change it for privacy.
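For context while debugging "Mismatched server name or password": on the InspIRCd side, the services link is declared with a pair of tags like the sketch below. This is illustrative only, not the container's actual generated config; names and passwords are placeholders, and the 172.17.0.0/16 allowmask reflects the Docker bridge address visible in the snomask output above.

```
# inspircd.conf (InspIRCd 3) - sketch; values are placeholders
<link name="services.fakedomain.com"
      ipaddr="172.17.0.1"
      port="7000"
      allowmask="172.17.0.0/16"
      sendpass="link-password-here"
      recvpass="link-password-here">

# Mark the services server as a U-line so it can enforce modes
<uline server="services.fakedomain.com" silent="yes">
```

The link name must exactly match the server name Anope announces (its serverinfo name), and sendpass/recvpass must match Anope's uplink password, otherwise InspIRCd denies the inbound connection exactly as in the log above.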
  4. Hey, love the containers! I'm trying to set up InspIRCd + Anope but I'm encountering some issues. Firstly, I'm having trouble viewing Anope's logs: the window shows the logs for a split second, then closes. I have two externalized domain names for this, irc.fakedomain.com (points to 6697) and irc-services.fakedomain.com (points to 7000), and the local IP address of 10.0.1.10. Ports 6667, 6697, 7000, and 7001 are forwarded to 10.0.1.10. I have disabled SSL in Anope to try to eliminate issues for now, but would like to enable it later if possible. Anope's password and InspIRCd's INSP_SERVICES_PASSWORD are the same. I'm able to get InspIRCd up and running and can connect to it with HexChat using irc.fakedomain.com/6697, but Anope is not behaving well. In the "Hostname from Anope" parameter, I currently have irc-services.fakedomain.com, but I have also tried the default 'services' value, and neither works. Attached are my configurations for both containers. I'd much appreciate some help with this.
  5. Hey, I'm getting this error in PavlovVR:

     Connecting anonymously to Steam Public...Logged in OK
     Waiting for user info...OK
     Success! App '622970' already up to date.
     ---Prepare Server---
     ---Checking if 'Game.ini' exists---
     ---'Game.ini' found---
     ---Server ready---
     ---Start Server---
     ln: failed to create hard link '/serverdata/.steam/sdk64/steamclient.so' => '../serverfiles/linux64/steamclient.so': Invalid cross-device link
     ln: failed to create hard link '/serverdata/serverfiles/Pavlov/Binaries/Linux/steamclient.so': File exists
     /serverdata/serverfiles/Pavlov/Binaries/Linux/PavlovServer: error while loading shared libraries: libc++.so.1: cannot open shared object file: No such file or directory

     All the other game servers have worked fine for me, and I can't find any Pavlov or hard-link issues in this thread. Any ideas?
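The hard-link warnings look cosmetic; the fatal line is the missing libc++.so.1. A quick way to see every library the binary can't resolve is to run ldd against it inside the container. This is a sketch: the path is taken from the error log above and may differ on other installs.

```shell
#!/bin/sh
# Sketch: list unresolved shared libraries for the Pavlov server binary.
# The binary path comes from the error log and may differ on your setup.
check_pavlov_libs() {
    bin="/serverdata/serverfiles/Pavlov/Binaries/Linux/PavlovServer"
    if [ -x "$bin" ]; then
        # Any line containing "not found" is a library the loader can't locate
        ldd "$bin" | grep "not found"
    else
        echo "server binary not found at $bin"
    fi
}
check_pavlov_libs
```

If libc++.so.1 is the only missing library, installing libc++ inside the container (e.g. `apt-get install libc++1` on Debian/Ubuntu-based images) or adding a directory that contains it to LD_LIBRARY_PATH typically gets the binary past this error.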
  6. If just the file system on the two drives was wiped, is there any way to restore the fs on these drives while keeping the contents intact?
  7. The pool has data, yes. I have not attempted to format any of the drives yet.
  8. Hi, I updated to 6.9 stable today, and to take advantage of multiple pools I tried removing 2 of my 4 SSDs from the cache pool. I stopped the array, reset the config for the cache pool, and removed the bottom two drives, but did not reduce the cache pool's slot count. Upon booting back up, I am constantly getting the "Unmountable: No pool uuid" error across all of my cache drives, even after re-adding the two removed drives. I tried looking up the error, but the one forum post I could find with it wasn't all that helpful for my scenario. Is anyone able to give me a solution? Thanks! sol-diagnostics-20210302-1848.zip
  9. System info:
       • unRAID 6.9 rc2
       • Ryzen 9 5950X
       • AMD Vega 56 (current main GPU)
       • RTX 3080 (future, but I can use another RX 570 now for testing purposes)
       • Asus ROG STRIX X570-F (BIOS version 3001)
       • G.Skill Trident Z Neo 3600MHz CL18
       • 1x NVMe Gen4 SSD 1TB (hopefully with a solution that takes advantage of the Gen4 speeds)
       • 1x SATA SSD 2TB
       • A few hard drives

     Hi all, I'm planning out my new gaming/content creation/development machine with two GPUs (one AMD, one Nvidia), but while I'm waiting on the new Nvidia card to ship, I've been having some doubts about its overall suitability for my use case. I've been using unRAID as a media server in a separate box, primarily as a media NAS/Plex, but I'm looking at VMs now and it seems like they can solve some of my use cases, but possibly not all. I've been using the unRAID trial on this new machine for a bit; using 1 AMD GPU and manually swapping between VMs from my laptop is annoying, but doing the trick for now.

     Use case: I want to be able to run two VMs at the same time while in unRAID, with GPU passthrough for each: one for Windows and one for Ubuntu/Pop OS. Windows would have the beefier GPU, but that VM wouldn't be running all the time, only when I want to game or do something Windows-specific. The Linux VM would be on as much as possible and would act as my development/content creation machine. As far as I can tell, all of the above is possible, provided I'm willing to put up with a few AMD reset bugs and things like that. This machine and the Linux VM will (hopefully) be running 99% of the time. My trouble is when I want to bring this PC elsewhere (or just not have to deal with virtualization-related issues) and want to be able to boot the same version of Windows that I have in the VM on bare metal. The key requirement here is being able to access the same game files in both unRAID and bare-metal Windows.

     I'm okay with working around this through means you suggest, such as partitioning, two minimal Windows installs that are able to access the same D: drive with my games, NVMe passthrough, etc., if they're at all possible. So the core requirements are:
       1. Be able to run 2 GPU-powered VMs at once + Dockers/NAS when in unRAID (seems easy enough; I'm okay with the performance hits in VMs)
       2. Be able to run Windows on bare metal (also easy enough)
       3. Be able to access the same Windows OS in both methods (the tricky part)

     If number 3 is completely impossible, two separate Windows installs will work. Both methods should have somewhere in the ballpark of 240GB for the C: drive, and a D: drive of at least 1TB for game installs. I'm planning on buying another mid-range 2TB Gen 3 SSD if that helps with my use case. Maybe partitioning that with 240GB for Windows and the rest for the D: drive, then passing through the NVMe controller to the VM for my D: drive, might work for the two-Windows workaround? Thoughts? If anyone could offer a solution, or steer me away from unRAID for this, it would be much appreciated. Thanks!
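One common approach to requirement 3 is to give the Windows VM a whole physical disk rather than a vdisk image, so the very same install can also be booted bare-metal. A hedged sketch of the libvirt XML for that (the by-id path is a placeholder for the actual disk; Windows activation and driver differences between the two boot modes are a separate concern):

```xml
<!-- Sketch: attach an entire physical disk to the VM by its stable ID.
     The /dev/disk/by-id/... path is a placeholder for your actual disk. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/disk/by-id/nvme-EXAMPLE-SERIAL'/>
  <target dev='sdb' bus='sata'/>
</disk>
```

Alternatively, passing the whole NVMe controller through as a PCIe `<hostdev>` device gives the VM native access to the drive, which matches the controller-passthrough idea in the post, at the cost of unRAID losing access to that disk while the VM runs.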