FrozenGamer

Everything posted by FrozenGamer

  1. Thanks much! Would using a Windows machine to extract and copy those files, instead of the terminal, cause any problems with the install in appdata?
  2. How do I convert or find the linked threads? A lot of my searching has dead-ended because I can't follow the links. Or are these links no longer applicable?
  3. How do I restore an individual container? I have a corrupt database, and I'm not sure the backup from 1.2 months ago even contains a working database. Perhaps I have a bigger problem: I noticed that I have two non-working Docker containers. Both Radarr and NZBHydra2 stopped working, and it probably took me over a month to notice.
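If the backups come from the CA Appdata Backup plugin, each one is a tar archive of the whole appdata tree, so restoring a single container is mostly a matter of stopping that container and extracting just its folder. A minimal sketch of the idea; the archive name and folder names below are made up for illustration, and the demo builds a tiny throwaway archive so the commands are runnable as-is:

```python
# Sketch: pull ONE container's folder out of a full-appdata tar backup.
import os, tarfile, tempfile

work = tempfile.mkdtemp()

# Build a toy backup standing in for e.g. CA_backup.tar.gz (hypothetical name).
src = os.path.join(work, "appdata", "radarr")
os.makedirs(src)
with open(os.path.join(src, "radarr.db"), "w") as f:
    f.write("config")
backup = os.path.join(work, "backup.tar.gz")
with tarfile.open(backup, "w:gz") as tar:
    tar.add(os.path.join(work, "appdata"), arcname="appdata")

# The restore step: extract only appdata/radarr, leaving everything else alone.
restore = tempfile.mkdtemp()
with tarfile.open(backup) as tar:
    members = [m for m in tar.getmembers()
               if m.name.startswith("appdata/radarr")]
    tar.extractall(restore, members=members)

restored = os.path.join(restore, "appdata", "radarr", "radarr.db")
print(os.path.exists(restored))
```

The same selective extraction works against the real backup path once the container is stopped; only the one folder is touched, so the other containers' appdata is never at risk.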
  4. Thanks, everyone! Sorry if my question wasn't written clearly enough.
  5. So I should mark this as solved, but I am still not clear on the following example, which is my current situation; let me know if I have it right. I did a parity check a month ago with zero errors, then upgraded an 8 TB drive to a 14 TB drive in the array. The parity-check history now shows a parity check (from the data rebuild) with zero errors. Does this mean I have valid parity unless parity drifted during the month between the last check and the rebuild, in which case the rebuild was based on inaccurate data? Then it would say I have parity, which I do, but my data may not actually be intact. Or do I have good parity because the rebuild effectively completed a parity check, and my data should be considered accurate? I probably shouldn't worry about this, but I have been curious, since parity checks are really slow on my machine and I'd prefer not to bog the system down for three or four days until next month. Thanks for all the replies and help.
  6. Perhaps you mean High-water (the recommended default) instead of Fill-up. Yes.
  7. The parity-check history calls this a parity check and labels parity as valid. If I did a rebuild to upgrade a drive from 8 TB to 14 TB, is that effectively a parity check? Also, what would happen to the data if parity was not valid and a rebuild of a failed drive or drives was done? I do have two parity drives. I understand that if three drives failed, I would lose the data on the failed drives only, which would be kind of a bummer, since I assume the fill-up allocation method leaves data scattered, i.e. incomplete collections/albums, etc. Just curious. Thanks in advance.
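On the rebuild-vs-check question: Unraid's first parity is a bitwise XOR across the data disks, which is why a rebuild has to read every other drive, and why a rebuild against stale parity silently produces wrong data rather than an error. A toy sketch of the math, with made-up byte values and single parity only (the second parity drive uses a different, Reed-Solomon-style calculation not shown here):

```python
# Toy model of single parity: the parity byte is the XOR of the data bytes.
disks = [0b1010, 0b0110, 0b1111]    # made-up "data drives"

parity = 0
for d in disks:
    parity ^= d                     # what a parity sync computes

# Simulate drive 1 failing, then rebuild it from parity plus the survivors.
rebuilt = parity
for i, d in enumerate(disks):
    if i != 1:
        rebuilt ^= d

print(rebuilt == disks[1])          # True, but only if parity was in sync
```

The last line is the whole caveat: the rebuild reproduces the lost drive exactly when parity matched the array at the moment of failure; if parity had drifted, the rebuilt drive is wrong with no warning.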
  8. OK, thanks. I am working remotely while on vacation. I didn't see the share, but after enabling it in a different settings page than I originally did, it worked, and I have solved the problem. I was pretty sure I was missing something fairly obvious.
  9. The drive is a WD 14 TB, straight out of the box, but I could format it, preferably with a Windows-readable format. I would like to access this drive, which is outside the array and unprotected, from other Windows computers on the network. Hopefully this question makes sense. I saw the remote SMB share option, but it didn't seem like the right thing.
  10. I have identified a problem. I told it to test disk 2 (sdab), and it is testing another disk, which is not 8 TB but 10 TB, and then it gets stuck at 90% every time: "SAS2308 PCI-Express Fusion-MPT SAS-2: Scanning Disk 2 (sdab) at 8 TB, 90%". It appears to continue reading at 133 MB/s, so I assume that isn't the slow disk, if I have one. It seems to have been going long enough that it isn't going to stop.
  11. I am attaching a screenshot of my array. It is my understanding that having drives so full is bad for an array? I thought I could use unBALANCE, but that appears to be better for moving data in the other direction, to free up space on a single drive. Is there a simple app or way to fix my space issue? How big a deal is it, and what are the consequences of leaving the array as I have it?
  12. I have always had problems finishing the DiskSpeed benchmark to find which drive is going slow. Is there a way to benchmark just a specified number of drives at a time? I can't run another test for a while, since I have a parity check going, which has been getting really slow for an unknown reason, but I will try one in a few days. Right now I'm not even starting any Dockers, because I don't want to interfere with the parity check.
  13. It would appear that the CPU spikes when the check slows down (at least a few of the times it behaved that way, but not always). I paused and restarted parity checks with different Dockers, VMs, etc. closed; it was still slow. I shut off all Dockers and VMs and started the parity check over, and it started out faster, like it normally does. One other potential factor: I have Search Everything set to scan many of the directories at 3 a.m., and those directory scans can be quite long; perhaps they are slowing the parity check down? I have turned off the daily Search Everything directory scan and am hoping that helps. Could this be related to one drive having a problem? If so, how do I figure out which one it is? The DiskSpeed docker hasn't worked well for me, so I'm looking for another way of checking. Other factors: my enclosure isn't all that fast, but this is beyond that; a normal parity check should take two or three days for 26 drives plus 2 parity. Also, all my drives are almost full (I'm afraid to add a new drive to expand, or replace a smaller 6 TB with a 14 TB, until I know my parity is good). To add to the complications, I am working on a ship, so my answers may be delayed and depend on when I get into good cell range to do things remotely. Diagnostics and a screenshot attached. Thanks for any help or advice. tower-diagnostics-20220731-2128.zip
  14. I am running Catalina 10.15.4 and have not updated for a long time. I have the option to go to macOS Monterey; it says updating requires 16.75 GB of space. I also have the option for Catalina 10.15.7, but that says "Your disk does not have enough free space": updating requires 20.81 GB. When I check my storage, it says I have 13.14 GB available of 68.38 GB, and that 14.68 GB is used by Messages. I mainly use my VM for iMessage, since it's nice and easy compared to using a phone or setting up another Mac with a monitor and keyboard, so I probably don't need all the new features of (slower?) Monterey. Looking through the forums, it looks like it might be safer for my install not to upgrade to Monterey.
      1. Will the Catalina upgrade fix security holes?
      2. Can I delete the Messages files (videos, etc.) from my VM without them being deleted from my phone and iPad? If so, is there a special way to do this?
      3. Can I increase the size of the vdisk to make enough space for the upgrade, or would it be easier to delete files?
      4. Will deleting files even free space on my VM? I am reading that there is a problem; when I deleted my 2.57 GB of podcasts, it didn't seem to free anything.
      5. How would I go about backing up my disks and settings in case I wanted to go to Monterey?
      6. Any other advice on how to get upgraded? Or should I not worry and just keep using the VM with the old macOS?
      Thanks in advance for any help.
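On the vdisk-size question: if the vdisk is a raw image (the common Unraid default), growing it is just extending the image file while the VM is shut down; the macOS guest then still needs its partition expanded, e.g. from Disk Utility. A minimal sketch of the file-level step, using a hypothetical path and a throwaway temp file so it is runnable as-is (a qcow2 image would need `qemu-img resize` instead):

```python
# Sketch: grow a RAW vdisk image by extending the file (sparse; no data written).
# A real Unraid vdisk lives somewhere like /mnt/user/domains/MacVM/vdisk1.img
# (hypothetical path). Always back up the image first and stop the VM.
import os, tempfile

GiB = 1024 ** 3
tmp = tempfile.NamedTemporaryFile(delete=False)   # stand-in for the vdisk
path = tmp.name
tmp.close()

with open(path, "r+b") as f:
    f.truncate(10 * GiB)            # pretend: current 10 GiB disk

with open(path, "r+b") as f:
    f.truncate(15 * GiB)            # grow to 15 GiB

size = os.path.getsize(path)
os.remove(path)
print(size // GiB)                  # 15
```

Because the file is sparse, the extra space costs nothing on the host until the guest actually writes to it.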
  15. On the Terraria template, I probably just edited the 7777 external port to 7779; I don't remember for sure, but this allowed both ARK-SE and Terraria to run at the same time. It may have been solved, but I didn't see that when I read it. I messaged him, and perhaps he will answer. Thanks, I just now figured out how to mention someone properly! @Saiba Samurai @Cyd
  16. Back to my ongoing ARK-SE problem of not showing up online or being connectable by anyone else. The last thing I did was remove all UDP ports and then add them again: 7777, 7778, and 27015 only. Still not visible from outside; all other Dockers are working fine: Terraria, Minecraft, Minecraft Bedrock server, and two Valheim servers. I forwarded 7777 to 7779 for Terraria so it wouldn't conflict with ARK-SE. As far as I can tell from the posts before this, @Saiba Samurai gave up with the same issues I had. I was also considering trying the stickied server cluster setup instead, but I didn't quite understand how that setup goes. Either way, thanks everyone for trying to help me.
  17. I can see it at 192.168.1.154:7777 only, and I can connect. Not on external IP:27015, internal IP:7777, internal IP:27015, or any other combination. Fixed; thank you for letting me know it is not necessary. Back to bridged. In addition, I shut down and rebooted the Unraid server with all other Dockers and my one VM set not to autostart; this did not seem to help. I tried Privileged; that didn't solve it. I also eliminated the RCON port, as suggested by Mushroom.
  18. I am at a loss as to why my single (not cluster) ARK: Survival Evolved server will not show up on the internet. I will reiterate and add a few things from my previous post, which starts here. To be clear, I am asking about ARK, but I reference Valheim because it has been working fine: the Valheim container has always worked for external connections. Yet ARK just won't be seen or connected to externally, only on my local network. Every other game server I have used from this repository/thread works perfectly, and I understand port forwarding on my router.

At this point I have the following ports mapped on my router: 7777, 7778, and 27015 UDP only, plus 27020 TCP. Testing with a cell-phone-tethered laptop, I am able to connect to Valheim (which uses UDP ports) by typing in the IP address, but I do not see my Valheim server in the Steam server favorites. I tried two UDP port scanners: https://openport.net/udp-port-checker/ reports an error writing to every UDP port I tested (2456 etc. for Valheim, and 27015), but https://www.ipvoid.com/udp-port-scan/ lists all the UDP ports as open/filtered. It lists the service for 7777 as cbt, 7778 as interwise, and 27015 as halflife.

I was asked the following by ich777: "Are you sure that your ISP is not blocking any ports or you are behind a NAT from your ISP?" I called my ISP to ask whether they block UDP ports or specific UDP ports; they said they don't block much, just a few Russian military IPs that have been trying to hack them. I also don't believe they do, because I can connect to Valheim externally and the port scanner above shows all the ports as open/filtered. Probably unrelated, but I discovered that my Nighthawk R7000P router was running two-year-old firmware and not reporting that updates were available; I updated to recent firmware this morning to rule that out and be more secure.

I cannot see my server at https://arkbrowser.com/. I cannot find it when I download all the servers with the boxes checked on my computer (which is on the same network as the Unraid/ARK server), and I cannot see the game via the tethered laptop and the ARK server browser. "Do you have other services exposed to the internet?" I have many computers on the network and a few other services exposed for sure, but none on 27015 to the best of my knowledge, or on 7777/7778 (if that matters, or should I forget about those ports for this connection issue?). Here is the run command for the server:

```
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='ARKSurvivalEvolved' --net='bridge' -e TZ="America/Anchorage" -e HOST_OS="Unraid" -e 'GAME_ID'='376030' -e 'VALIDATE'='' -e 'MAP'='TheIsland' -e 'SERVER_NAME'='IcyWaves' -e 'SRV_PWD'='deleted' -e 'SRV_ADMIN_PWD'='deleted' -e 'GAME_PARAMS'='?MaxPlayers=11?' -e 'GAME_PARAMS_EXTRA'='-server -log' -e 'USERNAME'='' -e 'PASSWRD'='' -e 'UID'='99' -e 'GID'='100' -p '7777:7777/udp' -p '7778:7778/udp' -p '27015:27015/udp' -p '27020:27020/tcp' -v '/mnt/user/appdata/steamcmd':'/serverdata/steamcmd':'rw' -v '/mnt/cache/appdata/ark-se':'/serverdata/serverfiles':'rw' --restart=unless-stopped 'ich777/steamcmd:arkse'
8dce2f7d187d21eebe6bc311f5275bdb2deec3f050ca6d474f0b58e475b10451
```

I also enabled "Respond to Ping on Internet Port" on my router. I tried bridged and host networking, but I'm not sure how to try the specific IP (br0) option. Nothing seems to work. Attaching a screenshot of the router settings that might be relevant, and the log from the docker. Any more suggestions, or data I might provide to help see what's going on?
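One note on those scanner results: UDP has no handshake, so a scanner that gets no reply can only report "open|filtered" whether the packet reached a listening server or was silently dropped, which makes the scans above inconclusive either way. The only reliable check is getting the service itself to answer. A minimal loopback illustration of that round-trip idea, standard library only and on a hypothetical port (it does not test real port forwarding):

```python
# Why "open|filtered" is ambiguous: a UDP probe proves nothing unless the
# listener replies. Tiny echo round-trip on loopback:
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))          # OS picks a free port (stand-in for 27015)
srv.settimeout(2.0)
port = srv.getsockname()[1]

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(2.0)
cli.sendto(b"ping", ("127.0.0.1", port))

data, addr = srv.recvfrom(1024)     # the "server" sees the probe...
srv.sendto(b"pong", addr)           # ...and answers it
reply, _ = cli.recvfrom(1024)       # only now is "open" actually proven
print(reply.decode())               # pong
```

Without that reply step, a dropped probe and a delivered-but-ignored probe look identical to the sender, which is exactly the ambiguity the scanners report.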
  19. Edit: disregard the question; I think I understand now (you mean the four checkboxes at the bottom left of the join-server dialog). Thanks for your patience.
  20. Thanks. I need to try 7 Days to Die, but today I will stick with ARK. I'm sorry, which checkboxes and options, on the Unraid container? I don't understand what you mean.
  21. I have been reading through the forums for quite a bit here and can't seem to find the answer. Can someone point me in the right direction for a single ARK server? I see the stickied cluster info. I can see my server and connect to it locally no problem, but my friend cannot connect externally. 1. What are the specific ports to be forwarded, and the connection type? 2. How should a friend try to connect to my game from Steam? 3. Can I test my own connectability by adding my external IP address and a port to my server list on Steam (clicking View, then Servers, then adding it by my external IP and port, 7777? 27015?)? The ports I have forwarded at this time, after reading the forums, are 7777, 7778, and 27015 UDP only, plus 27020 TCP, but I did leave 27016 UDP open as well. Why is IppoKun talking about 26900 in the post above? Did I completely miss that port? Thanks in advance, and sorry if this is clearly spelled out in this thread already.
  22. I think I am good to go now. I moved all the files out of the appdata folder after it finished doing whatever it was doing. I had to create a new folder for my torrents and then edit the categories. So many things are not obvious with this program, but once it's working, it works pretty well, with the exception of today's little nightmares. Thanks again for your help, wgstarks!
  23. It would appear that when it updated to the broken qBittorrent, the Unraid server moved files from the array to the cache drive and put them in the directory /mnt/user/appdata/binhex-qbittorrentvpn, which does not get moved off the cache. I tried to download another torrent before reverting the qBittorrent version as suggested, and it also went into the same folder. Normally a download should go into /data/QBT/temp/ and then move to /data/QBT/ once completed. Now, whenever qBittorrent is running, the cache drive's reads and writes are maxed out, even though I have deleted all the existing torrents and did just one 10 GB test download, which is stuck at "moving". Is it possible that qBittorrent is still moving all those files around on my cache drive? When I shut down the docker, the reading/writing stops, and I can then manually start the mover from Main; according to the system log (with logging of moved files enabled), it moves over a bunch of the previously torrented files. I think it may be fixing itself? Edit: also, thanks for posting your working settings. I will try them after I let this play out. The Downloads folder in the appdata folder is getting smaller each time I run qBittorrent; what or why, I don't know.