FrozenGamer

Everything posted by FrozenGamer

  1. I should also mention I do not have a monitor attached, since it is a Ryzen 5900 without integrated graphics. The system has been running for a long time with no problems, so this is new behaviour.
     1. How do I get a copy of the logs or diagnostics to see what is happening before I shut down?
     2. How do I cleanly shut it down via telnet/SSH from Windows? I have some connection to the GUI and can maybe initiate a shutdown there, but I don't have a lot of confidence, since most things aren't working properly (such as downloading diagnostics).
     3. Can I try to shut down the array via the GUI and still retain the pertinent data to see what is happening?
     4. Should I create a separate thread to work on what caused the issue? I used Compose Manager to create a container that seemed to be working fine yesterday, from here: https://github.com/smkent/safeway-coupons?tab=readme-ov-file (and I did one other thing), so it is likely something from this that caused the problem.
     While I was typing this up, the log finally responded well enough to show me the cause. See attached: completely out of memory. unraid crash log .txt
     Edit/update 1: I have shut down the array via the GUI and have been able to SSH into the machine. "df -h /" shows only 443M of 16GB used on the USB stick, and "free -m" shows: total 32021, used 513, free 30358, shared 663, buff/cache 1149, available 30343; swap 0. I still can't get the diagnostics download (via the GUI) to finish, even after letting it sit for an hour. I feel like I could probably shut the server down via the GUI, but I would lose information that might point to exactly why this happened.
     Edit/update 2: I got a diagnostics.zip via the command line (not anonymized), and the array shut down via the GUI. According to gemini.google.com, my memory errors are related to a container; my error logs in /var/log/nginx are about 40M in size. Not seeing any responses, I will reboot and then disable autostart on the safeway-coupons container, which I suspect caused the problem since the issue appeared the day after I set up the container.
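For anyone hitting the same wall: /var/log is a small tmpfs on Unraid, so a runaway log can exhaust it even when the flash drive and RAM look fine. A minimal sketch of reclaiming the space, using a throwaway file in place of the real /var/log/nginx/error.log (the path from the post); on the real server you would then run `diagnostics` from the CLI to save a zip to the flash drive.

```shell
# Demo uses a throwaway file; on the server the runaway log was /var/log/nginx/error.log.
LOG=/tmp/demo_error.log
dd if=/dev/zero of="$LOG" bs=1M count=40 2>/dev/null   # simulate the ~40M error log
ls -lh "$LOG"
: > "$LOG"    # truncate in place; a process writing to it keeps its valid file handle
ls -lh "$LOG"
```

Truncating with `: >` rather than deleting matters because nginx holds the file open; deleting the file would not free the space until the process is restarted.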
  2. According to this site I have CMR drives. Happy, since I bought 5 for $199 each at Best Buy. Thanks for your help! https://nascompares.com/answer/list-of-wd-cmr-and-smr-hard-drives-hdd/
  3. The rebuild is now just finishing at 106 MB/s; it started at about 57 MB/s and stayed there for most of the run. Total time will have been 3 days 1.4 hours, but I have 29 drives and I think I really need to find a different caddy with faster throughput to house them. Eliminating the slower 8TB Seagate SMR drives will help (3 more to get rid of), though I think those were only slow on writes and not so much on reads. 3 days isn't a terrible overall time compared to my normal times with the faster 14TB Seagate parity drives, considering I am doing 4 more TB.
  4. Normally when I rebuild a drive it runs a bit slower until I get past the old 8TB Seagate ST8000DM004 drives, which seem to limit me; then the last 6TB would be fairly fast. My old parity drives were two 14TB Seagates (ST14000NM001G), which I believe were much faster. Could my new WD parity drive be causing the slowdown, or is it a product of rebuilding 1 of 2 parity drives? If so, should I order some faster Seagate drives for parity?
  5. I am not sure what the actual error is with drive 2. I have a UPS installed, but it only covers the main box with Unraid; it doesn't shut down the drives, which are in a separate caddy, so when the UPS eventually runs out of battery the caddies just turn off. I am attaching my diagnostics. Of course this is happening while I am out of town (for another 4 days). I didn't notice anything about the drive being disabled when I remotely restarted the parity check after I had a friend reboot the server for me. The parity check was 25 percent through with a disk emulated, and I stopped it. What should I do next? I have identical spare 8TB drives I could use to replace the failed one. I just ordered 5 18TB drives to shuck and replace the 8TB drives at some point, but my 2 parity drives are 14TB. Should I just shut it all down, wait 4 days until I get home, and then rebuild onto an old 8TB?
     1. Should Unraid shut down the same way during a parity check when the UPS tells it to via the USB cable? It normally does just fine when not in a parity check.
     2. What went wrong on the 8TB (drive 2)?
     3. Should I assume it is bad, pull it, and rebuild onto one of the several 8TB drives I have already pulled and upgraded to 14TB?
     4. How do I introduce the 18TB drives into the array? But I think I am getting ahead of myself on that.
     tower-diagnostics-20231122-1658.zip
     UPDATE: Thanks Frank1940 and JorgeB, I rebuilt it onto itself and it's back up and running. I would have gone with more conservative methods, but I'm only home for 4 days this month. I will introduce the 18TB drives at a later date.
  6. Thanks much. Would using a Windows machine to extract and copy those files, versus doing it in the terminal, cause any problems with the install in appdata?
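On the Windows-vs-terminal question above: copying over SMB from Windows generally loses Linux ownership and permission bits, while doing it in the terminal with `cp -a` (or `tar`) preserves them. A minimal sketch using demo paths; the real target would be under /mnt/user/appdata, and `nobody:users` as the expected ownership is an assumption based on Unraid's usual share defaults.

```shell
# Demo paths; on Unraid the target would be /mnt/user/appdata/<container> (path assumed).
SRC=/tmp/demo_src
DST=/tmp/demo_appdata/someapp
mkdir -p "$SRC" "$DST"
echo "setting" > "$SRC/config.xml"
cp -a "$SRC/config.xml" "$DST/"                    # -a preserves mode and timestamps
chown -R nobody:users "$DST" 2>/dev/null || true   # Unraid's usual share ownership (assumption)
ls -l "$DST"
```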
  7. How do I convert or find the linked threads? A lot of my searching has dead-ended because I can't follow the links. Or are these links no longer applicable?
  8. How do I restore an individual container? I have a corrupt database, and I am not sure the 1.2-month-old backup will even contain a working database. Perhaps I have a bigger problem, as I noticed that I have 2 non-working Docker containers: both Radarr and NZBHydra2 stopped working. It probably took me over a month to notice that this had happened.
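One way to restore a single container rather than all of appdata is to stop that container and extract only its folder from the backup archive. A sketch with a self-contained demo fixture standing in for a real backup; the paths and the "radarr" folder/database names are illustrative, not the exact backup layout.

```shell
# Demo fixture stands in for a real backup archive; BACKUP/APPDATA paths are examples.
BACKUP=/tmp/demo_backup
APPDATA=/tmp/demo_restore
mkdir -p "$BACKUP" "$APPDATA" /tmp/demo_stage/radarr
echo "old-db" > /tmp/demo_stage/radarr/radarr.db    # hypothetical database file
tar -czf "$BACKUP/appdata.tar.gz" -C /tmp/demo_stage radarr

# Restore only the one container's folder (stop that container first on a real box):
tar -xzf "$BACKUP/appdata.tar.gz" -C "$APPDATA" radarr
ls -l "$APPDATA/radarr"
```

Naming the folder ("radarr") after the archive path makes tar extract just that member, so the other containers' appdata is untouched.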
  9. Thanks everyone! Sorry if my question was not written clearly enough.
  10. So I should mark this as solved, but I am still not clear on the following example (which is my current situation); let me know if I have it right. Suppose I did a parity check a month ago with zero errors, and I then upgrade an 8TB drive to 14TB in the array; the parity-check history now says it has done a parity check (from the data rebuild) with zero errors. Do I now have valid parity, unless parity was lost in the month between the last parity check and the data rebuild, in which case the rebuild was based on inaccurate data? In that case it says I have parity, which I do, but my data may not actually be perfect. Or do I have parity because I effectively just completed a parity check when I did the data rebuild, and my data should be considered accurate? I probably shouldn't worry about this, but I have been curious about it, since parity checks are really slow on my machine and I prefer not to bog the system down for 3 or 4 days until next month. Thanks for all the replies and help.
  11. Perhaps you mean Highwater (recommended and default) instead of Fillup. Yes
  12. The parity-check history calls this a parity check and labels parity as valid. If I did a data rebuild to upgrade a drive from 8TB to 14TB, is this effectively a parity check? Also, what would happen to data if parity was not valid and a rebuild was done of a failed drive or drives? I do have 2 parity drives. I understand that if 3 drives failed I would lose the data on the failed drives only, which would be kind of a bummer, since I assume the fill-up allocation method leaves data scattered, i.e. incomplete collections/albums etc. Just curious. Thanks in advance.
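To see why a rebuild reads every other disk end-to-end (which is why it resembles a parity check), here is a toy single-parity example: the parity byte is the XOR of the data bytes, and a lost byte is recovered by XORing the survivors with parity. This illustrates only Unraid's first parity disk; the second parity disk uses different math.

```shell
# Toy single-parity illustration: parity = XOR of the data bytes on each disk.
d1=$(( 0x0F )); d2=$(( 0x33 )); d3=$(( 0x55 ))
p=$(( d1 ^ d2 ^ d3 ))                 # parity byte written to the parity disk
printf 'parity byte: 0x%02X\n' "$p"   # prints 0x69
rebuilt=$(( d1 ^ d3 ^ p ))            # "disk 2" recomputed from the survivors + parity
printf 'rebuilt d2:  0x%02X\n' "$rebuilt"   # prints 0x33, matching the lost d2
```

The catch the thread is circling around: the rebuild can only reproduce what parity currently encodes, so if parity had silently drifted before the rebuild, the rebuilt drive faithfully reproduces the drift.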
  13. OK, thanks. I am working remotely while on vacation. I didn't see the share at first, but after enabling it in a different settings page than I originally did, it worked and I have solved the problem. I was pretty sure I was missing something somewhat obvious.
  14. The drive is a WD 14TB straight out of the box, but I could format it, preferably with a Windows-readable format. I would like to access this drive, which is not in the array and unprotected, from other Windows computers on the network. Hopefully this question makes sense. I saw the remote SMB share option, but it didn't seem like the right thing.
  15. I have identified a problem. I told it to scan disk 2 (sdab) and it is scanning another disk, which is not 8TB but 10TB, and then it gets stuck at 90% every time: "SAS2308 PCI-Express Fusion-MPT SAS-2: Scanning Disk 2 (sdab) at 8 TB 90%". It appears to continue reading at 133 MB/s, so I assume that isn't the slow disk, if I have one. It seems to have been going long enough that it isn't going to stop.
  16. I am attaching a screenshot of my array. It is my understanding that having drives so full is bad for an array? I thought I could use unBALANCE, but that appears to be better for moving data in the other direction, to free up space on a single drive. Is there a simple app or way to fix my space issue? How big of a deal is this, and what are the consequences for the array as I have it?
  17. I have always had problems finishing the DiskSpeed benchmark to find which drive is going slow. Is there a way to benchmark just a specified number of drives at a time? I can't run another test for a while, since I have a parity check going, which has been getting really slow for an unknown reason, but I will try one in a few days. Right now I am not even starting any Docker containers because I don't want to interfere with the parity check.
  18. It would appear that the CPU spikes when the parity check slows down (at least a few of the times it behaved that way, but not always). I paused and restarted parity checks with different Docker containers closed, VMs off, etc., and it was still slow. I shut off all Docker containers and VMs and started the parity check over, and it started out faster like it normally does. One other potential factor: I have Search Everything set to scan many of the directories at 3 a.m., and those directory scans can be quite long; perhaps they are slowing the parity check down? I have turned off the Search Everything daily directory scan and am hoping that helps.
     Could this be related to one drive that has a problem? If so, how do I figure out which one it is? The DiskSpeed docker hasn't really worked well for me, so I am looking for another way of checking. Other factors: my enclosure isn't all that fast, but this is beyond that; a normal parity check should be 2 or 3 days for 26 drives plus 2 parity. Also, all my drives are almost full (I'm afraid to throw in a new drive to expand, or replace a smaller 6TB with a 14TB, until I know my parity is good). To add to the complications, I am working on a ship, so my answers may be delayed and depend on when I get into good cell range to do things remotely. Diagnostics and a screenshot attached. Thanks for any help or advice. tower-diagnostics-20220731-2128.zip
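When the DiskSpeed docker won't finish, a crude fallback is a sequential read test with `dd` on each array disk while the array is otherwise idle; the slow disk usually stands out by an order of magnitude. A sketch that reads a scratch file so it is self-contained; the device names are examples to substitute on the real server.

```shell
# Demo reads a scratch file; on the server substitute real devices, e.g. /dev/sd{b..z},
# and add iflag=direct so the page cache doesn't inflate the numbers.
dd if=/dev/zero of=/tmp/speed_test bs=1M count=64 2>/dev/null
for dev in /tmp/speed_test; do        # e.g. for dev in /dev/sd?; do
  printf '%s: ' "$dev"
  dd if="$dev" of=/dev/null bs=1M 2>&1 | grep copied
done
```

Reading only the first gigabyte or so per disk (`count=1024`) keeps the whole survey to a few minutes even across two dozen drives.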
  19. I am running Catalina 10.15.4 and have not updated for a long time. I have the option to go to macOS Monterey; it says updating requires 16.75 GB of space. I also have the option for Catalina 10.15.7, but that says "Your disk does not have enough free space"; updating requires 20.81 GB. When I check my storage it says I have 13.14 GB available of 68.38 GB, and that 14.68 GB are used by Messages. I mainly use my VM for iMessage, since it's nice and easy compared to using a phone or setting up another Mac with a monitor and keyboard, so I probably don't need all the new features of a (slower?) Monterey. Looking through the forums, it looks like it might be safer for my install not to upgrade to Monterey.
     1. Will the Catalina upgrade fix security holes?
     2. Can I delete the Messages files (videos etc.) from my VM without them being deleted from my phone and iPad? If so, is there a special way to do this?
     3. Can I increase the size of the vdisk to make enough space to do the upgrade, or would it be easier to delete files?
     4. Will the deleted files even free up space on my VM? I am reading that there is a problem, and when I deleted my 2.57 GB of podcasts it didn't seem to free anything.
     5. How would I go about backing up my disks and settings in case I wanted to go to Monterey?
     6. Any other advice on how to get upgraded? Or should I not worry and just keep using the VM with the old macOS? Thanks in advance for any help.
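On question 3 above: with the VM shut down, a raw .img vdisk can be grown simply by extending the file; a qcow2 image needs `qemu-img resize` instead. A sketch assuming a raw vdisk, using a sparse throwaway file in place of the real image; afterwards the partition still has to be expanded inside macOS (Disk Utility, or `diskutil apfs resizeContainer <device> 0`).

```shell
IMG=/tmp/demo_vdisk.img    # stand-in for the VM's raw vdisk file (real path assumed)
truncate -s 68G "$IMG"     # sparse file matching the ~68 GB disk from the post
truncate -s +30G "$IMG"    # grow a raw image in place; for qcow2: qemu-img resize <img> +30G
ls -lh "$IMG"
```

The sparse file consumes almost no real space until written, which is also why growing the image is safe to try: shrinking, by contrast, destroys data.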
  20. On the Terraria template, I probably just edited the 7777 external port to 7779; I don't remember for sure, but this allowed both ARK-SE and Terraria to run at the same time. It may have been solved, but I didn't see that when I read it. I messaged him and perhaps he will answer. Thanks, I just now figured out how to mention someone properly! @Saiba Samurai @Cyd
  21. Back to my ongoing ARK-SE problem of the server not showing up online or being connectable by anyone else. The last thing I did was remove all UDP ports and then add them again: 7777, 7778, and 27015 only. It is still not visible from outside, while all my other game dockers are working fine: Terraria, Minecraft, Minecraft Bedrock server, and 2 Valheim servers. I forwarded 7777 to 7779 for Terraria so it would not conflict with ARK-SE. As far as I can tell from the posts previous to this, @Saiba Samurai gave up with the same issues I had. I was also considering trying the server cluster setup that is stickied instead, but I didn't quite understand how that setup goes. Either way, thanks everyone for trying to help me.
  22. I can see it at 192.168.1.154:7777 only, and I can connect. Not on external IP:27015, internal IP:7777, internal IP:27015, or any other combination. Fixed; thank you for letting me know it is not necessary. Back to bridged. In addition, I shut down and rebooted the Unraid server with all other Docker containers and my one VM set to not autostart; this did not seem to help. I tried Privileged mode, and that didn't solve it. I also eliminated the RCON port as suggested by Mushroom.