sumidor063

Everything posted by sumidor063

  1. Since my most recent edit (14 minutes ago at the time of this reply), I decided... y'know what, if everything is gone and I have to start from scratch anyway, rebooting can't make things any worse than they already are. I have since done so, and everything appears to be back. If anyone wants to dig through it and put the pieces of the puzzle together to satisfy their curiosity (and mine, admittedly), I'd happily listen to ideas. The diagnostics attached to the original post are from before the restart, and the actions I took are noted step by step in the original post. Today's lesson: don't do things differently just to fiddle when you know another method works just fine.
  2. Hey all. So... not sure how I did it, but I managed to nuke my media directory, and now everything is broken.

     In the Unraid GUI (I'm now realizing that this was probably my mistake, as I've done the following via Krusader numerous times before without any issue), I went into my cache pool, then appdata/plex/Library/Application Support/Plex Media Server/Cache/PhotoTranscoder, with the intent of deleting everything inside of it to clean things up. I clicked the select-all button, then delete, then yes, and it appears to have deleted far more than just the photo cache contents. Now my media share (where all of my media is kept, mainly for Plex) is showing up as empty. I don't know what I did wrong, nor how to fix it, and my attempts to search for a similar issue/resolution have hit dead ends. Any help would be appreciated. Running Unraid 6.10.3.

     Edit1: None of the files within media appear to have actually been deleted, as there is no disk activity and my usage levels haven't changed (a disk-level check along these lines is sketched after this list). I also just went and turned off Docker, and noticed the following. I'm now afraid to try to turn it back on. Both the .img within that path and the appdata directory still exist and appear completely intact.

     Edit2: This might be helpful, but I don't know what it means. t1-diagnostics-20221001-0319.zip
  3. A month later, I figured it out while on an unrelated mission: at some point I had created a script, set to run daily, that restarts the container (roughly along the lines of the sketch after this list). I have no recollection of doing so, nor why. This thread can be closed. My apologies for the lapse in memory.
  4. Hello everybody, I'm having an issue with my Valheim game server container, and I don't believe it to be an issue with the contents of the container itself, so I'm hoping this is the correct subforum. I have attempted to Google and search through these forums for a related issue from someone else, with no luck (perhaps I'm blind).

     I currently have this particular container set to not autostart within Unraid. The game server isn't actively used much at the moment, so I prefer to keep the container turned off and turn it on whenever the urge to play presents itself again. However, the container appears to turn itself on at will. I had to manually turn it off about 2-3 hours ago. I don't watch my server closely enough to notice a pattern (i.e., whether it restarts at a specific interval). The container is configured to back up every 62 minutes (the default); as it has now been down for 2-3 hours, I don't think it's firing itself up just to run a backup. I do not have this issue with any of my other containers that I have turned off manually and set autostart to off for.

     I'm not experienced enough with Docker/Unraid to know where to start looking for an issue, or for something that I don't have set correctly (one starting point, a check of the container's restart policy, is sketched after this list). If anybody knows why this is occurring, or can point me in the direction of what to check, I would appreciate the assist. I'm happy to supply any logs or such; I just need to know what you'd be looking for. I have attached screenshots of my container configuration to this post. Thanks for the time, -Sumi

     Edit1: I've grabbed my Unraid diagnostics and attached them here. I dug around inside the zip and honestly don't know what I'm looking for; I checked everything I figured to be relevant. unlazarusraid-diagnostics-20211020-1822.zip
  5. @trurl I have done some more playing around and found that a large part of my problem was related to my Plex library and the PhotoTranscoder cache. Currently it's eating up about 30GB of the space I was expecting to have access to. I will get something set up to automatically nuke any portion of that cache older than a few days (something like the cleanup sketch after this list). Prior to today, I wasn't aware that that cache existed or that it can easily take up massive amounts of space. Thank you again for your assistance! -Chris
  6. trurl, Thank you very much for the help you've provided me. I'm going to go forward with your recommendations and keep your suggestions in mind. Thank you!!! -Chris [Edit 1]: I now realize that if a share is to be removed, all of its contents have to be deleted first. I just cleared it out through Krusader, and that gave me the option to delete the share. Apologies for my misunderstanding!
  7. I'm going to be perfectly honest with you: I don't recall setting it to 40GB (but I obviously did, as I assume that's not the default setting), and to my knowledge I have no active reason for it to be that large. If I bump it back down to 20GB, will it cause any issues now that it's been set to 40GB?

     I do not have any VMs on my rig, no, and at this time I'm not looking to add any. In the future, sure, but not right now. Before continuing on with the rest of this reply, I disabled VM Manager and ran the mover, and there was minimal, if any, change in the amount used on the cache.

     The P-E share is currently unused. This is the share I set up for transcoding purposes, on its own cache pool. Not sure if that was the best way to go about it, but it worked at the time. I intend on removing it; there shouldn't be anything in there of any value to me, and nuking it shouldn't be an issue. I just haven't stopped the array to do so yet (based on the options available on the main Shares screen for the P-E share while the array is running, it has to be stopped first, yes? Apologies, I'm still learning).

     The P-A share is the one I actively use after reconfiguring some things regarding my media storage. This share is where all of my media resides. Forgive me for my lack of knowledge, but would this mirror you refer to be a "duplicate", I suppose, of the data that was on the second drive, left over from having removed that drive? Is this mirror something that can be removed, and do I even want to remove it?

     With this, your recommendation is that I cease use of the automove script? I don't object to this at all. There are times when I am writing to the cache drive at considerable rates, so the cache will fill up pretty easily. I added the script in an attempt to continue to use the cache drive as much as possible, but perhaps that's a pointless effort? Thank you very much for your insight above, and for any other information you're willing to provide. -Chris
  8. Good evening everybody, I'm a relatively new user of Unraid (less than two months), and I'm still learning the ins and outs of the system. So far, I'm loving it; a definite upgrade for me from running some Docker containers haphazardly in Ubuntu. I'm having an interesting issue (to me) that I've been unable to resolve. I've searched through the forums here as best I can, and I've also Googled a bit, and I can't find a solution. I may have missed it posted here somewhere, so my apologies if I've overlooked it.

     Last night, I removed one of my two cache drives (before removal, there were 2x250GB SSDs in the pool). I found a walkthrough (found here) and followed it step by step. Currently, one of those two 250GB drives remains. This remaining drive is the first drive I ever added to my machine, and I ran with just that drive for a while before I put a second one in. I added the second because I wanted to get more cache space, not understanding that's not how it works, and just decided to leave well enough alone. This second drive, in addition to another caching drive that I used [edit, 922pm 8/6: in a separate cache pool for transcoding], was removed to allow for two more drives for the array (I need a new PSU; I'm missing SATA power cables).

     The issue I'm having currently is that my remaining cache drive is showing a usage of about 70-75GB after the mover has run and nothing else is being put onto the cache. Prior to the removal of my second cache drive, it sat somewhere between 20-25GB of usage (based on my recollection). I have a script to invoke the mover once the cache hits 75% or more full, checking once an hour (roughly the idea sketched after this list). Prior to yesterday/today, this script worked as advertised without issue. Manually invoking the mover does not lower it past this 70-75GB point.

     My cache pool is using btrfs, and after removing the drive and restarting the array, it began "doing its thing". However, the automove script ran once during that process; I completely forgot to turn it off prior to the drive removal. I don't know if it caused an issue during the removal process, but I didn't know how to stop the mover once it was running and felt it best to just leave everything alone to finish so I hopefully didn't screw anything up [further]. The removal process took around 2 hours to finish; I don't know if this is normal, or if the mover running delayed its completion.

     This may be my inexperience with Unraid that's causing me to not be able to figure it out, but I don't know how to determine where all this extra usage is coming from. I looked in unbalance to see if there are any folders that look incorrect, and nothing there sticks out. I tried looking through things in Krusader, and nothing is sticking out to me there either (but I might not know what to look for). I'm not knowledgeable enough to know what commands to run in the CLI, so maybe there's something there I can check? I've attached the diagnostics report to this post; how to dig into it to decipher anything is beyond me. I'm not sure if this will show up, but you may find an unused 10TB disk in the report. This is intended to be my parity drive in the immediate future; I just got the two 10TB drives hooked to the machine yesterday.

     To my knowledge, I have included all relevant details that I can think of at the moment. Any assistance that any of you can provide would be infinitely appreciated. If I've missed including anything that would shed light on my issue, please let me know and I will promptly provide it if I can. Thank you in advance for your time. -Chris/Sumi unlazarusraid-diagnostics-20210806-2052.zip
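
For the situation in post 2, where the media share suddenly shows up as empty, a quick way to confirm whether the files still exist is to compare the merged user-share view with the per-disk paths that Unraid also exposes. The share name below is the "media" share from that post, and /mnt/user and /mnt/disk* are the usual Unraid mount points; treat it as a read-only sketch, not a fix.

```python
import glob
import os

SHARE = "media"  # the share from post 2; substitute your own share name

# The merged user-share view (what Plex and the GUI normally see).
user_path = os.path.join("/mnt/user", SHARE)
count = len(os.listdir(user_path)) if os.path.isdir(user_path) else "missing"
print(f"{user_path}: {count} entries")

# The same share as it physically exists on each individual array disk.
for disk_path in sorted(glob.glob(f"/mnt/disk*/{SHARE}")):
    print(f"{disk_path}: {len(os.listdir(disk_path))} entries")
```

If the per-disk paths still list the expected folders while /mnt/user/media comes up empty or missing, the data itself is intact and the problem lies with the share view rather than the files.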
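
For the mystery in post 3, a daily restart script of the kind described there can be as small as a single `docker restart` call; run on a schedule (for example via the User Scripts plugin), it will start the container even if it was stopped by hand. The container name below is a placeholder, and this is only a guess at what such a script might have looked like.

```python
import subprocess

CONTAINER = "valheim-server"  # placeholder; use the name shown on the Docker tab

# "docker restart" stops the container if it is running and then starts it,
# so a scheduled run will also bring up a container that was stopped manually.
subprocess.run(["docker", "restart", CONTAINER], check=True)
```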
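
For the self-starting container in post 4, one thing worth ruling out (alongside a scheduled script like the one above) is the container's Docker restart policy. Unraid templates normally leave it at "no", but an extra parameter such as --restart=always will bring a manually stopped container back whenever the Docker service restarts. A sketch of the check, with the container name again a placeholder:

```python
import subprocess

CONTAINER = "valheim-server"  # placeholder; use the name shown on the Docker tab

# Query the restart policy the container was created with. "always" means
# the container is started again every time the Docker service restarts,
# even if it was stopped by hand beforehand.
result = subprocess.run(
    ["docker", "inspect", "--format", "{{.HostConfig.RestartPolicy.Name}}", CONTAINER],
    capture_output=True, text=True, check=True,
)
print(f"Restart policy for {CONTAINER}: {result.stdout.strip() or 'no'}")
```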
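
For the PhotoTranscoder cleanup mentioned in post 5, a minimal daily script only needs to delete files older than a cutoff. The appdata path below follows the one in post 2 and the three-day cutoff is arbitrary; both are assumptions to adjust before use.

```python
import os
import time

# Assumed appdata location; adjust to match your own Plex container.
CACHE_DIR = ("/mnt/user/appdata/plex/Library/Application Support/"
             "Plex Media Server/Cache/PhotoTranscoder")
MAX_AGE_DAYS = 3

cutoff = time.time() - MAX_AGE_DAYS * 24 * 60 * 60

# Remove any cached file that has not been modified since the cutoff.
for root, _dirs, files in os.walk(CACHE_DIR):
    for name in files:
        path = os.path.join(root, name)
        if os.path.getmtime(path) < cutoff:
            os.remove(path)
```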
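
Finally, the hourly 75%-threshold mover script described in post 8 boils down to comparing cache usage against a limit and invoking the mover when the limit is exceeded. A minimal sketch, assuming the pool is mounted at /mnt/cache and the stock mover lives at /usr/local/sbin/mover (both assumptions to verify on your own system):

```python
import shutil
import subprocess

CACHE_MOUNT = "/mnt/cache"          # assumed cache pool mount point
MOVER = "/usr/local/sbin/mover"     # assumed path of the stock Unraid mover
THRESHOLD = 0.75                    # run the mover once the pool is 75% full

usage = shutil.disk_usage(CACHE_MOUNT)
if usage.used / usage.total >= THRESHOLD:
    # Scheduled hourly (e.g. via the User Scripts plugin), this reproduces
    # the "move once the cache hits 75%" behaviour from post 8.
    subprocess.run([MOVER], check=True)
```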