codearoni

Everything posted by codearoni

  1. Thank you! What is missing from the Unraid docs above is the Two Parity use case. It was my understanding that two parity drives mean two drives can fail without a loss of data, is this incorrect? I think I'm getting tripped up on your statement: "every bit of parity". When you say "every bit of parity", are you referring to both parity drives being fully operational, or a single one? Thanks again, I appreciate all your explanations!
  2. Ty @trurl. Sorry if I'm being obtuse, I just want to make sure I'm understanding everything. In summary: I should get Parity #2 replaced ASAP while I don't have data drive issues. Assuming Parity #2 is replaced and both parity drives are OK, I will be fine in the event of a future data drive failure. In its current state, if a data drive falls over right now, I might be in trouble during recovery, as Parity #2 is also borked. Is that right?
  3. To further clarify: my expectation is that Parity #2 could burst into flames, and I could still recover the array.
  4. Thank you @itimpi. You wrote: "which is concerning as any recovery after a data drive failure would require it to be read error free to avoid data loss." Regarding this statement, is it still a concern even though I have two parity drives?
  5. I run an array with 2 parity drives and 4 data drives. They're all the same model. Recently, Parity #2 began having errors. After running a short self-test, I get the result "read error". I am attaching both my diagnostics and the SMART report for the parity drive in question. What I'm wondering is: what should I do in this situation? If it's a read error, can I simply run a parity check with write corrections and move on? Or is this something more drastic that needs to be addressed, such as replacing the parity drive? alexandria-diagnostics-20240213-1915.zip alexandria-smart-20240213-1913.zip
  6. Was a teachable event all around. Ty for getting the fix in so quickly @TQ
  7. This container is seriously flawed. The default path of /opt/minecraft/ is in-memory only and will be wiped after a reboot/shutdown. At the very least, the default path needs to be a user share like appdata. Use at your own peril.
  8. Ty for the fast reply @Squid! I gotta say that is super depressing, but now I know... Here's the container: And yes, out of the box it's configured to use /opt/, which, based on your post, seems insane. I should probably just stick with binhex containers moving forward.
  9. I spun down my Unraid device to install a new GPU. When I turned it back on, everything looked good, but I had one docker container configured to point to /opt/my_folder/ that has all its data missing. I know /opt/ isn't a user share, but I'm curious as to what happened. Does /opt/ get reset after a shutdown? Is there any possibility I can recover data from that location? I've ssh'd into the box and /opt/my_folder/ seems completely gone.
  10. I have dozens of PDFs I need access to (manuals for equipment I own). I've downloaded them all in case the external sites go down, and have a folder with them. While functional, I'd ideally like them to be available on my network without needing a device to mount a share. I'd like to spin up a web server so they're available over HTTP; basically a web server serving static content. Ideally, I could drop in a new PDF and the index page would update automatically, etc. Is there a plugin/app that does this already? I haven't found anything PDF-specific on the community store, but perhaps there's something similar that would work for my use case. Any advice would be appreciated. (A minimal sketch of the kind of server I have in mind is at the end of this list.)
  11. Sounds great! Thanks for such a fast reply. I've been using your Valheim container for months and it's been amazing. Ty for all your hard work!
  12. Hi @ich777! There was a Valheim update today. I was wondering what the right approach would be to get that update in your gameserver docker?
  13. Of course the moment I post this, I find an answer. As I understand it, you cannot achieve this via the Dolphin UI. One must go to System Settings -> Network -> Settings -> Windows Shares, and then enter your desired username/password under "Default Username, Default Password". Seems to work, though only applicable for a single user.
  14. I have a new desktop computer connected on my network. It's running Manjaro Linux (v21, KDE Plasma), and uses Dolphin as its default file manager. On OSX, it's pretty easy to connect to my Unraid shares: I can use the "connect as" feature in Finder to log in with my username/password and get write access to shares. I am struggling to find the equivalent in Linux. I can see my shares in the file manager (Dolphin), but am unable to connect as a user to said shares. As such, my Unraid access is read-only. It seems that by default I connect to Samba as a guest, but how do I connect with my username/password? I've done some basic forum searching, but haven't found a topic like this. If anyone out there is running a KDE-ish Linux distro, I'd love some input!
  15. My appdata directory (user share) is configured to use my cache drive, but its contents are moved to disk during my nightly mover job. This is a pretty common configuration, as I understand it.
  16. Getting back to you. I think I see the same bug: 1. I updated my Docker container and created a new path for /serverdata/serverfiles, to ensure a "fresh" install. 2. However, after my mover job runs at 03:40, it seems to stop the backups from being generated. Here's a pic of my /Backups/ directory today. The /Backups/ directory is not on my cache drive at all, so these 9 files are the only backups I have for my image. This matches what I saw in my previous image, and they strangely seem to line up with my mover job.
  17. Thanks! Updated, and I will get back to you on how backups work over the coming day(s). I just logged in though, and my save was wiped on update (i.e. my world is gone). Is this a known issue?
  18. I have a question about backups for my Valheim image. Currently I'm using the defaults (interval = 62, backups to keep = 24). For context, my container has been up for approx 3 days. I would expect 24 tar.gz files to be in my /Backups/ directory, however I only see 9. Additionally, the modified dates on those files range from 2-20 (19:19) to 2-21 (03:35). I have my mover scheduled for 03:40 daily, so I would expect the latest tar.gz file to have a modified date of approx 3:00-ish this morning (2-23). Am I missing something / misunderstanding how the backup feature works? (A quick expected-count calculation is sketched at the end of this list.)
  19. If you haven't already, I'd install the Community Applications plugin. On there, search for the jlesage makemkv plugin. It seems to be the most popular MakeMKV solution right now. Link: https://old.reddit.com/r/unRAID/comments/dek25r/best_makemkv_docker_container/f30i8kf/ I myself am looking for a similar solution at the moment. Hopefully this will give you a direction to go in. Cheers
  20. Couldn't you write a launchd script that runs every night at 2am, etc.? One that simply rsyncs your photo folder to a share on your Unraid NAS? (A rough sketch is at the end of this list.)
  21. Thanks to everyone for the help on this issue! I've updated my OP with my triage steps. Hopefully it'll help WD Red users in the future. I've got my array back online. The rebuild process was incredibly easy. The hardest part of this whole thing was waiting on the RMA drive. It's only strengthened the idea that Unraid was the right choice for my NAS.
  22. Everything has been updating swimmingly, just taking a while given the drives I got (14 hours each). I had a question about extended SMART tests though: can I run them while the array is up and running? Will things like mover jobs be interrupted by extended SMART tests if I run them at night?
  23. From the individual disk's page: go to the section labeled "Self-Test" and click the "Download SMART report" button. If you've done a short self-test, then yes, it's the same button.
  24. Also, traditionally it helps to attach general diagnostics in addition to the SMART results. Tools -> Diagnostics -> Download.
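
Sketch for post 10: a minimal static file server of the sort described there, written in Python purely as an illustration of the concept rather than as a specific Unraid app. The folder path /mnt/user/manuals and port 8080 are made-up examples. Python's built-in handler rebuilds the directory listing on every request, so a newly dropped PDF shows up in the index automatically.

# Minimal static file server for a folder of PDFs.
# /mnt/user/manuals and port 8080 are hypothetical -- substitute your own.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="/mnt/user/manuals")
HTTPServer(("0.0.0.0", 8080), handler).serve_forever()

Browsing to http://<server-ip>:8080/ then shows an auto-generated index of whatever is currently in the folder.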
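
Back-of-the-envelope check for post 18, assuming the backup interval of 62 is in minutes (the units are not stated in the post):

# Expected backup count, assuming the 62 is a per-minute interval.
uptime_minutes = 3 * 24 * 60   # container has been up for roughly 3 days
interval = 62                  # default backup interval from the post
keep = 24                      # default "backups to keep" from the post

created = uptime_minutes // interval   # about 69 backups written so far
on_disk = min(created, keep)           # retention should cap this at 24
print(created, on_disk)                # prints: 69 24

Under that assumption the /Backups/ directory should already hold the full 24 files, which is why finding only 9 looked like a bug.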
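
Rough sketch of the nightly sync suggested in post 20. Both paths are invented for illustration, and the 2am scheduling (launchd on macOS, or cron) is assumed to invoke the script and is not shown here.

# Nightly photo sync sketch -- both paths below are hypothetical examples.
import subprocess

SRC = "/Users/me/Pictures/"   # local photo folder (example)
DST = "/Volumes/photos/"      # Unraid share mounted on the Mac (example)

# -a preserves timestamps/permissions, -v lists what was copied.
subprocess.run(["rsync", "-av", SRC, DST], check=True)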