Ouze

Members
  • Posts: 116
Everything posted by Ouze

  1. Thanks again. Everything is good now. I reseated my cables, which reconnected the drive. I see a CRC error count of 6; if it climbs, I will replace the cables and maybe plug into a different port. I ran the scrub and repaired the corrupted blocks. I stopped Docker, deleted my image, and restarted Docker, then reinstalled my apps from the "previously installed" section. Very easy. I really appreciate your help.
  2. OK - I am going to go do that now. When I do the scrub, should I check the box for "repair corrupted blocks" or not? Thank you again for holding my hand through this; I really appreciate it. I have replaced disks in the array many times with no problem, but this is the first time I have had to deal with issues in the cache pool, so I feel especially lost.
  3. I don't have anything important in the domains or system shares, and I don't have any VMs. I'm not sure what a NOCOW share is, though? Thank you
  4. Hello all, I got a weird email, so I logged into one of my Unraid boxes and found that sdb, one of the cache drives, might have died. I honestly have no idea what my next steps would be. I am attaching what I have for logs here. Right now the server is up and seems fine. I am hesitant to take any steps without checking in because I don't want to make it worse - I'm not sure if the disk is just unmounted, or if it's dead, or... what. Thank you! min-syslog-20230509-0436.zip min-smart-20230508-2333.zip min-diagnostics-20230508-2345.zip min-smart-20230508-2349.zip
  5. Ok, mostly good news, some minor bad news: Before I rebooted, I tried copying the syslog to disk1. It worked, but it doesn't seem to be useful: whatever broke did so before the point where the log goes sideways, but I'm attaching what I have anyway. The other minor issue: I rebooted dirty. From googling I thought "poweroff" was safe, but that clearly is not correct, as it's now doing a parity check ("powerdown" didn't seem to do anything). Is there a command I should be using in the future? Thank you again for the swift assistance, I really appreciate it! syslog.zip
  6. Hello all, At 0800 I got the following email from myself: When I went to check my server, every menu is blank. There is a warning about my license key missing. All shares are accessible and up. Plex via a Docker is working OK. I can get in via SSH. I have re-seated the USB key but have taken no other actions yet - I have not rebooted - for fear of making it worse. What should I do next? I'm not sure how to get diagnostics:

     root@Min:~# diagnostics
     Starting diagnostics collection...
     mkdir: cannot create directory ‘/boot/logs’: Input/output error
     done.
     ZIP file '/boot/logs/min-diagnostics-20211019-1130.zip' created.

     I don't have flash exported, so I have no way - that I know of? - to get to that path. Thank you!
  7. Kind of a general admin question here. I have a PowerEdge C2100 with 16GB of RAM, running the current stable of Unraid. In Docker, I have two main containers at the moment: Plex and Valheim. I have some other stuff that is either low-impact or just plain turned off when not actively being used. The Plex instance has maybe 4 people tops at a time, if that; Valheim has maybe 3. What kinds of things should I be looking at to determine how good I am on RAM? When I look at the dashboard, all the usage meters seem reasonable, not pegged. Should I be going any more in-depth than that? Thanks!
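For what it's worth, a quick way to put a number on the dashboard impression is the stock Linux tooling from SSH. A minimal sketch - the 20% threshold is just my own rule of thumb, not anything Unraid-official:

```shell
#!/bin/sh
# Rough RAM headroom check using standard Linux tools.
# "available" is the kernel's estimate of memory free for new processes
# (it already accounts for reclaimable page cache).
avail_pct=$(free | awk '/^Mem:/ { printf "%d", $7 * 100 / $2 }')
echo "available: ${avail_pct}%"

# Threshold is an arbitrary rule of thumb for this sketch.
if [ "$avail_pct" -lt 20 ]; then
    echo "RAM is getting tight"
else
    echo "RAM headroom looks OK"
fi
```

Running this while Plex is transcoding and Valheim has players connected gives a better picture than an idle reading; `docker stats --no-stream` also breaks usage down per container.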
  8. Hello all, My cache drive is no longer big enough for what I store on it. I want to replace my single 500GB SSD with a 1TB SSD. I did a search and found this guide: https://wiki.unraid.net/Replace_A_Cache_Drive which seems straightforward, but it's also pretty old - I believe it was last updated in 2017, before cache pools were a thing. Is that tutorial still the best way to do it, or is it better to do something with cache pool shenanigans (add the new drive to the pool, then remove the original drive), or some other newer way I haven't found in my search? I am terrified of doing this wrong and screwing up my Plex install. Thank you!
  9. Hi guys, I used to try to keep my directories under a certain number of files, because Windows would bog down when navigating them. That's not really a problem any more: since the only thing looking in the directory is Plex, I almost never navigate to those directories myself. So I was wondering: is there a point where too many files in a directory will impact performance? I have about 3,000 movies in a single directory, and it SEEMS OK, but I'm wondering if I should start breaking it up at some point in the future. I googled it, and it seems like the answer is "no" for XFS, but those searches aren't Unraid-specific. Thanks!
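As I understand it, XFS indexes large directories with B+trees, so lookups stay cheap as entry counts grow; it's usually tools that sort or stat everything (`ls`, file-manager views) that feel slow, not a media scanner. A small sketch for checking how big a directory actually is - the example path is a placeholder:

```shell
#!/bin/sh
# Count the entries directly inside one directory (not recursive).
count_entries() {
    find "$1" -mindepth 1 -maxdepth 1 | wc -l
}

# /tmp is just a stand-in; on Unraid you would point this at something
# like /mnt/user/Movies.
count_entries /tmp
```

A few thousand entries is well within what XFS handles comfortably, so splitting the directory would be for human navigation, not filesystem performance.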
  10. Ugh really sorry I didn't see this until now. I'm glad someone else showed up with that. The file he linked is what I used.
  11. Happened again - did a check, and here's the output: "bunker: error: SHA256 hash key mismatch, rnal.720p.bluray.x264-reward.mkv is corrupted" According to a search, that narrows it down to 125 files. There has got to be a way to get this plugin to output more useful (specific) information, right? Why are these error messages being truncated this way? I ran an md5 check against all 125 files' individual checksums, and of course they pass just fine. These files were dropped onto the server once and have not been touched since. If this plugin is going to generate output as vague as this, and as prone to false positives as it seems to be, then ultimately it's just contributing noise, not value.
  12. I am doing a mass move from my old Unraid server over to my new one. I expected some file transfer damage, so it was no surprise when I did a manual check and saw errors. What was a surprise was how... incomplete those errors were. A few pages back I saw someone else had a similar problem, and the instructions were to go into the hash log for the disk, find candidates, then run b2sum (path) against each candidate. For some that is easy, but for the first and second ones there are a lot of candidates. Is there a less tedious way of configuring this output, so I don't need to manually check 39 false positives to find the one I want? Also, that last one only has a single candidate, so I checked it. Those files have individual md5 checksums, one for each file, and that directory passes an md5 check on both the source and the allegedly corrupt destination. I don't understand how that can be. I don't mean that I don't believe it; I just don't quite understand it.
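One way to take the tedium out of the candidate hunt, assuming you have the disk's hashes in b2sum's "hash  path" text format (the function name, fragment, and hash-list filename below are made up for the sketch):

```shell
#!/bin/sh
# Verify every file whose name matches a truncated fragment in one pass,
# instead of running b2sum by hand on each candidate.
check_candidates() {
    frag=$1        # truncated tail reported by the plugin
    hash_file=$2   # list of "hash  path" lines in b2sum format
    # Pull only the matching lines out of the hash list and re-verify them;
    # b2sum -c prints "OK" or "FAILED" per file.
    grep -- "$frag" "$hash_file" | b2sum -c -
}
```

With 39 candidates, this prints a pass/fail line per file, so the one genuinely bad file (if any) stands out immediately.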
  13. I am 90% sure I understand what is going on here from previous threads I found while searching, but I would like to confirm. I'm getting "shfs: share cache full" verbiage in my syslog. All of the shares have tons of free space - each disk has at least 3TB free. I have "Minimum free space" set to 100GB for each share. My cache drive is at 420 out of 500GB, since I did a bunch of big data moves today. I take this to mean that "Minimum free space" doesn't just apply to the shares but also to the cache drive - is that right? Once I have only 100GB of free space left on the cache drive, it will start writing directly to the array until the mover clears off some space?
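Written down, my reading of the allocation rule boils down to this - a sketch of the logic as I understand it, not the actual Unraid source:

```shell
#!/bin/sh
# Where does a new write land? Sketch of the minimum-free-space rule:
# once cache free space drops below the configured minimum, writes
# bypass the cache and go straight to the array.
write_target() {
    cache_free_gb=$1
    min_free_gb=$2
    if [ "$cache_free_gb" -lt "$min_free_gb" ]; then
        echo array
    else
        echo cache
    fi
}

write_target 80 100   # 420 of 500GB used -> only 80GB free -> array
write_target 200 100  # plenty of headroom -> cache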
  14. Man. I cannot believe how easy that was. Thank you so much!
  15. Hello all, I'd like to harden my servers against ransomware coming in via my SMB shares. I used to use a plugin for this, but it's deprecated and no longer updated. I read a bunch of forum threads, and some people suggested a scheme that sets directories/shares to chattr +i on a set schedule, so new files can be added and then locked down. I did a few files manually via SSH and liked how well it worked. The problem is I don't know how to automate it; I assume they are doing a cron job or something like that. Ideally I'd like to set it per-directory, so incoming stuff has it applied after it gets moved and after Dynamix File Integrity and Plex do their hashing and metadata work, when applicable. How can I schedule something like that? I know very little about basic Linux stuff and just about nothing about cron, other than that you can schedule stuff with it. Thanks!
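For the next person searching: a sketch of the cron approach described above. The schedule, script path, and share path are all placeholders, chattr +i needs root, and you should trial it on a scratch directory first. (On Unraid, the User Scripts plugin can also run a script on a schedule without editing cron by hand.)

```shell
# Sketch of a root crontab entry (crontab -e) -- times and paths are placeholders:
#
#   30 3 * * * /boot/config/lock-media.sh
#
# This runs nightly at 03:30, after mover and the hashing jobs have had
# time to finish. lock-media.sh would contain something like:
#
#   #!/bin/sh
#   # Make files older than one day immutable; directories stay writable,
#   # so new files can still land, get hashed, and be locked on a later run.
#   find /mnt/user/media -type f -mtime +1 -exec chattr +i {} +
```

The `-mtime +1` filter is what gives new files a grace window for mover, Dynamix File Integrity, and Plex to touch them before they are locked.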
  16. OK, I figured it out. Turns out there was nothing wrong with my process, I just wasn't waiting long enough. After the bin file is uploaded, it just sits there for about 8 minutes, but it did indeed then finally update and I have my issue fixed. 3500RPM fans now, much better!
  17. Hello all, I bought a Dell PowerEdge C2100 for my second Unraid build. I've built many PCs, but this is my first actual commercial server - the other Unraid build was a Supermicro board in a Norco 4224.

      This server is 2U and it is very, very loud. The fans appear to run at 100% all the time, which is around 5k RPM, and they are not in a particularly warm room. With the Norco, I removed the 80mm screamers and replaced them with a 120mm fan plate and some nice quiet Noctuas, but that doesn't seem like a viable option here - according to Google, changing to different fans would cause lots of fan alerts. What Google did say was that BMC firmware 1.7.0 and below didn't have this problem; the fans idled at around 3k RPM. I'd like to downgrade, as I am at 1.82.

      Here is where I am running into trouble: I don't know what I am doing. I enabled the dedicated NIC for the BMC and plugged in a cable, set an IP in the BIOS, then logged into the BMC and changed to a secure password. I am not clear exactly what to do now. The "firmware update" section of the BMC seems very straightforward, with an option to upload a new firmware. I tried getting the appropriate firmware from here: https://www.dell.com/support/home/ae/en/aebsdt1/drivers/driversdetails?driverid=hh9ww but the files I see there don't seem to actually contain the correct bin I need to upload, and the included instructions are... very not clear to me. What is the easiest way to do this? Thanks!
  18. I get that, thanks. I still want that long-running preclear to re-run, because I'm not 100% sure the disk is good; it had a reallocated sector. Maybe it was just the one, and it's totally fine, but in my limited experience one reallocated sector tends to turn into more, and I want to stress it a bit first. Also, those two disks will go into new data slots. Mostly I was surprised by how out of date I had let my Unraid become.
  19. Thank you for the swift response, and oh boy, I didn't realize I was so behind on updating the Unraid OS
  20. Hello all, I upgraded a bunch of drives around my house, so I took all the known-good ones and added them to my server: a 4TB, a 6TB, and an 8TB drive. I noticed you can preclear multiple drives simultaneously, so I did exactly that, and as expected the 4TB is clear, the 6TB is almost clear, and the 8TB (which I only sort of trust) has a way to go in the first of two cycles. My guess is that I don't have to wait for the 8TB to finish clearing before I monkey with the array, right? Specifically, I have a super old 1TB drive that I want to drop from the array, then slot in the 4TB or 6TB and rebuild. I bet I can do that while the other preclear continues running - but is it wise? Should I wait for all that 8TB preclear action to finish before I start doing other stuff, or does it not matter? I want to find the right balance between saving time and being safe. Thanks!