Crlaozwyn

Members
  • Posts: 44
  • Gender: Undisclosed

Crlaozwyn's Achievements

Rookie (2/14) • 2 Reputation

  1. Thanks. Just activated the trial yesterday, so I won't be able to confirm for a month. That would be great though and much easier than the command prompt I've been using for years.
  2. I have a pro license server with all SATA slots full. I like to preclear drives to reduce the chance of installing a potentially defective one. Since the SATA ports are full, the UD preclear plugin doesn't work for my use case, so I use a bench computer to preclear drives before swapping them in (the preclear sketch after this list is roughly the command-line workflow I mean). I realized today that the bench runs unraid 3.9. Yeah. Way back in the day, the trial license allowed only a couple of drives but didn't expire; it seems licensing has since been updated to work with any number of drives, but only for 30 days. All I do on this computer is preclear drives, so buying another license seems excessive. Any ideas? I have no problem continuing to use my very old version as I've been doing for years, but wouldn't mind some modern conveniences if available. TIA.
  3. Restart fixed it. Parity looks good. I can finally relax. At least at the moment, I'm not aware of any outstanding issues with the server. Thanks again for all the help!
  4. Parity will be rebuilding for the next 20 hours or so. Is it OK to let that finish before rebooting, or will it cause issues to have parity without that folder? I think it should be OK because it's system generated, but I'm obviously in over my head. Thanks for sticking with me through this!
  5. OK, I'm mostly there and very thankful for all the help that's brought me here. Parity is rebuilding and it looks like my data is intact. One thing is very strange and I'm sure it's a setting I missed: my /mnt/user directory has been moved (renamed?) to /mnt/user0. All my user shares were present at first, but they disappeared after the server had been running for about 20 minutes. From a brief search, it looks like user0 is similar to user but excludes the cache drive. I've never had a cache drive enabled. I can deal with the "user0" path if I have to (though I'd prefer the cleaner "user"), but is there any way to restore the missing user shares? I assume they disappeared because the user path no longer exists.
  6. Hey tazire, if you'd like an answer to your question, your best bet is to make a thread for your issue. While you have the same error, it's really a unique situation with different symptoms, concerns, and probably solutions.
     Marking your response as the answer, though I do have more info to share in case others find this thread in the future. The reason no one talks about copying drives over SATA is that it's not feasible. The unraid file system can't be read in Linux. While UFS Explorer can read it fine, it requires a traditional file path to SAVE, which means that even though it COULD read from one drive and write to another, that option isn't available in the current release (I believe it's 9.1.3). So, guess what I'm doing? Yeah, a network transfer. Since my parity was shot from my attempts to get a file system readable in Linux anyway, I'm going the "New Config" route. For now, only the target 14TB drive is in the unprotected array, which means I get full network write speeds and the other drives have less opportunity to eat dirt. Once everything is copied over, I'll add the other drives to the array and build parity. I'll report back when it's complete, which will probably be in a few days.
     Oh, forgot to add - I did try swapping the file system; it was already set to XFS. Tried ReiserFS too, since that's how UFS Explorer identified the drive. In both cases, I got no love.
  7. Sorry for not asking my question clearly. I know parity will automatically be updated when I make changes to the array; what I'm trying to figure out is whether I need to add data to the reformatted drive while it's in the array, or if I can format it in the array, shut down the array and remove the drive, add the data back outside the array where the speed will be much higher, and then reintroduce the drive to unraid. I don't have a cache drive (I'm limited to 6 drives on my board), so write speeds plummet after a few GB of transfers (from >100 MB/sec to ~20 MB/sec). Extending that to transferring 10TB onto this 14TB drive over the network with the drive in the array, that's going to take about six days. If I do it over SATA on my workbench computer, it's going to take about a day. Building parity will probably take three, but I'd have access to my content during that time.
     It seems that UFS Explorer's involvement was unnecessary. It found a bunch of files that had been moved to other disks, but the content I need was readily available. The disk had been formatted as XFS but was showing as ReiserFS (see the blkid sketch after this list for checking what's actually on the partition). No clue how that happened, but that's probably what confused unraid. If it were possible to change the file system without wiping the contents, I could probably just put the drive back in the array as-is, but I'm not aware of a way to do that.
     Before checking the drive with UFS Explorer, I cloned the entire drive using HDDSuperClone, so when unraid's formatting wipes out the drive contents, they'll still exist on the cloned drive. What I was asking above is whether I can put that freshly formatted drive back on my Linux workbench and copy the contents of the cloned drive directly to the reformatted drive (not clone it, as I know that would wipe out unraid's flags and the file system), or if that transfer has to happen while the reformatted drive is in the unraid array.
  8. Follow-up question about procedure, as UFS Explorer is completing its scan and I'm preparing to move data back to the drive. I'm obviously going to have to rebuild parity since it includes the corruption. After reformatting the drive in unraid to prepare it for the array, does it make more sense to:
     1) Add it as a blank drive and use some rsync-based solution like DFM to copy everything over the network (see the rsync sketch after this list), or
     2) Plug the formatted drive into a separate system, boot up UFS Explorer again, and copy the files over SATA?
     Everything I'm seeing online seems to assume #1, but wouldn't that significantly slow down the transfer as parity tries to keep up? I guess I'm trying to see if there's a reason not to go with #2, because it seems I'd have a protected array with all of my data much faster that way.
  9. That's absolutely terrifying. Is there any way to diagnose this or detect it early? I have cloud backup for family photos and the like, but even if I don't lose data in this situation, I'm estimating it's going to be at least another week before my unraid server is up and running again (~3 days to clone the drive based on preclear times, and probably similar to copy the data, which I'll have to do twice). I just want to make sure I've done what I reasonably can on my end. I've been using unraid for around 15 years, so I was probably overdue for some kind of catastrophic failure, but I'd prefer to have another 15 before it happens again. The rig has been stable for three years without hardware changes other than drive upgrades.
  10. Drive is ordered for the data recovery. HDDSuperClone is burned to USB and ready. I'm thinking I probably haven't asked the most important question, and I haven't been able to find an answer in my searches so far: how does this happen, and is there anything I can do to prevent it?
  11. Much appreciated. This stuff is stressful! I want to take the rest of the day off to go home and try this now, but apparently I'm supposed to be an adult. I'll report back what I find, but am thankful for the guidance.
      Edit: So, turns out I can WFH today after all. Sweet. Followed the steps you outlined. The array did start, but unsurprisingly disk1 was missing and wasn't emulated by parity. I've started in maintenance mode and it's looking for the disk 1 superblock again. Not going to lie, I want to cry.
      Edit 2: Over an hour has passed since it started trying to find a secondary superblock with the drive unassigned, which I assume means I'm SOL. I believe using UFS Explorer is going to be as "simple" as removing the 14TB drive from the unraid server, plugging it into my desktop along with another 14TB drive, and getting what I can off of it. But what happens after that? Do I put the drive back into the unraid server and format it before copying everything back over, or is there another way? When I'm faced with data loss, my brain shuts down and doesn't process things correctly, so my apologies for needing so much help.
      Edit 3: Looks like it'd be the process at https://docs.unraid.net/legacy/FAQ/check-disk-filesystems/#redoing-a-drive-formatted-with-xfs unless that's out of date.
  12. It finished with "exiting now," which seems to be the computer equivalent of "I quit." So, when I get home I'll be searching for how to emulate the disk to see if I've lost everything? Any idea why a three-month-old drive that passed preclear would do this on what appeared to be a routine clean shutdown and restart?
  13. Thanks. It's been about 20 hours now, so I assume I'm SOL. Is it time to restore from parity? If so, I assume the process would be: 1) remove the drive from the array, 2) format the drive in unraid, 3) return the drive to the array, 4) wait for days as parity rebuilds.
      Edit: Sorry, you clearly said "Let it finish," so I'll do that and hope. I'll respond here with the results but, assuming the news isn't good, is the process above correct?
  14. I've had an unused cable in my case close to a fan for a while, and when something shifts, it makes an awful noise. I decided to fix my cable management, so I did a clean shutdown through the UI, moved cables around, and put everything back together - no SATA or power cables were swapped. When I powered back on, I noticed that my newest drive, a 14 TB WD, said "Unmountable: Wrong or no file system." Cue panic mode. Fortunately, I don't think I did anything stupid this time, having learned my lesson before (I hope!). I powered down, made sure cables were solid, and tried again. Same error. I knew it wouldn't help, but some of the SATA cables are old, so I replaced this one with a newer locking cable. Unsurprisingly, it made no difference. If someone knows where to look through the attached logs and is willing to help, I'd greatly appreciate it. I haven't done a parity check, so I think that if worst comes to worst I can remove the drive from the array, format it, and have it rebuild parity. I'm on unraid 6.11.5. Thanks for taking a look.
      Edit: Looking back, I should've had the presence of mind to pull diagnostics before shutting down. I'll know better next time. I also realize there are a few posts with this same error; having looked through them, it appears the cause is corruption due to an unclean shutdown or something like a failed drive. There's also a recent post about issues after upgrading to 6.12, which doesn't apply to me. While this new drive could have failed now instead of during my preclear, I think it's unlikely. Rather than hijacking another thread, I figured it more polite to make a new one.
      Edit 2: While it seems there are many causes for what I'm experiencing, the response always seems to start with "Check Filesystem Status," and since I'm not some special snowflake, I figured it would make sense for me too (the xfs_repair sketch after this list is roughly what that runs). It says "Phase 1 - find and verify superblock... bad primary superblock - bad magic number !!!" and has been searching for a secondary superblock for about four hours. I'll update once that finishes.
      nas-diagnostics-20230619-1139.zip
  15. Well, I suppose if there's not a real issue, that's why I haven't seen a real solution 😛 Thanks. I'll get it excluded. I know it may not seriously matter, but I like it when tests come back clean. Stupid OCD...
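
Preclear sketch (referenced from post 2). This is a minimal sketch of the command-line preclear workflow on a bench machine, assuming the classic preclear_disk.sh script sits on the flash drive at /boot; the device name /dev/sdX is a placeholder, and the script's options should be confirmed against its own help output before running anything destructive.

    # Identify the new drive by size, model, and serial before touching anything.
    lsblk -o NAME,SIZE,MODEL,SERIAL

    # Run the preclear script against the target disk (this wipes it - double-check the device).
    # Option names differ between preclear script versions, so check the script's built-in help.
    /boot/preclear_disk.sh /dev/sdX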
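
blkid sketch (referenced from post 7). When a disk that should be XFS shows up as ReiserFS, the signature actually written to the data partition can be inspected directly. A minimal sketch, assuming the data partition is /dev/sdX1 (a placeholder for this setup):

    # Report whatever filesystem signature is found on the data partition.
    blkid /dev/sdX1

    # Second opinion: read the start of the partition and describe what's there.
    file -s /dev/sdX1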
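
rsync sketch (referenced from post 8). A minimal sketch of option #1, copying the recovered files over the network onto a disk that is already part of the array; it assumes the clone is mounted read-only at /mnt/clone on the bench machine and that the unraid box answers to the hostname "tower" over SSH (both are placeholders).

    # Copy recovered files from the mounted clone to disk1 on the array,
    # preserving permissions and timestamps; --progress reports per-file status.
    rsync -avh --progress /mnt/clone/ root@tower:/mnt/disk1/

Writing to the disk share (/mnt/disk1) keeps the files on that specific disk; with parity already active, every write also updates parity, which is the slowdown discussed in the post.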
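
xfs_repair sketch (referenced from post 14). The "bad primary superblock - bad magic number" line is xfs_repair output; the GUI's "Check Filesystem Status" wraps roughly the following, shown here assuming the affected disk is disk 1 and the array is started in maintenance mode (on 6.11.x the emulated device for disk 1 is typically /dev/md1, but verify the device name for your own array).

    # Read-only check first; -n makes no changes to the disk.
    xfs_repair -n /dev/md1

    # If the report looks reasonable, run the actual repair.
    xfs_repair /dev/md1

    # Last resort if it refuses to run because of a dirty log: -L zeroes the log
    # and can discard the most recent metadata updates.
    xfs_repair -L /dev/md1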