
zyrmpg

Members · Content Count: 13 · Community Reputation: 0 Neutral · Rank: Member
  1. So I got a very quick response, and it turns out they blacklisted the drive due to some kind of payment mishap when the original order was made. Unusual, as it's been in use for 2 years now and I wasn't aware of any issue. I'm still looking into it, but it doesn't appear to be a technical problem. Thanks for the suggestions.
  2. Procedures for both options look fairly quick and straightforward. I'll give 'em a shot if they don't get back to me tomorrow. Any precautions or pitfalls I should be aware of? I seem to be very competent at making this kind of thing worse.
  3. @jonathanm Thanks, I submitted a support request earlier today. Hopefully they'll get back to me soon. @itimpi I don't have it in front of me, but I'll see what I can find out about the GUID tomorrow. I'm sure I didn't use some freebie flash drive, but I'm even more sure of the integrity of the license, so who knows. I'll find out when I hear back from LT. At least I know the important stuff is backed up elsewhere. Although it's cost me so much time and stress that I'm starting to question whether Unraid really qualifies as a layer of a reliable 3-2-1 backup plan. Maybe it's not really best used as backup, kinda like what they say about... RAID. Seems a lot of people use it to containerize their entertainment servers with some unique backup side perks. That's what it turned into for me, which apparently compromised my original intent.
  4. Did you ever get this resolved? I'm facing a similar issue upgrading 6.3.5 to 6.5.3 with a regular USB 2.0 flash drive.
  5. Oh, OK. Didn't notice the 2018. Yeah, I guess it is outdated. That configuration sounds about right. Can't imagine why I would turn backups off. That is unfortunate. Thanks for the info. I assume the current disk assignments are stored somewhere else, then. No way I can put it together from memory.
  6. So I was looking through the flash drive and I noticed something.

     DISK_ASSIGNMENTS.txt
     Disk Assignments as of Sat, 21 Apr 2018 23:21:03 -0700
     Disk: parity    Device:             Status: DISK_NP_DSBL
     Disk: disk1     Device: STXXX       Status: DISK_OK
     Disk: disk2     Device: WDCXXX      Status: DISK_OK
     Disk: disk3     Device: STXXX       Status: DISK_OK
     Disk: disk4     Device:             Status: DISK_NP
     Disk: disk5     Device:             Status: DISK_NP
     Disk: parity2   Device:             Status: DISK_NP_DSBL
     Disk: cache     Device: TOSXXX      Status: DISK_OK
     Disk: cache2    Device:             Status: DISK_NP
     Disk: flash     Device: Cruzer_Fit  Status: DISK_OK

     The "Disk" labels are correct, but the "Device" and "Status" don't make sense. The disk.cfg has some similar oddities. These were the only lines with "cacheId" in them:

     /config/disk.cfg
     cacheId="TOSXXX"
     ...
     cacheId.1="SamXXX"
     cacheId.2=""
     cacheId.3=""

     Everything has been green since April, as dated in the txt, so I'm kinda confused. Is this going to be an issue when I eventually start the array?
  7. Wow, OK. Lots of new info to look into; I appreciate the knowledge download. It might be a few days of research before I get back here. I'm going to leave some notes for myself or anyone in my situation. Do correct me if I've misunderstood anything.
     - /config/ is the most important part. If that's corrupt, find a backup copy.
     - If no drives have been replaced or added since an older backup was made, it can be used as a straight swap.
     - If, at any point, the array is bootable and accessible, look for those CA backups. Those would be the most up to date. (Thanks for reminding me. Forgot about those.)
     - Before attempting to boot a backup, edit /config/disk.cfg and /config/docker.cfg so the array and Docker don't autostart (a simple "yes" -> "no"). Check for outdated settings since the backup was made.
     - Should be able to rebuild the Docker image off /config/plugins/dockerMan/templates-user. If those are corrupted, try the user templates from a backup. *Rename instead of delete the image as a backup. Thanks for the tip.
     I'll definitely be replacing the flash drive. This little guy has been getting old. Shoulda laid him to rest earlier. Might try a scheduled replacement after this.
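     The "yes" -> "no" autostart edit in the notes above can be scripted. A minimal sketch, assuming the flags are named startArray and DOCKER_ENABLED (verify against your own disk.cfg/docker.cfg — the key names here are assumptions), run against a scratch copy so nothing real is touched:

     ```shell
     # Sketch only: key names (startArray, DOCKER_ENABLED) are assumptions --
     # check the actual contents of your config files before trusting this.
     # Build a scratch copy to demonstrate the edit safely:
     mkdir -p /tmp/flash-demo/config
     printf 'startArray="yes"\n' > /tmp/flash-demo/config/disk.cfg
     printf 'DOCKER_ENABLED="yes"\n' > /tmp/flash-demo/config/docker.cfg

     # Flip both autostart flags from "yes" to "no" in place:
     sed -i 's/^startArray="yes"/startArray="no"/' /tmp/flash-demo/config/disk.cfg
     sed -i 's/^DOCKER_ENABLED="yes"/DOCKER_ENABLED="no"/' /tmp/flash-demo/config/docker.cfg

     grep startArray /tmp/flash-demo/config/disk.cfg
     ```

     Once the backup boots cleanly with everything stopped, the flags can be flipped back (or set from the webUI).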
  8. Hm, OK. I do have some old backups. Could I do a straight swap if there are differences? I'm most concerned with getting my Docker containers back up with the data on the cache and array. Are those dependent on my boot drive? I don't fully understand it, but I've heard Unraid operates somewhat independently from the flash drive. Settings - I think I can put those back together. Plugins - I don't think my containers were dependent on any in particular. Docker management settings? Can I set them manually from a fresh install on a fresh USB drive without messing with what I had before the crash? I assume my Docker images are intact and I can still access them... somehow? Thanks for answering my barrage of questions. Almost 5 years of using Unraid, and recovering from something like this is new territory for me.
  9. Thanks for taking the time, Frank. So I've only got Ubuntu at the moment, so I tried a couple of other commands:

     sudo fsck /dev/sda
     fsck from util-linux 2.33.1
     e2fsck 1.44.6 (5-Mar-2019)
     ext2fs_open2: Bad magic number in super-block
     fsck.ext2: Superblock invalid, trying backup blocks...
     fsck.ext2: Bad magic number in super-block while trying to open /dev/sda
     The superblock could not be read or does not describe a valid ext2/ext3/ext4 filesystem.
     If the device is valid and it really contains an ext2/ext3/ext4 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
         e2fsck -b 8193 <device>
      or
         e2fsck -b 32768 <device>
     Found a dos partition table in /dev/sda

     sudo fsck.fat /dev/sda
     fsck.fat 4.1 (2017-01-24)
     Logical sector size (1766 bytes) is not a multiple of the physical sector size.

     sudo dosfsck -w -r -l -v -t /dev/sdc1
     fsck.fat 4.1 (2017-01-24)
     open: No such file or directory

     sudo dosfsck -w -r -l -v -t /dev/sda1
     fsck.fat 4.1 (2017-01-24)
     Checking we can access the last sector of the filesystem
     Boot sector contents:
     System ID "MSWIN4.1"
     Media byte 0xf8 (hard disk)
            512 bytes per logical sector
           4096 bytes per cluster
             44 reserved sectors
     First FAT starts at byte 22528 (sector 44)
              2 FATs, 32 bit entries
        7808000 bytes per FAT (= 15250 sectors)
     Root directory start at cluster 281 (arbitrary size)
     Data area starts at byte 15638528 (sector 30544)
        1949974 data clusters (7987093504 bytes)
     63 sectors/track, 255 heads
           2048 hidden sectors
       15630336 sectors total
     Checking file /
     Checking file /UNRAID
     Checking file /EFI-
     Checking file /System Volume Information (SYSTEM~1)
     Checking file /bzfirmware (BZFIRM~1)
     ...
     Checking file /preclear_reports/preclear_report_3153474D5034345A_2018.04.27_16.43.42.txt (PRECLE~8.TXT)
     Checking file /preclear_reports/preclear_report_1SG6T0EZ_2018.05.09_19.45.17.txt (PRECLE~9.TXT)
     Checking file /preclear_reports/preclear_report_37534A4730323757_2019.03.23_21.11.13.txt (PRC2E9~2.TXT)
     Checking for bad clusters.
     Cluster 212756 is unreadable.
     Cluster 212757 is unreadable.
     Cluster 212758 is unreadable.
     Cluster 212759 is unreadable.
     Cluster 212782 is unreadable.
     Cluster 212783 is unreadable.
     Checking for unused clusters.
     Checking free cluster summary.
     Free cluster summary wrong (1756693 vs. really 1756687)
     1) Correct
     2) Don't correct
     ?

     That last bit took a while. Safe to correct?
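     Before answering "1) Correct" at a prompt like the one above, a common precaution is to take a raw image of the whole drive first, so anything fsck.fat writes can be undone. A minimal sketch, using a scratch file as a stand-in for the real device (on the live system the input would be the actual block device, e.g. /dev/sda, read with sudo):

     ```shell
     # Demo stand-in for the real flash device; on the live system this
     # would be the block device itself (adjust /dev/sda to your setup).
     DEVICE=/tmp/fake-flash.bin
     head -c 1M /dev/urandom > "$DEVICE"

     # Raw image with the usual recovery flags: keep going past read
     # errors (noerror) and pad failed blocks so offsets stay aligned (sync).
     dd if="$DEVICE" of=/tmp/unraid-flash.img bs=64K conv=sync,noerror status=none

     # Verify the image matches before doing any repair work.
     cmp "$DEVICE" /tmp/unraid-flash.img && echo "image matches"
     ```

     With an image safely stored, accepting the free-cluster-summary fix is low-risk; the unreadable clusters are the stronger sign the drive itself is on the way out.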
  10. Hi guys, sad day for me today. Here's what happened in as much detail as I can remember: I was moving files around when Docker crashed. After some amateur fiddling, I ended up trying to do a safe restart via the webUI. It looked like the array did stop, but somewhere after the restart it got caught on something and the UI got stuck. SSH access was available, but almost any command would just hang. Here's what I got before I gave up and hard rebooted:

      Via SSH before hard reboot:
      - htop: shfs ~50% CPU usage
      - iotop: a few things were high io%, all from /usr/local/
      - diagnostics: I don't know what happened to the zip

      Current status:
      - server boots to the Unraid splash screen on the attached monitor (I set default GUI boot a while back)
      - I can log in with a physically attached keyboard
      - after login I get a black screen
      - ping fails

      I do have the boot drive. I found a diagnostics zip from yesterday, though I didn't run it. There are some log files; I don't know which might be useful. Can someone help me out?

      unraid-diagnostics-20190726-1244.zip
  11. Did you guys happen to figure this out? I've been seeing this exact same problem for a week now. It's been so frustrating!
  12. Hey Squid, just checked: the BIOS version is the latest. The CPU doesn't support VT-d, so no passthrough. I'll need to check again, but I'm pretty sure VT-d is greyed out since it isn't supported.
  13. Hi all, first time posting. I ran "Fix Common Problems" a few weeks ago and have been getting the following error. No particular problems have appeared since seeing this error, but I'm still seeing it and am curious what I should do about it.

      Call Traces found on your server
      Your server has issued one or more call traces. This could be caused by a Kernel Issue, Bad Memory, etc. You should post your diagnostics and ask for assistance on the unRaid forums

      If someone could take a look at the attached diagnostics and enlighten me, I would be ever grateful.

      n304-diagnostics-20171118-1410.zip