eXorQue

Members • 25 posts

Everything posted by eXorQue

  1. So I have this issue where sometimes the server hangs and I'm not able to access shares or docker services. I notice it when my uptime monitor starts signalling that websites are unreachable. Sometimes this happens every other day, sometimes only once a month. Usually it comes back after a few hours(!) and I can do a safe reboot; sometimes I have to force a restart. I've attached diagnostics and the uptime logs. supermicro-diagnostics-20240324-1425.zip
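     Since a forced restart wipes /var/log, here is a minimal sketch of something that could be run periodically (e.g. via the User Scripts plugin) to keep a copy of syslog on the flash drive until the next hang is captured. The /boot/logs path and the schedule are just my assumptions, and the built-in mirroring under Settings > Syslog Server may be the cleaner route:
     ```
     #!/bin/bash
     # copy the current syslog to the flash drive so it survives a hard reset
     mkdir -p /boot/logs
     cp /var/log/syslog "/boot/logs/syslog-$(date +%Y%m%d-%H%M).txt"
     ```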
  2. I'm unable to start the array. After doing what you said (which was my first guess), I get the following issue: ``` Too many wrong and/or missing disks! ```
  3. Hi, after moving my disks from one system to the other I added a 4TB disk to the parity set. I probably should have just replaced the old one instead of extending it with a second disk, but here we are. Then a data disk broke down and I had to replace it, but as my 2TB parity is still in place, it isn't big enough. What would be a good way to go forward? I've added my diagnostics as an attachment. supermicro-diagnostics-20231218-1500.zip
  4. Everything works now. Thank you so much @JorgeB!
  5. @JorgeB I assume I need to follow this procedure next: https://docs.unraid.net/unraid-os/manual/storage-management/#rebuilding-a-drive-onto-itself
  6. @JorgeB first of all: thank you for your time! Done: diag attached supermicro-diagnostics-20231124-1019.zip
  7. Started the array with disk 1 set to "no device". Diagnostics attached: supermicro-diagnostics-20231123-2131.zip
  8. The parity check is completed. So now I would take one disk out, i.e. set it to "no device", and run diagnostics again? Results: diagnostics added as well. (For the record, it says "supermicro" because that's my NAS name, as my previous machine was a Supermicro.) supermicro-diagnostics-20231123-2038.zip
  9. Yes, I guess I'll let the sync finish, and I'll do a parity check just to be sure. Here are my diagnostics as well. supermicro-diagnostics-20231123-1633.zip
  10. Wouldn't I want to get parity working first? When I do this I get this message:
      > Start will disable the missing disk and then bring the array on-line. Install a replacement disk as soon as possible.
      > [ ] Yes, I want to do this
      This sounds to me like it'll try to rebuild from 2 disks + parity; does that make sense?
  11. What does this mean? Where do I need to do what? Sorry for my ignorance. My question is still: how do I do this? This is my current situation. Step by step (is this correct?):
      - stop array
      - set parity 1 to disk `(sde)`
      - set disk 1 to disk `(sdd)`
      - set disk 2 to `no device`
      - set disk 3 to `no device`
      - start array
      - do parity check
      - set disk 1 to `no device`
      - set disk 2 to disk `(sdc)`
      - start array
      - do parity check
      - set disk 2 to `no device`
      - set disk 3 to disk `(sdb)`
      - start array
      - do parity check
      - set disks 1, 2, 3 to sdd, sdc, sdb respectively
      - start array
      - everything works?
      Does this change when I want to move from the 2TB disk to the 4TB disk (as I'm planning to replace all disks with 5 4TB disks)?
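      Before shuffling the assignments, a quick read-only way to double-check which sdX letter currently maps to which physical drive (device letters can change between boots, the serial numbers don't). A minimal sketch using standard tools:
      ```
      # map device letters to drive serials and sizes
      lsblk -o NAME,SIZE,MODEL,SERIAL

      # the same information via the persistent by-id links
      ls -l /dev/disk/by-id/ | grep -v part
      ```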
  12. Thank you for the response. I understand that it happens, which is fine-ish; it's explainable, so for me that is ok. What does this mean? Where do I need to do what? There was no such option to check that parity was valid... Where should that have been?
  13. @ConnerVT I started moving the disks and ran into issues starting the array. Take a look at this thread, which is related to my issue, and where I commented my info: https://forums.unraid.net/topic/84717-solved-moving-drives-from-non-hba-raid-card-to-hba/?do=findComment&comment=1329471
  14. First of all, sorry for digging up this old thread, but this is exactly what happens to me too. I moved from a Supermicro blade (with a separate hotswap bay/RAID controller, I guess) to a new custom build with SATA on the micro-ATX motherboard. I am 100% sure which drive belongs in which "slot" of Disk 1-3 (I used to have only 3 disks and an SSD cache); I verified this by using Unassigned Devices to check how much of each disk is in use.
      [screenshot: old server, Unassigned Devices with mounted disks and their usage]
      [screenshot: new server after assigning disks to the correct Disk #]
      I ran "New Config" on the "Array" disks after I set them up as shown in the screenshot above, which led to this config. (Btw, the cache disk was detected successfully before, as seen in the screenshot above; nothing changed there. Maybe because I only created the "New Config" for "Array" devices?) However, when I start the array, I get an error on each disk: "Unmountable: Unsupported partition layout". What do I do next? I don't understand what needs to be done from the comments above saying "rebuilding one by one". What do I need to do to get it up again? IMO it should be fine now, so why is it unmountable?
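      To see what the old controller actually left on the disks, a read-only check of the partition layout can help (sdX is a placeholder; this only reads, it changes nothing). Comparing the partition type and start sector of a moved disk against a known-good Unraid data disk would show whether the RAID controller wrote an offset or foreign layout:
      ```
      # print the partition table of one of the moved array disks
      fdisk -l /dev/sdX

      # repeat for a disk that Unraid still mounts fine and compare the
      # partition type and start sector between the two
      ```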
  15. First off, thank you for the thorough response. To recap (correct me if I misunderstood), you basically mention two upgrade/transfer strategies, with some pre-steps.
      Pre-steps:
      1. list of drives/serial numbers, print of the Main tab
      2. local backup of the flash drive (Main > Flash > Flash Backup)
      3. backup of Diagnostics
      These seem logical pre-steps, thanks for that! (A rough CLI version of them is sketched at the end of this post.)
      Upgrade/transfer strategies:
      1. move as much as possible to the new server, check disk assignments, start the array
      2. replace the parity drive, then rebuild until all drives are replaced
      Or did you mean that it's all part of ONE process? You mention doing baby steps, but this doesn't look like a baby step to me... It sounds more like one big massive step: moving all drives and hoping everything works when you put them in the new PC.
      A few questions/comments on things you mentioned that aren't clear to me. Understood. To be completely safe, I could use my spare 4TB as a second parity drive: first add the 4TB as second parity, then replace the old 2TB parity with the new 4TB HDD. I think this is incorrect as mentioned here: https://docs.unraid.net/unraid-os/manual/storage-management/#replacing-a-disk-to-increase-capacity It does make sense, since while parity is being rebuilt there is no "backup" during that rebuild. I don't see it in the manual, but to replace the cache drive I could do this? (starts at 8:40) https://youtu.be/ij8AOEF1pTU?t=520
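      For my own notes, a rough CLI version of those pre-steps; the output paths are my own choice, and the GUI Flash Backup under Main > Flash is the supported route:
      ```
      # 1. record the drive/serial mapping alongside the Main tab printout
      lsblk -o NAME,SIZE,MODEL,SERIAL > /boot/disk-map-$(date +%Y%m%d).txt

      # 2. extra raw copy of the flash drive contents onto the array
      mkdir -p /mnt/user/backups/flash
      cp -r /boot/. /mnt/user/backups/flash/
      ```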
  16. Currently I have a Supermicro 1U blade NAS (very old) with 4 HDDs + 1 SSD:
      - 3x 2TB HDD storage
      - 1x 2TB HDD parity
      - 1x 1TB SSD cache
      I want to move everything onto a new server and new disks. The new setup will be 5 HDDs + 1 M.2 SSD:
      - 4x 4TB HDD storage
      - 1x 4TB HDD parity
      - 1x M.2 SSD
      I had the following in mind; correct me if this isn't going to work. I have a few questions about this approach at the end.
      1. Add one 4TB disk as a second parity disk. Now I have two parity disks, which protects the data a bit more in case one fails during a rebuild while I'm replacing the others.
      2. Replace one 2TB disk with a new 4TB disk.
      3. Repeat step 2 until all disks are replaced.
      4. Add the last 4TB disk. As I had only four 2TB disks, I can add the last 4TB disk at the end.
      5. Remove the 2TB parity disk. Now my system is complete with regard to HDDs for the new setup.
      Questions:
      - How do I transfer from my SSD to the M.2, or should I not do that at all? (A rough sketch of the copy is at the end of this post.)
      - How do I get this part to work? Can I just transfer the disks to the new system? Does it work in these steps: transfer the disks to the new system, transfer the flash drive, boot from the flash drive, and have Unraid happily running?
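      As for the SSD-to-M.2 question, a rough sketch of how the copy could look once both pools exist; the new pool name (`cache2`) and stopping Docker/VMs first are assumptions on my part, not confirmed steps:
      ```
      # stop the Docker and VM services first (Settings > Docker / VM Manager)
      # so nothing keeps writing to the old cache pool

      # copy everything from the old cache pool to the new M.2 pool,
      # preserving permissions, sparse files and extended attributes
      rsync -avhX --sparse /mnt/cache/ /mnt/cache2/

      # afterwards, point the shares (and appdata/docker paths) at the new pool
      ```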
  17. How do I see which packages were installed? I've upgraded, but didn't know this one would be outdated. Is there any log of the previously installed packages so I can add them to the /boot/extra list? Btw, is /boot/packages replaced by /boot/extra? Edit: just found out that /boot/packages was from the dmacias72 plugins, so that answers both questions.
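      For anyone else looking, the package records live in the standard Slackware locations, so a quick listing shows what is currently installed and what has been removed (nothing Unraid-specific assumed here):
      ```
      # packages currently installed on the running system
      ls /var/log/packages

      # packages that have been removed (handy after an OS upgrade)
      ls /var/log/removed_packages
      ```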
  18. I've seen kernel issues popping up, but I can't seem to see which log file is the large one.
      root@supermicro:/var/log# du -smh *
      0       atop
      0       btmp
      0       cron
      0       debug
      72K     dmesg
      40K     docker.log
      0       faillog
      8.0K    gitflash
      4.0K    lastlog
      4.0K    libvirt
      0       maillog
      0       messages
      0       nfsd
      0       nginx
      0       packages
      24K     pkgtools
      0       plugins
      0       preclear.disk.log
      0       pwfail
      0       removed_packages
      0       removed_scripts
      0       removed_uninstall_scripts
      128K    samba
      0       scripts
      0       secure
      0       setup
      0       spooler
      0       swtpm
      0       unraid-api
      0       vfio-pci
      12K     wtmp
      supermicro-diagnostics-20220314-0917.zip
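      Since du shows nothing large, two more checks that might narrow it down: whether the /var/log tmpfs itself is full, and whether some process is still holding a deleted log file open (that space never shows up in du). Standard tools, nothing Unraid-specific:
      ```
      # how full is the /var/log tmpfs?
      df -h /var/log

      # open files with link count 0, i.e. deleted but still held open
      lsof +L1 | grep /var/log
      ```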
  19. I have tried clearing the cache, and also a fresh install of Firefox (which I didn't have installed yet). In Firefox I see another message: ``` GET wss://488af51abcfd8d6b9afac041cb7eb63e770b2928.unraid.net:4433/sub/dockerload [HTTP/1.1 507 Insufficient Storage 48ms] ```
  20. Adds diagnostics. This was a bit of a hassle, because I didn't have access to the terminal from the web interface due to the wss issue, so I had to SSH into the server and then scp the file off it. supermicro-diagnostics-20220304-1001.zip
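      For reference, the rough route for grabbing diagnostics without the web terminal; I believe the CLI `diagnostics` command drops the zip in /boot/logs by default, but treat the exact path as an assumption:
      ```
      # run from the desktop: generate a fresh diagnostics zip on the server
      ssh root@supermicro diagnostics

      # copy the newest zip off the flash drive
      scp root@supermicro:/boot/logs/supermicro-diagnostics-*.zip .
      ```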
  21. When I go to the Plugins tab, I only get the loading indicator (vertical bars) and then nothing happens. The console shows: WebSocket connection to 'wss://<my-server>:4433/sub/var' failed: from dynamix.js?v=1596576684:34 How do I fix this? I'm unable to upgrade my plugins at the moment.
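      A couple of server-side things that can be checked from a terminal while the web UI is in this state; the rc.nginx script is the stock Slackware-style init script, so treat the restart step as an assumption rather than an official fix:
      ```
      # look for errors from the web server that backs the wss endpoints
      tail -n 50 /var/log/nginx/error.log

      # if /var/log turns out to be full, free space first, then restart nginx
      /etc/rc.d/rc.nginx restart
      ```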