mattw

Everything posted by mattw

  1. Is it simply good enough to make sure that I can use IPMI and have a monitor connected to the HDMI port and have it operational? It will be a couple of days; my CPU cooler will not be here until sometime Thursday, and I want memtest to run for a while on it. But while that is running I should be able to connect up a monitor and see if both work together. And I will have to get brave enough to migrate Unraid to the new board... maybe this weekend. Still trying to decide if I am even going to go to 6.12.
  2. I will see if I can get a ticket opened with them regarding the BIOS... will report back either way. I had seen in the long thread that there was a special BIOS for iGPU/IPMI support, and I will want those to work together. The board came in with the latest consumer BIOS. Thanks for the info.
  3. That is good to know. I have not ever run Unraid on an Intel CPU, and it took some work to get it right on the A10-6700 over the years. Thanks for the input.
  4. @Hoopster I have been reading that thread till my head hurts... I am wondering about things like special BIOS configs needed to properly support Intel CPUs. It reads a little like C and P states may need some special settings. I do want it to govern when it should. As I said, I have never run Unraid on Intel processors and just want a smooth transition.
  5. I am getting the ASRock E3C246D4U2-2T and an i3-8100 in the very near future. I assumed there would be a detailed setup guide for this board since it seems popular and is loaded with features. My old, very old board is an ASRock FM2A85X Extreme6 with an A10-6700, and I have not set up an Intel-based board for Unraid before. I have found good snippets here and there but have likely missed several things. Any guidance would be great. Thanks, Matt
  6. Parity rebuild started... I really do feel like an idiot at this point. So, I assume that it will find and fix errors since I did remove a disk that was likely not fully zeroed out. On the plus side, my array is running faster with that old Seagate 7200.11 drive out of the picture.
  7. I did abort the GUI run and was going to run the CLI command, but got distracted by the shutdown issues... so it appears that I need to rebuild parity. Correct?
  8. Yes, I screwed up... I should not do things when I am tired... It just dawned on me that because of all of the problems with the shutdown that I never ran the zero command. I just removed the drive. So, I guess I do really need to rebuild parity! Please correct me if I am wrong.
  9. No, I did not. I was able to run the New Config properly and check the box that parity was valid. I would run a parity check if needed.
  10. Ok, I feel like a dumb-ass! I only clicked Done in New Config the first time... Done has always seemed to me like it should do everything in one shot; it does not! The array is up and the disk I wanted to remove can now be removed. Thanks much @JonathanM, I always get nervous when my array is in an odd state.
  11. So kill -9 21434 allowed the stopping array to progress to "stopping array - unmounting disks". Current D-state processes:
      root@Tower:~# ps -aux | grep D
      USER PID   %CPU %MEM VSZ  RSS  TTY   STAT START TIME  COMMAND
      root 51    0.0  0.0  0    0    ?     S    Jun11 0:00  [irq/24-AMD-Vi]
      root 11701 2.3  0.0  0    0    ?     D    Jun12 68:59 [kworker/u8:0+flush-9:2]
      root 19703 0.0  0.0  2676 116  ?     S    14:42 0:00  /usr/sbin/avahi-dnsconfd -D
      root 28751 0.0  0.0  4048 2296 pts/0 S+   21:25 0:00  grep D
      root 30160 0.0  0.0  3296 1148 ?     D    21:11 0:00  umount /mnt/disk2
      My gut now tells me to kill 30160. From the system logs:
      Jun 14 21:11:33 Tower emhttpd: Unmounting disks...
      Jun 14 21:11:33 Tower emhttpd: shcmd (680076): umount /mnt/disk1
      Jun 14 21:11:34 Tower kernel: XFS (md1): Unmounting Filesystem
      Jun 14 21:11:37 Tower emhttpd: shcmd (680077): rmdir /mnt/disk1
      Jun 14 21:11:37 Tower emhttpd: shcmd (680078): umount /mnt/disk2
      It appears to be stuck unmounting disk 2. That was the last log entry short of me opening an ssh session. The Open Files plugin reports no open files.
  12. So with ps -aux | grep zero from the shell I see this:
      root@Tower:~# ps -aux | grep zero
      root 6211  0.0 0.0 4048 2232 pts/0 S+ 20:56 0:00 grep zero
      root 21434 0.1 0.0 3636 2460 ?     D  Jun12 4:50 dd bs=1M if=/dev/zero of=/dev/md2
      If I am not mistaken the "D" indicates that the process is dead but not gone? Should I kill process 21434? tower-diagnostics-20230614-2101.zip
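As an aside on the posts above: `ps -aux | grep D` matches any line containing a capital D (the avahi command line and grep itself show up as false positives). A tighter way to list only uninterruptible-sleep processes — a generic sketch, nothing Unraid-specific:

```shell
# Print only processes whose state field starts with "D"
# (uninterruptible sleep, usually a process stuck on disk I/O).
# NR > 1 skips the ps header row.
ps -eo pid,stat,cmd | awk 'NR > 1 && $2 ~ /^D/'
```

A "D" state does not mean the process is dead; it means it is blocked in the kernel waiting on I/O, which is why such processes often ignore signals until the I/O completes or fails.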
  13. I did kill the zero script in the gui, and it said it aborted. Will look at htop from the shell.
  14. So, how long should it say "stopping array, sync filesystems"?
  15. It appears that the following command would be issued from the CLI, since disk 2 is the one I want to remove:
      dd bs=1M if=/dev/zero of=/dev/md2 status=progress
      Thanks
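That command writes zeros through the parity-protected md device, so the `/dev/md2` target depends entirely on which slot holds the disk being removed; getting the number wrong would wipe the wrong disk. A harmless way to rehearse the same invocation against a scratch file first:

```shell
# Actual (destructive) command from the post above:
#   dd bs=1M if=/dev/zero of=/dev/md2 status=progress
# Safe demonstration of the same flags on a temporary file:
scratch=$(mktemp)
dd bs=1M if=/dev/zero of="$scratch" count=8 status=progress
stat -c %s "$scratch"    # 8 MiB written = 8388608 bytes
rm -f "$scratch"
```

status=progress is a GNU dd option; it prints a running byte count to stderr so a multi-day zeroing job is at least visibly alive.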
  16. So, if I am reading my screen correctly... the script that zeros a drive is not just slow, it is almost not worth trying. I have been running almost 48 hours at this point and, if I am reading correctly, I have only cleared 200 MB of 1.5 TB. Am I correct? If so, I think I will just rip the empty drive out, unassign it, and do the darn parity rebuild, as all of my remaining drives are in good condition.
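If the script's dd was started without status=progress, there is still a way to see how far it has actually gotten: GNU dd prints its I/O statistics to stderr when it receives SIGUSR1 and keeps running (busybox dd does not support this, so GNU coreutils is an assumption here). A throwaway demo of the mechanism:

```shell
# Start a disposable dd whose stderr we capture, then poke it.
dd if=/dev/zero of=/dev/null bs=1M 2>/tmp/dd_progress.log &
DD_PID=$!
sleep 1
kill -USR1 "$DD_PID"    # GNU dd: report "... bytes ... copied ...", keep going
sleep 1
kill "$DD_PID"          # stop the demo dd
progress=$(cat /tmp/dd_progress.log)
printf '%s\n' "$progress"
```

Against the real zeroing job you would send SIGUSR1 to the dd PID shown by ps, then read its output on the console or in the log where its stderr goes.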
  17. I am ok with command-line Linux. I would be willing to give that a go... if this does not finish today. Not sure why I am really in a hurry, but I just want to remove a very old drive with some errors.
  18. Well crap, I wanted to shrink my array and maintain parity. This seems to be the best option, what is a better idea?
  19. I am running the "clear an array drive" script on a 1.5tb drive. I am only writing at around 5.2 MB/s, this has been running about 12 hours so far and appears that it will be running for a good while. I see that it is not the fastest script, but is this about right? BTW, running 6.11.5.
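For scale, the rate in the post above can be turned into an ETA directly: bytes remaining divided by observed throughput. Using the post's own figures (1.5 TB at 5.2 MB/s, not measured values):

```shell
# ~1,500,000 MB to clear / 5.2 MB/s observed, converted to hours
awk 'BEGIN { printf "%.0f hours\n", 1500000 / 5.2 / 3600 }'
# prints: 80 hours
```

Roughly three and a half days, which matches the later post's conclusion that pulling the drive and rebuilding parity is the faster path.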
  20. So, will I need to do a new config, or just reorder in the slot assignments and bring the array up?
  21. So given the table below, if I rearrange the drives physically... when I boot back up Unraid will just put them in the right order? That just does not make sense to me. So, it tracks the serial number to assign the slot and it really is that transparent? Years ago when I built the machine and cabled the 3x2 5.25 racks, I assumed that the racks were top down and they are actually bottom up, so my OCD has driven me nuts for years as the top drive is not sda followed by sdb, sdc, sdd and so on. But, you are telling me that I can just reorder them and let the chips fall where they may and it will figure it all out? Why have I waited so long to fix this? I just want my slots and controller ports to line up 1:1. So, just shut down, reorder and reboot?
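The answer being given in these posts is that Unraid keys each slot to the drive's serial number, not to the sdX letter or controller port. One generic way to confirm which serial landed on which device after a reshuffle (standard util-linux lsblk, nothing Unraid-specific):

```shell
# -d: whole disks only (no partitions); show device name, size, serial
lsblk -d -o NAME,SIZE,SERIAL
```

Comparing this output against the serials shown on the Unraid Main page makes it easy to verify the physical rearrangement before starting the array.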
  22. So, I am going to replace an ancient 1.5TB drive with a precleared 4TB drive. Seems pretty basic, as I have watched the Spaceinvader One video several times. The question I have is more related to drive order in the array. I have a very large case with 3-in-1 5.25 drive bays and only 4 array drives + parity + SSD cache. If I wanted to house only 2 drives in each rack for increased airflow, would I need to make sure the drives are in the array in serial number order regardless of controller port? I think the proper thing to do is to stop the array, shut down the system, move the drives around to where I want them, restart the system, and run New Config to get the drives back in the proper order on the new SATA ports. I know that I have to keep the parity drive in the parity slot, or at least make sure that the parity drive serial number always ends up in the parity slot. My google fu has been failing me; I assume Spaceinvader One has a video on just this process. If someone wants to point me toward something, please do. I am not going to attempt a drive rearrangement until I have the 1.5TB replaced and the second 1.5TB drive migrated to the new drive as well and removed from the array. I am running 6.11.5.
  23. So... I think you are on to something. This mobo is pretty old and it appears that I have to set the performance options for each stick! Has been more stable now. I also removed the Kingston ram as it was only 4gb sticks and they were not a brand match for the others.
  24. So, I shut down my server, pulled my 2TB parity drive, installed the 4TB precleared drive, booted the server, assigned the drive, and ran New Config because it refused to start the array. All drives mapped properly, the parity rebuild started, and the machine rebooted several minutes into the check. So far, I have gotten to 24% before it rebooted the last time. Nothing is reporting too much heat, no drive errors, and no indicator of a problem. I do have my old parity drive untouched and should be able to go back to it if I wanted to, but I really want to use larger drives to replace some really old 1.5TB drives. I have also stopped all but 2 of my docker containers after the last reboot. It almost feels like something is happening when the screen on the PC I am working from times out and goes to sleep. The CPU and motherboard are really old; maybe it has a problem with 4TB drives? But it did not have a problem doing a preclear on the drive. BTW, working with Microsoft Edge, as Firefox seemed to give me some issues and I seem to remember that there was some problem doing this kind of stuff in Firefox. I did not remember this until the first reboot was intentionally done. I am attaching the last 2 diag files for your viewing pleasure. tower-diagnostics-20230609-1830.zip tower-diagnostics-20230609-2202.zip
  25. So, I am planning to upsize my 2TB parity drive to 4TB. I think I understand the process, but I would like a sanity check. I am planning to do this in the following order:
      1) Shutdown server
      2) Remove old parity drive
      3) Install new precleared parity drive in the same slot on the same controller port
      4) Boot server
      5) From the GUI, acknowledge that the parity drive does not match and start the array to rebuild parity
      6) And done.
      Am I on the right track? Thanks, Matt