Combine 2 Unraid servers' data drives? + ESXi 7


Solved by JorgeB

Hello. I currently have Unraid server #1 with three 16 TB data drives and one 16 TB parity drive, and a second Unraid server #2 with one 16 TB parity drive and two 16 TB data drives. I want to take the data drives from the first server, add them to the second server, and keep the data on all the drives; once all the data drives are added, I would then take the first server's parity drive and add it as a second parity drive on the second server. The data on the first server is the most important and the second server's is not, but my second server runs on ESXi 7, with the USB boot drive attached to a VM and a passed-through LSI 9300 disk controller, while the first server is a bare-metal install.

If I can't combine all the 16 TB drives into one system, would it be better to take the first server and its USB drive, plug everything into my ESXi 7 server, set it all up, and then add the second server's 16 TB drives? Any suggestions would be great since I've never done this before. I'm also not sure how the dockers and VMs on my first server will behave if moved to the new ESXi 7 setup, and I want to take the safest route since I don't have a backup of my first server.

 

Link to comment

OK, just to be sure: I take a screenshot of my drives on the Main tab of server 1 so I know which drives were data and which was parity, remove the drives from server 1 and install them into server 2, then boot server 2 with the array offline. I then run New Config, making sure to preserve settings and leaving the "parity is valid" box unchecked. I add the server 1 drives to empty slots and add the old parity drive as a second parity drive in server 2, without formatting it. Lastly, I bring the array back online so it can build both parity drives over the current drives and the data on the drives added from server 1.
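Before pulling the drives, it might also be worth capturing the serial-to-slot mapping from the console in addition to the screenshot, since Unraid identifies drives by model and serial when you assign them in New Config. A minimal sketch (run on each server before the move; the backup folder name is just an example):

lsblk -d -o NAME,MODEL,SERIAL,SIZE          # list every disk with model, serial and size
ls -l /dev/disk/by-id/ | grep -v part       # by-id names embed model + serial and survive reboots
cp -r /boot/config /boot/config-backup      # keep a copy of the flash config folder, just in case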

Link to comment
  • 4 weeks later...
On November 15, ocyberbum said:

OK, just to be sure: I take a screenshot of my drives on the Main tab of server 1 so I know which drives were data and which was parity, remove the drives from server 1 and install them into server 2, then boot server 2 with the array offline. I then run New Config, making sure to preserve settings and leaving the "parity is valid" box unchecked. I add the server 1 drives to empty slots and add the old parity drive as a second parity drive in server 2, without formatting it. Lastly, I bring the array back online so it can build both parity drives over the current drives and the data on the drives added from server 1.

Hi, were you successful in combining both servers with this method?

 

I have a similar situation. Server 1 has a Pro key; I'm keeping only its cache drive and removing the 2 parity and 9 data drives (1C + 2P + 9D = 12 drives total), and I have already transferred all of its data to server 2. Server 2 has a Plus key, and I'm keeping its 2 parity and 2 data drives (0C + 2P + 2D = 4 drives total).

Desired consolidated config: server 1's hardware, Pro key, and cache drive, plus server 2's 2 parity and 2 data drives, all running as server 1 (1C + 2P + 2D = 5 drives total).

I have one VM on server 1's cache drive that I want to keep, but if it doesn't survive the reconfiguration I won't be upset.

 

I just want to make sure before I begin.

 

    

Link to comment

Well... yes and no. I'm still new and learning Unraid (I was a Synology user before), so I'm being cautious. I did move a drive over to the other server and mount it so I could check that it could be read and the data was there, but I ended up ordering another drive, which arrives 12/16, so I can copy the essential data to it as a backup and then attempt the merge knowing I have a backup of my data just in case.
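For anyone doing the same read-only check from the console instead of the Unassigned Devices GUI, a rough sketch (the device name and mount point are placeholders, not what this system actually shows):

mkdir -p /tmp/diskcheck
mount -o ro /dev/sdX1 /tmp/diskcheck   # replace sdX1 with the moved drive's data partition
ls /tmp/diskcheck                      # confirm the files are there and readable
umount /tmp/diskcheck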

Since Unraid holds everything I've ever downloaded, I wanted to take extra care to have a backup before making the move. I was also thinking about running both Unraid instances as VMs on my ESXi server, so I'd have one Unraid for day-to-day use and the second as a backup, but I'm not sure that would make sense.

Link to comment
  • 2 months later...

I wanted to see if someone can help me work out what went wrong here. When I combined the drives from the second Unraid server, the system showed some drives but said others had to be formatted, and after trying to troubleshoot this on my own I can't figure out what's causing the drives to not show up, or to only show up intermittently. I purchased a new controller after I thought the Asus PIKE card was the issue, and that card is now disabled.

StorageTekPro LSI 9300-16i 16-Port 12Gb/s SAS Controller HBA Card with P16 IT Mode, for ZFS/TrueNAS/unRAID

I connected all my drives to this one card, and originally it picked up the five 16 TB drives in Unraid plus the other drives I use for ESXi 7, but after a reboot the 16 TB drives show as missing. I have attached diagnostics, a screenshot showing the controller at boot with the drives listed, and a snapshot of the drive history. I have changed controllers, changed PCIe x16 slots, and swapped out the (new) cables, but there is something I'm missing. I am now booting straight from the USB drive and have removed ESXi 7 to make sure that wasn't causing an issue. I'm hoping it's not a motherboard problem. I have limited experience with this, so any ideas would be appreciated.
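For comparison across reboots, it may help to capture what the HBA and the kernel actually saw each time from the console; a minimal sketch (nothing here is specific to this system):

dmesg | grep -i mpt3sas | head -50        # did the LSI driver report the drives, or any errors, at boot?
ls -l /dev/disk/by-id/ | grep -v part     # which disks (by model + serial) made it into Linux this boot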

 

Other drives connected: (screenshot attached)

(Photos of the controller at boot, plus screenshots of the drive history and the Unraid drive errors, attached.)

colossus-mark-2-diagnostics-20230301-1625-2.zip

Link to comment
On 3/2/2023 at 3:34 AM, JorgeB said:

There are errors identifying multiple disks; swap cables between a known good and a known bad one and see if the problem follows the cable or the disk.

Thanks, moderators, for the help. After some testing, cable swapping, and removing the drives one by one, I noticed that drive #5 was power cycling. I disconnected that drive and put it in my front drive bay, and when I booted back up it all seems to be working. It looks like connecting a fifth drive to the same power cable as my other four internal drives causes the issue. I'm not sure why, because I have an EVGA 750 Bronze power supply (link below) and no video card, so I didn't think it would be a problem. I'm thinking of getting the EVGA 850 to see if that solves it, since it has a third SATA power connection (6-pin on the PSU side) and my 750 has only two SATA power plugs.

I still have 4 x 16 TB drives from my other Unraid server to add to this server (2 of them for parity), so I'm hoping it can handle them all.

I attached another diagnostics file. This one may look odd because I've been trying to use the FreeFileSync VNC docker to transfer my files from my old Unraid server to my new one, and it's not showing all the shares when using the /mnt root share option. Going to fix that one next, before adding my other drives!
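If the docker route keeps hiding shares, one hedged alternative is plain rsync over SSH between the two servers; the IP address and share name below are placeholders only:

rsync -avhn --progress root@192.168.1.50:/mnt/user/Media/ /mnt/user/Media/   # dry run first (-n), run on the new server
rsync -avh --progress root@192.168.1.50:/mnt/user/Media/ /mnt/user/Media/    # then the real copy

Repeat per share once the dry run looks right.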

 

https://www.amazon.com/gp/product/B084R89CJ5/ref=ox_sc_saved_title_1?smid=A3KGNLL200UCI2&psc=1

 

 

(screenshots attached)

colossus-mark-2-diagnostics-20230303-2231.zip

Link to comment

 

Well, I guess I spoke too soon... When I try to add my new 20 TB drive as a parity disk, disk #1 drops out or disk #2 reports errors. This goes away if I remove the parity drive. The 20 TB drive shows up under Unassigned Devices without issue. So the new SATA controller, higher-wattage power supply, and new cables let me add the fifth drive, but now I can't add a sixth. I'm really stumped now; I've attached everything I can think of in the hope that the problem can be found.
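In case it helps narrow it down, one way to tell a failing drive from a link or power problem is to pull SMART for the disk that drops and watch the syslog live while the parity sync runs; the device letter below is only an example (adjust it to whatever the drive shows up as):

smartctl -a /dev/sdi | grep -i -E 'overall|reallocated|pending|crc'   # health verdict and the usual trouble counters
tail -f /var/log/syslog | grep -i -E 'reset|write error|mpt3sas'      # resets and write errors as they happen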

 

 

Five drives added after the new power supply (screenshot attached).

Adding the 20 TB drive as parity: I then created a New Config, but this time I checked the box that said parity was already valid, and there were no errors (screenshot attached).

All disks added, no errors (screenshots attached).

Started the parity sync; it has errors and shows a new 16 TB drive under Unassigned Devices that matches the identifier of disk #5, also with errors (screenshots attached).

This is the syslog; all the errors were shown in red:


Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=18960
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=18968
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=18976
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=18984
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=18992
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19000
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19008
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19016
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19024
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19032
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19040
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19048
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19056
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19064
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19072
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19080
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19088
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19096
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19104
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19112
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19120
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19128
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19136
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19144
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19152
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19160
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19168
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19176
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19184
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19192
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19200
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19208
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19216
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19224
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19232
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19240
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19248
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19256
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19264
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19272
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19280
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19288
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19296
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19304
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19312
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19320
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19328
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19336
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19344
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19352
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19360
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19368
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19376
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19384
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19392
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19400
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19408
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19416
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19424
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19432
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19440
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=19448
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: recovery thread: exit status: -4
Mar 14 15:21:03 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:03 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:03 Colossus-Mark-2 kernel: sd 11:0:4:0: [sdi] tag#3273 UNKNOWN(0x2003) Result: hostbyte=0x0b driverbyte=DRIVER_OK cmd_age=0s
Mar 14 15:21:03 Colossus-Mark-2 kernel: sd 11:0:4:0: [sdi] tag#3273 CDB: opcode=0x88 88 00 00 00 00 00 00 00 00 78 00 00 00 08 00 00
Mar 14 15:21:03 Colossus-Mark-2 kernel: I/O error, dev sdi, sector 120 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
Mar 14 15:21:03 Colossus-Mark-2 unassigned.devices: Disk with ID 'OOS16000G_0000GDBV (sdb)' is not set to auto mount.
Mar 14 15:21:03 Colossus-Mark-2 unassigned.devices: Disk with ID 'OOS16000G_0000GDBV (sdc)' is not set to auto mount.
Mar 14 15:21:03 Colossus-Mark-2 unassigned.devices: Disk with ID 'WDC_WD101EFAX-68LDBN0_VCGL80EP (sdj)' is not set to auto mount.
Mar 14 15:21:03 Colossus-Mark-2 unassigned.devices: Disk with ID 'OOS16000G_0000GDBV (sdl)' is not set to auto mount.
Mar 14 15:21:03 Colossus-Mark-2 kernel: sd 11:0:4:0: Power-on or device reset occurred
Mar 14 15:21:03 Colossus-Mark-2  emhttpd: error: hotplug_devices, 1730: No medium found (123): Error: tagged device OOS16000G_0000GDBV was (sdg) is now (sdl)
Mar 14 15:21:03 Colossus-Mark-2  emhttpd: read SMART /dev/sdl
Mar 14 15:21:03 Colossus-Mark-2 kernel: emhttpd[6434]: segfault at 674 ip 0000559e173189d4 sp 00007ffce179a5a0 error 4 in emhttpd[559e17306000+21000]
Mar 14 15:21:03 Colossus-Mark-2 kernel: Code: 8e 27 01 00 48 89 45 f8 48 8d 05 72 27 01 00 48 89 45 f0 e9 79 01 00 00 8b 45 ec 89 c7 e8 89 b1 ff ff 48 89 45 d8 48 8b 45 d8 <8b> 80 74 06 00 00 85 c0 0f 94 c0 0f b6 c0 89 45 d4 48 8b 45 e0 48
Mar 14 15:21:04 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:04 Colossus-Mark-2 kernel: sd 11:0:4:0: Power-on or device reset occurred
Mar 14 15:21:04 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:05 Colossus-Mark-2 kernel: sd 11:0:4:0: Power-on or device reset occurred
Mar 14 15:21:05 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:06 Colossus-Mark-2 kernel: sd 11:0:4:0: Power-on or device reset occurred
Mar 14 15:21:06 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:06 Colossus-Mark-2 kernel: sd 11:0:4:0: Power-on or device reset occurred
Mar 14 15:21:24 Colossus-Mark-2 flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Mar 14 15:30:00 Colossus-Mark-2 unassigned.devices: Removing configuration 'WDC_WD80EFZX-68UW8N0_R6GK332Y'.
Mar 14 15:30:04 Colossus-Mark-2 unassigned.devices: Removing configuration 'ST16000NM001G-2KK103_ZL214Y01'.
Mar 14 15:30:08 Colossus-Mark-2 unassigned.devices: Removing configuration 'ST16000NM001G-2KK103_ZL23E5CT'.
Mar 14 15:30:24 Colossus-Mark-2 flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update

 

During shutdown:

Mar 14 15:43:10 Colossus-Mark-2 nginx: 2023/03/14 15:43:10 [error] 6814#6814: *30229 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.184, server: , request: "POST /update.htm HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket/update.htm", host: "192.168.1.33", referrer: "http://192.168.1.33/Main"
Mar 14 15:44:59 Colossus-Mark-2  shutdown[32577]: shutting down for system reboot
Mar 14 15:44:59 Colossus-Mark-2  init: Switching to runlevel: 6
Mar 14 15:44:59 Colossus-Mark-2 flash_backup: stop watching for file changes
Mar 14 15:44:59 Colossus-Mark-2  init: Trying to re-exec init
Mar 14 15:46:31 Colossus-Mark-2 root: Status of all loop devices
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/loop1: [2049]:11 (/boot/bzmodules)
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/loop2: [2097]:1073741961 (/mnt/docker-pool/system/docker/docker.img)
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/loop0: [2049]:9 (/boot/bzfirmware)
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/loop3: [2097]:2149437262 (/mnt/docker-pool/system/libvirt/libvirt.img)
Mar 14 15:46:31 Colossus-Mark-2 root: Active pids left on /mnt/*
Mar 14 15:46:31 Colossus-Mark-2 root:                      USER        PID ACCESS COMMAND
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/addons:         root     kernel mount /mnt/addons
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disk1:          root     kernel mount /mnt/disk1
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disk2:          root     kernel mount /mnt/disk2
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disk3:          root     kernel mount /mnt/disk3
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disk4:          root     kernel mount /mnt/disk4
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disk5:          root     kernel mount /mnt/disk5
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disks:          root     kernel mount /mnt/disks
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/docker-pool:    root     kernel mount /mnt/docker-pool
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/remotes:        root     kernel mount /mnt/remotes
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/rootshare:      root     kernel mount /mnt/rootshare
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/user:           root     kernel mount /mnt/user
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/user0:          root     kernel mount /mnt/user0
Mar 14 15:46:31 Colossus-Mark-2 root: Active pids left on /dev/md*
Mar 14 15:46:31 Colossus-Mark-2 root:                      USER        PID ACCESS COMMAND
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/md1:            root     kernel mount /mnt/disk1
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/md2:            root     kernel mount /mnt/disk2
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/md3:            root     kernel mount /mnt/disk3
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/md4:            root     kernel mount /mnt/disk4
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/md5:            root     kernel mount /mnt/disk5
Mar 14 15:46:31 Colossus-Mark-2 root: Generating diagnostics...
Mar 14 15:46:42 Colossus-Mark-2 unraid-api[7197]: 👋 Farewell. UNRAID API shutting down!
Mar 14 15:46:54 Colossus-Mark-2  avahi-daemon[32868]: Got SIGTERM, quitting.
Mar 14 15:46:54 Colossus-Mark-2  avahi-dnsconfd[32879]: read(): EOF
Mar 14 15:46:54 Colossus-Mark-2  avahi-daemon[32868]: Leaving mDNS multicast group on interface vethe82a48c.IPv6 with address fe80::b4f3:4fff:fed1:2b6d.
Mar 14 15:46:54 Colossus-Mark-2  avahi-daemon[32868]: Leaving mDNS multicast group on interface docker0.IPv6 with address fe80::42:f1ff:feb4:a653.
Mar 14 15:46:54 Colossus-Mark-2  avahi-daemon[32868]: Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
Mar 14 15:46:54 Colossus-Mark-2  avahi-daemon[32868]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.1.33.
Mar 14 15:46:54 Colossus-Mark-2  avahi-daemon[32868]: Leaving mDNS multicast group on interface lo.IPv6 with address ::1.
Mar 14 15:46:54 Colossus-Mark-2  avahi-daemon[32868]: Leaving mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
Mar 14 15:46:54 Colossus-Mark-2  avahi-daemon[32868]: avahi-daemon 0.8 exiting.
Mar 14 15:46:54 Colossus-Mark-2  wsdd2[32834]: 'Terminated' signal received.
Mar 14 15:46:54 Colossus-Mark-2  winbindd[32840]: [2023/03/14 15:46:54.138920,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Mar 14 15:46:54 Colossus-Mark-2  winbindd[32840]:   Got sig[15] terminate (is_parent=0)
Mar 14 15:46:54 Colossus-Mark-2  wsdd2[32834]: terminating.
Mar 14 15:46:54 Colossus-Mark-2  winbindd[32837]: [2023/03/14 15:46:54.139018,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Mar 14 15:46:54 Colossus-Mark-2  winbindd[32837]:   Got sig[15] terminate (is_parent=1)
Mar 14 15:46:54 Colossus-Mark-2  winbindd[34527]: [2023/03/14 15:46:54.139941,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Mar 14 15:46:54 Colossus-Mark-2  winbindd[34527]:   Got sig[15] terminate (is_parent=0)
Mar 14 15:46:54 Colossus-Mark-2  rpc.mountd[25821]: Caught signal 15, un-registering and exiting.
Mar 14 15:46:55 Colossus-Mark-2 kernel: nfsd: last server has exited, flushing export cache
Mar 14 15:46:55 Colossus-Mark-2  ntpd[1971]: ntpd exiting on signal 1 (Hangup)
Mar 14 15:46:55 Colossus-Mark-2  ntpd[1971]: 127.127.1.0 local addr 127.0.0.1 -> <null>
Mar 14 15:46:55 Colossus-Mark-2  ntpd[1971]: 216.239.35.0 local addr 192.168.1.33 -> <null>
Mar 14 15:46:55 Colossus-Mark-2  ntpd[1971]: 216.239.35.4 local addr 192.168.1.33 -> <null>
Mar 14 15:46:55 Colossus-Mark-2  ntpd[1971]: 216.239.35.8 local addr 192.168.1.33 -> <null>
Mar 14 15:46:55 Colossus-Mark-2  ntpd[1971]: 216.239.35.12 local addr 192.168.1.33 -> <null>
Mar 14 15:46:55 Colossus-Mark-2 rc.inet1: dhcpcd -q -k -4 br0
Mar 14 15:46:55 Colossus-Mark-2  dhcpcd[34664]: sending signal ALRM to pid 1831
Mar 14 15:46:55 Colossus-Mark-2  dhcpcd[34664]: waiting for pid 1831 to exit
Mar 14 15:46:55 Colossus-Mark-2  dhcpcd[1832]: received SIGALRM, releasing
Mar 14 15:46:55 Colossus-Mark-2  dhcpcd[1832]: br0: removing interface

 

After reboot (screenshot attached).

 

After removing the parity drive and doing a New Config, I started the array and the data disks are back to normal (screenshot attached).


 

I attached diagnostics taken before taking the array offline and rebooting. I have to reboot the system to stop the parity process; the Pause and Stop buttons don't seem to work:

 

colossus-mark-2-diagnostics-erros with parity

 

I attached another set of diagnostics after the reboot; all drives are normal but there is no parity.

colossus-mark-2-diagnostics-afterreboot noParity

 

colossus-mark-2-diagnostics-erros with parity.zip colossus-mark-2-diagnostics-afterreboot noParity.zip

Link to comment

Without taking a good look at all the hardware that you are using... you have tons of free space on your array... I would suggest that you move data onto as few disks as you can... your array could live on 3 disks... and then build parity...

 

Also... check your SATA connectors... ensure that you have good connections... 

 

I currently have 23 HDDs running on a DAS with a 750-watt PSU... your PSU should be enough for this setup, so I don't think the PSU is at fault... are you using a RAID controller in this system?
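If you go the consolidation route, a rough sketch of moving data between array disks from the console (the disk numbers are placeholders; copy disk-to-disk rather than mixing /mnt/user and /mnt/diskX paths for the same files):

rsync -avh --progress /mnt/disk3/ /mnt/disk1/   # copy everything from disk3 onto disk1
# verify the copy, delete the originals, then remove the emptied disk with a New Config and rebuild parity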

Link to comment

The PSU's total capacity isn't always the issue; if you have too many drives on a single PSU power feed it can cause voltage sags. Splitters can amplify that problem because each slip-fit connection increases the resistance of the cable and limits the current it can deliver. Make sure you use as many direct, individual cables from the PSU as you can to power the drives, the shorter the better. If you still have that 5-bay cage, make sure to feed it with at least two different sets of cables from the PSU.
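For rough numbers (typical datasheet figures, not measurements from this system): a 3.5-inch drive can pull on the order of 1.5-2 A from the 12 V rail while spinning up, so five drives on one daisy-chained cable means roughly 5 x 2 A = ~10 A of spin-up surge through a single run of wire and its slip-fit connectors. Split across two or three separate cables from the PSU, that same surge is only a few amps per run.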

Link to comment
58 minutes ago, JonathanM said:

The PSU's total capacity isn't always the issue; if you have too many drives on a single PSU power feed it can cause voltage sags. Splitters can amplify that problem because each slip-fit connection increases the resistance of the cable and limits the current it can deliver. Make sure you use as many direct, individual cables from the PSU as you can to power the drives, the shorter the better. If you still have that 5-bay cage, make sure to feed it with at least two different sets of cables from the PSU.

good advice here

Link to comment
4 hours ago, mathomas3 said:

Without taking a good look at all the hardware that you are using... you have tons of free space on your array... I would suggest that you move data onto as few disks as you can... your array could live on 3 disks... and then build parity...

 

Also... check your SATA connectors... ensure that you have good connections... 

 

I currently have 23 HDDs running on a DAS with a 750-watt PSU... your PSU should be enough for this setup, so I don't think the PSU is at fault... are you using a RAID controller in this system?

I am using an LSI 9300-16i that I just bought. I also have an Asus PIKE card with 8 SATA ports installed in the PIKE slot on my Asus board, but I disabled it in the BIOS since I originally thought it could be the issue (link to the new controller below).

https://www.amazon.com/dp/B0B23S57ZS?psc=1&ref=ppx_yo2ov_dt_b_product_details

I have all the disks connected directly to my new power supply, like with the old one, except I used the new power supply's third SATA power connection to feed disk #5 on its own. I thought giving drive #5 a dedicated power run would have fixed it, so I'm going to trace everything down as suggested and put in new SATA power cables. The person I bought this from had it wired up well, so I'll need to figure it out as I go. I also thought about temporarily plugging my old power supply into the wall and moving one of its SATA power cables over at a time, to see whether running two power supplies at once solves the issue or shows which cable is causing it, but I'm not sure that's a good idea.

 

Thanks for the info, time to order some things on Amazon! Also, I had bought the cable below to replace one in the PC earlier that looked sketchy, so I'm hoping these are good enough quality to buy again.

https://www.amazon.com/gp/product/B012BPLW08/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

 

 

Link to comment
8 hours ago, ocyberbum said:

so I'm going to trace everything down as suggested and put in new SATA power cables.

Hard to determine exactly what this means, so I'll just leave a warning here about modular power supplies. The physical pins often look identical, but the pin assignments can differ without warning. NEVER swap modular cables without using either a multimeter or some other form of validation to be sure the correct voltages are being sent to the drives. A mistake here will be VERY expensive to correct, the cheapest way out is buying all new drives and abandoning any data on the blown drives.

 

The safest and best method for powering drives is using only the cables that are soldered to the PSU board. If the PSU doesn't have enough connectors, the next best thing is a 4 pin "molex" to dual SATA power adapter connected to each 4 pin connector available on the PSU. Use as many of the stock 4 pin "molex" style connectors as you can, they have the highest current rating.

 

If you are in any way questioning what you are seeing when you are tracing things out, take good pictures and post them here. We can help decipher what you are seeing.

Link to comment
15 hours ago, ocyberbum said:

Thanks for the info, time to order some things on Amazon! Also, I had bought the cable below to replace one in the PC earlier that looked sketchy, so I'm hoping these are good enough quality to buy again.

https://www.amazon.com/gp/product/B012BPLW08/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

You normally want to avoid using SATA power splitters if you can. It is much better to use a Molex-to-SATA splitter if your power supply has free Molex connectors, as they can take a higher current load without voltage sag.

Link to comment
On 3/15/2023 at 3:51 PM, itimpi said:

You normally want to avoid using SATA power splitters if you can. It is much better to use a Molex-to-SATA splitter if your power supply has free Molex connectors, as they can take a higher current load without voltage sag.

OK, I just replaced all the cables from the power supply to the drives, and after re-routing the wires I was able to remove three cables that weren't needed, so, knock on wood... I believe it's working again. The only odd thing was that after replacing the cables and booting the server, disks 1-5 were not listed and the array was empty, so I added each disk back the way it was before and added the 20 TB parity drive, and so far there are no errors with the parity sync! I'm a little nervous about adding the other three disks, so I might keep it like this for now and just add a second 20 TB parity disk in a few months.

 

(screenshot attached)

 

Thanks to itimpi, JonathanM, and mathomas3 for helping me figure this out ... again, and to the other members that posted solutions!!  

 

Just wanted to add this site; it helped me decide what power supply to get based on the devices I want to add to the server:

 

https://outervision.com

 

 

Edited by ocyberbum
weblink power supply calculator
Link to comment
