ocyberbum

Everything posted by ocyberbum

  1. I guess for me, I figured this would need to happen eventually. I pay yearly for Plex and Blue Iris because I know my way around the software, and the new features that come out, just enough to be dangerous... I just upgraded my phone to an S24 Ultra to make sure I have the updates and features I wanted, and I pay monthly for my Homey Pro automation, DDNS, and my domains... plus all the other stuff I can't remember right now!! The community, the Dockers, and the features that bring all my projects together are the reasons I believe Unraid is the best choice. So if this is the answer to keeping Unraid operational for the future 👍, then I will add it to the list as long as I can justify the cost, because the idea of starting over might give me a fatal case of insomnia!!!
  2. OK, I just replaced all the cables from the power supply to the drives. I was able to remove 3 cables that were not needed after re-routing the wires, so knock on wood... I believe it's working again. The only odd thing was that after replacing the cables and booting the server, disks 1-5 were not listed and the array was empty, so I added each disk back the way it was before, then added the 20 TB parity drive, and so far no errors with the parity sync!! I'm a little nervous about adding the other 3 disks, so I might keep it like this for now and just add a 2nd 20 TB parity disk in a few months. Thanks to itimpi, JonathanM, and mathomas3 for helping me figure this out... again, and to the other members that posted solutions!! Just wanted to add this site; it helped me decide what power supply to get based on what devices I want to add to the server: https://outervision.com
  3. OK, so it's like my old Synology setup. I hate to ask more questions, but I have everything done except the name and password, and I don't think I'm doing it right, and I can't find the password section! Here is what I am changing in account_setting.pl using Vim. This part is for the user email for IDrive:

Common::displayMenu('', @options);
my $loginType = Common::getUserMenuChoice(scalar(@options));
# Get user name and validate
my $uname = Common::getAndValidate(['enter_your', " ", $AppConfig::appType, " ", 'username', ': '], "Idrivename@mail,com", 1);
$uname = lc($uname); #Important
my $emailID = $uname;

I can't find where the password goes for sure. I did find this section, but I'm not sure:

# creates all password files
Common::createEncodePwdFiles($upasswd);
Common::getServerAddress();
  4. I am using an LSI 9300-16i that I just bought. I also have an Asus PIKE card with 8 SATA ports installed in the PIKE slot on my Asus board, but I disabled it in the BIOS since I originally thought it could be the issue (link below to the new controller): https://www.amazon.com/dp/B0B23S57ZS?psc=1&ref=ppx_yo2ov_dt_b_product_details. I have all disks connected directly to my new power supply like the old one, except the new power supply has a 3rd SATA power connection, which I used to connect directly to disk #5 only. I thought giving drive #5 its own power connection would have worked, so I'm going to trace everything down as suggested and put in new SATA power cables. The person I bought this from has it wired up well, so I'll need to figure it out as I go. I also thought about temporarily taking my old power supply, plugging it into the wall, and running one of its SATA power cables over at a time to see if anything changes, to test whether having 2 power supplies at once solves the issue or shows which cable is causing it, but I'm not sure if that is a good idea. Thanks for the info; time to order some things on Amazon!! Also, I had bought the cable below earlier to replace one in the PC that looked sketchy, so I'm hoping these are of good enough quality to get again: https://www.amazon.com/gp/product/B012BPLW08/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
  5. Well, I guess I spoke too soon... when I try to add my new 20 TB drive as the parity disk, disk #1 drops out or disk #2 reports errors. This goes away if I remove the parity drive. The 20 TB shows under Unassigned Devices without issue. So after the new SATA controller, the new higher-output power supply, and new cables, I was able to add the 5th drive, but now I can't add a 6th. I'm really stumped now, so I've attached everything I can think of in the hope the problem can be found: 5 drives added after the new power supply; added the 20 TB as parity; then created a new config, but this time checked the box that said parity was already valid, and all disks added with no errors. Started the parity sync, got errors, and it now shows a new 16 TB drive under Unassigned Devices that matches the ID of disk #5 and has errors. This is the syslog (all the errors were in red):

Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=18960
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=18968
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: disk5 write error, sector=18976
[the same disk5 write error repeats every 8 sectors up through sector=19448]
Mar 14 15:21:03 Colossus-Mark-2 kernel: md: recovery thread: exit status: -4
Mar 14 15:21:03 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:03 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:03 Colossus-Mark-2 kernel: sd 11:0:4:0: [sdi] tag#3273 UNKNOWN(0x2003) Result: hostbyte=0x0b driverbyte=DRIVER_OK cmd_age=0s
Mar 14 15:21:03 Colossus-Mark-2 kernel: sd 11:0:4:0: [sdi] tag#3273 CDB: opcode=0x88 88 00 00 00 00 00 00 00 00 78 00 00 00 08 00 00
Mar 14 15:21:03 Colossus-Mark-2 kernel: I/O error, dev sdi, sector 120 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
Mar 14 15:21:03 Colossus-Mark-2 unassigned.devices: Disk with ID 'OOS16000G_0000GDBV (sdb)' is not set to auto mount.
Mar 14 15:21:03 Colossus-Mark-2 unassigned.devices: Disk with ID 'OOS16000G_0000GDBV (sdc)' is not set to auto mount.
Mar 14 15:21:03 Colossus-Mark-2 unassigned.devices: Disk with ID 'WDC_WD101EFAX-68LDBN0_VCGL80EP (sdj)' is not set to auto mount.
Mar 14 15:21:03 Colossus-Mark-2 unassigned.devices: Disk with ID 'OOS16000G_0000GDBV (sdl)' is not set to auto mount.
Mar 14 15:21:03 Colossus-Mark-2 kernel: sd 11:0:4:0: Power-on or device reset occurred
Mar 14 15:21:03 Colossus-Mark-2 emhttpd: error: hotplug_devices, 1730: No medium found (123): Error: tagged device OOS16000G_0000GDBV was (sdg) is now (sdl)
Mar 14 15:21:03 Colossus-Mark-2 emhttpd: read SMART /dev/sdl
Mar 14 15:21:03 Colossus-Mark-2 kernel: emhttpd[6434]: segfault at 674 ip 0000559e173189d4 sp 00007ffce179a5a0 error 4 in emhttpd[559e17306000+21000]
Mar 14 15:21:03 Colossus-Mark-2 kernel: Code: 8e 27 01 00 48 89 45 f8 48 8d 05 72 27 01 00 48 89 45 f0 e9 79 01 00 00 8b 45 ec 89 c7 e8 89 b1 ff ff 48 89 45 d8 48 8b 45 d8 <8b> 80 74 06 00 00 85 c0 0f 94 c0 0f b6 c0 89 45 d4 48 8b 45 e0 48
Mar 14 15:21:04 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:04 Colossus-Mark-2 kernel: sd 11:0:4:0: Power-on or device reset occurred
Mar 14 15:21:04 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:05 Colossus-Mark-2 kernel: sd 11:0:4:0: Power-on or device reset occurred
Mar 14 15:21:05 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:06 Colossus-Mark-2 kernel: sd 11:0:4:0: Power-on or device reset occurred
Mar 14 15:21:06 Colossus-Mark-2 kernel: mpt3sas_cm0: log_info(0x31110e03): originator(PL), code(0x11), sub_code(0x0e03)
Mar 14 15:21:06 Colossus-Mark-2 kernel: sd 11:0:4:0: Power-on or device reset occurred
Mar 14 15:21:24 Colossus-Mark-2 flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
Mar 14 15:30:00 Colossus-Mark-2 unassigned.devices: Removing configuration 'WDC_WD80EFZX-68UW8N0_R6GK332Y'.
Mar 14 15:30:04 Colossus-Mark-2 unassigned.devices: Removing configuration 'ST16000NM001G-2KK103_ZL214Y01'.
Mar 14 15:30:08 Colossus-Mark-2 unassigned.devices: Removing configuration 'ST16000NM001G-2KK103_ZL23E5CT'.
Mar 14 15:30:24 Colossus-Mark-2 flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update during shutdown
Mar 14 15:43:10 Colossus-Mark-2 nginx: 2023/03/14 15:43:10 [error] 6814#6814: *30229 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.184, server: , request: "POST /update.htm HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket/update.htm", host: "192.168.1.33", referrer: "http://192.168.1.33/Main"
Mar 14 15:44:59 Colossus-Mark-2 shutdown[32577]: shutting down for system reboot
Mar 14 15:44:59 Colossus-Mark-2 init: Switching to runlevel: 6
Mar 14 15:44:59 Colossus-Mark-2 flash_backup: stop watching for file changes
Mar 14 15:44:59 Colossus-Mark-2 init: Trying to re-exec init
Mar 14 15:46:31 Colossus-Mark-2 root: Status of all loop devices
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/loop1: [2049]:11 (/boot/bzmodules)
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/loop2: [2097]:1073741961 (/mnt/docker-pool/system/docker/docker.img)
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/loop0: [2049]:9 (/boot/bzfirmware)
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/loop3: [2097]:2149437262 (/mnt/docker-pool/system/libvirt/libvirt.img)
Mar 14 15:46:31 Colossus-Mark-2 root: Active pids left on /mnt/*
Mar 14 15:46:31 Colossus-Mark-2 root: USER PID ACCESS COMMAND
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/addons: root kernel mount /mnt/addons
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disk1: root kernel mount /mnt/disk1
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disk2: root kernel mount /mnt/disk2
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disk3: root kernel mount /mnt/disk3
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disk4: root kernel mount /mnt/disk4
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disk5: root kernel mount /mnt/disk5
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/disks: root kernel mount /mnt/disks
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/docker-pool: root kernel mount /mnt/docker-pool
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/remotes: root kernel mount /mnt/remotes
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/rootshare: root kernel mount /mnt/rootshare
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/user: root kernel mount /mnt/user
Mar 14 15:46:31 Colossus-Mark-2 root: /mnt/user0: root kernel mount /mnt/user0
Mar 14 15:46:31 Colossus-Mark-2 root: Active pids left on /dev/md*
Mar 14 15:46:31 Colossus-Mark-2 root: USER PID ACCESS COMMAND
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/md1: root kernel mount /mnt/disk1
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/md2: root kernel mount /mnt/disk2
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/md3: root kernel mount /mnt/disk3
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/md4: root kernel mount /mnt/disk4
Mar 14 15:46:31 Colossus-Mark-2 root: /dev/md5: root kernel mount /mnt/disk5
Mar 14 15:46:31 Colossus-Mark-2 root: Generating diagnostics...
Mar 14 15:46:42 Colossus-Mark-2 unraid-api[7197]: 👋 Farewell. UNRAID API shutting down!
Mar 14 15:46:54 Colossus-Mark-2 avahi-daemon[32868]: Got SIGTERM, quitting.
Mar 14 15:46:54 Colossus-Mark-2 avahi-dnsconfd[32879]: read(): EOF
Mar 14 15:46:54 Colossus-Mark-2 avahi-daemon[32868]: Leaving mDNS multicast group on interface vethe82a48c.IPv6 with address fe80::b4f3:4fff:fed1:2b6d.
Mar 14 15:46:54 Colossus-Mark-2 avahi-daemon[32868]: Leaving mDNS multicast group on interface docker0.IPv6 with address fe80::42:f1ff:feb4:a653.
Mar 14 15:46:54 Colossus-Mark-2 avahi-daemon[32868]: Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
Mar 14 15:46:54 Colossus-Mark-2 avahi-daemon[32868]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.1.33.
Mar 14 15:46:54 Colossus-Mark-2 avahi-daemon[32868]: Leaving mDNS multicast group on interface lo.IPv6 with address ::1.
Mar 14 15:46:54 Colossus-Mark-2 avahi-daemon[32868]: Leaving mDNS multicast group on interface lo.IPv4 with address 127.0.0.1.
Mar 14 15:46:54 Colossus-Mark-2 avahi-daemon[32868]: avahi-daemon 0.8 exiting.
Mar 14 15:46:54 Colossus-Mark-2 wsdd2[32834]: 'Terminated' signal received.
Mar 14 15:46:54 Colossus-Mark-2 winbindd[32840]: [2023/03/14 15:46:54.138920, 0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Mar 14 15:46:54 Colossus-Mark-2 winbindd[32840]: Got sig[15] terminate (is_parent=0)
Mar 14 15:46:54 Colossus-Mark-2 wsdd2[32834]: terminating.
Mar 14 15:46:54 Colossus-Mark-2 winbindd[32837]: [2023/03/14 15:46:54.139018, 0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Mar 14 15:46:54 Colossus-Mark-2 winbindd[32837]: Got sig[15] terminate (is_parent=1)
Mar 14 15:46:54 Colossus-Mark-2 winbindd[34527]: [2023/03/14 15:46:54.139941, 0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Mar 14 15:46:54 Colossus-Mark-2 winbindd[34527]: Got sig[15] terminate (is_parent=0)
Mar 14 15:46:54 Colossus-Mark-2 rpc.mountd[25821]: Caught signal 15, un-registering and exiting.
Mar 14 15:46:55 Colossus-Mark-2 kernel: nfsd: last server has exited, flushing export cache
Mar 14 15:46:55 Colossus-Mark-2 ntpd[1971]: ntpd exiting on signal 1 (Hangup)
Mar 14 15:46:55 Colossus-Mark-2 ntpd[1971]: 127.127.1.0 local addr 127.0.0.1 -> <null>
Mar 14 15:46:55 Colossus-Mark-2 ntpd[1971]: 216.239.35.0 local addr 192.168.1.33 -> <null>
Mar 14 15:46:55 Colossus-Mark-2 ntpd[1971]: 216.239.35.4 local addr 192.168.1.33 -> <null>
Mar 14 15:46:55 Colossus-Mark-2 ntpd[1971]: 216.239.35.8 local addr 192.168.1.33 -> <null>
Mar 14 15:46:55 Colossus-Mark-2 ntpd[1971]: 216.239.35.12 local addr 192.168.1.33 -> <null>
Mar 14 15:46:55 Colossus-Mark-2 rc.inet1: dhcpcd -q -k -4 br0
Mar 14 15:46:55 Colossus-Mark-2 dhcpcd[34664]: sending signal ALRM to pid 1831
Mar 14 15:46:55 Colossus-Mark-2 dhcpcd[34664]: waiting for pid 1831 to exit
Mar 14 15:46:55 Colossus-Mark-2 dhcpcd[1832]: received SIGALRM, releasing
Mar 14 15:46:55 Colossus-Mark-2 dhcpcd[1832]: br0: removing interface

After the reboot, having removed the parity drive and done a new config, I started the array and the data disks are back to normal. I attached diagnostics from before taking the array offline and rebooting; I have to reboot the system to stop the parity process, because the Pause and Stop buttons don't seem to work: colossus-mark-2-diagnostics-erros with parity. I attached another from after the reboot, when all drives are normal but there is no parity: colossus-mark-2-diagnostics-afterreboot noParity. colossus-mark-2-diagnostics-erros with parity.zip colossus-mark-2-diagnostics-afterreboot noParity.zip
  6. This is the same reason I had run Unraid with Dockers inside ESXi 7 and kept the VMs separate. I am hoping to find a way to run VMware Workstation on Unraid like I did on Ubuntu, but until then I am working without ESXi to keep it simple for now, and making sure I remember to shut down my VMs first before taking the array offline!!
  7. Just a quick question: is there no GUI for this Docker, or any way to add one if needed? Thanks
  8. Awesome! Thank you for the how-to; I'm going to work on this when I get back into town!!
  9. I would like VM snapshots, or a way to easily install VMware Workstation, but multiple Unraid array pools sounds good also!!
  10. Just to update: the new power supply has fixed the issue. https://www.amazon.com/dp/B084R8FT1X?psc=1&ref=ppx_yo2ov_dt_b_product_details
  11. How great it would be to have this in the Unraid Apps/Dockers for easy install. I did pull the container using the Community Apps plugin, but I'm not sure what I need to do next...
  12. This is what I did to get a network connection: you have to edit the VM and change the network model to e1000. This worked for me.
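For anyone editing the VM in XML view instead of the form view, the same change lives in the domain's network interface section. This is only a sketch: the MAC address and bridge name below are placeholders, not values from this thread; the `model type='e1000'` line is the setting being described.

```xml
<!-- Hypothetical libvirt domain XML fragment. Only the <model> line is the
     change discussed; MAC and bridge are placeholders for your own values. -->
<interface type='bridge'>
  <mac address='52:54:00:00:00:01'/>  <!-- placeholder MAC -->
  <source bridge='br0'/>              <!-- placeholder bridge name -->
  <model type='e1000'/>               <!-- emulated Intel NIC instead of virtio -->
</interface>
```

Once the VirtIO guest tools are installed, you can switch the model back to virtio for better performance.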
  13. Yes, I was able to install virtio-win-guest-tools.exe, but I had to set the network model to e1000 first so I could have access to the network and transfer the guest tools over.
  14. I just did this with Ghost Spectre Tiny11. VirtIO files: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.229-1/
  15. OK, after some quality Google time and thinking about what Kilrah said... I believe I found out what I did last time to get all my shares to show in FreeFileSync with 2 Unraid servers. This SpaceinvaderOne video gave me the how-to to create the rootshare: https://www.youtube.com/watch?v=TM9pPz732Gc&t=158s&ab_channel=SpaceinvaderOne

For a public share (not advised), the code used in the video:

[rootshare]
path = /mnt/user
comment =
browseable = yes
# Public
public = yes
writeable = yes
vfs objects =

For a share with security:

[rootshare]
path = /mnt/user
comment =
browseable = yes
valid users =
write list =
vfs objects =

Thank you, Kilrah and SpaceinvaderOne, for the video; now I can sync all my files with ease!
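As a worked example (mine, not from the video), here is what the secured variant might look like with the blanks filled in. "backupuser" is a hypothetical Unraid user name; substitute an account you have actually created, and note the share only restricts access, so the underlying disk permissions still apply.

```ini
; Hypothetical filled-in Samba share for /boot/config/smb-extra.conf.
; "backupuser" is a placeholder account name.
[rootshare]
path = /mnt/user
comment = root share for FreeFileSync
browseable = yes
public = no
valid users = backupuser
write list = backupuser
vfs objects =
```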
  16. Well... the paths you wanted were, I thought, listed in the FreeFileSync screenshot above; both are linked to /mnt. The remote mount to itself is created when I use Add Root Share, which I assumed puts all your shares under one link, to avoid having to make individual links to each share on that Unraid server, using its remote root share to reach all of the shares under one link in FreeFileSync.

I looked at the global settings to see if something needed to be changed, but I have never changed these before. In the FreeFileSync Docker I have paths 1 and 2 both pointing to /mnt, so last time I went to path 1 \user, which lists all my shares on server 1, and on path 2 I go to remotes, which shows the 2 shares I added individually with the "Add Remote Shares" menu under the Main tab.

So after going over this, I guess the problem I'm having is that when I add a remote share on server #1 (the pic below), it doesn't show the root share user.pool.col2 for server 2, just the individual shares. If the remote share could find server 2's user.pool.col2 in the list added below, I think it would work for sure. I am also not sure why, or how, to get the FreeFileSync Docker itself to list all the SMB shares by putting a link like SMB:\\<ip address> or colossus-mk-2 directly into the box and using the network directly.
  17. A question about FreeFileSync and using it to sync two Unraid servers. Before one of my servers crashed I had this working, but not now: I am having to mount one folder at a time, whereas before I could mount the root shares and FreeFileSync would show all the folders for me at once to sync. So I was hoping someone could let me know what I'm missing or doing wrong. Thanks. Here is a screenshot of server #2, mark-2, that I am copying files to: I have the root share and "bought software" mounted. Here is a screenshot of server #1 with the source files: I can mount the share "Bought Software" from server 2, but only that one, and not the root share. freefilesync screenshot / unraid share list of server #2
  18. Sorry, I missed this question, but no. I did email Zenju and heard nothing, so I'm guessing I will have to find someone that has made one so I can figure out a way to show proof of purchase.
  19. Thanks, moderators, for the help. After some testing, cable swapping, and removing all the drives one by one, I noticed that one of the drives, #5, was power cycling. I disconnected that drive and put it in my front drive bay, and when I booted back up it all seems to be working. It looks like connecting a 5th drive to the same power cable as my other 4 internal drives causes the issue. I'm not sure why, because I have an EVGA 750 Bronze power supply (link below) and no video card, so I didn't think it would be a problem. So I'm thinking of getting the EVGA 850 to see if that solves it, since it has a 3rd SATA power connector and my 750 has only 2 SATA power plugs. I have 4 x 16 TB drives from my other Unraid server still to add to this server (2 for parity), so I'm hoping it can handle it all. I attached another diagnostics; this one may look weird because I have been trying to use the FreeFileSync VNC Docker to transfer my files from my old Unraid server to my new one, and it's not showing all the shares when using the /mnt root share option. Going to fix that next before adding my other drives!! https://www.amazon.com/gp/product/B084R89CJ5/ref=ox_sc_saved_title_1?smid=A3KGNLL200UCI2&psc=1 colossus-mark-2-diagnostics-20230303-2231.zip
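For anyone sizing a PSU for the same situation, a rough spin-up estimate can be sketched like this. All the figures are assumptions, not measurements from this server: a typical 3.5" HDD can briefly draw around 2 A on the 12 V rail plus ~0.7 A on 5 V at spin-up, and the base-system figure is a placeholder for CPU/board/HBA draw at boot.

```python
# Back-of-the-envelope PSU sizing sketch -- all numbers are assumptions.
# A 3.5" HDD spin-up is assumed at ~2 A @ 12 V plus ~0.7 A @ 5 V.
SPINUP_WATTS_PER_DRIVE = 2.0 * 12 + 0.7 * 5  # ~27.5 W per drive, assumed


def spinup_load(num_drives, base_system_watts=150):
    """Estimate worst-case draw if every drive spins up at once.

    base_system_watts is an assumed placeholder for CPU, board, and HBA.
    """
    return base_system_watts + num_drives * SPINUP_WATTS_PER_DRIVE


# 9 drives (5 existing + the 4 x 16 TB still to add) on the assumed 150 W base:
print(spinup_load(9))  # well under a 750 W rating in total watts
```

The total wattage is usually not the problem; what matters is how many drives share one cable and one 12 V rail connector, which is why a dedicated third SATA power run (or staggered spin-up, if the controller supports it) can fix errors a big PSU still shows.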
  20. I wanted to see if someone can help me figure out what went wrong here. When I combined the drives from the 2nd Unraid server, the system showed some drives, and for others it said I had to format them. After trying to troubleshoot this on my own, I can't figure out what is causing the drives to not show up, or to show up only randomly. I purchased a new controller after I thought the Asus PIKE card was the issue, and the PIKE is now disabled: StorageTekPro LSI 9300-16i 16-Port 12Gb/s SAS Controller HBA Card with P16 IT Mode for ZFS TrueNAS unRAID. I connected all my drives to this one card, and originally it picked up the 5 x 16 TB drives in Unraid plus the other drives I used for ESXi 7, but after a reboot the 16 TB drives show missing. I have attached diagnostics, a screenshot showing the controller at boot with the drives listed, and a snapshot showing the drive history. I have changed controllers, changed PCIe x16 slots, and switched out the cables (new), but there is something I'm missing. I am booting straight from the USB and have removed ESXi 7 to make sure that was not causing an issue... I am hoping it's not a motherboard issue. I have limited experience with this, so any ideas would be appreciated. Other drives connected: colossus-mark-2-diagnostics-20230301-1625-2.zip
  21. I know this is an old post, but it just solved an issue I was having with my PIKE 2308. Drives would error out or fail at random, so I put a small floor fan on it, tried again, and I was able to set up my disks so far without issue. I am going to try one of these from Amazon to see if it will move enough air: https://www.amazon.com/gp/product/B07H5KPY8P/ref=ox_sc_act_title_3?smid=A235LT0EDLFSAR&th=1 https://www.amazon.com/gp/product/B07P9Y18HB/ref=ox_sc_act_title_1?smid=A1TOEWSRSZKIY7&psc=1
  22. Well... yes and no. I am still new to and learning Unraid; I was a Synology user before, so I'm being cautious. I did move a drive over to the other server and mount it, so I could check that it could be read and still had its data, but I ended up ordering another drive that will be here 12/16, so I can copy the essential data to it as a backup and then try the merge knowing I have a backup just in case. Since Unraid has everything I ever downloaded, I wanted to take extra, extra care to have a backup before making the move. I was also wondering whether to run both Unraids in VMs on my ESXi server, so I would have one Unraid for use and the 2nd as a backup... but I'm not sure if that would make sense to do...
  23. OK, just to be sure: I take a screenshot of my drives on the Main tab of server 1 so I know which drives were data and parity; remove the drives from server 1 and install them into server 2; boot server 2 with the array offline; run New Config, making sure to preserve settings and leave the "parity is valid" box unchecked; add the server 1 drives to empty slots and add the old parity drive as a second parity drive in server 2, without formatting it; and lastly bring the array back online so it can build both parity drives covering the current drives plus the data on the drives added from server 1.
  24. Hello. I have a current Unraid server #1 with 3 x 16 TB data drives and 1 x 16 TB parity drive, and a 2nd Unraid server #2 with 1 x 16 TB parity and 2 x 16 TB data drives. I want to take the data drives from the 1st server and add them to the 2nd server, keeping the data on all the drives, and after adding all the data drives I would then take the 1st server's parity drive and add it as a 2nd parity drive on the 2nd server.

The data on the 1st server is the most important; the 2nd server's is not. My 2nd server is running on ESXi 7, with a USB drive in a VM and a passed-through LSI 9300 disk controller, while the 1st server is a bare-metal install. If I cannot combine all the 16 TB drives into one system, would it be better to take the 1st server and its USB, plug it all into my ESXi 7 server, set everything up, and then add the 2nd server's 16 TB drives? Any suggestions would be great, since I've never done this before; I'm not sure how the Dockers and VMs on my 1st server would work on the new ESXi 7 setup if moved, and I want to take the safest route since I don't have a backup of my 1st server.