jnheinz

Members · 35 posts
Everything posted by jnheinz

  1. It's actually Step 2 -- and yes, you just select No Device. It's a good idea after doing that to Start the array, so it shows a missing drive -- THEN shut down and physically replace the drive. [if there's room to have all drives in the system, you don't actually have to remove the drive -- you'll simply assign the "new" drive in its place when you reboot] Thanks, this makes perfect sense. I will test it this way when I am to that point.
  2. Sorry, I meant Step 2. I am just testing a rebuild. I just got burned by a 2TB SAS disk that half-failed during other maintenance in which I had to rebuild parity (I am going to attempt a ddrescue from a live CD to clone it to another disk, since xfs_repair and mounting it have both failed), so I would like to make sure my array is capable of rebuilding a disk in a standard scenario at least. I will pre-clear the disks in advance, and I will retain the original, good data on the 1TB disk being replaced... so even if the "new" drive is bad, I still have the old data.
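For reference, the ddrescue attempt mentioned above would look roughly like this. This is only a sketch: the device names and the map-file path are placeholders, and the helper just prints the commands so the plan can be sanity-checked before it touches real disks.

```shell
# Two-pass ddrescue plan for cloning a half-failed disk.
# /dev/sdX = failing source, /dev/sdY = blank destination (same size or
# larger), /root/rescue.map = ddrescue map file (lets interrupted runs
# resume). All three are placeholders -- check real names with lsblk first.
rescue_cmds() {
  src=$1; dst=$2; map=$3
  # Pass 1: copy everything readable, skipping bad areas quickly (-n).
  echo "ddrescue -f -n $src $dst $map"
  # Pass 2: retry only the bad areas a few times (-r3).
  echo "ddrescue -f -r3 $src $dst $map"
  # Afterwards, repair and mount the CLONE, never the failing original.
  echo "xfs_repair $dst"
}
rescue_cmds /dev/sdX /dev/sdY /root/rescue.map
```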
  3. https://lime-technology.com/wiki/index.php/Replacing_a_Data_Drive On Step 3, I recall my only choices were No device or the drive itself. There was no Unassigned. Am I selecting No device? Does it matter that the drive hasn't actually failed? I have several blank 1TB disks that I could replace an active 1TB disk with. I will retain the original disk (with its data) until I have confirmed the rebuild is complete.
  4. I have a SAS disk that was part of the array for a while, and it holds about 1TB of data that I want to copy off. I suspect it may be failing; I was dealing with a complicated scenario and ended up removing it from the array. I have since rebuilt parity. I would rather not add it back to the array. What are my options? Should I run a filesystem check outside of the array first? Thanks in advance.
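One possible route, sketched below. It assumes the disk still mounts at all, that the filesystem is XFS as elsewhere in this thread, and that /dev/sdX1 stands in for the real partition:

```shell
# Check and copy a disk outside the array, staying read-only at every step.
DEV=/dev/sdX1   # placeholder -- find the real partition with lsblk

if [ -b "$DEV" ]; then
  xfs_repair -n "$DEV"             # -n: report problems only, change nothing
  mkdir -p /mnt/rescue
  mount -o ro "$DEV" /mnt/rescue   # read-only: copying can't make it worse
  rsync -avP /mnt/rescue/ /mnt/user/rescued/
else
  echo "set DEV to the real partition before running"
fi
```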
  5. Hi jnheinz, well, gracefully isn't possible. I'd do: killall rsync; killall unbalance. You'd be left with a partially copied folder. The log will tell you which folders were copied and which one was in progress. Then you'd have to do some manual tending after (delete the partially copied folder on the target disk, maybe?) Ok, thanks. I will probably hold off. Thank you for this plug-in; it has been very helpful. Unrelated question; I'll lay out the scenario to see if this is something unBALANCE can handle. I have a share called TV shows that is roughly 5TB in size, currently spread across about 6 disks. I have three empty 2TB disks that I would like to migrate these shows to. However, I do want to observe the correct split level, which means keeping each entire series together, so I now have it set to split the top level only. I don't think it was set this way when the data was originally copied, so it has ended up everywhere. In unBALANCE, if I go through each source disk with tvshows selected and select the 3 target disks, does it observe the split level? Or will I need to split manually? Am I better off moving everything out of the share and back in? Looking for the most efficient approach to this.
  6. Is there a way to gracefully terminate an unBALANCE job in progress?
  7. Thank you for this docker, works great thus far.
  8. Dumb question: if I change the Included Disks from All Disks to Disk 20 on a share that is currently spread across Disks 1, 13, and 22, will just new data go to Disk 20 while the old data on Disks 1, 13, and 22 stays accessible? It doesn't actually move all the data to Disk 20, does it? I'm just talking about the settings of a Share. Thank you for this plug-in; it helps me move stuff around.
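As far as unRAID's share settings go, changing Included Disks does not move existing data; it only steers where new files are written. A user share is a merged view of the same top-level folder across all data disks. A tiny simulation of the merged-view idea using plain directories (the paths and file names here are made up for illustration):

```shell
# Simulate the merged view: the same share folder exists on several disks,
# and files stay readable no matter which disk they landed on.
demo=/tmp/included-disks-demo
rm -rf "$demo"
mkdir -p "$demo/disk1/Movies" "$demo/disk13/Movies" \
         "$demo/disk22/Movies" "$demo/disk20/Movies"
touch "$demo/disk1/Movies/old-a.mkv" "$demo/disk13/Movies/old-b.mkv"
# Narrowing Included Disks to disk20 would only affect NEW writes; the old
# files below are untouched and still visible through the merged view:
ls "$demo"/disk*/Movies
```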
  9. I couldn't tell you how many times I have struggled trying to figure out which VM I left an open folder or file in, or which SSH session I accidentally left sitting in the working directory of a mount. This would be extremely helpful to implement.
  10. I believe that's a known issue; spindown probably won't work either. Ok. It wasn't a big deal with my one SAS disk; I noticed it would refuse to spin down. I will have 10 of them now, so I guess I will have more heat to deal with. Is there a link to it being reported as an issue?
  11. Unrelated - Is there a reason why SAS disks can't show temperature? I've found a few other posts reporting this with no answers.
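One command-line workaround worth trying (a sketch; smartctl availability and the /dev/sdX device name are assumptions): SAS drives don't carry the ATA SMART attribute table that temperature displays typically read, but smartmontools can usually pull the temperature from the SCSI log pages directly.

```shell
# Query a SAS drive's temperature via smartctl (SCSI log pages).
# /dev/sdX is a placeholder for the SAS device.
if command -v smartctl >/dev/null 2>&1 && [ -b /dev/sdX ]; then
  smartctl -A /dev/sdX   # look for the "Current Drive Temperature" line
  smartctl -x /dev/sdX   # full dump, including the temperature log page
else
  echo "smartctl or /dev/sdX not available on this machine"
fi
```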
  12. This setting did the trick. SAS disks require that setting to be set to Automatic to successfully add to the array. I've only had one SAS disk prior to these, so that would explain why.
  13. Oh, this - sorry. I overlooked this. I will try this out when the parity rebuild is done.
  14. Yes, I attached my diagnostics zip above. I am running UnRAID 6.2. I don't know what other information is needed.
  15. Can anyone tell me what it means when I select a disk on the array configuration (Main) tab with the array stopped? Where it says Unassigned (say, at Disk 16), I select my 2TB precleared disk. The page refreshes and would normally show the name of the disk, but instead it still says Unassigned rather than the name of the disk. What does that mean? I wish an error would appear.
  16. Parity rebuild is about 40% done. Will probably be done later tonight.
  17. See attached for diagnostics. unraid-diagnostics-20161102-0851.zip
  18. I swear this was easier in the past, but I believe I am fighting a drive failure on one of my empty 1TB disks, so I opted to do New Config to remove the disk and shrink the array for now. Shortly after, I had a change of thought and decided to rebuild parity as well, just in case. After that, I stopped the parity rebuild and tried adding one of my two precleared disks to an open "unassigned" disk slot; the screen just refreshed and it popped back up as "unassigned". Is this because my parity is not yet valid? I have access to 30 disk slots and 2 cache slots according to the configuration (and my license). I realize I may have made an error, but I am just trying to figure out the correct process for adding a disk. I understand I should preclear them; I just don't understand what UnRAID is trying to say when it refuses to attach a precleared disk to an open disk slot. Thanks in advance; I'm banging my head against a wall on this one while waiting on a 22-hour parity rebuild. If that's the problem, then I'll wait, obviously.
  19. LibreELEC VM log Windows 10 VM log Windows 2012 R2 log (my mistake, it was not 2008 R2, it was 2012 R2)
  20. I just upgraded to UnRAID 6.2 Stable from 6.1.x. I have an AMD FX-8320E running on a Gigabyte 990FXA-UD5-R5 (?), I believe, and I use an old ATI Rage XL PCI card as my boot GPU. My 2012 R2 VM did not incur the errors everyone else is having; it only uses VNC. But my Windows 10 VM with GPU passthrough on an nVidia GPU had it, and a LibreELEC VM I deployed from the new templates with an ATI Radeon 65XX GPU passthrough had it too. Both seem to be RESOLVED using Eric's suggestions. I see why both are required; makes sense. Thank you Eric.
  21. Bump.. Any advice? Would support/diagnostic logs help when it happens?
  22. So, I have UnRAID 6.1.8 that has been running stable for 4-5 months. I set up a LibreELEC VM with an NVIDIA GPU passthrough and a FLIRC USB passthrough; it works great. I then added a Windows 10 VM with an AMD HD6450 (I believe) and a Microsoft USB mouse and IBM USB keyboard passed through; that also appears to work great. One small problem: sometimes, when I am using the Windows 10 VM on its passthrough monitor, some key combination or something I am doing triggers UnRAID to stop Docker, stop libvirt, and stop the array. There is a lot of noise in the logs because I had "screen mc" running through an SSH login a few minutes prior to the issue this evening, so I had to terminate the screen session before it would successfully unmount the shares. I do not have or use the Powerdown plug-in, and it only seems to happen when I am actually touching the keyboard that is supposedly passed through to Windows 10. I am half wondering if, despite the passthrough, UnRAID is somehow receiving the keystrokes and forcing the array to stop. Ctrl+Alt+Del comes to mind, but it just happened while I definitely was not pressing that combo. Any help is appreciated.

Jul 11 06:47:29 unraid kernel: kvm [16526]: vcpu0 unimplemented perfctr wrmsr: 0xc0010000 data 0x530076
Jul 11 13:09:02 unraid kernel: mdcmd (98): spindown 14
Jul 11 14:12:10 unraid kernel: mdcmd (99): spindown 5
Jul 11 17:01:15 unraid kernel: mdcmd (100): spindown 6
Jul 11 18:33:37 unraid kernel: mdcmd (101): spindown 6
Jul 11 18:57:56 unraid kernel: mdcmd (102): spindown 2
Jul 11 20:01:51 unraid sshd[6183]: Accepted password for root from 10.0.0.233 port 50117 ssh2
Jul 11 20:07:24 unraid kernel: mdcmd (103): nocheck
Jul 11 20:07:24 unraid kernel: md: nocheck_array: check not active
Jul 11 20:07:24 unraid emhttp: Stopping Docker...
Jul 11 20:07:24 unraid logger: stopping docker ...
Jul 11 20:07:26 unraid kernel: veth5028631: renamed from eth0
Jul 11 20:07:26 unraid kernel: docker0: port 1(veth202d364) entered disabled state
Jul 11 20:07:26 unraid kernel: docker0: port 1(veth202d364) entered disabled state
Jul 11 20:07:26 unraid kernel: device veth202d364 left promiscuous mode
Jul 11 20:07:26 unraid kernel: docker0: port 1(veth202d364) entered disabled state
Jul 11 20:07:26 unraid logger: 3d0ad9808ffc
Jul 11 20:07:26 unraid kernel: vethad0f168: renamed from eth0
Jul 11 20:07:26 unraid kernel: docker0: port 2(vethafbbb7b) entered disabled state
Jul 11 20:07:26 unraid kernel: docker0: port 2(vethafbbb7b) entered disabled state
Jul 11 20:07:26 unraid kernel: device vethafbbb7b left promiscuous mode
Jul 11 20:07:26 unraid kernel: docker0: port 2(vethafbbb7b) entered disabled state
Jul 11 20:07:26 unraid logger: ff0df318ec27
Jul 11 20:07:26 unraid kernel: traps: Plex Media Serv[13276] general protection ip:7ffb923b2236 sp:7ffb6ebf5d60 error:0 in libsqlite3.so.0[7ffb923a2000+e9000]
Jul 11 20:07:27 unraid logger: 9d172e666963
Jul 11 20:07:27 unraid kernel: veth6c544f3: renamed from eth0
Jul 11 20:07:27 unraid kernel: docker0: port 4(veth698746c) entered disabled state
Jul 11 20:07:27 unraid kernel: docker0: port 4(veth698746c) entered disabled state
Jul 11 20:07:27 unraid kernel: device veth698746c left promiscuous mode
Jul 11 20:07:27 unraid kernel: docker0: port 4(veth698746c) entered disabled state
Jul 11 20:07:27 unraid logger: ca11248e8960
Jul 11 20:07:29 unraid kernel: veth3fe7a37: renamed from eth0
Jul 11 20:07:29 unraid kernel: docker0: port 3(veth732e3c7) entered disabled state
Jul 11 20:07:29 unraid kernel: docker0: port 3(veth732e3c7) entered disabled state
Jul 11 20:07:29 unraid kernel: device veth732e3c7 left promiscuous mode
Jul 11 20:07:29 unraid kernel: docker0: port 3(veth732e3c7) entered disabled state
Jul 11 20:07:29 unraid logger: dcdb23bb8bf2
Jul 11 20:07:30 unraid logger: unmounting docker loopback
Jul 11 20:07:30 unraid emhttp: Stopping libvirt...
Jul 11 20:07:31 unraid ntpd[1690]: Deleting interface #5 docker0, 172.17.42.1#123, interface stats: received=0, sent=0, dropped=0, active_time=92471 secs
Jul 11 20:07:35 unraid logger: Domain a2a2a260-95c3-522c-9e2d-db41fbde4c4e is being shutdown
Jul 11 20:07:35 unraid logger:
Jul 11 20:07:40 unraid logger: Domain 05663840-df87-2908-2755-8898cb424adf is being shutdown
Jul 11 20:07:40 unraid logger:
Jul 11 20:07:45 unraid logger: Domain f263166c-cb5a-b1b9-8cbe-75620413105a is being shutdown
Jul 11 20:07:45 unraid logger:
Jul 11 20:07:46 unraid kernel: usb 2-3.6: reset low-speed USB device number 6 using ehci-pci
Jul 11 20:07:46 unraid kernel: usb 7-2: reset full-speed USB device number 2 using ohci-pci
Jul 11 20:07:47 unraid kernel: usb 2-3.5.1: reset full-speed USB device number 7 using ehci-pci
Jul 11 20:07:47 unraid kernel: usb 1-1.4: reset low-speed USB device number 3 using ehci-pci
Jul 11 20:07:47 unraid kernel: br0: port 3(vnet1) entered disabled state
Jul 11 20:07:47 unraid kernel: device vnet1 left promiscuous mode
Jul 11 20:07:47 unraid kernel: br0: port 3(vnet1) entered disabled state
Jul 11 20:07:47 unraid kernel: input: flirc.tv flirc as /devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1.4/1-1.4:1.0/0003:20A0:0001.0012/input/input16
Jul 11 20:07:47 unraid kernel: hid-generic 0003:20A0:0001.0012: input,hidraw1: USB HID v1.01 Keyboard [flirc.tv flirc] on usb-0000:00:12.2-1.4/input0
Jul 11 20:07:47 unraid kernel: logitech-djreceiver 0003:046D:C52B.0015: hiddev0,hidraw2: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:16.0-2/input2
Jul 11 20:07:47 unraid kernel: input: Logitech M515 as /devices/pci0000:00/0000:00:16.0/usb7/7-2/7-2:1.2/0003:046D:C52B.0015/0003:046D:4007.0016/input/input17
Jul 11 20:07:47 unraid kernel: logitech-hidpp-device 0003:046D:4007.0016: input,hidraw3: USB HID v1.11 Keyboard [Logitech M515] on usb-0000:00:16.0-2:1
Jul 11 20:07:47 unraid kernel: input: Logitech K400 as /devices/pci0000:00/0000:00:16.0/usb7/7-2/7-2:1.2/0003:046D:C52B.0015/0003:046D:400E.0017/input/input18
Jul 11 20:07:47 unraid kernel: logitech-hidpp-device 0003:046D:400E.0017: input,hidraw4: USB HID v1.11 Keyboard [Logitech K400] on usb-0000:00:16.0-2:2
Jul 11 20:07:59 unraid kernel: br0: port 2(vnet0) entered disabled state
Jul 11 20:07:59 unraid kernel: device vnet0 left promiscuous mode
Jul 11 20:07:59 unraid kernel: br0: port 2(vnet0) entered disabled state
Jul 11 20:07:59 unraid kernel: input: Silitek IBM USB HUB KEYBOARD as /devices/pci0000:00/0000:00:13.2/usb2/2-3/2-3.5/2-3.5.1/2-3.5.1:1.0/0003:04B3:3005.0018/input/input19
Jul 11 20:07:59 unraid kernel: hid-generic 0003:04B3:3005.0018: input,hidraw5: USB HID v1.10 Keyboard [silitek IBM USB HUB KEYBOARD] on usb-0000:00:13.2-3.5.1/input0
Jul 11 20:07:59 unraid kernel: input: Microsoft Microsoft 5-Button Mouse with IntelliEye as /devices/pci0000:00/0000:00:13.2/usb2/2-3/2-3.6/2-3.6:1.0/0003:045E:0039.0019/input/input20
Jul 11 20:07:59 unraid kernel: hid-generic 0003:045E:0039.0019: input,hidraw6: USB HID v1.10 Mouse [Microsoft Microsoft 5-Button Mouse with IntelliEye] on usb-0000:00:13.2-3.6/input0
Jul 11 20:08:26 unraid logger: Waiting machines........................................
Jul 11 20:08:26 unraid logger: The following machines are still running, forcing shutdown: kammi
Jul 11 20:08:26 unraid logger:
Jul 11 20:08:26 unraid kernel: br0: port 4(vnet2) entered disabled state
Jul 11 20:08:26 unraid kernel: device vnet2 left promiscuous mode
Jul 11 20:08:26 unraid kernel: br0: port 4(vnet2) entered disabled state
Jul 11 20:08:26 unraid logger: Domain f263166c-cb5a-b1b9-8cbe-75620413105a destroyed
Jul 11 20:08:26 unraid logger:
Jul 11 20:08:26 unraid logger:
Jul 11 20:08:28 unraid logger: Stopping libvirtd...
Jul 11 20:08:32 unraid emhttp: shcmd (290): /etc/rc.d/rc.atalk status
Jul 11 20:08:32 unraid emhttp: shcmd (291): pidof rpc.mountd &> /dev/null
Jul 11 20:08:32 unraid emhttp: Stop NFS...
Jul 11 20:08:32 unraid emhttp: shcmd (292): /etc/rc.d/rc.nfsd stop |& logger
Jul 11 20:08:33 unraid rpc.mountd[12067]: Caught signal 15, un-registering and exiting.
Jul 11 20:08:34 unraid kernel: nfsd: last server has exited, flushing export cache
Jul 11 20:08:34 unraid emhttp: Stop SMB...
Jul 11 20:08:34 unraid emhttp: shcmd (293): /etc/rc.d/rc.samba stop |& logger
Jul 11 20:08:34 unraid emhttp: shcmd (294): rm -f /etc/avahi/services/smb.service
Jul 11 20:08:34 unraid emhttp: Spinning up all drives...
Jul 11 20:08:34 unraid emhttp: shcmd (295): /usr/sbin/hdparm -S0 /dev/sdc &> /dev/null
Jul 11 20:08:34 unraid kernel: mdcmd (104): spinup 0
Jul 11 20:08:34 unraid kernel: mdcmd (105): spinup 1
Jul 11 20:08:34 unraid kernel: mdcmd (106): spinup 2
Jul 11 20:08:34 unraid kernel: mdcmd (107): spinup 3
Jul 11 20:08:34 unraid kernel: mdcmd (108): spinup 4
Jul 11 20:08:34 unraid kernel: mdcmd (109): spinup 5
Jul 11 20:08:34 unraid kernel: mdcmd (110): spinup 6
Jul 11 20:08:34 unraid kernel: mdcmd (111): spinup 7
Jul 11 20:08:34 unraid kernel: mdcmd (112): spinup 8
Jul 11 20:08:34 unraid kernel: mdcmd (113): spinup 9
Jul 11 20:08:34 unraid kernel: mdcmd (114): spinup 10
Jul 11 20:08:34 unraid kernel: mdcmd (115): spinup 11
Jul 11 20:08:34 unraid kernel: mdcmd (116): spinup 12
Jul 11 20:08:34 unraid kernel: mdcmd (117): spinup 13
Jul 11 20:08:34 unraid kernel: mdcmd (118): spinup 14
Jul 11 20:08:34 unraid kernel: mdcmd (119): spinup 15
Jul 11 20:08:34 unraid emhttp: shcmd (296): /usr/sbin/hdparm -S0 /dev/sdb &> /dev/null
Jul 11 20:08:48 unraid emhttp: Sync filesystems...
Jul 11 20:08:48 unraid emhttp: shcmd (297): sync
Jul 11 20:08:48 unraid emhttp: shcmd (298): set -o pipefail ; umount /mnt/user |& logger
Jul 11 20:08:48 unraid logger: umount: /mnt/user: device is busy.
Jul 11 20:08:48 unraid logger:         (In some cases useful info about processes that use
Jul 11 20:08:48 unraid logger:          the device is found by lsof(8) or fuser(1))
Jul 11 20:08:48 unraid emhttp: shcmd: shcmd (298): exit status: 1
Jul 11 20:08:48 unraid emhttp: shcmd (299): rmdir /mnt/user |& logger
Jul 11 20:12:51 unraid emhttp: Unmounting disks...
Jul 11 20:12:51 unraid emhttp: shcmd (496): umount /mnt/disk1 |& logger
Jul 11 20:12:51 unraid kernel: XFS (md1): Unmounting Filesystem
Jul 11 20:12:51 unraid emhttp: shcmd (497): rmdir /mnt/disk1 |& logger
Jul 11 20:12:51 unraid emhttp: shcmd (498): umount /mnt/disk2 |& logger
Jul 11 20:12:51 unraid kernel: XFS (md2): Unmounting Filesystem
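An aside on the "umount: /mnt/user: device is busy" failure in the log above: the process holding the mount can be identified before stopping the array. A sketch, assuming fuser/lsof are available (an idle SSH shell cd'd into the mount, or a lingering screen session like the one mentioned, both count as holders):

```shell
# Show which processes are keeping /mnt/user busy before stopping the array.
if [ -d /mnt/user ]; then
  fuser -vm /mnt/user || true    # PIDs and users holding the mount
  # open files under the mount (can be slow on big trees):
  command -v lsof >/dev/null 2>&1 && lsof +D /mnt/user || true
else
  echo "/mnt/user not present on this machine"
fi
```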
  23. Thanks for the advice. This is exactly what I currently have set up: a monthly parity check (on the 1st of the month). I just had my first drive throw a bunch of errors, with 51 currently pending sectors. I ran a SMART long test and it claims the drive is still healthy, so I was just curious whether I could run more frequent long tests to see if the drive gets progressively worse. Sorry, getting off topic here. If I suspect a drive is failing, do I just acknowledge the current state and wait for notifications that it has gotten worse? I set up an advance RMA with WD, so it shouldn't be more than a week.
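More frequent long tests can be scheduled by hand. A hypothetical fragment (the cron mechanism, the smartctl path, and /dev/sdX are all assumptions; on unRAID, cron entries are typically wired in via the flash drive or a scheduler plug-in):

```shell
# Hypothetical crontab fragment: run a long SMART self-test on the suspect
# disk at 03:00 on the 15th of each month, offset from the monthly parity
# check on the 1st. /dev/sdX is a placeholder.
#
#   0 3 15 * * /usr/sbin/smartctl -t long /dev/sdX
#
# Progress and past results can be read back at any time with:
#
#   smartctl -l selftest /dev/sdX
```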
  24. Any news on this? Would be a great feature. I am a newer UnRAID Pro user. I was bummed to not see this easily scheduled. EDIT - Is this implemented now? Reference - https://lime-technology.com/forum/index.php?topic=43874.0