derekos

Members
  • Content Count

    71
  • Joined

  • Last visited

Community Reputation

2 Neutral

About derekos

  • Rank
    Advanced Member

  1. I was able to get this working by running it manually from the shell with -port 8999. But I don't know why it won't start on 6237.
  2. Not using IPv6, only IPv4. The port is not in use:

     root@tower:~# lsof -i :6237
     root@tower:~#
  3. I have uninstalled and reinstalled, both through Apps and by pasting the URL. It will not start.

     F: 2019/07/05 01:24:22 server.go:149: Unable to start on http: listen tcp: address tcp/6237: unknown port
     F: 2019/07/05 01:24:13 server.go:149: Unable to start on http: listen tcp: address tcp/6237: unknown port
     I: 2019/07/05 01:24:22 app.go:51: unbalance v5.5.0-1104-b9678b5-v2019.02.12b starting ...
     I: 2019/07/05 01:24:22 app.go:59: No config file specified. Using app defaults ...
     I: 2019/07/05 01:24:22 server.go:77: Starting service Server ...
     I: 2019/07/05 01:24:22 server.go:94: Serving files from /usr/local/emhttp/plugins/unbalance
     I: 2019/07/05 01:24:22 server.go:155: Server started listening https on :6238
     I: 2019/07/05 01:24:22 server.go:145: Server started listening http on :6237
     I: 2019/07/05 01:24:22 array.go:46: starting service Array ...
     I: 2019/07/05 01:24:22 planner.go:52: starting service Planner ...
     I: 2019/07/05 01:24:22 core.go:101: starting service Core ...
     F: 2019/07/05 01:24:22 server.go:149: Unable to start on http: listen tcp: address tcp/6237: unknown port
     W: 2019/07/05 01:24:22 core.go:116: Unable to read history: open /boot/config/plugins/unbalance/unbalance.hist: no such file or directory
     I: 2019/07/05 01:25:16 app.go:51: unbalance v5.5.0-1104-b9678b5-v2019.02.12b starting ...
     I: 2019/07/05 01:25:16 app.go:59: No config file specified. Using app defaults ...
     I: 2019/07/05 01:25:16 server.go:77: Starting service Server ...
     I: 2019/07/05 01:25:16 server.go:94: Serving files from /usr/local/emhttp/plugins/unbalance
     I: 2019/07/05 01:25:16 server.go:155: Server started listening https on :6238
     I: 2019/07/05 01:25:16 array.go:46: starting service Array ...
     I: 2019/07/05 01:25:16 planner.go:52: starting service Planner ...
     I: 2019/07/05 01:25:16 server.go:145: Server started listening http on :6237
     I: 2019/07/05 01:25:16 core.go:101: starting service Core ...
     F: 2019/07/05 01:25:16 server.go:149: Unable to start on http: listen tcp: address tcp/6237: unknown port
     W: 2019/07/05 01:25:16 core.go:116: Unable to read history: open /boot/config/plugins/unbalance/unbalance.hist: no such file or directory
     I: 2019/07/05 01:25:16 app.go:73: Press Ctrl+C to stop ...
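For what it's worth, that `address tcp/6237: unknown port` message is the Go runtime's way of saying the port part of the listen address could not be resolved to a number, which usually points at a stray or non-printing character in the configured value rather than the port being busy. A minimal shell sketch for sanity-checking a port value (the `valid_port` helper is my own, not part of unbalance):

```shell
# Sanity-check that a value is a clean numeric TCP port (1-65535).
# A stray space or invisible character makes the check fail, which is
# the same kind of value Go rejects with "unknown port".
valid_port() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # empty, or contains a non-digit
  esac
  [ "$1" -ge 1 ] && [ "$1" -le 65535 ]
}

# usage:
# valid_port 6237 && echo "port looks clean" || echo "bad port value"
```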
  4. Just to close this thread out - it turned out to be a bad controller (Marvell). I replaced it with an LSI controller and speed returned to normal. The thread detailing that is here:
  5. Okay, the new SAS cables arrived and I had an LSI card available. I removed the Marvell card that was giving me trouble and replaced it with the LSI. Problem solved! And yes, I am replacing Drive 18 as well. Thank you everyone for your help - the Unraid community is by far the best.

     Some details for anyone else that needs to do this in the future: ESXi booted up and saw the new card under Storage Adapters. But I pass the Marvell and LSI cards through as RDMs, so I went into Configuration > Hardware > Advanced Settings and, using Edit, added the LSI controller as a DirectPath I/O device. Had to reboot ESXi after that. Next, I had to edit the VM: removed the PCI device that pointed to the Marvell controller I replaced, added a new PCI device that pointed to the LSI controller, booted unRaid, and benchmarked the disks.

     diskspeed.sh for UNRAID, version 2.6.5
     By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV
     /dev/sdc (Disk 14): 120 MB/sec avg
     /dev/sdd (Disk 7): 114 MB/sec avg
     /dev/sde (Disk 3): 117 MB/sec avg
     /dev/sdf (Disk 19): 117 MB/sec avg
     /dev/sdg (Disk 1): 141 MB/sec avg
     /dev/sdh (Disk 17): 145 MB/sec avg
     /dev/sdi (Disk 13): 140 MB/sec avg
     /dev/sdj (Disk 2): 114 MB/sec avg
     /dev/sdk (Disk 11): 118 MB/sec avg
     /dev/sdl (Disk 15): 138 MB/sec avg
     /dev/sdm (Disk 8): 112 MB/sec avg
     /dev/sdn (Disk 16): 108 MB/sec avg
     /dev/sdo (Disk 18): 109 MB/sec avg
     /dev/sdp (Disk 20): 111 MB/sec avg
     /dev/sdq (Disk 21): 112 MB/sec avg
     /dev/sdr (Disk 9): 105 MB/sec avg
     /dev/sds (Disk 12): 112 MB/sec avg
     /dev/sdt (Parity): 111 MB/sec avg
     /dev/sdu (Disk 5): 112 MB/sec avg
     /dev/sdv (Disk 23): 122 MB/sec avg
     /dev/sdw (Parity 2): 116 MB/sec avg
     /dev/sdx (Disk 4): 122 MB/sec avg
     /dev/sdy (Disk 10): 107 MB/sec avg
     /dev/sdz (Disk 6): 108 MB/sec avg
  6. Going by serial numbers, Disk 14 and Disk 18 did not swap. Disk 14 is reporting no errors in its SMART report, unlike Disk 18. Hopefully the longer SAS cables will be here tomorrow afternoon and I can report back on the LSI card.
  7. I have an unused LSI card that I am going to put in the case tomorrow - I need longer SAS cables. But after re-seating the card and cables: much better. However, Disk 14 is stuck at 9 MB/s. I did find this thread after jonatham's advice, which is more evidence for replacing the Marvell cards.

     ---- speed results ----
     /dev/sdc (Disk 15): 138 MB/sec avg
     /dev/sdd (Disk 8): 108 MB/sec avg
     /dev/sde (Disk 16): 108 MB/sec avg
     /dev/sdf (Disk 11): 118 MB/sec avg
     /dev/sdg (Disk 20): 111 MB/sec avg
     /dev/sdh (Disk 21): 112 MB/sec avg
     /dev/sdi (Disk 9): 105 MB/sec avg
     /dev/sdj (Disk 18): 109 MB/sec avg
     /dev/sdk (Disk 14): 9 MB/sec avg
     /dev/sdl (Disk 17): 142 MB/sec avg
     /dev/sdm (Disk 13): 134 MB/sec avg
     /dev/sdn (Disk 2): 112 MB/sec avg
     /dev/sdo (Disk 7): 110 MB/sec avg
     /dev/sdp (Disk 3): 116 MB/sec avg
     /dev/sdq (Disk 19): 116 MB/sec avg
     /dev/sdr (Disk 1): 139 MB/sec avg
     /dev/sds (Disk 12): 116 MB/sec avg
     /dev/sdt (Parity): 120 MB/sec avg
     /dev/sdu (Disk 5): 114 MB/sec avg
     /dev/sdv (Disk 23): 122 MB/sec avg
     /dev/sdw (Parity 2): 117 MB/sec avg
     /dev/sdx (Disk 4): 121 MB/sec avg
     /dev/sdy (Disk 10): 112 MB/sec avg
     /dev/sdz (Disk 6): 109 MB/sec avg
  8. If I were to swap out the Marvell card, would Unraid just pick up where it left off? What should I watch out for? First I am going to open the box and reseat everything, and maybe swap the cables out for new ones.
  9. Okay, a little more progress - all the slow drives are on the same controller, 4:0:x, which is a Marvell controller (I have two of them). I used this command to match the drives to the PCI device, then looked in ESXi to see which PCI adapter mapped to 4:0:x:x:

     ls -ld /sys/block/sd*/device
     <snip>
     lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdk/device -> ../../../4:0:0:0/
     lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdl/device -> ../../../4:0:1:0/
     lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdm/device -> ../../../4:0:2:0/
     lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdn/device -> ../../../4:0:3:0/
     lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdo/device -> ../../../4:0:4:0/
     lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdp/device -> ../../../4:0:5:0/
     lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdq/device -> ../../../4:0:6:0/
     lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdr/device -> ../../../4:0:7:0/
     <snip>
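The mapping step above can be scripted. Each `/sys/block/sdX/device` link ends in the SCSI address `host:bus:target:lun`, and the leading host number is what groups drives by controller. A small sketch under that assumption (the `scsi_host_of` helper name is mine):

```shell
# Extract the SCSI host number from a sysfs device link target
# such as ../../../4:0:2:0/  ->  "4"
scsi_host_of() {
  basename "${1%/}" | cut -d: -f1
}

# Group every sdX device by its SCSI host (i.e. by controller):
for link in /sys/block/sd*/device; do
  [ -e "$link" ] || continue   # skip cleanly if no sdX devices exist
  printf '%s -> host %s\n' \
    "$(basename "$(dirname "$link")")" \
    "$(scsi_host_of "$(readlink "$link")")"
done
```

All drives reporting the same host number sit behind the same controller, which is what made the Marvell card stand out here.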
  10. At Johnnie's suggestion in another thread, I decided to check the disk speed of all the drives first. I used the diskspeed.sh script from here on the forum; the output is pasted below. I am guessing that I have a controller that has either gone bad or has bad or loose cables - 8 drives, all in sequence below, are very slow. How can I determine which controller runs those drives?

      --- paste below ---
      diskspeed.sh for UNRAID, version 2.6.5
      By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV
      /dev/sdc (Disk 15): 140 MB/sec avg
      /dev/sdd (Disk 16): 106 MB/sec avg
      /dev/sde (Disk 11): 118 MB/sec avg
      /dev/sdf (Disk 8): 117 MB/sec avg
      /dev/sdg (Disk 20): 108 MB/sec avg
      /dev/sdh (Disk 21): 110 MB/sec avg
      /dev/sdi (Disk 18): 106 MB/sec avg
      /dev/sdj (Disk 9): 113 MB/sec avg
      /dev/sdk (Disk 14): 2 MB/sec avg
      /dev/sdl (Disk 17): 10 MB/sec avg
      /dev/sdm (Disk 13): 10 MB/sec avg
      /dev/sdn (Disk 2): 10 MB/sec avg
      /dev/sdo (Disk 7): 10 MB/sec avg
      /dev/sdp (Disk 3): 10 MB/sec avg
      /dev/sdq (Disk 19): 10 MB/sec avg
      /dev/sdr (Disk 1): 10 MB/sec avg
      /dev/sds (Disk 12): 118 MB/sec avg
      /dev/sdt (Parity): 124 MB/sec avg
      /dev/sdu (Disk 5): 115 MB/sec avg
      /dev/sdv (Disk 23): 124 MB/sec avg
      /dev/sdw (Parity 2): 119 MB/sec avg
      /dev/sdx (Disk 4): 124 MB/sec avg
      /dev/sdy (Disk 10): 118 MB/sec avg
      /dev/sdz (Disk 6): 113 MB/sec avg
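Since the slow drives sit in one contiguous block of roughly 10 MB/sec results, filtering the report by a speed threshold makes the grouping jump out. A throwaway awk sketch over the diskspeed.sh output format shown above (the `slow_drives` name is mine):

```shell
# Print only the drives whose average falls below a threshold (MB/sec),
# reading diskspeed.sh-style lines such as:
#   /dev/sdk (Disk 14): 2 MB/sec avg
slow_drives() {
  awk -v t="$1" '/MB\/sec/ { s = $(NF-2) + 0; if (s < t) print $1, s " MB/sec" }'
}

# usage: slow_drives 50 < diskspeed_report.txt
```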
  11. If Drive 18 is the drive that is slowing things down and I need to replace it in order to rebuild Drive 17, what is the best approach?
  12. Okay, so either the new drive or one of the existing drives may be limited to 2.5 MB/s. It seems like the thing to do is to test each of these drives individually to locate the one (or more) with issues. Meanwhile, I am trying to see if I can pass the new drive through to a different Linux VM.
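For testing drives individually, a non-destructive sequential read is usually enough to expose a drive stuck in the low single digits, without touching any data. A rough sketch assuming GNU coreutils dd (the `read_test` helper is mine):

```shell
# Read 64 MiB from a device (or file) and let dd report the throughput.
# Read-only, so it is safe to run against array members.
# On a real device, add iflag=direct to bypass the page cache.
read_test() {
  dd if="$1" of=/dev/null bs=1M count=64 2>&1 | tail -n 1
}

# usage:
# read_test /dev/sdk    # last line is dd's status line with the throughput
```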
  13. Btw - I see one of my drives is reading at 95 MB/s. So perhaps it is the new drive that is stuck at 2.5 MB/s on writes?
  14. Which drive - the one that I am rebuilding? I will do it tomorrow and report back. It's late now.