Pirate43 Posted January 2, 2017
Hello Unraid Community, I'm running Unraid Server 6.1.9 with a Basic license, which I believe allows me to attach 6 devices to my array. One of my drives has been accumulating reallocated sectors, so I decided to stop the array, remove the problem drive, and restart the array. The problem is that immediately upon stopping the array, the web GUI says I cannot start it because of too many devices attached, despite my changing nothing and still having 6 attached. Even removing one drive doesn't let me start the array. Any help?
Screenshots:
http://i.imgur.com/HaeXsnR.png - 6 devices, can't start array.
http://i.imgur.com/GhYzfe8.png - now 5 devices, still can't start array.
EDIT: Attached diagnostics zip. Thanks! ~Pirate43
shibe-diagnostics-20170102-1215.zip
itimpi Posted January 2, 2017
Devices attached to the PC count towards the license limits even if they are not configured for use by unRAID, so your screenshots are not any help in deciding why unRAID thinks your limit is exceeded. If you are sure you are not exceeding the limit then attach your diagnostics file (Tools->Diagnostics) so others might spot what is causing your issue.
Pirate43 Posted January 2, 2017 (Author)
I've attached my diagnostics file. My main question: I've been running the array in this configuration for a while, with multiple array stops and starts, but now it thinks I have too many devices attached without my changing anything, including remaining on the same unRAID version. Hope this helps.
BRiT Posted January 2, 2017
Excerpts from your log file below show unRAID thinks you have more than 6 ARRAY SLOTS assigned: once at the beginning with 11 slots and once at the end with 8 slots. You first attempt to start with 11 slots (10 array, 1 cache); you finally attempt to start with 8 slots (7 array, 1 cache). Be sure to remove/unassign the empty slots: slots 5-9 at the beginning and slots 5-6 at the end.

Jan 2 00:57:06 shibe emhttp: array slots: 10
Jan 2 00:57:06 shibe emhttp: cache slots: 1
Jan 2 00:57:06 shibe kernel: mdcmd (1): import 0 8,80 1953514552 Hitachi_HUA723020ALA641_YGGHZM7A
Jan 2 00:57:06 shibe kernel: md: import disk0: [8,80] (sdf) Hitachi_HUA723020ALA641_YGGHZM7A size: 1953514552
Jan 2 00:57:06 shibe kernel: mdcmd (2): import 1 8,64 1953514552 Hitachi_HUA723020ALA641_YGH9JNMA
Jan 2 00:57:06 shibe kernel: md: import disk1: [8,64] (sde) Hitachi_HUA723020ALA641_YGH9JNMA size: 1953514552
Jan 2 00:57:06 shibe kernel: mdcmd (3): import 2 8,32 1953514552 Hitachi_HUA723020ALA641_YFGR8H5C
Jan 2 00:57:06 shibe kernel: md: import disk2: [8,32] (sdc) Hitachi_HUA723020ALA641_YFGR8H5C size: 1953514552
Jan 2 00:57:06 shibe kernel: mdcmd (4): import 3 8,112 976762552 Hitachi_HDS721010CLA332_JP2940J83MU9ZV
Jan 2 00:57:06 shibe kernel: md: import disk3: [8,112] (sdh) Hitachi_HDS721010CLA332_JP2940J83MU9ZV size: 976762552
Jan 2 00:57:06 shibe kernel: mdcmd (5): import 4 8,96 312571192 ST3320418AS_9VMFHWMQ
Jan 2 00:57:06 shibe kernel: md: import disk4: [8,96] (sdg) ST3320418AS_9VMFHWMQ size: 312571192
Jan 2 00:57:06 shibe kernel: mdcmd (6): import 5 0,0
Jan 2 00:57:06 shibe kernel: mdcmd (7): import 6 0,0
Jan 2 00:57:06 shibe kernel: mdcmd (8): import 7 0,0
Jan 2 00:57:06 shibe kernel: mdcmd (9): import 8 0,0
Jan 2 00:57:06 shibe kernel: mdcmd (10): import 9 0,0
Jan 2 00:57:06 shibe emhttp: import 10 cache device: sdd
Jan 2 00:57:06 shibe emhttp: import flash device: sda
----
Jan 2 12:15:03 shibe emhttp: array slots: 7
Jan 2 12:15:03 shibe emhttp: cache slots: 1
Jan 2 12:15:03 shibe kernel: mdcmd (1): import 0 8,80 1953514552 Hitachi_HUA723020ALA641_YGGHZM7A
Jan 2 12:15:03 shibe kernel: md: import disk0: [8,80] (sdf) Hitachi_HUA723020ALA641_YGGHZM7A size: 1953514552
Jan 2 12:15:03 shibe kernel: mdcmd (2): import 1 8,64 1953514552 Hitachi_HUA723020ALA641_YGH9JNMA
Jan 2 12:15:03 shibe kernel: md: import disk1: [8,64] (sde) Hitachi_HUA723020ALA641_YGH9JNMA size: 1953514552
Jan 2 12:15:03 shibe kernel: mdcmd (3): import 2 8,32 1953514552 Hitachi_HUA723020ALA641_YFGR8H5C
Jan 2 12:15:03 shibe kernel: md: import disk2: [8,32] (sdc) Hitachi_HUA723020ALA641_YFGR8H5C size: 1953514552
Jan 2 12:15:03 shibe kernel: mdcmd (4): import 3 8,112 976762552 Hitachi_HDS721010CLA332_JP2940J83MU9ZV
Jan 2 12:15:03 shibe kernel: md: import disk3: [8,112] (sdh) Hitachi_HDS721010CLA332_JP2940J83MU9ZV size: 976762552
Jan 2 12:15:03 shibe kernel: mdcmd (5): import 4 8,96 312571192 ST3320418AS_9VMFHWMQ
Jan 2 12:15:03 shibe kernel: md: import disk4: [8,96] (sdg) ST3320418AS_9VMFHWMQ size: 312571192
Jan 2 12:15:03 shibe kernel: mdcmd (6): import 5 0,0
Jan 2 12:15:03 shibe kernel: mdcmd (7): import 6 0,0
Jan 2 12:15:03 shibe emhttp: import 7 cache device: sdd
Jan 2 12:15:03 shibe emhttp: import flash device: sda
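The slot counting above can be reproduced against a saved syslog. Below is a minimal sketch; the excerpt file path and grep patterns are illustrative (not part of any Unraid tool), and the sample lines are abbreviated from the log above. An assigned slot imports with a major,minor pair plus a size, while an empty slot imports as a bare "0,0":

```shell
# Count assigned vs. empty array slots in a syslog excerpt.
# Assigned: "import <slot> <maj>,<min> <size> <name>"; empty: "import <slot> 0,0".
cat > /tmp/syslog_excerpt <<'EOF'
Jan 2 00:57:06 shibe kernel: mdcmd (1): import 0 8,80 1953514552 Hitachi_HUA723020ALA641_YGGHZM7A
Jan 2 00:57:06 shibe kernel: mdcmd (5): import 4 8,96 312571192 ST3320418AS_9VMFHWMQ
Jan 2 00:57:06 shibe kernel: mdcmd (6): import 5 0,0
Jan 2 00:57:06 shibe kernel: mdcmd (7): import 6 0,0
EOF
# Assigned slots have a size field after the major,minor pair; empty ones end at "0,0".
assigned=$(grep -Ec 'import [0-9]+ [0-9]+,[0-9]+ [0-9]+' /tmp/syslog_excerpt)
empty=$(grep -Ec 'import [0-9]+ 0,0$' /tmp/syslog_excerpt)
echo "assigned=$assigned empty=$empty"
```

Every slot grepped this way counts toward the license check when the array starts, so the goal is to get `empty` down to 0.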
Pirate43 Posted January 2, 2017 (Author)
Removed all extra slots without success. Screenshot and fresh diagnostics file attached. shibe-diagnostics-20170102-1307.zip
Squid Posted January 2, 2017
Remove the 2nd flash drive that you've got plugged in.
Pirate43 Posted January 2, 2017 (Author)
Looking through the diagnostics files I found that lsscsi.txt shows 8 devices, including 2 SanDisk Cruzer USB devices:

[0:0:0:0] disk SanDisk U3 Cruzer Micro 2.16 /dev/sda /dev/sg0
  state=running queue_depth=1 scsi_level=3 type=0 device_blocked=0 timeout=30
  dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1:1.0/host0/target0:0:0/0:0:0:0]
[1:0:0:0] disk SanDisk Cruzer 1.26 /dev/sdb /dev/sg1
  state=running queue_depth=1 scsi_level=6 type=0 device_blocked=0 timeout=30
  dir: /sys/bus/scsi/devices/1:0:0:0 [/sys/devices/pci0000:00/0000:00:13.2/usb2/2-5/2-5:1.0/host1/target1:0:0/1:0:0:0]
[4:0:0:0] disk ATA Hitachi HUA72302 A840 /dev/sdc /dev/sg2
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30
  dir: /sys/bus/scsi/devices/4:0:0:0 [/sys/devices/pci0000:00/0000:00:11.0/ata3/host4/target4:0:0/4:0:0:0]
[5:0:0:0] disk ATA INTEL SSDSA2MH16 8820 /dev/sdd /dev/sg3
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30
  dir: /sys/bus/scsi/devices/5:0:0:0 [/sys/devices/pci0000:00/0000:00:11.0/ata4/host5/target5:0:0/5:0:0:0]
[6:0:0:0] disk ATA Hitachi HUA72302 A840 /dev/sde /dev/sg4
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30
  dir: /sys/bus/scsi/devices/6:0:0:0 [/sys/devices/pci0000:00/0000:00:11.0/ata5/host6/target6:0:0/6:0:0:0]
[7:0:0:0] disk ATA Hitachi HUA72302 A840 /dev/sdf /dev/sg5
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30
  dir: /sys/bus/scsi/devices/7:0:0:0 [/sys/devices/pci0000:00/0000:00:11.0/ata6/host7/target7:0:0/7:0:0:0]
[10:0:0:0] disk ATA ST3320418AS CC45 /dev/sdg /dev/sg6
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30
  dir: /sys/bus/scsi/devices/10:0:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:02:00.0/ata9/host10/target10:0:0/10:0:0:0]
[11:0:0:0] disk ATA Hitachi HDS72101 A3MA /dev/sdh /dev/sg7
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30
  dir: /sys/bus/scsi/devices/11:0:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:02:00.0/ata10/host11/target11:0:0/11:0:0:0]

What's weird is that the two Cruzer USB devices are really just 1 flash drive; it seems to be creating some kind of ghost device that I'm not sure how to remove. The physical computer has only one USB drive attached, the one with unRAID on it. Also, removing two more (empty) devices via the webgui still doesn't let me start the array. (screenshot attached)
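A quick way to spot this kind of ghost entry is to count flash-drive lines in the saved lsscsi output. This is only a sketch: the file name and grep patterns are illustrative, and the sample lines are abbreviated from the listing above.

```shell
# Flag a possible "ghost" flash drive: more than one Cruzer entry in
# an lsscsi listing when only one physical USB stick is plugged in.
cat > /tmp/lsscsi.txt <<'EOF'
[0:0:0:0] disk SanDisk U3 Cruzer Micro 2.16 /dev/sda
[1:0:0:0] disk SanDisk Cruzer 1.26 /dev/sdb
[4:0:0:0] disk ATA Hitachi HUA72302 A840 /dev/sdc
[5:0:0:0] disk ATA INTEL SSDSA2MH16 8820 /dev/sdd
EOF
total=$(grep -c ' disk ' /tmp/lsscsi.txt)
cruzers=$(grep -c 'Cruzer' /tmp/lsscsi.txt)
if [ "$cruzers" -gt 1 ]; then
  echo "warning: $cruzers Cruzer entries for one physical flash drive"
fi
echo "total=$total cruzers=$cruzers"
```

Each SCSI entry counted here is what the license check sees as an attached device, which is why a phantom second Cruzer pushes the count over the limit.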
trurl Posted January 2, 2017
Try plugging the cruzer into another port, preferably USB2 if you have one.
xxbigfootxx Posted October 19, 2020 (edited)
I can't start my array either. I ran New Config to clear some empty drive slots and fix an issue with parity after a cable failed; now the array isn't starting, saying too many devices are added. Diagnostics added.
Edit: I can actually see that the device thinks I have about 30 devices attached? I installed an HBA card and it was working perfectly fine prior to this. 5 drives are connected to the MB and 1 to the HBA. I plan to upgrade the license to Plus eventually when funds permit.
zeus-diagnostics-20201019-1044.zip
Edited October 19, 2020 by xxbigfootxx (additional info)
Squid Posted October 19, 2020
You're allowed 6 attached storage devices on Basic, and you've got 8 excluding the flash drive:

Oct 19 10:36:16 Zeus emhttpd: WDC_WD30EZRZ-00Z5HB0_WD-WCC4N0LF45L1 (sdh) 512 5860533168
Oct 19 10:36:16 Zeus emhttpd: Samsung_SSD_860_EVO_250GB_S3Y9NF0K222430B (sdg) 512 488397168
Oct 19 10:36:16 Zeus emhttpd: ST8000DM004-2CX188_WCT3J3A7 (sdd) 512 15628053168
Oct 19 10:36:16 Zeus emhttpd: ST6000VN0033-2EE110_ZAD7N41T (sde) 512 11721045168
Oct 19 10:36:16 Zeus emhttpd: WDC_WD50EZRX-11NWHB1_WD-WX31DC45TNU3 (sdf) 512 9767541168
Oct 19 10:36:16 Zeus emhttpd: ST6000VN0033-2EE110_ZAD7MFKF (sdc) 512 11721045168
Oct 19 10:36:16 Zeus emhttpd: ST3000DM008-2DM166_Z503J4HA (sdi) 512 5860533168
Oct 19 10:36:16 Zeus emhttpd: TOSHIBA_TOSHIBA_USB_DRV_07087A1EAD549849-0:0 (sda) 512 30285824
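The same tally can be made from a saved emhttpd log excerpt. This is a sketch only: the file path and patterns are illustrative, and the sample below is abbreviated to four of the device lines above (the real diagnostics have more).

```shell
# Count attached storage devices roughly the way the licence check does:
# every (sdX) device reported by emhttpd, minus the boot flash drive.
cat > /tmp/emhttpd_devs <<'EOF'
Oct 19 10:36:16 Zeus emhttpd: WDC_WD30EZRZ-00Z5HB0_WD-WCC4N0LF45L1 (sdh) 512 5860533168
Oct 19 10:36:16 Zeus emhttpd: Samsung_SSD_860_EVO_250GB_S3Y9NF0K222430B (sdg) 512 488397168
Oct 19 10:36:16 Zeus emhttpd: ST8000DM004-2CX188_WCT3J3A7 (sdd) 512 15628053168
Oct 19 10:36:16 Zeus emhttpd: TOSHIBA_TOSHIBA_USB_DRV_07087A1EAD549849-0:0 (sda) 512 30285824
EOF
all=$(grep -c '(sd[a-z])' /tmp/emhttpd_devs)
# The boot flash is identified here by its USB_DRV name; adjust to taste.
storage=$(grep '(sd[a-z])' /tmp/emhttpd_devs | grep -cv 'USB_DRV')
echo "all=$all storage=$storage (Basic licence limit: 6)"
```

On the full log this count has to come out at 6 or fewer (excluding the boot flash) before the array will start on a Basic licence.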
itimpi Posted October 19, 2020
10 hours ago, xxbigfootxx said: Edit: I can actually see that the device thinks I have about 30 devices attached? I installed an HBA card and it was working perfectly fine prior to this. 5 drives are connected to the MB and 1 to the HBA. I plan to upgrade the license to Plus eventually when funds permit.
You have misinterpreted the syslog output. What you see is just a listing of all the possible array drive positions (regardless of licence level). The statement about it working previously suggests that some of your drives are removable. The check for the number of attached drives is carried out when starting the array, so removing such drives may allow you to start the array. You can plug removable drives in at any time after the array is started without Unraid complaining about the number of drives.