Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array


Recommended Posts

I hot swap drives in and out of my servers without stopping the array and without rebooting. Most of the time it works flawlessly.

 

But I have had two cases while preclearing when unRAID got confused about drive names and still had a previous drive listed.

 

What are the known hot swap issues?

Link to comment


I can see how that could happen, which makes it a little dangerous.  The Linux kernel does support hot-swapping, but I don't think the unRAID modules are completely in sync with the kernel.  That is, unRAID does not always know what has been re-assigned.  Occasionally, once a previously used drive symbol (sdj, sdm, etc.) has been dropped and is available again, the kernel will reassign it, and I believe unRAID will assume that that drive symbol still refers to the drive it knew about.

 

I would definitely avoid doing that.
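If you do hot swap anyway, one sanity check before doing anything destructive is to confirm which device symbol currently belongs to which serial number. A hedged example (the /dev/sdj name is just a placeholder):

# show which sdX each drive serial currently maps to
ls -l /dev/disk/by-id/ | grep -v part

# or query a single device
udevadm info --query=property --name=/dev/sdj | grep ID_SERIAL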

Link to comment

My question is in regard to UD and an external USB drive.

 

In the process of setting up CrashPlan to back up locally to an external USB drive, in addition to the cloud, I noticed that when browsing for a folder destination from within CrashPlan, it only goes as deep as the root of the disk.

 

USB is mounted by UD as: /mnt/disks/<LABEL>

On the USB drive: <LABEL>/Crashplan_Backup/<number> => my old local backup is here.

 

But when browsing through the CrashPlan app to specify the previous backup location, the folder explorer dialog (perhaps not the correct vocabulary in Linux) does not show the full path. It stops at <container_mount>/disks/<LABEL>.

 

The issue is, CrashPlan looks for

 

<Container_mount>/disks/<LABEL>/<number>

 

while the previous backup is in

 

<Container_mount_point>/disks/<LABEL>/Crashplan_Backup/<number>

 

Am I doing something wrong for the subfolder to not show up? If I move the <number> folder to the root, it quite likely would work, but I'd prefer to keep it in a subfolder for clean organization.

Link to comment


Docker containers can only see disks that were already mounted when the Docker service started. After mounting a disk you need to access from a container, you will have to go to Docker Settings and restart the Docker service.
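One more option worth knowing about, as a hedged sketch rather than the CrashPlan template's actual settings: if the host path is mapped into the container with the slave propagation flag, mounts that UD makes after the container starts will still appear inside it. The image name and container-side path below are illustrative only:

# illustrative only: image name and container path are assumptions
docker run -d --name crashplan \
  -v /mnt/disks:/unassigned:rw,slave \
  some/crashplan-image

On versions of the Unraid Docker UI that offer it, this corresponds to choosing a slave access mode (RW/Slave) for that path mapping; otherwise, restarting the Docker service as described above is the way to go.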
Link to comment

If you are using cache_dirs, any disk mounted by UD will be scanned unless it is specifically excluded or left out of the includes.  Cache_dirs will scan disks one at a time as you describe.  I've also seen that when cache_dirs does not have enough memory, it has to un-cache and re-cache directories.  You might try excluding in cache_dirs the seven drives you have mounted with UD.

 

Hi, I will check whether the behavior persists without cache_dirs and report back.

Link to comment

If you are using cache_dirs, any disk mounted by UD will be scanned unless it is specifically excluded or left out of the includes.  Cache_dirs will scan disks one at a time as you describe.  You might try excluding in cache_dirs the seven drives you have mounted with UD.

I've suggested elsewhere in the past that CacheDirs should ALWAYS be used with specified Includes.  Bonienl provides a very convenient dropdown, to select only those folders you really need cached, and no others.  Performance and RAM usage are best doing it this way.  And if you only specify the exact folders you want included, you should never have to worry about other disks being scanned.
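For example, a hedged sketch of include-only operation from the command line (the share names are placeholders, and -i is the historical include option; the Dynamix plugin builds the equivalent from its dropdown):

# cache only these user-share folders; everything else is ignored
cache_dirs -i Movies -i Music -i TV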

 

However, that was all before Unassigned disks could be in the system, and there have been code changes, so excluding unwanted drives may be necessary now (don't know for sure).

 

I've also seen that when cache_dirs does not have enough memory, it has to un-cache and re-cache directories.

Well, technically CacheDirs doesn't 'un-cache' anything, but if a user wants too many items cached and doesn't have sufficient buffer space for them, then CacheDirs is going to fill up all available cache room, and the kernel is going to dump the earliest entries (from the first drives and folders).  Then, seconds later on the next pass, CacheDirs is going to have to go to disk to reload them, keeping those drives spinning and setting up constant disk thrashing.  Again, you should only cache what you absolutely need cached, and nothing more.

 

Almost a year ago, I started rewriting CacheDirs into a plugin-friendly, event-driven version.  I made some slow progress, cutting out a lot of the old code, then discovered what a great job bonienl did in wrapping and controlling it from his Dynamix plugin.  I stopped, since it wasn't needed any more.  And then I believe you provided additional modifications and fixes, to modernize it and make it more plugin compatible.  One thing I was going to add was more control over what is cached, by adding the option to specify absolute paths.  Currently, paths are relative to /mnt/user, but if you allow the user to specify paths beginning with a slash, you can assume they are absolute, and provide more pinpoint specification of sub-folders they may want cached.  That should be simple for one of you to add, and would be helpful for certain users.

Link to comment

I've suggested elsewhere in the past that CacheDirs should ALWAYS be used with specified Includes.

Because if you don't, cache_dirs will wind up trying to handle your appdata folder, for instance, which (in my case at least) could easily have 1,000,000 files in it that do not need to be kept track of.
Link to comment

Hi

 

I use this plugin for mounting 4 x SSDs and it works very well. However, I noticed a strange issue, maybe a bug?

On the main unRAID tab under Unassigned Devices, the SMART status of each drive changes from green to grey and the temperature disappears. I need to click the drive and poll SMART for it to turn green again and display the temp. Not sure why this is; can it be fixed?

 

thank you!

Grey means the drive is spun down or on standby, and temperatures are unavailable in that state.

 

Since they are SSDs, can I prevent them from spinning down or going to standby?

 

Do you have a script defined?  If you create a script file, the drive will be monitored for spin down status and temperature.  Read the second post about best practices.

 

No, I do not have a script defined. What would a script look like that allows me to monitor the SMART/Temp even if the SSD is unmounted or in standby? I really do appreciate your guidance with this.

 

thank you.

 

The check is for the existence of a script file.  It can be empty or use the default script that really does nothing.  Just click on the edit script icon, select the default script, and then save it.

 

This was done because some users want a "hot standby" disk and don't want it checked for spin up status and temperature.  The existence of a script file implies that you are going to use the device and want it monitored.
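For illustration, a UD device script does not need to do anything; a minimal sketch along the lines of the default looks roughly like this (the ACTION/DEVICE/MOUNTPOINT variable names follow the plugin's default template and should be checked against the version you have installed):

#!/bin/bash
# UD invokes this script with ACTION set to the event (e.g. ADD when the
# device is mounted, REMOVE when it is unplugged) and details such as
# DEVICE and MOUNTPOINT. An empty case is enough to enable monitoring.
case "$ACTION" in
  'ADD' )
    # run a backup, start a container, etc. -- or do nothing
  ;;
  'REMOVE' )
    # device was unplugged
  ;;
esac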

 

A lot of the support questions coming up here can be easily resolved by reading my second post about "best practices" and looking at the UD log file.  I realize everyone is in a hurry these days and wants a quick answer, but the first two posts are an easy read and will answer a lot of your questions.  If I have not been clear enough, let me know and I will elaborate as needed.

 

I finally got around to trying this again after having many other issues with my unRAID system. The fact is, my SSDs still show a grey status icon even though I select the default script and save it. The SSDs are mounted and have VM images on them, although the VM is powered off. I thought the default script kept the drive alive (hot)?

Link to comment

The default script will enable the drive standby check and temperature display, but the drive can still go into standby mode in 15 minutes.

 

Does the temperature of the SSD show?

 

Nope, there is an asterisk where the temp usually is. Is there a way to disable standby, or increase it from 15 minutes to something like 180 minutes?

Link to comment


 

Go to a command line and give the output of the following commands:

 

smartctl -A -d sat,12 /dev/sdx

 

hdparm -C /dev/sdx

 

Where sdx is the device.
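As for disabling or extending standby: if the device honors the standard ATA standby timer, it can usually be adjusted with hdparm. A hedged sketch (whether UD or the controller overrides this setting would need testing on your hardware):

# 0 disables the drive's own standby timer
hdparm -S 0 /dev/sdx

# values 241-251 encode units of 30 minutes, so 246 = 6 x 30 min = 180 minutes
hdparm -S 246 /dev/sdx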

Link to comment


 

Here you go...

 

----------------------------

 

root@MOUNRAID01:/dev# hdparm -C /dev/sdb

 

/dev/sdb:

drive state is:  active/idle

root@MOUNRAID01:/dev# hdparm -C /dev/sdd

 

/dev/sdd:

drive state is:  standby

root@MOUNRAID01:/dev# hdparm -C /dev/sde

 

/dev/sde:

drive state is:  standby

root@MOUNRAID01:/dev# hdparm -C /dev/sdc

 

/dev/sdc:

drive state is:  active/idle

root@MOUNRAID01:/dev# hdparm -C /dev/sdf

 

/dev/sdf:

drive state is:  standby

root@MOUNRAID01:/dev# smartctl -A -d sat,12 /dev/sdb

smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.17-unRAID] (local build)

Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

 

=== START OF READ SMART DATA SECTION ===

SMART Attributes Data Structure revision number: 1

Vendor Specific SMART Attributes with Thresholds:

ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  5 Reallocated_Sector_Ct  0x0033  100  100  010    Pre-fail  Always      -      0

  9 Power_On_Hours          0x0032  095  095  000    Old_age  Always      -      22819

12 Power_Cycle_Count      0x0032  099  099  000    Old_age  Always      -      397

177 Wear_Leveling_Count    0x0013  094  094  000    Pre-fail  Always      -      63

179 Used_Rsvd_Blk_Cnt_Tot  0x0013  100  100  010    Pre-fail  Always      -      0

181 Program_Fail_Cnt_Total  0x0032  100  100  010    Old_age  Always      -      0

182 Erase_Fail_Count_Total  0x0032  100  100  010    Old_age  Always      -      0

183 Runtime_Bad_Block      0x0013  100  100  010    Pre-fail  Always      -      0

187 Reported_Uncorrect      0x0032  100  100  000    Old_age  Always      -      0

190 Airflow_Temperature_Cel 0x0032  063  036  000    Old_age  Always      -      37

195 Hardware_ECC_Recovered  0x001a  200  200  000    Old_age  Always      -      0

199 UDMA_CRC_Error_Count    0x003e  099  099  000    Old_age  Always      -      11

235 Unknown_Attribute      0x0012  099  099  000    Old_age  Always      -      352

241 Total_LBAs_Written      0x0032  099  099  000    Old_age  Always      -      37190850732

 

root@MOUNRAID01:/dev# smartctl -A -d sat,12 /dev/sdc

smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.17-unRAID] (local build)

Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

 

=== START OF READ SMART DATA SECTION ===

SMART Attributes Data Structure revision number: 1

Vendor Specific SMART Attributes with Thresholds:

ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  1 Raw_Read_Error_Rate    0x0000  100  100  000    Old_age  Offline      -      0

  5 Reallocated_Sector_Ct  0x0000  100  100  000    Old_age  Offline      -      0

  9 Power_On_Hours          0x0000  100  100  000    Old_age  Offline      -      2

12 Power_Cycle_Count      0x0000  100  100  000    Old_age  Offline      -      134

160 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      0

161 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      23

163 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      322

148 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      988

149 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      24

150 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      0

151 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      14

164 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      207

165 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      4

166 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      0

167 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      0

169 Unknown_Attribute      0x0000  100  100  001    Old_age  Offline      -      100

181 Program_Fail_Cnt_Total  0x0000  100  100  000    Old_age  Offline      -      0

182 Erase_Fail_Count_Total  0x0000  100  100  000    Old_age  Offline      -      0

192 Power-Off_Retract_Count 0x0000  100  100  000    Old_age  Offline      -      9

194 Temperature_Celsius    0x0000  100  100  070    Old_age  Offline      -      30 (38 40 40 35 0)

199 UDMA_CRC_Error_Count    0x0000  100  100  000    Old_age  Offline      -      0

232 Available_Reservd_Space 0x0000  100  100  000    Old_age  Offline      -      100

241 Total_LBAs_Written      0x0000  100  100  000    Old_age  Offline      -      1244

242 Total_LBAs_Read        0x0000  100  100  000    Old_age  Offline      -      705

245 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      1222

246 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      1976

247 Unknown_Attribute      0x0000  100  100  000    Old_age  Offline      -      0

 

root@MOUNRAID01:/dev# smartctl -A -d sat,12 /dev/sdd

smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.17-unRAID] (local build)

Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

 

=== START OF READ SMART DATA SECTION ===

SMART Attributes Data Structure revision number: 1

Vendor Specific SMART Attributes with Thresholds:

ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  5 Reallocated_Sector_Ct  0x0033  100  100  010    Pre-fail  Always      -      0

  9 Power_On_Hours          0x0032  097  097  000    Old_age  Always      -      12629

12 Power_Cycle_Count      0x0032  099  099  000    Old_age  Always      -      418

177 Wear_Leveling_Count    0x0013  094  094  000    Pre-fail  Always      -      61

179 Used_Rsvd_Blk_Cnt_Tot  0x0013  100  100  010    Pre-fail  Always      -      0

181 Program_Fail_Cnt_Total  0x0032  100  100  010    Old_age  Always      -      0

182 Erase_Fail_Count_Total  0x0032  100  100  010    Old_age  Always      -      0

183 Runtime_Bad_Block      0x0013  100  100  010    Pre-fail  Always      -      0

187 Reported_Uncorrect      0x0032  100  100  000    Old_age  Always      -      0

190 Airflow_Temperature_Cel 0x0032  059  035  000    Old_age  Always      -      41

195 Hardware_ECC_Recovered  0x001a  200  200  000    Old_age  Always      -      0

199 UDMA_CRC_Error_Count    0x003e  100  100  000    Old_age  Always      -      0

235 Unknown_Attribute      0x0012  099  099  000    Old_age  Always      -      341

241 Total_LBAs_Written      0x0032  099  099  000    Old_age  Always      -      34838096662

 

root@MOUNRAID01:/dev# smartctl -A -d sat,12 /dev/sde

smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.17-unRAID] (local build)

Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

 

=== START OF READ SMART DATA SECTION ===

SMART Attributes Data Structure revision number: 1

Vendor Specific SMART Attributes with Thresholds:

ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  5 Reallocated_Sector_Ct  0x0033  100  100  010    Pre-fail  Always      -      0

  9 Power_On_Hours          0x0032  099  099  000    Old_age  Always      -      170

12 Power_Cycle_Count      0x0032  099  099  000    Old_age  Always      -      5

177 Wear_Leveling_Count    0x0013  100  100  000    Pre-fail  Always      -      0

179 Used_Rsvd_Blk_Cnt_Tot  0x0013  100  100  010    Pre-fail  Always      -      0

181 Program_Fail_Cnt_Total  0x0032  100  100  010    Old_age  Always      -      0

182 Erase_Fail_Count_Total  0x0032  100  100  010    Old_age  Always      -      0

183 Runtime_Bad_Block      0x0013  100  100  010    Pre-fail  Always      -      0

187 Reported_Uncorrect      0x0032  100  100  000    Old_age  Always      -      0

190 Airflow_Temperature_Cel 0x0032  066  054  000    Old_age  Always      -      34

195 Hardware_ECC_Recovered  0x001a  200  200  000    Old_age  Always      -      0

199 UDMA_CRC_Error_Count    0x003e  100  100  000    Old_age  Always      -      0

235 Unknown_Attribute      0x0012  099  099  000    Old_age  Always      -      3

241 Total_LBAs_Written      0x0032  099  099  000    Old_age  Always      -      205105535

 

root@MOUNRAID01:/dev# smartctl -A -d sat,12 /dev/sdf

smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.17-unRAID] (local build)

Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

 

=== START OF READ SMART DATA SECTION ===

SMART Attributes Data Structure revision number: 1

Vendor Specific SMART Attributes with Thresholds:

ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE

  5 Reallocated_Sector_Ct  0x0033  100  100  010    Pre-fail  Always      -      0

  9 Power_On_Hours          0x0032  098  098  000    Old_age  Always      -      8563

12 Power_Cycle_Count      0x0032  099  099  000    Old_age  Always      -      88

177 Wear_Leveling_Count    0x0013  099  099  000    Pre-fail  Always      -      5

179 Used_Rsvd_Blk_Cnt_Tot  0x0013  100  100  010    Pre-fail  Always      -      0

181 Program_Fail_Cnt_Total  0x0032  100  100  010    Old_age  Always      -      0

182 Erase_Fail_Count_Total  0x0032  100  100  010    Old_age  Always      -      0

183 Runtime_Bad_Block      0x0013  100  100  010    Pre-fail  Always      -      0

187 Reported_Uncorrect      0x0032  100  100  000    Old_age  Always      -      0

190 Airflow_Temperature_Cel 0x0032  062  039  000    Old_age  Always      -      38

195 Hardware_ECC_Recovered  0x001a  200  200  000    Old_age  Always      -      0

199 UDMA_CRC_Error_Count    0x003e  100  100  000    Old_age  Always      -      0

235 Unknown_Attribute      0x0012  099  099  000    Old_age  Always      -      86

241 Total_LBAs_Written      0x0032  099  099  000    Old_age  Always      -      7595253818
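For reference, the temperature UD displays corresponds to the Airflow_Temperature_Cel / Temperature_Celsius raw values above. A one-liner along these lines pulls it out of that output (the awk field index is assumed from the column layout shown):

smartctl -A -d sat,12 /dev/sdb | awk '/Airflow_Temperature_Cel|Temperature_Celsius/ {print $10}'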

 

Link to comment

I forgot to let you know that the update did fix the issue.  With a RAID volume presented under its default name (with the '#' sign in it), the display works fine without the errant text.

 

 

Several versions ago I changed the way UD gets the unRAID drives and determines which drives are unassigned, to make it more robust.  I have some ideas on why this is happening.

 

In order to troubleshoot this issue, I will need more information so I can try to reproduce the error.

 

If you can get the drive back to producing the error and get the following information from a command line, I'll see what I can do to try to reproduce the error.

 

ls /dev/disk/by-id

 

Also post the /usr/local/emhttp/state/disks.ini file.

 

These two pieces of information should help me troubleshoot.

 

ls /dev/disk/by-id:

root@media01:~# ls /dev/disk/by-id/
ata-MKNSSDCR480GB-G2_MK151112AS2215611@            scsi-ST8000AS0002-1NA_Z84030D2-part1@
ata-MKNSSDCR480GB-G2_MK151112AS2215611-part1@      scsi-ST8000AS0002-1NA_Z8405313@
ata-OCZ-TRION100_952B54S2KMCX@                     scsi-ST8000AS0002-1NA_Z8405313-part1@
ata-OCZ-TRION100_952B54S2KMCX-part1@               scsi-WD30EFRX-68AX9N0_WD-WMC1T3481101@
scsi-ARC-1231-VOL#06_0000003895606702@             scsi-WD30EFRX-68AX9N0_WD-WMC1T3481101-part1@
scsi-ARC-1231-VOL_00_0000003274592794@             scsi-WD5000HHTZ-04N21_WD-WX61E62L3287@
scsi-ARC-1231-VOL_00_0000003274592794-part1@       scsi-WD5000HHTZ-04N21_WD-WX61E62L3287-part1@
scsi-SATA_MKNSSDCR480GB-GMK151112AS2215611@        usb-MUSHKIN_MKNUFDAM16GB_070B4262D199D096-0:0@
scsi-SATA_MKNSSDCR480GB-GMK151112AS2215611-part1@  usb-MUSHKIN_MKNUFDAM16GB_070B4262D199D096-0:0-part1@
scsi-SATA_OCZ-TRION100_952B54S2KMCX@               wwn-0x58889141000739ff@
scsi-SATA_OCZ-TRION100_952B54S2KMCX-part1@         wwn-0x58889141000739ff-part1@
scsi-ST8000AS0002-1NA_Z8402DA1@                    wwn-0x5e83a972002b8d00@
scsi-ST8000AS0002-1NA_Z8402DA1-part1@              wwn-0x5e83a972002b8d00-part1@
scsi-ST8000AS0002-1NA_Z84030D2@

 

And disks.ini:

 

["parity"]
idx="0"
name="parity"
device="sdb"
id="ARC-1231-VOL_00_0000003274592794"
rotational="1"
size="7814036428"
status="DISK_OK"
temp="*"
numReads="694304"
numWrites="545097"
numErrors="0"
format="GPT: 4K-aligned"
type="Parity"
comment=""
color="green-on"
exportable="no"
fsStatus="-"
fsColor="grey-off"
spindownDelay="-1"
spinupGroup="host1"
deviceSb=""
idSb="ARC-1231-VOL_00_0000003274592794"
sizeSb="7814036428"
["disk1"]
idx="1"
name="disk1"
device="sde"
id="ST8000AS0002-1NA_Z84030D2"
rotational="1"
size="7814026532"
status="DISK_OK"
temp="*"
numReads="13018140"
numWrites="85082"
numErrors="0"
format="GPT: 4K-aligned"
type="Data"
comment=""
color="green-on"
exportable="no"
fsStatus="Mounted"
fsColor="green-on"
fsError=""
fsType="xfs"
fsSize="7811939620"
fsFree="487010472"
spindownDelay="-1"
spinupGroup="host1"
deviceSb="md1"
idSb="ST8000AS0002-1NA_Z84030D2"
sizeSb="7814026532"
["disk2"]
idx="2"
name="disk2"
device="sdf"
id="ST8000AS0002-1NA_Z8405313"
rotational="1"
size="7814026532"
status="DISK_OK"
temp="*"
numReads="16001758"
numWrites="450315"
numErrors="0"
format="GPT: 4K-aligned"
type="Data"
comment=""
color="green-on"
exportable="no"
fsStatus="Mounted"
fsColor="green-on"
fsError=""
fsType="xfs"
fsSize="7811939620"
fsFree="836276820"
spindownDelay="-1"
spinupGroup="host1"
deviceSb="md2"
idSb="ST8000AS0002-1NA_Z8405313"
sizeSb="7814026532"
["disk3"]
idx="3"
name="disk3"
device="sdg"
id="ST8000AS0002-1NA_Z8402DA1"
rotational="1"
size="7814026532"
status="DISK_OK"
temp="*"
numReads="12613446"
numWrites="14560"
numErrors="0"
format="GPT: 4K-aligned"
type="Data"
comment=""
color="green-on"
exportable="no"
fsStatus="Mounted"
fsColor="green-on"
fsError=""
fsType="xfs"
fsSize="7811939620"
fsFree="773954064"
spindownDelay="-1"
spinupGroup="host1"
deviceSb="md3"
idSb="ST8000AS0002-1NA_Z8402DA1"
sizeSb="7814026532"
["disk4"]
idx="4"
name="disk4"
device="sdc"
id="WD30EFRX-68AX9N0_WD-WMC1T3481101"
rotational="1"
size="2930266532"
status="DISK_OK"
temp="*"
numReads="2414973"
numWrites="4608"
numErrors="0"
format="GPT: 4K-aligned"
type="Data"
comment=""
color="green-on"
exportable="no"
fsStatus="Mounted"
fsColor="green-on"
fsError=""
fsType="xfs"
fsSize="2928835740"
fsFree="963815736"
spindownDelay="-1"
spinupGroup="host1"
deviceSb="md4"
idSb="WD30EFRX-68AX9N0_WD-WMC1T3481101"
sizeSb="2930266532"
["cache"]
idx="24"
name="cache"
device="sdd"
id="WD5000HHTZ-04N21_WD-WX61E62L3287"
rotational="1"
size="488386552"
status="DISK_OK"
temp="*"
numReads="3816162"
numWrites="909037"
numErrors="0"
format="MBR: 4K-aligned"
type="Cache"
comment=""
color="green-on"
exportable="no"
fsStatus="Mounted"
fsColor="yellow-on"
fsError=""
fsType="xfs"
fsSize="488148084"
fsFree="482845092"
spindownDelay="-1"
spinupGroup="host1"
deviceSb="sdd1"
idSb="WD5000HHTZ-04N21_WD-WX61E62L3287"
sizeSb="488386552"
uuid=""
["flash"]
idx="25"
name="flash"
device="sda"
id="MKNUFDAM16GB"
rotational="0"
size="15141472"
status="DISK_OK"
temp="*"
numReads="1838"
numWrites="2709"
numErrors="0"
format="unknown"
type="Flash"
comment="unRAID Sever OS boot device"
color="green-on"
exportable="yes"
fsStatus="Mounted"
fsColor="yellow-on"
fsError=""
fsType="vfat"
fsSize="15133280"
fsFree="14926880"

 

I found the problem. It is with a '#' in the device id string.

 

'scsi-ARC-1231-VOL#06_0000003895606702@'

 

It was causing a failure in a PHP string function that finds the partitions on a device.  I'll release a new version this evening with a fix.
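For anyone who wants to check whether their own device ids contain a character like that, something along these lines will show it (the grep pattern is just an example):

ls /dev/disk/by-id/ | grep '#'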

Link to comment

Dan, purely academic right now, but what do you think about a btrfs pool managed outside the array?

 

For those who are thinking of jumping in and are going to point me in the direction of the Cache drive - let's just say I know about it.

 

I started a thread in the lounge prompting a discussion as to whether I "needed" a disk managed outside the array (http://lime-technology.com/forum/index.php?topic=46811.0) and as usual the big hitters came back in fine form with some good arguments to get the thought juices going. When I went to bed last night I'd almost convinced myself that I didn't need to do this any more (I'll let you read the posts in that thread if you're interested rather than repeat them here). Then I woke up and read RobJ's comment:

 

I like the idea of a separate drive or pool, call it 'Apps' for want of something better, with the primary distinction that it starts and stops with the system, unlike the Cache drive or pool which goes up and down with the array.  It makes it easier to manage always-on apps.  It also should be easy to implement, because it just uses a stripped down version of the Cache drive/pool code base, minus the Share stuff and 'Cache: Only' stuff.  An officially supported Apps drive/pool makes things like pfsense easier.  It also makes management more intuitive, both the Cache and Apps drives/pools are optional and work almost the same, but one starts and stops with the array, the other doesn't.

 

I've swung again (sat here over my coffee before I get into mid-month project finance reviews) and I'm sharing RobJ's view, which brings me back to this excellent plugin, which of course I am currently using.

 

So, long story short (I know - too late  ;)) - do you think Unassigned Devices could be developed to manage a btrfs array? Would LT allow for the stripped down version (as RobJ referred to it) of the Cache Pool code to make its way into this Plugin?

 

Just thinking out loud ....

 

 

Link to comment


 

It does appear that this was asked in the previous thread here:

 

http://lime-technology.com/forum/index.php?topic=38635.msg376126;topicseen#msg376126

 

The response (as I interpret it) was that if a btrfs pool is set up beforehand and one of the disks is mounted, then the entire pool is mounted. So far it appears to be untested, as this is where the discussion appeared to end.
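That matches how btrfs generally behaves: once the kernel has scanned all member devices, mounting any one of them brings up the whole multi-device filesystem. A hedged sketch of doing it by hand, with placeholder device names and mount point:

# make the kernel aware of all btrfs member devices
btrfs device scan

# mounting one member mounts the whole pool
mkdir -p /mnt/pool
mount /dev/sdx1 /mnt/pool

# or name the members explicitly
mount -o device=/dev/sdx1,device=/dev/sdy1 /dev/sdx1 /mnt/pool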

Link to comment


 

I'm not sure it would be appropriate for me to butt into the conversation you seem to be having with yourself.

Link to comment

Mounting a btrfs pool would be beyond the scope of what I feel is the goal of UD.  I would think this capability would be better served if LT were to incorporate the functionality into unRAID.

 

I see that what is being requested is the ability to set up a btrfs pool that is mounted/unmounted on system startup/shutdown rather than on array start/stop.  I believe there are some issues with starting and stopping Docker containers and VMs that need to be handled by unRAID, and not the UD plugin.

Link to comment


 

Awwww. Spoil Sport. It would have been really nice to mount a protected pool outside the array!  8)

Link to comment

@dlandon, a question about using UD for formatting external devices.

 

I used this feature yesterday to format an old laptop drive in NTFS.  I then backed up a load of data from the server onto it.  When I plugged the hard drive into a Windows 10 laptop, however, the drive did not show up in My Computer. Disk Management showed the drive as present, but with no drive letter assigned and no recognised file system.  I then re-formatted it in NTFS and re-copied everything over from the server.

 

Just wondering how UD formats drives in NTFS, and whether it is some variant of the file system that isn't 100% Windows compatible?

Thanks.
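If it happens again, it might be worth checking from the Unraid command line how the disk was actually partitioned and formatted (partition table type, partition layout, filesystem signature) before plugging it into Windows. A hedged example with placeholder device names:

# partition table type and layout
fdisk -l /dev/sdx

# filesystem signature on the partition
blkid /dev/sdx1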

Link to comment
