brainbone


Posts posted by brainbone

  1. I understand that port multipliers are generally discouraged for use with Unraid -- most discussion I can find about them ends with a recommendation to use an alternative instead.

     

    I'd like to add some more SSDs to an Unraid box, but all PCIe/NVMe/SAS/SATA ports are currently exhausted, and I'd like to avoid replacing the motherboard (ASRock X470 Taichi with 8 onboard SATA ports and an IBM M1015 / SAS2008 SAS HBA).  Unfortunately, this puts me in the position of considering a SAS/SATA port multiplier, and I'm hoping someone has had success finding one that works.

     

    I'm thinking that if I could find a good multiplier/expander, I could move some of my 15 array drives to it, opening up some non-expanded SATA ports for more SSDs.

     

    I've used IBM 46M0997 SAS expander cards with IBM M1015 HBAs in the past (not on Unraid) with good results, but they unfortunately require a PCIe slot, which I don't have available.  Does something similar exist that's known to work well with Unraid and doesn't require a PCIe slot, or am I looking for a unicorn?  Perhaps replacing the M1015 with a 9201-16i would be a better option?

     

     

    Edit:  After thinking this through, I think the obvious answer is something like the 9201-16i -- not sure why I got hung up on a port expander, probably because I have some 46M0997s sitting around.  Does anyone know if the 9201-16i has trouble at PCIe x4?

     

  2. What's the correct way to rebuild a drive marked as disabled that passes an extended SMART test, without losing the emulated data on it?    Ideally, I'd like to preclear the drive as a test before re-adding it.

     

    Are these the correct steps?:

    1. Stop array

    2. Set device to "No device" (this is the step that concerns me.)

    3. Start the array (hopefully my disk 13 will still be emulated even after marking no-device?)

    4. Preclear the now un-assigned disk

    5. Stop the array

    6. Set disk 13 to the pre-cleared disk

    7. Start the array and let Unraid rebuild it.
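
    For reference, the extended SMART test mentioned above can be started and checked from the Unraid console with smartctl (the device path below is a placeholder -- substitute your disk from the Main tab):

```shell
# Start an extended (long) offline self-test; /dev/sdX is a placeholder
smartctl -t long /dev/sdX

# Once it finishes (978 minutes recommended polling on this 10TB drive),
# review the self-test log and look for "Completed without error"
smartctl -l selftest /dev/sdX
```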

     

    Edit:  Here's my "PASSED" SMART report for the drive:


    smartctl 7.1 2019-12-30 r5022 [x86_64-linux-4.19.107-Unraid] (local build)
    Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
    
    === START OF INFORMATION SECTION ===
    Model Family:     Seagate IronWolf
    Device Model:     ST10000VN0008-2JJ101
    Serial Number:    ZPW0GTB8
    LU WWN Device Id: 5 000c50 0c775e6c2
    Firmware Version: SC60
    User Capacity:    10,000,831,348,736 bytes [10.0 TB]
    Sector Sizes:     512 bytes logical, 4096 bytes physical
    Rotation Rate:    7200 rpm
    Form Factor:      3.5 inches
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   ACS-4 (minor revision not indicated)
    SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
    Local Time is:    Sun Aug 22 10:25:29 2021 CDT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    AAM feature is:   Unavailable
    APM feature is:   Unavailable
    Rd look-ahead is: Enabled
    Write cache is:   Enabled
    DSN feature is:   Disabled
    ATA Security is:  Disabled, frozen [SEC2]
    Write SCT (Get) Feature Control Command failed: scsi error badly formed scsi parameters
    Wt Cache Reorder: Unknown (SCT Feature Control command failed)
    
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    
    General SMART Values:
    Offline data collection status:  (0x82)	Offline data collection activity
    					was completed without error.
    					Auto Offline Data Collection: Enabled.
    Self-test execution status:      (   0)	The previous self-test routine completed
    					without error or no self-test has ever 
    					been run.
    Total time to complete Offline 
    data collection: 		(  575) seconds.
    Offline data collection
    capabilities: 			 (0x7b) SMART execute Offline immediate.
    					Auto Offline data collection on/off support.
    					Suspend Offline collection upon new
    					command.
    					Offline surface scan supported.
    					Self-test supported.
    					Conveyance Self-test supported.
    					Selective Self-test supported.
    SMART capabilities:            (0x0003)	Saves SMART data before entering
    					power-saving mode.
    					Supports SMART auto save timer.
    Error logging capability:        (0x01)	Error logging supported.
    					General Purpose Logging supported.
    Short self-test routine 
    recommended polling time: 	 (   1) minutes.
    Extended self-test routine
    recommended polling time: 	 ( 978) minutes.
    Conveyance self-test routine
    recommended polling time: 	 (   2) minutes.
    SCT capabilities: 	       (0x50bd)	SCT Status supported.
    					SCT Error Recovery Control supported.
    					SCT Feature Control supported.
    					SCT Data Table supported.
    
    SMART Attributes Data Structure revision number: 10
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
      1 Raw_Read_Error_Rate     POSR--   083   064   044    -    183983159
      3 Spin_Up_Time            PO----   096   096   000    -    0
      4 Start_Stop_Count        -O--CK   100   100   020    -    3
      5 Reallocated_Sector_Ct   PO--CK   100   100   010    -    0
      7 Seek_Error_Rate         POSR--   079   060   045    -    75627117
      9 Power_On_Hours          -O--CK   095   095   000    -    4962
     10 Spin_Retry_Count        PO--C-   100   100   097    -    0
     12 Power_Cycle_Count       -O--CK   100   100   020    -    3
     18 Unknown_Attribute       PO-R--   100   100   050    -    0
    187 Reported_Uncorrect      -O--CK   100   100   000    -    0
    188 Command_Timeout         -O--CK   100   100   000    -    0
    190 Airflow_Temperature_Cel -O---K   065   049   040    -    35 (Min/Max 33/43)
    192 Power-Off_Retract_Count -O--CK   100   100   000    -    0
    193 Load_Cycle_Count        -O--CK   091   091   000    -    19009
    194 Temperature_Celsius     -O---K   035   044   000    -    35 (0 11 0 0 0)
    195 Hardware_ECC_Recovered  -O-RC-   083   064   000    -    183983159
    197 Current_Pending_Sector  -O--C-   100   100   000    -    0
    198 Offline_Uncorrectable   ----C-   100   100   000    -    0
    199 UDMA_CRC_Error_Count    -OSRCK   200   200   000    -    0
    200 Multi_Zone_Error_Rate   PO---K   100   100   001    -    0
    240 Head_Flying_Hours       ------   100   253   000    -    688 (97 61 0)
    241 Total_LBAs_Written      ------   100   253   000    -    33688597376
    242 Total_LBAs_Read         ------   100   253   000    -    174173883275
                                ||||||_ K auto-keep
                                |||||__ C event count
                                ||||___ R error rate
                                |||____ S speed/performance
                                ||_____ O updated online
                                |______ P prefailure warning
    
    General Purpose Log Directory Version 1
    SMART           Log Directory Version 1 [multi-sector log support]
    Address    Access  R/W   Size  Description
    0x00       GPL,SL  R/O      1  Log Directory
    0x01           SL  R/O      1  Summary SMART error log
    0x02           SL  R/O      5  Comprehensive SMART error log
    0x03       GPL     R/O      5  Ext. Comprehensive SMART error log
    0x04       GPL     R/O    256  Device Statistics log
    0x04       SL      R/O      8  Device Statistics log
    0x06           SL  R/O      1  SMART self-test log
    0x07       GPL     R/O      1  Extended self-test log
    0x08       GPL     R/O      2  Power Conditions log
    0x09           SL  R/W      1  Selective self-test log
    0x0a       GPL     R/W      8  Device Statistics Notification
    0x0c       GPL     R/O   2048  Pending Defects log
    0x10       GPL     R/O      1  NCQ Command Error log
    0x11       GPL     R/O      1  SATA Phy Event Counters log
    0x13       GPL     R/O      1  SATA NCQ Send and Receive log
    0x15       GPL     R/W      1  Rebuild Assist log
    0x21       GPL     R/O      1  Write stream error log
    0x22       GPL     R/O      1  Read stream error log
    0x24       GPL     R/O    768  Current Device Internal Status Data log
    0x2f       GPL     -        1  Set Sector Configuration
    0x30       GPL,SL  R/O      9  IDENTIFY DEVICE data log
    0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
    0xa1       GPL,SL  VS      24  Device vendor specific log
    0xa2       GPL     VS   16320  Device vendor specific log
    0xa4       GPL,SL  VS     160  Device vendor specific log
    0xa6       GPL     VS     192  Device vendor specific log
    0xa8-0xa9  GPL,SL  VS     136  Device vendor specific log
    0xab       GPL     VS       1  Device vendor specific log
    0xad       GPL     VS      16  Device vendor specific log
    0xbe-0xbf  GPL     VS   65535  Device vendor specific log
    0xc1       GPL,SL  VS       8  Device vendor specific log
    0xc3       GPL,SL  VS      32  Device vendor specific log
    0xc9       GPL,SL  VS       8  Device vendor specific log
    0xca       GPL,SL  VS      16  Device vendor specific log
    0xd1       GPL     VS     336  Device vendor specific log
    0xd2       GPL     VS   10000  Device vendor specific log
    0xd4       GPL     VS    2048  Device vendor specific log
    0xda       GPL,SL  VS       1  Device vendor specific log
    0xe0       GPL,SL  R/W      1  SCT Command/Status
    0xe1       GPL,SL  R/W      1  SCT Data Transfer
    
    SMART Extended Comprehensive Error Log Version: 1 (5 sectors)
    No Errors Logged
    
    SMART Extended Self-test Log Version: 1 (1 sectors)
    Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Extended offline    Completed without error       00%      4954         -
    
    SMART Selective self-test log data structure revision number 1
     SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
        1        0        0  Not_testing
        2        0        0  Not_testing
        3        0        0  Not_testing
        4        0        0  Not_testing
        5        0        0  Not_testing
    Selective self-test flags (0x0):
      After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.
    
    SCT Status Version:                  3
    SCT Version (vendor specific):       522 (0x020a)
    Device State:                        Active (0)
    Current Temperature:                    35 Celsius
    Power Cycle Min/Max Temperature:     33/43 Celsius
    Lifetime    Min/Max Temperature:     11/51 Celsius
    Under/Over Temperature Limit Count:   0/24
    SMART Status:                        0xc24f (PASSED)
    Vendor specific:
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    00 00 00 00 03 00 00 00 00 00 00 00 00 00 00 00
    
    SCT Temperature History Version:     2
    Temperature Sampling Period:         3 minutes
    Temperature Logging Interval:        59 minutes
    Min/Max recommended Temperature:     10/25 Celsius
    Min/Max Temperature Limit:            5/70 Celsius
    Temperature History Size (Index):    128 (25)
    
    Index    Estimated Time   Temperature Celsius
      26    2021-08-17 04:36    37  ******************
      27    2021-08-17 05:35    37  ******************
      28    2021-08-17 06:34    38  *******************
      29    2021-08-17 07:33    36  *****************
      30    2021-08-17 08:32    36  *****************
      31    2021-08-17 09:31    36  *****************
      32    2021-08-17 10:30    37  ******************
      33    2021-08-17 11:29    36  *****************
      34    2021-08-17 12:28    36  *****************
      35    2021-08-17 13:27    35  ****************
      36    2021-08-17 14:26    37  ******************
      37    2021-08-17 15:25    36  *****************
      38    2021-08-17 16:24    35  ****************
      39    2021-08-17 17:23    35  ****************
      40    2021-08-17 18:22    36  *****************
      41    2021-08-17 19:21    36  *****************
      42    2021-08-17 20:20    36  *****************
      43    2021-08-17 21:19    37  ******************
     ...    ..(  9 skipped).    ..  ******************
      53    2021-08-18 07:09    37  ******************
      54    2021-08-18 08:08    36  *****************
      55    2021-08-18 09:07    36  *****************
      56    2021-08-18 10:06    37  ******************
      57    2021-08-18 11:05    38  *******************
      58    2021-08-18 12:04    36  *****************
     ...    ..(  5 skipped).    ..  *****************
      64    2021-08-18 17:58    36  *****************
      65    2021-08-18 18:57    37  ******************
      66    2021-08-18 19:56    37  ******************
      67    2021-08-18 20:55    37  ******************
      68    2021-08-18 21:54    38  *******************
      69    2021-08-18 22:53    37  ******************
     ...    ..(  8 skipped).    ..  ******************
      78    2021-08-19 07:44    37  ******************
      79    2021-08-19 08:43    38  *******************
      80    2021-08-19 09:42    37  ******************
      81    2021-08-19 10:41    37  ******************
      82    2021-08-19 11:40    36  *****************
      83    2021-08-19 12:39    37  ******************
      84    2021-08-19 13:38    36  *****************
     ...    ..(  2 skipped).    ..  *****************
      87    2021-08-19 16:35    36  *****************
      88    2021-08-19 17:34    38  *******************
      89    2021-08-19 18:33    37  ******************
     ...    ..( 12 skipped).    ..  ******************
     102    2021-08-20 07:20    37  ******************
     103    2021-08-20 08:19    39  ********************
     104    2021-08-20 09:18    38  *******************
     105    2021-08-20 10:17    39  ********************
     106    2021-08-20 11:16    37  ******************
     107    2021-08-20 12:15    37  ******************
     108    2021-08-20 13:14    36  *****************
     109    2021-08-20 14:13    36  *****************
     110    2021-08-20 15:12    36  *****************
     111    2021-08-20 16:11    37  ******************
     ...    ..(  3 skipped).    ..  ******************
     115    2021-08-20 20:07    37  ******************
     116    2021-08-20 21:06    38  *******************
     117    2021-08-20 22:05    37  ******************
     ...    ..(  2 skipped).    ..  ******************
     120    2021-08-21 01:02    37  ******************
     121    2021-08-21 02:01    38  *******************
     122    2021-08-21 03:00    38  *******************
     123    2021-08-21 03:59    37  ******************
     124    2021-08-21 04:58    38  *******************
     125    2021-08-21 05:57    37  ******************
     126    2021-08-21 06:56    37  ******************
     127    2021-08-21 07:55    38  *******************
       0    2021-08-21 08:54    38  *******************
       1    2021-08-21 09:53     ?  -
       2    2021-08-21 10:52    37  ******************
       3    2021-08-21 11:51    43  ************************
     ...    ..(  5 skipped).    ..  ************************
       9    2021-08-21 17:45    43  ************************
      10    2021-08-21 18:44    42  ***********************
      11    2021-08-21 19:43    42  ***********************
      12    2021-08-21 20:42    42  ***********************
      13    2021-08-21 21:41    41  **********************
      14    2021-08-21 22:40    40  *********************
     ...    ..(  2 skipped).    ..  *********************
      17    2021-08-22 01:37    40  *********************
      18    2021-08-22 02:36    36  *****************
      19    2021-08-22 03:35    34  ***************
      20    2021-08-22 04:34    34  ***************
      21    2021-08-22 05:33    33  **************
      22    2021-08-22 06:32    33  **************
      23    2021-08-22 07:31    34  ***************
      24    2021-08-22 08:30    34  ***************
      25    2021-08-22 09:29    34  ***************
    
    SCT Error Recovery Control:
               Read:     70 (7.0 seconds)
              Write:     70 (7.0 seconds)
    
    Device Statistics (GP Log 0x04)
    Page  Offset Size        Value Flags Description
    0x01  =====  =               =  ===  == General Statistics (rev 1) ==
    0x01  0x008  4               3  ---  Lifetime Power-On Resets
    0x01  0x010  4            4962  ---  Power-on Hours
    0x01  0x018  6     33621310344  ---  Logical Sectors Written
    0x01  0x020  6        31017486  ---  Number of Write Commands
    0x01  0x028  6    174169209259  ---  Logical Sectors Read
    0x01  0x030  6       244034956  ---  Number of Read Commands
    0x01  0x038  6               -  ---  Date and Time TimeStamp
    0x03  =====  =               =  ===  == Rotating Media Statistics (rev 1) ==
    0x03  0x008  4            4929  ---  Spindle Motor Power-on Hours
    0x03  0x010  4            1748  ---  Head Flying Hours
    0x03  0x018  4           19009  ---  Head Load Events
    0x03  0x020  4               0  ---  Number of Reallocated Logical Sectors
    0x03  0x028  4               0  ---  Read Recovery Attempts
    0x03  0x030  4               0  ---  Number of Mechanical Start Failures
    0x03  0x038  4               0  ---  Number of Realloc. Candidate Logical Sectors
    0x03  0x040  4               0  ---  Number of High Priority Unload Events
    0x04  =====  =               =  ===  == General Errors Statistics (rev 1) ==
    0x04  0x008  4               0  ---  Number of Reported Uncorrectable Errors
    0x04  0x010  4               0  ---  Resets Between Cmd Acceptance and Completion
    0x04  0x018  4               0  -D-  Physical Element Status Changed
    0x05  =====  =               =  ===  == Temperature Statistics (rev 1) ==
    0x05  0x008  1              35  ---  Current Temperature
    0x05  0x010  1              38  ---  Average Short Term Temperature
    0x05  0x018  1              36  ---  Average Long Term Temperature
    0x05  0x020  1              44  ---  Highest Temperature
    0x05  0x028  1              11  ---  Lowest Temperature
    0x05  0x030  1              41  ---  Highest Average Short Term Temperature
    0x05  0x038  1              14  ---  Lowest Average Short Term Temperature
    0x05  0x040  1              36  ---  Highest Average Long Term Temperature
    0x05  0x048  1              26  ---  Lowest Average Long Term Temperature
    0x05  0x050  4               0  ---  Time in Over-Temperature
    0x05  0x058  1              70  ---  Specified Maximum Operating Temperature
    0x05  0x060  4               0  ---  Time in Under-Temperature
    0x05  0x068  1               5  ---  Specified Minimum Operating Temperature
    0x06  =====  =               =  ===  == Transport Statistics (rev 1) ==
    0x06  0x008  4              11  ---  Number of Hardware Resets
    0x06  0x010  4               3  ---  Number of ASR Events
    0x06  0x018  4               0  ---  Number of Interface CRC Errors
    0xff  =====  =               =  ===  == Vendor Specific Statistics (rev 1) ==
    0xff  0x008  7               0  ---  Vendor Specific
    0xff  0x010  7               0  ---  Vendor Specific
    0xff  0x018  7               0  ---  Vendor Specific
                                    |||_ C monitored condition met
                                    ||__ D supports DSN
                                    |___ N normalized value
    
    Pending Defects log (GP Log 0x0c)
    No Defects Logged
    
    SATA Phy Event Counters (GP Log 0x11)
    ID      Size     Value  Description
    0x000a  2            4  Device-to-host register FISes sent due to a COMRESET
    0x0001  2            0  Command failed due to ICRC error
    0x0003  2            0  R_ERR response for device-to-host data FIS
    0x0004  2            0  R_ERR response for host-to-device data FIS
    0x0006  2            0  R_ERR response for device-to-host non-data FIS
    0x0007  2            0  R_ERR response for host-to-device non-data FIS
    
    

     

     

  3. On 1/22/2021 at 7:45 PM, Cleas said:

    please add a WiFi feature to Unraid. I don't really need it to support all the different WiFi adapters; support for a small number of adapters is good enough.

    I haven't tried, so I may be way off base, but wouldn't it be possible, and far more flexible, to pass the WiFi adapter through to a VM in Unraid (as we do for GPUs, USB controllers, etc.), and then have the VM run, say, OpenWRT or whatever else to handle WiFi?
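
    I haven't tested this with a WiFi card specifically, but the usual PCIe passthrough recipe would apply. A rough sketch (the vendor:device ID shown is hypothetical -- look up your own adapter's ID first):

```shell
# 1. Find the WiFi adapter's vendor:device ID
lspci -nn | grep -i -e wifi -e wireless -e network

# 2. Bind it to vfio-pci at boot by adding its ID to the syslinux
#    append line on the flash drive, e.g. (hypothetical ID):
#      append vfio-pci.ids=8086:24fd initrd=/bzroot
# 3. Then attach the device to the VM in its template and let the
#    guest OS (OpenWRT etc.) drive it.
```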

  4. Polar vortex time again, and my HDD temps are already falling. (Monday will be even colder.)

     

    Throwing some masking tape over the vents in front of the drives as a temporary fix.

     

    It really would be nice if Unraid could give notifications for low drive temperatures, seeing that sufficiently low HDD temps can be as bad as, or worse than, high HDD temps.

     

  5. One of my 4TB data drives failed while I was waiting for my two new 10TB drives to arrive.  (All existing drives were 4TB.)

     

    No problem, I figured: Unraid would let me rebuild the failed 4TB drive onto a 10TB drive, then I could replace the 4TB parity drive with the other 10TB...  but Unraid doesn't seem to like this idea.

     

    Please don't tell me I need to purchase a 4TB drive just to get back up and running before I can install my 10TB drives.

     

    Unraid 6.8.3.

     

  6. 4 minutes ago, johnnie.black said:

    Click on the device and you can set a custom temperature.

    Crap.  I see that now for devices under the array.  Unfortunately, some of my NVMe devices are Unassigned Devices, and it doesn't look like it's supported there.  Guess this is a question for the Unassigned Devices plugin thread.

     

    Thanks.

     

  7. Is there a way to have certain devices, like NVMe drives, use a higher threshold for temperature warnings? 

     

    NVMe drives can and do run at much hotter temperatures than HDDs.   When I set the threshold to warn me when HDD temps are getting too high, I end up with constant warnings whenever my NVMe devices see heavy write activity.
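
    In the meantime, NVMe temperatures can at least be spot-checked from the console with nvme-cli (device path assumed -- substitute your own):

```shell
# Composite temperature from the drive's SMART / health log
nvme smart-log /dev/nvme0 | grep -i '^temperature'
```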

     

  8. 9 minutes ago, BRiT said:

    focus on NAS features. Once its added then go for the fringe cases.

    But I currently have no use for multiple array support.  I'd much rather have HA and replication as core features.

     

    But above that, I'd like GPU drivers baked in and officially supported.

     

    May I suggest cheering in the multiple array thread instead of jeering in the GPU one?

    • Like 2
  9. 1 hour ago, BRiT said:

    I am not confusing anything. I'd rather even just 5 minutes of LimeTech time be spent on genuine NAS functions and features than on any and all GPU-related work.

    Yeah, let's just get rid of docker, VMs, the whole plugin system ... all that stuff that's not "genuine NAS", whatever that means.

    • Like 1
  10. 4 hours ago, BRiT said:

    I'd rather Limetech focus their limited development efforts and time on NAS functions and features.

     

    Sure, if they had unlimited time then include everything. Alas, that's not reality.

    That could be said about any feature request.  You may not find it useful, but many others would. 

     

    For this specific feature, I think some of you may be conflating the burden on Linuxserver.io to produce each Unraid Nvidia release with what it would be for Limetech.   The burden on Limetech would be orders of magnitude less, which is why this feature would be beneficial.

    • Like 1
  11. Please add GPU drivers to Unraid builds.  Doesn't Limetech already do this for some NICs, SAS controllers, etc.?  If so, I see no reason why GPU drivers (specifically Nvidia's, in my case) shouldn't be added as well.

     

    There's no need to keep up with the latest driver unless there are serious bugs/exploits that need to be patched, just like with any other driver Unraid uses.

     

    I really don't get all the "sky is falling" negativity surrounding this request.  The work involved for Limetech to include these drivers is far less than what it takes the Linuxserver.io team (and a huge thanks to each of them for that effort!) to add them after the fact.

     

    • Like 1
  12. On 1/27/2020 at 5:09 PM, BRiT said:

    As time goes on, there will be less need for video card specific features, not more in a NAS server platform. Upgrade your clients and you wont have a need for transcoding which removes the need for this Nvidia build.

    Until there's ubiquitous, unmetered gigabit internet just about everywhere, there's going to be a need for transcoding.    Transcoding to lower bit rates for streaming and syncing while on the road is one of my main uses of Unraid Nvidia.  I don't see that going away any time soon.

    • Like 1
  13. 11 hours ago, rollieindc said:

    1) Is there any inherent value in "Unraid Nvidia", outside of the obvious speed increases in media transcoding?

    Letting the GPU handle decoding/encoding leaves your CPU open for other work.   Generally, the encodes from a GPU's hardware encoder will be lower quality than encodes from the CPU (when using x264), though newer-generation hardware encoders are starting to close this gap.  (The K10 does not have a newer-generation hardware encoder.)   If your CPU is powerful enough to keep up with the transcodes being requested, plus everything else being asked of your Unraid server, you'll likely see no real benefit from a hardware encoder.

     

    11 hours ago, rollieindc said:

    2) Can "Unraid Nvidia" take advantage of the GPUs and memory on a Tesla card? (K10, K80, M40, V100, etc)

    "Unraid Nvidia" lets you use the GPU with Plex/Emby/etc. dockers. See this list for what you can expect in Plex.  The K10 isn't included in that list, but if it works at all, I'd expect it to perform something like the other GK104s.

    11 hours ago, rollieindc said:

    3) Has anyone had experience with a Tesla card and unRAID? (In VM or with nVidia unRAID) - And was it worth the time involved?

    3A) Or for that matter, anyone using homelab applications utilizing a Tesla card under unRAID that they can share tips on?

    No experience with any of the Tesla cards, but how useful it will be depends on your specific use case.

     

    I use a Quadro P400 for nvenc/nvdec in my Plex docker, leaving my CPU (and GTX 1080) free for a gaming VM.   For me, it was worth the time involved to pass the GTX 1080 through to the gaming VM and use Unraid Nvidia to offload Plex transcoding to the P400; it lets me dedicate more CPU cores to the gaming VM.

  14. I just use a second NVMe drive for transcoding and other stuff that makes more sense on a scratch disk.  This saves some endurance and bandwidth on the main cache SSD, where I'd rather not have the appdata and domains mounts go down, and leaves RAM open for more useful things like VMs, dockers, read cache, etc.

     

    If you're confident you don't need a huge amount of space for transcodes/scratch, a 64GB (or even 32GB, if you don't have many Plex clients) Intel Optane will have much higher endurance than a typical NVMe SSD.   However, I just use a pair of 500GB 970 EVOs -- one as a standard cache drive plus docker/VM storage, the other for transcode/scratch.  I'll likely need to replace them every 3 to 5 years.

     

     

    • Upvote 1
  15. 3 hours ago, knalbone said:

    I'm planning on buying a GPU for use with this plugin and a Plex container. Can I buy any GPU listed here and be pretty much assured it will work, or is there any kind of compatibility list to watch out for?

    Make sure you pay attention to what's highlighted in green/yes.  The 1030, for example, is a GPU you should NOT get. 

     

    This list has more detailed information on GPUs with Plex.

  16. On 5/27/2019 at 11:59 AM, Xaero said:

    @CHBMB I too see this high power consumption. I know why it's happening, too. 

    Basically, the nvidia driver doesn't initialize power management until an Xorg server is running. The only way to force a power profile on Linux currently is to use nvidia-smi like so:
    nvidia-settings --ctrl-display :0 -a "[gpu:0]/GPUPowerMizerMode=2"

    Which requires a running Xorg display. I've been trying to dig around in sysfs to see if there is another place that this value is stored, but there doesn't seem to be. It looks like the cards are locked into performance mode... Perhaps this is worth bringing up to nvidia?

    In the meantime, I'm going to continue digging to see if I can find a way (perhaps an nvidia-settings docker?) to force the power state.

     

    I'm not exactly sure how much extra power my GTX 1050 is drawing while sitting idle without a power-saving mode, but my guess (based on readings from a Kill A Watt meter with the GTX installed vs. not) is around 15 to 20 watts.  At $0.15 per kWh, 15 to 20 W works out to roughly $20 to $26 per year.

     

    I'd certainly be willing to donate at least that much towards getting this done.

     

  17. I think I'm seeing this same issue.  When mover is running, Plex is for the most part frozen.

     

    It happened both before and after installing an old GeForce 1050 (using Unraid Nvidia) for transcoding, so it seems unrelated to Unraid Nvidia.

     

    I'm not certain whether this happened before I upgraded to 6.7.0, as I seldom ran the mover manually on 6.5.x -- though I don't recall it happening before the upgrade.

     

  18. I'd like an option for critical and warning thresholds for low HDD temperature in addition to high.

     

    Due to the server's location (by an air-intake vent), I noticed my HDD temps had dropped to around -19°C as outdoor temperatures fell to -33°C.

     

    Also, it would be helpful to have an option to keep the array spun up whenever any single HDD or SSD falls below a specified "keep spun up" temperature threshold.

     

  19. 16 minutes ago, johnnie.black said:

    It is, usually minimum operating temp is 0C, and even that is too cold and not good for the disks, operating disks at very low temps (<30C) can be as bad as high temps, there's a Google study about it.

     

     

    Exactly.  Hence my concern.   

     

    I've resorted to taping up the vents on the case and keeping all drives spun up; they're currently at around 55°F (~13°C).  I'll need to find a permanent solution.  My preferred approach would be to shut off select (in my case, all) chassis fans based on HDD temp, but keeping the array spun up is also needed.

     

    An Unraid feature to force spin-up when HDD temps are too low would be nice.  However, since temperature isn't reported while drives are spun down, it could be difficult to implement, requiring at least one HDD or SSD to be kept spinning as a canary.   Low-temp alerts in addition to high-temp alerts would also be nice.  (Guess I'll go create a feature request for these.)
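
    A minimal sketch of that canary idea (the device path, threshold, and notification are assumptions; SMART attribute 194's raw value is the temperature on drives like my IronWolf, read here via the default smartctl -A column layout):

```shell
#!/bin/bash
# Low-temperature watchdog sketch: read the canary drive's temperature
# (SMART attribute 194) and warn if it drops below a threshold.
DEV=/dev/sdb   # hypothetical canary device, kept spun up
LOW=10         # warning threshold, degrees C

# Column 10 of the attribute row is the raw value (the temperature)
temp=$(smartctl -A "$DEV" | awk '$1 == 194 { print $10 }')

if [ -n "$temp" ] && [ "$temp" -lt "$LOW" ]; then
    echo "WARNING: $DEV at ${temp}C, below ${LOW}C"
fi
```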