Posts posted by rorton

  1. I made a post before; the short version is that I've had a Crucial MX500 installed as a cache drive for just under 3 months, and the SMART data says I've used 17% of the SSD's life, with 83% remaining.

     

     The drive's specs suggest an endurance of 100 TB Total Bytes Written (TBW), equal to roughly 54 GB per day for 5 years.

     

     Considering I don't write a massive amount to the device anyway, that should be fine.

     

     On the SSD cache I have my Docker image (21 GB in size), with things like SAB, Sonarr, Radarr and Emby installed, and I have a Linux VM running the SNMP app LibreNMS.

     

     I ran iotop on the machine in cumulative mode for 4 hours, with the output below:

     

    Total DISK READ :       0.00 B/s | Total DISK WRITE :     148.68 K/s
    Actual DISK READ:       0.00 B/s | Actual DISK WRITE:     141.92 K/s
      PID  PRIO  USER     DISK READ DISK WRITE>  SWAPIN      IO    COMMAND                                                                                                                                                                                                                                                                                           
     5479 be/4 root        128.16 M      2.52 G  0.00 %  0.16 % qemu-system-x86_64 -name guest=LibreNMS,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-LibreNMS/m~rtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
     8623 be/4 ortonr      172.58 M   1382.31 M  0.00 %  5.13 % afpd -d -F /etc/netatalk/afp.conf
     5741 be/4 nobody       73.76 M   1376.48 M  0.00 %  0.01 % bin/mongod --dbpath /usr/lib/unifi/data/db --port 27117 --unixSocketPrefix /usr/lib/unifi/run --logappend --logpath /usr/lib/unifi/logs/mongod.log --bind_ip 127.0.0.1
     4430 be/0 root         85.92 M    794.06 M  0.00 %  0.05 % [loop2]
     4451 be/4 root          0.00 B    571.91 M  0.00 %  0.06 % [btrfs-transacti]
     4318 be/4 root        168.00 K    435.67 M  0.00 %  0.03 % [btrfs-transacti]
     4222 be/4 nobody        9.77 M    194.32 M  0.00 %  0.01 % mono --debug NzbDrone.exe -nobrowser -data=/config
     3595 be/4 nobody        4.79 M    116.98 M  0.00 %  0.01 % mono --debug Radarr.exe -nobrowser -data=/config
     8624 be/4 nobody      292.00 K     95.03 M  0.00 %  0.00 % cnid_dbd -F /etc/netatalk/afp.conf -p /mnt/disk3 -t 6 -l 4 -u ortonr
     4607 be/4 nobody        6.50 M     16.88 M  0.00 %  0.00 % java -Xmx1024M -jar /usr/lib/unifi/lib/ace.jar start
     5520 be/4 daemon     1360.92 M      5.88 M  0.00 %  0.00 % EmbyServer -programdata /config -ffmpeg /bin/ffmpeg -ffprobe /bin/ffprobe -restartexitcode 3
    15808 be/4 root          8.00 K      5.08 M  0.00 %  0.16 % [kworker/u8:2-edac-poller]
    25610 be/4 root         49.00 K      3.07 M  0.00 %  0.01 % [kworker/u8:0-btrfs-endio-write]
     2306 be/4 root          0.00 B   1168.00 K  0.00 %  0.01 % [kworker/u8:3-btrfs-endio-write]
     4464 be/4 root          0.00 B    940.00 K  0.00 %  0.00 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --storage-driver=btrfs
     7201 be/4 root          0.00 B    640.00 K  0.00 %  0.01 % [kworker/u8:1-btrfs-endio-write]
     4317 be/4 root          0.00 B    192.00 K  0.00 %  0.00 % [btrfs-cleaner]
     4347 be/4 root       1349.72 M      8.00 K  0.00 %  0.03 % shfs /mnt/user -disks 7 2048000000 -o noatime,big_writes,allow_other -o remember=0
     4145 be/4 root          0.00 B      0.00 B  0.00 %  0.01 % emhttpd
     4206 be/4 root          0.00 B      0.00 B  0.00 %  0.06 % [unraidd]
     4285 be/4 root          2.75 M      0.00 B  0.00 %  0.00 % [xfsaild/md3]
    
    
    
    
    

     The biggest writer seems to be the LibreNMS VM, which iotop reckons has written 2.52 GB in 4 hours, so just under a gig an hour.

     Even if it were a gig an hour, that's only 24 GB per day. The other big one seems to be the UniFi Docker, whose mongod has done about 1.4 GB in 4 hours - again, not a lot.

     

     I can't understand why the drive is burning through its lifespan so quickly. With the writes above it should last the 5 years they say it will, but based on the SMART data it's only going to last about a year.

     

     Plus, if LibreNMS is writing 2.52 GB every 4 hours (so call it 15-20 GB a day), where is all that data? The SSD is only 250 GB; it would be full within a couple of weeks if that much data were actually piling up.

     

     Can't get my head around it at all :(
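
     For what it's worth, here's the back-of-an-envelope maths (a rough sketch, assuming attribute 246 in the SMART table in my other post below counts 512-byte host sectors - worth double-checking against Crucial's docs):

     # Rated endurance: 100 TB over 5 years is ~54.8 GB/day
     echo "100 * 1000^4 / (5*365) / 1000^3" | bc -l

     # What the drive itself says the host has written (attr 246, 512-byte sectors)
     echo "18932877712 * 512 / 1024^4" | bc -l               # ~8.8 TiB total
     echo "18932877712 * 512 / 1024^3 / (1978/24)" | bc -l   # GiB/day over 1978 power-on hours

     That last figure comes out at roughly 110 GiB/day, far more than the 4-hour iotop snapshot suggests, so maybe iotop is missing writes from somewhere.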

     

  2. I put in a Crucial MX500 as the SSD for cache. Previous versions of Unraid didn't have the drive database updated to show the complete SMART attributes for the device; now that I'm on 6.6.1 I can see them all, and I was concerned about the SSD lifetime:

     

    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
      1 Raw_Read_Error_Rate     POSR-K   100   100   000    -    0
      5 Reallocate_NAND_Blk_Cnt -O--CK   100   100   010    -    0
      9 Power_On_Hours          -O--CK   100   100   000    -    1978
     12 Power_Cycle_Count       -O--CK   100   100   000    -    7
    171 Program_Fail_Count      -O--CK   100   100   000    -    0
    172 Erase_Fail_Count        -O--CK   100   100   000    -    0
    173 Ave_Block-Erase_Count   -O--CK   085   085   000    -    233
    174 Unexpect_Power_Loss_Ct  -O--CK   100   100   000    -    3
    180 Unused_Reserve_NAND_Blk PO--CK   000   000   000    -    26
    183 SATA_Interfac_Downshift -O--CK   100   100   000    -    0
    184 Error_Correction_Count  -O--CK   100   100   000    -    0
    187 Reported_Uncorrect      -O--CK   100   100   000    -    0
    194 Temperature_Celsius     -O---K   072   055   000    -    28 (Min/Max 0/45)
    196 Reallocated_Event_Count -O--CK   100   100   000    -    0
    197 Current_Pending_Sector  -O--CK   100   100   000    -    0
    198 Offline_Uncorrectable   ----CK   100   100   000    -    0
    199 UDMA_CRC_Error_Count    -O--CK   100   100   000    -    0
    202 Percent_Lifetime_Remain ----CK   085   085   001    -    15
    206 Write_Error_Rate        -OSR--   100   100   000    -    0
    210 Success_RAIN_Recov_Cnt  -O--CK   100   100   000    -    0
    246 Total_Host_Sector_Write -O--CK   100   100   000    -    18932877712
    247 Host_Program_Page_Count -O--CK   100   100   000    -    559673637
    248 FTL_Program_Page_Count  -O--CK   100   100   000    -    763628335
                                ||||||_ K auto-keep
                                |||||__ C event count
                                ||||___ R error rate
                                |||____ S speed/performance
                                ||_____ O updated online
                                |______ P prefailure warning

     If you have a look at 202 - Percent_Lifetime_Remain, the raw value is 15 - so is that telling me that my SSD, which was new 2 months ago, only has 15% of its lifetime left!?
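
     For anyone else reading the table, here's a quick way to pull that attribute out on its own (a sketch - adjust the device node for your system). My reading, consistent with the 17%/83% figures in my other post above, is that the normalised VALUE column counts down from 100 (life remaining) while RAW_VALUE counts up (life used), so 085/15 would be the same fact stated twice - but that interpretation is my assumption, not Crucial's documentation:

     # Print attribute 202 from the SMART table (/dev/sdf is an example node)
     smartctl -A /dev/sdf | awk '$1 == 202 {print "normalised:", $4+0, " raw:", $NF}'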

  3. 3 minutes ago, rorton said:

     Anyone using this on 6.6.0? It's not installed after upgrading, and it just shows in the tab titled plugin file install errors.

     

     [screenshot: SNMP plugin install error]

     Seems OK now. Looking in the log, it was moaning about not being able to fetch the little graphic, so I think that stopped the whole install. I deleted it, reinstalled, and it seems to be working...

  4. Ahh, brilliant, thanks so much. 

     

     I hadn't got the advanced option selected in the Docker settings, so I couldn't work out how it knew which network to be part of.

     

     So I have VLAN 1 created in the network settings with no IP; then in the Docker settings I assigned 192.168.1.0/25 to the new VLAN 1 interface br0.1, and it works like a dream - thanks :)
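
     For anyone finding this later, my understanding is that the Docker settings page sets up something roughly like this under the hood (a sketch - the macvlan driver, gateway and network name here are assumptions for illustration, not taken from Unraid itself):

     # Create a macvlan network on the VLAN 1 sub-interface
     docker network create -d macvlan \
       --subnet=192.168.1.0/25 --gateway=192.168.1.1 \
       -o parent=br0.1 br0.1

     docker network ls   # the br0.1 network should now be assignable to containers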

  5. Thanks for the reply.

     

     So I removed the IP on VLAN 1, and now I can no longer allocate the VLAN 1 interface to the container. I think I read that you have to have an IP address allocated to the VLAN if a container is going to use it.

     

     Really I just wanted the whole of the Unraid box to be available on VLAN 2, apart from this one container, which I want in VLAN 1 - without exposing the GUI on VLAN 1. I can't work out if that's possible.

  6. I've always run my Unraid without VLANs set up; most containers use the host IP with different ports, and I split out my UniFi Docker to have its own IP on the network.

     

     I've now split things into VLANs and sort of have it working, but I can't understand why Unraid now answers on two IP addresses (one in each VLAN).

     

    My setup is:

     

     There are a few other VLANs I won't bore you with (CCTV, IoT, etc.).

     VLANs of interest:

     

     192.168.2.0/25 - VLAN 2 (main network). Unraid lives here with IP 192.168.2.8 - no problems, I can reach it on the network, etc.

     192.168.1.0/25 - VLAN 1 (management network). I have set this up as VLAN 1 (tagged, as VLAN 2 is untagged) - this is where I want my UniFi Docker to live.

     

     [screenshot: Unraid network interface settings]

     

     Above is how I have it set up, and it sort of works - I can get to Unraid on 192.168.2.8, but I can also get to the main Unraid GUI on 192.168.1.8, which I wasn't expecting. I have assigned interface br0.1 to the container I want in VLAN 1 (my UniFi Docker, which is where my APs and USG reside) and this works; the container is in VLAN 1 with IP 192.168.1.13.

     

     Now, I'm assuming that the VLAN interface I've created is basically a 'leg' in the 192.168.1.0/25 subnet, so Unraid has to have an IP in that subnet, and then you give your containers other addresses in that range.

     

     Is this right - is this how it's supposed to operate? I just wasn't expecting to be able to hit the Unraid GUI at both 192.168.2.8 and 192.168.1.8 from the 192.168.2.0/25 subnet.
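
     One thing that helped me reason about it was checking which addresses the box itself holds (a sketch; the interface names are the ones from my screenshot):

     # If br0.1 carries 192.168.1.8, the GUI will naturally answer there too
     ip -4 addr show br0
     ip -4 addr show br0.1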

  7. I've deleted the plugin marked as bad and then reinstalled it from Community Applications, and it's working now. Odd...

     

     I still have the issue with the image not appearing that needs a permission change (can't remember exactly what it was).

     

  8. I've just upgraded to 6.5.3 and the plugin won't run now:

     Jun 12 20:08:23 Nas root: plugin: skipping: /boot/packages/net-snmp-5.7.3-x86_64-4.txz already exists
    Jun 12 20:08:23 Nas root: plugin: running: /boot/packages/net-snmp-5.7.3-x86_64-4.txz
    Jun 12 20:08:23 Nas root: 
    Jun 12 20:08:23 Nas root: +==============================================================================
    Jun 12 20:08:23 Nas root: | Installing new package /boot/packages/net-snmp-5.7.3-x86_64-4.txz
    Jun 12 20:08:23 Nas root: +==============================================================================
    Jun 12 20:08:23 Nas root: 
    Jun 12 20:08:23 Nas root: Verifying package net-snmp-5.7.3-x86_64-4.txz.
    Jun 12 20:08:23 Nas root: Installing package net-snmp-5.7.3-x86_64-4.txz:
    Jun 12 20:08:23 Nas root: PACKAGE DESCRIPTION:
    Jun 12 20:08:23 Nas root: # net-snmp (Simple Network Management Protocol tools)
    Jun 12 20:08:23 Nas root: #
    Jun 12 20:08:23 Nas root: # Various tools relating to the Simple Network Management Protocol:
    Jun 12 20:08:23 Nas root: #
    Jun 12 20:08:23 Nas root: # An extensible agent
    Jun 12 20:08:23 Nas root: # An SNMP library
    Jun 12 20:08:23 Nas root: # Tools to request or set information from SNMP agents
    Jun 12 20:08:23 Nas root: # Tools to generate and handle SNMP traps
    Jun 12 20:08:23 Nas root: # A version of the UNIX 'netstat' command using SNMP
    Jun 12 20:08:23 Nas root: # A graphical Perl/Tk/SNMP based mib browser
    Jun 12 20:08:23 Nas root: #
    Jun 12 20:08:24 Nas root: Executing install script for net-snmp-5.7.3-x86_64-4.txz.
    Jun 12 20:08:24 Nas root: Package net-snmp-5.7.3-x86_64-4.txz installed.
    Jun 12 20:08:24 Nas root: 
    Jun 12 20:08:24 Nas root: 
    Jun 12 20:08:24 Nas root: plugin: creating: /usr/local/emhttp/plugins/snmp/snmp.png - downloading from URL https://raw.githubusercontent.com/coppit/unraid-snmp/master/snmp.png
    Jun 12 20:08:34 Nas root: plugin: downloading: https://raw.githubusercontent.com/coppit/unraid-snmp/master/snmp.png ...#015plugin: downloading: https://raw.githubusercontent.com/coppit/unraid-snmp/master/snmp.png ... failed (Network failure)
    Jun 12 20:08:34 Nas root: plugin: wget: https://raw.githubusercontent.com/coppit/unraid-snmp/master/snmp.png download failure (Network failure)
    Jun 12 20:0
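
     If anyone else hits this, one thing worth trying before a full reinstall (a sketch reusing the URL and target path straight from the log above) is fetching the icon by hand to see whether it was just a transient network failure:

     # Re-try the download the installer gave up on
     wget -O /usr/local/emhttp/plugins/snmp/snmp.png \
       https://raw.githubusercontent.com/coppit/unraid-snmp/master/snmp.png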

     

  9. Yeah, it's an odd one - what got me interested was that Ubiquiti say they supply this as a Docker image, so with a little fettling it ought to work much like UniFi?

     

     I wasn't sure if you really needed to create a VM and then run the Docker container inside the VM.

     

     There do seem to be partial Docker templates for this in the community, but I had no idea how to get them running.

  10. Great stuff, thanks. I saw another model of Crucial SSD on their site, and there's an app you can install on your PC too - run the app, it detects the drive and downloads the firmware - so it looks like there are options.

     

     My box is a bit hidden away where it's located, so I have stuff to move to get to it. It would have been great to do it via the CLI, but if I have to pull the drive or similar, it's no hardship.

     

  11. I have a new SSD as a cache drive, and I've just started to get alerts (I've had them for 2 days so far).

     

    First alert is:

     

    Event: unRAID Cache disk SMART health [197]
    Subject: Warning [NAS] - current pending sector is 1
    Description: CT250MX500SSD1_1803E10AC5FE (sdf)
    Importance: warning

     

     Then, 20 minutes later, a 'clear' alert:

     

    Event: unRAID Cache disk SMART message [197]
    Subject: Notice [NAS] - current pending sector returned to normal value
    Description: CT250MX500SSD1_1803E10AC5FE (sdf)
    Importance: normal

     My TRIM is scheduled to run daily at 04:55.

     

     Should I be worried? The SSD's parameters are not fully populated in the smartdb - it's a brand new Crucial MX500 drive, so I'm assuming the SMART drive database needs updating.
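
     In the meantime I can keep an eye on the raw counter myself rather than waiting for the next alert (a sketch; the device node comes from the alert text above):

     # Check the current pending sector count (attribute 197) directly
     smartctl -A /dev/sdf | grep -i pending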

     

     
