rorton

Members
  • Content Count

    85
  • Joined

  • Last visited

Community Reputation

4 Neutral

About rorton

  • Rank
    Advanced Member

Converted

  • Gender
    Male

Recent Profile Visitors

The recent visitors block is disabled and is not being shown to other users.

  1. Thanks - where does all the data go then? I don't get it: in 12 hours my LibreNMS VM has written 14.55 GB worth of data according to this iotop app, and yet the available space on the SSD hasn't reduced by 14 GB; Unifi has apparently written 7 GB, etc.
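
     Worth noting that most of those writes presumably land on top of existing files (databases, logs being rotated), so free space doesn't move. One way to watch cumulative writes at the device level instead (a sketch - assumes smartctl is installed and the cache SSD is /dev/sdb; adjust the device to suit):

       # attribute 246 counts 512-byte sectors ever written by the host; run this a day
       # apart and diff the results to get real GB/day, independent of free space
       smartctl -A /dev/sdb | awk '/Total_Host_Sector_Write/ { printf "%.1f GiB written\n", $NF * 512 / 2^30 }'
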
  2. Forgot to mention: the TRIM plugin is installed and scheduled to run every day.
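
     For reference, my understanding is the plugin essentially just schedules something like this against the cache mount (a sketch - path assumed):

       fstrim -v /mnt/cache   # tells the SSD which blocks are free; doesn't affect the SMART write counters
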
  3. Thanks for replying. It’s connected to the motherboard. The machine is one of those small HP N40L microservers.
  4. I made a post before; the summary is that I've had a Crucial MX500 installed as a cache drive for just under 3 months, and the SMART data is saying that I have used 17% of the SSD's life and have 83% remaining. Specs of the drive suggest an endurance of 100 TB Total Bytes Written (TBW), equal to 54 GB per day for 5 years. Considering I don't write a massive amount to the device anyway, this should be fine. On my SSD cache I have my Docker image (21 GB in size), with things like SAB, Sonarr, Radarr and Emby installed, and I have a VM with Linux running the SNMP app LibreNMS. I ran iotop on the machine for 4 hours in cumulative mode, and have the below output:

     Total DISK READ :       0.00 B/s | Total DISK WRITE :     148.68 K/s
     Actual DISK READ:       0.00 B/s | Actual DISK WRITE:     141.92 K/s
       PID  PRIO  USER     DISK READ  DISK WRITE>  SWAPIN     IO    COMMAND
      5479  be/4  root     128.16 M      2.52 G  0.00 %  0.16 %  qemu-system-x86_64 -name guest=LibreNMS,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-LibreNMS/m~rtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
      8623  be/4  ortonr   172.58 M   1382.31 M  0.00 %  5.13 %  afpd -d -F /etc/netatalk/afp.conf
      5741  be/4  nobody    73.76 M   1376.48 M  0.00 %  0.01 %  bin/mongod --dbpath /usr/lib/unifi/data/db --port 27117 --unixSocketPrefix /usr/lib/unifi/run --logappend --logpath /usr/lib/unifi/logs/mongod.log --bind_ip 127.0.0.1
      4430  be/0  root      85.92 M    794.06 M  0.00 %  0.05 %  [loop2]
      4451  be/4  root       0.00 B    571.91 M  0.00 %  0.06 %  [btrfs-transacti]
      4318  be/4  root     168.00 K    435.67 M  0.00 %  0.03 %  [btrfs-transacti]
      4222  be/4  nobody     9.77 M    194.32 M  0.00 %  0.01 %  mono --debug NzbDrone.exe -nobrowser -data=/config
      3595  be/4  nobody     4.79 M    116.98 M  0.00 %  0.01 %  mono --debug Radarr.exe -nobrowser -data=/config
      8624  be/4  nobody   292.00 K     95.03 M  0.00 %  0.00 %  cnid_dbd -F /etc/netatalk/afp.conf -p /mnt/disk3 -t 6 -l 4 -u ortonr
      4607  be/4  nobody     6.50 M     16.88 M  0.00 %  0.00 %  java -Xmx1024M -jar /usr/lib/unifi/lib/ace.jar start
      5520  be/4  daemon  1360.92 M      5.88 M  0.00 %  0.00 %  EmbyServer -programdata /config -ffmpeg /bin/ffmpeg -ffprobe /bin/ffprobe -restartexitcode 3
     15808  be/4  root       8.00 K      5.08 M  0.00 %  0.16 %  [kworker/u8:2-edac-poller]
     25610  be/4  root      49.00 K      3.07 M  0.00 %  0.01 %  [kworker/u8:0-btrfs-endio-write]
      2306  be/4  root       0.00 B   1168.00 K  0.00 %  0.01 %  [kworker/u8:3-btrfs-endio-write]
      4464  be/4  root       0.00 B    940.00 K  0.00 %  0.00 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --storage-driver=btrfs
      7201  be/4  root       0.00 B    640.00 K  0.00 %  0.01 %  [kworker/u8:1-btrfs-endio-write]
      4317  be/4  root       0.00 B    192.00 K  0.00 %  0.00 %  [btrfs-cleaner]
      4347  be/4  root    1349.72 M      8.00 K  0.00 %  0.03 %  shfs /mnt/user -disks 7 2048000000 -o noatime,big_writes,allow_other -o remember=0
      4145  be/4  root       0.00 B      0.00 B  0.00 %  0.01 %  emhttpd
      4206  be/4  root       0.00 B      0.00 B  0.00 %  0.06 %  [unraidd]
      4285  be/4  root       2.75 M      0.00 B  0.00 %  0.00 %  [xfsaild/md3]

     The biggest process writing data seems to be the LibreNMS VM, which iotop reckons has written 2.52 GB in 4 hours, so just under a gig an hour. Even if it was a gig an hour, that's only 24 GB per day. The other is the Unifi Docker (mongod), which has done about 1.4 GB in 4 hours - again, not a lot. I can't understand why the drive is burning through its lifespan so quickly. With the above writes it should last the 5 years they say it will, but based on the SMART data it's only going to last about a year.

     Plus, if LibreNMS is writing 2.52 GB in 4 hours (so say 20 GB a day), where is all that data? The SSD is only 250 GB - it would be full in under two weeks if that much data were actually being kept. Can't get my head around it at all.
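
     As a sanity check on the endurance maths (a sketch using the MX500's rated 100 TBW and the iotop figures above, scaled to a day):

       awk 'BEGIN {
         tbw_gb  = 100 * 1000    # rated endurance: ~100 TB expressed in GB
         per_day = 24 + 8        # ~24 GB/day (LibreNMS VM) + ~8 GB/day (Unifi mongod)
         printf "~%d GB/day -> ~%.1f years of rated life\n", per_day, tbw_gb / per_day / 365
       }'
       # prints ~8.6 years - nowhere near the 1-year pace the SMART data implies
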
  5. Thanks, that sounds more feasible, but it still seems like a fast reduction in lifespan - 15% used in 2 months means it's not going to last 2 years at this rate.
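
     The rate maths, as a one-liner (just extrapolating the two figures in this post):

       awk 'BEGIN { printf "2 months / 15%% used -> ~%.1f months of life total\n", 2 / 0.15 }'
       # prints ~13.3 months, i.e. comfortably short of 2 years
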
  6. I put in a Crucial MX500 as an SSD for cache. Previous versions of Unraid didn't have the drive database updated to show the complete set of attributes for this device; now that I'm on 6.6.1 I can see them all, and I was concerned about the SSD lifetime:

     SMART Attributes Data Structure revision number: 16
     Vendor Specific SMART Attributes with Thresholds:
     ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
       1 Raw_Read_Error_Rate     POSR-K  100   100   000    -    0
       5 Reallocate_NAND_Blk_Cnt -O--CK  100   100   010    -    0
       9 Power_On_Hours          -O--CK  100   100   000    -    1978
      12 Power_Cycle_Count       -O--CK  100   100   000    -    7
     171 Program_Fail_Count      -O--CK  100   100   000    -    0
     172 Erase_Fail_Count        -O--CK  100   100   000    -    0
     173 Ave_Block-Erase_Count   -O--CK  085   085   000    -    233
     174 Unexpect_Power_Loss_Ct  -O--CK  100   100   000    -    3
     180 Unused_Reserve_NAND_Blk PO--CK  000   000   000    -    26
     183 SATA_Interfac_Downshift -O--CK  100   100   000    -    0
     184 Error_Correction_Count  -O--CK  100   100   000    -    0
     187 Reported_Uncorrect      -O--CK  100   100   000    -    0
     194 Temperature_Celsius     -O---K  072   055   000    -    28 (Min/Max 0/45)
     196 Reallocated_Event_Count -O--CK  100   100   000    -    0
     197 Current_Pending_Sector  -O--CK  100   100   000    -    0
     198 Offline_Uncorrectable   ----CK  100   100   000    -    0
     199 UDMA_CRC_Error_Count    -O--CK  100   100   000    -    0
     202 Percent_Lifetime_Remain ----CK  085   085   001    -    15
     206 Write_Error_Rate        -OSR--  100   100   000    -    0
     210 Success_RAIN_Recov_Cnt  -O--CK  100   100   000    -    0
     246 Total_Host_Sector_Write -O--CK  100   100   000    -    18932877712
     247 Host_Program_Page_Count -O--CK  100   100   000    -    559673637
     248 FTL_Program_Page_Count  -O--CK  100   100   000    -    763628335
                                 ||||||_ K auto-keep
                                 |||||__ C event count
                                 ||||___ R error rate
                                 |||____ S speed/performance
                                 ||_____ O updated online
                                 |______ P prefailure warning

     If you have a look at 202 - Percent_Lifetime_Remain, the value is 15 - so is that telling me that my SSD, which was new 2 months ago, only has 15% of its lifetime left!!?
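
     For what the raw numbers above actually work out to (a sketch: attribute 246 is in 512-byte sectors, and one common reading takes (attr 247 + attr 248) / attr 247 as the write-amplification factor):

       awk 'BEGIN {
         host_tb = 18932877712 * 512 / 1e12             # attr 246: total host writes in TB
         waf     = (559673637 + 763628335) / 559673637  # (host pages + FTL pages) / host pages
         printf "host writes: %.2f TB of the rated 100 TBW, write amplification ~%.2f\n", host_tb, waf
       }'
       # roughly 9.7 TB of host writes with ~2.4x amplification - which helps explain 15%
       # of the drive's life being gone after under 10 TB written by the host
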
  7. rorton

    [Plug-In] SNMP

    Seems OK now. Looking in the log, it was moaning about not getting the little graphic, so I think that stopped the whole install - I deleted it, reinstalled, and it seems to be working...
  8. rorton

    [Plug-In] SNMP

    Anyone using this on 6.6.0? It's not installed after upgrading, and just shows in the tab titled 'plugin file installed errors'.
  9. Read a few posts on Reddit about this too; a few people on there have the same problem. The new firmware just released was supposed to fix it but hasn't. People have got onto Crucial support, and they say it's with engineering. I was waiting on an online chat, but dropped the connection after I read this - I'll wait for the next firmware release.
  10. I think I marked it as solved when jonnie black said it's not an Unraid issue, it's a firmware problem, and to wait for new firmware. Happy to unmark it as solved, but I'm not sure anyone can help apart from Crucial with corrected firmware. Wonder if it's worth a few of us emailing Crucial - perhaps they're not aware of the issue!?
  11. rorton

    (SOLVED) Vlan oddity?

    Ahh, brilliant, thanks so much. I hadn't got the advanced option selected in the Docker settings, so I couldn't work out how it knew which network to be part of. So I have VLAN 1 created in Network Settings with no IP, then in Docker settings I assigned 192.168.1.0/25 to the new VLAN 1 interface br0.1, and it works like a dream - thanks.
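
    For anyone finding this later, my understanding is that the Docker setting builds roughly the equivalent of a manual macvlan network (a sketch - the gateway address and network name here are assumed):

      docker network create -d macvlan \
        --subnet=192.168.1.0/25 --gateway=192.168.1.1 \
        -o parent=br0.1 \
        vlan1
      docker run -d --network=vlan1 --ip=192.168.1.13 ...   # then pin the container's address
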
  12. rorton

    (SOLVED) Vlan oddity?

    Thanks for the reply. So I removed the IP on VLAN 1, and now I can no longer allocate the vlan1 interface to the Docker - I think I read that you have to have an IP address allocated to the VLAN if you're using it for a Docker. Really I just want the whole of the Unraid box to be available on VLAN 2, apart from this one Docker, which I want in VLAN 1 without exposing the GUI in VLAN 1 - but I can't work out if that's possible.
  13. rorton

    (SOLVED) Vlan oddity?

    I've always run my Unraid without VLANs set up; most Dockers use the host IP with different ports, and I split my Unifi Docker out to have its own IP on the network. I've now split things into VLANs and sort of have it working, but I can't understand why Unraid now answers on 2 IP addresses (one in each VLAN).

    My setup (there are a few other VLANs I won't bore you with - CCTV, IoT etc); the LANs of interest are:

    192.168.2.0/25 - VLAN 2 (Main Network). Unraid exists here with IP 192.168.2.8 - no problems, I can get to it on the network etc.
    192.168.1.0/25 - VLAN 1 (Mgmt Network). I have set this up as VLAN 1 (tagged), as VLAN 2 is untagged - this is where I want my Unifi Docker to live.

    The above is how I have it set up, and it sort of works - I can get to Unraid on 192.168.2.8, but I can also get to the main Unraid GUI on 192.168.1.8, which I wasn't expecting. I have assigned interface br0.1 to the Docker I want in VLAN 1 (my Unifi Docker, which is where my APs and USG reside), and this works: the Docker is in VLAN 1 with IP 192.168.1.13.

    Now, I'm assuming that the VLAN interface I have created is basically a 'leg' in the 192.168.1.0/25 subnet, so Unraid has to have an IP in that subnet, and you then give your Dockers other addresses in that range. Is this right - is this how it's supposed to operate? I just wasn't expecting to be able to hit the Unraid GUI at both 192.168.2.8 and 192.168.1.8 from the 192.168.2.0/25 subnet.
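
    In case it helps to picture it, this is roughly what Unraid builds for the tagged VLAN (interface names as in the post; the commands themselves are a sketch):

      ip link add link br0 name br0.1 type vlan id 1   # tagged VLAN 1 sub-interface on the bridge
      ip addr add 192.168.1.8/25 dev br0.1             # the host's 'leg' in the mgmt subnet
      ip link set br0.1 up
      # the GUI listens on all host addresses, so once br0.1 has an IP it answers on
      # 192.168.1.8 as well as 192.168.2.8 - which is exactly the behaviour described above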