
rorton

Members
  • Content Count
    90
  • Joined

  • Last visited

Community Reputation

5 Neutral

About rorton

  • Rank
    Advanced Member

Converted

  • Gender
    Male


  1. Thanks for this - works perfectly.
  2. Hi, yep, it worked fine, and it's now seen all the old data, so I'm really happy. I think the bit I was missing was initially mapping it as a share on the Mac - I just opened Time Machine and expected it to appear. Makes sense I guess, but after I mapped it and the Mac knew about it, it appeared and worked as expected!
  3. Yeah, if I could help it, I didn't want to blow away the 3 years of backups I have on there. I've started it off and it seems to have found it, I think; it has half an hour left to run, so I'll report back later and let you know the status. Seems that TM only understands what's on the disk when it starts the backup - it goes off to interrogate it or something like that!
  4. With your tip about mapping the drive first, I too can now see my SMB drive in Time Machine - that was the key! The only problem I see now is that when I set it to back up, it doesn't seem to see the data that's already on there, so I have to create a whole new backup, which doesn't seem right.
  5. I'm struggling to get my SMB Time Machine share to appear too. I've done all the settings in SMB, and when I go into Time Machine there is no share to find - if I switch on AFP, it works fine. I've tried rebooting Unraid and rebooting the Mac in question; neither makes the share appear.
  6. Thanks - where does all the data go then? I don't get it: in 12 hours my LibreNMS VM has written 14.55 GB worth of data according to iotop, and yet the available space on the SSD hasn't reduced by 14 GB. Unifi has apparently written 7 GB, etc.
  7. Forgot to mention: the TRIM plugin is installed and scheduled to run every day.
  8. Thanks for replying. It's connected to the motherboard. The machine is one of those small HP N40L MicroServers.
  9. I made a post before; the summary is that I've had a Crucial MX500 installed as a cache drive for just under 3 months, and the SMART data says I have used 17% of the SSD's life and have 83% remaining. The specs of the drive suggest an endurance of 100 TB Total Bytes Written (TBW), equal to 54 GB per day for 5 years. Considering I don't write a massive amount to the device anyway, this should be fine. On my SSD cache I have my docker image (21 GB in size), with things like SAB, Sonarr, Radarr and Emby installed, and I have a Linux VM running the SNMP app LibreNMS. I ran iotop on the machine in cumulative mode for 4 hours and got the output below:

     Total DISK READ :   0.00 B/s | Total DISK WRITE :  148.68 K/s
     Actual DISK READ:   0.00 B/s | Actual DISK WRITE:  141.92 K/s
       PID  PRIO  USER     DISK READ  DISK WRITE>  SWAPIN     IO    COMMAND
      5479  be/4  root     128.16 M      2.52 G    0.00 %  0.16 %  qemu-system-x86_64 -name guest=LibreNMS,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-LibreNMS/m~rtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
      8623  be/4  ortonr   172.58 M   1382.31 M    0.00 %  5.13 %  afpd -d -F /etc/netatalk/afp.conf
      5741  be/4  nobody    73.76 M   1376.48 M    0.00 %  0.01 %  bin/mongod --dbpath /usr/lib/unifi/data/db --port 27117 --unixSocketPrefix /usr/lib/unifi/run --logappend --logpath /usr/lib/unifi/logs/mongod.log --bind_ip 127.0.0.1
      4430  be/0  root      85.92 M    794.06 M    0.00 %  0.05 %  [loop2]
      4451  be/4  root       0.00 B    571.91 M    0.00 %  0.06 %  [btrfs-transacti]
      4318  be/4  root     168.00 K    435.67 M    0.00 %  0.03 %  [btrfs-transacti]
      4222  be/4  nobody     9.77 M    194.32 M    0.00 %  0.01 %  mono --debug NzbDrone.exe -nobrowser -data=/config
      3595  be/4  nobody     4.79 M    116.98 M    0.00 %  0.01 %  mono --debug Radarr.exe -nobrowser -data=/config
      8624  be/4  nobody   292.00 K     95.03 M    0.00 %  0.00 %  cnid_dbd -F /etc/netatalk/afp.conf -p /mnt/disk3 -t 6 -l 4 -u ortonr
      4607  be/4  nobody     6.50 M     16.88 M    0.00 %  0.00 %  java -Xmx1024M -jar /usr/lib/unifi/lib/ace.jar start
      5520  be/4  daemon  1360.92 M      5.88 M    0.00 %  0.00 %  EmbyServer -programdata /config -ffmpeg /bin/ffmpeg -ffprobe /bin/ffprobe -restartexitcode 3
     15808  be/4  root       8.00 K      5.08 M    0.00 %  0.16 %  [kworker/u8:2-edac-poller]
     25610  be/4  root      49.00 K      3.07 M    0.00 %  0.01 %  [kworker/u8:0-btrfs-endio-write]
      2306  be/4  root       0.00 B   1168.00 K    0.00 %  0.01 %  [kworker/u8:3-btrfs-endio-write]
      4464  be/4  root       0.00 B    940.00 K    0.00 %  0.00 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --storage-driver=btrfs
      7201  be/4  root       0.00 B    640.00 K    0.00 %  0.01 %  [kworker/u8:1-btrfs-endio-write]
      4317  be/4  root       0.00 B    192.00 K    0.00 %  0.00 %  [btrfs-cleaner]
      4347  be/4  root    1349.72 M      8.00 K    0.00 %  0.03 %  shfs /mnt/user -disks 7 2048000000 -o noatime,big_writes,allow_other -o remember=0
      4145  be/4  root       0.00 B      0.00 B    0.00 %  0.01 %  emhttpd
      4206  be/4  root       0.00 B      0.00 B    0.00 %  0.06 %  [unraidd]
      4285  be/4  root       2.75 M      0.00 B    0.00 %  0.00 %  [xfsaild/md3]

     The biggest process writing data seems to be LibreNMS, the VM I have running, which it reckons has written 2.52 GB in 4 hours, so just under a gig an hour. Even if that were a gig an hour, that's only 24 GB per day. The other is the Unifi docker, which has done about 1 GB in 4 hours; again, not a lot. I can't understand why the drive is losing its lifespan so quickly. With the above writes, it should last the 5 years they say it will, but based on the SMART data it's only going to last 1 year.
     Plus, if LibreNMS is writing 2.52 GB in 4 hours (so say 20 GB a day), where is all that data? The SSD is only 250 GB; it would be full in 10 days if this amount of data were being written. Can't get my head around it at all.
  10. Thanks, that sounds more feasible, but it still seems like a fast reduction in its lifespan - 15% used in 2 months; it's not going to last 2 years at this rate.
  11. I put in a Crucial MX500 as an SSD for cache. Previous versions of Unraid didn't have the database updated to show the complete attributes for the device; now I'm on 6.6.1 I can see them all, and I was concerned about the SSD lifetime:

     SMART Attributes Data Structure revision number: 16
     Vendor Specific SMART Attributes with Thresholds:
     ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
       1 Raw_Read_Error_Rate     POSR-K  100   100   000    -    0
       5 Reallocate_NAND_Blk_Cnt -O--CK  100   100   010    -    0
       9 Power_On_Hours          -O--CK  100   100   000    -    1978
      12 Power_Cycle_Count       -O--CK  100   100   000    -    7
     171 Program_Fail_Count      -O--CK  100   100   000    -    0
     172 Erase_Fail_Count        -O--CK  100   100   000    -    0
     173 Ave_Block-Erase_Count   -O--CK  085   085   000    -    233
     174 Unexpect_Power_Loss_Ct  -O--CK  100   100   000    -    3
     180 Unused_Reserve_NAND_Blk PO--CK  000   000   000    -    26
     183 SATA_Interfac_Downshift -O--CK  100   100   000    -    0
     184 Error_Correction_Count  -O--CK  100   100   000    -    0
     187 Reported_Uncorrect      -O--CK  100   100   000    -    0
     194 Temperature_Celsius     -O---K  072   055   000    -    28 (Min/Max 0/45)
     196 Reallocated_Event_Count -O--CK  100   100   000    -    0
     197 Current_Pending_Sector  -O--CK  100   100   000    -    0
     198 Offline_Uncorrectable   ----CK  100   100   000    -    0
     199 UDMA_CRC_Error_Count    -O--CK  100   100   000    -    0
     202 Percent_Lifetime_Remain ----CK  085   085   001    -    15
     206 Write_Error_Rate        -OSR--  100   100   000    -    0
     210 Success_RAIN_Recov_Cnt  -O--CK  100   100   000    -    0
     246 Total_Host_Sector_Write -O--CK  100   100   000    -    18932877712
     247 Host_Program_Page_Count -O--CK  100   100   000    -    559673637
     248 FTL_Program_Page_Count  -O--CK  100   100   000    -    763628335
                                 ||||||_ K auto-keep
                                 |||||__ C event count
                                 ||||___ R error rate
                                 |||____ S speed/performance
                                 ||_____ O updated online
                                 |______ P prefailure warning

     If you have a look at 202 - Percent_Lifetime_Remain, the value is 15 - so is that telling me that my SSD, which was new 2 months ago, only has 15% of its lifetime left!? (A back-of-the-envelope calculation from these figures is sketched after this list.)
  12. Seems OK now. Looking in the log, it was moaning about not getting the little graphic, so I think that stopped the whole install - I deleted it, reinstalled, and it seems to be working...
  13. Anyone using this on 6.6.0? It's not installed after upgrading, and it just shows in the tab titled plugin file installed errors.
  14. Read a few posts on Reddit about this too; a few people on there have the same problem. The new firmware just released was supposed to fix it but hasn't - people have got onto Crucial support and they say it's with engineering. I was waiting on an online chat, but dropped the connection after I read this - will wait for the next firmware release.
  15. I think I marked it as solved when jonnie black said it's not an Unraid issue, it's a firmware problem, and to wait for new firmware. Happy to unmark it as solved, but I'm not sure anyone apart from Crucial, with corrected firmware, can help. Wonder if it's worth a few of us emailing Crucial - perhaps they are not aware of the issue!?
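
A rough sanity check of the endurance figures discussed in posts 9-11 above, using only the numbers quoted there (attribute 246, the 1,978 power-on hours, and the 100 TBW rating mentioned for the MX500). This is a back-of-the-envelope sketch, not anything confirmed in the thread: it assumes attribute 246 (Total_Host_Sector_Write) counts 512-byte sectors and that write amplification can be approximated as (host pages + FTL pages) / host pages, which is how these Crucial/Micron attributes are commonly read.

    # Back-of-the-envelope SSD endurance estimate from the SMART values quoted in post 11.
    # Assumptions (not confirmed in the thread): attribute 246 counts 512-byte sectors,
    # and write amplification ~ (host pages + FTL pages) / host pages.

    host_sectors_written = 18_932_877_712   # 246 Total_Host_Sector_Write (raw)
    host_program_pages   = 559_673_637      # 247 Host_Program_Page_Count (raw)
    ftl_program_pages    = 763_628_335      # 248 FTL_Program_Page_Count (raw)
    power_on_hours       = 1_978            # 9 Power_On_Hours (raw)
    rated_endurance_tb   = 100.0            # quoted MX500 endurance: 100 TBW

    host_written_tb = host_sectors_written * 512 / 1e12   # ~9.7 TB written by the host so far
    days_powered    = power_on_hours / 24                 # ~82 days powered on
    tb_per_day      = host_written_tb / days_powered      # ~0.12 TB/day (~118 GB/day)

    projected_life_days = rated_endurance_tb / tb_per_day                  # ~850 days at this rate
    write_amplification = (host_program_pages + ftl_program_pages) / host_program_pages

    print(f"Host writes so far: {host_written_tb:.1f} TB over {days_powered:.0f} days")
    print(f"Average host writes: {tb_per_day * 1000:.0f} GB/day")
    print(f"Projected life at 100 TBW: {projected_life_days:.0f} days (~{projected_life_days / 365:.1f} years)")
    print(f"Approximate write amplification: {write_amplification:.2f}x")

On those numbers the host is averaging roughly 118 GB/day, well above the ~20-24 GB/day estimated from iotop in post 9, and the drive projects to a little over two years of life at that rate rather than five - broadly consistent with post 10's reading of attribute 202 as 15% of lifetime used (normalised value 085 remaining) rather than 15% remaining.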