Everything posted by LammeN3rd

  1. To be honest, I don't think it makes real sense to test more than the first 10% of an SSD; this would bypass the issue on all but completely empty SSDs. And SSDs don't show any speed difference between positions of used flash when doing a 100% read speed test; for a spinning disk this makes total sense, but from a flash perspective a read workload performs the same everywhere, as long as there is data there.
  2. You could look at the used space at the drive level, but that's not that easy when drives are used in a BTRFS RAID other than two drives in RAID 1. NVMe drives usually report namespace utilisation, so looking at that number and testing only the Namespace 1 utilisation would do the trick. This is the graph from one of my NVMe drives: and this is the used space (274 GB):
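The idea in the two posts above can be sketched as a small helper that, given a drive's reported namespace utilisation, limits a read benchmark to the region that actually holds data. This is purely illustrative (the function name and the 10% floor are assumptions, not part of any Unraid tool):

```python
def benchmark_range(capacity_bytes, utilization_bytes, floor_fraction=0.10):
    """Return (start, length) of the region a read benchmark should cover.

    Only the utilised part of the namespace holds real data; reads beyond it
    hit trimmed blocks and just measure the interface speed. As a fallback,
    still test at least the first `floor_fraction` of the drive.
    """
    floor = int(capacity_bytes * floor_fraction)
    # Never test past the end of the drive, never test less than the floor.
    length = max(min(utilization_bytes, capacity_bytes), floor)
    return 0, length

# Example: a 1 TB namespace with 274 GB in use -> test the first 274 GB.
start, length = benchmark_range(1_000_000_000_000, 274_000_000_000)
```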
  3. Just to make sure I understand it right: the flat line that basically indicates the max interface throughput is trimmed (empty) space on the SSD? Yes, and the controller or interface is probably the bottleneck.
  4. That's the result of TRIM. When data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased; the controller does that and marks those blocks/pages as zeroes. When you try to read from these blocks, the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is the reason SSDs used in the Unraid parity array cannot be used with TRIM, since that would invalidate the parity.
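A toy model of the behaviour described above, assuming a much-simplified controller (real flash translation layers are far more complex; the class and method names are made up for illustration):

```python
class ToySSD:
    """Minimal sketch of why reads from trimmed blocks look 'too fast':
    the controller answers them from metadata without touching flash."""

    def __init__(self, num_blocks, block_size=4096):
        self.block_size = block_size
        self.flash = {}                         # block -> data, written blocks only
        self.trimmed = set(range(num_blocks))   # everything starts trimmed/empty

    def write(self, block, data):
        self.flash[block] = data
        self.trimmed.discard(block)

    def trim(self, block):
        # TRIM: mark the block free; its contents no longer need to be kept.
        self.flash.pop(block, None)
        self.trimmed.add(block)

    def read(self, block):
        if block in self.trimmed:
            # No flash access at all -- just send back zeroes.
            return bytes(self.block_size)
        return self.flash[block]
```

This is also why parity breaks: the parity drive still holds the old data's parity, while the trimmed data drive now returns zeroes.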
  5. Hi, I would recommend against this workaround. Besides the complications and performance impact of running this VM, the main reason not to do this and run a router in bridge mode is that you lose access to your Unraid server if anything goes wrong, whether that's the routing VM or anything with Unraid / the hardware. It sounds like a too-complicated solution for a simple problem.
  6. Please add this to the help notes in Unraid. When I started with Unraid I googled around for this info; having it in the help text will help a lot of new users.
  7. Squid is 50!

     Happy birthday Squid!!
  8. Found a few more places with the same issue. These are all the places I found: opening the log from the top right corner; opening the log of an individual VM; Settings --> VM Manager --> View Libvirt Log; opening the Console from a Docker (both from the Docker tab and the Dashboard). The following places do not have the issue and work as normal: opening the Terminal from the top right corner; opening a Docker log from the Docker tab. If I find any others I will update.
  9. The system log under the Tools tab works fine, so no reason to switch browsers just yet.
  10. In Safari, opening the log window (top right corner) shows the login screen; the terminal and the logs from Docker containers work as normal. In Firefox the log window opens and shows the log like in 6.7.2.
  11. I suggest looking at your BIOS! From the above log it seems you are running a 2011 version; the last version available is from Feb 2019. For my PowerEdge T630 the BIOS from Feb 2019 fixed my issue!
  12. Quick update: I've done a lot of testing, removing plugins and disabling virtual machines and Docker, before I rolled back the firmware of my Intel NIC (i350) and the BIOS that I had updated recently. After the BIOS downgrade to 2.9.1 I've not seen the issue for days; upgrading to 2.10.5 brings back the crashes! I've not reinstalled my plugins so I can't be 100% sure it's the BIOS, but if anyone else has this issue I highly recommend looking at your BIOS!!
  13. Before unRaid 6.7.x I used AFP but had a lot of issues; since the introduction of unRaid 6.7.x, Time Machine over SMB works flawlessly!
  14. I've been noticing the same kind of kernel warnings; the system keeps running though. (I also run Unraid on a Dell PowerEdge T630.) I've been running 6.7.2 since its release and these errors are more recent, so it can't be due to a core Unraid change; only updated plugins or Dockers are suspect. Oct 17 06:09:47 unRAID kernel: WARNING: CPU: 18 PID: 0 at net/netfilter/nf_nat_core.c:420 nf_nat_setup_info+0x6b/0x5fb [nf_nat] Oct 17 06:09:47 unRAID kernel: Modules linked in: xt_CHECKSUM veth ipt_REJECT ip6table_mangle ip6table_nat nf_nat_ipv6 xt_nat iptable_mangle ip6table_filter
  15. I've been noticing the same messages in my Unraid log: everything seems to keep working, but these errors keep coming back. Any ideas?
  16. Write amplification is *&%^$#& as well!
  17. As long as you have anything but Intel SSDs: these tend to go read-only when they reach their predicted life, supposedly to protect the customer, really to protect Intel's sales!
  18. The English datasheet says 200 TBW; you can always try to claim warranty based on the German site... I found myself some cheap enterprise NVMe drives on eBay. I highly recommend them: power-loss protection, very consistent write speed, and above all else 1366 TBW.
  19. It could be an option to add a check to the Update Assistant from the Fix Common Problems plugin; that community tool already supports plugin compatibility.
  20. Same here, just with 4x NVMe in RAID 1, and both Sonarr and Plex use /mnt/user/appdata. I've been running 6.7.1 RC, and 6.7.0 before that, and never had any corruption issues!
  21. There is a plugin to disable them link: plugin-disable-security-mitigations
  22. Already 13 hours of uptime on 6.7.1-rc2, no issues, everything runs great! Really like the new draggable Dashboard fields; it would be even better if they could move between tables!
  23. Upgraded to 6.7.1-rc1 this morning, no issues so far! Really appreciate the quick inclusion of this patch!