LammeN3rd

Members
  • Content Count: 73
  • Joined
  • Last visited
  • Days Won: 1

LammeN3rd last won the day on April 30 2018

LammeN3rd had the most liked content!

Community Reputation

19 Good

1 Follower

About LammeN3rd

  • Rank: Advanced Member
  • Birthday April 30

Converted

  • Gender: Male
  • Location: Amsterdam

Recent Profile Visitors

1058 profile views
  1. To be honest, I don't think it makes real sense to test more than the first 10% of an SSD; that would bypass this issue on all but completely empty SSDs. SSDs don't show any speed difference between positions of used flash when doing a 100% read speed test. For a spinning disk testing the whole surface makes total sense, but from a flash perspective a read workload performs the same as long as there is data there (a minimal read-test sketch follows the post list).
  2. You could have a look at the used space on a drive level, but that's not that easy when drives are used in a BTRFS RAID other than two drives in RAID 1. NVMe drives usually report namespace utilisation, so looking at that number and testing only the Namespace 1 utilisation would do the trick (see the namespace-utilisation sketch after the post list). This is the graph from one of my NVMe drives, and this is the used space (274GB):
  3. "Just to make sure I understand it right: the flat line that basically indicates the max. interface throughput is trimmed (empty) space on the SSD?" Yes, the controller or interface is probably the bottleneck.
  4. That's the result of trim. When data is deleted from a modern SSD, trim is used to tell the controller that those blocks are free and can be erased; the controller does that and marks those blocks/pages as zeroes. When you try to read from those blocks, the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is the reason SSDs used in the Unraid parity array cannot be trimmed, since that would invalidate the parity (a small XOR parity example follows the post list).
  5. Hi, I would recommend against this workaround. Besides the complications and performance impact of running this VM, the main reason not to do this and run your router in bridge mode is that you lose access to your Unraid server if anything at all goes wrong; that could be the routing VM or anything with Unraid / the hardware. It sounds like too complicated a solution for a simple problem.
  6. Please add this to the help notes in Unraid. When I started with Unraid I googled around for this info, and having it in the help text would help a lot of new users.
  7. Squid is 50!

     Happy birthday, Squid!!
  8. Found a few more places with the same issue. These are all the places I found:
     • opening the log from the top right corner
     • opening the log of an individual VM
     • Settings --> VM Manager --> View Libvirt LOG
     • opening the Console from a Docker container (both from the Docker tab and the Dashboard)
     The following places do not have the issue and work as normal:
     • opening the Terminal from the top right corner
     • opening a Docker log from the Docker tab
     If I find any others I will update this post.
  9. The system log under the Tools tab works fine, so no reason to switch browsers just yet.
  10. In Safari, opening the log window (top right corner) shows the login screen; the terminal and/or the logs from Docker containers work as normal. In Firefox the log window opens and shows the log like in 6.7.2.
  11. I suggest looking at your BIOS! From the above log it seems you are running a 2011 version; the latest version available is from Feb 2019: https://www.dell.com/support/home/nl/nl/nlbsdt1/drivers/driversdetails?driverid=0f4yy&oscode=ws8r2&productcode=poweredge-r710 For my PowerEdge T630 the BIOS from Feb 2019 fixed my issue!
  12. Quick update: I've done a lot of testing, removing plugins and disabling virtual machines and Docker, before I rolled back the firmware of my Intel NIC (i350) and the BIOS that I had updated recently. After the BIOS downgrade to 2.9.1 I've not seen the issue for days; upgrading to 2.10.5 brings back the crashes! I've not reinstalled my plugins, so I can't be 100% sure it's the BIOS, but if anyone else has this issue I highly recommend looking at your BIOS!!
  13. Before unRaid 6.7.x I used AFP but had a lot of issues with it; since the introduction of unRaid 6.7.x, Time Machine over SMB works flawlessly!
  14. I've been noticing the same kind of kernel warnings; the system keeps running though. (I also run Unraid on a Dell PowerEdge T630.) I've been running 6.7.2 since its release and these errors are more recent, so it can't be due to a core Unraid change; only updated plugins or dockers are suspect.
      Oct 17 06:09:47 unRAID kernel: WARNING: CPU: 18 PID: 0 at net/netfilter/nf_nat_core.c:420 nf_nat_setup_info+0x6b/0x5fb [nf_nat]
      Oct 17 06:09:47 unRAID kernel: Modules linked in: xt_CHECKSUM veth ipt_REJECT ip6table_mangle ip6table_nat nf_nat_ipv6 xt_nat iptable_mangle ip6table_filter
  15. I've been noticing the same messages in my Unraid log: everything seems to keep working, but these errors keep coming back. Any ideas?
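For post 1: a minimal sketch (not from the original post; the device path and chunk size are just placeholders) of limiting a sequential read test to the first 10% of a drive. A real benchmark would use direct I/O so the Linux page cache doesn't inflate the numbers; this only illustrates restricting the tested range.

```python
import os
import time

# Placeholder device path; point this at the disk you want to test (requires root).
DEVICE = "/dev/nvme0n1"
FRACTION = 0.10          # only test the first 10% of the device
CHUNK = 1024 * 1024      # read in 1 MiB chunks


def read_speed(device: str, fraction: float) -> float:
    """Sequentially read the first `fraction` of `device` and return MB/s."""
    fd = os.open(device, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)   # block devices report their size here
        os.lseek(fd, 0, os.SEEK_SET)
        target = int(size * fraction)
        done = 0
        start = time.monotonic()
        while done < target:
            buf = os.read(fd, min(CHUNK, target - done))
            if not buf:                        # reached end of device early
                break
            done += len(buf)
        elapsed = time.monotonic() - start
        return done / elapsed / 1e6
    finally:
        os.close(fd)


if __name__ == "__main__":
    print(f"~{read_speed(DEVICE, FRACTION):.0f} MB/s over the first {FRACTION:.0%}")
```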
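For post 2: a rough sketch of reading the namespace utilisation programmatically via nvme-cli, assuming the nvme tool is installed and that its JSON output exposes the nuse, flbas and lbafs fields (field names can vary between nvme-cli versions, so treat this as an assumption rather than a guaranteed interface).

```python
import json
import subprocess

# Placeholder namespace device; adjust for your system (needs root and nvme-cli).
DEVICE = "/dev/nvme0n1"

# Identify Namespace data as JSON (assumed nvme-cli output format).
ns = json.loads(subprocess.check_output(
    ["nvme", "id-ns", DEVICE, "--output-format=json"]))

# NUSE = number of logical blocks the controller reports as currently in use.
nuse = ns["nuse"]

# The active LBA format (lower 4 bits of FLBAS) gives the logical block size.
# Field names assumed from nvme-cli's JSON output; verify on your version.
lba_index = ns["flbas"] & 0x0F
lba_size = 2 ** ns["lbafs"][lba_index]["ds"]

used_bytes = nuse * lba_size
print(f"Namespace utilisation: {used_bytes / 1e9:.0f} GB")
print("Limiting a read test to this range skips the trimmed/empty area.")
```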
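For post 4: a toy illustration (not Unraid code) of why trimming a parity-protected SSD invalidates parity. Parity is the XOR of the data drives, so if a trimmed drive silently starts returning zeroes while the parity drive is never updated, the stored parity no longer matches the data.

```python
from functools import reduce

def xor_parity(blocks):
    """Parity block computed as the byte-wise XOR of all data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data drives, each holding one tiny 4-byte block in this toy example.
drives = [b"\x11\x22\x33\x44", b"\xaa\xbb\xcc\xdd", b"\x01\x02\x03\x04"]
parity = xor_parity(drives)          # what the parity drive stores

# TRIM: drive 1's block is discarded and the controller now returns zeroes,
# but the parity drive was never told about the change.
drives[1] = b"\x00\x00\x00\x00"

# The stored parity no longer matches the data the drives return.
print("parity still valid?", xor_parity(drives) == parity)   # -> False
```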