LammeN3rd

Members

  • Posts: 79
  • Joined
  • Days Won: 1

LammeN3rd last won the day on April 30, 2018

LammeN3rd had the most liked content!

1 Follower

About LammeN3rd

  • Birthday: April 30

Converted

  • Gender: Male
  • Location: Amsterdam

Recent Profile Visitors

  1701 profile views

LammeN3rd's Achievements

  Rookie (2/14)

Reputation: 20

  1. BTW, the HBA330 and H330 are basically the same card from a HW perspective. The HBA330 is the better card for Unraid when you use SAS drives because it has a much higher queue depth; for SATA drives it does not really matter (see the queue-depth sketch after this list).
  2. All drives are connected via a 26-port SAS expander backplane (8 ports to the H330 and 18 for disks). The system fully supports SAS, but I only use SATA drives and have never had any issue with spindown (one of the main reasons I use Unraid; without aggressive spin-down the average power usage is at least 50% higher). I still run 6.9.2 (the server is remote, and until I have the iKVM connected again I don't dare to do remote upgrades). P.S. The HW is a Dell PowerEdge T630, so that should be extremely similar to your R530. All the firmware is up to date, and I have been running this system for about 5 years without ever seeing any of the issues you describe.
  3. I don't have an HBA330, but I have been running Unraid on an H330 in HBA mode for the last 5 years and have never had this issue. No special settings or mods, just a fairly plain 6.9.2. No matter how I start the parity check, I get around 200 MB/s per disk (as long as there is no other IO).
  4. Is there a compelling reason for showing a red error notification when there is an update for the plugin? From my perspective this should not be red, since it's just a plugin update, not something really bad.... There have been quite a few updates in the last couple of weeks, and I still jump every time I see a red error 😬
  5. Great to hear! I've switched my UniFi Docker back to macvlan and will report back if the issue comes up again.
  6. To be honest, I don't think it makes much sense to test more than the first 10% of an SSD; that would bypass this issue on all but completely empty SSDs. SSDs also show no speed difference between positions of used flash during a 100% read speed test: for a spinning disk position makes total sense, but for flash a read workload performs the same as long as there is data there (a sketch of reading only the first 10% follows after this list).
  7. You could have a look at the used space at a drive level, but that's not that easy when drives are used in BTRFS RAID other than 2 drives in RAID1. NVMe drives usually report namespace utilisation, so looking at that number and testing only the Namespace 1 utilisation would do the trick (see the nuse sketch after this list). This is the graph from one of my NVMe drives: and this is the used space (274GB):
  8. "Just to make sure I understand it right. The flat line that basically indicates the max. interface throughput is trimmed (empty) space on the SSD?" Yes, and the controller or interface is probably the bottleneck.
  9. That's the result of trim: when data is deleted from a modern SSD, trim tells the controller that those blocks are free and can be erased; the controller does that and marks those blocks / pages as zeroes. When you then try to read those blocks, the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is also why SSDs used in the Unraid parity array cannot use trim, since that would invalidate the parity (see the toy XOR example after this list).
  10. Hi, I would recommend against this workaround. Besides the complications and performance impact from running this VM, the main reason not to do this and put your router in bridge mode is that you lose access to your Unraid server if anything goes wrong with anything: that could be the routing VM or anything with Unraid / the hardware... It sounds like a too-complicated solution for a simple problem...
  11. Please add this to the help notes in Unraid. When I started with Unraid I googled around for this info, and having it in the help text would help a lot of new users.
  12. Squid is 50!

      Happy birthday Squid!!
  13. Found a few more places with the same issue. These are all the places I found:
      • opening the log from the top right corner
      • opening the log of an individual VM
      • Settings --> VM Manager --> View Libvirt LOG
      • opening the Console from a Docker (both from the Docker tab as well as the Dashboard)
      The following places do not have the issue and work as normal:
      • opening the Terminal from the top right corner
      • opening a Docker log from the Docker tab
      If I find any others I will update.
  14. The system log under the Tools tab works fine, so there is no reason to switch browsers just yet.
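
A note on the queue-depth point in item 1: on Linux the per-device SCSI queue depth is exposed under /sys/block/<dev>/device/queue_depth, so comparing drives behind an H330 and an HBA330 only takes a few lines of Python. This is a rough sketch for a Linux host; devices without that sysfs attribute (md, loop, NVMe namespaces) are simply skipped.

    # Minimal sketch: print the queue depth reported for each block device.
    # Devices without a SCSI queue_depth attribute are skipped.
    from pathlib import Path

    for dev in sorted(Path("/sys/block").iterdir()):
        qd_file = dev / "device" / "queue_depth"
        if qd_file.exists():
            print(f"{dev.name}: queue depth {qd_file.read_text().strip()}")

If the observation in item 1 holds, the values reported behind an HBA330 should be noticeably higher than behind an H330 in HBA mode, which matters mostly for SAS drives.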
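For item 6, a rough way to benchmark only the first 10% of a drive is to read that region sequentially and time it. A minimal sketch, assuming a Linux block device; /dev/sdX is a placeholder and reading a raw device normally requires root.

    # Minimal sketch: sequentially read the first 10% of a block device and report MB/s.
    import os, time

    DEVICE = "/dev/sdX"          # placeholder device path
    CHUNK = 4 * 1024 * 1024      # 4 MiB per read

    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)   # total device size in bytes
        os.lseek(fd, 0, os.SEEK_SET)
        target = size // 10                   # only the first 10%
        done = 0
        start = time.monotonic()
        while done < target:
            data = os.read(fd, min(CHUNK, target - done))
            if not data:
                break
            done += len(data)
        elapsed = time.monotonic() - start
        print(f"read {done / 1e6:.0f} MB in {elapsed:.1f} s ({done / 1e6 / elapsed:.0f} MB/s)")
    finally:
        os.close(fd)

Note that this goes through the page cache, so for a realistic number you would want to drop caches first or use direct IO.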
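For item 7, the namespace utilisation (nuse) can be read with nvme-cli and used to limit a test to the space that actually holds data. A minimal sketch; the device name is a placeholder, and the JSON field names are assumed to match what nvme-cli prints.

    # Minimal sketch: estimate how much of NVMe namespace 1 is reported as in use (nuse).
    # Assumes nvme-cli is installed; DEV is a placeholder namespace.
    import json, subprocess
    from pathlib import Path

    DEV = "nvme0n1"

    ns = json.loads(subprocess.check_output(
        ["nvme", "id-ns", f"/dev/{DEV}", "--output-format=json"]))
    block = int(Path(f"/sys/block/{DEV}/queue/logical_block_size").read_text())

    used_bytes = ns["nuse"] * block           # nuse is counted in logical blocks
    print(f"{DEV}: ~{used_bytes / 1e9:.0f} GB of namespace 1 reported as utilised")

On the drive from item 7 this should roughly match the 274GB of used space mentioned there.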
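Item 9's point about trim invalidating parity can be shown with a toy XOR example (single parity is effectively an XOR across the data disks). If a trimmed block silently starts reading back as zeroes, the stored parity no longer matches:

    # Toy example: XOR parity over two data blocks, then simulate a trim that
    # zeroes one block without the array knowing about it.
    disk1 = bytes([0b10110010])
    disk2 = bytes([0b01101100])

    parity = bytes(a ^ b for a, b in zip(disk1, disk2))   # what the parity disk stores

    # Trim: the SSD controller now returns zeroes for disk2's block.
    disk2_after_trim = bytes(len(disk2))

    recomputed = bytes(a ^ b for a, b in zip(disk1, disk2_after_trim))
    print("parity still valid:", recomputed == parity)    # -> False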