I think this is a long overdue change! Chasing more and more new users is just not sustainable in the long run. All current users being grandfathered in is great, but as a user of a Pro licence for the last 6 years I would recommend looking at the model of Nabu Casa (Home Assistant) and developing additional services for Unraid that do require a subscription for legacy users. I love Unraid and hope to use it for many more years; the company behind it being healthy is crucial to make that happen.
-
When It's Done™️ 😅
-
LammeN3rd started following Secure remote access service for Unraid, Docker and VMs
-
Secure remote access service for Unraid, Docker and VMs
LammeN3rd posted a topic in Feature Requests
I would love to have a secure way to access my Unraid server, Docker containers and VMs remotely without a VPN. My server is great, but I lack remote access to Docker containers and VMs. Secure full access to my Unraid server would be a service I would happily pay a few euros per month for.
Sounds like a great idea. Having all changes in a single topic without any noise is very nice, and it makes sure bug reports are where they need to be and not buried somewhere and forgotten.
-
All drives are connected via a 26-port SAS expander backplane (8 ports to the H330 and 18 for disks). The system fully supports SAS, but I only use SATA drives and have never had any issue with spin-down (one of the main reasons I use unRAID; without aggressive spin-down the average power usage is at least 50% higher). I still run 6.9.2 (the server is remote, and until I have the iKVM connected again I don't dare to do remote upgrades). P.S. The HW is a Dell PowerEdge T630, so that should be extremely similar to your R530. All the firmware is up to date, but I have been running this system for about 5 years and never had any of the issues you describe.
-
Is there a compelling reason for showing a red error notification when there is an update for the plugin? From my perspective this should not be red, since it's just a plugin update, not something really bad. There have been quite a few updates in the last couple of weeks and I still jump every time I see a red error 😬
-
Great to hear! I've switched my UniFi Docker back to macvlan and will report back if the issue comes up again.
-
I was wondering the same thing
-
LammeN3rd started following Warning: Unraid Servers exposed to the Internet are being hacked
-
LammeN3rd started following Data Integrity Enabled by Default on BTRFS Array File Systems? and Data Loss Problems
-
DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.7
LammeN3rd replied to jbartlett's topic in Docker Containers
To be honest, I don't think it makes real sense to test more than the first 10% of an SSD; that would bypass this issue on all but completely empty SSDs. And SSDs don't show any speed difference between positions of used flash in a 100% read speed test: for a spinning disk the position makes total sense, but from a flash perspective a read workload performs the same as long as there is data there.
DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.7
LammeN3rd replied to jbartlett's topic in Docker Containers
You could have a look at the used space on a drive level, but that's not that easy when drives are used in a BTRFS RAID other than 2 drives in RAID 1. NVMe drives usually report namespace utilisation, so looking at that number and testing only the Namespace 1 utilisation would do the trick. This is the graph from one of my NVMe drives, and this is the used space (274 GB):
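As a rough sketch of that idea (assuming the NUSE field that `nvme id-ns` reports, which counts namespace utilisation in logical blocks), a benchmark could cap its test range at the utilised capacity instead of scanning trimmed, empty flash; the numbers below are hypothetical:

```python
def benchmark_limit_bytes(nuse_blocks: int, lba_size: int = 512) -> int:
    """Convert NVMe namespace utilisation (NUSE, counted in logical
    blocks) into a byte offset; a read benchmark could stop there
    instead of reading past the data into trimmed space."""
    return nuse_blocks * lba_size

# Hypothetical example: a namespace reporting ~535 million 512-byte
# blocks in use holds roughly 274 GB of real data.
print(benchmark_limit_bytes(535_156_250) // 10**9)  # 274 (GB)
```

The LBA size is an assumption here; real drives may be formatted with 4096-byte sectors, so it would need to be read from the drive alongside NUSE.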
DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.7
LammeN3rd replied to jbartlett's topic in Docker Containers
Just to make sure I understand it right: the flat line that basically indicates the max interface throughput is trimmed (empty) space on the SSD? Yes, and the controller or interface is probably the bottleneck.
DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.7
LammeN3rd replied to jbartlett's topic in Docker Containers
That's the result of TRIM. When data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased; the controller does that and marks those blocks/pages as zeroes. When you try to read from those blocks, the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is also the reason SSDs used in the Unraid parity array cannot use TRIM, since that would invalidate the parity.
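A minimal sketch of why that invalidates parity, assuming a single XOR-parity array (which is how a first parity drive is commonly computed):

```python
def xor_parity(*blocks: bytes) -> bytes:
    """Single parity is the bytewise XOR of all data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

disk1 = bytes([0xAA, 0xBB, 0xCC, 0xDD])
disk2 = bytes([0x11, 0x22, 0x33, 0x44])
parity = xor_parity(disk1, disk2)

# TRIM on disk1: the SSD now returns zeroes for those blocks,
# but the parity drive was never updated to match.
trimmed_disk1 = bytes(4)
assert xor_parity(trimmed_disk1, disk2) != parity  # parity no longer checks out
```

With parity stale like this, rebuilding a failed disk2 from parity and the trimmed disk1 would reconstruct garbage, which is the danger being described.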
Share VM local lan network access to Unraid Host
LammeN3rd replied to jaddel's topic in General Support
Hi, I would recommend against this workaround. Besides the complications and performance impact of running this VM, the main reason not to do it (and to use a router in bridge mode instead) is that you lose access to your Unraid server if anything goes wrong, whether that's the routing VM or anything with Unraid or the hardware. It sounds like a too-complicated solution for a simple problem.