
testdasi

Members
  • Posts

    2,812
  • Joined

  • Last visited

  • Days Won

    17

Everything posted by testdasi

  1. Assuming you didn't change any networking settings from default, that sounds like your VPN server doesn't support port-forwarding.
  2. With regards to the EverCool 2x5.25" -> 3x3.5" adapter: I have two, and the biggest problem with it is that the fan is very ineffective at cooling 3x3.5" drives, i.e. the disks run hot (very hot). You can potentially replace the fan, but it's not easy to find thin, small fans with high static pressure.
  3. TRIM is turned off in the array, so for an SSD it's almost always better to use it in the cache pool, where write performance is important (e.g. VM vdisks, docker, appdata etc.). The Unraid cache pool has evolved into being pretty essential for the full Unraid experience, so you should have a cache disk anyway.
  4. Unassigned Devices (i.e. this plugin) is the right tool. Install it and mount your volume. It should show up under /mnt/disks. Just in case: make sure you don't include your existing drive in the array / cache pool or there's a risk it will be erased. What the statement means is that if you format a disk as (for example) xfs, you can include it in the array without Unraid asking you to reformat the disk (and thus keep your existing data on the disk). You will still have to rebuild parity.
  5. I have 2 VMs on the same bridge and they can ping each other. What's your ping output?
  6. And naturally, this is my 1000th post. 😅
  7. Your use case is too niche and specific for anyone to give you a concrete yay or nay, really. What I can tell you is: It is possible to daisy-chain (for lack of a better term) VM internet connections, i.e. use one VM as the gateway for the other VM. It is possible to use one VM as a proxy for another VM (e.g. using Privoxy). It is possible to pass through one NIC to each VM and physically plug one VM's NIC into the other VM's NIC to "share" Internet over a physical connection - similar to how a router works (assuming IOMMU and pass-through support, of course). I have done all 3 in the past, so I'm pretty sure they work with the right setup. What I can NOT tell you is whether your firewall management will work, because I have never done it. It looks like you might need some advanced networking config know-how.
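For the first option (one VM as gateway for the other), the core of the setup on the gateway VM is just IP forwarding plus NAT. A minimal sketch, assuming a Linux gateway VM with eth0 facing the Internet and eth1 facing the second VM (the interface names and the 192.168.100.1 address are illustrative assumptions, not from the post):

```shell
# Run inside the hypothetical gateway VM (requires root).
sysctl -w net.ipv4.ip_forward=1                       # let this VM route packets
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE  # NAT outbound traffic
# On the second VM, point the default route at the gateway VM:
#   ip route add default via 192.168.100.1
```

These changes are not persistent across reboots; on a real setup you would put them in sysctl.conf and your firewall rules file.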
  8. Did you change the profile under the Intel Turbo Boost section of the plugin?
  9. Constantly spinning up and down is generally way worse than constantly on.
  10. Brilliant freaking idea! 😱 Using rclone + gdrive I can essentially do this (and more) without needing VPN or exposing my server to the Internet.
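For anyone wanting to replicate this, the rclone workflow is roughly: configure a Google Drive remote once, then copy or mount as needed. A hedged sketch (the remote name gdrive and the paths are examples, not from the post):

```shell
rclone config                                  # interactive: create a "gdrive" remote
rclone copy /mnt/user/backups gdrive:backups   # push a share to Google Drive
rclone mount gdrive: /mnt/disks/gdrive &       # or mount the remote like a local disk
```

Nothing is exposed inbound; rclone makes outbound HTTPS connections only, which is what removes the need for a VPN or port-forwarding.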
  11. testdasi

    6.7.3 RC?

    Just noticed some bug reports about 6.7.3 RC but don't remember seeing announcement for it. Did I miss something?
  12. Repeated disk spin-up is an absolute pain in the backside to diagnose. You might want to install the File Activity plugin and see if anything is accessing something that may be on that disk. And then try running one docker at a time to see if it helps isolate which docker is the culprit. Otherwise there's no way to know. With regards to your other errors: Ignore the DHCP warning. It basically says your IP is not being renewed by your DHCP server (usually the router), so Unraid is manually triggering renewal. As long as you don't have any network issues, you can probably ignore it. You can use a static IP to see if it helps reduce the warning. Try pcie_aspm=off, but as far as I know those PCIe errors are generally harmless, so you can probably ignore them too.
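For reference, pcie_aspm=off is a kernel boot parameter; on Unraid it goes on the append line of the flash drive's syslinux config (editable via Main -> Flash -> Syslinux Configuration in the webGUI). A sketch of what the edited stanza in /boot/syslinux/syslinux.cfg would look like, assuming an otherwise default config:

```
label Unraid OS
  menu default
  kernel /bzroot
  append pcie_aspm=off initrd=/bzroot
```

A reboot is required for the parameter to take effect.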
  13. You are overthinking it a bit. 😅 Your CPU is not a chiplet design so it's much simpler:
      • Keep core 0 free for Unraid.
      • Preferably also keep the HT sister of core 0 free, but if you really can't avoid it, it's not that big of a deal as long as it doesn't get loaded 100% all the time.
      • Isolate CPU cores for the VM - which you have already done.
      • Use 1 logical core (i.e. thread) for the emulator per VM. From my experience, using more than 1 thread per VM for the emulator does absolutely nothing. You can use the HT sister of core 0 for the emulator but should not for best performance.
      That's about it really.
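For illustration, this pinning advice ends up in the cputune section of the VM's libvirt XML (Unraid VM settings, XML view). A sketch assuming a hypothetical 4-core/8-thread CPU where threads 0 and 4 are core 0's HT siblings - adjust the cpuset numbers to your own topology:

```xml
<vcpu placement='static'>6</vcpu>
<cputune>
  <!-- core 0 (threads 0 and 4) left free for Unraid -->
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='6'/>
  <vcpupin vcpu='4' cpuset='3'/>
  <vcpupin vcpu='5' cpuset='7'/>
  <!-- a single thread for the emulator; here core 0's HT sister (works, but not ideal) -->
  <emulatorpin cpuset='4'/>
</cputune>
```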
  14. Sharing storage? Potentially, by using Unassigned Devices to mount SMB shares. Other kinds of resources? Not really.
  15. Based on a previous user report, probably won't work as a single GPU.
  16. There is nothing of note on 29 Jul. However, on previous days you have a lot of these:
      Jul 25 02:48:43 Tower kernel: bond0: link status definitely down for interface eth0, disabling it
      Jul 25 02:48:43 Tower kernel: bond0: making interface eth1 the new active one
      Jul 25 02:48:43 Tower kernel: device eth0 left promiscuous mode
      Jul 25 02:48:43 Tower kernel: device eth1 entered promiscuous mode
      Jul 25 02:48:43 Tower kernel: e1000e: eth1 NIC Link is Down
      Jul 25 02:48:43 Tower kernel: bond0: link status definitely down for interface eth1, disabling it
      Jul 25 02:48:43 Tower kernel: device eth1 left promiscuous mode
      Jul 25 02:48:43 Tower kernel: bond0: now running without any active interface!
      That suggests a NIC problem and/or router problem and/or cable problem. Given you said there were issues on 29 Jul but nothing was seen in the syslog, I would assume it's more likely a router / cable problem.
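A quick way to pull these bonding events out of a syslog is a simple grep. Sketched here against an inlined sample built from the quoted lines, so the snippet is self-contained; on a live server you would grep /var/log/syslog instead:

```shell
# Build a small sample from the lines quoted above.
cat > /tmp/syslog.sample <<'EOF'
Jul 25 02:48:43 Tower kernel: bond0: link status definitely down for interface eth0, disabling it
Jul 25 02:48:43 Tower kernel: e1000e: eth1 NIC Link is Down
Jul 25 02:48:43 Tower kernel: bond0: now running without any active interface!
Jul 25 02:49:01 Tower sshd[123]: unrelated line
EOF
# Count link-flap events (matches the bond0 and NIC Link lines) -> prints 3.
grep -cE 'bond0|NIC Link' /tmp/syslog.sample
```

Clusters of such lines at the same timestamp, as above, point at the physical layer rather than at Unraid itself.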
  17. I would recommend starting things from scratch. I haven't found any tool that reliably does that.
  18. Install the Tips and Tweaks plugin for a non-command-line way to do it.
  19. How would you describe the noise level and characteristics? I understand it's in the basement so it's not an issue for you, but I think it would be useful for others who may be considering building a similar server but don't have isolated spaces.
  20. My PM983 is slower in CDM. However, it's a 3.84TB model so it might very well be due to the larger capacity (same controller + larger capacity = generally slower). In real life though, I have not noticed any diff.
      In terms of percentage used, both of mine are still at 0% so I have no basis to assess these 2. For the Intel 750s I have though:
      • Intel 750 1.2TB A: actual 104 TBW / rated 127 TBW (yes, 127!) = 82%. The 750 has the same rated TBW for the 400GB, 800GB and 1.2TB models, which doesn't make sense, so assuming the rating was conservatively done based on the 400GB model: actual 104 TBW / rated 381 TBW = 27%; percentage used = 3%.
      • Intel 750 1.2TB B: actual 47 TBW / rated 381 TBW = 12%; percentage used = 1%.
      Considering 104 = 2.2 x 47, it kinda matches 3% vs 1% (taking into account rounding diff). Based on that, the more realistic TBW rating for the 750 should be about 4000 TB (or about 1.8 DWPD for 5 years).
      So I guess the conclusion is that actual percentage used and actual TBW scale rather well, at least for the same model. However, you can't use the manufacturer-rated TBW to estimate your expected percentage used. Manufacturers just don't necessarily rate their drives realistically.
      I notice that none of my SSDs appears to exhibit your rather extreme level of write amplification. The read:write ratios are:
      • Intel 750 A: 1.02
      • Intel 750 B: 1.04
      • Samsung 970 EVO: 2.04
      • Samsung PM983: 3.81
      (1) and (2) spent a significant portion of their life as my cache drives; however, I am very particular about separating write-intensive and read-intensive data (e.g. the PM983 is almost exclusively write-once-read-many while the cache is most of the time write-once-read-once), as well as following good practices e.g. regular trimming, soft over-provisioning etc. So at least anecdotally, I very much trust my methods.
      CDM bench:
      PM983 3.84TB
      Sequential Read (Q=32, T=1): 3043.498 MB/s
      Sequential Write (Q=32, T=1): 1433.328 MB/s
      Random Read 4KiB (Q=32, T=1): 291.462 MB/s [71157.7 IOPS]
      Random Write 4KiB (Q=32, T=1): 264.787 MB/s [64645.3 IOPS]
      Sequential Read (T=1): 1566.127 MB/s
      Sequential Write (T=1): 1392.059 MB/s
      Random Read 4KiB (Q=1, T=1): 24.914 MB/s [6082.5 IOPS]
      Random Write 4KiB (Q=1, T=1): 64.947 MB/s [15856.2 IOPS]
      970 EVO 2TB
      Sequential Read (Q=32, T=1): 3543.933 MB/s
      Sequential Write (Q=32, T=1): 2499.500 MB/s
      Random Read 4KiB (Q=32, T=1): 282.467 MB/s [68961.7 IOPS]
      Random Write 4KiB (Q=32, T=1): 239.125 MB/s [58380.1 IOPS]
      Sequential Read (T=1): 1893.878 MB/s
      Sequential Write (T=1): 2411.367 MB/s
      Random Read 4KiB (Q=1, T=1): 37.470 MB/s [9147.9 IOPS]
      Random Write 4KiB (Q=1, T=1): 79.055 MB/s [19300.5 IOPS]
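The endurance arithmetic above can be sketched in a few lines. The numbers come from the post; the ~4000 TB "realistic" rating is the poster's own extrapolation, not a manufacturer figure:

```python
# Reproduce the endurance arithmetic from the post.

def pct(actual_tbw, rated_tbw):
    """Actual writes as a fraction of the rated endurance, in percent."""
    return 100 * actual_tbw / rated_tbw

def dwpd(rated_tbw, capacity_tb, years):
    """Drive Writes Per Day implied by a TBW rating over a warranty period."""
    return rated_tbw / (capacity_tb * 365 * years)

print(round(pct(104, 127)))   # 82  -- vs the official 1.2TB rating
print(round(pct(104, 381)))   # 27  -- vs 3x the 400GB rating (127 x 3 = 381)
print(round(pct(47, 381)))    # 12
# SMART "percentage used" (3% and 1%) implies a far higher real endurance:
print(round(104 / 0.03))      # ~3467 TB
print(round(47 / 0.01))       # 4700 TB  -> both roughly in the 4000 TB ballpark
print(round(dwpd(4000, 1.2, 5), 1))  # ~1.8 DWPD
```

The two drives imply slightly different "realistic" TBW figures (~3500 vs ~4700 TB) because SMART percentage-used is reported in whole percent, which is the rounding diff the post mentions.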
  21. Start a new topic - this is a different issue. A 110MB/s ceiling is likely due to a network limit.
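A ~110MB/s ceiling is the classic signature of saturated gigabit Ethernet; a quick back-of-the-envelope check (the ~10% protocol-overhead figure is a common rule of thumb, not from the post):

```python
# 1 Gbit/s expressed in MB/s, before protocol overhead.
line_rate_MBps = 1_000_000_000 / 8 / 1_000_000
print(line_rate_MBps)                # 125.0
# Ethernet/IP/TCP/SMB overhead typically eats ~10%, landing right around 110MB/s.
print(round(line_rate_MBps * 0.9))   # 112
```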
  22. When that happens again: Tools -> Diagnostics, attach the zip file here. If "all" your docker apps were stopped (i.e. red square icon) then it's likely something was done deliberately. First thing that comes to mind - do you have your server exposed to the Internet?
  23. My Samsung 970 Evo and PM983 work.