LeetDonkey

Members
  • Posts: 21
  • Joined
  • Last visited
  • Gender: Undisclosed

LeetDonkey's Achievements

Noob (1/14) · Reputation: 3

  1. I have updated the original post with some new measurements.
  2. I've used a Mellanox X3 with a fiber connection to a MikroTik switch and it worked fine. Now I'm using an Intel X710-DA2 with a DAC for reduced power consumption. I purchased a Dell-branded one on eBay, crossflashed it to original Intel firmware and removed the restrictions on which modules it will accept. TBH, I would go with the X710, even if it requires a little extra work to make it accept all modules, just for the sake of the reduced power consumption.
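     For anyone wanting to verify the result of a crossflash like this, the reported firmware and the module the port actually sees can be read with ethtool. A minimal sketch, assuming the port shows up as eth4 (adjust to your interface name):

        # driver, firmware-version and bus-info as reported after flashing
        ethtool -i eth4
        # dump the SFP/DAC module EEPROM to confirm the port accepts the module
        ethtool -m eth4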
  3. I believe it is accurate. I also have some smart plugs I could use for comparison, but this energy meter is somewhat expensive compared to smart plugs and endorsed by the national power companies, so I would assume it has some accuracy. In my Unraid server I have both an APC Smart-UPS, which is reasonably accurate, and a Corsair AX760i, which is definitely not accurate. But I would like to wait until I have a good sense of what hardware to use before taking it apart.
     I am actually considering the same switch you made, from my current 9305-24i (actually a 9306-24i, but it's the same hardware power-consumption-wise) to ASM1166. I have one ASM1166 coming in soon to get some indication of what kind of power figures I would be looking at. On the downside I would need 4 x ASM1166 combined with a carrier board with some kind of PCIe packet switch, as my board does not support bifurcation. The packet switch will probably use a decent amount of power, so I would probably end up with something that consumes about the same as my current controller. The only upside would be that it would support ASPM. What kind of controller did you have before? And how much did switching to the ASM1166 reduce your consumption?
     The second thing I considered was a 9500-16i, which supports ASPM, and then either replacing some 4TB drives with a 20TB to bring the SATA port count down to 16, or adding an ASM1166 as an extra PCIe controller, giving me a total of 22 ports where I currently need 19. But it all depends on the impact it will have on total consumption.
     I have jumpers to disable onboard audio and the ethernet ports; it did not make much of an impact, maybe 2W at most. I think the ethernet ports already go into low power mode when no cable is attached, so not a big impact, but everything counts.
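     As a side note on squeezing power out of SATA controllers like the ASM1166: besides ASPM on the PCIe side, the SATA link power policy matters too. A rough sketch of checking and changing it (the sysfs paths are standard for AHCI hosts; whether med_power_with_dipm is safe depends on your drives):

        # show the current SATA link power policy per host
        grep . /sys/class/scsi_host/host*/link_power_management_policy
        # switch one host to a more aggressive policy
        echo med_power_with_dipm > /sys/class/scsi_host/host0/link_power_management_policy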
  4. I use a power meter on the primary side of the power supply. My point is: IF the difference between C3 and C8 is only 2-3 watts, then chasing PCIe devices with ASPM support for the sole purpose of letting the CPU package go beyond C3 is probably not worth it. But something else might be in play here, which is why I am asking.
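     If it helps, powertop's Idle stats tab is one way to see this; another is turbostat, which reports how much time the package actually spends in each state. A quick sketch, assuming turbostat (from linux-tools) is available:

        # sample package C-state residency over 30 seconds;
        # look at the Pkg%pc3 / Pkg%pc6 / Pkg%pc8 columns
        turbostat --quiet sleep 30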
  5. Hello, I am in the process of upgrading my Unraid system to the following setup: Supermicro X12SAE, Intel Xeon 1290P, 4 x 32GB ECC RAM. Since I had a running system already and plenty of time to conduct a few tests, I decided to see what kind of impact adding components would have on the power consumption.
     The PSU used for the following is a Seasonic X-400 Fanless GOLD. A single 120mm fan is attached to cool the CPU. Mouse, keyboard and USB flash drive attached, monitor attached via DisplayPort. I am booting off a Ubuntu Live USB and letting it sit idle in the console. RAM is Kingston Premier KSM32ED8/32HC ECC UDIMM.
     1 x RAM stick:
       ~15.0W without ethernet cable in
       ~15.8W with ethernet cable in
       ~14.0W with ethernet cable in and powertop --auto-tune
       ~11.6W with powertop --auto-tune and everything removed (USB flash drive, ethernet cable, keyboard, mouse and DisplayPort monitor)
     2 x RAM sticks:
       ~17.4W with ethernet cable in
       ~14.8W with ethernet cable in and powertop --auto-tune
     3 x RAM sticks:
       ~17.4W with ethernet cable in
       ~14.8W with ethernet cable in and powertop --auto-tune
     4 x RAM sticks:
       ~17.8W with ethernet cable in
       ~15.3W with ethernet cable in and powertop --auto-tune
     According to powertop, all the above scenarios are able to reach C8 on the CPU package.
     4 x RAM sticks + JMB585 PCIe controller:
       ~19.5W with ethernet cable in
       ~17.8W with ethernet cable in and powertop --auto-tune
     With the JMB585 the CPU package is only able to reach C3. lspci -vvv confirms that ASPM is not supported on that card, so that makes sense.
     So a few questions. Why does adding RAM make so little impact on consumption? Is it because it needs some activity to actually consume power? Secondly, I was under the impression that only being able to reach C3 on the CPU package would have a bigger impact on power consumption. If what I've measured is true, it barely makes any sense to specifically go for PCIe cards with ASPM support in order to save power (9500-16i/9600-24i vs 9305-24i). The measurements are a bit difficult, as the Gold power supply is being loaded at something like 5%, making the efficiency drop; this may also be why I am not seeing a lot of change when adding, for example, RAM sticks.
     I also tried limiting the CPU package to C3 in the BIOS without adding PCIe cards, with 4 x RAM sticks:
       ~20W with ethernet cable in
       ~18.0W with ethernet cable in and powertop --auto-tune
     Confirmed that the CPU package does not go below C3. It seems barely worth the effort if this is correct.
     Update 21-04-2024: some more measurements. All of these are with 4 RAM sticks, ethernet cable in and powertop --auto-tune:
       Intel X710-DA2, no cable attached: CPU PCIe slot 19.1W, C7 confirmed; PCH slot 18.4W, C8 confirmed
       LSI 9306-24i, no cables attached: CPU PCIe slot 32.7W, C3 confirmed; PCH slot 31.7W, C8 confirmed
       LSI 9306-24i (PCH) + X710-DA2 (CPU), no cables attached: 34.7W, C7 confirmed
       1 x Lexar NM790 (M.2 slots are attached to the PCH): 15.9W, C8 confirmed
       2 x Lexar NM790 (M.2 slots are attached to the PCH): 15.9W, C8 confirmed; lspci confirms the NVMe drives are present
       LSI 9306-24i (PCH) + X710-DA2 (CPU), no cables attached, + 2 x NM790 (PCH M.2 slots): 34.7W, C7 confirmed
     So, as long as I stick to the PCH PCIe slots, it does not matter whether the PCIe device supports ASPM.
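     For reference, the ASPM check mentioned above boils down to something like this (run as root; output formatting varies a bit between lspci versions):

        # interleave device names with their ASPM capability/status lines
        lspci -vvv | grep -E '^[0-9a-f]|ASPM'
        # current kernel ASPM policy (default/performance/powersave/powersupersave)
        cat /sys/module/pcie_aspm/parameters/policy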
  6. I have 3 x X20 20TB, they have been running for about 6 months and have LCC ranging between 600 and 800. I would assume your 10TBs may be an older generation? It looks like they made it less aggressive on X20. I haven't tampered with anything other than spinning disks down after an hour of inactivity, but two of them are my parity drives and they are in the same range as the one that's just a member of the array
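     For comparison, the value in question is SMART attribute 193; a quick way to pull it, with /dev/sdX as a placeholder for the drive in question:

        # Load_Cycle_Count (193) next to Power_On_Hours (9) for context
        smartctl -A /dev/sdX | grep -E 'Load_Cycle_Count|Power_On_Hours'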
  7. Thank you for investigating, this certainly helps me pick the correct hardware. For ASPM support I was considering a carrier board with a PCIe packet switch and 4 x ASM1166 controllers, but I think the sheer number of connectors and components makes that setup a bit janky and error-prone. With your feedback I will definitely put the 9600-24i into consideration. I am currently awaiting the new hardware and will conduct some experiments on how big an impact ASPM support makes on the total power consumption, and then make a decision.
  8. Thank you for the clarification on SATA 1, 2 and SAS 1, I did not know that. I don't think it will be an issue since all my drives are SATA-3. My main concern now is ASPM support, so I hope someone can confirm/deny that.
  9. Hello, I am considering a 9600-24i for Unraid. I am going to use it purely for mechanical SATA drives. Are there any quirks or issues I should be aware of? Also, is anyone able to confirm that it supports ASPM and will allow the CPU PKG to go beyond C3 state?
  10. I found the 4TB QVO on sale and ended up purchasing two of them. I put them in RAIDZ-0 (why bother with parity for lancache?). Initial tests show they can saturate a 1Gbit link when content is in the cache. Within a month I will upgrade to 10Gbit LAN for Unraid and 2.5Gbit for the clients, so it will be interesting to see how it holds up then. I will write a follow-up if it begins to behave in an unexpected way.
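     For anyone wanting to replicate it, a plain two-disk stripe with no parity boils down to something like this (pool name and device paths are placeholders; the recordsize is only a guess at a sensible value for large cached downloads):

        # plain two-vdev stripe, 4K-aligned; no redundancy by design
        zpool create -o ashift=12 lancache /dev/sdX /dev/sdY
        # large records and no atime updates suit mostly-sequential cache files
        zfs set recordsize=1M lancache
        zfs set atime=off lancache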
  11. Hello, What are your thoughts on a Samsung QVO 870 8TB for lancache use? I need something to use as storage for lancache, and in the 8TB segment they seem to be the cheapest drive by far. The reason I believe it would be a decent drive for lancache is that once the cache is prefilled, it would only be the occasional patches and updates, and therefore would not be affected too much by the performance decrease once the write cache is filled.
  12. I edited the dockerfile to look for the neolink.toml file in a subdirectory rather than /etc/neolink.toml. I don't think this is a good way of doing it, since it will no longer follow the thirtythreeforty repository. Anyway, if you'd like, you can simply change the repository to leetdonkey/neolink, then change the neolink_config path configuration, then put neolink.toml in /etc/mnt/user/appdata/neolink/ and start the docker. Note that the container uses the name neolink.toml instead of config.toml. If there is a way of linking directly to a file instead of a directory, I am not aware of it, but I must confess that I am very new at this docker stuff.
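     Outside of the Unraid template, the equivalent plain docker command would be along these lines (the container-side path and RTSP port here are illustrative assumptions, not necessarily what the image actually uses):

        # bind-mount the appdata directory that holds neolink.toml into the container
        docker run -d --name neolink \
          -p 8554:8554 \
          -v /mnt/user/appdata/neolink:/etc/neolink \
          leetdonkey/neolink
        # plain docker can also bind-mount a single file, e.g.
        #   -v /mnt/user/appdata/neolink/neolink.toml:/etc/neolink.toml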
  13. Hello, I hope it is alright to post questions about Neolink here, it wasn't specifically mentioned in the topic. Anyway, I've tried to install Neolink, but I run into a small problem. Unraid 6.9.1 seems to insist that the variable 'neolink_config:' is a directory, but Neolink expects it to be a file:
       [2021-04-11T09:13:55Z INFO neolink] Neolink 0.3.0 (unknown commit) release
       Error: IoError(Os { code: 21, kind: Other, message: "Is a directory" })
     I've tried deleting the directory and making it a config.toml file, but this causes the docker to throw an error "Execution error: bad parameter" and not start at all. I'm a bit at a loss on how to proceed from here, and any help would be greatly appreciated.
  14. I decided to give it a try, and it seems to be working. The controller along with the arrays was detected in Unraid 6.9.1 After installing unassigned devices plugin and unassigned devices plus, I was able to mount the arrays. You can even share them via SMB and access them from a windows PC.
  15. Hello, is it possible to use an Adaptec 52445 controller with 2 RAID 6 arrays in Unraid? It currently has an NTFS partition on each, and I was hoping to copy the contents to a new Unraid array. If the controller is supported, I would assume it is just a matter of using unassigned devices to mount the arrays and then copying the contents. Another option would be to install a Windows VM, pass the controller through to the VM and copy the contents from there. The machine in question is currently running Windows with a few network shares, and I would prefer to migrate to Unraid without building an additional machine just to copy the contents. Any thoughts on this?