koaly

Members
  • Posts: 29
  • Joined
Everything posted by koaly

  1. I'll share my experience. I also have a 5000-series APU, but I went for a PRO 5750GE (35 W TDP), which supports ECC RAM and runs very well with 128 GB (4x64 GB). I also had an ASUS TUF Gaming X570-Plus, but recently upgraded to an ASUS ProArt X570 for its integrated 10 GbE and three PCIe x16 slots, sacrificing 2 SATA ports for it. Only the third x16 slot runs at PCIe 4.0, while the first two are Gen 3; this is an APU limitation. I will also put a Hyper card in there and see how it works. That said, this board is also oversubscribed, as the third M.2 slot deactivates a couple of SATA ports. I saw that ASRock has now released its Rack version for AM5 (ASRock Rack B650D4U-2L2T/BCM), but it is limited in everything: only 4x SATA 6Gb/s, 1x PCIe x16, 1x PCIe x4 and 1x PCIe x1. So I'd rather keep my X570 setup for now, and once Unraid fully supports the AM5 lineup I would go for an ASUS ProArt X670E. Additional drives can be attached to an LSI 9308-8i card if needed in the future, and the NVMe drives will stay on the ASUS Hyper card. I am not sure whether the PCIe lanes will be enough for everything, but so far they are not a bottleneck. A Ryzen 5700 or 5900 (only 65 W TDP) with a cheap discrete GPU might also be an option.
  2. Hello everyone, I need to extend my array and have decided to move up to 20 TB drives. I will start by replacing a parity drive and am budgeting around 300 euros per drive. Within this budget there are really only two options. The first is the WD Elements with a 2-year warranty from WD, which after shucking turns out to be a re-labeled, re-flashed Ultrastar HC570 at 7200 RPM. The second is the Toshiba MG10ACA20TE with a 5-year warranty, but only through the retailer or e-tailer; direct warranty claims from the consumer side are impossible. I have many shucked WD drives and they work fine, but that is 2 years of warranty versus 5 years from Toshiba, with potential RMA difficulties. What would your arguments be for either variant? Thanks in advance.
  3. Hi everyone, I have 4x 14 TB XFS HDDs in the array, an unprotected Btrfs NVMe drive for the system and appdata, and a single 480 GB Btrfs SATA SSD cache drive. I watched Ed's videos and converted my cache SSD to ZFS; so far no problems. I am thinking about converting the array drives to single-drive ZFS, on the understanding that it may improve data compression. I am not concerned about the additional workload for the Ryzen PRO 5750GE CPU. The process is well described in the videos. The question is whether I should also convert the parity drive to ZFS, or does that not make any sense? Thanks in advance for any explanations.
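     For context on the compression point above: if the array drives do become single-drive ZFS, the setting and the achieved ratio can be checked from a terminal. A minimal sketch only; the pool name disk1 is an assumption (Unraid's naming may differ, check with `zpool list`):

```shell
# Assumption: the converted array disk appears as a ZFS pool named "disk1".
zfs set compression=lz4 disk1            # enable lightweight lz4 compression
zfs get compression,compressratio disk1  # verify the setting and the achieved ratio
```

     These are administrative commands against a live pool, so treat them as a sketch rather than a recipe.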
  4. Many thanks! Changing to smtp.gmail.com did the trick. It now works behind the reverse proxy and in bridge mode, although in host mode the container does not start at all, showing the error "Address already in use".
  5. Hello guys, I have installed Vaultwarden for the first time behind a SWAG reverse proxy. I can access the server's webUI from the Internet, but I am stuck on the SMTP settings. I set up SMTP with my Gmail app password on Vaultwarden's admin settings page, as per the screenshot. Then I get the error "Error sending SMTP test email SMTP error: Connection error: Cannot assign requested address (os error 99)". I googled this error and tried many things, such as another app password, "force_tls" with port 465, "Plain" and "Login", and so on. I also tried another mail service with the same result. Could you please help me find the root cause of the SMTP error?
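     For reference, the admin-page fields map to Vaultwarden environment variables, which can also be set on the container directly. A sketch of the Gmail variant; the container name, username and password are obvious placeholders:

```shell
# Sketch: Vaultwarden SMTP via Docker env vars (Gmail, STARTTLS on port 587).
# Username/password/from address are placeholders, not real values.
docker run -d --name vaultwarden \
  -e SMTP_HOST=smtp.gmail.com \
  -e SMTP_PORT=587 \
  -e SMTP_SECURITY=starttls \
  -e SMTP_USERNAME=you@gmail.com \
  -e SMTP_PASSWORD='your-gmail-app-password' \
  -e SMTP_FROM=you@gmail.com \
  vaultwarden/server
```

     This is a deployment/config fragment, shown only to make the tried settings explicit.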
  6. Thanks for the reply. I was just wondering what these lines in the log might mean. I switch the server off daily, as there are no overnight tasks and it saves some electricity, which has become three times more expensive in Europe. As for the benchmark, it did not show any significant difference between the 4 drives. I will run the parity check next time and post an update with the diagnostics if things do not improve.
  7. Hi everyone! I have a question about a hard drive and parity-check speed; maybe the issues are independent, I cannot say. The array is 4x 14 TB WD White drives from WD Elements: 3x WD140EMFZ-11A0WA0 and 1x WD140EDFZ-11A0VA0. They have been working for a little over 2 years without problems. I noticed that the last 2 monthly parity checks took about 4 hours longer, since the average speed fell from 158 MB/s to 136 MB/s:

     Action        Date                             Size   Duration                     Speed       Status  Errors
     Parity-Check  2023-03-06, 16:34:51 (Monday)    14 TB  1 day, 4 hr, 3 min, 57 sec   138.6 MB/s  OK      0
     Parity-Check  2023-02-03, 02:47:57 (Friday)    14 TB  1 day, 4 hr, 30 min, 12 sec  136.4 MB/s  OK      0
     Parity-Check  2023-01-07, 07:53:01 (Saturday)  14 TB  1 day, 30 min, 26 sec        158.7 MB/s  OK      0
     Parity-Check  2022-12-05, 09:06:33 (Monday)    14 TB  1 day, 25 min, 40 sec        159.2 MB/s  OK      0

     I checked the logs of the disks; the WD140EDFZ showed the following entry twice: "kernel: ata7.00: supports DRM functions and may not be fully accessible". The WD140EMFZ drives do not have such lines in the log. The hardware is all the same, and I did not change anything in the config either. The BIOS was updated in early December, before the last fast parity check. I tried to google it but did not find an understandable answer. Could you advise what this may mean and how to deal with it? Could it be a SATA cable issue, leading to both the slower speed and the errors in the log?
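     As a sanity check, the reported slowdown and the extra runtime in the history are consistent with each other:

```shell
# Slowdown between the 2023-01-07 and 2023-02-03 checks (speeds in MB/s)
awk 'BEGIN { printf "%.1f%% slower\n", (158.7-136.4)/158.7*100 }'      # -> 14.1% slower
# Extra time for a 14 TB (14e6 MB) pass at the lower speed, in hours
awk 'BEGIN { printf "%.1f hours\n", (14e6/136.4 - 14e6/158.7)/3600 }'  # -> 4.0 hours
```

     So the "4 hours longer" figure follows directly from the ~14% speed drop; nothing else in the numbers looks inconsistent.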
  8. The device name did not change, and it worked well after the upgrade from 6.9 to 6.10-rc7. I do not remember whether the ID changed, but it could have. In any case, I did not wipe the drive or anything. On the first reboot after the upgrade from RC to stable, Unraid failed to load and I had no chance to see what happened, because even HDMI showed nothing.
  9. @JorgeB Many thanks for the help! It works now; the output is shown in the screenshot. What happened to the Btrfs on the NVMe drive during the OS upgrade? The cache drive is also Btrfs, but it somehow survived.
  10. Hi guys, I upgraded from 6.10-rc7 to 6.10.0 and immediately found the NVMe drive holding all my Docker containers unmountable. The array started, but this drive did not; it says: "Unmountable: wrong or no file system". I tried repairing the Btrfs in maintenance mode using the GUI, but that does not work either, reporting that it cannot open the file system. Please advise whether there is a chance to get it working again; I would rather not format the drive and reinstall all the Dockers. Screenshots and diagnostics are attached. Thanks in advance! koaly-tower-diagnostics-20220519-1835.zip
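     For anyone landing here with the same "Unmountable: wrong or no file system" error: before formatting, these are the read-only recovery steps worth trying from a terminal. A sketch only; the device path /dev/nvme0n1p1 is an assumption, check yours with `lsblk`:

```shell
# Read-only consistency check first (never start with --repair on btrfs)
btrfs check --readonly /dev/nvme0n1p1
# Try a read-only mount using the backup root tree
mkdir -p /mnt/recover
mount -o ro,usebackuproot /dev/nvme0n1p1 /mnt/recover
# If mounting fails entirely, btrfs restore can still copy files off the device
btrfs restore /dev/nvme0n1p1 /mnt/recover-target
```

     These commands need the actual failed device, so treat them as a checklist rather than a guaranteed fix.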
  11. Hi guys, I found only one solution, and it is an ugly one in any case. I made a manual backup of all user data and reinstalled both Nextcloud and MariaDB with a full wipe of appdata. After a clean install I restored the user data and it started working. Updates still do not work, which is driving me crazy. Yesterday, updating from 23.0.0.0 to 23.0.0.2, I got multiple errors the same way, at various steps from 3 to 6. I did not want to waste time on further attempts and reinstalled again. No support and no relevant information seem to be available anywhere.
  12. Hi guys, I got an error while updating Nextcloud from 20.0.7 to 20.0.14 for some reason. I checked the updater log and it says I am stuck at step 6. Restarting does not work. I tried setting maintenance mode to "false" in the config file and also deleted the ".step" file in the updater folder, but it does not help; after that I have to change the update secret in the config file, and the process repeats with the same error. Sometimes the detailed response in the red field shows "Step 6 is currently in process. Please reload this page later." I am stuck. Could you please advise what else I can do? I'd prefer to avoid reinstalling, so that the user data is not deleted. Thanks in advance.
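     For the record, the equivalent manual steps from the command line, sketched for the linuxserver.io container (the container name nextcloud and the /config path are assumptions, adjust to your setup):

```shell
# Disable maintenance mode via occ instead of editing config.php by hand
docker exec -it nextcloud occ maintenance:mode --off
# Remove the updater's step marker so it no longer resumes at step 6
docker exec -it nextcloud sh -c 'rm -f /config/www/nextcloud/data/updater-*/.step'
# Re-run the updater from the CLI, which gives clearer errors than the web UI
docker exec -it nextcloud updater.phar
```

     Running the updater from the CLI is generally more reliable than the web updater when a step gets stuck.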
  13. I got the final response from ASRock on the non-working WOL from PCIe devices. They confirmed that the X570M Pro4 does not support it, despite the specification for the motherboard on their website. Here is exactly what I received: "got final feedback from headquarter: After checking with BIOS and HW RD, we found it is a compatibility issue between X570M Pro4 and some specific LAN card. We also verify AMD reference board and it also has the same problem. If User would like use the WOL function, please help to ask user using the onboard i211 LAN." PS: WOL works fine on Gigabyte X570 motherboards; I checked it with the Master edition.
  14. There is nothing else for WOL in the BIOS settings of the X570M Pro4. Actually, only Intel chips provide an additional section in the BIOS settings; I have this option showing N/A on the X550, just like you showed. But Intel's 10 GbE NICs have no WOL at all. Only if I activate "Boot from LAN" do I see additional DHCP options for the NICs, but that had no influence on WOL functionality. My new ASUS board is already on the way and I will check WOL on it. Another thought is to have the ASRock board replaced with the same model through ASRock support.
  15. Thanks for the link, although I would not solder anything on the motherboard, for the sake of keeping a valid warranty. The BIOS shows an option for WOL from PCIe, but it does not work. ASRock support insisted in an email response that when they tested PCIe NICs on this board, WOL worked fine. Maybe I will try another sample of this model, just to avoid unnecessary changes.
  16. Thank you for the comment. I have tested WOL on four different 10 GbE NICs in my Unraid server in total: an Intel X550-T2 and 3 versions of the AQC107 (TP-Link, QNAP and Synology). None of them woke the server up, despite WOL being indicated in the specs. TP-Link's support service turned out to be very responsive and re-confirmed that WOL should be working. I then double-checked the NICs on another motherboard (Gigabyte X570 Aorus Master), and WOL does work fine with the TP-Link TX401 and the QNAP QXG-10G1T. It seems the Synology E10G18-T1 does not have WOL, and the Intel X550 has no working WOL, as confirmed by Intel. So my problem lies in a defective or incompatible ASRock X570M Pro4 motherboard. Now I am faced with replacing the motherboard, and it seems there is no similarly cheap mATX alternative on X570. The only non-ASRock option with 8 SATA ports I found is the ASUS TUF X570-Plus, but that is an ATX board and would force a change from the Fractal Design Node 804 case to an ATX one. I decided to try a build in a Fractal Design Meshify 2 (6 HDD trays as standard, optionally up to 14). By the way, TP-Link's implementation of the AQC107 is by far the best of those tested; both the Synology and QNAP cards have poor (smaller/thinner) heatsinks with 2 plastic pin mounts and completely dry thermal pads underneath. I hope these findings help somebody.
  17. Sorry for the misunderstanding. Unraid does not have Wi-Fi capability, as I understand it; Wi-Fi would have to be implemented in a VM, right? Then the choice of equipment depends on the motherboard implementation and on drivers for the particular OS in the VM, right? This might be the only way for the topic starter, but I have no experience with VMs on Unraid yet.
  18. Thanks for the hint. Unfortunately, I have only one physical LAN cable coming into this area. Should the IP assignment be blocked at the router level, then, right?
  19. Hi everyone, I have a home server on Unraid based on an ASRock X570M Pro4 and a Ryzen 7 PRO 5750GE, which work fine in combination. The mainboard has only 1 GbE LAN and I want to add a 10 GbE NIC with RJ45 and keep only one connection to the router, meaning the on-board LAN will not be in use. My requirement is a single LAN connection with the possibility of WOL over the NIC. It works very well with the on-board LAN from the S5 state with ACPI enabled, but I could not get it working with either an Intel X550 or AQC107 cards. Intel confirmed that the chip itself has WOL capability, but that it is not implemented in the NICs. Then I tried several AQC107 NICs. ASUS confirmed that WOL is not implemented on the XG-C100C. I then asked TP-Link about WOL on the TX401 and received an email saying that WOL should work; nevertheless, the TX401 shows no sign of WOL working at all and is about to go back to the shop. I know there are also AQC107 implementations from both QNAP and Synology and I will try them as well. Did anybody get WOL working on the AQC107 at all? The chip specs say it is capable, but I have not found any working implementation yet. Thanks in advance.
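     A quick way to see what the driver actually reports before blaming the card; a sketch assuming the NIC shows up as eth0 (check the real name with `ip link`):

```shell
# "Supports Wake-on: g" means magic-packet wake should be possible;
# the current "Wake-on:" line shows whether it is actually enabled.
ethtool eth0 | grep -i wake-on
# Enable magic-packet WOL for this boot (not persistent across reboots)
ethtool -s eth0 wol g
```

     If the driver reports no "g" support at all, no BIOS setting will make WOL work on that card.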
  20. As a hint, I changed the SB fan to a Noctua A4x20 FLX: I simply removed the useless heatsink and the built-in fan and attached the Noctua fan with double-sided tape. You also need to make an adapter for the fan, but that was easy using a cable from an old GPU. The Noctua is absolutely silent. I use it with a Ryzen 5 PRO 3400GE with ECC DDR4. I recently had to return the motherboard, because all of a sudden it started giving 5 short beeps during POST, though the CPU works just fine. Waiting for the replacement.
  21. Hi, if the topic is still valid: I had the same question 6 months ago and came to the conclusion that for a reasonable price there was only one option, the ASRock X570M Pro4. It has 8 SATA ports and 2 NVMe slots, and with the latest BIOS update you can also have the SB fan stopped most of the time. It has been working in my NAS for about 5 months now with a Ryzen 5 PRO 3400GE and 4x16 GB of ECC RAM. X570/X470 was the only choice for me, as B550 does not support my CPU, and with X570 mATX the choice is very limited; there are very few boards to choose from. The alternatives were a board from Biostar, which is limited in SATA, and ASRock Rack, which is 3 times more expensive. Other chipsets available in mATX, like B450, do not fit a NAS, because even if the board has 6 SATA ports, 2 of them switch off as soon as you install an NVMe drive.
  22. Unraid boots in both normal and GUI mode. In GUI mode the display shows the loading process until the point where the login page should appear; then the screen goes blank with a blinking underscore in the top-left corner. I need GUI mode to manage Unraid locally. CSM is not supported by the new Renoir APUs; this is what ASRock wrote to me in response to my support request about disabled CSM.