GregPS

Members
  • Posts: 11
  • Joined
  • Last visited

  1. The PC has PCIe 3.0 x16 and the NAS has PCIe 4.0/3.0 x16, so I believe both should be fine for those kinds of transfer rates (a back-of-envelope bandwidth check is sketched after this list).
  2. Since installing the 10 GbE network I have only been able to achieve 3.16 Gb/s (iperf3). I have searched through the forum and tried jumbo frames, disabling flow control, Direct I/O, etc., but there was absolutely no change. I adjusted these settings on both sides with no change. Not sure what to try next and would appreciate some help (an end-to-end MTU test is sketched after this list).
     NAS: CPU - AMD Ryzen 5 3400G; Motherboard - Asus ROG Strix X570-E; RAM - Corsair Vengeance LPX DDR4 2133 MHz, 16 GB; PSU - Corsair HX-850; Storage - WD Red+ 10 TB drives x 6 (cache: Samsung 970 EVO Plus NVMe M.2 2 TB x 2); NIC - X520-10G-1S-X8 (SFP+), connected with fiber optic
     PC: Windows 10; NIC - X520-10G-1S-X8 (SFP+), connected with fiber optic
     Switch: QNAP QSW-M408S
  3. I had originally built and tested my NAS machine and thought I would upgrade to a 10 Gb/s network. I had zero issues with anything in terms of configuration and setup; everything was working. I decided to test my connection speed with iperf3, realized it was only around 3-3.5 Gb/s, and started researching reasons why. I found Spaceinvaderone's video on MTU size and was working on increasing it on the NAS (no problem) and my PC (no problem), but was struggling with my QNAP switch because for whatever reason I could not access its interface to change the settings. While I was messing around with all these things I somehow lost access to my NAS. It shows up in the router's device list, but under the MAC address of the NIC rather than the NAS name like it used to; the same goes for my PC, which also shows as the MAC address of its NIC and not its name. If I attempt to access the NAS through Chrome by IP address it says the site cannot be reached, yet it pings. I'm very confused because I don't think I did anything that should have resulted in this. Any help would be appreciated, first with the connection issue and second with the speed; thanks in advance (a simple reachability check is also sketched after this list).
     NAS: CPU - AMD Ryzen 5 3400G; Motherboard - Asus ROG Strix X570-E; RAM - Corsair Vengeance LPX DDR4 2133 MHz, 16 GB; PSU - Corsair HX-850; Storage - WD Red+ 10 TB drives x 6 (cache: Samsung 970 EVO Plus NVMe M.2 2 TB x 2); NIC - X520-10G-1S-X8 (SFP+), connected with fiber optic
     PC: Windows 10; NIC - X520-10G-1S-X8 (SFP+), connected with fiber optic
     Switch: QNAP QSW-M408S
     Edit: Not sure if this helps, but the NAS shows in Windows Explorer under Network; when I click on it the folder is empty, even though I have made a few shares. Currently no data is on the array.
     Edit: Apparently it is accessible from another PC on the network; that PC doesn't have a 10 Gb NIC.
     Edit: Apparently it is also accessible on the original PC through the 1 Gb NIC; the NAS is still using its 10 Gb NIC, so I'm thinking the problem is somewhere in the 10 Gb NIC configuration on the PC.
     Edit: Fixed the configuration issue on my PC's NIC. The NAS is visible, but I still have only ~3 Gb/s transfer rates. I cannot figure out how to change to jumbo frames (MTU=9000) on the switch; hoping this is the culprit.
  4. Update: got the drive replaced; all seems to be fine with the new drive. Time to build the array. Thank you very much for the support, it is really appreciated.
  5. I do have a docking station I could try it out on, but it would be just as easy to take it back to the retail store and get a new one. I can ask Asus about the FF code and whether it is something to worry about.
  6. I tried different cables and power supply connections, and I swapped with other visible drives; no matter what the combination, all the visible drives remained visible and this one drive did not. I had been thinking it was a bad drive, or unconfigured in some way or another, but when I saw it in Unraid I wasn't so sure. So my next course of action should likely be to stop worrying about the FF code, exchange the drive, and start configuring my array.
  7. Purchased all 6 from a reputable location, all at the same time. All had the same packaging and looked to be the same.
  8. I found the "fault found" reference after a quick Google search; the more I look into it, the more I am starting to doubt it as well. The Asus FAQ says the code is reserved for future error messages. So does anybody know for sure that this is nothing to worry about?
  9. Before I even attempted to run Unraid I tried different SATA cables, power supplies, and ports, both new ones and ones swapped from other drives; the drive was never visible in the BIOS. Seeing whether it was visible in Unraid was just a curiosity; when it was, I was surprised, then disappointed when it disappeared, and then really disappointed when I saw the FF code on the motherboard. I am a relatively techy guy and have built several computers over the years, but I have never heard of the 3.3 V issue. What could it mean for this situation, and what do you make of the FF code? Are they, or could they be, related?
  10. I had turned the machine off before I read this response. I booted it up this morning and the exact same thing happened: the drive was showing until I tried to assign it, then it disappeared again. The M.2 doesn't have a temperature warning this morning (the M.2s are at 30C and 34C; all other drives are at 24C). Diagnostics are attached; I appreciate any support in advance. sunas-diagnostics-20200903-0722.zip
  11. CPU - AMD Ryzen 5 3400G; Motherboard - Asus ROG Strix X570-E; RAM - Corsair Vengeance LPX DDR4 2133 MHz, 16 GB; PSU - Corsair HX-850; Storage - WD Red+ 10 TB drives x 6 (cache: Samsung 970 EVO Plus NVMe M.2 2 TB x 2)
      I have two questions.
      1. In the BIOS (2606 x64) I see all but one of my storage hard drives. I played around with different SATA cables, ports, and power connections, but this one particular drive is not seen in the BIOS; however, when I load Unraid it is visible to Unraid (the array is not set up yet). I am curious whether this is normal or something to even worry about.
      2. The computer boots up just fine with no signs of issues. When I am in the BIOS there is an A9 code, which I believe is not an issue in the BIOS, but when I loaded Unraid I noticed an FF code prior to configuring the array. Is this normal at this stage? I believe it stands for "fault found", but wouldn't that fault be found even at the BIOS stage? Is it possible that it has something to do with the unidentified drive? Or does it have something to do with the fact that my PSU did not come with 24x8x4 power connections, only 24x8? Any insight would be appreciated, as this is my first build and configuration (a device-listing sketch follows after this list).
      *** Update: when the drive that wasn't visible in the BIOS was selected in Unraid, it disappeared and was no longer visible. Also, one of the M.2 drives had a high-temperature alarm.
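
For the PCIe question in posts 1 and 2, here is a back-of-envelope sketch of why neither slot should be the bottleneck for 10 GbE. It assumes the X520-class card negotiates at least PCIe 2.0 x8 (the exact generation this particular card links at is not confirmed here); the figures are theoretical per-direction ceilings, not measurements.

```python
# Rough check: usable PCIe bandwidth per direction vs. what 10 GbE can ever need.
# Gen 2 uses 8b/10b encoding, Gen 3 uses 128b/130b.
def pcie_gbytes_per_s(gen: int, lanes: int) -> float:
    gt_per_s = {2: 5.0, 3: 8.0}[gen]            # transfers per second per lane
    encoding = {2: 8 / 10, 3: 128 / 130}[gen]   # encoding efficiency
    return gt_per_s * encoding / 8 * lanes      # GT/s -> GB/s per lane, times lanes

ten_gbe_ceiling = 10 / 8  # 10 Gb/s line rate is at most ~1.25 GB/s of payload

for gen in (2, 3):
    bw = pcie_gbytes_per_s(gen, lanes=8)
    print(f"PCIe {gen}.0 x8: ~{bw:.2f} GB/s vs 10 GbE ceiling of ~{ten_gbe_ceiling:.2f} GB/s")
```

Even the pessimistic PCIe 2.0 x8 case gives roughly 4 GB/s per direction, so a ~3 Gb/s iperf3 result points at the network path or TCP tuning rather than the slots.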
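For the jumbo-frame attempts in posts 2 and 3, a minimal sketch, run from the Unraid (Linux) side, that confirms whether 9000-byte frames actually survive the whole path to the PC. The interface name eth0 and the peer address 10.0.0.2 are placeholders for this setup.

```python
import subprocess

PEER = "10.0.0.2"      # placeholder: address of the Windows PC's 10 GbE NIC
PAYLOAD = 9000 - 28    # 9000-byte MTU minus 20-byte IP and 8-byte ICMP headers

# Show the MTU currently configured on the interface.
subprocess.run(["ip", "link", "show", "eth0"], check=False)

# Ping with the "don't fragment" flag set: if any hop (PC NIC, either switch
# port) is still at MTU 1500, this reports an error instead of a reply.
result = subprocess.run(
    ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", PEER],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```

On the Windows side the equivalent test is `ping -f -l 8972 <NAS address>`. It is also worth retesting with parallel streams (`iperf3 -c <NAS> -P 4`), since a single TCP stream can sit well below line rate even when the path itself is fine.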
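For the "it pings but the web UI cannot be reached" symptom in post 3: ping only proves the host answers ICMP, while the management interface lives on TCP 80/443. A small sketch that tests those ports directly, with 192.168.1.50 standing in for the NAS address.

```python
import socket

NAS = "192.168.1.50"  # placeholder for the NAS IP address

for port in (80, 443):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        try:
            s.connect((NAS, port))
            print(f"port {port}: open")
        except OSError as exc:
            print(f"port {port}: {exc}")
```

If the ports answer over the 1 Gb path but not over the 10 Gb one, that points at the 10 Gb NIC's configuration on the PC, which is where the fix eventually landed.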
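For the drive that is invisible in the BIOS but appears (and then vanishes) in Unraid, from posts 9-11: a quick sketch of what to pull from the Unraid console to separate a link or power dropout from a drive that was never detected. /dev/sdf is a placeholder for whichever letter the suspect drive receives.

```python
import subprocess

# Every block device the kernel currently sees, with model and serial numbers.
subprocess.run(["lsblk", "-o", "NAME,SIZE,MODEL,SERIAL,STATE"], check=False)

# SMART identity and overall health of the suspect drive (needs smartmontools).
subprocess.run(["smartctl", "-i", "/dev/sdf"], check=False)
subprocess.run(["smartctl", "-H", "/dev/sdf"], check=False)

# If the drive drops out when it is assigned, the kernel log usually records a
# SATA link reset or the device going offline at that moment.
subprocess.run("dmesg | grep -iE 'ata|sdf' | tail -n 40", shell=True, check=False)
```

A drive that repeatedly falls off the bus under these checks is consistent with the eventual outcome in post 4: exchanging it for a new one.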