stomp

Everything posted by stomp

  1. I have received my Intel X540-T1 and did some testing. The Intel NIC was surprisingly recognized as 1GbE by unRAID, but transfer was stable. With the same cable, the Aquantia was recognized as 5GbE and transfer was not stable, with frequent loss of connection. Turns out the cable was the problem, even though it should handle 10GbE speeds (Cat 6, 15 m, possibly damaged). With another cable, both NICs were recognized as 10GbE. With the Intel NIC, I got a stable 3 Gb/s at most. With the Aquantia, I got a stable 10 Gb/s (so far). Both were tested with an MTU of 9000. I will now use the Aquantia and see how it behaves, especially regarding stability. I'm not sure how temperature might affect NIC performance now that the unRAID case has been closed and put back where it belongs. But it seems like the Aquantia is the better NIC with unRAID in my network configuration (an unexpected result). The X540 being a rather old chipset, I'm not sure it is expected to reach full 10GbE speed in all scenarios. That does not solve the issues I had in the past with the onboard Aquantia NIC on my Windows machine (frequent loss of connection). If that happens again, I'll test the Intel NIC in one of my Windows machines. I will probably rewire everything with better cables when I find the courage. Conclusion: hope is not lost for those of us with Aquantia NICs (at least on Linux machines).
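     For reference, the negotiated link speed and the MTU actually in use can be checked from the Unraid console with something like the commands below (the interface name eth0 is just an assumption, yours may differ):
        # report negotiated speed, duplex and link status for the 10GbE interface
        ethtool eth0 | grep -E 'Speed|Duplex|Link detected'
        # confirm the jumbo-frame MTU (9000) was actually applied
        ip link show eth0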
  2. I have the same problem. At best, the link is reported as 5G, but that sometimes changes to 2.5G and I get some heavy link dropouts. I've got two other Aquantia NICs in Windows machines (onboard NICs) and I have issues with all of them. I'm switching to Intel NICs for all my machines. High temperatures as well as poor drivers might explain this behaviour.
  3. It should. If it doesn't, update Unraid first.
  4. I have had the XG-C100C for a while and it works.
  5. I've got 106 TB of usable space, plus 32 TB as double parity. Some time ago I got rid of my Supermicro 5x3 cages and replaced them with 4x3 cages with 12 cm fans (much quieter). I had to reduce the number of array drives, so it took some time to replace the previous drives with larger ones. I'm going for 16 TB drives at the moment (two of them used for parity). The smallest one is 8 TB. I'm planning to replace current drives with larger ones when I need more space. I don't plan any further case upgrade to increase the number of drives again.
  6. Thanks. I ran TRIM and just did one test. It seems like that fixed my problem. I thought that by 2019 TRIM was standard in everything; seems like I was wrong :-)
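     For reference, TRIM can be run manually from the console with something like the command below (/mnt/cache is the usual Unraid cache mount point, adjust if yours differs):
        # discard unused blocks on the cache filesystem and report how much was trimmed
        fstrim -v /mnt/cache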
  7. Hello, I recently changed my cache drive, going from a SATA SSD to an M.2 NVMe drive. The goal was to enjoy better write speeds on my 10GbE network. At first everything went fine: speed increased from around 500 MB/s to 750 MB/s and was very stable. At that point I thought my CPU was the bottleneck, with high CPU usage during writes (90% on average). For a couple of weeks now, write speed has drastically degraded. It usually starts at 750 MB/s but about 30 seconds later drops to around 120 MB/s and stays that way. Sometimes writing even stops entirely, then resumes. CPU usage is unaffected, staying close to 90%. I cannot remember having made any software or hardware change in the meantime. The log looks okay to me and I don't know how to monitor for anything that could slow down the process. I usually hear something in the server when the speed goes down, probably some HDD doing something, but I can't say what's happening. I already tried restarting the server, disabling Docker and uninstalling every plugin. You'll find the diagnostics attached. Thanks in advance. tower-diagnostics-20190128-2212.zip EDIT: I also ran DiskSpeed to check the drive speed itself and it varies between 750 and 1000 MB/s across the whole drive (512 GB).
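     In case it helps anyone with the same symptom, per-device throughput during a transfer can be watched with something like the command below (nvme0n1 is an assumed device name, and iostat requires the sysstat tools to be available):
        # extended per-device I/O statistics in MB, refreshed every 2 seconds
        iostat -xm 2 nvme0n1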
  8. Now that all the trouble is behind me, I'm a little less angry of course :-) I still lost about 8 TB of data. Not sure how that was possible. Temperature is a real problem with my Supermicro 5x3 cages. I would not purchase them again, at least not with the case I have now. I'm now thinking more about getting some Vertex cages. Still, I think there's room for improvement in unRAID. As I said, I think it should be able to repair everything by itself, or at least perform some checks in order to confirm that a specific disk is really "faulty". I will build a small FreeNAS server in order to try it out. But there are pros to unRAID that you can't find elsewhere. I have the feeling that my current build is not safe enough to run a 16-disk array, considering the temperature and cable quirks I have encountered.
  9. So I just lost 8 TB of data. And I was running two parity disks. What a mess. It's always the same thing with this crappy software (which I paid for; it's not free). I specifically added a second parity disk because I consistently ran into trouble with unRAID, losing data and sh*t. Well, that wasn't enough. First, it's buggy. It constantly fails. Then it can't correct its own mess. You have to post in the support forum and wait for some kind enough guys (and I know there are some here) to help you fix unRAID's mess. Not that it is always easy for us mere peasants. The fix always involves some very arcane command lines. Then you're back to normal, after days of trouble and data unavailability. I searched the forum for "unmountable" and there are tons of posts about it. So it's not that I'm just unlucky; we all are, I guess. Time to move to a professional piece of software.
  10. Hello, Two days ago, my 5x3 Supermicro cage started to beep because of "overheating" (45 °C). I cleanly stopped the server in order to get rid of the noise, then installed two more fans in the case. Once restarted, the server showed two faulty drives: Parity 1 (sdc) and Disk 13 (sdk). The usual stuff: you just can't move your server an inch, otherwise unRAID becomes your worst nightmare (again, thanks). Anyway, I checked all connections and cables, booted the server again and did the following:
     - Ran SMART tests on Parity 1 and Disk 13: everything looked fine
     - Removed Parity 1 from the array
     - Removed Disk 13 from the array
     - Added Disk 13 back to the array
     - Rebuilt Disk 13
     The rebuild went smoothly. Everything looked fine, at least in maintenance mode. When I start the array normally though, Disk 13's status shows "Unmountable: No file system". Content is not emulated and I'm missing some files. What's next? Attached are the diagnostics. Thanks in advance. tower-diagnostics-20180802-2147.zip
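     For reference, the SMART checks mentioned above can also be run from the console with something like the commands below (sdk is just the example device from this post, double-check the device letter first):
        # start a short SMART self-test on the suspect drive
        smartctl -t short /dev/sdk
        # a few minutes later, review the health attributes and self-test results
        smartctl -a /dev/sdk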
  11. Thanks for your quick reply. I put the original disk back and started the array in maintenance mode. Here is the diagnostic. Tell me if I should do a New Config before the diagnostics; that wasn't clear to me. tower-diagnostics-20180413-1959.zip
  12. Yes, it works. I get around 500 MB/s write speed to the SSD cache disk (write speed limited by the drive itself). It was not easy to set up a 10GbE network though, and I still struggle to understand how everything somehow works. On the PC side, I get strange behavior with one NIC used by IPv4 Usenet servers and one NIC used by IPv6 Usenet servers concurrently. That's one of the strange behaviors of my network at the moment.
  13. Hello, So today I tried replacing one of my disks with a bigger one. I noticed at reboot that something was wrong because the new disk made constant noise/beeping and unRAID was logging errors on this particular disk. I have already experienced several faulty disks with that specific Seagate model, so I was not so surprised: I will buy WD from now on, lesson learned. In the meantime, I tried to put my old smaller disk back in the slot. unRAID got stuck mounting disk1 (not the one I'm trying to replace). I rebooted the server. unRAID got stuck at disk2, and disk1 was showing "not installed" I think. Rebooting after disabling auto-mount: now parity1 is "faulty" in maintenance mode. To sum up, and this already happened last time when I lost data on 2 disks, every time I try to replace one disk with another, something goes wrong. And aside from the faulty new Seagate drive, unRAID does not behave normally, as parity1 is now faulty (and I haven't even touched it). You will find the diagnostics below. What should I do? As I said, unRAID does not behave normally. It already happened once, now twice. Plus, in the past, when adding disks, I also had problems with already installed disks not being recognized anymore (it took a few reboots). Can someone explain to me what is happening to drives that I don't even touch? Thanks. tower-diagnostics-20180413-1931.zip EDIT: I'm currently running xfs_repair -v on the faulty parity1 disk and it already shows "bad superblock", which looks bad because that is exactly what happened last time, when I lost 2 drives...
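     For reference, a filesystem check like this is usually done in maintenance mode against the array's md device, with a dry run first (md1 is just an example device number here):
        # dry run: report filesystem problems without changing anything
        xfs_repair -n /dev/md1
        # only after reviewing the output, run the actual repair
        xfs_repair -v /dev/md1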
  14. I tried both 192.168.1.140 and 192.168.1.151 with no luck. I’ll try the old switch as well as one 1GbE port on the new switch.
  15. Thanks for your feedback. @johnnie.black I produced the previous log file with eth0 as the onboard NIC (it was just to showcase the problem from a log perspective). Here is the log file with eth0 as the 10GbE NIC. Switching NICs between eth0 and eth1 never worked though. @Benson I'm using only one switch (Asus XG-U2008). My PC is connected to the switch using 10GbE port 1. My unRAID build is connected to the switch using 10GbE port 2. I disconnected the cable between the onboard NIC and the switch (both 1GbE ports). I tried the following:
     a) setting eth0 as the 10GbE NIC and disabling eth1
     b) setting eth0 as the 10GbE NIC and changing the subnet mask of eth1 to 255.255.254.0
     c) setting eth0 as the 10GbE NIC, changing the subnet mask of eth1 to 255.255.254.0 and removing the eth1 entry from the routing table
     Nothing worked. You'll find the log file after a). I don't intend to use both NICs at the same time; I'm fine using only the 10GbE NIC. But that doesn't work yet. tower-diagnostics-20171003-2120.zip
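     For reference, the addressing and routing state described above can be checked from the console with commands like the ones below (standard iproute2 tools, nothing Unraid-specific):
        # list the addresses assigned to each interface
        ip addr show
        # show the routing table, to see which interface handles the LAN subnet
        ip route show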
  16. Here is the log file. Something looks wrong indeed. tower-diagnostics-20170930-1734.zip
  17. Doesn't work. I can get an IP, everything seems fine on the router side and on the switch side but I can't reach the server...
  18. It should be compatible, because drivers were added in rc7 and because it is properly recognized in unRAID. I said in the post you mention that the NIC was working, but I was actually using the 1st NIC (the onboard NIC) without noticing. Hence my question: how do I enable the 2nd NIC for everything?
  19. Hello, I installed a second NIC in my server, an Aquantia 10GbE NIC. I'm using the latest RC (with Aquantia drivers) and the interface is recognized as eth1 at 10GbE. I'm also using a 10GbE switch and another 10GbE NIC in my workstation. eth0 and eth1 are configured with static IPs, 192.168.1.140 and 192.168.1.141 respectively (same subnet mask). I want to use the 10GbE NIC as the main NIC. What I have tried so far:
     - Swap eth1 with eth0 in Network settings
     - Disable the onboard NIC (1st NIC) in the BIOS
     - Assign a higher metric to eth0 in Network settings
     - Reboot the router several times while doing the above
     None of these worked. Either the server became unreachable over the network, or I could reach it but only at 1GbE speeds (writes to the cache disk at 110-115 MB/s). Any suggestions? Thanks.
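     Once the 10GbE interface is reachable, the raw link throughput can be verified with iperf3, assuming it is available on both ends (192.168.1.141 is the server address mentioned above):
        # on the server side, start iperf3 in listen mode
        iperf3 -s
        # on the workstation, run a throughput test against the server
        iperf3 -c 192.168.1.141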
  20. I can confirm that it works with the latest RC. I have tested only 1GbE for the moment, waiting for the second NIC for my workstation.
  21. I've ordered one card but I will only be able to give feedback on compatibility, not speed. Speed will be poor due to my current config (PCIe 2.0 x1 and a Celeron CPU). Any feedback is appreciated, especially regarding CPU load. Sent from my iPhone using Tapatalk
  22. Thanks for your answers. Today I deactivated the cache drive and tried to copy some files to the share. The transfer speed was not as fast as with the cache drive but was very good nonetheless (an average of 80 MB/s instead of 115 MB/s). And there is no need to call Mover in this configuration. So there is something strange going on with Mover, as I would expect it to be at least as good as a direct file transfer with simultaneous parity calculation. Nevertheless, I will move to XFS ASAP.
  23. Hi, For several months now, I have had the following problem when mover is running:
     - Slow file moves: it can take hours to move a few GB
     - Intermittent unresponsiveness: browsing shares via Explorer lags and streams stutter
     Some background on Tower: the CPU is a G620, 14 data drives, 43 TB, and most drives are 99% to 100% full except the last added drive. When mover is running, CPU utilization is usually close to 50% but can go up to 100%. Memory usage seems stable at 25%. I am not running any plugins. Parity check speed is good (around 80-90 MB/s). Do you have any recommendations to avoid this problem? Upgrade the CPU? Free some space on the data drives? Something else? Thanks a lot. stomp
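     To see what the disks and processes are doing while mover runs, per-process disk I/O can be watched with something like the command below (iotop may need to be installed separately on Unraid):
        # show only processes currently doing disk I/O, refreshed live
        iotop -o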