
MSattler

Members
  • Posts

    150
  • Joined

  • Last visited

Everything posted by MSattler

  1. So I have a Quadro RTX 4000 that works splendidly 99.9% of the time. Sometimes, though, my family complains that most movies will simply not play. When I check the Plex logs I see: TPU: hardware transcoding: enabled, but no hardware decode accelerator found. A restart of the Plex container does not fix this; only a complete restart of Unraid gets transcodes working again. Now, I remember for years we sometimes had issues where containers could communicate with the video card but not transcode, and I'm assuming this is the same thing. Is there a way for me to forcefully reset the video card without a reboot? Right now I have a script running that emails me when this starts occurring so I can reboot the server myself. If I can script the resolution without a reboot, that would be preferable. Drivers I'm running: NVIDIA-SMI 535.104.05 / Driver Version: 535.104.05 / CUDA Version: 12.2. Thanks!
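     Roughly what I have in mind for the "reset without a reboot" script is sketched below. It is untested: the container name and the exact set of nvidia modules loaded by the Unraid driver plugin are assumptions, and both the in-band reset and the module reload only work while nothing is holding the GPU.

       #!/bin/bash
       # Stop whatever is using the card first (container name is an assumption)
       docker stop plex

       # First attempt: in-band reset of GPU 0; requires the card to be idle
       if ! nvidia-smi --gpu-reset -i 0; then
           # Fallback: reload the driver stack (module list is an assumption;
           # adjust to whatever lsmod shows on your box)
           rmmod nvidia_uvm nvidia_drm nvidia_modeset nvidia
           modprobe -a nvidia nvidia_modeset nvidia_drm nvidia_uvm
       fi

       docker start plex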
  2. Resurrecting this thread to see if this old patch works on the newer builds. I have a different card on the way with LSI firmware, but I'd like to see if I can get this 8003 card working.
  3. If the interface is for VMs, the interface itself doesn't need a gateway. The VMs themselves would have the gateway set; the NIC is just the connection. At least that's how it works on the VMware side. I don't run any VMs on my unRAID boxes.
  4. Typically you should only have one default gateway. You can add a specific route for the /24 on eth2, along the lines of the sketch below. I'm not sure where to store it so it's added back on every future reboot, though.
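     Something like this (the subnet and next hop are made-up placeholders, and appending to /boot/config/go is just my guess at a convenient place to persist it):

       # Add a route for the second subnet out eth2 (runtime only)
       ip route add 10.10.20.0/24 via 10.10.20.1 dev eth2

       # One way to make it survive reboots: append the same command to the
       # flash-based startup script
       echo "ip route add 10.10.20.0/24 via 10.10.20.1 dev eth2" >> /boot/config/go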
  5. Ah ok. Yup that makes total sense. Will do that tomm. thanks!
  6. All, I have a somewhat weird config:
     eth0 - 1GbE NIC, onboard
     eth1 - 10Gb Mellanox ConnectX-2 NIC
     Basically what I want to do is use the 1GbE NIC for managing the unRAID host, while the 10Gb interface is used for host access. The thing is that I cannot find any way to assign the gateway/DNS to eth1; unRAID automatically wants me to assign the gateway to eth0. I currently have this setup, and it works, but it looks funky in the GUI:
     eth0 - 10.10.10.42 / 255.255.255.0 / GW 192.168.1.1 / DNS 192.168.1.1
     eth1 - 192.168.1.42 / 255.255.255.0
     Is there a way to switch the interfaces around so that I'm assigning the gateway to the right interface?
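     For reference, the end state I'm after looks like this when done by hand (non-persistent, and assuming 192.168.1.1 really is the router on the 192.168.1.x side):

       # Move the default route off eth0 and onto the interface that actually
       # sits on the gateway's subnet
       ip route del default
       ip route add default via 192.168.1.1 dev eth1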
  7. So I just updated one tower from 6.2.4 to 6.3.1. Everything went fine except I no longer see all network interfaces. Both servers have the onboard NIC plus a quad-port NIC. 6.2.4 is still fine. On 6.3.1 I see eth0, eth1, and then eth118. I had to unplug the last two connections or I would drop pings. I never had an issue before connecting 5 x 1GbE through the LAG on my switch; all five interfaces would come up.

     Working tower on 6.2.4:

     root@Tower:~# dmesg | grep eth
     [ 9.573807] tg3 0000:08:00.0 eth0: Tigon3 [partno(BCM57781) rev 57785100] (PCI Express) MAC address bc:5f:f4:87:19:e6
     [ 9.573892] tg3 0000:08:00.0 eth0: attached PHY is 57765 (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1])
     [ 9.574016] tg3 0000:08:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1]
     [ 9.574068] tg3 0000:08:00.0 eth0: dma_rwctrl[00000001] dma_mask[64-bit]
     [ 9.765278] igb 0000:03:00.0: eth1: (PCIe:2.5Gb/s:Width x4) 00:1b:21:42:8a:08
     [ 9.765340] igb 0000:03:00.0: eth1: PBA No: Unknown
     [ 9.953454] igb 0000:03:00.1: eth2: (PCIe:2.5Gb/s:Width x4) 00:1b:21:42:8a:09
     [ 9.953521] igb 0000:03:00.1: eth2: PBA No: Unknown
     [ 10.159261] igb 0000:04:00.0: eth3: (PCIe:2.5Gb/s:Width x4) 00:1b:21:42:8a:0c
     [ 10.159334] igb 0000:04:00.0: eth3: PBA No: Unknown
     [ 10.365182] igb 0000:04:00.1: eth4: (PCIe:2.5Gb/s:Width x4) 00:1b:21:42:8a:0d
     [ 10.365232] igb 0000:04:00.1: eth4: PBA No: Unknown
     [ 20.042353] tg3 0000:08:00.0 eth0: Tigon3 [partno(BCM57781) rev 57785100] (PCI Express) MAC address bc:5f:f4:87:19:e6
     [ 20.042358] tg3 0000:08:00.0 eth0: attached PHY is 57765 (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1])
     [ 20.042361] tg3 0000:08:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1]
     [ 20.042364] tg3 0000:08:00.0 eth0: dma_rwctrl[00000001] dma_mask[64-bit]
     [ 20.252389] igb 0000:03:00.0: eth1: (PCIe:2.5Gb/s:Width x4) 00:1b:21:42:8a:08
     [ 20.252393] igb 0000:03:00.0: eth1: PBA No: Unknown
     [ 20.459400] igb 0000:03:00.1: eth2: (PCIe:2.5Gb/s:Width x4) 00:1b:21:42:8a:09
     [ 20.459404] igb 0000:03:00.1: eth2: PBA No: Unknown
     [ 20.656405] igb 0000:04:00.0: eth3: (PCIe:2.5Gb/s:Width x4) 00:1b:21:42:8a:0c
     [ 20.656409] igb 0000:04:00.0: eth3: PBA No: Unknown
     [ 20.853399] igb 0000:04:00.1: eth4: (PCIe:2.5Gb/s:Width x4) 00:1b:21:42:8a:0d
     [ 20.853403] igb 0000:04:00.1: eth4: PBA No: Unknown
     [ 21.518242] bond0: Enslaving eth0 as an active interface with a down link
     [ 21.769348] 8021q: adding VLAN 0 to HW filter on device eth1
     [ 21.769559] bond0: Enslaving eth1 as an active interface with a down link
     [ 22.022354] 8021q: adding VLAN 0 to HW filter on device eth2
     [ 22.022541] bond0: Enslaving eth2 as an active interface with a down link
     [ 22.275346] 8021q: adding VLAN 0 to HW filter on device eth3
     [ 22.275539] bond0: Enslaving eth3 as an active interface with a down link
     [ 22.528361] 8021q: adding VLAN 0 to HW filter on device eth4
     [ 22.528571] bond0: Enslaving eth4 as an active interface with a down link
     [ 24.232944] igb 0000:03:00.0 eth1: igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
     [ 24.264441] bond0: link status definitely up for interface eth1, 1000 Mbps full duplex
     [ 24.366671] tg3 0000:08:00.0 eth0: Link is up at 1000 Mbps, full duplex
     [ 24.366679] tg3 0000:08:00.0 eth0: Flow control is off for TX and off for RX
     [ 24.366683] tg3 0000:08:00.0 eth0: EEE is enabled
     [ 24.457949] igb 0000:03:00.1 eth2: igb: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
     [ 24.464455] bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
     [ 24.464458] bond0: link status definitely up for interface eth2, 1000 Mbps full duplex
     [ 24.774960] igb 0000:04:00.0 eth3: igb: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
     [ 24.864470] bond0: link status definitely up for interface eth3, 1000 Mbps full duplex
     [ 25.026975] igb 0000:04:00.1 eth4: igb: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
     [ 25.064480] bond0: link status definitely up for interface eth4, 1000 Mbps full duplex
     [ 34.874785] mdcmd (34): set md_write_method

     Non-working tower on 6.3.1:

     root@Tower2:~# dmesg | grep eth
     [ 12.685324] alx 0000:03:00.0 eth0: Qualcomm Atheros AR816x/AR817x Ethernet [e8:39:35:0e:06:41]
     [ 12.882026] e1000e 0000:06:00.0 eth1: (PCI Express:2.5GT/s:Width x4) e8:39:35:0e:06:41
     [ 12.882190] e1000e 0000:06:00.0 eth1: Intel® PRO/1000 Network Connection
     [ 12.882422] e1000e 0000:06:00.0 eth1: MAC: 0, PHY: 4, PBA No: D98771-010
     [ 13.049996] e1000e 0000:06:00.1 eth2: (PCI Express:2.5GT/s:Width x4) e8:39:35:0e:06:40
     [ 13.050160] e1000e 0000:06:00.1 eth2: Intel® PRO/1000 Network Connection
     [ 13.050391] e1000e 0000:06:00.1 eth2: MAC: 0, PHY: 4, PBA No: D98771-010
     [ 16.125588] alx 0000:03:00.0 eth0: Qualcomm Atheros AR816x/AR817x Ethernet [e8:39:35:0e:06:41]
     [ 16.313898] e1000e 0000:06:00.0 eth1: (PCI Express:2.5GT/s:Width x4) e8:39:35:0e:06:41
     [ 16.313911] e1000e 0000:06:00.0 eth1: Intel® PRO/1000 Network Connection
     [ 16.313989] e1000e 0000:06:00.0 eth1: MAC: 0, PHY: 4, PBA No: D98771-010
     [ 16.314664] e1000e 0000:06:00.0 eth118: renamed from eth1
     [ 16.489959] e1000e 0000:06:00.1 eth1: (PCI Express:2.5GT/s:Width x4) e8:39:35:0e:06:40
     [ 16.489963] e1000e 0000:06:00.1 eth1: Intel® PRO/1000 Network Connection
     [ 16.490040] e1000e 0000:06:00.1 eth1: MAC: 0, PHY: 4, PBA No: D98771-010
     [ 46.735221] bond0: Enslaving eth0 as an active interface with a down link
     [ 46.736114] alx 0000:03:00.0 eth0: NIC Up: 1 Gbps Full
     [ 46.996271] 8021q: adding VLAN 0 to HW filter on device eth1
     [ 46.996569] bond0: Enslaving eth1 as an active interface with a down link
     [ 46.998030] bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
     [ 47.008680] e1000e 0000:06:00.1 eth1: changing MTU from 1500 to 9014
     [ 47.204165] alx 0000:03:00.0 eth0: NIC Up: 1 Gbps Full
     [ 47.211283] device eth0 entered promiscuous mode
     [ 47.211350] device eth1 entered promiscuous mode
     [ 49.720842] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
     [ 49.812009] bond0: link status definitely up for interface eth1, 1000 Mbps full duplex
     [ 106.966331] udevd[1459]: Error changing net interface name eth118 to eth0: File exists
     [ 131.519290] alx 0000:03:00.0 eth0: Link Down
     [ 131.553699] bond0: link status definitely down for interface eth0, disabling it
     [ 138.635053] alx 0000:03:00.0 eth0: NIC Up: 1 Gbps Full
     [ 138.729493] bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
     [ 151.880355] mdcmd (34): set md_write_method
     [ 193.170137] device eth0 left promiscuous mode
     [ 193.170204] device eth1 left promiscuous mode
     [ 193.211403] bond0: Releasing backup interface eth0
     [ 193.211406] bond0: the permanent HWaddr of eth0 - e8:39:35:0e:06:41 - is still in use by bond0 - set the HWaddr of eth0 to a different address to avoid conflicts
     [ 193.225806] bond0: Releasing backup interface eth1
     [ 193.412912] e1000e: eth1 NIC Link is Down
     [ 193.413122] e1000e 0000:06:00.1 eth1: changing MTU from 9014 to 1500
     [ 193.650607] bond0: Enslaving eth0 as an active interface with a down link
     [ 193.651480] alx 0000:03:00.0 eth0: NIC Up: 1 Gbps Full
     [ 193.904120] 8021q: adding VLAN 0 to HW filter on device eth118
     [ 193.904438] bond0: Enslaving eth118 as an active interface with a down link
     [ 193.905864] bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
     [ 194.160101] 8021q: adding VLAN 0 to HW filter on device eth1
     [ 194.160454] bond0: Enslaving eth1 as an active interface with a down link
     [ 194.173071] e1000e 0000:06:00.0 eth118: changing MTU from 1500 to 9014
     [ 194.367164] e1000e 0000:06:00.1 eth1: changing MTU from 1500 to 9014
     [ 194.559910] alx 0000:03:00.0 eth0: NIC Up: 1 Gbps Full
     [ 194.567501] device eth0 entered promiscuous mode
     [ 194.567586] device eth118 entered promiscuous mode
     [ 194.567601] device eth1 entered promiscuous mode
     [ 196.845673] e1000e: eth118 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
     [ 196.855830] bond0: link status definitely up for interface eth118, 1000 Mbps full duplex
     [ 197.060656] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
     [ 197.063839] bond0: link status definitely up for interface eth1, 1000 Mbps full duplex
     [ 222.386136] e1000e: eth118 NIC Link is Down
     [ 222.463073] bond0: link status definitely down for interface eth118, disabling it
     [ 230.686266] alx 0000:03:00.0 eth0: Link Down
     [ 230.702895] bond0: link status definitely down for interface eth0, disabling it
     [ 237.764696] e1000e: eth1 NIC Link is Down
     [ 237.774638] bond0: link status definitely down for interface eth1, disabling it
     [ 245.289946] alx 0000:03:00.0 eth0: NIC Up: 1 Gbps Full
     [ 245.366477] bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
     [ 377.865520] e1000e: eth118 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
     [ 377.906686] bond0: link status definitely up for interface eth118, 1000 Mbps full duplex
     [ 410.712768] e1000e: eth118 NIC Link is Down
     [ 410.769687] bond0: link status definitely down for interface eth118, disabling it
     [ 415.221476] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
     [ 415.241635] bond0: link status definitely up for interface eth1, 1000 Mbps full duplex
     [ 416.348617] e1000e: eth1 NIC Link is Down
     [ 416.385529] bond0: link status definitely down for interface eth1, disabling it
     [ 419.081273] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
     [ 419.089441] bond0: link status definitely up for interface eth1, 1000 Mbps full duplex
     [ 548.350670] e1000e: eth118 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
     [ 548.421823] bond0: link status definitely up for interface eth118, 1000 Mbps full duplex

     Thoughts? Thanks!
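     The "Error changing net interface name eth118 to eth0: File exists" line makes me think udev is trying to give two ports the same name. A quick way I've been checking which physical port ended up as eth118 (assuming ethtool is available on the stock build):

       # Show each interface with its permanent MAC, since the bond overwrites
       # the active MAC on its slave interfaces
       for i in /sys/class/net/eth*; do
           n=$(basename "$i")
           echo "$n  $(ethtool -P "$n" 2>/dev/null)"
       done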
  8. I like that. That way when copying things between unRaid servers I'm not crushing the main interfaces. Granted, drive access may be slowed if they grab something from that drive. I was thinking about grabbing a Quanta LB6M, and trunking the ethernet ports to my main ethernet switches, and switching all my servers over to IPoIB.
  9. Did you get any feedback on this elsewhere? Thanks!
  10. So what are you guys using for your config file? I am now running 2 CyberPower 1500s: one for my Emby server, ESXi host, and router, and another for 2 unRAID servers and a Cisco switch. I'd like that one 1500 to shut down both unRAID servers. Since there is only one USB cable, I figure the unRAID server I connect the cable to would need something in its shutdown script to shut the other server down as well? Thanks!
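     The kind of thing I have in mind, assuming key-based SSH between the boxes (the hostname tower2 and the powerdown command are placeholders for whatever you actually run on the second server):

       #!/bin/bash
       # Run by the UPS-connected server when it begins shutting down:
       # ask the second unRAID box to power down cleanly first, then let
       # this machine continue its own shutdown
       ssh root@tower2 'powerdown'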
  11. Is irqbalance enabled? Years ago I ran into issues like this on XenServer because irqbalance was disabled: a single-threaded application would keep core 0 very busy and then cause network issues. Enabling irqbalance let any core handle the network interrupts.
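     A quick way to check both halves of that from the console (nothing unRAID-specific here):

       # Is irqbalance actually running?
       ps aux | grep '[i]rqbalance'

       # Are the NIC interrupts all landing on CPU0? Compare the per-CPU
       # counter columns for your eth devices
       grep -i eth /proc/interrupts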
  12. Hello, I'm running two unRAID 6 servers, and I'm curious whether the quad-port Intel 82571EB Gigabit Ethernet Controller is supposed to be supported? The host does not seem to see the interfaces. Thanks!
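     In case it helps, this is what I ran to check whether the card is even detected (the 82571EB should use the e1000e driver, if I'm remembering right):

       # Does the PCI device show up at all?
       lspci | grep -i ethernet

       # Did the kernel bind a driver to it?
       dmesg | grep -i e1000e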
  13. I think you are seeing what I saw with my prior issue. Your SATA controller card has some theoretical top throughput that it can handle. When you are doing a parity check you start out reading all disks at once, and the speed at that point can be limited by 1) the speed of your slowest drive, and 2) the throughput capability of your SATA controllers. If I hook a firehose up to a garden hose, I limit the water going through; same thing here with the data. With that much data going through the controller, you may be getting close to the actual throughput your card can handle, and typically your smaller drives are going to be older and slower. So once the check gets past 2TB, your 2TB drives spin down and aren't used anymore; if the bigger drives are faster, you will now see higher throughput and your estimated completion time will drop. You are also now sending less data through the controller, which helps as well. Typically my numbers start VERY low, increase as the 2TB disks drop off, and fly once it's down to just the 4TB disks. Totally expected. The best you can do is figure out how to best split your drives between the AOC-SAS2LP-MV8 controller and the onboard controller. And remember, these numbers are largely academic; outside of parity checks it is rare for all your drives to be spinning at the same time.
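     Just to put rough numbers on the firehose analogy (these are illustrative, not measurements from that card):

       # If the controller tops out around 1000 MB/s and eight drives are
       # being read at once, each drive is capped well below what it could
       # do on its own:
       echo $(( 1000 / 8 ))    # ~125 MB/s per drive with 8 drives active
       echo $(( 1000 / 4 ))    # ~250 MB/s per drive once only 4 remain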
  14. If you limit access to the unraid server it should not be a big deal, but any additional plugins/docker apps could make it more vulnerable.
  15. I just upgraded to BETA9 and was looking through the syslog when I saw the following entry: VMware vmxnet3 virtual NIC driver - version 1.2.0.0-k-NAPI. Is it normal for the Xen build to be using a VMware driver? Thanks, Marcus
  16. So... I don't actually want to back up my user shares, but is it possible to set up CrashPlan to back up to the /mnt share location? I have a secondary tower that I would like to use purely for backups, but I want the storage to go to the array, not the cache disks. Thanks!
  17. The point is that I, and probably others, would worry about it. Yes, there's a two-year warranty, but the fact is that they specify a two-year warranty constrained by a ridiculously low power-on-hours figure, bearing in mind that they also promote (in the first document quoted) the drives as suitable for home servers and NAS devices, which are typically powered 24/7. It simply does not make sense. I would generally prefer to buy drives and other parts where I know for sure what I am getting.
     I agree; I had bought the first drive on the basis of their "Best-fit applications" list and its 2-year warranty. It was when I was considering buying a second that I looked further (I wanted to check the power requirements) and saw the odd 2,400-hour issue, and this has made me stop to consider what to do. Heck, I'm spending the first 100 hours (4.2%) of the drive's life just running a 2-cycle preclear... Regards, Stephen
     I'm sure I could read the fine print on the things I buy on a daily basis and find something that would make me ponder like this. If we were talking about a drive manufacturer that sells 1,000 drives a month and gets 10 RMAs a month, then I would worry about this. With as many drives as Seagate sells, there is no easy or feasible way for them to check every drive upon receipt and approve or deny the warranty according to power-on hours. If Seagate suddenly changes their RMA procedures, then I would worry.
  18. To me, all this is moot; the drives have a two-year warranty. In the last 6 months I had six 1.5TB Seagate 7200 RPM drives and a couple of 2TB Seagate drives nearing their warranty expiration. Six of them in total had reallocated sector counts near or above 100. For each of these I requested an RMA and received a replacement drive. Some of the power-on hours were very high, in the 20,000-28,000 hour range, and not once did they balk at this. I wouldn't worry about this one bit. -Marcus
  19. You could try using just a 4GB VMDK and installing unRAID there?
  20. No, but the price is quite reasonable: http://www.newegg.com/Product/Product.aspx?Item=N82E16811235039&Tpk=zalman%20MS800 Nice-looking case; too bad they don't have a 12-bay full tower.
  21. Just wait, no doubt it's because they will put out an Antec 1200 Two, like the Antec 300 Two and Antec 900 Two, which have hardware support for USB 3.0 at the front. I love the Antec 900s (got 3), but you do need to flatten the 4 tabs to fit in the 5x3 cages. Once in, they slide in and out no problem. I am now looking to replace the 120mm fans with quieter ones.
     The Antec 1200 is actually on Version 3 now. It finally came into stock, and I got it last week. It has the tabs as well; I sliced myself pretty good taking them out. The 2 Icy Dock cages I have fit perfectly. Of course, the upgrade resulted in a trip to Microcenter for an ASRock Extreme4 board and an i3-3225 proc. It's running much quieter now, and I have connections for all the fans and for USB 3.0 up front (the old board had no USB 3.0). Pictures to follow soon.
  22. Interesting. I already have an Icy Dock 5-in-3, so I would prefer to keep them the same. What would be the best case for 4 of the Icy Dock 5-in-3s? The Antec 1200? I like the Antec 1200, but it appears to be out of stock everywhere. Thanks!
  23. Can anyone confirm whether the Xigmatek case, http://www.newegg.com/Product/Product.aspx?Item=N82E16811815011, will hold 4 x Icy Dock 5-in-3s? I currently have 1 Icy Dock bay on the way and 9 drives (1 parity, 1 cache, 7 data). I want to ensure the next case I buy allows me to expand to 20 drives as needed. Thanks!