Output

Members · 65 posts

Everything posted by Output

  1. Been using this for a long time and really like it! I have a server with a backplane, and I thought it would be great if we could bind a port to a slot instead of a drive to a slot. When removing or moving a drive, the drive shown in the slot would then change automatically.
  2. Well, I'm in the process of removing 4 3TB drives that have now reached EOL after over 7 years of operation. I'm replacing them with 2 10TB drives instead, and I really would like this built in, even if it would take a long time. Doing it manually is just such a pain.
  3. +1, it would be great to have Unraid handle the removal of a drive, i.e. shrink the array. unBalance is a great tool, but it does not handle the fact that Unraid can still write to the disk during the process. If Unraid handled the process, it could be a one-click solution where the system:
     - ensures no further writes are made to the disk
     - moves all data to other disks
     - applies a new config
     - sends a notification when the drive has been removed from the array
  4. Just want to chime in. As a user since 2012 (not that long), Unraid has really had a long path of development and features added to it. I'm really grateful to all the developers who create plugins and Dockers, and I can understand the amount of time it takes to create and test them. I wish everyone would get along and continue the saga. In this instance, however, I have to side with Limetech, because I too have concerns about unofficial builds (viruses, bugs). Also, the solution Limetech uses has nothing to do with what the 3rd-party devs have done. It should be apparent to everybody that HW transcode/encode is really needed by most users (4K), and I can't see why credit is needed here. It is now apparent, from a user's point of view, that the heavy 3rd-party dependency is an issue. I would really have an issue if SWAG were removed from CA or GitHub, and all that seems needed for that to happen is a hurt ego. Hope my comments land correctly; I don't mean to piss on anyone. Btw, awesome work with 6.9!
  5. Here is the call stack from my setup; note that this only happens with one of the ports. I have not changed any settings other than which port br0 uses.

     Nov 6 10:14:04 Tower kernel: WARNING: CPU: 0 PID: 13593 at net/netfilter/nf_conntrack_core.c:945 __nf_conntrack_confirm+0xa0/0x69e
     Nov 6 10:14:04 Tower kernel: Modules linked in: xt_nat macvlan iptable_filter xfs dm_crypt dm_mod dax md_mod i915 i2c_algo_bit iosf_mbi drm_kms_helper drm intel_gtt agpgart syscopyarea sysfillrect sysimgblt fb_sys_fops nct6775 hwmon_vid iptable_nat ipt_MASQUERADE nf_nat_ipv4 nf_nat ip_tables wireguard ip6_udp_tunnel udp_tunnel mlx4_en mlx4_core e1000e igb(O) x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc aesni_intel aes_x86_64 crypto_simd cryptd mpt3sas glue_helper wmi_bmof intel_cstate intel_uncore i2c_i801 i2c_core wmi intel_rapl_perf raid_class ahci libahci video scsi_transport_sas pcc_cpufreq backlight thermal acpi_pad button fan [last unloaded: mlx4_core]
     Nov 6 10:14:04 Tower kernel: CPU: 0 PID: 13593 Comm: kworker/0:2 Tainted: G O 4.19.107-Unraid #1
     Nov 6 10:14:04 Tower kernel: Hardware name: ASUSTeK COMPUTER INC. System Product Name/WS C246 PRO, BIOS 1201 04/15/2020
     Nov 6 10:14:04 Tower kernel: Workqueue: events macvlan_process_broadcast [macvlan]
     Nov 6 10:14:04 Tower kernel: RIP: 0010:__nf_conntrack_confirm+0xa0/0x69e
     Nov 6 10:14:04 Tower kernel: Code: 04 e8 56 fb ff ff 44 89 f2 44 89 ff 89 c6 41 89 c4 e8 7f f9 ff ff 48 8b 4c 24 08 84 c0 75 af 48 8b 85 80 00 00 00 a8 08 74 26 <0f> 0b 44 89 e6 44 89 ff 45 31 f6 e8 95 f1 ff ff be 00 02 00 00 48
     Nov 6 10:14:04 Tower kernel: RSP: 0018:ffff88884b803d90 EFLAGS: 00010202
     Nov 6 10:14:04 Tower kernel: RAX: 0000000000000188 RBX: ffff8887b4cf7c00 RCX: ffff888716a4c198
     Nov 6 10:14:04 Tower kernel: RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffffffff81e08e94
     Nov 6 10:14:04 Tower kernel: RBP: ffff888716a4c140 R08: 00000000701ca1cd R09: ffffffff81c8aa80
     Nov 6 10:14:04 Tower kernel: R10: 0000000000000098 R11: ffff8887c109a800 R12: 0000000000006225
     Nov 6 10:14:04 Tower kernel: R13: ffffffff81e91080 R14: 0000000000000000 R15: 00000000000060b7
     Nov 6 10:14:04 Tower kernel: FS: 0000000000000000(0000) GS:ffff88884b800000(0000) knlGS:0000000000000000
     Nov 6 10:14:04 Tower kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Nov 6 10:14:04 Tower kernel: CR2: 0000000000d4c270 CR3: 0000000001e0a001 CR4: 00000000003606f0
     Nov 6 10:14:04 Tower kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     Nov 6 10:14:04 Tower kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
     Nov 6 10:14:04 Tower kernel: Call Trace:
     Nov 6 10:14:04 Tower kernel: <IRQ>
     Nov 6 10:14:04 Tower kernel: ipv4_confirm+0xaf/0xb9
     Nov 6 10:14:04 Tower kernel: nf_hook_slow+0x3a/0x90
     Nov 6 10:14:04 Tower kernel: ip_local_deliver+0xad/0xdc
     Nov 6 10:14:04 Tower kernel: ? ip_sublist_rcv_finish+0x54/0x54
     Nov 6 10:14:04 Tower kernel: ip_rcv+0xa0/0xbe
     Nov 6 10:14:04 Tower kernel: ? ip_rcv_finish_core.isra.0+0x2e1/0x2e1
     Nov 6 10:14:04 Tower kernel: __netif_receive_skb_one_core+0x53/0x6f
     Nov 6 10:14:04 Tower kernel: process_backlog+0x77/0x10e
     Nov 6 10:14:04 Tower kernel: net_rx_action+0x107/0x26c
     Nov 6 10:14:04 Tower kernel: __do_softirq+0xc9/0x1d7
     Nov 6 10:14:04 Tower kernel: do_softirq_own_stack+0x2a/0x40
     Nov 6 10:14:04 Tower kernel: </IRQ>
     Nov 6 10:14:04 Tower kernel: do_softirq+0x4d/0x5a
     Nov 6 10:14:04 Tower kernel: netif_rx_ni+0x1c/0x22
     Nov 6 10:14:04 Tower kernel: macvlan_broadcast+0x111/0x156 [macvlan]
     Nov 6 10:14:04 Tower kernel: macvlan_process_broadcast+0xea/0x128 [macvlan]
     Nov 6 10:14:04 Tower kernel: process_one_work+0x16e/0x24f
     Nov 6 10:14:04 Tower kernel: worker_thread+0x1e2/0x2b8
     Nov 6 10:14:04 Tower kernel: ? rescuer_thread+0x2a7/0x2a7
     Nov 6 10:14:04 Tower kernel: kthread+0x10c/0x114
     Nov 6 10:14:04 Tower kernel: ? kthread_park+0x89/0x89
     Nov 6 10:14:04 Tower kernel: ret_from_fork+0x1f/0x40
     Nov 6 10:14:04 Tower kernel: ---[ end trace 0daf68da82e3639c ]---
  6. I've run into this issue after adding a Mellanox ConnectX-2 10GbE card. I've never seen it with the built-in Intel Corporation Ethernet Connection (7) I219-LM (rev 10). Adding the Mellanox card, Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0), to the bridge and unplugging the Intel card will after a while result in kernel panics. I have 2 VLANs and split my Dockers between the VLANs and the untagged LAN. From the panics I've had cache pool problems, where the cache pool gets write errors and the pool corrupts; this results in a freeze with no output to the screen and loss of network.
  7. Started using this container yesterday; so far it's working great.
  8. Upgraded from 6.8-rc7 without any issues. Great work, guys!
  9. From other threads I've concluded that the M1015 might not work at all in an x4 slot; this could depend on the firmware on the M1015 card. I have not tried it myself, since I've opted for an x8 connection for the cards.
  10. I want to control my fans from the HDD temp and not the system temp
  11. The server is still running strong, no issues at all!
  12. Nope, I keep all my drives spinning all the time
  13. Upgrading my 2011 system, I wanted to move to iGPU encode and decode for Plex. I also wanted the latest server-oriented chipset/CPU without really needing a Xeon CPU. The choice fell on the C246 chipset, which narrowed the boards down to the Supermicro X11SCZ-F or the Asus WS C246 PRO. I really wanted an IPMI board, since the board I'm replacing has IPMI, but after reading about issues with running IPMI and the iGPU at the same time I dropped that requirement. Speaking for the Asus board was the ability to route more lanes to the PCIe slots by turning off 4 SATA ports, and the absence of the SM fan controller: my X-Case 420 PRO case has an issue with the SM fan controller conflicting with the case's built-in one. Thus I chose the Asus board. Specs: Intel Core i7-8700K; Corsair Vengeance LPX Black 32GB (CMK32GX4M2A2666C16); X-Case 420 PRO; 2x IBM M1015 in IT mode; 3 SSDs for cache; 17 HDDs. The board booted without an extra graphics card present, both with the shipped BIOS and the latest BIOS version 0802. Both M1015 cards work in slots 1 and 2; this seems important, since the cards do not like x4 slots. Unraid (6.6) does not find the fan controller without adding modprobe -v nct6775 to the go file. With that row added, FAN1-7 can be controlled independently (I think). iGPU encoding also works like a charm.
      IOMMU group 0:
        [8086:3ec2] 00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 07)
      IOMMU group 1:
        [8086:1901] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 07)
        [8086:1905] 00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x8) (rev 07)
        [1000:0072] 01:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
        [1000:0072] 02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
      IOMMU group 2:
        [8086:3e92] 00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 630 (Desktop)
      IOMMU group 3:
        [8086:a36d] 00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)
        [8086:a36f] 00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)
      IOMMU group 4:
        [8086:a360] 00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)
      IOMMU group 5:
        [8086:a352] 00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)
      IOMMU group 6:
        [8086:a340] 00:1b.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 (rev f0)
      IOMMU group 7:
        [8086:a338] 00:1c.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #1 (rev f0)
      IOMMU group 8:
        [8086:a330] 00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 (rev f0)
      IOMMU group 9:
        [8086:a332] 00:1d.2 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #11 (rev f0)
      IOMMU group 10:
        [8086:a309] 00:1f.0 ISA bridge: Intel Corporation Device a309 (rev 10)
        [8086:a323] 00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
        [8086:a324] 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
        [8086:15bb] 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)
      IOMMU group 11:
        [8086:1533] 06:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
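     The fan-controller step above can be sketched as a go-file addition. A minimal sketch, assuming the stock Unraid go file lives at /boot/config/go and the nct6775 driver mentioned in the post matches your Super I/O chip (verify with sensors-detect if unsure):

     ```shell
     #!/bin/bash
     # /boot/config/go — runs once at Unraid boot
     # Load the Nuvoton nct6775 sensor driver so FAN1-7 become controllable
     # (driver name taken from the post above; adjust for your board's chip)
     modprobe -v nct6775
     # Start the Unraid management interface (the stock line in the go file)
     /usr/local/sbin/emhttp &
     ```

     The -v flag just makes modprobe print what it loads, which helps confirm in the syslog that the driver actually came up.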
  14. Sounds bad with the IPMI issue. When you run the iGPU and no IPMI, do you have issues with fan control?
  15. I really would like to get the SM board since it has IPMI; Asus has its own solution that I guess is not IPMI-compatible. The SM board does pose a challenge for me: it only has 2 full-length card slots, and I want to fit in 2 LSI SATA cards and a GT 1030. That seems impossible unless I place one of the cards in an x4 slot, and then I would also have no room for anything else. The question is, if I go with the 8700K chip, is there a better board that has IPMI and also the correct chipset for iGPU support?
  16. I myself am in the same situation: transcoded Plex streams kill my Unraid server. Today I have a Xeon 1240v2 with about 10000 PassMarks; for some files that's enough to transcode one 4K SDR stream to 1080p. For 4K HEVC there is no way it can cope; Plex states you need around 15000, and it seems like ATMOS and such just need way more than that. My thinking is that CPU power alone will just not solve the issue: sure, 16000 x2 would be able to transcode one stream, but not two 4K HEVC Atmos files at the same time. For me a build with hardware encode/decode is the only option. I do know that there are currently issues with tone mapping, but they seem to be software- and not hardware-related. Still pondering which CPU and motherboard to get, though...
  17. Great info, thanks. I don't need ECC, but I guess the E-2100 series supports non-ECC memory too? If so, a C246 board would not be a compromise, I guess. HW transcode does have issues with HDR right now, but I think those will be fixed by Plex later, since the encoder and decoder support HDR. Is there a better MB for this task, and are there other chipsets that support iGPU HW transcoding?
  18. So a lot of my Plex library is now 4K, and the transcoding is really killing my server. It's a Xeon 1240v3 with 32GB RAM. This platform does not support iGPU HW decode/encode, so I now think it's time to upgrade. I'm thinking of an Intel i7-8700K CPU with 32GB RAM. I'm a bit stuck on which motherboard to get, since as I understand it the chipset also has to support the iGPU. I was thinking of a C246 chipset board, and I would like to have IPMI if possible. The IPMI boards I can find really lack PCI slots, and that's not so good. I know Asus (no IPMI) has 2 C246 boards with many slots. Are there any other boards I should consider? I would also want to be able to run the fan plugin.
  19. This is really cool. Is anyone running this 24/7 without issues?
  20. I think you should segment your network with VLANs, place IoT devices on one, and drop everything to/from that network that is headed for your main LAN.
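      A minimal sketch of that rule set on a Linux router, assuming the IoT VLAN sits on a hypothetical interface br0.20 and the main LAN on br0 (both interface names are placeholders, not from the post):

      ```shell
      # Allow replies to connections the main LAN initiated toward IoT devices
      iptables -A FORWARD -i br0.20 -o br0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
      # Drop everything the IoT VLAN itself tries to initiate toward the main LAN
      iptables -A FORWARD -i br0.20 -o br0 -j DROP
      ```

      Rule order matters here: the stateful ACCEPT must come before the DROP, so the main LAN can still reach the IoT devices while nothing on the IoT side can open connections back.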
  21. I really don't know the complexity of getting this to work, but I would presume that making the drivers installable by the user via CA or similar would be really hard. Just compiling the AMD and Nvidia drivers into unRaid would be the simplest way to do it. You would, though, lose the ability to choose a driver version and be unable to install others, like tuners, but I think this would solve 90% of the use cases.
  22. Would really like this too, since LastPass does not fill this in on Chrome, on either macOS or Windows. A real login page would be awesome.