calmasacow

Members · 21 posts · Noob (1/14) · 0 Reputation

  1. My Nextcloud has been working great until today, as far as I know. I used it a week or two ago and it was fine, but now every time I go to the web page it says it is in maintenance mode. I know there was an update a bit ago. Is anyone else having this issue?
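     For anyone who hits the same thing later: maintenance mode can usually be checked and cleared with occ. A minimal sketch, assuming the official Nextcloud container is named "nextcloud" (the container name, user, and paths will differ between installs):

        # turn maintenance mode off via occ
        docker exec -u www-data nextcloud php occ maintenance:mode --off

        # or edit config/config.php in the Nextcloud install and set:
        #   'maintenance' => false,

     If an update was interrupted, "occ upgrade" may need to finish before the flag will stay off.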
  2. So I installed Ubuntu 22.04 on the Unraid box and, interestingly, the speed is now 3-4 Gbit/s both ways between it and the Windows box. Going to try installing Windows on it next and test that.
  3. Yes, perhaps, but I have seen many users with lower-spec systems getting around 20 Gbit/s. Nor is there any significant CPU load during testing.
  4. Agreed, and also: why is it 8x faster in the other direction? Even 8x is slow; I have seen almost the same hardware typically land in the neighborhood of 20-25 Gbit/s. Going to try booting Linux on the Windows box to see if that makes any difference; some have suggested it is a Windows issue.
  5. I'm using the Blackmagic Disk Speed Test. My only concern is actual read and write performance over the network via NFS/SMB. I didn't know how to run an iperf test between Unraid and Windows. I switched to NFS and now I get about 130 MB/s write and about 550 MB/s read, but that is still well below the performance I should be getting from these drives: individually they do about 2 GB/s in both read and write, and I have four of them in a Z1 pool, so I would expect at least single-drive performance.

     I did figure out how to run iperf3 against the server from the Windows box, and it shows:

        [ ID] Interval           Transfer     Bandwidth
        [  4] 0.00-10.00 sec     1.82 GBytes  1.56 Gbits/sec  sender
        [  4] 0.00-10.00 sec     1.82 GBytes  1.56 Gbits/sec  receiver

     When I run it the other way, from the Unraid server to the Windows box, I get about 11-12 Gbit/s, which seems odd. Both numbers also seem low, since both machines have 40 Gb fiber cards. Is there something I can change that may be contributing to this low throughput? I have jumbo frames set to 9000 on both.
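     For anyone wanting to repeat the test, this is roughly how both directions can be run, plus a jumbo-frame check (a sketch; 10.0.0.10 stands in for the server's IP):

        # on the Unraid server: start a listener
        iperf3 -s

        # on the Windows box: Windows -> server, then server -> Windows (-R reverses direction)
        iperf3 -c 10.0.0.10 -t 10
        iperf3 -c 10.0.0.10 -t 10 -R

        # a single TCP stream rarely fills a 40 Gb link; -P 4 uses four parallel streams
        iperf3 -c 10.0.0.10 -t 10 -P 4

        # verify jumbo frames actually pass end to end (8972 = 9000 minus IP/ICMP headers)
        ping -f -l 8972 10.0.0.10        # from Windows
        ping -M do -s 8972 10.0.0.10     # from Linux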
  6. The CPU is dual Intel® Xeon® E5-2640 v4 @ 2.40 GHz, 20 cores total, with 192 GB of RAM, so that shouldn't be the issue and it shouldn't be this slow. It is a little faster when I use NFS instead of SMB, but still seems very low. It seems like I should be able to get at least single-drive performance.
  7. OK, I have tried everything I can think of and this pool still crawls performance-wise, and I cannot for the life of me figure out why. I have 4x 1 TB Samsung 970 Pro NVMe drives in a ZFS Z1 pool, but I can only get about 40-50 MB/s write and about 400-500 MB/s read from it. This setup should produce well over 2 GB/s in write speed and probably double that for read. I have 40 Gb fiber between the machines locally. When I made the share I set it on the nvme-pool with no secondary storage. There has to be something I'm missing here.

     One user suggested enabling disk shares, and I did the test using that. The performance roughly doubled, but it is still very low for the hardware.

     Also, iperf3 to the server from the Windows box shows 1.56 Gbit/s, and iperf3 from the server to the Windows box shows 12 Gbit/s. I have jumbo frames set to 9000 on both ends and on the Mellanox SX6036 switch. I have seen others with almost the same setup, but with slower drives, get about 8 GB/sec of transfer over SMB, so does anyone know what I'm doing wrong? (Though I don't think the drives come into play in the iperf test; I may be wrong.)

     Also, the Windows box has the latest drivers from the Nvidia website installed. In Unraid, the info for the card is:

        FW Version:      2.43.7010
        FW Release Date: 6.5.2019
        Product Version: 02.43.70.10
        Rom Info:        type=PXE version=3.4.662
                         type=UEFI version=14.9.90 cpu=AMD64
        Device ID:       4103
        Description:     Node             Port1            Port2            Sys image
        GUIDs:           ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
        MACs:            e41****62720     e41d2d762721
        VSD:
        PSID:            MT_1090111023

     Which is a little weird, since according to the Nvidia website the latest firmware is fw-ConnectX3Pro-rel-2_42_5000-MCX314A-BCC_Ax-FlexBoot-3.4.752.bin. Am I wrong, or is that not older firmware? Am I reading the version wrong?

     The systems are:

        Windows 11 Pro box:
          i9-13900K
          64 GB DDR5
          Z790 chipset
          Samsung 990 SSD 2 TB

        Unraid box:
          HP ProLiant ML350 Gen9
          2x Intel® Xeon® E5-2640 v4 @ 2.40 GHz, 20 cores total
          192 GB DDR4 ECC memory
          4x Samsung 980 PRO 1 TB NVMe drives in a Z1 pool, 4 TB total

     Any insight would be very much appreciated. I have been at this for like 3 days and cannot figure it out. Thank you in advance.
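     One way to narrow this down is to benchmark the pool locally on the Unraid box, so the drives are tested without SMB/NFS or the network in the path. A rough sketch (assuming fio is available and /mnt/nvme-pool is a placeholder for the real mount point):

        # sequential 1M write straight to the pool
        fio --name=seqwrite --directory=/mnt/nvme-pool --rw=write --bs=1M --size=10G --group_reporting

        # sequential 1M read test on the pool
        fio --name=seqread --directory=/mnt/nvme-pool --rw=read --bs=1M --size=10G --group_reporting

     If the local numbers are in the multi-GB/s range, the pool itself is fine and the bottleneck is the network or the SMB/NFS layer; if they are also slow, the pool settings are the place to look.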
  8. I have deleted about 2 TB worth of files off my ZFS pool (40 TB total) and the free space has not moved. I have tried stopping and starting the array, and I have tried rebooting, and nothing has done anything. Is there some trick to this?
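     For anyone else who finds this: the usual suspect is snapshots still referencing the deleted data. A quick way to check (a sketch; "tank" is a placeholder for the actual pool name):

        # how much space is pinned by snapshots vs. live data
        zfs list -o space tank

        # list every snapshot on the pool and what it consumes
        zfs list -t snapshot -r tank

        # if an unneeded snapshot is holding the space, it can be destroyed
        # (destructive - double-check the name first)
        zfs destroy tank/share@old-snapshot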
  9. Thank you, that seems to have fixed it!
  10. I need to run a script every 5 minutes. I set the schedule to custom and entered */5, but that doesn't seem to work. Can someone shed some light on what I'm doing wrong?
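      In case it helps: the custom schedule is a full five-field cron expression, so the minute field alone usually isn't enough. A minimal sketch:

         # minute  hour  day-of-month  month  day-of-week
         */5 * * * *     # run every 5 minutes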
  11. So forgive me if this is a dumb question. I have a Quadro P1000 in my Unraid box. Can more than one container access a single GPU, for instance, say, Plex and Tdarr?
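      For reference, multiple containers can use the same card at once; the usual pattern is to point each at the same GPU. A sketch, assuming the Nvidia driver plugin is installed and using plain docker run for illustration (the image names and GPU UUID are placeholders - the real UUID comes from nvidia-smi -L):

         # find the GPU's UUID
         nvidia-smi -L

         # give both containers the same runtime and device variables
         docker run -d --name plex  --runtime=nvidia \
           -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx -e NVIDIA_DRIVER_CAPABILITIES=all plexinc/pms-docker
         docker run -d --name tdarr --runtime=nvidia \
           -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx -e NVIDIA_DRIVER_CAPABILITIES=all ghcr.io/haveagitgat/tdarr

      In the Unraid templates this is the same idea: "--runtime=nvidia" in Extra Parameters plus the two NVIDIA_* variables on each container.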
  12. In my system devices I have this:

         Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
         Bus 001 Device 002: ID 174c:2074 ASMedia Technology Inc. ASM1074 High-Speed hub
         Bus 001 Device 003: ID 1a86:55d4 QinHeng Electronics SONOFF Zigbee 3.0 USB Dongle Plus V2
         Bus 001 Device 005: ID 0764:0501 Cyber Power System, Inc. CP1500 AVR UPS
         Bus 001 Device 006: ID 214b:7250 USB2.0 HUB
         Bus 001 Device 007: ID 1a86:55d4 QinHeng Electronics 800 Z-Wave Stick
         Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
         Bus 002 Device 002: ID 0781:5583 SanDisk Corp. Ultra Fit
         Bus 002 Device 003: ID 174c:3074 ASMedia Technology Inc. ASM1074 SuperSpeed hub

      My SONOFF Zigbee radio and my Zooz Z-Wave radio appear to have the same USB hardware ID. How can I rectify this?
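      They can still be told apart by serial number rather than by vendor:product ID. A sketch (the by-id names below are made-up examples of the format, not the actual devices):

         # stable per-device paths that embed the serial number
         ls -l /dev/serial/by-id/
         #   usb-ITead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_xxxxxxxx-if00 -> ../../ttyUSB0
         #   usb-Zooz_800_Z-Wave_Stick_yyyyyyyy-if00                   -> ../../ttyUSB1

      Passing the /dev/serial/by-id/... path (instead of /dev/ttyUSB0) to Zigbee2MQTT / Z-Wave JS means each container always gets the right stick, even if the ttyUSB numbers swap after a reboot.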
  13. Will this have a significant impact on drive lifespan? I was wondering about maybe putting the cache drives into a RAID 0-type config (not even sure that is possible). The writes I do are so big that they fill the cache and cause all of the Dockers and VMs to stop running.
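      It does appear to be possible when the cache pool is btrfs with more than one device: the data profile can be converted to RAID 0. A rough sketch (assuming a btrfs pool mounted at /mnt/cache; RAID 0 means losing any one device loses the whole pool):

         # check the current data/metadata profiles
         btrfs filesystem df /mnt/cache

         # convert data to raid0, keep metadata mirrored for a little safety
         btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache

         # confirm the new layout
         btrfs filesystem df /mnt/cache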