calmasacow

Everything posted by calmasacow

  1. My Nextcloud has been working great until today, as far as I know. I used it a week or two ago and it was fine, but now every time I go to the web page it says it is in maintenance mode. I know there was an update a bit ago. Anyone else having this issue?
  2. So I installed Ubuntu 22.04 on the Unraid box, and interestingly the speed is now 3-4 Gbit/s both ways between it and the Windows box. Going to try installing Windows on it next and test that.
  3. Yes, perhaps, but I have seen many users with lower-spec systems getting around 20 Gbit/s. Nor is there any significant CPU load during testing.
  4. Agreed, and also: why is it 8x faster in the other direction? Even 8x is slow; I have seen nearly identical hardware typically land in the neighborhood of 20-25 Gbit/s. Going to try booting Linux on the Windows box to see if that makes any difference; some have suggested it is a Windows issue.
  5. I'm using Blackmagic Disk Speed Test. My only concern is actual read and write performance over the network via NFS/SMB. I didn't know how to run an iperf test between Unraid and Windows. I switched to NFS and now get about 130 MB/s write and about 550 MB/s read, but that is still well below the performance I should be getting from these drives: individually they do about 2 GB/s in both read and write, and I have 4 of them in a Z1 pool, so I would expect at least single-drive performance.

OK, I found out how to run iperf3 against the server from the Windows box, and it shows:

[ ID] Interval        Transfer     Bandwidth
[  4] 0.00-10.00 sec  1.82 GBytes  1.56 Gbits/sec  sender
[  4] 0.00-10.00 sec  1.82 GBytes  1.56 Gbits/sec  receiver

When I run it the other way, from the Unraid server to the Windows box, I get about 11-12 Gbit/s, which seems odd. Both numbers also seem low, since both machines have 40 Gb fiber cards. Is there something I can change that may be contributing to this low throughput? I have jumbo frames set to 9000 on both.
  6. The CPU is a dual Intel Xeon E5-2640 v4 @ 2.40GHz, 20 cores total, with 192GB of RAM. It shouldn't be a bottleneck, and it shouldn't be this slow. It is a little faster when I use NFS instead of SMB, but still seems very low. It seems I should be able to get at least single-drive performance.
  7. OK, I have tried everything I can think of and this pool still crawls performance-wise, and I cannot for the life of me figure out why. I have 4x 1TB Samsung 970 Pro NVMe drives in a ZFS Z1 pool, but I can only get about 40-50 MB/s write and about 400-500 MB/s read from it. This setup should produce well over 2 GB/s in write speed and probably double that for read. I have 40Gb fiber between the machines locally. When I made the share I set it on the nvme-pool with no secondary storage. There has to be something I'm missing here. One user suggested enabling disk shares, and I did the test using that; performance about doubled but is still very low for the hardware.

Also: iperf3 from the Windows box to the server shows 1.56 Gbit/s, while iperf3 from the server to the Windows box shows 12 Gbit/s. I have jumbo frames set to 9000 on both ends and on the Mellanox SX6036 switch. I have seen others with almost the same setup, but with slower drives, get about 8 GB/sec of transfer over SMB, so does anyone know what I'm doing wrong? (Though I don't think the drives come into play in the iperf test; I may be wrong.)

Also, the Windows box has the latest drivers from the Nvidia website installed. In Unraid, the info for the card reads:

FW Version: 2.43.7010
FW Release Date: 6.5.2019
Product Version: 02.43.70.10
Rom Info: type=PXE version=3.4.662 type=UEFI version=14.9.90 cpu=AMD64
Device ID: 4103
Description: Node Port1 Port2 Sys image
GUIDs: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
MACs: e41****62720 e41d2d762721
VSD:
PSID: MT_1090111023

Which is a little weird, since according to the Nvidia website the latest firmware is fw-ConnectX3Pro-rel-2_42_5000-MCX314A-BCC_Ax-FlexBoot-3.4.752.bin. Am I wrong, or is that not older firmware? Am I reading the version wrong?

The systems are:

Windows 11 Pro box: i9-13900K, 64GB DDR5, Z790 chipset, Samsung 990 SSD 2TB.

Unraid box: HP ProLiant ML350 Gen9, 2x Intel Xeon E5-2640 v4 @ 2.40GHz (20 cores total), 192GB DDR4 ECC memory, 4x Samsung 980 PRO 1TB NVMe drives in a Z1 pool (4TB total).

Any insight would be very much appreciated; I have been at this for about 3 days and cannot figure it out. Thank you in advance.
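Two quick checks that often help with an asymmetric iperf3 result like this (a sketch; the server IP 10.0.0.2 is a placeholder): confirm jumbo frames actually pass end to end without fragmenting, and retest with parallel streams, since a single TCP stream from Windows often cannot fill a 40 Gb link on its own:

```shell
# From the Windows box: 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers.
# -f sets don't-fragment; if this fails, jumbo frames are not passing end to end.
ping -f -l 8972 10.0.0.2

# iperf3 with 8 parallel streams (-P 8), then the reverse direction (-R).
iperf3 -c 10.0.0.2 -P 8
iperf3 -c 10.0.0.2 -P 8 -R
```

If the parallel-stream total is much higher than the single-stream number, the limit is per-stream TCP behavior rather than the link or the cards.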
  8. I have deleted about 2 TB worth of files off the ZFS pool I have (40 TB total) and the free space has not moved. I have tried stopping and starting the array, and I have tried rebooting, and nothing has done anything. Is there some trick to this?
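One thing worth knowing here: on ZFS, space from deleted files is not released while any snapshot still references them, which is a common reason free space does not move. A sketch of how to check (the pool name tank is a placeholder):

```shell
# How much space snapshots are pinning on the pool.
zfs list -o name,used,available,usedbysnapshots tank
# List snapshots sorted by size; destroying stale ones releases the space they hold.
zfs list -t snapshot -o name,used -s used
```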
  9. Thank you that seems to have fixed it!
  10. I need to run a script every 5 minutes. I set the schedule to custom and entered */5, but that doesn't seem to work. Can someone shed some light on what I'm doing wrong?
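For anyone hitting the same thing: the custom schedule field in the User Scripts plugin generally expects a full five-field cron expression, not just the minute field, so `*/5` on its own never matches (a sketch, assuming standard cron syntax):

```
# minute  hour  day-of-month  month  day-of-week
*/5 * * * *
```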
  11. So forgive me if this is a dumb question. I have a Quadro P1000 in my Unraid box. Can more than one container access a single GPU, for instance Plex and Tdarr?
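For reference, a single GPU can generally be shared by multiple containers at once when the Nvidia driver plugin is installed; encode/decode sessions are shared rather than exclusive to one container. A sketch of the per-container template settings this usually involves (field names as they appear in typical Unraid container templates; the values are assumptions to adapt):

```
Extra Parameters:                    --runtime=nvidia
Variable NVIDIA_VISIBLE_DEVICES:     <GPU UUID from the plugin page, same value in both containers>
Variable NVIDIA_DRIVER_CAPABILITIES: all
```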
  12. In my system devices I have this:

Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 174c:2074 ASMedia Technology Inc. ASM1074 High-Speed hub
Bus 001 Device 003: ID 1a86:55d4 QinHeng Electronics SONOFF Zigbee 3.0 USB Dongle Plus V2
Bus 001 Device 005: ID 0764:0501 Cyber Power System, Inc. CP1500 AVR UPS
Bus 001 Device 006: ID 214b:7250 USB2.0 HUB
Bus 001 Device 007: ID 1a86:55d4 QinHeng Electronics 800 Z-Wave Stick
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 002: ID 0781:5583 SanDisk Corp. Ultra Fit
Bus 002 Device 003: ID 174c:3074 ASMedia Technology Inc. ASM1074 SuperSpeed hub

My Sonoff Zigbee radio and my Zooz Z-Wave radio appear to have the same USB hardware ID (1a86:55d4). How can I rectify this?
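Both sticks appear to use the same QinHeng USB-serial chip, so the vendor:product pair (1a86:55d4) cannot tell them apart, but the per-device serial number usually can. One sketch of a fix is pinning stable names with udev rules (the serial values below are placeholders; `ls -l /dev/serial/by-id/` shows the real strings to paste in):

```
# /etc/udev/rules.d/99-usb-radios.rules
SUBSYSTEM=="tty", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="55d4", ATTRS{serial}=="ZIGBEE-SERIAL", SYMLINK+="zigbee"
SUBSYSTEM=="tty", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="55d4", ATTRS{serial}=="ZWAVE-SERIAL", SYMLINK+="zwave"
```

Passing /dev/zigbee and /dev/zwave to the respective containers then survives USB re-enumeration.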
  13. Will this have a significant impact on drive lifespan? I was wondering about maybe putting the cache drive into a RAID-0 type config (not even sure that is possible). The writes I do are so big that they fill the cache and cause all of the Dockers and VMs to stop running.
  14. Is there a way I can increase my write speed from the network? Is there something that can be done to improve writing data to the array from computers on the network? I have 4x 14TB 7200rpm drives (256MB cache) with one parity drive, plus a mirrored cache pool of 2x 1TB NVMe drives. The CPU is an Intel Xeon D-1541 @ 2.10GHz (8 cores / 16 threads) with 32GB of DDR4. It seems that after about 60-100 GB of data it slows to a crawl; it may actually be less. I have to do dumps of raw footage for a show I work on about once a week, and it takes hours; it is usually almost a TB of data. So any ideas on speeding it up would be appreciated. The server has 10Gb ports, but seeing as I'm not even saturating the 1Gb port, I don't see the point of switching to 10Gb network hardware yet.
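The pattern described (fast for the first 60-100 GB, then a crawl) is consistent with the cache pool filling up and writes falling through to the parity-protected array. Some rough arithmetic with assumed round numbers (not measurements) shows why a ~1 TB dump takes hours either way at these speeds:

```shell
# Hours to copy a given number of GB at a sustained MB/s (rough arithmetic).
hours_to_copy() {
  awk -v gb="$1" -v mbs="$2" 'BEGIN { printf "%.1f\n", gb * 1000 / mbs / 3600 }'
}

hours_to_copy 1000 117   # ~gigabit Ethernet payload rate
hours_to_copy 1000 60    # ~assumed direct-to-array rate with single parity
```

Even at a fully saturated gigabit link (~117 MB/s of payload) a terabyte takes about 2.4 hours, so for weekly terabyte dumps the 10Gb ports would pay off once the cache/array bottleneck is addressed.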
  15. Can you elaborate on this and perhaps give an example? I'm getting terrible performance, like 12 MB/s.
  16. OK, here is the situation. I have 2 locations, both with residential internet and dynamic IPs.

Site 1 (primary): 1Gb down / 35Mb up, will be upgrading to 1Gb fiber in both directions soon; 23TB array with about 5TB of data to be synced.

Site 2 (secondary): 1Gb down / ~30Mb up, no hope of a bandwidth upgrade any time soon; 12TB array that will need to keep the 5TB of data in sync both ways.

Both run the latest Unraid. My thought was to get both boxes synced and then deliver one box to the secondary site. The primary site will run Nextcloud for remote access and syncing of files with laptops and other remote systems. Can the secondary Unraid box simply be a client of Nextcloud? Or can it be a Nextcloud server as well, with both servers also syncing with each other? Forgive me if this is dumb; I'm new to Unraid and have only had it for about a week. Great software so far, though!
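For the initial seed while both boxes are still on the same LAN, something like rsync over SSH is a common approach (a sketch; the host name tower2 and the share name media are placeholders):

```shell
# One-way pre-seed before delivering the second box to the remote site.
rsync -avh --progress /mnt/user/media/ root@tower2:/mnt/user/media/
```

After delivery, a scheduled rsync or a tool like Syncthing can keep the two ends converging over the slow uplinks.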
  17. Hmm, looks like turning off the enhanced macOS interoperability setting fixed it. Does anyone know if this will present issues when I try to connect from one of the Macintosh systems? I have several that will be here in a day or two.
  18. OK, so I have tried this from 2 different Windows computers to the Unraid system and I get the behavior from both of them. I have a media share for Plex, and when I try to copy a file to it, the copy appears to proceed normally; then, right as it is about to hit 100 percent, the progress bar resets to zero and starts over. Once it reaches 100 the second time, it gives an error stating that the item is no longer located in "source path to the file" and to verify the item's location and try again. I see this in the log file:

Jan 28 21:34:37 UNIMATRIX-0 smbd[28641]: [2022/01/28 21:34:37.578242, 0] ../../source3/smbd/dfree.c:140(sys_disk_free)
Jan 28 21:34:37 UNIMATRIX-0 smbd[28641]: sys_disk_free: VFS disk_free failed. Error was : No such file or directory
Jan 28 21:34:43 UNIMATRIX-0 smbd[28641]: [2022/01/28 21:34:43.827951, 0] ../../source3/smbd/dfree.c:140(sys_disk_free)
Jan 28 21:34:43 UNIMATRIX-0 smbd[28641]: sys_disk_free: VFS disk_free failed. Error was : No such file or directory

Has anyone seen this before?
  19. The cache on the NVMe drives fills up, and then they run very slowly. Most NVMe drives are very slow when not writing to the onboard DRAM cache; once that gets full, performance tanks. This is the main selling point of the Samsung Pro series.