frodr

Members · 526 posts
Everything posted by frodr

  1. Sorry, I don't know. Let's wait for some skilled forum folks to show up.
  2. Try the e1000 network model. That should give you Internet access. Set pin. Then change back to virtio and install the driver.
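The model switch above happens in the VM's XML (libvirt) in Unraid. A minimal sketch of the relevant interface stanza, assuming a `br0` bridge (the bridge name is a placeholder for whatever the VM actually uses):

```xml
<!-- Sketch of a libvirt <interface> element; 'br0' is a placeholder bridge name. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='e1000'/>  <!-- temporary; change back to 'virtio' once the guest drivers are installed -->
</interface>
```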
  3. I see. But how come a basic Windows PC can utilise NVMe speed almost fully and Unraid cannot? Are you also saying that a Threadripper Pro based server can only perform at 1/4 of an i7-13700, which again is less than 1/3 of the NVMe drive speed (see line 3)? This is very strange to me. No, I haven't tried 1M recordsize or going without a log vdev. The first only applies to data added after the recordsize change, right? And trying without a log vdev means I have to take down the whole pool. I'd rather set up another pool for testing some day. Can you help me with a command for testing HDD speed on a drive that's in a ZFS pool? Cheers
  4. But server 1 has the faster CPU: an AMD Ryzen Threadripper PRO 3955WX, while server 2 has an i7-13700. Yes, some overhead is expected, but here it is more than 60%. And how can the huge difference in line 3 be explained?
  5. For testing the performance of a single HDD in a ZFS pool, is it OK to run this command: dd if=/dev/zero of=/<mount_point>/testfile bs=1M count=<number_of_blocks> status=progress
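One caveat with that exact command: /dev/zero compresses to almost nothing, so on a dataset with compression enabled it overstates the speed, and cached writes can inflate it further. A sketch of a less misleading variant, with a placeholder mountpoint path:

```shell
# Sketch: sequential write test into a ZFS dataset (path is a placeholder).
# /dev/urandom defeats compression at some CPU cost; conv=fsync makes dd
# flush to disk before reporting, so the number isn't just cached writes.
dd if=/dev/urandom of=/mnt/pool/testdir/testfile bs=1M count=1024 conv=fsync status=progress

# For a read test afterwards, export/import the pool or reboot first so
# the ARC cache doesn't serve the file from RAM:
dd if=/mnt/pool/testdir/testfile of=/dev/null bs=1M status=progress
```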
  6. Thanks for taking the time to come back to me. Much appreciated. I updated the table with the pv command. Line 1: How can it be that server 1 goes NVMe to NVMe at only 1.25 GB/s, while server 2 does 3.01 GB/s (right at its sequential write spec of 3.0 GB/s)? The sequential write speed of the NVMe in server 1 is 4.2 GB/s. Line 2: Server 1's sequential write speed on the HDDs is 250 MB/s. Let's say 200 x 6 in RAID-Z2: that should be 1.2 GB/s, not 473 MB/s. Or am I missing something here? For server 2 it's 8 x SATA SSD (500 MB/s). Line 3: A good reading here should be about 70% of sequential write, around 3 GB/s, not 1.2 GB/s and certainly not 320 MB/s. The server is basically copying to itself. All NVMe drives are PCIe 4; for server 1 the spec is min. 4700 MB/s sequential read, 4200 MB/s write. Cheers,
  7. The first two lines are on the servers; the third line is between a server and a VM running on the same server.
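For reference, the pv measurement mentioned above can be reproduced like this; a sketch with placeholder source and destination paths:

```shell
# Sketch: measure raw copy throughput with pv (both paths are placeholders).
# pv streams the file and prints the running transfer rate; the trailing
# sync forces buffered data to actually reach the destination drive before
# you read off the final number.
pv /mnt/nvme_src/bigfile > /mnt/hdd_pool/bigfile
sync
```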
  8. Some tests on both servers: drive share to drive share, SMB multichannel. The server 1 Win11 VM is on virtio-net (virtio not working) and the server 2 VM is on virtio. I had a session with SpaceinvaderOne, but we could not solve it in the time we had left. How can server 1 perform that poorly? Server 2 does not perform that great either. The TrueNAS blog post uses 100 MB/s as HDD streaming speed. Is streaming speed the same as an HDD's write speed, today typically 200-250 MB/s? helmaxx-diagnostics-20230823-1442.zip kjell-diagnostics-20230822-2004.zip
  9. "If you don't see the idiot in the room........., it's you." For some reason one switch assigned only 100 Mbps to its clients. No wonder I had throttling on high-bitrate streams. That's fixed; now it's only Atmos on Plex left to solve. It might very well be on the client side, but there's nothing more to do there than reinstalling (which is done) on the ATV. So the solution must lie elsewhere.
  10. It should, but it doesn't. I reinstalled the OS on the ATV. I got Atmos in Netflix, but not from Plex.
  11. Yes, I believe so. I had hoped a strange hidden setting in Plex would solve it. So the ATV is not decoding Atmos in Plex, nor in Netflix. But it does in the TV+ app. Apppppppeeeellllllllll! The Shield Pro did not throttle today, and Atmos works fine. I wish they updated the GUI though.
  12. It is Direct Play I'm referring to. Tried Shield Pro/Emby: same throttling. Apple TV/Emby: not able to connect to the server. Apple TV (purchased 11/2022)/Plex: no throttling, but it doesn't decode Atmos. Video is Direct Play, but it transcodes the audio. Maybe that's the reason I don't get Atmos sound. Why does it transcode audio? See the reason in the picture.
  13. I have annoying throttling on high-bitrate files, Direct Play, about 80 Mbps and above. I had it on three different servers, two different routers and switches. Added info: the streamer is an Nvidia Shield Pro. The file is HEVC. Is this a common problem? I had no hits searching for "throttling". I added the diagnostics. From the log:
      Brought to you by linuxserver.io
      To support LSIO projects visit: https://www.linuxserver.io/donate/
      User UID: 99
      User GID: 100
      **** Server already claimed ****
      **** permissions for /dev/dri/renderD128 are good ****
      **** permissions for /dev/dri/card1 are good ****
      **** permissions for /dev/dri/card0 are good ****
      Docker is used for versioning skip update check
      [custom-init] No custom files found, skipping...
      Starting Plex Media Server. . . (you can ignore the libusb_init error)
      [ls.io-init] done.
      Critical: libusb_init failed
      Cheers, kjell-diagnostics-20230815-0051.zip
  14. TrueNAS blog article on performance with 12 drives. As I understand it, the base speed of each drive is 100 MB/s, with read/write of 900 MB/s in the Z2 example, meaning a read/write speed of 75% of the combined drive speed, if I understand correctly. A typical drive should be capable of double that 100 MB/s, or does "capable of 100MB/s streaming reads and writes" mean something else?
  15. I'm curious about what speeds to expect from ZFS pools, so I asked ChatGPT. Question: What is the expected read and write speed relative to drive speed on a 6-drive raidz2 ZFS pool, given enough CPU power? ChatGPT: In a standard RAID-Z2 configuration with 6 drives, assuming sufficient CPU power, large media files, and proper configuration, you can generally expect the following in terms of read and write speeds relative to the drive speeds: Read speeds: Read speeds can be expected to approach or even exceed the aggregate read speed of all the drives combined. This is because RAID-Z2 can leverage parallelism to read data from multiple drives simultaneously. Therefore, for large sequential read operations, you could see read speeds approximately 5 to 6 times the speed of a single drive. Write speeds: Write speeds in a RAID-Z2 configuration are influenced by parity calculations. While RAID-Z2 has a good balance of protection and performance, the write speeds might be somewhat lower compared to read speeds, especially for large sequential writes. Due to the dual parity calculations of RAID-Z2, write speeds could be around 2 to 3 times the speed of a single drive. Keep in mind that these are rough estimates and actual performance can vary based on factors such as hardware efficiency, CPU power, cache configuration, fragmentation, and other environmental conditions. Is this a realistic speed assumption? It looks kind of optimistic.
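As a sanity check on those numbers, the common rule of thumb for RAID-Z streaming throughput is data disks times per-disk speed; a sketch, assuming a 6-drive RAID-Z2 of 250 MB/s drives (example figures, not measurements):

```shell
# Rule-of-thumb ceiling for RAID-Z2 sequential throughput: data is striped
# over (drives - parity) disks, so that many drive-speeds is the rough upper
# bound. Real pools typically land well below this due to overhead.
drives=6; parity=2; drive_mbps=250   # example figures, not measurements
echo "ceiling: $(( (drives - parity) * drive_mbps )) MB/s"
# prints "ceiling: 1000 MB/s"
```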
  16. The use case: I would like to run VMs on a separate VLAN, with the possibility to isolate the VLAN (done in the UniFi router). I would like the network interface bridged so several VMs can utilise it. Current setup, not recommended/crashed: I used to set up the network interface with an automatic/static IP on the VLAN's subnet, incl. the gateway as specified in the router. I also tried adding the VLAN on the network interface. But several times the server network crashed. @JorgeB says adding several gateways can cause problems. Is there a way to set this up that does not cause problems? Cheers,
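In iproute2 terms, the usual shape of that setup is a VLAN sub-interface enslaved to its own bridge, with no extra default route on the host; a sketch, where the interface name and VLAN ID are placeholders:

```shell
# Sketch (assumption: Linux/iproute2 syntax; eth0 and VLAN ID 30 are placeholders).
IFACE=eth0 VLAN=30
ip link add link "$IFACE" name "$IFACE.$VLAN" type vlan id "$VLAN"  # tagged sub-interface
ip link add name "br0.$VLAN" type bridge                            # bridge for the VMs
ip link set "$IFACE.$VLAN" master "br0.$VLAN"
ip link set "$IFACE.$VLAN" up
ip link set "br0.$VLAN" up
# Deliberately no `ip route add default ...` here: the VMs get their gateway
# from the router's DHCP on that VLAN, so the host keeps a single default
# route and avoids the multiple-gateway problem mentioned above.
```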
  17. I want to connect an interface (as a bridge) to the VLAN on the router so I can run a VM on that VLAN. Shouldn't that be done this way? After resetting the network it's up and running, without adding another gateway.
  18. So setting up an interface (bridge) on the network vlan is not recommended/working?
  19. Ok, but not if the interface is connected to a switch? Only with a direct connection, if I understand correctly?
  20. I was able to get the network back by renaming the network.cfg file. But when adding and setting up a new NIC, a ConnectX-3, it's lost again. Two Unraid servers connected to the same switch. Added a ConnectX NIC in both to set up one direct connection. Added a static IP on both interfaces on both servers. Mapped an SMB share from both servers. Started a sync. Shortly after, the error as included. Then renamed network.cfg again and restarted. The server network was OK after restart. When setting up the network (see picture), the DNS error appeared. I'm pretty sure I need some advice now...... helmaxx-diagnostics-20230810-1803.zip
  21. I did a short test copying from an NVMe SSD on one server to the other server with 6 x 2X18 SATA in raidz2. I saw a steady 400 MB/s. Whether it was the pool's performance or the drive, I'm not sure.
  22. The heading is actually true. I guess "Recertified" is a nicer word for used. I purchased 6 units, set up a ZFS pool (4:2). https://serverpartdeals.com/collections/hard-drives/products/seagate-exos-2x18-st18000nm0092-18tb-7-2k-rpm-sata-6gb-s-512e-dual-actuator-3-5-recertified-hard-drive
  23. I did that and got the server online again. But I had problems with the router showing a different IP than the network settings. I pulled the IPMI card first, and then the T540 NIC. Now I'm up and running. Are these T540s purchased from "ping-pong" known to cause trouble?
  24. After removing one pool and making another, the server seems to have some DNS problems. I have not made any other changes. My local network is working. Internet access is OK. The other server is working fine with the same DNS settings. I have restarted the modem, router and switch. I can connect to the GUI. The startup sequence shows an IP address. Not done: changed the Cat cable. It seems that I need some advice...... Happy for any. Added: a VM on network source br0, on the same NIC as the server, is on the internet. kjell-diagnostics-20230801-2155.zip