bastl


Everything posted by bastl

  1. You did some custom edits in the XML for the CPU. Not sure if you found it on the Unraid forum or somewhere else. Usually it's used to get better performance and compatibility with Ryzen CPUs. I've never used such a long list of defined feature sets myself, only a couple of them. As long as it works for you and the performance is OK, I see no issue here.

     <cpu mode='custom' match='exact' check='full'>
       <model fallback='forbid'>EPYC-IBPB</model>
       <vendor>AMD</vendor>
       <feature policy='require' name='x2apic'/>
       <feature policy='require' name='tsc-deadline'/>
       <feature policy='require' name='hypervisor'/>
       <feature policy='require' name='tsc_adjust'/>
       <feature policy='require' name='clwb'/>
       <feature policy='require' name='umip'/>
       <feature policy='require' name='stibp'/>
       <feature policy='require' name='arch-capabilities'/>
       <feature policy='require' name='ssbd'/>
       <feature policy='require' name='xsaves'/>
       <feature policy='require' name='cmp_legacy'/>
       <feature policy='require' name='perfctr_core'/>
       <feature policy='require' name='clzero'/>
       <feature policy='require' name='wbnoinvd'/>
       <feature policy='require' name='amd-ssbd'/>
       <feature policy='require' name='virt-ssbd'/>
       <feature policy='require' name='rdctl-no'/>
       <feature policy='require' name='skip-l1dfl-vmentry'/>
       <feature policy='require' name='mds-no'/>
       <feature policy='require' name='pschange-mc-no'/>
       <feature policy='disable' name='monitor'/>
       <feature policy='require' name='topoext'/>
       <feature policy='disable' name='svm'/>
     </cpu>
  2. @Mattaton Unraid 6.9 will be able to read the sensor data from your CPU. Sensor data from Ryzen gen 3 and Threadripper TRX40 is currently not fully supported in 6.8.3.
  3. @Ernie11 Do you have to enter a passphrase before getting to the login screen?
  4. @ernestp I had the same issue, caused by Kaspersky Internet Security installed inside the VM. By default KIS checks if any virtualisation features are available and tries to use them. Any time I tried to change some settings in KIS, or even uninstall it, it would crash the VM. I couldn't even disable this function. Changing the CPU mode to "host-model" or to "custom" allowed me to disable this feature in Kaspersky. I guess for you there are features like Hyper-V enabled which see some CPU features from the host system and try to use them. Try to disable Hyper-V if you don't need it, or use the "custom" or "host-model" CPU flag, for example as sketched below.
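     A minimal sketch of what those two CPU modes look like in the VM's XML (the custom model and the disabled feature are just examples, not a recommendation):

     <!-- mirror the host CPU model as closely as libvirt can -->
     <cpu mode='host-model' check='partial'/>

     <!-- or pick a custom model and control single features yourself -->
     <cpu mode='custom' match='exact'>
       <model fallback='allow'>EPYC</model>
       <feature policy='disable' name='svm'/>
     </cpu>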
  5. @Ernie11 Is the vdisk encrypted during the initial Ubuntu installation? If so, it will be shown as fully allocated to Unraid. Every distro I tested with LUKS in the past was always fully allocated, no matter how full the vdisk really was. Compressing the vdisk changes nothing because Unraid has no access to the filesystem. A quick way to check the allocation is sketched below.
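     You can compare the virtual size with the actual allocation using qemu-img (the path is the Unraid default and just an example, adjust to yours):

     # "virtual size" is what the guest sees, "disk size" is the real allocation
     qemu-img info /mnt/user/domains/Ubuntu/vdisk1.img

     On a LUKS-encrypted guest both values end up nearly identical, because the encrypted blocks look like random data and the image can't stay sparse.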
  6. @luca2 You can use a modified 7-Zip version, for example. Either you install that modified version or, like I did, only use the desired zstd codec in the already installed mainline 7-Zip version. https://github.com/mcmilk/7-Zip-zstd#zstandard-codec-plugin-for-mainline-7-zip You should also be able to decompress the file using tar, as sketched below: https://stackoverflow.com/questions/45355277/how-can-i-decompress-an-archive-file-having-tar-zst
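     For example, with GNU tar 1.31+ and zstd installed (the file name is just a placeholder):

     # extract directly; tar calls zstd for you
     tar --zstd -xf backup.tar.zst

     # or decompress explicitly first, then unpack
     zstd -d backup.tar.zst && tar -xf backup.tar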
  7. @luca2 Sry, I thought you were using the plugin, which basically has the same functionality and is based on the older script. In the script you can find the zstd compression options starting at line 75.
  8. ZST files are compressed vdisk files, created if you're using the "Zstandard compression" option. Usually the vdisk file is copied first, compressed into a ZST file, and the copy is removed after compression (roughly as sketched below). In your case there is an older backup in the same folder which isn't compressed. You have to clean your backup destination first or you will end up with 2 big files, wasting space. Check the help!
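     A rough sketch of that copy-compress-remove flow (paths and file names are made up; the actual script does more):

     # copy the vdisk to the backup destination
     cp /mnt/user/domains/Win10/vdisk1.img /mnt/user/backup/

     # compress the copy to vdisk1.img.zst and delete the uncompressed copy
     zstd --rm /mnt/user/backup/vdisk1.img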
  9. For me pfSense in a VM only worked with Q35-2.6 and older. Newer Q35 machine types caused all sorts of issues during install or while running the VM.
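     If you want to pin the machine type, this is the relevant part of the VM's XML (the version string is just an example; pick one your QEMU provides):

     <os>
       <type arch='x86_64' machine='pc-q35-2.6'>hvm</type>
     </os>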
  10. @christopher2007 Have you already tried giving your server a fixed IP address (outside of the DHCP range, of course)? Right now it is set to use DHCP, and maybe you have a device on your network with that same IP already in use or with the same IP set as static. That could be a reason why the connection gets dropped. Just an idea. Even if I don't think DHCP is the issue here, it's worth a try. If you have any IP address conflicts you should see them in your router/DHCP-server logs. Here's another thread where "carrier lost" is logged very often, but in a bonding NIC scenario with DHCP; a static IP was the solution in that case.

      I found a couple of other reports on the web. One example: a Raspberry Pi dropping its LAN connection with similar error logs, caused by an underpowered power supply. Can you check how many watts your server is pulling from the wall, and check your PSU? I guess the PSU is strong enough, but maybe check if all the cabling is OK. Some boards have an extra 4 or 6 pin connector which is needed if you have lots of PCIe slots all populated with devices like GPUs, which can pull so much power that the standard 4/8-pin 12V power delivery can become unstable.

      Maybe try the card in another PCIe slot in case the slot is shared with other devices, or the slot itself has issues. Who knows. Have you ever had any instabilities with your server? Overclocked? Memory XMP profile? Cooling OK? It could also be a configuration on the switch/router limiting the number of MAC addresses allowed on a single port that causes the dropout. https://stackoverflow.com/questions/17564620/what-will-make-carrier-changed-or-lost-on-linux
  11. @christopher2007 Found something. Maybe the card is throttling under load because of a cooling issue with the chip. https://forum.level1techs.com/t/asus-xg-c100c-heat-sink-standoffs/130065 Maybe check that the heatsink isn't loose and makes proper contact with the chip. I also found another thread where a user reported that after a short amount of time their speeds dropped below 100MB/s.
  12. @christopher2007 Also try to set the MTU size (jumbo frames) on all devices (Unraid, client, switch) to 9000 and see if that helps. On Unraid you can find it on the settings page for the specific interface, and on Windows you should be able to find the setting in the device manager for the card. Either there is a "Jumbo Frames" entry you have to enable or an "MTU Size" value you can define. A quick way to test from the command line is sketched below.
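      A quick test from the Unraid terminal (interface name and target IP are placeholders, adjust to yours):

      # set the MTU on the fly (reverts on reboot)
      ip link set eth0 mtu 9000

      # verify jumbo frames pass end-to-end: 8972 bytes payload + 28 bytes
      # of ICMP/IP headers = 9000, and -M do forbids fragmentation
      ping -M do -s 8972 192.168.0.100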
  13. @christopher2007 One thing I forgot: you can check with "ethtool eth0" if the speeds for the card are reported correctly, and you can check the driver version in use with "ethtool -i eth0". Btw, my Macrium backup job finished without an issue. A 736GB file was created on one of my array disks and the verification of the backup looks like it runs through.
  14. A quick search on the forum for the Asus card shows that this card should work since Unraid version 6.7. Not sure if you have to increase the MTU size on server and client side from the default (I think 1500) to something around 9000 to gain better performance, but without using high-speed storage like an SSD you won't see that much of an increase compared to a 1GBit NIC. Remember, writing directly to the array only saturates one disk, and that drive is the limiting factor. On my last Unraid build I had an Aquantia 10G NIC on my board, but never used it because I never had a second client providing that speed, nor did I have a 10GBit switch.

      Usually the drivers come with the kernel itself or are added by Limetech in a newer Unraid build. As people reported Aquantia NICs working, I guess the current 6.8.3 should already come with it, and from your devices list the atlantic driver is loaded:

      01:00.0 Ethernet controller [0200]: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] [1d6a:07b1] (rev 02)
        Subsystem: ASUSTeK Computer Inc. Device [1043:8741]
        Kernel driver in use: atlantic
        Kernel modules: atlantic

      Some parts of your logs for the NIC:

      Jul 18 13:09:54 TS-Alt kernel: atlantic: link change old 1000 new 0
      Jul 18 13:09:54 TS-Alt kernel: br0: port 1(eth0) entered disabled state
      Jul 18 13:09:55 TS-Alt dhcpcd[1751]: br0: carrier lost
      Jul 18 13:09:55 TS-Alt dhcpcd[1751]: br0: deleting route to 192.168.0.0/24
      Jul 18 13:09:55 TS-Alt dhcpcd[1751]: br0: deleting default route via 192.168.0.1

      Not sure if it depends on you changing some network configs earlier or not, but I guess it's not an error dropping the connection. You might watch your logs when you test with the ASUS NIC later, to see if it reports that the carrier was lost while transferring your backup. I first saw your timezone is the same as mine and then your Geizhals link. Just adding 1+1 together 😁
  15. WTF Seagate... So all his reported Seek or Read errors are 8 digits and it's just fine and normal? 😂 How is it even possible that a drive reports errors when there are actually no errors? Isn't this something that maybe Limetech can patch, or is it a "some Seagate drives" thing only?
  16. Depends on how you manage your local DNS settings. If you have DNS lookup issues on your network, direct connections via IP should work without DNS. In my current test I'm using Unraid's DNS entry as the path for Macrium to back up to. Still running, and 300 gigs already written.

      I found a couple of things in the SMART logs for your disks you might have to look into. For sdb, for example, there are lots of read and seek errors:

      SMART Attributes Data Structure revision number: 10
      Vendor Specific SMART Attributes with Thresholds:
      ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
        1 Raw_Read_Error_Rate     POSR--  100   065   044    -    188393
        3 Spin_Up_Time            PO----  091   089   000    -    0
        4 Start_Stop_Count        -O--CK  100   100   020    -    50
        5 Reallocated_Sector_Ct   PO--CK  100   100   010    -    0
        7 Seek_Error_Rate         POSR--  077   060   045    -    48327554
        9 Power_On_Hours          -O--CK  100   100   000    -    404

      These errors are shown for all your array disks. One of my 3 spinners only shows a single seek error, and my disks are almost 6 years old now. Could be an issue with your controller as well. Maybe @johnnie.black can also have a look into your logs, if he has time, and might have some hints for you.
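      To pull the same report yourself (the device name sdb is just an example, adjust to your disk):

      # print all vendor-specific SMART attributes
      smartctl -A /dev/sdb

      # or the full report including self-test and error logs
      smartctl -a /dev/sdb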
  17. To me it somehow looked like either a networking issue on the Windows client or on Unraid itself, the way you describe it. I currently have Macrium 7.2 Free installed and I'm testing a backup with defaults: a 1TB SSD, 780GB filled, to a default Unraid share not using the cache. Small difference: I only have the client in a VM on the same Unraid build, but with its own IP. The disks I'm backing up to have 1.5TB free space. So far everything looks OK to me. 100 gigs already transferred, and I'm able to browse other shares without any hiccups or slowdowns. Either your switch, your cables or Windows itself has some issues, I guess. Any extra virus scanners installed? Do you have another Windows client where you can test how stable the connection is during the backup? Do you have any IDS on your network analysing your LAN traffic?
  18. @christopher2007 From your screenshots everything looks OK to me. Enough space to store 500GB, default minimum free space of 0KB and, as you said, tested without using the cache. Did you look into your Unraid logs around the time the data transfer stopped? Maybe have a look into the SMART reports for the disks as well, in case any errors are logged. Next time the backup "freezes", pull your diagnostics from Unraid before restarting the server and post them. Also have a look at your disk temps while the backup job is running.
  19. @christopher2007 Do you have a "Minimum free space" limit set for that share? Maybe post a screenshot of your share settings and one of your Main tab so we can see the current allocation of your disks.
  20. @christopher2007 Just a small question: does Macrium have an option to split the created backup file? I ask because all backup software I know has this option, and for most of it splitting is the default setting. The reason is that in most cases users upload their backups to cloud storage, or via network to a NAS or some sort of server. Network traffic can be interrupted, packets get dropped and have to be resent. Smaller files have a huge advantage in this case. Let's say your software splits the backup into 50MB chunks locally and starts to upload the first file. Once finished, it checks if the hash of the remote file matches the local one. In case it doesn't, the software only re-uploads that 50MB file (the idea is sketched below). This is way quicker than creating a huge 500GB file, waiting for the upload and having to re-upload that huge file again. Unraid doesn't know how large your file will be in the end, and depending on how you set up the share it will store it on the first drive with space. I guess reaching the limit of that drive and starting to move to another disk can cause an interruption which breaks the data stream from Macrium to the share.
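      A toy illustration of the chunk-and-verify idea (file names made up; real backup tools do this internally):

      # split a big backup image into 50MB chunks
      split -b 50M backup.img backup.img.part_

      # checksum every chunk; running the same command on the remote side
      # tells you which chunks differ and need a re-upload
      md5sum backup.img.part_* > backup.md5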
  21. @rvijay007 Left-click the icon of the VM and you get all the options you're asking for 👍
  22. What Ghost82 said in general is true, but all these devices are in the same group and would have to be passed through together to make it work. In your case that will break a lot of things. You have a couple of options to separate the GPU into its own group. You can try a different PCIe slot and check if the groupings are better, or you use the ACS Override option to split the groups. Most modern motherboard BIOSes also have an option to enable IOMMU, which can help to split up the groups. Don't try to pass all these devices to a VM, or Unraid will lose access to all of them (network, USB, SATA controller). You can even break it by only trying to use one device in a VM; the rest can lag out, crash or completely disappear from Unraid. A quick way to inspect your groups is sketched below.
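      To see how your devices are grouped, you can run this well-known one-liner from the Unraid terminal:

      # list every PCI device together with its IOMMU group number
      for d in /sys/kernel/iommu_groups/*/devices/*; do
        n=${d#*/iommu_groups/}; n=${n%%/*}
        printf 'IOMMU group %s: ' "$n"
        lspci -nns "${d##*/}"
      done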
  23. @Valiran Do you have any performance issues? There are a couple of things you can play around with. Set the core count of the VM (not counting HT) to a multiple of 6 (6, 12, 18, 24) so it uses full chiplets. In your case you have selected 8 cores. That means you're using a full die + 2 cores from another one, where other processes from Docker or other VMs may be running. The 3960X has 4 chiplets, each with 6 cores. Isolating the cores you wanna use can also help in this case; you might see some better memory performance (a pinning example is sketched below). Setting the Windows power plan to "high performance" can also help a bit. Have you tried the Q35 machine type? Maybe a thing to look at and test whether the performance for your specific need is better. To use Q35 you have to set up a new VM, keep that in mind. I did a quick test for comparison on a 12-core/24-thread Q35 VM (16GB, 1080ti, NVMe) and the numbers look close to yours. I'm not having any issues with my VM.
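      For reference, pinning one full chiplet ends up in the VM XML roughly like this (the core numbers assume HT siblings are offset by 24 on a 3960X; check lscpu or Unraid's CPU pinning page for your actual pairs):

      <vcpu placement='static'>12</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='6'/>
        <vcpupin vcpu='1' cpuset='30'/>
        <vcpupin vcpu='2' cpuset='7'/>
        <vcpupin vcpu='3' cpuset='31'/>
        <vcpupin vcpu='4' cpuset='8'/>
        <vcpupin vcpu='5' cpuset='32'/>
        <vcpupin vcpu='6' cpuset='9'/>
        <vcpupin vcpu='7' cpuset='33'/>
        <vcpupin vcpu='8' cpuset='10'/>
        <vcpupin vcpu='9' cpuset='34'/>
        <vcpupin vcpu='10' cpuset='11'/>
        <vcpupin vcpu='11' cpuset='35'/>
      </cputune>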
  24. @Stevenson Chittumuri You have to shut down the VM before you change the CPU pinning for it.