Xcage

Members
  • Posts: 36
  • Joined
  • Last visited



  1. Heya, just stumbled upon support for Spikhalskiy's ZeroTier container. I've been using it (forever?) and it works. The latest 6.12 network changes altered the way Unraid listens on interfaces, so there is now a need to add them under "interfaces extra" in the network section to be able to reach Unraid via a ZeroTier or Tailscale VPN. Thank you, Spikhalskiy! Once the 6.12 reboot issue that makes it necessary to re-add the extra interfaces is fixed, it will go back to "just" working as it did before.
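As a quick sketch of finding what to add (the interface-name prefixes are an assumption; check your own box):

```shell
# List ZeroTier/Tailscale interfaces present on the host; these are the names
# to add under the network settings' extra listening interfaces.
ls /sys/class/net | grep -E '^(zt|tailscale)' || echo "no ZeroTier/Tailscale interface up"
```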
  2. Heya people, I have five servers now at different places for different purposes, and all of them, including this one, are running fine and doing what they are supposed to do. But there is one giant issue with this new server I built: five NVMe drives plus one SSD as unassigned (diagnostics attached).

     The issue is slow write speeds, specifically during VM backup. I use the backup plugin, which really just runs a cp command, but it makes no difference if I run cp manually (which I did before grabbing the diagnostics, so the file was captured while a slow transfer was happening). On my other servers with NVMe drives, where the VM file sits on the btrfs NVMe cache and the copy destination is the array (2-3 NVMe drives), speeds are never lower than maybe 200 MB/s to the array with parity, or around 1 GB/s to a single NVMe drive without parity. Here the speeds are 10-15 MB/s, and I have no idea why; everything seems to be OK.

     It's worth mentioning that the VM .img is raw and was installed from scratch, with no wonky copying from other hypervisors. The only difference is the VirtIO drivers and QEMU agent (a newer version here), but it's tricky to reinstall those now, since the VM is in production and I can't really stop it; I might have the opportunity next month.

     Also, if I do cp /mnt/cacheshare/vmfolder/vm.img justabackup.img it is fast, as fast as NVMe writing should be, but with any other disk as the destination it crawls. Please enlighten me.

     Edit: because of the Samsung 980 firmware issue that reports wrong temperatures, I have append initrd=/bzroot nvme_core.default_ps_max_latency_us=0 in my boot line until I get a chance to update the firmware on those disks. Then again, I have another server with that line because there are Samsung 980s there too, and the performance there is fine. newtechsrv-diagnostics-20231002-0704.zip
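For anyone chasing the same thing, it can help to take cp out of the picture and write straight to each destination. A minimal sketch (paths and sizes are placeholders, not from the diagnostics):

```shell
# Write a test file with dd and report throughput; conv=fdatasync forces the
# data to disk before dd exits, so the number reflects the disk, not the page cache.
write_test() {
  local dest=$1 mb=${2:-256}
  dd if=/dev/zero of="$dest/speedtest.bin" bs=1M count="$mb" conv=fdatasync 2>&1 | tail -n 1
  rm -f "$dest/speedtest.bin"
}
# e.g. compare: write_test /mnt/cacheshare   vs   write_test /mnt/disk1
```

If raw dd to the slow disk is fast but cp from the cache is slow, the disk itself is fine and the problem is in the copy path.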
  3. Hello, I have a need to set the CPU model shown in Windows to something custom. What would be the best way to do this without losing performance? The emulated "QEMU custom version 2.5" works, but it degrades performance (I have not tested by how much yet), so I'd rather avoid that and keep passthrough. It's a Threadripper 2970WX box with NVMe drives and all, but I'd like to set the CPU name to something different; for example, if I assign 4 cores to a VM it would show "4 cores @3GHz". Is that an option?
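One possible route, offered as an untested sketch: QEMU's x86 CPUs expose a model-id property that overrides the CPUID brand string the guest displays, so -cpu host can be kept for performance while only the name changes. In a libvirt VM it would be appended as extra QEMU arguments, roughly like this (the name string is just an example, and it's worth verifying that the appended -cpu cleanly overrides the one libvirt generates):

```
<qemu:commandline>
  <qemu:arg value='-cpu'/>
  <qemu:arg value='host,model-id=4 cores @ 3GHz'/>
</qemu:commandline>
```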
  4. It's been more than 20 hours since my three Samsung 980s stopped throwing 84°C warnings hourly; the fix works. Running Unraid 6.11.5 with 3x Samsung 980 1TB NVMe and 2x Samsung 980 PRO 1TB NVMe.
  5. I second this. All five of my NVMe drives are Samsung: three 980s and two 980 PROs. The PROs aren't the ones that have this bug, though; my three regular 980s read exactly 84°C every now and then. 1ctower13thgen-diagnostics-20221224-1451.zip I've now found a thread with an explanation and diagnostics confirming that the issue only happens with the regular Samsung 980 and not the PRO version.
  6. Sorry, my bad, I should've mentioned the hardware. The new setup is: i9-13900K, MPG Z790 EDGE WIFI (MS-7D91), 128GB DDR5, 3x Samsung_SSD_980_1TB, 2x Samsung_SSD_980_PRO_1TB. The old one is: 9900K, Z390 GAMING X-CF, 96GB RAM, 1x Samsung_SSD_980_PRO_1TB (cache), 3x Samsung_SSD_860_EVO_1TB, 1x KIOXIA-EXCERIA_PLUS_SSD (the drive that was unmountable and was formatted). Old diagnostics attached: old-diagnostics-20221223-2148.zip
  7. Sorry to add to an old topic, but one of my drives just died; I replaced it, formatted the new one without rebuilding, and all my Docker and VM settings disappeared. I didn't really need the data or the settings, because I had already transferred all the data to the new Unraid server I'm upgrading to before that happened. The old box had 3 SATA SSDs and 2 NVMe drives; the new one has 5 NVMe drives (it's a VM machine). So my question would be: given I have 5 NVMe drives, would it really hurt performance if I kept appdata on the array with parity?
  8. Yep, I actually set it to 15 days, because it was wonky with 14 or 13 (I don't remember exactly), and it does keep 3 copies for the images that "number of days to keep" doesn't recognize. The issue was that it counted from the start of the run at 23:00, and a run sometimes took more than an hour, which made a backup look a day older than it actually was. Sadly it was too long ago for me to remember exactly, but you get the gist.
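The day-boundary effect described above is easy to see with made-up timestamps (just a sketch):

```shell
# A run that starts at 23:00 and ends at 01:00 is only 2 hours long,
# but its start and end fall on different calendar days, so date-based
# retention counts it as a day older than it really is.
start="2022-03-04 23:00"; end="2022-03-05 01:00"
hours=$(( ( $(date -d "$end" +%s) - $(date -d "$start" +%s) ) / 3600 ))
echo "$hours hours apart; days: $(date -d "$start" +%F) vs $(date -d "$end" +%F)"
```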
  9. The same happens with my 7 VMs: some get backed up, some don't, and they're all Win10, so it's something to do with how Unraid identifies the images. I ended up opting for the timing option instead and set it to 14 days (in my case I back up weekly and want to keep two old copies). One more thing to consider: which VMs were backed up properly and which weren't changed when I renamed the VMs. Maybe that gives people ideas about what's causing it.
  10. Well, I would agree if that were true for all backups of all VMs, but it only happens to some, so the script does follow the correct logic, just not in all cases.
  11. Hello everyone! First of all, giant thanks for the plugin; it's been serving me for almost a year on three different machines and does work. However, I have a strange issue on one of my heavy-VM machines. There are 7 VMs there, all with their vdisks in .img format, and all are working. For some reason, though, SOME of them do not get their old copies of the .img files deleted. For example, I have two machines called "Askedfor1" and "Askedfor2" that are completely similar in how they were created, yet the issue affects only one of them: the backup script doesn't see the vdisk backup file to delete. They are backed up at the same time, and those are SSD/NVMe drives. I have the same issue with two more of my VMs, but those had spaces in their names, so I thought that was the reason and renamed them; the backup created a new folder but still logs "did not find any vdisk image files to remove". What would be the cause of this issue?

      2022-03-05 00:38:23 information: AskedFor2 can be found on the system. attempting backup.
      2022-03-05 00:38:23 information: creating local AskedFor2.xml to work with during backup.
      2022-03-05 00:38:23 information: /mnt/user/VMsBackupShare/AskedFor2 exists. continuing.
      2022-03-05 00:38:23 information: skip_vm_shutdown is false and use_snapshots is 1. skipping vm shutdown procedure. AskedFor2 is running. can_backup_vm set to y.
      2022-03-05 00:38:23 information: actually_copy_files is 1.
      2022-03-05 00:38:23 information: can_backup_vm flag is y. starting backup of AskedFor2 configuration, nvram, and vdisk(s).
      2022-03-05 00:38:23 information: copy of AskedFor2.xml to /mnt/user/VMsBackupShare/AskedFor2/20220305_0000_AskedFor2.xml complete.
      2022-03-05 00:38:23 information: copy of /etc/libvirt/qemu/nvram/232c1702-aff4-5622-15ff-9f3557867781_VARS-pure-efi.fd to /mnt/user/VMsBackupShare/AskedFor2/20220305_0000_232c1702-aff4-5622-15ff-9f3557867781_VARS-pure-efi.fd complete.
      2022-03-05 00:38:23 information: able to perform snapshot for disk /mnt/disk4/Disk4SSD1TB/askedfor2.img on AskedFor2. use_snapshots is 1. vm_state is running. vdisk_type is raw
      2022-03-05 00:38:23 information: qemu agent found. enabling quiesce on snapshot.
      2022-03-05 00:38:25 information: snapshot command succeeded on askedfor2.snap for AskedFor2.
      2022-03-05 00:44:51 information: copy of /mnt/disk4/Disk4SSD1TB/askedfor2.img to /mnt/user/VMsBackupShare/AskedFor2/20220305_0000_askedfor2.img complete.
      2022-03-05 00:44:51 information: backup of /mnt/disk4/Disk4SSD1TB/askedfor2.img vdisk to /mnt/user/VMsBackupShare/AskedFor2/20220305_0000_askedfor2.img complete.
      2022-03-05 00:44:57 information: commited changes from snapshot for /mnt/disk4/Disk4SSD1TB/askedfor2.img on AskedFor2.
      2022-03-05 00:44:57 information: forcibly removed snapshot /mnt/disk4/Disk4SSD1TB/askedfor2.snap for AskedFor2.
      2022-03-05 00:44:57 information: extension for /mnt/user/isos/virtio-win-0.1.190-1.iso on AskedFor2 was found in vdisks_extensions_to_skip. skipping disk.
      2022-03-05 00:44:57 information: the extensions of the vdisks that were backed up are img.
      2022-03-05 00:44:57 information: vm_state is running. vm_original_state is running. not starting AskedFor2.
      2022-03-05 00:44:57 information: backup of AskedFor2 to /mnt/user/VMsBackupShare/AskedFor2 completed.
      2022-03-05 00:44:57 information: number of days to keep backups set to indefinitely.
      2022-03-05 00:44:57 information: cleaning out backups over 2 in location /mnt/user/VMsBackupShare/AskedFor2/
      2022-03-05 00:44:57 information: removed '/mnt/user/VMsBackupShare/AskedFor2/20220226_0000_AskedFor2.xml' config file.
      2022-03-05 00:44:57 information: removed '/mnt/user/VMsBackupShare/AskedFor2/20220226_0000_232c1702-aff4-5622-15ff-9f3557867781_VARS-pure-efi.fd' nvram file.
      2022-03-05 00:44:57 information: did not find any vdisk image files to remove.
      2022-03-05 00:44:57 information: did not find any vm log files to remove.
      2022-03-05 00:44:57 information: removing local AskedFor2.xml.
      2022-03-05 00:44:57 information: AskedFor1 can be found on the system. attempting backup.
      2022-03-05 00:44:57 information: creating local AskedFor1.xml to work with during backup.
      2022-03-05 00:44:57 information: /mnt/user/VMsBackupShare/AskedFor1 exists. continuing.
      2022-03-05 00:44:57 information: skip_vm_shutdown is false and use_snapshots is 1. skipping vm shutdown procedure. AskedFor1 is running. can_backup_vm set to y.
      2022-03-05 00:44:57 information: actually_copy_files is 1.
      2022-03-05 00:44:57 information: can_backup_vm flag is y. starting backup of AskedFor1 configuration, nvram, and vdisk(s).
      2022-03-05 00:44:57 information: copy of AskedFor1.xml to /mnt/user/VMsBackupShare/AskedFor1/20220305_0000_AskedFor1.xml complete.
      2022-03-05 00:44:57 information: copy of /etc/libvirt/qemu/nvram/1654e5f3-480a-e5c8-0a8e-f9922c315dc0_VARS-pure-efi.fd to /mnt/user/VMsBackupShare/AskedFor1/20220305_0000_1654e5f3-480a-e5c8-0a8e-f9922c315dc0_VARS-pure-efi.fd complete.
      2022-03-05 00:44:57 information: able to perform snapshot for disk /mnt/disk4/Disk4SSD1TB/askedfor1.img on AskedFor1. use_snapshots is 1. vm_state is running. vdisk_type is raw
      2022-03-05 00:44:57 information: qemu agent found. enabling quiesce on snapshot.
      2022-03-05 00:44:58 information: snapshot command succeeded on askedfor1.snap for AskedFor1.
      2022-03-05 00:53:02 information: copy of /mnt/disk4/Disk4SSD1TB/askedfor1.img to /mnt/user/VMsBackupShare/AskedFor1/20220305_0000_askedfor1.img complete.
      2022-03-05 00:53:03 information: backup of /mnt/disk4/Disk4SSD1TB/askedfor1.img vdisk to /mnt/user/VMsBackupShare/AskedFor1/20220305_0000_askedfor1.img complete.
      2022-03-05 00:53:08 information: commited changes from snapshot for /mnt/disk4/Disk4SSD1TB/askedfor1.img on AskedFor1.
      2022-03-05 00:53:08 information: forcibly removed snapshot /mnt/disk4/Disk4SSD1TB/askedfor1.snap for AskedFor1.
      2022-03-05 00:53:08 information: extension for /mnt/user/isos/virtio-win-0.1.190-1.iso on AskedFor1 was found in vdisks_extensions_to_skip. skipping disk.
      2022-03-05 00:53:08 information: the extensions of the vdisks that were backed up are img.
      2022-03-05 00:53:08 information: vm_state is running. vm_original_state is running. not starting AskedFor1.
      2022-03-05 00:53:08 information: backup of AskedFor1 to /mnt/user/VMsBackupShare/AskedFor1 completed.
      2022-03-05 00:53:08 information: number of days to keep backups set to indefinitely.
      2022-03-05 00:53:08 information: cleaning out backups over 2 in location /mnt/user/VMsBackupShare/AskedFor1/
      2022-03-05 00:53:08 information: removed '/mnt/user/VMsBackupShare/AskedFor1/20220226_0000_AskedFor1.xml' config file.
      2022-03-05 00:53:08 information: removed '/mnt/user/VMsBackupShare/AskedFor1/20220226_0000_1654e5f3-480a-e5c8-0a8e-f9922c315dc0_VARS-pure-efi.fd' nvram file.
      2022-03-05 00:53:09 information: removed '/mnt/user/VMsBackupShare/AskedFor1/20220226_0000_askedfor1.img' vdisk image file.
      2022-03-05 00:53:09 information: did not find any vm log files to remove.
      2022-03-05 00:53:09 information: removing local AskedFor1.xml.
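As a stopgap until the cleanup behavior is sorted out, the leftover timestamped vdisk copies can be listed by age and pruned manually. A sketch based on the folder layout in the log above (the share path, filename pattern, and retention are assumptions to adapt):

```shell
# List timestamped .img backup copies older than a given number of days in one
# VM's backup folder; verify the listing looks right before adding -delete.
list_stale_vdisks() {
  local share=$1 days=$2
  find "$share" -maxdepth 1 -name '*_*.img' -mtime +"$days" -print
}
# e.g.: list_stale_vdisks /mnt/user/VMsBackupShare/AskedFor2 14
```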
  12. How would I limit the miner to only about 20% of each core, or run it only when the CPU cores are idle? Affinity settings work, but they load the chosen cores to 100%, and I would like it to mine only when the cores aren't otherwise in use. --cpu-priority=0 doesn't do anything; changing the value from 0 up to 5 makes no difference, it always loads the same cores to 100%.
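One angle that matches "only when cores aren't used" is scheduling policy rather than priority: under SCHED_IDLE the kernel gives a process CPU time only when nothing else wants it. A sketch (the miner command is a stand-in):

```shell
# chrt --idle runs the command under SCHED_IDLE; it may still show a core at
# 100% in top, but any normal-priority task preempts it immediately.
chrt --idle 0 echo "miner command would go here"
# A hard ~20% cap is a different tool; for a Docker container, for example:
#   docker update --cpus="0.8" <container>   # 0.8 cores total (name is hypothetical)
```

Note that SCHED_IDLE doesn't cap usage on an otherwise idle box; it only yields instantly to real work, which is usually what "mine only when idle" wants.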
  13. So I could have a maximum network load of whatever my network supports (up to 10Gbit) and any modern CPU will handle it? Say 11th gen, at least 4 cores?
  14. Hi all, the question is pretty simple, but I want to be sure that I am not missing something: what would be the maximum size for the array before I take a performance hit? Of course not counting VMs or anything; just something like ZeroTier/Tailscale, a couple of other small Dockers, and a big array. Currently planning on 20x 16TB mechanical drives, 2x 2TB NVMe for cache, and maybe some SSDs for frequently changed data. What would my CPU needs be? RAM needs, etc.? I do have 5 Unraid servers, but none of those exceed 10TB each. Exclude networking; I've got that figured out.