
testdasi
Members · 2,812 posts · 17 days won

Everything posted by testdasi

  1. Your statements are vague fanboy talk. How bad is "really bad"? Can you quantify it, please?
  2. Your problem is why I use qcow2 instead of raw. I can then qemu-img convert and the copy is automatically sparse. The benefit is you can tell exactly how much space the image actually occupies instead of the max size.
  3. Can't please everyone. I bet my beer that if you remove IOWAIT, the forum will be flooded with "why is my server laggy, CPU shows 20% usage".
  4. When moving between i440fx and Q35, always start a new template. Btw, you need to boot the VM in UEFI (OVMF): SeaBIOS + Windows doesn't work with Q35.
  5. Q35 machine. Do you have this section of code in the xml, just before </domain>? If not, add it. Otherwise your GPU runs at PCIe x1.
     <qemu:commandline>
       <qemu:arg value='-global'/>
       <qemu:arg value='pcie-root-port.speed=8'/>
       <qemu:arg value='-global'/>
       <qemu:arg value='pcie-root-port.width=16'/>
     </qemu:commandline>
  6. Is your disk 14 the old disk 18? I think you have one disk that is already on its last legs, regardless of controller.
  7. Hmm... nothing stands out as being wrong. I suggest trying a FRESH template with Q35 instead of i440fx. Don't start the VM yet: edit the xml and add these lines before </domain>, then start. Hopefully Q35 works better. Remember to start a NEW FRESH template, not edit your current one.
     <qemu:commandline>
       <qemu:arg value='-global'/>
       <qemu:arg value='pcie-root-port.speed=8'/>
       <qemu:arg value='-global'/>
       <qemu:arg value='pcie-root-port.width=16'/>
     </qemu:commandline>
  8. What do you mean by "not showing up on the UI"?
  9. Attach diagnostics. And the VM xml.
  10. It depends on what you meant by screwed. Do you have another disk with enough spare capacity?
  11. It will usually be plug-and-play. There are some controllers out there that truncate the serial number; in that case you will need to re-add the disk. You can ask around on the forum for recommendations.
  12. Would it be possible to introduce TRIM for SSDs in the array as an unofficial, command-line-only feature? It is no different from how NVMe worked unofficially for a while before going official, or how multiple cache pools are an unofficial command-line-only feature. I think it would be extremely useful, especially for users who don't run parity, to have this option. It would also be useful for LT to make it available so that someone can actually test whether TRIM messes up parity or not.
  13. That's your problem for insisting on using Krusader then. I told you what's wrong with it (and that is a generic problem regardless of file type). Personally I don't see what's so hard about using CA User Scripts -> create new script -> put this line in the script: cp -rv /mnt/diskX/* /mnt/diskY/ -> Save -> Run in Background -> go have a beer. Also, I don't think you are right about those container files. Unless you mount them, the file system exposes each one to Krusader (or any other tool) as a single file. Krusader won't even attempt to mount such an image to read what is underneath and move the files within; any competent coder knows it would be extremely silly to do so, and I trust that the Krusader coders are competent. PS: you asked a similar question 1.5 years ago.
  14. Are you doing the 3 disks in parallel or in sequence? And are you seeing the same slowdown when doing cp (or mv) from the console? I have found Krusader to be terrible for large file moves: it starts at normal speed but grinds to an excruciating halt after some hours. It also creates a lot of fragmentation, which cp / mv doesn't, so maybe you should try doing it with cp / mv instead. And I don't think your proposal will help with transfer speed at all. The reason it's slow is that small files are inherently slower to read and write due to latency. Introducing compression in the middle isn't going to save you time, because you still have to read those same files from the old disk and write those same files to the new disk. If anything it will take even more time, because you lose the smoothing effect of the RAM cache.
  15. I think Unraid defaulting to i440fx is a legacy thing, because Q35 used to be rubbish with Windows some years ago on the old qemu. From Unraid 6.6 onwards, though, the progress made in qemu was enough to make Q35 at least on par with i440fx, and potentially even superior with regard to PCIe passthrough. I remember seeing posts on the forum about some GPUs that would not pass through without Q35. A few tips for you:
      • Q35 doesn't like SeaBIOS + Windows, so make sure you are on OVMF, i.e. boot UEFI, before switching.
      • Start a new template. I think the i440fx pci -> Q35 pcie move is way too complicated for the Unraid GUI to handle.
      • Don't forget that last bit of code to add manually to the xml. Wasted me an hour wondering why the F my 1070 didn't work.
      <qemu:commandline>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.speed=8'/>
        <qemu:arg value='-global'/>
        <qemu:arg value='pcie-root-port.width=16'/>
      </qemu:commandline>
  16. Your rebuild speed is limited by your slowest disk(s), so one disk running at 95MB/s means nothing. You have some disks with non-zero read error rates:
      • disk 5 (WCC4N08)
      • disk 18 (WCC1T04)
      • disk 21 (WCC1T14)
      • disk 8 (WMC1T03)
      Disk 18 in particular has 1 reallocated sector and 96 pending. That sounds like a disk on its last legs. You might want to double-check with jonnie.black; he's well-versed in disk errors, recovery etc. At least it's a good thing you have dual parity.
  17. About 25TB. Took me 10 days, 2 of which on 150Mbps, 3 on 250Mbps (because my old router can't handle gigabit) and the rest on 1Gbps. Average speed was about 30MB/s, but given that 2/3 of the object count is my offsite backup, which is full of tiny files in the KB range, that's not too shabby. Compare that with Crashplan, which last took 10 days to back up 200GB worth of files in the MB range, and I'd say it's a huge improvement.
  18. Oh dear. My prayers for you and your family, mate.
  19. And trust that the people on your network don't do silly things, like accidentally exposing your server to the Internet. That sort of silly.
  20. Hi Max. I suggest you post a request in the feature request forum. Not saying that LT will implement that but there's a better chance than posting in this old topic.
  21. It depends on how many years and what games. Some games indeed don't play game with RDP (#BadPun). Some are unplayable (think 30fps in CS:GO, that sort of unplayable). So YMMV, basically. That's why I recommended that the OP try it out as a proof of concept first.
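The sparse-copy point in post 2 (knowing how much space an image actually occupies versus its max size) can be demonstrated with plain coreutils; a minimal sketch, assuming a Linux shell. The qemu-img paths are hypothetical examples, and the demo file stands in for a converted vdisk:

```shell
# After `qemu-img convert -O qcow2 vdisk1.img vdisk1.qcow2` (paths hypothetical),
# only blocks that actually hold data are allocated on disk. The same effect
# is visible with any sparse file: apparent (max) size vs. space occupied.
truncate -s 1G /tmp/vdisk-demo.img           # 1 GiB apparent size, ~0 allocated
du -h --apparent-size /tmp/vdisk-demo.img    # reports the max size (1.0G)
du -h /tmp/vdisk-demo.img                    # reports only the allocated space
rm /tmp/vdisk-demo.img
```

The second `du` is what tells you the real footprint; `ls -l` and `du --apparent-size` only show the declared maximum.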
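The IOWAIT point in post 3 (hiding it would just make an I/O-bound server look mysteriously laggy at low CPU%) can be seen directly in the kernel's accounting; a minimal sketch, assuming a Linux shell and the standard /proc/stat layout:

```shell
# The aggregate "cpu" line of /proc/stat lists jiffies per state:
# user nice system idle iowait ... ($2..$6 in awk terms).
# Time spent waiting on disks lands in iowait, not in user/system time,
# which is why dropping it from the display would hide real load.
awk '/^cpu /{ total=0; for (i=2; i<=NF; i++) total+=$i;
              printf "iowait: %.1f%% of CPU time\n", 100*$6/total }' /proc/stat
```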
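The ~30MB/s average quoted in post 17 checks out; a quick sanity check of 25 TB moved over 10 days (assuming decimal TB and MB):

```shell
# 25 TB divided by 10 days' worth of seconds, expressed in MB/s
awk 'BEGIN { printf "%.1f MB/s\n", 25e12 / (10 * 86400) / 1e6 }'
# prints 28.9 MB/s, consistent with the quoted ~30 MB/s average
```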