Everything posted by eribob

  1. I have the exact same problem. I followed SpaceinvaderOne's guide to set up my appdata folders as ZFS datasets, then tried to remove old datasets containing data for apps that I no longer use. I first tried the "ZFS Master" plugin in destructive mode, but got an "Operation not permitted" error. The same happened on the command line: `zfs destroy -fr [DATASET]`. It only worked after I first ran your command `zfs set mountpoint=none [DATASET]`. Why is this necessary? Is there a way to automate it so that I can remove datasets using the GUI?
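For reference, the full sequence that ended up working for me was roughly this (the dataset name is a placeholder, adjust to your pool):

```
# Unmounting first is what made the difference; destroy alone
# failed with "Operation not permitted" while the dataset was mounted
zfs set mountpoint=none cache/appdata/oldapp
zfs destroy -r cache/appdata/oldapp
```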
  2. Thanks, I did not read carefully before buying. I have placed an order for that cable now.
  3. Hi again! Finally got around to buying an M.2 drive! I ended up with an Intel P4510 8TB drive instead. It is PCIe 3.0, but I figure that is enough, and the IOPS are great. Anyway, it works fine, but I am getting slower read/write speeds than expected. I tested copying a large file from the drive to a RAM disk and got around 1.2GB/s (I expected around 3GB/s). I am wondering if the cable is the issue? I bought this cable: https://www.amazon.se/dp/B097BDG3TX/ref=pe_24982401_503747021_TE_SCE_dp_1 It says SAS 12G or internal NVMe; could it be that it is limited to 1.2GB/s? Perhaps 100cm is too long to allow for max speeds? In that case I will buy the same cable you bought and try again. The PCIe card I bought is this one: https://www.amazon.se/dp/B0B6CJ889T/ref=pe_24982401_503747021_TE_SCE_dp_1 - could it be the problem? It should just be a dumb link from the PCIe slot to SFF-8643 ports, though, so perhaps it is fine? /Erik
  4. Thanks, I think the problem was actually that I updated the BIOS today, which reset the IOMMU setting, so it was suddenly off and no PCI passthrough was working. When I enabled it again, the warning messages about the flash drive disappeared. I think I will run a memtest as well, though, since I had some other problems with BTRFS today.
  5. Hi! Several problems at once today... Suddenly I started getting messages that my flash drive is corrupted. Attached diagnostics. I can however browse the contents in the Unraid GUI and I managed to create a flash backup. Do I need to replace the flash drive? /Erik monsterservern-diagnostics-20240130-1543.zip
  6. Thank you for the reply! So that would mean that the NVMe slot on my motherboard suddenly stopped working? The drive has been in it for 1-2 years without issues. It sounds more likely that it is an issue with the drive in that case? Moving the NVMe drive is not trivial, hehe, I would have to disassemble the server... I also tried another suggestion from another thread: append initrd=/bzroot nvme_core.default_ps_max_latency_us=0 pcie_aspm=off. But this did not help; rather, I got even more problems afterwards. Maybe that was just a coincidence, though. But can these boot parameters make things worse in some circumstances?
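For context, those parameters go on the append line of the boot stanza in /boot/syslinux/syslinux.cfg (editable under Main > Flash in the GUI). While I was testing, the stanza looked roughly like this:

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot nvme_core.default_ps_max_latency_us=0 pcie_aspm=off
```

Removing the two extra parameters and rebooting restores the default behaviour.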
  7. Hi! One of my NVMe drives has suddenly started giving me a lot of BTRFS errors. See the attached syslog:

     Jan 30 07:35:27 MONSTERSERVERN kernel: I/O error, dev loop2, sector 37325840 op 0x1:(WRITE) flags 0x100000 phys_seg 4 prio class 2
     Jan 30 07:35:27 MONSTERSERVERN kernel: I/O error, dev loop2, sector 37300584 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
     Jan 30 07:35:27 MONSTERSERVERN kernel: loop: Write error at byte offset 16593756160, length 4096.
     Jan 30 07:35:27 MONSTERSERVERN kernel: I/O error, dev loop2, sector 32409680 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
     Jan 30 07:35:27 MONSTERSERVERN kernel: loop: Write error at byte offset 19110830080, length 4096.
     Jan 30 07:35:27 MONSTERSERVERN kernel: I/O error, dev loop2, sector 37325840 op 0x1:(WRITE) flags 0x100000 phys_seg 4 prio class 2
     Jan 30 07:35:30 MONSTERSERVERN kernel: btrfs_dev_stat_inc_and_print: 330006 callbacks suppressed
     [...]
     Jan 30 07:35:30 MONSTERSERVERN kernel: BTRFS error (device nvme0n1p1: state EA): bdev /dev/nvme0n1p1 errs: wr 172, rd 2403805, flush 0, corrupt 0, gen 0
     Jan 30 07:35:30 MONSTERSERVERN kernel: BTRFS error (device nvme0n1p1: state EA): bdev /dev/nvme0n1p1 errs: wr 172, rd 2403807, flush 0, corrupt 0, gen 0
     Jan 30 07:35:30 MONSTERSERVERN kernel: BTRFS error (device nvme0n1p1: state EA): bdev /dev/nvme0n1p1 errs: wr 172, rd 2403809, flush 0, corrupt 0, gen 0
     [...]
     Jan 30 07:35:37 MONSTERSERVERN kernel: I/O error, dev loop2, sector 37430928 op 0x0:(READ) flags 0x80700 phys_seg 4 prio class 2
     [...]
     Jan 30 09:28:14 MONSTERSERVERN kernel: nvme0n1: I/O Cmd(0x2) @ LBA 1066408400, 8 blocks, I/O Error (sct 0x3 / sc 0x71)
     Jan 30 09:28:14 MONSTERSERVERN kernel: I/O error, dev nvme0n1, sector 1178887112 op 0x0:(READ) flags 0x80700 phys_seg 2 prio class 2
     [...]
     Jan 30 09:28:14 MONSTERSERVERN kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 1, flush 0, corrupt 0, gen 0
     Jan 30 09:28:14 MONSTERSERVERN kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 3, flush 0, corrupt 0, gen 0
     Jan 30 09:28:14 MONSTERSERVERN kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 4, flush 0, corrupt 0, gen 0
     Jan 30 09:28:14 MONSTERSERVERN kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 1, rd 4, flush 0, corrupt 0, gen 0
     Jan 30 09:28:14 MONSTERSERVERN kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 2, rd 4, flush 0, corrupt 0, gen 0
     Jan 30 09:28:14 MONSTERSERVERN kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 3, rd 4, flush 0, corrupt 0, gen 0
     Jan 30 09:28:14 MONSTERSERVERN kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 1, flush 0, corrupt 0, gen 0
     Jan 30 09:28:14 MONSTERSERVERN kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 3, rd 6, flush 0, corrupt 0, gen 0

     Do you think this means that the drive has gone bad, or are the errors caused by a problem with my RAM? I have non-ECC DDR4 and recently applied an XMP profile to run it at its native speed (3200MHz). Maybe that was too stressful for the RAM? Previously I ran it at 2133MHz for stability. My cache pool and another NVMe drive also use BTRFS, so I want to know whether there is a risk that they might fail as well. Is there a way to recover the data on the drive, or should I just format it? Thank you in advance! monsterservern-syslog-20240130-0635.zip monsterservern-diagnostics-20240130-0938.zip
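For anyone trying to separate "bad drive" from "bad RAM" on their own system, I believe these are the relevant checks (the mount point and device name here are from my setup, adjust as needed):

```
# Cumulative per-device error counters (wr/rd/flush/corrupt/gen),
# the same numbers that appear in the syslog lines above
btrfs device stats /mnt/cache

# Re-read and verify checksums across the pool; -B stays in the foreground until done
btrfs scrub start -B /mnt/cache

# NVMe SMART data; media errors here point at the drive rather than the RAM
smartctl -a /dev/nvme0
```

A memtest run would still be the only direct test of the RAM itself.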
  8. Big thank you for the quick reply. I will make a new config and cross my fingers that the brand new Seagate Exos drive does not fail! I have cloud backup… /Erik
  9. Hi! I wonder if there is a quick way to achieve the following:

     - I had an array of 3x4TB disks + 1x2TB disk, with an 8TB parity drive.
     - I bought 2x18TB disks with the idea of transforming the array into 18TB of parity and 18+8TB for data (to increase space from 14 to 26TB and reduce the number of drives from 5 to 3, which allows me to remove my HBA card and free up a PCIe slot).

     I have already replaced the parity drive with an 18TB drive and added the second 18TB drive to the array. I am now using the unBALANCE plugin to move all data on the array to the 18TB drive so that the three 4TB drives and the 2TB drive will be empty. When all data has been moved, do I need to remove the smaller drives one by one, or can I remove all 4 of them at once and create a new config? Or will removing more than one drive at a time increase the risk of data loss? All the drives that I am removing will be empty. Thank you! Erik
  10. Thanks a lot! Especially for the eBay seller tip. That is how I was planning to do it as well.
  11. Cool! Can you link the adapter you used? RAID0 like a boss :) Thanks! /Erik
  12. Hi all! I found these listings on eBay (several similar ones are there, all from China): https://www.ebay.com/itm/374341484174?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=OxLNCOGvR0q&sssrc=2047675&ssuid=6Vb57j3CRuK&widget_ver=artemis&media=COPY They sell Kioxia NVMe U.2 drives, 7.68TB, that are PCIe gen 4 capable as far as I understand. Great read and write speeds, and extremely durable. Price around 500 USD. I am hooked and would like to put 2 of them in my Unraid server, but I do not want to fall into some pitfall that I did not think about, since it is still a lot of money. So I have some questions for you pros!

     1. Has anyone bought these from the Chinese sellers? Are they scamming you, or are they for real? The price seems very (too?) good and they claim that the drives are brand new... At the same time, the sellers are top rated with good reviews.

     2. A PCIe adapter: I have an ASRock Taichi X570 motherboard with a Ryzen 3950X processor, so not a lot of PCIe lanes, but enough for 2 more drives. I think the board should be capable of PCIe bifurcation (hard to confirm online, though) and therefore wanted to buy something like this, with proper cables: https://www.ebay.com/itm/304837578258?mkcid=16&mkevt=1&mkrid=711-127632-2357-0&ssspo=u0r4ab5cr6-&sssrc=2047675&ssuid=6Vb57j3CRuK&widget_ver=artemis&media=COPY The motherboard has 3 slots that are x16 size, but they run at x8/x8/x4 unless you only populate one of them. I will have a GPU (GTX 1070 Ti) in the first x8 slot and a network card in the x4 slot. An x8 slot should still have enough bandwidth to run 2 drives at full speed, and maybe 4 drives at half speed if I buy more later on? Or how would that work with the bifurcation? The alternative is to buy a card with a PCIe switch on it, but those are more expensive and I have not found them in PCIe 4.0.

     3. What about airflow? Will the drives or the adapter become very hot? I have a normal non-server chassis with quiet fans...

     4. Software configuration: I was thinking of using ZFS and running them in the equivalent of RAID0 (see the sketch after this post). Backups will be made regularly to the hard drive array, which has parity. The drives have awesome endurance and are expensive, so I do not want to waste space by running them in RAID1.

     5. Networking: To utilise the drives I want to add a 10GbE NIC in the final x4 slot. I have CAT6a wired at home, so I want to use RJ45. Apart from the price (especially on switches... man, they are expensive!), are there disadvantages to using RJ45 over SFP+ for 10GbE?

     6. CPU and RAM requirements: I have 128GB of RAM and the CPU has high single-thread performance. I was planning not to use the ZFS RAM cache (ARC) much, since the drives are so fast anyway; does that make sense? I need the RAM for VMs, including the workstation/gaming VM that I use regularly with the GPU passed through.

     Looking forward to responses!
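Regarding point 4 above, what I have in mind is roughly this (pool and device names are placeholders and would need to match the actual U.2 drives):

```
# Two top-level vdevs with no redundancy = striped, i.e. the RAID0 equivalent
zpool create -o ashift=12 nvmepool /dev/nvme1n1 /dev/nvme2n1
zfs set compression=lz4 atime=off nvmepool
```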
  13. (In "Making her sweat") Sure will. Both carrot and stick needed to make my monster perform.
  14. (In "Making her sweat") Finally made my 3950X sweat today when rendering vector tiles for the entire world map.
  15. I have 2 identical VMs running because that way two people can game at the same time, for example me and my brother. We both connect to the server using Parsec. I have 2 GPUs, one for each VM. I am well aware of how vdisks work; I have several VMs running already. I would like to avoid copying the vdisks, because it would use twice the space and all updates and new game installs would need to be done on every disk.
  16. Thank you for the advice. I would like to try this. How can I implement iSCSI on Unraid? Will this allow me to share the same game data between several gaming VMs? Can the VMs be on at the same time?
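From what I have read, iSCSI targets on Linux are usually set up with targetcli (and I believe there is a community plugin for Unraid); a minimal sketch of exposing a file-backed LUN might look like this, with every name, path, and size made up:

```
# Create a file-backed backstore and export it as an iSCSI LUN
targetcli /backstores/fileio create name=games file_or_dev=/mnt/user/domains/games.img size=500G
targetcli /iscsi create iqn.2024-01.local.tower:games
targetcli /iscsi/iqn.2024-01.local.tower:games/tpg1/luns create /backstores/fileio/games
# Allow a specific initiator (the VM) to connect
targetcli /iscsi/iqn.2024-01.local.tower:games/tpg1/acls create iqn.2024-01.local.vm1:initiator
```

Though as far as I understand, two VMs writing to the same LUN at the same time would need a cluster-aware filesystem on top, so I am not sure this alone solves the sharing part.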
  17. Hi! I have 2 gaming VMs with two GPUs passed through (one for each). I would like to store all games on a single virtual disk and clone that disk for the other VM, so that I do not have to install all games twice and could thus save space. Preferably, one disk would be the "master disk", and when games are added or updated on it, it would be nice if these updates could easily propagate to the other VM. However, I am not sure how this would work with game saves etc. Both VMs already have a (virtual) OS drive, which I am thinking I should keep and where save data etc. could live, but I am not sure if game saves can be easily separated from the game files (I am using Steam and Blizzard games at the moment). Before, I used btrfs and `cp --reflink`, but the disks got corrupted after a while. Maybe btrfs is not stable enough for this? So I thought that ZFS could perhaps be used instead? However, I have never used ZFS before, so I would like some advice on how to set it up. I have a 1TB SATA SSD that I am planning to use for this, and I have installed the ZFS plugin for Unraid. Thank you in advance!
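To make the question concrete: what I imagine, based on the ZFS docs, is a snapshot-plus-clone setup where the second VM's disk shares blocks with the master until they diverge (pool and dataset names here are made up):

```
# Dataset holding the master games vdisk
zfs create tank/games
# Freeze a known-good state of the master, then clone it for VM 2;
# the clone is writable and costs almost no space until it diverges
zfs snapshot tank/games@gold
zfs clone tank/games@gold tank/games-vm2
```

After a game update on the master, I guess one would take a new snapshot and re-clone it for the other VM.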
  18. +1! However, I have a feeling that this feature is not possible for the Unraid devs to implement.
  19. No ideas? How do you all create your VMs? I am looking for expert tips...
  20. VM creation takes a lot of time...

     1. Using the GUI to set the number of cores and RAM, create vdisks, etc.
     2. Running through the OS installation.
     3. Installing necessary drivers and programs and adjusting settings.

     This is a hassle when you want to create a VM just to test something new... Is there a way to speed this up? Lately I have started using reflinks, since my VM disks are on btrfs pools (cp --reflink /path/to/old/VM/vdisk1.img /path/to/new/VM/vdisk1.img). This saves HDD space and gets rid of steps 2 and 3, but I instead need base images that I can clone from. Is there a better or more accepted way to quickly create VMs? Are there pre-installed vdisks available for common OSes like Ubuntu/CentOS/Windows 10? Cloud providers let you create VMs by clicking a button in the web GUI and waiting a minute or two, but I guess Unraid was not designed for this... Happy to get any suggestions!
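What I mean by cloning from base images is essentially this (the paths and image names are just examples, not a recommendation):

```
#!/bin/bash
# Clone a pre-installed base image into a new VM's folder.
# On btrfs, --reflink=always shares extents, so the copy is instant
# and uses no extra space until the clone diverges from the base.
BASE=/mnt/cache/domains/templates/win10-base.img
NEW=/mnt/cache/domains/$1
mkdir -p "$NEW"
cp --reflink=always "$BASE" "$NEW/vdisk1.img"
```

The new vdisk then just needs to be selected in the VM template.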
  21. Awesome work!! Got it to work with Radeon RX 580 passthrough as well. Nice to be able to explore the new Win11, though I cannot say I am very impressed with the new features so far. It looks mostly like a pretty skin on Windows 10, and Microsoft trying to steal more of my privacy.
  22. You saved the ISO in /mnt/user/isos/ but you are looking for it in /mnt/user/ISO…