eribob

Members
  • Posts: 86

Everything posted by eribob

  1. I have 2 identical VMs running so that two people can game at the same time, for example my brother and me. We both connect to the server using Parsec. I have 2 GPUs, one for each VM. I am well aware of how vdisks work; I have several VMs running already. I would like to avoid copying the vdisks because that would use twice the space, and all updates and new game installs would need to be done on every disk.
  2. Thank you for the advice. I would like to try this. How can I implement iSCSI on Unraid? Will this allow me to share the same game data between several gaming VMs? Can the VMs be on at the same time?
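     For reference, a rough sketch of what an iSCSI target for a shared game disk could look like, assuming the Linux targetcli tool is available on the server (it is not part of stock Unraid, and the names, path and size below are made up):

       targetcli /backstores/fileio create games_lun /mnt/user/games/games.img 500G
       targetcli /iscsi create iqn.2021-01.local.tower:games
       targetcli /iscsi/iqn.2021-01.local.tower:games/tpg1/luns create /backstores/fileio/games_lun

     An ACL would still be needed for each VM's initiator. Note that an ordinary filesystem (e.g. NTFS) on an iSCSI LUN cannot safely be mounted read-write by two VMs at once; simultaneous use would require a cluster-aware filesystem or read-only mounts.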
  3. Hi! I have 2 gaming VMs with two GPUs passed through (one for each). I would like to store all games on a single virtual disk and clone that disk for the other VM, so that I do not have to install every game twice and can save space. Preferably, one disk would be the "master disk", and when games are added or updated on it, those changes could easily be propagated to the other VM. However, I am not sure how this would work with game saves etc. Both VMs already have a (virtual) OS drive that I am thinking of keeping, and where save data etc. could live, but I am not sure whether game saves can be cleanly separated from the game files (I am using Steam and Blizzard games at the moment). Previously I used btrfs and reflink copies (cp --reflink), but the disks got corrupted after a while. Maybe btrfs is not stable enough for this? So I thought ZFS could perhaps be used instead. However, I have never used ZFS before, so I would like some advice on how to set it up. I have a 1 TB SATA SSD that I am planning to use for this, and I have installed the ZFS plugin for Unraid. Thank you in advance!
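     For later readers: the "master disk" idea maps naturally onto ZFS snapshots and clones. A minimal sketch, assuming the pool lives on the 1 TB SSD (the device and dataset names here are made up):

       zpool create gamepool /dev/sdX                      # the 1 TB SATA SSD
       zfs create -V 800G gamepool/games                   # zvol used as the master game disk
       zfs snapshot gamepool/games@clean
       zfs clone gamepool/games@clean gamepool/games-vm2   # near-zero-space copy for the second VM

     The clone appears as /dev/zvol/gamepool/games-vm2 and can be attached to the second VM. After updating games on the master, a fresh snapshot and a new clone propagate the changes; since the old clone has to be replaced, per-VM data such as saves should stay on each VM's OS drive.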
  4. +1! However, I have a feeling that this feature is not possible for the Unraid devs to implement.
  5. No ideas? How do you all create your VMs? I am looking for expert tips...
  6. VM creation takes a lot of time...
     1. Using the GUI to set the number of cores and RAM, create vdisks, etc.
     2. Running through the OS installation.
     3. Installing necessary drivers and programs and adjusting settings.
     This is a hassle when you want to create a VM just to test something new... Is there a way to speed this up? Lately I have started using reflinks, since my VM disks are on btrfs pools (cp --reflink /path/to/old/VM/vdisk1.img /path/to/new/VM/vdisk1.img). This saves HDD space and removes the need for steps 2 and 3, but I instead need to keep base images that I can clone from. Is there a better or more accepted way to quickly create VMs? Are there pre-installed vdisks available for common OSes like Ubuntu/CentOS/Windows 10? Cloud providers let you create a VM by clicking a button in the web GUI and waiting a minute or two, but I guess Unraid was not designed for this... Happy to get any suggestions!
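     One possible shortcut, assuming the libvirt client tools are present (virt-clone ships with virt-manager/virt-install and is not part of stock Unraid; the VM names below are made up): keep one fully set-up base VM per OS and clone it on demand, which copies both the libvirt definition and the disks:

       virt-clone --original win10-base --name win10-test --auto-clone

     For Linux guests there are also ready-made cloud vdisks (e.g. the images published at cloud-images.ubuntu.com) that boot pre-installed, which removes steps 2 and 3 entirely.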
  7. Awesome work!! Got it to work with Radeon RX 580 passthrough as well. Nice to be able to explore the new Win11, though I cannot say I am very impressed with the new features so far. It looks mostly like a pretty skin on Windows 10, with Microsoft trying to steal more of my privacy.
  8. You saved the ISO in /mnt/user/isos/ but you look for it in /mnt/user/ISO…
  9. I agree, this is a cool idea to try out. Both Proxmox and Unraid use KVM, so it should be possible?
  10. Same error for me today. I could not even download diagnostics until I restarted nginx. monsterservern-diagnostics-20210707-1012.zip
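      For anyone hitting the same thing: the web GUI's nginx can be restarted from a console or SSH session without rebooting. On Unraid (Slackware-based) this should be the rc script, assuming it exists on your version:

        /etc/rc.d/rc.nginx restart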
  11. Wow, this is awesome! Great work crenn! I had lost all hope of solving this problem… I have upgraded my system and already sold the Huananzhi board, but I will save this post in case I decide to build a second system in the future. /Erik
  12. The rebuild worked fine. Let's hope the CRC errors do not rise further. Thank you again for the quick support.
  13. Thank you. I am rebuilding the drive now. Fingers crossed.
  14. Since replacing the SATA cable did not help, is there anything else in the SMART report that indicates why the drive failed? The drive is a Seagate IronWolf 4TB.
      #    ATTRIBUTE NAME            FLAG    VALUE  WORST  THRESHOLD  TYPE      UPDATED  FAILED  RAW VALUE
      1    Raw read error rate       0x000f  077    064    044        Pre-fail  Always   Never   55465484
      3    Spin up time              0x0003  095    093    000        Pre-fail  Always   Never   0
      4    Start stop count          0x0032  100    100    020        Old age   Always   Never   132
      5    Reallocated sector count  0x0033  100    100    010        Pre-fail  Always   Never   0
      7    Seek error rate           0x000f  089    060    045        Pre-fail  Always   Never   803115697
      9    Power on hours            0x0032  088    088    000        Old age   Always   Never   11183 (164 86 0)
      10   Spin retry count          0x0013  100    100    097        Pre-fail  Always   Never   0
      12   Power cycle count         0x0032  100    100    020        Old age   Always   Never   132
      184  End-to-end error          0x0032  100    100    099        Old age   Always   Never   0
      187  Reported uncorrect        0x0032  100    100    000        Old age   Always   Never   0
      188  Command timeout           0x0032  100    099    000        Old age   Always   Never   1
      189  High fly writes           0x003a  100    100    000        Old age   Always   Never   0
      190  Airflow temperature cel   0x0022  071    062    040        Old age   Always   Never   29 (min/max 29/30)
      191  G-sense error rate        0x0032  100    100    000        Old age   Always   Never   0
      192  Power-off retract count   0x0032  100    100    000        Old age   Always   Never   3
      193  Load cycle count          0x0032  071    071    000        Old age   Always   Never   59695
      194  Temperature celsius       0x0022  029    040    000        Old age   Always   Never   29 (0 17 0 0 0)
      197  Current pending sector    0x0012  100    100    000        Old age   Always   Never   0
      198  Offline uncorrectable     0x0010  100    100    000        Old age   Offline  Never   0
      199  UDMA CRC error count      0x003e  200    199    000        Old age   Always   Never   42
      240  Head flying hours         0x0000  100    253    000        Old age   Offline  Never   3737 (41 241 0)
      241  Total lbas written        0x0000  100    253    000        Old age   Offline  Never   30794778888
      242  Total lbas read           0x0000  100    253    000        Old age   Offline  Never   237940858011
      /Erik
  15. Thank you for the suggestion. Unfortunately, replacing the SATA cable did not help (the drive is connected to an HBA card, so I switched it to another SATA connector on that card). The drive is still disabled by Unraid. All other drives connected to that HBA card seem to be working normally. The UDMA CRC error count has increased over the last couple of months, from about 5 to 42 now. /Erik
  16. Hi! One of my drives failed today. I have attached the diagnostics. The SMART error I think is the culprit is "UDMA CRC Error Count". It does not always seem to be so serious, though; do you have any thoughts on it? I have more space than I need on the array and would like to remove the drive from the array but keep the data that was on it. The drive is 4TB and I have more than 4TB free on the array. The guide in the Unraid wiki is a bit confusing; I would be very happy if someone could walk me through how to achieve this. Thank you! Erik monsterservern-diagnostics-20210419-1028.zip
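      For later readers, the rough shape of the "keep the data, shrink the array" procedure: while the failed disk is emulated by parity, its contents can be copied disk-to-disk onto the remaining drives, after which the array is redefined without it (Tools > New Config in the web GUI). A sketch with made-up disk numbers:

        rsync -avX /mnt/disk3/ /mnt/disk5/   # disk3 = emulated failed disk, disk5 = a disk with >4TB free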
  17. Updated yesterday. So far no issues at all.
  18. Same problem here. System log is suddenly full of these messages:
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [alert] 7525#7525: worker process 21758 exited on signal 11
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [crit] 21883#21883: ngx_slab_alloc() failed: no memory
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21883#21883: shpool alloc failed
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21883#21883: nchan: Out of shared memory while allocating channel /var. Increase nchan_max_reserved_memory.
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [alert] 21883#21883: *483824 header already sent while keepalive, client: 192.168.2.165, server: 0.0.0.0:80
      Nov 29 10:05:45 Monsterservern kernel: nginx[21883]: segfault at 0 ip 0000000000000000 sp 00007fff9d8f5f58 error 14 in nginx[400000+22000]
      Nov 29 10:05:45 Monsterservern kernel: Code: Bad RIP value.
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [alert] 7525#7525: worker process 21883 exited on signal 11
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [crit] 21884#21884: ngx_slab_alloc() failed: no memory
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: shpool alloc failed
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: nchan: Out of shared memory while allocating channel /disks. Increase nchan_max_reserved_memory.
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: *483826 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [crit] 21884#21884: ngx_slab_alloc() failed: no memory
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: shpool alloc failed
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: *483827 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [crit] 21884#21884: ngx_slab_alloc() failed: no memory
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: shpool alloc failed
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: nchan: Out of shared memory while allocating channel /dockerload. Increase nchan_max_reserved_memory.
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [alert] 21884#21884: *483828 header already sent while keepalive, client: 192.168.2.165, server: 0.0.0.0:80
      Nov 29 10:05:45 Monsterservern kernel: nginx[21884]: segfault at 0 ip 0000000000000000 sp 00007fff9d8f5f58 error 14 in nginx[400000+22000]
      On the dashboard the memory for the log is 100% full. monsterservern-diagnostics-20201129-1235.zip
  19. Update! I ran Memtest and after about 15 minutes I got a lot of errors. So I removed my two oldest RAM sticks and re-ran the test for about 25 minutes without errors. I know that is a bit short (not even one pass, hehe), but I figured that since I got the errors so soon the first time, I would get them again if the remaining RAM sticks were the faulty ones. So it was probably a memory issue? I just hope I will not get any more corruption in my btrfs now... fingers crossed. I also ran "btrfs check --readonly /dev/nvme0n1p1" and "btrfs check --readonly /dev/nvme1n1p1" (the two disks that are part of the btrfs pool in question) and got no errors. Can I then assume that the btrfs filesystem on that pool is intact? BIG thanks! /Erik
  20. Hi, the solution worked for a couple of days, but just now one of my btrfs pools went into read-only mode again. I changed my RAM to 2133MHz (the "auto" setting in the BIOS). The system log says the following:
      Nov 22 19:44:56 Monsterservern kernel: BTRFS error (device nvme0n1p1): block=1141445836800 write time tree block corruption detected
      Nov 22 19:44:56 Monsterservern kernel: BTRFS: error (device nvme0n1p1) in btrfs_commit_transaction:2323: errno=-5 IO failure (Error while writing out transaction)
      Nov 22 19:44:56 Monsterservern kernel: BTRFS info (device nvme0n1p1): forced readonly
      Nov 22 19:44:56 Monsterservern kernel: BTRFS warning (device nvme0n1p1): Skipping commit of aborted transaction.
      Nov 22 19:44:56 Monsterservern kernel: BTRFS: error (device nvme0n1p1) in cleanup_transaction:1894: errno=-5 IO failure
      Diagnostics are attached. What is the problem? It is really annoying now... /Erik monsterservern-diagnostics-20201122-1953.zip
  21. Thank you, it worked nicely. Too bad I did not know about the risk of running the RAM at higher speeds. The RAM sticks themselves were rated at 3200MHz, so I simply thought it would work.
  22. Wow, that is great help! I was wondering why these issues kept building up. So since I run 4 RAM sticks, I should limit them to 2667? I guess both pools need to be reformatted then. Is it worth trying a "btrfs check --repair" first? It seems that it can corrupt your pool, but I have nothing to lose if I am about to wipe it anyway? In that case, can you give me an example of how to run such a command? Also, what is the easiest way to format the cache pool? Thanks! Erik
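      For reference, such a repair command would look like this, using the device name from the earlier posts. The filesystem must be unmounted first (pool/array stopped), and --repair is widely treated as a last resort on btrfs, so it is only reasonable on a pool that is about to be wiped anyway:

        btrfs check --repair /dev/nvme0n1p1   # device name from this thread; run only on an unmounted pool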