eribob

Members
  • Posts: 79
  • Joined
  • Last visited

eribob's Achievements

Rookie (2/14)

Reputation: 8

  1. No ideas? How do you all create your VMs? I am looking for expert tips...
  2. VM creation takes a lot of time:
     1. Using the GUI to set the number of cores and RAM, create vdisks, etc.
     2. Running through the OS installation
     3. Installing the necessary drivers and programs and adjusting settings
     This is a hassle when you want to create a VM just to test something new. Is there a way to speed this up? Lately I have started using reflinks, since my VM disks are on btrfs pools (cp --reflink /path/to/old/VM/vdisk1.img /path/to/new/VM/vdisk1.img). This saves HDD space and removes the need for steps 2 and 3, but it means I need to keep base images that I can clone from (a minimal sketch of this cloning approach is included after the post list below). Is there a better or more accepted way to quickly create VMs? Are there pre-installed vdisks available for common OSes like Ubuntu/CentOS/Windows 10? Cloud providers let you create a VM by clicking a button in the web GUI and waiting a minute or two, but I guess Unraid was not designed for this... Happy to get any suggestions!
  3. Awesome work!! Got it to work with Radeon RX 580 passthrough as well. Nice to be able to explore the new Win11, although I cannot say that I am very impressed with the new features so far. It looks mostly like a pretty skin on Windows 10 and Microsoft trying to steal more of my privacy.
  4. You saved the ISO in /mmt/user/isos/ but you are looking for it in /mnt/user/ISO…
  5. I agree, this is a cool idea to try out. Both Proxmox and Unraid use KVM, so it should be possible?
  6. Same error for me today. Could not even download diagnostics until I restarted nginx. monsterservern-diagnostics-20210707-1012.zip
  7. Wow, this is awesome! Great work, crenn! I had lost all hope of solving this problem… I have already upgraded my system and sold the Huananzhi board, but I will save this post in case I decide to build a second system in the future. /Erik
  8. The rebuild worked fine. Let's hope that the CRC errors do not rise further. Thank you again for the quick support.
  9. Thank you. I am rebuilding the drive now. Fingers crossed.
  10. Since replacing the SATA cable did not help, is there anything else in the SMART report that indicates why the drive failed? The drive is a Seagate IronWolf 4TB.
      #    ATTRIBUTE NAME             FLAG    VALUE  WORST  THRESHOLD  TYPE      UPDATED  FAILED  RAW VALUE
      1    Raw read error rate        0x000f  077    064    044        Pre-fail  Always   Never   55465484
      3    Spin up time               0x0003  095    093    000        Pre-fail  Always   Never   0
      4    Start stop count           0x0032  100    100    020        Old age   Always   Never   132
      5    Reallocated sector count   0x0033  100    100    010        Pre-fail  Always   Never   0
      7    Seek error rate            0x000f  089    060    045        Pre-fail  Always   Never   803115697
      9    Power on hours             0x0032  088    088    000        Old age   Always   Never   11183 (164 86 0)
      10   Spin retry count           0x0013  100    100    097        Pre-fail  Always   Never   0
      12   Power cycle count          0x0032  100    100    020        Old age   Always   Never   132
      184  End-to-end error           0x0032  100    100    099        Old age   Always   Never   0
      187  Reported uncorrect         0x0032  100    100    000        Old age   Always   Never   0
      188  Command timeout            0x0032  100    099    000        Old age   Always   Never   1
      189  High fly writes            0x003a  100    100    000        Old age   Always   Never   0
      190  Airflow temperature cel    0x0022  071    062    040        Old age   Always   Never   29 (min/max 29/30)
      191  G-sense error rate         0x0032  100    100    000        Old age   Always   Never   0
      192  Power-off retract count    0x0032  100    100    000        Old age   Always   Never   3
      193  Load cycle count           0x0032  071    071    000        Old age   Always   Never   59695
      194  Temperature celsius        0x0022  029    040    000        Old age   Always   Never   29 (0 17 0 0 0)
      197  Current pending sector     0x0012  100    100    000        Old age   Always   Never   0
      198  Offline uncorrectable      0x0010  100    100    000        Old age   Offline  Never   0
      199  UDMA CRC error count       0x003e  200    199    000        Old age   Always   Never   42
      240  Head flying hours          0x0000  100    253    000        Old age   Offline  Never   3737 (41 241 0)
      241  Total lbas written         0x0000  100    253    000        Old age   Offline  Never   30794778888
      242  Total lbas read            0x0000  100    253    000        Old age   Offline  Never   237940858011
      /Erik
  11. Thank you for the suggestion. Unfortunately, replacing the SATA cable did not help (the drive is connected to an HBA card, so I switched it to another SATA connector on that card), and the drive is still disabled by Unraid. All other drives connected to that HBA card seem to be working normally. The UDMA CRC error count has increased over the last couple of months from about 5 to 42 now (see the smartctl sketch after this list for one way to track this attribute). /Erik
  12. Hi! One of my drives failed today. I have attached the diagnostics. The SMART error I think is the culprit is "UDMA CRC Error Count". It does not always seem to be that serious though; do you have any thoughts on it? I have more space than I need on the array and would like to remove the drive from the array but keep the data that was on it. The drive is 4TB and I have more than 4TB free on the array. The guide in the Unraid wiki is a bit confusing, so I would be very happy if someone could walk me through how to achieve this. Thank you! Erik monsterservern-diagnostics-20210419-1028.zip
  13. Updated yesterday. So far no issues at all.
  14. Same problem here. System log is suddenly full of these messages:
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [alert] 7525#7525: worker process 21758 exited on signal 11
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [crit] 21883#21883: ngx_slab_alloc() failed: no memory
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21883#21883: shpool alloc failed
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21883#21883: nchan: Out of shared memory while allocating channel /var. Increase nchan_max_reserved_memory.
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [alert] 21883#21883: *483824 header already sent while keepalive, client: 192.168.2.165, server: 0.0.0.0:80
      Nov 29 10:05:45 Monsterservern kernel: nginx[21883]: segfault at 0 ip 0000000000000000 sp 00007fff9d8f5f58 error 14 in nginx[400000+22000]
      Nov 29 10:05:45 Monsterservern kernel: Code: Bad RIP value.
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [alert] 7525#7525: worker process 21883 exited on signal 11
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [crit] 21884#21884: ngx_slab_alloc() failed: no memory
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: shpool alloc failed
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: nchan: Out of shared memory while allocating channel /disks. Increase nchan_max_reserved_memory.
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: *483826 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [crit] 21884#21884: ngx_slab_alloc() failed: no memory
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: shpool alloc failed
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: *483827 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [crit] 21884#21884: ngx_slab_alloc() failed: no memory
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: shpool alloc failed
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [error] 21884#21884: nchan: Out of shared memory while allocating channel /dockerload. Increase nchan_max_reserved_memory.
      Nov 29 10:05:45 Monsterservern nginx: 2020/11/29 10:05:45 [alert] 21884#21884: *483828 header already sent while keepalive, client: 192.168.2.165, server: 0.0.0.0:80
      Nov 29 10:05:45 Monsterservern kernel: nginx[21884]: segfault at 0 ip 0000000000000000 sp 00007fff9d8f5f58 error 14 in nginx[400000+22000]
      On the dashboard the memory for the log is 100% full. monsterservern-diagnostics-20201129-1235.zip
  15. Update! I ran a Memtest and after about 15 minutes I got a lot of errors. So I removed my two oldest RAM sticks and re-ran the test for about 25 minutes without errors. I know that is a bit short (not even one pass, hehe), but I figured that since I got the errors so soon the first time, I would get them again if the remaining RAM sticks were the faulty ones. So it was probably a memory issue? I just hope that I will not get any more corruption in my BTRFS now... fingers crossed. I also ran "btrfs check --readonly /dev/nvme0n1p1" and "btrfs check --readonly /dev/nvme1n1p1" (the two disks that are part of the BTRFS pool in question) and got no errors (see the verification sketch after this list). Can I then assume that my BTRFS filesystem is intact for that pool? BIG thanks! /Erik
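
A minimal sketch of the reflink-based cloning mentioned in post 2, assuming a hypothetical "golden" base image and a per-VM directory on the same btrfs pool (the paths and names are placeholders; adjust them to your own shares):

    # Hypothetical paths: a clean, pre-installed base image and a new VM's vdisk.
    # Both must sit on the same btrfs filesystem, since reflinks cannot cross filesystems.
    BASE=/mnt/cache/domains/base-images/ubuntu-base.img
    NEW=/mnt/cache/domains/test-vm/vdisk1.img

    mkdir -p "$(dirname "$NEW")"

    # --reflink=always shares extents instead of copying data, and it fails loudly
    # if the filesystem cannot reflink rather than silently doing a full copy.
    cp --reflink=always "$BASE" "$NEW"

The clone is writable and independent: changes to the new vdisk do not touch the base image, and only modified blocks consume additional space.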
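
For the rising UDMA CRC error count discussed in posts 10-12, one way to keep an eye on attribute 199 is to read it directly with smartctl; /dev/sdX below is a placeholder for the affected drive:

    # Print the SMART attribute table and pick out attribute 199 (UDMA CRC errors).
    # The raw value never resets, so what matters is whether it keeps climbing
    # after the cabling between the drive and the HBA has been reseated or replaced.
    smartctl -A /dev/sdX | grep -i "UDMA_CRC_Error_Count"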
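
The pool verification from post 15, sketched as commands; the mount point /mnt/cache is an assumption, and btrfs check should only be run against an unmounted filesystem:

    # Read-only metadata check of each pool member; nothing is modified.
    # Run these while the pool is unmounted (e.g. with the array stopped).
    btrfs check --readonly /dev/nvme0n1p1
    btrfs check --readonly /dev/nvme1n1p1

    # With the pool mounted again, a scrub re-reads all data and metadata and
    # verifies checksums, so it also covers file contents, not just metadata.
    btrfs scrub start -B /mnt/cache
    btrfs scrub status /mnt/cache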