Struck

Everything posted by Struck

  1. I will run an extended SMART test after reboot. I have now inserted a new SSD, which is supposed to replace the one I currently use. I will try memtest later, but I hadn't had any problems before I installed the SSD. The array seems to be unaffected by this problem.
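     For reference, a hedged sketch of how the extended test can be run from the Unraid terminal with smartmontools (/dev/sdX is a placeholder for the SSD's device name):
     smartctl -t long /dev/sdX                      # start the extended (long) self-test
     smartctl -a /dev/sdX | grep -i "self-test"     # check progress while it runs
     smartctl -l selftest /dev/sdX                  # read the self-test result log when it finishes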
  2. I used the instructions to restore the data, formatted the drive, and copied the data back afterwards. It worked for less than three days; now the issue is the same. The log is filled with entries like this:
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 4, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15759626240 csum 0x21417709 expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 5, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): parent transid verify failed on 8343076864 wanted 3304 found 3233
     Aug 31 04:29:37 ChiaTower kernel: BTRFS info (device sdm1): no csum found for inode 12305 start 15759699968
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15759699968 csum 0x108cc45f expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 6, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): parent transid verify failed on 8343076864 wanted 3304 found 3233
     Aug 31 04:29:37 ChiaTower kernel: BTRFS info (device sdm1): no csum found for inode 12305 start 15759708160
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15759708160 csum 0x7d0b155f expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 7, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): parent transid verify failed on 8343076864 wanted 3304 found 3233
     Aug 31 04:29:37 ChiaTower kernel: BTRFS info (device sdm1): no csum found for inode 12305 start 15759736832
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15759736832 csum 0xabb5631a expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 8, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): parent transid verify failed on 8343076864 wanted 3304 found 3233
     Aug 31 04:29:37 ChiaTower kernel: BTRFS info (device sdm1): no csum found for inode 12305 start 15760031744
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15760031744 csum 0xb842b40e expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 9, gen 0
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): parent transid verify failed on 8343076864 wanted 3304 found 3233
     Aug 31 04:29:37 ChiaTower kernel: BTRFS info (device sdm1): no csum found for inode 12305 start 15759298560
     Aug 31 04:29:37 ChiaTower kernel: BTRFS warning (device sdm1): csum failed root 5 ino 12305 off 15759298560 csum 0xff2de314 expected csum 0x00000000 mirror 1
     Aug 31 04:29:37 ChiaTower kernel: BTRFS error (device sdm1): bdev /dev/sdm1 errs: wr 0, rd 0, flush 0, corrupt 10, gen 0
     Aug 31 04:30:13 ChiaTower kernel: verify_parent_transid: 10 callbacks suppressed
     Aug 31 04:30:13 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:30:13 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:30:44 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:30:44 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:31:15 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:31:15 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:31:46 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:31:46 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:32:18 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:32:18 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:32:49 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:32:49 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:33:20 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:33:20 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     Aug 31 04:33:51 ChiaTower kernel: BTRFS error (device loop2): parent transid verify failed on 4708515840 wanted 203356 found 202764
     My guess is that if I try to reboot the machine, the cache drive partition cannot be mounted, even though I can access the cache drive fine before the reboot. Is the drive bad? I have several of these drives, so I can try replacing it. Would I have fewer issues if I ran more than one of them in the cache pool? chiatower-diagnostics-20210831-1815.zip
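     In case it helps anyone comparing notes, a minimal sketch of the commands for inspecting the pool before swapping the drive, assuming the pool is mounted at the usual /mnt/cache:
     btrfs device stats /mnt/cache       # per-device write/read/flush/corruption/generation counters
     btrfs scrub start -B /mnt/cache     # re-read everything and verify checksums; -B waits until it finishes
     btrfs scrub status /mnt/cache       # summary of corrected and uncorrectable errors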
  3. Diagnostics attached. The restart also triggered a parity sync; I don't know why, since the array seems to be unaffected by this problem. chiatower-diagnostics-20210829-1504.zip
  4. This morning the Docker service crashed with one container running; it had filled the log, so I tried to restart. Now it seems that my cache drive won't mount; the btrfs filesystem seems to be unmountable. The cache drive log says this. How do I fix this problem? The cache drive was added less than one week ago.
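     For anyone landing here later, a rough, hedged sketch of the read-only recovery options btrfs offers (device and paths are placeholders; the pinned FAQ instructions remain the authoritative procedure):
     mkdir -p /temp_mount
     mount -o ro,usebackuproot /dev/sdX1 /temp_mount     # try mounting read-only from an older tree root
     btrfs restore -v /dev/sdX1 /mnt/disk1/restore       # if mounting still fails, copy files off the unmountable filesystem to a target on the array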
  5. I bought two, and it seems to be working fine.
  6. The latest release does not support flax yet. I don't know when a new release will be made.
  7. It seems like you have created plots_dir yourself and not used the one in the template. Try looking for it again, and remember to check under "show more settings".
  8. Use colons instead of commas. Also take a look at your mainnet/config/config.yaml file to see what gets parsed into it.
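     For example, the directories should end up under plot_directories in that file; roughly like this (paths are examples):
     grep -A 3 "plot_directories" mainnet/config/config.yaml
     #   plot_directories:
     #   - /plots
     #   - /plots2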
  9. I have not yet updated to the newest test version, so I don't know what changed there. I am running the test version from four days ago.
  10. There you go. This should not be necessary; that is why it is a requirement to set these in the container information like above. It should get updated automatically. @Spazhead Maybe you can double-check the information here against your configuration.
  11. I have not tried colon-separating them as the description says, but I can tell that it works when comma-separating them: /plots, /plots2, etc.
  12. Yes and yes. Saw your edit. I saw that the key for blockchains was set to chia instead of blockchains; this fixed it for me for now.
  13. I just tried installing on another fresh Unraid server, but flax does not come up. I have set blockchains to chia, flax. The Unraid log shows this: not even a mention of flax.
  14. It has to be said that this only happens when it is the VM that does the transfer. If I do it from a physical machine accessing the network share, no problems occur. The VM is accessing the data from an SMB share, so it is not 'local' to the VM. The VM is pushing the data to a network-mounted share in Unraid, via the unassigned disks plugin. I run Windows 10 on both the VM and the physical machines.
  15. Hi. When I use a VM to transfer a file larger than 10 GB or so to another server on the physical network, the transfer fails with code 0x8007003B (unexpected network error) and I lose the connection to the VNC session. A ping from another physical machine also times out when this happens. I can see in the Unraid log that the ethernet interface is hanging and being restarted, which explains the behaviour. What can I do to fix this issue? hotbox-diagnostics-20210609-1427.zip
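     If it helps, the interface resets show up with something like the following from the Unraid terminal (eth0 is an assumption; substitute whichever interface the VM bridge uses):
     dmesg | grep -iE "eth0|link is (up|down)|reset"     # kernel messages around the NIC hang/restart
     ethtool -S eth0 | grep -iE "err|drop"               # NIC error and drop counters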
  16. How do I run a plotter only? I have the farmer and harvester elsewhere but need more plotters.
  17. Thank you, this did it for me. An observation I made: the chia version this Docker container is running is not 1.1.6; it is running a dev version, 1.1.7.dev0, which should not be used for production.
  18. I want to run a plotter/harvester, but I cannot get it to work. I have set the farmer node to 192.168.1.24, which is my full-node farmer, the port to 8447 as suggested earlier, and harvester_only to true. I also set the paths to /plot and /plotter. The Docker container simply fails and stops. The logs only show this:
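     To make the settings above concrete, they are roughly equivalent to the following docker run flags; the variable names (farmer_address, farmer_port, harvester_only) only mirror the template fields described above and are assumptions, not verified against the image:
     # host paths and <image> below are placeholders
     docker run -d \
       -e farmer_address=192.168.1.24 \
       -e farmer_port=8447 \
       -e harvester_only=true \
       -v /mnt/user/plots:/plot \
       -v /mnt/user/plotting:/plotter \
       <image>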
  19. Hi. Does Unraid support the Adaptec ASR-78165 in HBA mode? I am thinking of using it for HDDs but also for SSDs in the future. Is TRIM supported on the card in Unraid? Br
  20. After a reboot, this seems to be happening again, and again the container is unlinked from my Dropbox account. It is likely because I am using a Basic plan and have 7 devices connected to it (max 3 allowed), but that does not explain why this container has excessive writes when unlinked. Idea: some write amplification? A log file or anything else?
  21. Hi. I experienced a problem after starting my Dropbox container yesterday, and it ran overnight. This morning I saw a lot of writes coming from the container; it had been going on the whole night, ever since I started the container. The writes keep increasing all the time. Any idea what it is? I can say that during this time the Dropbox container was apparently not linked to my account, though I had set it up once before and it worked fine. I attached an image from my Grafana/Telegraf setup showing the writes to my cache drive. In this time interval there was no activity on my Dropbox account. Explanations:
     23:30 I manually turned on the Dropbox container.
     11:20 I shut down the Dropbox container because I saw all the writes going on.
     12:00 I restarted the container and linked it to my account.
     After 12:00 I have not seen any unexpected behaviour from the Dropbox container. All of this is on Unraid 6.9.1. Any idea what happened? It causes about 500 GB of wear on my new NVMe SSD in about 12 hours.
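     If it happens again, per-container block I/O can be watched live to confirm which container is doing the writing:
     docker stats --format "table {{.Name}}\t{{.BlockIO}}\t{{.CPUPerc}}"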
  22. Hi. I am using two Ubuntu VMs on my Unraid server, but sometimes one of them gets stuck: VNC shows a static image, and all communication to the VM stops (the SMB share inside the VM stops responding, and ping reports the host as unreachable). This happened again today while I was working on the system. The VM was under heavy load, and the Unraid dashboard showed that the VM continued to be under heavy load after it seemed to be locked up. I have had this problem during both heavy and light loads, in both my VMs, which are specced very differently. Although most of the lock-ups have occurred under heavy load, they have all happened while I was using VNC to view them. I hope someone can help improve the reliability of the VMs so they don't lock up like this. The diagnostics are attached below. The VM that is locked up is the one named "Ubuntu-02"; the other VM, named "ubuntu", works as it normally does. On another note, maybe unrelated, I have seen some link-related events in the Unraid logs but have no idea what they refer to. hotbox-diagnostics-20201102-1445.zip
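     For the record, a hedged sketch of the host-side state that could be captured next time it locks up, using libvirt's CLI (which the Unraid VM manager is built on):
     virsh list --all                                  # confirm the domain still shows as "running"
     virsh domstats Ubuntu-02                          # CPU, balloon and block stats from the hypervisor side
     virsh screenshot Ubuntu-02 /tmp/ubuntu-02.ppm     # grab the console framebuffer without going through VNC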
  23. It could have been that. I can't remember.
  24. As expected, a restart solved the problem. But I had problems stopping the array (which is normally not a problem), so I had to power down from the terminal and press the power button; still a safe shutdown, though. The problem seemed to be unmounting disk shares, of which I have none, I think.