WEHA

Everything posted by WEHA

  1. I have 2 Linux VMs (Ubuntu 18 & CentOS 8), 2 FreeBSD and 4 Windows. Only the Linux VMs lose network over time. Restarting the network on the VM itself via VNC makes it work again (at least on the Ubuntu one); after a VM reboot it also works again. Not sure what to provide as extra information, let me know what you want. There is nothing special I can see in the syslog nor the VM log; I still have to check the Linux logs themselves. I've changed the network adapter to virtio-net to see if that makes a difference (the commands I use to restart networking in the guest are sketched after this list). Can anyone tell me what the difference is between virtio & virtio-net?
  2. I'm not sure if this is a problem in the GUI or a trigger-happy antivirus, but I thought I'd mention it:
     "Bitdefender Endpoint Security Tools blocked this page. The page you are trying to access contains a malicious attack attempt. Detected attack: Exploit.CommandInjection.Gen.123. Access from your browser has been blocked."
  3. So I found some threads going back years with the same endpoint error. After a reboot, all shares returned to normal. Still trying to recover from the "share is deleted" message. I hope you can find the culprit with my logs. I found this:
     Nov 8 18:04:32 Tower emhttpd: Starting services...
     Nov 8 18:04:32 Tower move: move: file ....
     Nov 8 18:04:32 Tower kernel: shfs[94877]: segfault at 0 ip 000000000040546a sp 000014710c7c7840 error 4 in shfs[403000+d000]
     Nov 8 18:04:32 Tower kernel: Code: 48 8b 45 f0 c9 c3 55 48 89 e5 48 83 ec 20 48 89 7d e8 48 89 75 e0 c7 45 fc 00 00 00 00 8b 45 fc 48 63 d0 48 8b 45 e0 48 01 d0 <0f> b6 00 3c 2f 74 43 8b 0>
     Nov 8 18:04:32 Tower move: move: create_parent: /mnt/disk6 error: Software caused connection abort
  4. I was moving data from the array to the cache with the mover. I noticed that the share was going to be too big to fit, so I changed the share setting to "no", and when I clicked apply it said "share sync has been deleted". After having a small heart attack I checked the disks & cache folder for the share; it was still there. However, /mnt/user now gives this error:
     -bash: cd: user: Transport endpoint is not connected
     Clicking "shares" in the GUI only shows an entry in disk shares called cache. I assume I can just stop and start the array to get everything working again? My VMs are still running... tower-diagnostics-20201108-1812.zip
  5. When making some changes it's sometimes necessary or preferable to start the Docker / VM manager without auto-start enabled. That way you can just start whichever Docker container or VM you want. So what I'm asking is: add a third option to the enable Docker / enable VMs dropdown, like "Yes, no auto start".
  6. So I read about the "bug" that causes excessive writes to SSDs, especially EVOs... Mine have a 1200 TBW rating and are at around 1500 TBW now (in 2 years' time; a sketch of how I read the write counters is after this list). In the new beta there is a solution, but there are also issues. My thought is: can I upgrade to the new beta, recreate the cache (on new drives) with the new partition layout, and revert back to 6.8.3 if the need arises?
  7. I could use this feature as well, just for a machine that runs VMs and no shares.
  8. I always found the array health report hard to read, as all the disks appear on one line. One day I logged in via my webmail only to see it with nice line breaks. It seems that Outlook cannot interpret the line breaks present in the disk list (all the rest is fine). Maybe a missing \r? (See the line-ending sketch after this list.)
  9. There are arguments for both, but I still think the share list gives a false sense of security. I should never have started the array with the "bad" one still connected, as this corrupted everything on the domains & system shares. It's disappointing that after n years it still seems too difficult to check btrfs stats and to show the NOCOW / COW status more clearly. For something that is a NAS at its core, that seems quite relevant and important.
  10. True, but that is negated when we're talking about SSDs, is it not? I too prefer integrity and was not aware my data was not safe.
  11. I never knew about COW & NOCOW, because the shares are created by default and all the rest are set to auto by default... This should be marked more prominently on the shares list, like the warning sign. That warning sign showing a green light tells me the share is safe, but in reality it's not... I just spent 16 hours overnight with no sleep to get everything working again because of a silly parameter... Very disappointing, thank you for your quick responses though.
  12. Sure, but as you also mentioned, the hope was to use btrfs stats in the near future... that was 2 years ago. I do have the notification from the linked thread, only this error count occurred after a reboot (it was 0 before) and Unraid just happily started the array... So the notification was useless in this case.
  13. Scrub has finished, no uncorrectable errors. I have started to move data off the cache; the VMs that were started before are still having problems, so I suppose they were "permanently" corrupted when they were started before the scrub. Didn't try Docker yet, I'm assuming this will be the same. Is there a reason these errors are not taken into account when starting the array? I'm guessing I will have to spend hours getting my VMs fixed... not to mention possible other data that has been corrupted while there was still a good drive, from what I can tell :(
  14. Ok, balance canceled and now running the scrub. It now says that it's 4TB instead of 2TB (it's 2x 2TB NVMe). The running status is counting up corruption & generation errors on the bad one; is this a problem? They were 0 just before the scrub.
  15. So should I just cancel it then? Because it will take a while to finish...
  16. Run the scrub while the balance is running?
  17. So I started the array with the "bad" NVMe unassigned, but it seems that it is still using it? Status shows both devices, and the syslog contains many errors about it again. On the main page it still shows the "bad" one as unassigned. This scares me... When I click the cache disk in the GUI it says a balance is running? :s
  18. In the meantime I had disconnected the NVMe that generated the errors, to recover from the other NVMe disk; I did not start the array yet. Then I saw your message and reconnected it, but now Unraid has started and does not recognize the reconnected disk?? Am I f'ed now? EDIT: just to be clear, the device is listed but blue, so when selecting it it says "data will be overwritten".
  19. I shut down my Unraid server to replace the RAM (64GB to 128GB). Once restarted I enabled Docker and virtual machines... The Docker page says the service failed to start, and the VMs are barely starting, claiming corruption everywhere. Checked the syslog and saw these messages everywhere:
     BTRFS error (device nvme0n1p1): parent transid verify failed on 1326431879168 wanted 214008171 found 213104704
     BTRFS error (device nvme0n1p1): parent transid verify failed on 612809326592 wanted 213820848 found 213103567
     dev stats:
     [/dev/nvme0n1p1].write_io_errs    291117377
     [/dev/nvme0n1p1].read_io_errs     382285749
     [/dev/nvme0n1p1].flush_io_errs    2039859
     [/dev/nvme0n1p1].corruption_errs  0
     [/dev/nvme0n1p1].generation_errs  0
     [/dev/nvme1n1p1].write_io_errs    0
     [/dev/nvme1n1p1].read_io_errs     51
     [/dev/nvme1n1p1].flush_io_errs    0
     [/dev/nvme1n1p1].corruption_errs  0
     [/dev/nvme1n1p1].generation_errs  0
     fi usage:
     Overall:
         Device size:         3.64TiB
         Device allocated:    3.46TiB
         Device unallocated:  177.91GiB
         Device missing:      0.00B
         Used:                3.00TiB
         Free (estimated):    326.36GiB (min: 326.36GiB)
         Data ratio:          2.00
         Metadata ratio:      2.00
         Global reserve:      512.00MiB (used: 0.00B)
                         Data     Metadata  System
     Id Path            RAID1    RAID1     RAID1     Unallocated
     -- --------------  -------  --------  --------  -----------
      1 /dev/nvme0n1p1  1.73TiB  4.00GiB   64.00MiB  88.95GiB
      2 /dev/nvme1n1p1  1.73TiB  4.00GiB   64.00MiB  88.95GiB
     -- --------------  -------  --------  --------  -----------
        Total           1.73TiB  4.00GiB   64.00MiB  177.91GiB
        Used            1.50TiB  1.62GiB   272.00KiB
     What would be the best course of action to make sure my data is not gone? I can still access shares, but I'm concerned about that state. How can I identify the problem: is it the NVMe disk or the motherboard? The NVMe disk is in a motherboard slot, so no add-on card. Stopping and starting the array does not make the error count go up, though. (The btrfs commands involved are sketched after this list.)
  20. When I try to run this container it consumes almost all of my CPU, even though it's pinned to 4 threads. Unusable for me. FYI: my CPU has 48 threads, so it's not that it's a low-grade CPU... (See the pinning sketch after this list.)
  21. Nice interface! Would it be possible to add a listen port configuration, so I can define my own listening port for e.g. Unifi / Emby / etc.? I now add a host and manually change the config file (roughly as sketched after this list), but it would be nice if this was included. Thanks!
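
Sketch for item 1 — a minimal guess at what restores the link from inside the Ubuntu 18.04 guest, assuming a netplan/systemd-networkd setup and a hypothetical interface name ens3 (adjust to the guest):

     ip -br link                                  # check the state of the virtio NIC
     sudo netplan apply                           # re-apply the network config on netplan-based installs
     sudo systemctl restart systemd-networkd      # or restart the network service directly
     dmesg | grep -i virtio                       # look for virtio_net messages around the time the link drops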
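
Sketch for item 6 — how the write totals can be read, assuming SATA EVO drives that report Total_LBAs_Written with 512-byte sectors (/dev/sdX and /dev/nvme0n1 are placeholders):

     smartctl -A /dev/sdX | grep -i total_lbas_written        # SATA: TBW ≈ Total_LBAs_Written * 512 / 1e12
     smartctl -A /dev/nvme0n1 | grep -i 'data units written'  # NVMe: units of 512,000 bytes, so TBW ≈ value * 512000 / 1e12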
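
Sketch for item 8 — what the missing \r means: mail clients such as Outlook expect CRLF line endings, and a bare LF can be collapsed. A hypothetical one-liner that converts a report body before it is mailed:

     printf 'disk1: OK\ndisk2: OK\n' | sed 's/$/\r/' > body_crlf.txt   # append \r so each line ends in CRLF
     # equivalent: unix2dos body.txt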
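
Sketch for items 13-19 — the btrfs commands involved, assuming the pool is mounted at /mnt/cache (the usual Unraid mount point):

     btrfs dev stats /mnt/cache        # per-device error counters (write/read/flush/corruption/generation)
     btrfs scrub start /mnt/cache      # verify checksums; on RAID1, repair from the good copy where possible
     btrfs scrub status /mnt/cache     # progress; on RAID1 both copies are read, so 2x 2TB shows as ~4TB
     btrfs balance status /mnt/cache   # check whether a balance is (still) running
     btrfs balance cancel /mnt/cache   # stop a running balance
     btrfs dev stats -z /mnt/cache     # reset the counters once the hardware is fixed, so new errors stand out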
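
Sketch for item 20 — how pinning maps to plain Docker flags; the container and image names are placeholders, and the assumption is that the GUI pinning corresponds to --cpuset-cpus:

     docker run --cpuset-cpus="0,1,2,3" --name mycontainer myimage     # restrict to four specific threads
     docker run --cpus="4" --name mycontainer myimage                  # or cap total CPU time at 4 threads' worth
     docker inspect --format '{{.HostConfig.CpusetCpus}}' mycontainer  # verify what a running container is allowed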
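
Sketch for item 21 — roughly the manual change meant there, assuming an nginx-based proxy container that picks up extra server blocks from a custom config directory (the path, port and upstream are placeholders):

     cat > /path/to/custom/unifi.conf <<'EOF'
     server {
         listen 8443;                               # custom listening port instead of the default 80/443
         server_name unifi.example.lan;
         location / {
             proxy_pass https://192.168.1.10:8443;  # placeholder upstream
         }
     }
     EOF
     nginx -s reload                                # reload inside the container to pick up the new block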