Vr2Io

Members • 3666 posts • 6 days won

Vr2Io last won the day on September 8 2023




Community Answers

  1. The image can be put in any path you want; it doesn't matter. If nothing has changed and only restarting the HA VM makes the image file disappear, then I would assume something is wrong in your HA restore. Please try re-creating the VM, start it without restoring, then restart the VM again and check whether the image file is gone. Below is what the VM settings and the VNC console look like. I believe it has self-updated and the console looks different now (as mentioned, I have switched to Docker). Just type the command "host reboot" in the console and it reboots normally.
  2. Strange. I have used the HA Docker for a long time; previously, the qcow2 HA VM image never disappeared and the VNC console always worked too.
  3. To summarize, please try: disable host access, disable the bridge, disable bonding, and stop the VM. Then check whether different MACs still get the same IP (assigned/static) and whether the promiscuous-mode flapping is gone (you may need to wait long enough for the switch to age out the MAC entries). If so, we can assume the problem comes from the VM and focus on the VM side/settings.
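The "different MACs, same IP" check above can be scripted against the neighbour table. A minimal sketch, assuming iproute2's `ip neigh show` output format (the function name is mine, not from the post):

```shell
#!/bin/sh
# Print every IP that appears with more than one MAC, given
# "IP ... lladdr MAC ..." lines (the format of `ip neigh show`).
find_dup_ips() {
    awk '$0 ~ /lladdr/ {
            ip = $1
            # locate the MAC: it is the field right after "lladdr"
            for (i = 1; i <= NF; i++) if ($i == "lladdr") mac = $(i + 1)
            if (seen[ip] != "" && seen[ip] != mac) dup[ip] = 1
            seen[ip] = mac
         }
         END { for (ip in dup) print ip }'
}

# Typical use on the Unraid host (requires iproute2):
#   ip neigh show | find_dup_ips
```

If this prints any IP while the bridge and host access are disabled, the duplicate really does come from the VM side.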
  4. To be honest, those are your assumptions. We need solid evidence.
  5. I think you should switch the device from mass-storage mode to modem/network mode. https://askubuntu.com/questions/1145645/huawei-e3131-modem-shows-as-mass-storage https://askubuntu.com/questions/776497/huawei-modem-does-not-work-with-16-04
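The mode switch is normally done with usb_modeswitch, as the linked threads describe. A sketch of a config fragment; every ID below is a placeholder for commonly reported Huawei values, so read the real vendor:product pair from `lsusb` and the usb_modeswitch device database before using it:

```
# Example entry under /etc/usb_modeswitch.d/ -- IDs must match your lsusb output
DefaultVendor=0x12d1
DefaultProduct=0x1f01
TargetVendor=0x12d1
TargetProduct=0x14db
HuaweiNewMode=1
```

After the switch the stick should enumerate as a network/modem device instead of a CD-ROM/storage device.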
  6. That's known, but why does the OP have the said problem (flapping, and source and destination with the same MAC)?
  7. For the vhost-gets-same-IP problem, this is likely due to host access being enabled, combined with your network environment; generally speaking, enabling that always causes trouble. I use macvtap throughout (ipvlan is also fine) and just disable the bridge and host access. Source and destination having the same MAC is usually routing-related and happens easily with multiple NICs. To simplify my question: do all the problems go away if you disable the bridge and host access and never touch your routing table? (Excluding the HA-unreachable and HA-can't-access-share issues.)
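The macvtap idea above maps to Docker's macvlan/ipvlan network drivers. A minimal sketch, assuming `eth0` and a 192.168.1.0/24 LAN (both are placeholders, adjust to your environment):

```shell
#!/bin/sh
# Sketch: give containers their own MAC/IP on the LAN via macvlan instead
# of bridging the host NIC. PARENT/SUBNET/GATEWAY are example values.
PARENT=eth0               # physical NIC to attach to
SUBNET=192.168.1.0/24     # your LAN subnet
GATEWAY=192.168.1.1       # your router

if command -v docker >/dev/null 2>&1; then
    # "-d ipvlan" with the same options also works, and avoids showing
    # many MACs on one switch port
    docker network create -d macvlan \
        --subnet="$SUBNET" --gateway="$GATEWAY" \
        -o parent="$PARENT" lan_net || echo "create failed (check parent NIC)"
else
    echo "docker not found; command shown for reference only"
fi
```

Containers then join with `docker run --network lan_net ...`. Note that by design the host itself cannot reach macvlan containers directly; working around that is exactly the "host access" toggle this post recommends leaving off.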
  8. I'm not an expert on Docker networking, but I'm interested in what other people do and why the custom network needs to be preserved; at first I was thinking some VPN plugin/docker might be causing the problem. Anyway, you found another solution. BTW, since I put the Docker path in /tmp, every reboot re-downloads all the containers, which may help fix some hidden issues. I also apply VLANs to separate things as I need.
  9. My first HA VM used the official qcow2 image. Just note that if you change the XML, you need to manually set the image type back to qcow2 again. I just restored the backup from the Raspberry Pi and the migration was finished. https://www.home-assistant.io/installation/linux
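For illustration, the qcow2 setting lives in the disk `<driver>` element of the VM's libvirt XML. A sketch, with an example image path (yours will differ):

```xml
<disk type='file' device='disk'>
  <!-- If the GUI resets this to type='raw' after an XML edit,
       change it back to 'qcow2' by hand -->
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/domains/haos/haos_ova.qcow2'/>
  <target dev='hdc' bus='virtio'/>
</disk>
```

With the wrong type the VM reads the qcow2 container as raw data and will not boot, which looks a lot like a corrupted image.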
  10. But still no test report from @bing281, and JorgeB estimates the minimum throughput drops to 73MB/s per disk.
  11. If so, doesn't that conflict? What is so special about those custom networks that they must be preserved, with no other way to eliminate them?
  12. That's great, but do you have a UPS? For a system without UPS protection, I would limit/minimize continuous writing (e.g. security-camera recording) to one NVMe/disk instead of most of the storage. This largely prevents filesystem corruption on a sudden power-off. It's not a problem if you don't mind the warning. I like btrfs more than ZFS, mainly because of better performance. It depends on what capacity you need and how you define which of pool/array is main/backup. For me, the array is the backup tier, but it is also shared with some users because the array has better user access control. Since I want to transfer files quickly, the pool is the first place files land, then they are periodically synced to the array. (I don't like the mover design, so the array and pool are always independent.)
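The pool-first, periodic-sync flow described above can be a small cron job. A sketch, with made-up paths (the mount points and schedule are examples, not from the post):

```shell
#!/bin/sh
# Mirror the fast pool to the array periodically, instead of using mover.
# Example paths: /mnt/cache is the pool share, /mnt/user0/backup the array.

sync_pool_to_array() {
    src=$1
    dst=$2
    mkdir -p "$dst"
    # -a keeps permissions/ownership/times; --delete mirrors removals too,
    # so the array copy stays an exact snapshot of the pool
    rsync -a --delete "$src"/ "$dst"/
}

# Typical use, e.g. from a nightly cron entry:
#   sync_pool_to_array /mnt/cache/data /mnt/user0/backup/data
```

Because the two sides stay independent, a pool failure only costs changes since the last sync, and the array keeps its own access control.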
  13. A long time ago I also used USB enclosures for a build. I bought two different 5-bay enclosures; the first was never reliable, and the second finally became reliable after adding a USB hub (dedicated chips) and changing the enclosure's firmware. Both enclosures were architecturally USB-to-SATA PHY + port multiplier, but they used different chip solutions. Nowadays, some USB enclosures use a different architecture, USB to a USB hub + multiple USB-SATA PHYs, which may be more reliable, but many issues can still cause problems. Simply put, you could try adding a USB 3.x hub in between and test whether anything is different.
  14. You should provide more detail on how they connect. Are all the long cable runs carrying the SATA protocol, and is that causing the errors? If so, you can fix it by making the whole long path SAS, with only the last mile SATA. Connect the HBA to a SAS expander so that path is SAS, which supports distances up to 10m; in each enclosure the fan-out from the expander to the SATA hard disks is SATA and can be kept shorter than 1m. Next is the rack disk-swap case: a rack case has a backplane, which eliminates the mass of power cabling needed to power up all the disks. But because a backplane leaves only a small area for airflow to pass through, it is very hard to cool well even with 3000rpm or high-pressure fans. I built mine as two boxes, a 3U expander case (with motherboard) + a 2U raw case (disks only). I added an AEC-82885T expander in the 2U enclosure and used one SFF-8644 cable (50cm) to connect them; the HBA is a 9300-4i4e. There were different generations: the first version used two 9211-8i, one internal and one external, with the external path using a pair of 2m SFF-8087 cables and an IBM expander for the 2U box. The second version was a 9211-8i + 1000-8e, and currently I use only one 9300-4i4e.
  15. Yes, the /sys/class/ path should be better. I also ran some tests on my fan: the low-range dead zone was 15 (328rpm; below 15 it drops to 0rpm) and there was no high-range dead zone (3000rpm) ... but this should be fan-spec related.
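A quick way to list the pwm/fan pairs before probing the dead zone by hand. A sketch against the kernel hwmon sysfs layout; hwmon numbering differs per board, so check which hwmonN is your fan controller:

```shell
#!/bin/sh
# List each hwmon pwm output with its matching fanN_input rpm reading,
# so you can probe the dead zone: write a pwm value, wait, read the rpm.

list_fans() {
    found=0
    for pwm in /sys/class/hwmon/hwmon*/pwm[0-9]; do
        [ -e "$pwm" ] || continue
        # fanN_input sits next to pwmN in the same hwmon directory
        fan="${pwm%pwm*}fan${pwm##*pwm}_input"
        rpm=unknown
        [ -e "$fan" ] && rpm=$(cat "$fan")
        echo "$pwm=$(cat "$pwm") rpm=$rpm"
        found=1
    done
    [ "$found" -eq 0 ] && echo "no pwm-capable fan found"
    return 0
}

# Probing the low dead zone manually (root needed), e.g.:
#   echo 15 > /sys/class/hwmon/hwmon2/pwm1 ; sleep 3 ; cat /sys/class/hwmon/hwmon2/fan1_input
list_fans
```

Stepping the pwm value down until fanN_input reads 0 finds the low dead zone the post measured (15 on that fan); stepping up until rpm stops rising finds the high end.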