JorgeB
Moderator

  • Content Count: 30573
  • Days Won: 362
  • Community Reputation: 3624 (Hero)
  • Rank: Advanced Member
  • Gender: Male

JorgeB last won the day on April 6.
  1. The initial snapshot needs to be sent in full to the destination server:
     btrfs send /mnt/data/1st_snapshot | ssh root@ip "btrfs receive /mnt/backups"
     After that you send only the changes:
     btrfs send -p /mnt/data/1st_snapshot /mnt/data/2nd_snapshot | ssh root@ip "btrfs receive /mnt/backups"
     and so on.
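The two-step flow above (a full send first, then incremental sends with -p) can be sketched as a small dry-run helper. The paths, the /mnt/backups destination, and the root@ip placeholder come from the post; the plan_send function itself is a hypothetical name, and it only prints the pipeline it would run rather than executing it:

```shell
#!/bin/sh
# Hypothetical dry-run helper (prints the command instead of running it).
# Chooses a full vs incremental btrfs send based on whether a parent
# snapshot path is supplied.
plan_send() {
    parent=$1; new=$2; dest=$3
    if [ -z "$parent" ]; then
        # No parent yet: the initial snapshot must be sent in full
        echo "btrfs send $new | ssh $dest 'btrfs receive /mnt/backups'"
    else
        # Parent exists: -p sends only the delta since that snapshot
        echo "btrfs send -p $parent $new | ssh $dest 'btrfs receive /mnt/backups'"
    fi
}

# First run: full send; later runs: incremental against the previous snapshot
plan_send "" /mnt/data/1st_snapshot root@ip
plan_send /mnt/data/1st_snapshot /mnt/data/2nd_snapshot root@ip
```

Dropping the echo would turn this into a live script, but as the posts note there is no GUI support, so any scheduling would have to come from cron or a user script.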
  2. There's some info here on how btrfs snapshots and send/receive work; the difference is that there it's sending locally. To send changes to another server you'd use, for example:
     btrfs send -p /mnt/data/old_snapshot /mnt/data/new_snapshot | ssh root@ip "btrfs receive /mnt/backups"
     There's no GUI support, so you have to use the console or a script.
  3. Yes, you can use btrfs send/receive over SSH; that's how I back up all my servers.
  4. Note that it only fails to move from array to pool; it still moves from pool to array, even with a space in the name.
  5. It's a bug: mover isn't working for shares with a space in the name, so you should create a bug report. As a workaround you can rename the share, using for example an underscore.
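The rename workaround can be sketched as a dry-run helper that swaps spaces for underscores. The /mnt/user path is an assumption about where user shares live, rename_share is a hypothetical name, and on a real system you'd want the share idle before renaming; this version only prints the command it would run:

```shell
#!/bin/sh
# Hypothetical workaround sketch: replace spaces in a share name with
# underscores so mover can process it. Prints the mv command (dry run).
rename_share() {
    old=$1
    new=$(printf '%s' "$old" | tr ' ' '_')
    echo "mv \"/mnt/user/$old\" \"/mnt/user/$new\""
}

rename_share "My Share"
```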
  6. Unfortunately you rebooted since the rebuild; did you save those diags by any chance? Still, the errors suggest the rebuild was corrupt.
  7. The share doesn't yet exist on that disk; once it does, the space will be correct.
  8. According to the mover log there's nothing to be moved; from what share are you trying to move data?
  9. If any rebuild was done after the controller dropped, that disk will be corrupt; the GUI would show errors on multiple disks, and you'd be notified if system notifications are enabled.
  10. The NIC supports and is advertising 10GbE:
      Advertised link modes: 100baseT/Half 100baseT/Full 1000baseT/Full 10000baseT/Full
      The problem is this:
      Link partner advertised link modes: 100baseT/Full 1000baseT/Half 1000baseT/Full
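The diagnosis above (the link partner, e.g. the switch or cable, is not offering a 10GbE mode) can be checked by filtering ethtool output for the partner's line. The partner_modes helper is a hypothetical name, and the sample text below mirrors the post; real ethtool output may wrap the modes across several lines, so this sketch only captures the first:

```shell
#!/bin/sh
# Hypothetical helper: given `ethtool <iface>` output on stdin, print what
# the link partner advertises, so a missing 10000baseT/Full stands out.
partner_modes() {
    sed -n 's/.*Link partner advertised link modes:[[:space:]]*//p'
}

# Sample output mirroring the post (normally: ethtool eth0 | partner_modes)
sample='Advertised link modes: 100baseT/Half 100baseT/Full 1000baseT/Full 10000baseT/Full
Link partner advertised link modes: 100baseT/Full 1000baseT/Half 1000baseT/Full'

printf '%s\n' "$sample" | partner_modes
```

Here the partner's list stops at gigabit, which matches the post's conclusion: the NIC is fine, but the other end of the link never negotiates 10GbE.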
  11. The HBA dropped offline and was re-detected, causing all disks connected to it to be inaccessible:
      Apr 11 19:22:48 YAMATO kernel: md: disk1 read error, sector=0
      Apr 11 19:22:48 YAMATO kernel: md: disk2 read error, sector=0
      Apr 11 19:22:48 YAMATO kernel: md: disk3 read error, sector=0
      Apr 11 19:22:48 YAMATO kernel: md: disk5 read error, sector=0
      Apr 11 19:22:48 YAMATO kernel: md: disk6 read error, sector=0
      Apr 11 19:22:48 YAMATO kernel: md: disk29 read error, sector=0
      Rebooting should fix it, but it might happen again; make sure it's well seated and sufficiently cooled, you can al
  12. sda is a flash drive, but since it's spamming the log, disconnect it if not in use; enable this, and then post that log and the complete diagnostics after a crash.