
madmax969

Members
  • Posts: 18
  • Joined
  • Last visited

madmax969's Achievements

Noob (1/14)

Reputation: 0

  1. I have another question: is the parity drive still rebuilding? I see the following message: "Stopped. Upgrading disk/swapping parity."
  2. I understand now; for some reason I thought it would be more complicated. I started the copy and have a little while to go, 6 TB to copy. Thank you for all your help @itimpi and @Gragorg. I will let you know how everything goes once it is completed.
  3. Before doing the parity drive swap: I have a drive in my array which is disabled and whose contents are being emulated. What do I do first, and will I lose everything on the disabled drive when I do a parity drive swap?
  4. Hi, I have a question. I need to replace a disabled drive, so I decided to upgrade my 4 TB to an 8 TB, but my parity drive is 6 TB. I hear people saying that if your parity drive fails during the copy to the new one, you lose everything. In my case, what is the best practice method? (I was thinking: can I add the 8 TB drive as a second parity drive and then remove the old 6 TB parity, is this possible?)
  5. I can see my dockers. I am uploading the latest diagnostic report. Please let me know if there are any other errors. zeus-diagnostics-20240313-1025.zip
  6. The system folder has been copied over from disk 1 to cache. zeus-diagnostics-20240311-0201.zip
  7. I started the move and received the following message:
     Use sparse option
     Overwrite existing files
     rsync: [receiver] write failed on "/mnt/cache/system/docker/docker.img": Read-only file system (30)
     rsync: [receiver] chown "/mnt/cache/system/docker/.docker.img.XaaVPz" failed: Read-only file system (30)
     rsync: [receiver] rename "/mnt/cache/system/docker/.docker.img.XaaVPz" -> "docker/docker.img": Read-only file system (30)
     rsync error: error in file IO (code 11) at receiver.c(380) [receiver=3.2.7]
     rsync: [sender] write error: Broken pipe (32)
  8. Hi, I am not sure exactly what you mean by "So you just need to get rid of the top level folder named system on cache, and move the top level folder named system on disk1 to cache." I see folders on Disk 1 like .Trash-, Backup, Downloads, system, and on the Cache drive .Trash-99, appdata, system.
  9. Hi, thanks for the quick response. I do not have any VMs. How do I get rid of and move the files? "Probably if you get rid of the newly created system share on cache, and move the system share from disk1 to cache where it belongs, you will be good." (See the first sketch after this list.)
  10. Sorry for the late reply. Here is the new log. zeus-diagnostics-20240305-1227.zip
  11. It worked after removing the -n (see the second sketch after this list):
      Phase 1 - find and verify superblock...
      writing modified primary superblock
      sb root inode value 128 inconsistent with calculated value 96
      resetting superblock root inode pointer to 96
      sb realtime bitmap inode value 129 inconsistent with calculated value 97
      resetting superblock realtime bitmap inode pointer to 97
      sb realtime summary inode value 130 inconsistent with calculated value 98
      resetting superblock realtime summary inode pointer to 98
      Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
      sb_icount 0, counted 352576
      sb_ifree 0, counted 2450
      sb_fdblocks 0, counted 66035540
        - found root inode chunk
      Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 3
        - agno = 1
      Phase 5 - rebuild AG headers and trees...
        - reset superblock...
      Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      done
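
A minimal command-line sketch of the move discussed in items 6 to 9, assuming the real share lives at /mnt/disk1/system and the stray copy at /mnt/cache/system (paths taken from the posts above), that Docker and any VMs are stopped first, and that the cache filesystem is writable again; the "Read-only file system" errors in item 7 mean the cache pool itself had gone read-only, and no copy will succeed until that is resolved:

  # Remove the newly created system folder on the cache pool.
  rm -r /mnt/cache/system

  # Copy the real system folder from disk1 to cache.
  # --sparse keeps docker.img sparse instead of expanding it to its full size.
  rsync -avh --sparse /mnt/disk1/system /mnt/cache/

  # After verifying the copy, remove the original from disk1.
  rm -r /mnt/disk1/system

The sparse and overwrite options shown in item 7 were attempting the same kind of copy; the commands above are simply the console equivalent.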
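
Item 11 is xfs_repair output produced after the -n (no-modify) flag was removed, which is what lets the tool actually write its repairs. A minimal sketch of that sequence, assuming the array is started in maintenance mode and the affected disk is disk 1; the device name /dev/md1p1 is an assumption and should be replaced with the device Unraid reports for that disk:

  # Dry run: report problems without changing anything.
  xfs_repair -n /dev/md1p1

  # Real repair: the same command without -n, as in item 11.
  xfs_repair /dev/md1p1

Whether this is run from the console or from the GUI filesystem check, the repair only proceeds once -n is absent, which matches "removing the -n" in item 11.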