
About gideva

  • Rank
    Advanced Member


  1. gideva

    Disk Fail

    Super.... Thanks
  2. gideva

    Disk Fail

    Sorry, but I want to be absolutely sure of what I am doing:
    1) Stop the array and power off the server.
    2) Disconnect the disk's cables and plug them back in (is this really necessary, or can I skip it since the server is difficult to access?).
    3) Remove the disk from the array.
    4) Start the array.
    5) Stop the array.
    6) Reassign the disk.
    7) Start the array again and let it rebuild.
    Correct? Thanks
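Before choosing between a cable reseat and a full rebuild, it is worth confirming the drive itself is healthy. A minimal sketch, assuming smartmontools is installed and that /dev/sdj is the suspect drive (both are assumptions; adjust to your system):

```shell
#!/bin/sh
# Minimal SMART health check before committing to a rebuild.
# Assumptions: smartmontools is installed; the suspect drive is /dev/sdj.
DEV="${DEV:-/dev/sdj}"

if command -v smartctl >/dev/null 2>&1; then
    # -H prints the drive's overall SMART health self-assessment (PASSED/FAILED)
    MSG=$(smartctl -H "$DEV" 2>&1) || MSG="smartctl could not read $DEV (failing drive, loose cable, or wrong device name)"
else
    MSG="smartctl not installed; install smartmontools first"
fi
echo "$MSG"
```

If the health check fails or the device cannot be read at all, reseating the cables (step 2 above) is cheap insurance before starting a long rebuild.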
  3. gideva

    Disk Fail

    Done as suggested... This is what I got... What is next? Thanks monstruo-smart-20200616-0934.zip monstruo-diagnostics-20200616-0935.zip
  4. gideva

    Disk Fail

    Hi guys, this morning I got this... It is not the first time, but every previous time I messed something up and lost data... so this time I would like to do it correctly and hopefully avoid the earlier bad experience. Attached is a diagnostics file, in case it helps to verify the integrity of the system. Thanks for your patience and help. monstruo-diagnostics-20200615-0908.zip
  5. Hi guys, I am having the same issue here. Just for fun I installed both the official PMS and the binhex version. With both installed, both work, but as soon as I remove the binhex PMS, even the official one stops working... Any idea how to solve this?
  6. Will give it a try. Thnx
  7. Hi guys, here is my problem: some weeks back I was installing Nextcloud on my server, but I ran into issues and had to wipe my appdata folder with everything in it. Today I started the installation again, but apparently something is wrong, because every time I try to reinstall everything (after I set up the MariaDB and Nextcloud Dockers and try to create a new admin account) I get an error like the following: "Username is invalid because files already exist for this user". I tried changing the username and the password, and I also reinstalled everything again and again... same problem. Any clue? Thanks
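That error usually means per-user files from the previous install survived the wipe somewhere in the Nextcloud data directory. A minimal cleanup sketch; the data path and user name are assumptions (the mktemp fallback makes the demo safe to run as-is):

```shell
#!/bin/sh
# Remove a stale per-user folder that blocks Nextcloud account creation.
# Assumptions: DATADIR points at the Nextcloud data directory (often
# /mnt/user/appdata/nextcloud/data on Unraid) and NC_USER is the admin
# name being registered. The mktemp fallback is demo-only.
DATADIR="${DATADIR:-$(mktemp -d)}"
NC_USER="${NC_USER:-admin}"

mkdir -p "$DATADIR/$NC_USER"   # demo-only: simulate the leftover folder; drop this line for real use

if [ -d "$DATADIR/$NC_USER" ]; then
    rm -rf "$DATADIR/$NC_USER"
    echo "removed leftover files for $NC_USER"
else
    echo "nothing to clean for $NC_USER"
fi
```

After removing the stale folder, retry creating the admin account from the Nextcloud first-run page.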
  8. Yes, this is what I want to do... but I am not sure I am doing it correctly. What I am doing is:
     1) Stop the array.
     2) Go to Tools and New Config.
     3) Preserve current assignments: All.
     4) Go to Main (all disks now show a blue square, as they are considered New Devices) and start the parity sync.
     Sorry, but I am scared of losing everything...
  9. Hi there... I just moved all the data off the failing disks. Before I cause a total disaster while proceeding with the New Config: I reordered all the disks (I did not touch or move the parity drive, which is still where it is supposed to be), and now I have to pick one of the options, but I do not want to mess everything up... What should I choose? Sorry for the stupid question, but I want to be sure.
  10. Super!!!! That someone/somebody is me!!! Really appreciated. Stay Safe
  11. Was able to mount but there is NOTHING inside....
  12. Linux 4.19.107-Unraid.
      Last login: Mon Apr 13 19:47:06 +0400 2020 on /dev/pts/0.
      root@Monstruo:~# xfs_repair -vL /dev/sdj1
      Phase 1 - find and verify superblock...
              - block cache size set to 735904 entries
      Phase 2 - using internal log
              - zero log...
      Log inconsistent or not a log (last==0, first!=1)
      empty log check failed
      zero_log: cannot find log head/tail (xlog_find_tail=22)
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
      Phase 5 - rebuild AG headers and trees...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      Maximum metadata LSN (1:126) is ahead of log (1:2).
      Format log to cycle 4.

      XFS_REPAIR Summary    Mon Apr 13 19:48:25 2020

      Phase         Start           End             Duration
      Phase 1:      04/13 19:47:37  04/13 19:47:37
      Phase 2:      04/13 19:47:37  04/13 19:47:53  16 seconds
      Phase 3:      04/13 19:47:53  04/13 19:47:53
      Phase 4:      04/13 19:47:53  04/13 19:47:53
      Phase 5:      04/13 19:47:53  04/13 19:47:53
      Phase 6:      04/13 19:47:53  04/13 19:47:53
      Phase 7:      04/13 19:47:53  04/13 19:47:53

      Total run time: 16 seconds
      done
      root@Monstruo:~#
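Since the repair log above reports "moving disconnected inodes to lost+found", any orphaned files should now sit in a lost+found directory at the top of that disk; a small sketch for checking, where the mount point is an assumption (substitute your actual one):

```shell
#!/bin/sh
# List recovered files after xfs_repair. Assumption: the repaired disk
# is mounted at /mnt/disk1 -- substitute your actual mount point.
MOUNTPOINT="${MOUNTPOINT:-/mnt/disk1}"

if [ -d "$MOUNTPOINT/lost+found" ]; then
    RESULT="lost+found present; first entries:"
    echo "$RESULT"
    ls "$MOUNTPOINT/lost+found" | head -20
else
    RESULT="no lost+found at $MOUNTPOINT (nothing recovered, or wrong mount point)"
    echo "$RESULT"
fi
```

Recovered files in lost+found are named by inode number, so they usually need to be identified by content and moved back by hand.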
  13. Sorry to put you through this pain... Here it is: monstruo-diagnostics-20200413-1932.zip