mr_lego

Members
  • Posts: 13
  • Joined
  • Last visited
  • Gender: Undisclosed

mr_lego's Achievements

  • Rank: Noob (1/14)
  • Reputation: 0

  1. Hi. I have removed those small files and everything seems to be working fine. Thank you very much for your support, guys. Much appreciated.
  2. Hi. I started Unraid in maintenance mode, SSH'd to the box, and ran the command "xfs_repair -v /dev/md1", which fixed the issue in less than a minute. I have noticed that a new share called "lost+found" has been created, with a few files in it. Could you please confirm whether I can remove/delete this share and the files inside? (See the lost+found sketch after this list.) Thank you. T.
  3. Thank you for the prompt reply. I am going to go through the steps and will let you know how it goes...
  4. Hi, I have just upgraded the OS from version 6.1.9 to the newest version, 6.3.5, and have noticed that the content of two of my shares (out of nine) has disappeared (Install and Cbtnuggets). When I try to open these two shares I get the message "The folder is empty" and cannot see any files or folders. The content of the other shares seems to be in place. I also tried downgrading the OS to version 6.3.4, but that didn't resolve the issue; I had the same outcome. I have attached the diagnostics (see also the disk-path check after this list). Would you be able to advise how this could be fixed, please? Thank you, T. blue-diagnostics-20170627-1037.zip
  5. Hi. Thank you for your explanation. I thought something had gone wrong, as I didn't remember seeing these shares at the beginning. Thank you once again.
  6. Hi, I had some issues a few months ago and have since noticed three additional shares: "domains", "isos" and "system". Only the first one seems to exist when I browse to the main path of my Unraid, but none of them seem to contain anything. The last share, "system", isn't visible at all when I browse the main path of Unraid (\\192.168.0.100\). Could you please advise whether I can remove them from Unraid without causing any issues? (See the default-shares note after this list.) I have attached the diagnostics and two snapshots. I wouldn't like to remove anything extra or damage my Unraid. If you have any questions, please let me know. Thank you, T. blue-diagnostics-20170626-1150.zip
  7. Thank you for the explanation and for your assistance with that. Much appreciated.
  8. Hi. I don't remember using the command line to shut down/reboot Unraid. I think about a month ago I upgraded the OS on Unraid from version 6.1.9 to version 6.2.4 and performed a reboot afterwards. Thank you very much for your support, guys. It was very helpful.
  9. Hi guys, this morning I ran the following command, which fixed my issue: xfs_repair -v -L /dev/md1. Could someone please advise why this happened in the first place and what could cause this issue? Thank you, T.
  10. I will wait until the array rebuild is done and then run xfs_repair with the -L option. Thank you for your comments and prompt response. Will let you know how it goes, probably tomorrow. T.
  11. Hi, I have just stopped the array, unassigned Disk 1, started the array, stopped it again, reassigned Disk 1, and started the array once more, and I can now see in the GUI that the data rebuild has started. Will let you know how it goes. T.
  12. Hi, thank you for your prompt response. Sorry, but I forgot to mention that I use the SSD only for running a VM, and it is not part of this array. I have just run the xfs_repair command and got the following output:

      root@Blue:~# xfs_repair -v /dev/md1
      Phase 1 - find and verify superblock...
              - block cache size set to 2991032 entries
      Phase 2 - using internal log
              - zero log...
      zero_log: head block 337074 tail block 337067
      ERROR: The filesystem has valuable metadata changes in a log which
      needs to be replayed.  Mount the filesystem to replay the log, and
      unmount it before re-running xfs_repair.  If you are unable to mount
      the filesystem, then use the -L option to destroy the log and attempt
      a repair.  Note that destroying the log may cause corruption -- please
      attempt a mount of the filesystem before doing this.
      root@Blue:~#

      Could you please confirm whether I should run this command with the "-L" option? (See the log-replay sketch after this list.) Thank you, T.
  13. Hi, for the last three days I have been having an issue with one of my hard drives, which reports "Unmountable disk present" (please see the attached snapshot and diagnostics file). I had been running version 6.2.4 for the last month, but yesterday I reverted the Unraid OS to version 6.1.9 and it made no difference. I currently have two 6TB WD Red hard drives and one 500GB SSD. One of the 6TB WD Red drives is reporting the issue, and its status is "Unmountable". I have checked the parity and run a full SMART test on Disk 1 (the one reporting the "unmountable" status), which took about 11 hours, but I couldn't find any errors. I also cannot see my shares any more. I have attached all the diagnostic files, but I don't know how to resolve this. Could someone please advise what the next step should be? (See the read-only check after this list.) Thank you very much in advance. T. blue-diagnostics-20170216-0028.zip
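
Lost+found sketch (re post 2): a minimal sketch of how the recovered files could be reviewed before deleting them. The /mnt/disk1 path is an assumption based on the repair having run against /dev/md1 (Disk 1); adjust it to the disk that was actually repaired.

    # List everything xfs_repair recovered so nothing wanted is lost
    # (path assumes the repaired device /dev/md1 maps to disk1):
    ls -lR /mnt/disk1/lost+found

    # If none of it is needed, remove the directory; the share entry
    # disappears once the folder is gone from every disk:
    rm -rf /mnt/disk1/lost+found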
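Disk-path check (re post 4): one hedged way to tell whether data is actually gone or just not being exported is to compare the per-disk paths with the merged user-share view. The share names come from the post; the disk numbers are assumptions, so check each disk present in the array.

    # The shares as the merged user-share view sees them:
    ls /mnt/user/Install /mnt/user/Cbtnuggets

    # The same folders as they sit on the individual array disks
    # (disk numbers are assumptions; repeat for every data disk):
    ls /mnt/disk1/Install /mnt/disk2/Install

If the files show up under /mnt/diskN but not under /mnt/user, the data itself is intact and the problem lies in how the shares are being presented rather than in lost files.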
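Default-shares note (re post 6): domains, isos and system are folders that newer Unraid releases create for VM and Docker support, so before removing anything it is worth checking whether they are in use. A small sketch, assuming the standard /mnt/user paths:

    # See how much data each folder actually holds:
    du -sh /mnt/user/domains /mnt/user/isos /mnt/user/system

    # The system share typically holds the Docker image and libvirt
    # data, so it is normally kept even when it looks empty over SMB.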
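Log-replay sketch (re post 12): the error text itself describes the safer path, i.e. mount the filesystem once so the journal is replayed, unmount it, and re-run xfs_repair; -L (which destroys the log) is only the fallback if the mount fails. A sketch of that sequence, assuming the array is in maintenance mode and /dev/md1 is the affected disk (the /temp mount point is an arbitrary choice for this check):

    # Try to mount once so the XFS journal gets replayed:
    mkdir -p /temp
    mount /dev/md1 /temp

    # If the mount succeeded, unmount and repair normally:
    umount /temp
    xfs_repair -v /dev/md1

    # Only if the mount itself fails, zero the log and repair,
    # accepting that recent metadata changes may be lost:
    xfs_repair -v -L /dev/md1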
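Read-only check (re post 13): with an XFS disk showing as unmountable, a common first step is a read-only filesystem check from maintenance mode, which reports problems without changing anything on disk. A sketch, assuming Disk 1 (/dev/md1) is the affected drive:

    # Start the array in maintenance mode first, then from an SSH session:
    # -n = no-modify mode; scan and report only
    xfs_repair -n /dev/md1

    # If it reports damage, re-run without -n to actually repair:
    xfs_repair -v /dev/md1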