
About daemian


  1. You were exactly right. I started looking into some other issues I had, and there was something wrong with the new disk I inserted. I'm not sure whether my failed disk wrote corrupted data to parity or what exactly happened, but at the end of the day I had to re-format the new disk. Some data was certainly lost, but I will see if I can get anything off the old disk or restore from backups. The good news is my stuff is working again. Thanks anyhow!
  2. Hi - hoping I can get some help. I have been happily running this container for a few years - it's awesome. Unfortunately, I recently had some issues with a failing disk (it has since been replaced and parity rebuilt). During this time I also had to hard power down the unraid server once. I ran a parity check afterwards, before proceeding with the disk replacement. Now my nextcloud docker will start, but my clients do not connect (red icon), and when I try to browse to the webpage I get:

Internal Server Error
The server encountered an internal error and was unable to complete your request. Please contact the server administrator if this error reappears multiple times, please include the technical details below in your report. More details can be found in the server log.

If I look at the docker log I see:

-------------------------------------
[linuxserver.io banner]
Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 40-config: executing...
chown: changing ownership of '/data': I/O error
[cont-init.d] 40-config: exited 0.
[cont-init.d] 50-install: executing...
[cont-init.d] 50-install: exited 0.
[cont-init.d] 60-memcache: executing...
[cont-init.d] 60-memcache: exited 0.
[cont-init.d] 99-custom-files: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-files: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.

I tried running the Docker Safe New Permissions. I am not sure it ever actually finished (the page errored out after about 12 hours), but it did not help. My nextcloud share (mapped to /data in the docker) has the following permissions on the share folder and the folders below it (I obfuscated the user folder names):

root@ur01:/mnt/user# ls -al | grep nextCloud
drwxrwx--- 1 nobody users 252 Jan 7 2018 nextCloud/
root@ur01:/mnt/user# ls -al nextCloud/
total 112856
drwxrwx--- 1 nobody users 252 Jan 7 2018 ./
drwxrwxrwx 1 nobody users 48 Nov 4 11:57 ../
-rw-rw-rw- 1 nobody users 4096 Jan 26 2017 ._.DS_Store
-rw-rw-rw- 1 nobody users 324 Oct 8 10:40 .htaccess
-rw-rw-rw- 1 nobody users 0 Jan 26 2017 .ocdata
drwxrwxrwx 1 nobody users 44 Jan 31 2017 admin/
drwxrwxrwx 1 nobody users 115 Aug 21 2018 appdata_ocnuvolb7k5t/
drwxrwxrwx 1 nobody users 96 Apr 5 2018 user1/
drwxrwxrwx 1 nobody users 6 Oct 8 10:40 files_external/
-rw-rw-rw- 1 nobody users 0 Jan 26 2017 index.html
drwxrwxrwx 1 nobody users 96 Jan 7 2018 user2/
-rw-r----- 1 nobody users 110798670 Nov 8 10:00 nextcloud.log
-rw-rw-rw- 1 nobody users 581632 Jan 26 2017 owncloud.db

I have also deleted the docker.img file (because of issues with a different docker) and the problem persists. Any idea what I can look at to fix this? Thank you!
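A quick way to tell whether this is really a permissions problem or an underlying filesystem problem (the `chown: ... I/O error` line in the log points at the latter) is a non-destructive write test on the share. This is only a sketch; the `SHARE` path is an assumption based on the mapping described above.

```shell
#!/bin/sh
# Non-destructive write test on the share backing the container's /data.
# SHARE is an assumed path based on the mapping above; adjust to your system.
SHARE=/mnt/user/nextCloud

if touch "$SHARE/.write_test" 2>/dev/null; then
    rm -f "$SHARE/.write_test"
    echo "share is writable - look at permissions/ownership next"
else
    echo "share is not writable - an I/O error here points at the disk or filesystem, not permissions"
fi
```

If the `touch` itself fails with an I/O error, no amount of permission fixing (Docker Safe New Permissions included) will help until the filesystem on the underlying disk is checked.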
  3. I Love unraid and it just keeps getting better with every release. Happy Birthday!
  4. Thanks Johnnie. I upgraded to 6.5.3 and tried xfs_repair again. Still no luck. Putting this disk in another machine is not really an option for me (I am remote to the site, and there is not much in the way of resources there). I think I may need to bite the bullet and just format the drive, conceding that the data on that drive is lost. It's probably not that big of a deal - obviously not ideal, but I don't think I have much other choice. Would I just format that drive and then run a parity check to be sure everything is ok?
  5. Thanks johnnie. I believe I have the controller running in AHCI mode now instead, but xfs_repair still fails the same way. How can I confirm that it is now running in AHCI?
  6. Well, -L didn't get me any further:

root@dt-ur01:~# xfs_repair -Lv /dev/md1
Phase 1 - find and verify superblock...
        - block cache size set to 2290880 entries
Phase 2 - using internal log
        - zero log...
Log inconsistent (didn't find previous header)
failed to find log head
zero_log: cannot find log head/tail (xlog_find_tail=5)
  7. Thanks for pointing out the cache drive - I will check that out when I can. For the original issue, when I try to run xfs_repair I get the following error:

root@dt-ur01:~# xfs_repair -v /dev/md1
Phase 1 - find and verify superblock...
        - block cache size set to 2290880 entries
Phase 2 - using internal log
        - zero log...
Log inconsistent (didn't find previous header)
failed to find log head
zero_log: cannot find log head/tail (xlog_find_tail=5)
ERROR: The log head and/or tail cannot be discovered. Attempt to mount the filesystem to replay the log or use the -L option to destroy the log and attempt a repair.

Do I try it with the -L option? It sounds like that may result in [more] data loss, but perhaps I don't really have any other option? Thank you again for all of your time and assistance.
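For reference, the escalation order that error message describes can be sketched as below. /dev/md1 comes from the output above; /mnt/test is a hypothetical mount point, and on unraid the array should be started in maintenance mode before touching the md device.

```shell
# Sketch of the usual escalation order for an XFS log that cannot be replayed.
# /dev/md1 is from the error output above; /mnt/test is a hypothetical
# mount point. Requires root and the array in maintenance mode.

# 1. Dry run: report what xfs_repair would do without changing anything.
xfs_repair -n /dev/md1

# 2. Try a mount so the kernel replays the journal, then unmount cleanly.
mkdir -p /mnt/test
mount /dev/md1 /mnt/test && umount /mnt/test

# 3. Last resort: zero the log and repair. Any metadata updates still
#    sitting in the journal are discarded, so recent changes may be lost.
xfs_repair -L /dev/md1
```

Step 3 is exactly the "[more] data loss" trade-off mentioned above: it sacrifices unflushed journal entries to get the filesystem mountable again.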
  8. OK - so the rebuild is completed. Now in the GUI, disk 1 shows as "Unmountable: No file system".
  9. Sorry to be a pest: when I click Start it warns me "Parity disk(s) contents will be overwritten" - you're sure, right?
  10. So I just want to double check. This is what the screen looks like now: I have issued this command at the CLI and have not refreshed or left the page. Now I am going to start the array without "Parity is already valid" selected. Is that all correct? Thank you for your help!
  11. I am pretty certain it is WCC4N0334109. I say that because I put all of the drives in as data drives and started the array (with no parity). The other 3 looked fine, but that one showed "Unmountable: No file system". I presume that is because the power failure occurred before the parity sync finished. The 6TB drive is the parity.
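One way to double-check that guess from the command line (device names below are hypothetical; substitute your own disks): a parity disk carries no filesystem, so probing each disk's first partition for a filesystem signature should come back empty for parity while the data disks report xfs (or whatever they use).

```shell
#!/bin/sh
# Probe each candidate disk's first partition for a filesystem signature.
# The parity disk should report none. Device names are hypothetical.
for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1; do
    fstype=$(blkid -o value -s TYPE "$dev" 2>/dev/null)
    # ${var:-default} prints a fallback when blkid found no signature.
    echo "$dev: ${fstype:-no filesystem signature (parity candidate)}"
done
```

Caveat: a data disk with a damaged filesystem (like the unmountable disk in this situation) can also show no signature, so treat this as a hint alongside the mount test, not proof.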
  12. Sure - version 6.5.3, single parity config. Diagnostics attached. Thanks. dt-ur01-diagnostics-20181023-0850.zip
  13. Good morning. I had a disk fail, which I replaced. However, sometime before the parity sync/rebuild finished there was a power outage. When I booted the unraid server back up, the disk assignments were lost. From my research it sounds like it's critical that I get the parity disk correct; the other disks can be put in any slot without negative consequences. So I determined which disk was the parity, put it in the parity slot, and put my other disks in the disk # slots. What I am unsure of is whether I should now start the array as normal, start it with "parity is already valid" selected, or do something else entirely. What's throwing me off is that all of the disks are recognized as a "New Device" right now (blue square). I want it to rebuild the data on the failed disk and trust the data on the others and the parity. How do I go about this without destroying everything? Thanks!