cephaswiebe

Posts posted by cephaswiebe

  1. 46 minutes ago, testdasi said:

    You can try installing a Linux VM and then pass through the drives to the VM (using the ata-id method - see Spaceinvaderone video) and try recovering data in the VM (whatever you do, don't write directly to the 2 disks). There ought to be a guide somewhere out there, e.g. google "recover synology raid in ubuntu" or something like that, and have a read up.

     

    Unraid is a very simple version of Linux, so trying to do advanced stuff like mdadm might be a little too risky.

    VM it is!  Thanks for the quick answer, guys.  Much appreciated.
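The VM-based recovery described above typically comes down to a few mdadm commands. A rough sketch, assuming the two mirror members show up inside the Linux VM as /dev/sdb and /dev/sdc (hypothetical device names - check with lsblk first), and keeping everything strictly read-only:

```shell
# Identify the RAID members and their metadata (read-only, safe to run).
mdadm --examine /dev/sdb /dev/sdc

# Assemble the foreign mirror read-only so nothing is written to the disks.
mdadm --assemble --readonly /dev/md0 /dev/sdb /dev/sdc

# Synology volumes often sit on LVM; activate any volume groups found.
vgscan && vgchange -ay

# Mount read-only and copy the data somewhere safe.
mkdir -p /mnt/recovery
mount -o ro /dev/md0 /mnt/recovery
```

If the data turns out to live on an LVM logical volume rather than on /dev/md0 directly, the mount target would instead be the /dev/&lt;vg&gt;/&lt;lv&gt; path that lvscan reports. Synology also tends to put the data partition third on each disk, so the members may appear as /dev/sdb3 and /dev/sdc3 rather than the whole devices.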

  2. So my parents had a fire in their house and the Synology enclosure seems to have bitten the dust.  It was set up as 2x 1TB drives in a RAID 1.  The 2 disks both seem to be fine; I can attach a disk to Unraid and it shows up using the Unassigned Devices plugin (shows as fs linux_raid_member), but I can't mount it.  I have read that using Linux you can set it to be a non-RAID drive using the mdadm --detail command.  Anyone know if this is possible using Unraid?  I don't have easy access to a Linux system to mess with and try to recover the data.

  3. Alright, so I have a story of a person who made a stupid mistake.  Just wondering if there is any hope for me.

     

    I had a 24TB array that comprised 1, 2, 3, and 5TB disks.  It was bulky and loud, so I decided to get some larger disks and replace all the smaller ones.  I bought 4x 8TB drives and created a new array.  I then took the parity disk out of my old array (5TB) and put it in as a cache disk in the new array.  I grabbed the first disk (the other 5TB one), put it in the new array, and mounted it using the Unassigned Devices plugin.  Shared it out and used Krusader to copy the data over.

    But.....  I made a mistake: I copied from a share (called 5TB) to a disk (disk1).  I KNEW that I had to go disk to disk, or share to share, but I hadn't used the Unassigned Devices tool before, and assumed that since the share was under /disk/5TB it was a disk.  I spent the next 2 days moving data over to the array, and when it finished....  Most of it wasn't there; it's not on the old disk, and most of it isn't on the new array either....  Oddly enough, the Unassigned Devices plugin still shows that I have 3.6TB of data on the disk I was moving from, but there is nothing on it anymore when I browse through Krusader.

     

    Am I totally screwed?  Is there any way to fix this?  I was using ReiserFS on the old disk, if that makes any difference.

  4. 23 minutes ago, trurl said:

    If you are making a new build, probably the best and simplest way to get the data from your old build onto the new is to just get the new build going with those new disks (formatted as XFS), leaving one port free. Then plug in the old disks one at a time and mount them using the Unassigned Devices plugin to copy their data.

     

    If instead, you rebuild these disks onto larger disks before making the move, then those rebuilt disks will still be ReiserFS and you will need to move the data off of them so you can reformat as XFS.

    I originally thought about doing that, but do I need to purchase a 2nd Unraid key to make this work?  Oh, never mind....  I GET IT, thanks.  Now why didn't I think about this a week ago :)

  5. 1 hour ago, trurl said:

    Well the disks don't look very full, but you really should consider converting them to XFS.

     

    And your dockers would perform better if you had a cache disk for your appdata, domains, and system shares.  If you're not running VMs, domains doesn't matter, but you could disable VMs and there would be no need for libvirt.img.

     

    I don't know of any particular docker that might be to blame, but you may not have enough RAM to run many dockers. And in general, your hardware is a bit weak to expect much.

    Oh yeah, I totally agree.  The thing that started this whole process is that I'm moving to a new, faster system.  I have also purchased 4x 8TB drives.  The problem was that the new system only has support for 6 drives total, so I was going to be removing 2x 2TB and 1x 1TB to bring it down.  Then move it over, keep the 2x 5TB drives, and replace the other drives with the 8TB ones :).  I think at this point I can close this ticket, since the system is back up and running (albeit poorly), and hopefully I can move it to the new system and start fresh there.  Thanks again for your help and suggestions.  I'll use a cache disk in the new system for my Docker stuff.  If I run into any new issues that I can't overcome myself, I'll post a new ticket.

  6. So I am pretty sure it's an issue with something to do with Docker.  I moved the docker.img file and the appdata directory off of a disk to a different one (as I'm removing some 1TB and 2TB disks and eventually putting in 8TB drives to replace them).  I rebooted in "safe mode" and it still didn't start, but the CPU was pinned at 100% the entire time.  Using htop I killed a few Docker processes, and the array then came online.  Here's the diagnostic after it came back online.  Thanks a lot for your help so far!  I know I won't lose any data, but it's still pretty nerve-wracking :)

    scruffy-diagnostics-20190422-2329.zip

  7. So after probably 5 years of no issues at all, I decided to do some work on my array.  I had a few smaller drives that I was going to remove, so I used Krusader to move a bunch of files around.  During this process I broke Docker, which I managed to get fixed, and then reinstalled my containers.  Everything seemed to be going fine until my CPU pinned at 100% and I eventually had to hard power down the server.  Now, after I brought it back up, it's stuck on "Array Starting - Mounting disks".  I didn't find anything obvious to me in the logs, but to be honest I'm not the strongest command-line user.  Any help would be GREATLY appreciated.  I've attached my syslog; the version of Unraid I'm using is 6.6.7.  I had sabnzbd, sonarr, krusader and deluge containers installed on Docker.

    syslog.txt

     

    Below is my hardware:
     

    Model: Custom

    M/B: ASUSTeK Computer INC. - P5KPL-CM

    CPU: Intel® Core™2 Duo CPU E8500 @ 3.16GHz 

    HVM: Enabled

    IOMMU: Disabled

    Cache: 64 kB, 6144 kB

    Memory: 4 GB (max. installable capacity 4 GB)

    Network: eth0: 1000 Mb/s, full duplex, mtu 1500

    Kernel: Linux 4.18.20-unRAID x86_64

    OpenSSL: 1.1.1a

    scruffy-diagnostics-20190422-2121.zip