cephaswiebe

Everything posted by cephaswiebe

  1. I love how simple it is to use. I'd like to see support for more drives / disk configurations.
  2. VM it is! Thanks for the quick answer, guys. Much appreciated.
  3. So my parents had a fire in their house and the Synology enclosure seems to have bit the dust. It was set up as 2x 1TB drives in RAID 1. The two disks both seem to be fine; I can attach a disk to Unraid and it shows up using the Unassigned Devices plugin (the filesystem shows as linux_raid_member), but I can't mount it. I have read that on Linux you can examine and assemble it using mdadm. Anyone know if this is possible using Unraid? I don't have easy access to a Linux system to mess with and try to recover the data.
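A lone member of a RAID 1 mirror can often be assembled as a degraded md array and then mounted read-only. A minimal sketch, assuming mdadm is available on the Unraid console and the disk appears as /dev/sdX (both assumptions; substitute your actual device, and note that on Synology disks the data volume is usually the largest partition, not the first):

```shell
# Inspect each partition's md metadata to find the data member
mdadm --examine /dev/sdX1

# Assemble a degraded, read-only array from the single mirror member
mdadm --assemble --run --readonly /dev/md127 /dev/sdX1

# Mount read-only so nothing is written while copying data off
mkdir -p /mnt/recovery
mount -o ro /dev/md127 /mnt/recovery
```

Keeping everything read-only until the data is safely copied elsewhere is the important part; assembly and mounting can be retried, but writes cannot be undone.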
  4. So I figured it out. No idea what I did, but I copied the data into the wrong location. Thanks for the reassurance that I didn't do anything catastrophic!
  5. Alright, so I have a story of a person who made a stupid mistake. Just wondering if there is any hope for me. I had a 24TB array that was made up of 1, 2, 3 and 5TB disks. It was bulky and loud, so I decided to get some larger disks and replace all the smaller ones. So I bought 4x 8TB drives and created a new array. I then took the parity disk out of my old array (5TB) and put it in as a cache disk in the new array. I grabbed the first disk (the other 5TB one), put it in the new array, and mounted it using the Unassigned Devices plugin. I shared it out and used Krusader to copy the data over. But... I made a mistake: I copied from a share (called 5TB) to a disk (disk1). I KNEW that I had to go disk to disk, or share to share, but I hadn't used the Unassigned Devices tool before, and assumed that since the share was under /disk/5TB it was a disk. I spent the next two days moving data over to the array, and when it finished... most of it wasn't there. It's not on the old disk, and most of it isn't on the new array either. Oddly enough, the Unassigned Devices plugin still shows that I have 3.6TB of data on the disk I was moving from, but there is nothing on it anymore when I browse through Krusader. Am I totally screwed? Is there any way to fix this? I was using ReiserFS on the old disk, if that makes any difference.
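When a plugin still reports 3.6TB in use but a file manager shows nothing, the data has often been moved into an unexpected directory rather than destroyed, so it is worth surveying the disk from the command line before attempting any repair. A hedged sketch, assuming the old ReiserFS disk is /dev/sdX1 and a hypothetical mount point of /mnt/old:

```shell
# Mount the old disk read-only so nothing else can modify it
mkdir -p /mnt/old
mount -o ro -t reiserfs /dev/sdX1 /mnt/old

# See where the reported 3.6TB actually lives, including hidden entries
du -sh /mnt/old/* /mnt/old/.[!.]* 2>/dev/null

# If nothing turns up, unmount and let reiserfsck check the filesystem;
# run --check only, and attempt --rebuild-tree solely on a cloned disk
umount /mnt/old
reiserfsck --check /dev/sdX1
```

reiserfsck's destructive options can make things worse on a damaged disk, so the usual advice is to image the drive first and experiment only on the copy.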
  6. I originally thought about doing that, but do I need to purchase a second Unraid key to make this work? Oh, never mind... I GET IT. Thanks! Now why didn't I think of this a week ago?
  7. Oh yeah, I totally agree. The thing that started this whole process is that I'm moving to a new, faster system, and I have also purchased 4x 8TB drives. The problem was that the new system only has room for 6 drives total, so I was going to remove 2x 2TB and 1x 1TB to bring it down, then move it over, keep the 2x 5TB drives, and replace the other drives with the 8TB ones. I think at this point I can close this ticket, since the system is back up and running (albeit poorly), and hopefully I can move it to the new system and start fresh there. Thanks again for your help and suggestions. I'll use a cache disk in the new system for my Docker stuff. If I run into any new issues that I can't overcome myself, I'll post a new ticket.
  8. So I am pretty sure it's an issue with something to do with Docker. I moved the docker.img file and the appdata directory off of one disk to a different one (as I'm removing some 1TB and 2TB disks and eventually putting in 8TB drives to replace them). I rebooted in safe mode and it still didn't start, and the CPU was pinned at 100% the entire time. Using htop I killed a few Docker processes, and the array then came online. Here's the diagnostic after it came back online. Thanks a lot for your help so far! I know I won't lose any data, but it's still pretty nerve-wracking. scruffy-diagnostics-20190422-2329.zip
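Rather than killing individual Docker processes from htop, the whole Docker service can usually be stopped in one step before the array tries to mount. A sketch, assuming Unraid's Slackware-style rc script is present at the usual path (an assumption; the path may differ between Unraid releases):

```shell
# Identify what is pinning the CPU
ps aux --sort=-%cpu | head -15

# Stop the Docker service as a whole instead of killing PIDs one by one
# (rc.docker path is assumed; check /etc/rc.d/ on your system)
/etc/rc.d/rc.docker stop
```

Stopping the service lets containers shut down in order and releases the loopback mount on docker.img, which is gentler on the array than killing processes mid-write.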
  9. So after probably five years of no issues at all, I decided to do some work on my array. I had a few smaller drives that I was going to remove, so I used Krusader to move a bunch of files around. During this process I broke Docker, which I managed to get fixed, and then reinstalled my containers. Everything seemed to be going fine until my CPU pinned at 100% and I eventually had to hard power down the server. Now after I brought it back up, it's stuck on "Array Starting - Mounting disks". I didn't find anything obvious to me in the logs, but to be honest I'm not the strongest command-line user. Any help would be GREATLY appreciated. I've attached my syslog, and the version of Unraid I'm using is 6.6.7. I had sabnzbd, sonarr, krusader and deluge containers installed on Docker. syslog.txt

     Below is my hardware:
     Model: Custom
     M/B: ASUSTeK Computer INC. - P5KPL-CM
     CPU: Intel® Core™2 Duo CPU E8500 @ 3.16GHz
     HVM: Enabled
     IOMMU: Disabled
     Cache: 64 kB, 6144 kB
     Memory: 4 GB (max. installable capacity 4 GB)
     Network: eth0: 1000 Mb/s, full duplex, mtu 1500
     Kernel: Linux 4.18.20-unRAID x86_64
     OpenSSL: 1.1.1a

     scruffy-diagnostics-20190422-2121.zip