About dasx86

  1. Don't get the Fractal Design Define XL R2. Get the new Define 7 XL instead! https://www.fractal-design.com/products/cases/define/define-7-xl/black/
  2. Put https:// in front of the ip:port manually; it should load once you do that.
  3. Unfortunately I've trashed my docker.img after this initially happened, way before realizing the "rootfs" destination issue we've been talking about in the last 10 posts. I've also switched from the old Krusader docker to the new one from binhex, so all of that is long gone.
  4. Ah, everything I had read seemed to be in reference to additional volumes, but it makes complete sense that it would apply to any volume. If this is indeed the case... well, maybe I've missed a recommendation to point the "main" volume directly to user shares, thereby restricting the user from browsing to devices outside the array. Or perhaps the recommendation was to use RW/Slave across the board and I've missed it? Or perhaps this isn't the issue at all?
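To make the mapping distinction concrete, here's a rough sketch of the two styles being discussed on the docker command line. This is illustrative only: the image name and container paths are placeholders, not my exact unRAID template.

```shell
# Plain rw mapping: the container gets a snapshot of the host's mount
# namespace at start time. A disk mounted under /mnt/disks AFTER the
# container starts shows up inside as an empty directory backed by
# the container's own root filesystem.
docker run -d --name krusader \
  -v /mnt/user:/media:rw \
  -v /mnt/disks:/UNASSIGNED:rw \
  binhex/arch-krusader   # placeholder image name

# rw,slave mapping: mount/unmount events on the host side of
# /mnt/disks propagate into the container, so devices mounted by
# Unassigned Devices later are still reachable at /UNASSIGNED.
docker run -d --name krusader \
  -v /mnt/user:/media:rw \
  -v /mnt/disks:/UNASSIGNED:rw,slave \
  binhex/arch-krusader   # placeholder image name
```

In unRAID's Docker template editor this corresponds to setting the volume's access mode to "RW/Slave" rather than plain "Read/Write".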
  5. Does anyone have any insight into whether this is part of the issue? E.g., why does browsing to /mnt/disks from the /mnt mapping within the container point to rootfs, while a separate mount that points to /mnt/disks directly does reach the partitions on the drives as intended?
  6. Thanks for taking the time to write such a detailed reply, pwm. I've edited my earlier posts to accurately convey the mappings as they are set up. Here's the relevant part of the docker config: Scenario 1: Point well taken. I do understand that having multiple mappings opens up the kinds of problems one can have when copying from disk shares to user shares (don't do it!). As I'm only using this docker to move files from user share to user share, or from unassigned device to user share, I don't think I'm opening myself up to that issue in this case. Scenario 2: I don't believe I've set myself up for issues here with my behavior so far, but I understand how it would be easy to. I will ponder this part of your response again in the morning after a full night's sleep. At the end of the day, I'll probably switch the container path "/media" to point to host path "/mnt/user", and only access unassigned devices through the container path "/UNASSIGNED", which points to host path "/mnt/disks" and is behaving just fine. I still hope to reach a solid understanding of why my current container path "/media/disks" shows my unassigned devices as folders, but points to rootfs for each of them...
  7. Please look at the path on the left side in the bottom picture; it still shows as rootfs even after seemingly diving into the disk volume...
  8. Any reason you can think of that browsing to /mnt/disks (unassigned devices) in Krusader would point to rootfs?
  9. Actually, you can see on the left side that it's pointing to rootfs, which is in memory, rather than the on-disk partition, which is XFS. So why is the /mnt/disks path pointing to rootfs? That certainly explains why I lost my files the first time - they were just hanging out in RAM!
  10. I have been able to replicate the behavior that caused me to lose files in the first place. Can someone explain what's happening here? I am not well versed in mount points, etc. This is the latest version of the Krusader docker by binhex. I have container path /media pointing to /mnt, and I've set up container path /UNASSIGNED to point to /mnt/disks. Left is /media/disks, and right is /UNASSIGNED. After diving into the Samsung SSD (fourth on the list) on both paths, here's what I see: I've copied files into both of these paths for testing. The path on the right is what I can see exposed via SMB, and is what seems to be correct. Also, notice the drive stats at the top: the full drive capacity is shown on the right, while the left path still shows ~63GB. Note: if I dive into ANY of the unassigned devices via /media/disks on the left, I cannot see the contents of ANY device. I'm fairly confident that if I were to restart my server, the files I've copied to /media/disks/*samsung ssd* would vanish into the ether, just like the situation that caused me to create this thread to begin with. 1) Why is /media/disks/ seemingly not actually pointing to the drives, while /UNASSIGNED is? They reference the same paths on the host. 2) Where the hell is the file that I've copied (on the left) actually going? This is the black hole that ate my files the first time. Thanks. edit: fixed cases on mount points
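One way to check what actually backs each of the two paths, rather than relying on what Krusader displays, is to ask the kernel from inside the container. A sketch, assuming a container named "krusader" (placeholder) and the mappings described above:

```shell
# Show which mount each path resolves to. A genuine bind mount of the
# host's /mnt/disks should appear here with the drive's filesystem.
docker exec krusader findmnt -T /UNASSIGNED

# If this instead resolves to the container's root filesystem
# (overlay/rootfs), anything copied under it lands in the container
# layer inside docker.img, not on the drive - and is lost when the
# container is recreated or the server restarts.
docker exec krusader findmnt -T /media/disks

# Capacity tells the same story: a ~63GB figure (the docker.img size)
# instead of the drive's real size is the giveaway.
docker exec krusader df -h /media/disks /UNASSIGNED
```

This matches the symptom described above: the left pane's ~63GB stat is the docker.img, not the Samsung SSD.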
  11. Yeah, I didn't expect the latest version to have a significantly different engine or abilities than the prior version, but it really made a difference for me.
  12. I don't think I could give you a percentage breakdown, but I can give you a rough idea of my situation. Root directory files (all recoverable): a 9GB MSSQL backup file, an 800MB MSSQL backup file, and 10-15 files under 1MB (some PDFs, a couple of zip files, other random files). Intact subdirectories (all recoverable): one folder with ~6 SQL files (text) and one folder with ~10 SQL files (text). Subdirectories not intact: an extracted ISO of MSSQL 2014 SP1, which has tons of files in many directories and subdirectories. Based on the size, I'd guess that folder had about half of the contents of the extracted ISO - an ISO with thousands of files and subdirectories going many levels deep. I can only begin to speculate how these subdirectories broke apart. However, I could see the missing content by going to the area of the recovery that is a "lost and found" of sorts, though the folder names and structure were lost there. Top-level folders in the "lost and found" would have names like "Directory$4928", but the names of subfolders within them would be fine. There was no way of telling where a certain folder should have been located, or whether the structure within it was complete. Of course, I didn't care about recovering that directory at all, so no harm done. Not sure if that makes sense; I don't have this in front of me. Bottom line: if you're looking for specific files like I was, I would give it a shot. Downloading and running a scan is free; it only costs money once you try to recover files over 250KB. I'll add more in here later. Depending on how motivated I feel, I might start a new thread and write up a comparison of the programs I used during my recovery attempts. Not that any of it has anything to do with unRaid, nor am I qualified to speak about XFS or any other filesystem, but this isn't the first thread created on this subject here and it may be a useful reference.
  13. I plan to keep this thread updated to help anyone who might search for this in the future. After scanning both the source and destination drives with many different recovery programs, I can say with confidence that the files did indeed blow up. (Scanning all other disks and volumes came up with nothing; I'm 99% sure I did not move them elsewhere.) After painfully trying many different programs, I was able to retrieve from the source drive the couple of files I was concerned about, along with many more of less importance. I had success this morning with Recovery Explorer Standard version 6.16.2. I didn't have luck with any of the other software from SysDev Labs, which seems to rename its software with every major version; earlier versions (UFS Explorer and Raise Data Recovery) were not able to process the filesystem as completely, or with as many correct filenames, as the latest version. As for other vendors, I had no success with DiskDoctor's XFS Data Recovery, nor with another program called ReclaiMe. If I remember right, XFS Data Recovery was unable to read the filesystem at all, even the normal existing files. ReclaiMe froze up at about 10% into the scan and seemed to be running suspiciously fast for the size of the drive. YMMV with any of these. At the moment, I'm running a parity check WITHOUT correcting parity errors, to see whether hooking an XFS drive that is part of the storage pool to a Windows box for recovery caused any writes that would throw parity off. If it did, fine, I'll rebuild parity, no big deal. EDIT: Can confirm that hooking the array's XFS drives up to Windows for scanning did NOT throw off parity whatsoever. YMMV. The next step will be to replicate, to the best of my ability, the behavior that I believe caused this: moving files from a user share in the unRaid array to an M2 SSD (attached via PCIe, outside the array) from within a Krusader docker.
I will try two ways: the initial way, when I had /mnt/disks mounted in RW mode, and then in RW/Slave mode. Will report back. In the meantime kids, eat your vegetables, floss your teeth, and back up your files.
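Before each of the two test runs, the propagation mode Docker actually recorded for the mapping can be confirmed from the host, so the runs are comparing what I think they're comparing. A hedged sketch - "krusader" is a placeholder container name:

```shell
# List each mount's destination, access mode, and propagation mode as
# Docker recorded them for the running container.
docker inspect --format \
  '{{range .Mounts}}{{.Destination}} {{.Mode}} {{.Propagation}}{{"\n"}}{{end}}' \
  krusader

# For the RW run, /UNASSIGNED should report a private propagation;
# for the RW/Slave run, it should report rslave. (Exact output shape
# is illustrative - field values depend on the Docker version.)
```

If the second run shows rslave and drives mounted after container start are still visible and writable at the mapped path, that would support the mount-propagation explanation for the lost files.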