dasx86

Everything posted by dasx86

  1. Had the same issue, this addressed it for me: https://github.com/linuxserver/docker-nextcloud/issues/288
  2. Cool, welcome to the hobby! If you can set up that 5-port switch upstairs, then plug your "access point" and that PC into that switch, I think that'll address the immediate need. I think you might be putting clients downstream from the "access point" into a double-NAT situation (and maybe that has something to do with your current issue?). I'm not enough of a networking guy to explain why double NAT is a problem; I just know enough to know that if you put yourself in that situation unintentionally with consumer gear, problems can arise.
  3. Sorry, I skimmed your first post and didn't read thoroughly enough, so I think some of what I said in my prior post is inaccurate too. But it still sounds like you're trying to piece together a network with what you have on hand, and it's resulting in some undesired behavior. I don't think the age of the "access point" router has anything to do with it. I wouldn't recommend replacing it with the same router as your "main" router, unless that model has the ability to chain multiple units together to "bridge and extend" another router. Not sure if you're just new to Unraid or new to servers and Linux in general - some people start in this hobby by running an extra laptop with an external hard drive, and two years later they're running a setup like this guy. So if that's you, maybe this is an excuse to level up your network hardware 🤷‍♂️😆
  4. Edited: I did not read the original post correctly - it seems like something is going on with name resolution (NetBIOS?). This is not an SMB issue. I think by having a router (as an access point) downstream from another router, you've actually created two separate networks. I would guess that any computer connected to the "access point" cannot even ping a machine on the upstream router's network, and vice versa. I believe you could set up manual routing between the two, but I don't think this will be intuitive at all (if even possible) on the consumer gear I'm assuming you have. I would focus on investing in a better networking setup - one where multiple access points are actually on the same network. I've been happy with my Ubiquiti gear. If I were in your situation and starting from scratch, I'd look at a Ubiquiti UDM Pro plus two access points - one for where your router is now, and another for where you've extended your network with the second router. Ubiquiti makes it easy to have multiple access points on the same network. It's a bit pricey though - perhaps others can chime in with other recommendations. Good luck!
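The "two separate networks" guess above is easy to sanity-check from any client: compare the network portion of the IP addresses handed out on each side of the second router. A minimal sketch (POSIX sh; the IP addresses are made-up examples, and this assumes both routers hand out /24 subnets, which is typical for consumer gear):

```shell
#!/bin/sh
# Hypothetical helper: report whether two IPv4 addresses share a /24 network.
# Clients behind a double-NAT "access point" usually land in a different
# private range than clients on the main router, so they can't reach each
# other without explicit routing.
same_subnet24() {
    # Compare the first three octets (the /24 network portion).
    a=$(echo "$1" | cut -d. -f1-3)
    b=$(echo "$2" | cut -d. -f1-3)
    if [ "$a" = "$b" ]; then
        echo "same network"
    else
        echo "different networks"
    fi
}

same_subnet24 192.168.1.10 192.168.1.42   # same network
same_subnet24 192.168.1.10 192.168.2.10   # different networks (double NAT)
```

If a downstairs PC and an upstairs PC print "different networks" for their addresses, that explains why they can't see each other over SMB.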
  5. Don't get the Fractal Design Define XL R2. Get the new Define 7 XL! https://www.fractal-design.com/products/cases/define/define-7-xl/black/
  6. Put https:// in front of the ip:port manually; it should load once you do that.
  7. Unfortunately, I trashed my docker.img right after this initially happened, way before realizing the "rootfs" destination issue we've been discussing in the last 10 posts. I've also switched from the old Krusader docker to the new one from binhex, so all of that is long gone.
  8. Ah, everything I had read seemed to be in reference to additional volumes, but it makes complete sense that it would apply to any volume. If this is indeed the case... well, maybe I've missed a recommendation to point the "main" volume directly at user shares, thereby restricting the user from browsing to devices outside the array. Or perhaps the recommendation was to use RW/Slave across the board and I've missed it? Or perhaps this isn't the issue at all?
  9. Does anyone have any insight into whether this is part of the issue? E.g., why does browsing to /mnt/disks from /mnt within the container point to rootfs, while a separate mount that points to /mnt/disks directly does reach the partitions on the drives as intended?
  10. Thanks for taking the time to write such a detailed reply, pwm. I've edited my earlier posts to accurately convey the mappings as they are set up. Here's the relevant part of the docker config:
      Scenario 1: Point well taken. I do understand that having multiple mappings opens up the kinds of problems one can have when copying from disk shares to user shares (don't do it!). As I'm only using this docker to move files from user share to user share, or from unassigned device to user share, I don't think I'm opening myself up to that issue in this case.
      Scenario 2: I don't believe I've set myself up for issues here with my behavior so far, but I understand how it would be easy to. I will ponder this part of your response again in the morning after a full night's sleep.
      At the end of the day, I'll probably switch the container path /media to point to host path /mnt/user, and only access unassigned devices through the container path /UNASSIGNED, which points to host path /mnt/disks and is behaving just fine. I hope I can get to a solid understanding of why my current container path /media/disks shows my unassigned devices as folders, but points to rootfs for each of them...
  11. Please look at the path on the left side in the bottom picture, it still shows as rootfs even after seemingly diving into the disk volume...
  12. Any reason you can think of that browsing to /mnt/disks (unassigned devices) in Krusader would point to rootfs?
  13. Actually, you can see on the left side that it's pointing to rootfs, which is in memory, rather than the partition on disk which is XFS. So why is the /mnt/disks path pointing to rootfs? Certainly explains why I lost my files the first time - they were just hanging out in RAM!
  14. I have been able to replicate the behavior that caused me to lose files in the first place. Can someone explain what's happening here? I am not well versed in mount points, etc. This is the latest version of the Krusader docker by binhex. I have container path /media pointing to /mnt, and I've set up container path /UNASSIGNED to point to /mnt/disks. Left is /media/disks, and right is /UNASSIGNED.
      After diving into the Samsung SSD (fourth on the list) on both paths, here's what I see: I've copied files into both of these paths for testing. The path on the right is what I can see exposed via SMB, and it seems to be correct. Also, notice the drive stats at the top - the full drive capacity is shown on the right, while the left path still shows ~63GB. Note: if I dive into ANY of the unassigned devices via /media/disks on the left, I cannot see the contents of ANY device. I'm fairly confident that if I were to restart my server, the files I've copied to /media/disks/*samsung ssd* would vanish into the ether, just like the situation that caused me to create this thread to begin with.
      1) Why is /media/disks/ seemingly not actually pointing to the drives, while /UNASSIGNED is? They're referencing the same paths on the host.
      2) Where the hell is the file that I've copied (on the left) actually going? This is the black hole that ate my files the first time.
      Thanks. (Edit: fixed cases on mount points.)
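The symptom described in this post is consistent with bind-mount propagation: /mnt/disks is just a directory on the host's rootfs, and the unassigned devices get mounted *under* it after the container's /media -> /mnt bind is created. Without slave propagation, the container keeps seeing the empty placeholder directory on rootfs, so files written there sit in RAM. A hedged sketch of what the mappings from the post would look like on the docker command line, with propagation enabled (the paths are from the post; the image name is assumed from the thread's reference to binhex's Krusader container):

```shell
# Sketch only, not a definitive fix. With a plain "rw" bind of /mnt, any
# filesystem the host mounts under /mnt/disks AFTER the container starts is
# invisible inside the container: the container still sees the empty
# mountpoint directory on rootfs, and writes to it land in RAM and vanish on
# reboot. The "slave" propagation flag lets later host-side mounts show
# through into the container.
docker run -d \
  -v /mnt:/media:rw,slave \
  -v /mnt/disks:/UNASSIGNED:rw,slave \
  binhex/arch-krusader
```

In the Unraid template UI this corresponds to choosing the "RW/Slave" access mode on the volume mapping rather than plain "RW".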
  15. Yeah, I didn't expect the latest version to have a significantly different engine or abilities than the prior version, but it really made a difference for me.
  16. I don't think I could give you a percentage breakdown, but I can give you a rough idea of my situation.
      Root directory files, all recoverable:
      - 9GB MSSQL backup file
      - 800MB MSSQL backup file
      - 10-15 files under 1MB: some PDFs, a couple of zip files, other random files
      Intact subdirectories, all recoverable:
      - 1 folder with ~6 SQL files (text)
      - 1 folder with ~10 SQL files (text)
      Subdirectories not intact:
      - An extracted ISO of MSSQL 2014 SP1, which has tons of files in many directories and subdirectories. Based on the size, I'd guess that folder had about half of the contents of the extracted ISO. This is an ISO that has thousands of files and subdirectories going many levels deep. I can only begin to speculate how these subdirectories broke up. However, I could see missing content by going to the area of the recovery that is a "lost and found" of sorts. The folder names and structure were lost there - top-level folders in the "lost and found" would have names like "Directory$4928", though the names of subfolders within them were fine. There was no way of telling where a certain folder should have been located, or whether the structure within it was complete. Of course, I didn't care about recovering that directory at all, so no harm done.
      Not sure if that makes sense; I don't have this in front of me. But bottom line, if you're looking for specific files like I was, I would give it a shot. Downloading and running a scan is free; it only costs money once you try to recover files over 250KB. I'll add more in here later. Depending on how motivated I feel, I might start a new thread and write up a comparison of the programs I used during my recovery attempts. Not that any of it has anything to do with unRaid at all, nor am I qualified to speak about XFS or any other filesystem, but this isn't the first thread created on this subject here, and it may be a useful reference.
  17. I plan to keep this thread updated to help out anyone who might search for this in the future. After scanning both the source and destination drives with many different recovery programs, I can say with confidence that the files did indeed blow up. (Scanning all other disks and volumes came up with nothing; 99% sure I did not move them elsewhere.)
      After painfully trying many different programs, I was able to retrieve from the source drive the couple of files I was concerned about, along with many more of less importance. I had success this morning with Recovery Explorer Standard version 6.16.2. I didn't have luck with any of the other software from SysDev Labs, who seem to rename their software with every major version. Earlier versions (UFS Explorer and Raise Data Recovery) were not able to process the filesystem as completely, or with as many correct filenames, as the latest version. As for other vendors, I had no success with DiskDoctor's XFS Data Recovery, nor with another program called ReclaiMe. If I remember right, XFS Data Recovery was unable to read the filesystem at all, even the normal existing files. ReclaiMe froze up at about 10% into the scan, and it seemed to be running suspiciously fast for the size of the drive. YMMV with any of these.
      At the moment, I'm running a parity check WITHOUT correcting parity errors, to see whether hooking an XFS drive that is part of the storage pool to a Windows box for recovery caused any writes that would throw parity off. If it did, fine - I'll rebuild parity, no big deal. EDIT: I can confirm that hooking the array's XFS drives up to Windows for scanning did NOT throw off parity whatsoever. YMMV.
      The next step will be to replicate, to the best of my human extent, the behavior that I believe caused this: moving files from a user share in an unRaid array to an M.2 SSD (on a PCIe adapter, outside the array) from within a Krusader docker. I will try it two ways: the initial way, when I had /mnt/disks mounted in RW mode, and then in RW/Slave mode. Will report back. In the meantime kids, eat your vegetables, floss your teeth, and back up your files.
  18. I sure hope so! Occam's razor... Yep, nothing to do with parity or filesystem issues. I didn't think Windows would write anything to it without support for XFS - an assumption on my part. If parity is invalidated, so be it...
  19. Hey all,
      Using Krusader, I moved files from a user share (array of spinning disks using XFS) to an unassigned device (M.2 SSD on a PCIe adapter, formatted with XFS). Unfortunately, that seems to have grenaded the files - I no longer see them at the source, nor at the destination. It seems like Krusader thought the move operation went smoothly and removed the files from the source, but they never actually landed at the destination. It's not a big deal that the files are gone (I'd have had them backed up better if it were); however, I'm trying to come up with a strategy to recover them if possible. I stopped the array immediately upon realizing what had happened. I know that I did not initiate any other write activity and didn't have other plugins/dockers running, so any further writes should be minimal.
      Current strategy to recover:
      1. Stick the disks I believe had the data in a Windows box and scan using the various tools created by SysDev Laboratories (Googling "xfs recovery" comes up with at least 3 differently named programs from this company). I've scanned one of the disks and I'm getting partial results, though the folder structure is a bit garbled. Continuing this approach now.
      Another thought:
      1. Can I mount the array in a read-only mode, to browse the array without risk of writing to it? I may have moved these files to a different destination than I thought (I really don't think so, but human error is possible). Is "maintenance" mode helpful here? I can go through the drives one by one to accomplish my goal, but perhaps there's a more straightforward approach in Unraid.
      Questions on what happened:
      1. At the time of the move, the path to the unassigned devices ("disks" folder) was mounted as RW, and not RW/Slave as it should have been. If Krusader could see the unassigned devices at the time, could this have contributed?
      2. Issues with M.2 -> PCIe devices? Though I'm unable to find other instances of this problem, despite finding posts where people are mounting devices in this manner.
      Any thoughts on the strategy to recover, the possibility of a read-only array, or speculation on how this happened are appreciated!
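For anyone who finds this thread later: the practical check that would have flagged the problem before any files were moved is asking which filesystem actually backs a path inside the container. A small sketch, assuming GNU coreutils `df`; the container-side path in the comment is hypothetical:

```shell
#!/bin/sh
# Print the filesystem source backing a path. If the answer is "rootfs",
# "tmpfs", or "overlay" rather than a real device, files written to that
# path are going to RAM or the container layer, not to the disk you expect.
fs_source() {
    df --output=source "$1" 2>/dev/null | tail -n 1
}

fs_source /   # the device (or pseudo-filesystem) backing the root filesystem
# Inside the Krusader container, before a big move, one would check e.g.:
#   fs_source /media/disks/<your ssd mountpoint>   # hypothetical path
# and refuse to copy anything if it reports rootfs.
```

Running this against the destination before a move turns the "black hole" failure mode described in this thread into a visible one-line warning.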