Everything posted by KptnKMan

  1. Thanks, I'm reaching out to the mailing list to see if there's anything.
  2. Well, that sucks. I really enjoy using Unraid, but I'm getting frustrated with losing all my appdata every other month because of an update or bug. It always seems to happen at the worst moment. Anyway... Attached, with the array started; I've not formatted the first NVMe yet. Thanks for taking a look. blaster-diagnostics-20200128-0950.zip
  3. I'm in need of some serious help, if anyone has time to help me out. At this point, I've managed to get both disks installed back in the original server, and put into the cache pool. However, the cache pool seems unmountable, and I need help troubleshooting. I'm also warned that Unraid wants to format the disk that was previously removed: Will this delete all data and permanently lose everything? Any advice on how to proceed?
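     For reference, before formatting anything I want to confirm what btrfs still sees on both devices. A rough, read-only sketch (the device names follow my earlier log and may differ):
     btrfs filesystem show                              # list the pools the kernel recognises and their member devices
     btrfs inspect-internal dump-super /dev/nvme0n1p1   # compare filesystem UUID and num_devices on the first NVMe
     btrfs inspect-internal dump-super /dev/nvme1n1p1   # and on the second NVMe
     As I understand it, formatting a disk in the UI writes a new empty filesystem to it, so anything still on that disk would no longer be reachable afterwards.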
  4. Thanks for the reply. The array was created in 6.7.0/6.7.1, server is running 6.8.1 now. I saw this mentioned in another thread. Does this^ mean there is no recovery data? Here is a log of the commands in the FAQ:
     Linux 4.19.94-Unraid.
     root@blaster:~# mkdir /bt
     root@blaster:~# mount -o usebackuproot,ro /dev/nvme1
     nvme1 nvme1n1 nvme1n1p1
     root@blaster:~# mount -o usebackuproot,ro /dev/nvme1n1p1 /bt
     mount: /bt: wrong fs type, bad option, bad superblock on /dev/nvme1n1p1, missing codepage or helper program, or other error.
     root@blaster:~# mount -o degraded,usebackuproot,ro /dev/nvme1n1p1 /bt
     mount: /bt: wrong fs type, bad option, bad superblock on /dev/nvme1n1p1, missing codepage or helper program, or other error.
     root@blaster:~# mount -o degraded,usebackuproot,ro /dev/nvme0n1p1 /bt
     mount: /bt: wrong fs type, bad option, bad superblock on /dev/nvme0n1p1, missing codepage or helper program, or other error.
     root@blaster:~# mount -o ro,notreelog,nologreplay /dev/nvme1
     nvme1 nvme1n1 nvme1n1p1
     root@blaster:~# mount -o ro,notreelog,nologreplay /dev/nvme1n1p1 /bt
     mount: /bt: wrong fs type, bad option, bad superblock on /dev/nvme1n1p1, missing codepage or helper program, or other error.
     root@blaster:~# /dev/nvme1n1p1 /bt
     -bash: /dev/nvme1n1p1: Permission denied
     root@blaster:~# btrfs restore -v /dev/nvme1n1p1 /bt
     bad tree block 479137857536, bytenr mismatch, want=479137857536, have=0
     Couldn't setup device tree
     Could not open root, trying backup super
     bad tree block 479137857536, bytenr mismatch, want=479137857536, have=0
     Couldn't setup device tree
     Could not open root, trying backup super
     ERROR: superblock bytenr 274877906944 is larger than device size 250059317248
     Could not open root, trying backup super
     root@blaster:~# btrfs restore -vi /dev/nvme1n1p1 /bt
     bad tree block 479137857536, bytenr mismatch, want=479137857536, have=0
     Couldn't setup device tree
     Could not open root, trying backup super
     bad tree block 479137857536, bytenr mismatch, want=479137857536, have=0
     Couldn't setup device tree
     Could not open root, trying backup super
     ERROR: superblock bytenr 274877906944 is larger than device size 250059317248
     Could not open root, trying backup super
     root@blaster:~# btrfs check --repair /dev/nvme1n1p1
     enabling repair mode
     WARNING: Do not use --repair unless you are advised to do so by a developer or an experienced user, and then only after having accepted that no fsck can successfully repair all types of filesystem corruption. Eg. some software or hardware bugs can fatally damage a volume. The operation will start in 10 seconds. Use Ctrl-C to stop it. 10 9 8 7 6 5 4 3 2 1
     Starting repair.
     Opening filesystem to check...
     bad tree block 479137857536, bytenr mismatch, want=479137857536, have=0
     Couldn't setup device tree
     ERROR: cannot open file system
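     Since the normal and degraded mounts fail and btrfs restore can't open the root tree, the next things on my list (hedged, untested on this pool; device name taken from the log above) are the backup superblocks and older tree roots:
     btrfs rescue super-recover -v /dev/nvme1n1p1      # verify/repair the primary superblock from its backup copies
     btrfs-find-root /dev/nvme1n1p1                    # scan the device for older tree root generations
     btrfs restore -v -t <bytenr> /dev/nvme1n1p1 /bt   # retry the restore against a tree root found above
     btrfs restore only copies files out to /bt and doesn't write to the source device, so retrying it with different roots shouldn't make things worse.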
  5. I seem to be having bad luck with Unraid. Today, one of my nvme cache disks seemingly failed, and I sourced a larger replacement SSD to replace it. I read the docs, shut down the array, started it without the bad cache disk, then installed the replacement, added it to the cache pool in place of the missing disk, and waited. At this point, I saw that the cache pool is empty, without any data. Not sure why this happened. So I'm assuming that my cache is broken. Great. My 'failed' nvme seems to be mysteriously working again, and I've put it back in the server to try to recover data from it, but it appears without a filesystem. I've been trying steps from here: Using those steps, I tried to mount the disk to copy data from it, but I get errors that I can't mount the FS. Can anyone help me recover the files off the remaining old cache disk that I can access?
  6. At this point, I'm wondering if I should cut my losses here, and just back up my USB, reformat it, and start over as a "new server", then import my array disks. Is that possible?
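     If I do go that route, my understanding is that the flash backup is just a copy of /boot, something like this (untested sketch; the destination path is only an example):
     mkdir -p /mnt/disk1/flash-backup
     cp -r /boot/* /mnt/disk1/flash-backup/   # copies the flash contents, including config/ with the array assignments and key file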
  7. I'm afraid not, it's very strange. It seems like the docker UI is somehow only half connected to the config. When I changed the port, I noticed that the Web UI link did not update, but I fixed that and it still doesn't work when pointed to the correct port. My VMs start fine, and the dockers can start, but the docker networking seems to be all weird.
  8. Switched them back to bridge, and they are still inaccessible. New port shows up in UI, but nothing is accessible on those ports. Emptied my browser cache, etc, which I've seen suggested in other threads. I'm starting to think that this installation is borked.
  9. Docker log for the container is repeating:
     2019-10-12 21:19:49,920 DEBG 'start' stderr output: No protocol specified tint2: could not open display!
     2019-10-12 21:19:59,926 DEBG 'start' stdout output: [info] tint2 not running
     2019-10-12 21:19:59,935 DEBG 'start' stderr output: No protocol specified tint2: could not open display!
     2019-10-12 21:20:09,939 DEBG 'start' stdout output: [info] tint2 not running
     2019-10-12 21:20:09,946 DEBG 'start' stderr output: No protocol specified tint2: could not open display!
  10. It's the strangest thing, I set new ports in the UI. I've even tried deleting the container and starting again, and it still shows as 6080, even after deleting it, recreating it, and setting the port correctly. I'm not sure what's going on.
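     To see what Docker itself thinks the mappings are, as opposed to what the UI shows, something like this from the terminal should work (the container name is only an example, and the grep is just a rough way to pull the relevant section out of the JSON):
     docker ps --format '{{.Names}}: {{.Ports}}'              # published ports per running container
     docker inspect binhex-preclear | grep -A5 PortBindings   # the port bindings actually stored for one container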
  11. I manually changed the ports to 6081, and restarted the containers. Still redirecting to my VM, and it doesn't seem to reflect in the Unraid UI.
  12. Yes, they use the same port. Why was I able to run them together before without changing anything? Seems like something has changed.
  13. If anyone is able to help though, I'm experiencing a strange networking problem bringing my docker containers back online. Before restore, I was able to run containers that used VNC, but now I seem to be getting some strange port conflicts. I have a VM running, with VNC enabled, and now any dockers using VNC redirect to it. I'm also unable to run multiple dockers with VNC, like binhex-crusader and binhex-preclear. I get an "execution error", and I can't see the logs showing anything useful.
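     What I'd want to check here (rough sketch; 6080 is the noVNC port the containers use, 5900 is the usual VNC port for a VM's graphics) is which process is actually holding those ports on the host:
     ss -ltnp | grep -E ':(5900|6080)'   # show what is listening on the VNC/noVNC ports and which PID owns it
     netstat -tlnp | grep 6080           # the same check with netstat, if preferred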
  14. Thanks @Squid I managed to get everything restored. I used UFS Explorer Standard Recovery and ReclaiMe Pro to recover my files. For anyone who might want to do this in future, using UFS Explorer for each drive individually (not as RAID) works fine, as the virtual share filesystem is split out across the actual separate filesystems of the disks. I restored each disk separately, to another location, then recombined the filesystem manually. I'm starting to copy them back to the array now.
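     For the copy back I'm using something along these lines (the paths are examples only; a dry run with -n first is a good sanity check):
     rsync -avhn /mnt/disks/recovery/share1/ /mnt/user/share1/               # dry run: list what would be copied
     rsync -avh --progress /mnt/disks/recovery/share1/ /mnt/user/share1/     # copy one recovered share back into the array, preserving attributes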
  15. As I suspected, there was a syntax error in my command, in particular with the '-delete' placement. It should always be at the end, or find will ignore the filters and delete everything. I find this pretty stupid, but it is what it is; I should have checked the small print. So I'm the stupid one. Ironically, what seems really lost was my 'backup' share that I had set up. This was intended to be the source of a clone to my old decommissioned server, when that was brought online as a backup server later this week. I'm trying to restore this, as the array has not been overwritten. Unfortunately, this happened first. As it turns out, after more careful review, most of what is lost can be replaced or regenerated. The 'system' and 'appdata' shares are basically lost, and locked onto the cache drives in btrfs, which I'm trying to reconstruct. I noticed that all my VM and Docker configurations disappeared from the UI. I'm reconstructing the cache drives, and trying to pull the file structure off them. I'm having some success using recovery tools. ReclaiMe Pro and UFS Recovery seem to be doing pretty well so far, but we'll see. Saying that, is there any way I can restore my docker settings using a backup (meaning what I pull off the cache)? Also, for example in a normal backup/restore scenario, how do I backup/restore 'system' data and configuration?
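     To spell out the -delete issue for anyone else: find evaluates its expressions left to right, so if -delete comes before the -name tests it fires on every file find visits. What I should have run (with a -print dry run first) was something like:
     find . -type f -name '.DS_Store' -print       # dry run: list what would be removed
     find . -type f -name '.DS_Store' -delete      # -delete last, so only matching files are removed
     find . -type f -name '._.DS_Store' -print     # same dry-run idea for the '._.DS_Store' files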
  16. I'm not using a Mac anymore, hence why I wanted rid of these files. I hadn't had an issue with this before, and it wasn't a random command, but this is very odd behaviour I think. I'm still pretty shocked this happened; I think it may have been a syntax error. Hey thanks, I'm currently looking at: https://www.ufsexplorer.com/raise-data-recovery-xfs.php I think this is an older version though, so best I scan with the newest. Thanks for your help.
  17. Hi, if anyone can help me please I would be appreciative. I just got my Unraid server up and running and everything has been running pretty sweet for a few weeks. Today, I attempted to delete the '.DS_Store' and '._.DS_Store' files from my server, so I ran a few commands in the Unraid terminal. The commands I ran:
     find . -name '.DS_Store' -type f -delete && find . -name '._.DS_Store' -type f -delete
     find . -type f -delete -name '.DS_Store' -name '._.DS_Store'
     I definitely didn't run any other commands, or have any other malicious process running. Luckily I managed to stop the process when I noticed some files had gone. As it is, a bunch of critical stuff got deleted. I'm not sure how this happened, but it seems that a bunch of personal and critical data has been deleted. In a panic, I stopped everything and rebooted the server. Now it seems as though the system files were deleted also, and the server has come up without any of my VMs or Docker containers present. The array drives are formatted using XFS. Does anyone know how this could have happened using these commands? Does anyone know how I can restore my XFS drives? I've stopped the array and am trying to research what to do. I know I should back up, but I had not yet configured a cloud backup for this server. It's definitely been high on my list.
  18. I have not tried emulated CPU, but is there any detriment to doing this? I'm trying to build this Win7 VM as a dedicated Steam gaming system, due to Win10 incompatibility with so many games.
  19. I have a Windows 10 VM that I regularly use, and have no problems with it.
  20. I set up the i440fx VM without any devices, and that is when the multiple-core issue occurs. When I shut down, add cores and restart, I get the reboot crash loop causing startup repair. I'm glad this isn't a Ryzen-specific issue, but I tried Q35 before, and the installer just hangs every time I try. When using Q35, the Win7 installer starts, and sits at the glowing Windows icon forever. So I can't even get it to install with Q35, not sure why.
  21. Hi, I'm running a Ryzen unraid server, on 6.7.2. I've been encountering a strange issue with my Windows 7 VM, where it won't start with more than 1 core. I set up the VM in a pretty standard fashion, and the installer would not start with more than 1 core (I understand this is a known issue), so I continued with a single core for installation and setup. However, now I cannot even start the VM with more than 1 core enabled. It just hangs and reboots before loading Windows, then the startup repair appears. This continues nonstop. Has anyone encountered this or know a workaround?
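     In case it's useful for comparison, this is roughly how I've been checking what the VM is actually being given (the VM name is just an example; run from the Unraid terminal):
     virsh list --all                                         # confirm the exact name libvirt has for the VM
     virsh dumpxml 'Windows 7' | grep -E '<vcpu|<topology'    # show the vCPU count and socket/core/thread topology in the XML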
  22. I'm having the same issue, when trying to setup a Windows 7 VM. Got everything running, but cannot add more cores, as it hangs immediately and will not boot. Has anyone managed to understand what is happening?
  23. Hi, great advice and thanks. Ironically, I was thinking about how to shrink arrays too, so this is much appreciated advice. I don't think I'll be doing it anytime soon though, but it was definitely on my mind. Thanks for this. I currently use all the 3TB disks in a 15TB array on my old (not unRAID) server, but I'm planning to migrate everything to my new unRAID and reuse that as something else. As much storage as possible is what I'd like, until I choose to replace disks with larger ones. 6TB seems a good balance for a while, unless I see some good deals. At this point, I'll probably just add all of my older 3TB disks in, as that will even things out (as if I was using all the 6TB). I've not had any issues with my old disks, as I tend to swap them out every few years before they become an issue (or if I see errors occurring). I currently back up my most important data to a cloud backup, but I do need to think of a new solution that will incorporate unRAID better. Any advice for an unRAID-integrated backup solution? Any plugins or Apps?
  24. Hi, I have finished building my new unRAID server and I'm setting up my array at the moment. I understand that I can use different size disks in unRAID and it doesn't care, but I am wondering what the best disk configuration for me is? I'm planning to consolidate as many disks into my new server as possible, ideally in a single array. I'd like to maximise the usage of my storage server if possible. I have:
     2x 250GB nvme (new, for cache)
     4x 6TB disks (new)
     5x 3TB disks (from my old server)
     1x 2TB (old server)
     2x 1.5TB (old server)
     2x 1TB (old server)
     So I have a few questions:
     - Should I set up the array with 2 parity drives or 1?
     - Should I use 2x 6TB drives for parity?
     - Can I remove drives later, when I want to replace them with higher capacity?
     - Should I only create a single array from the largest drives? Will this matter?
     I'm a new user to unRAID so I'm not sure here.