Dr. Ew

Members
  • Posts: 45

  1. Ah, I got the last one. I had to go deeper down to make a snapshot writable: it was a snapshot of a directory that contained a snapshot of a directory, which itself contained another snapshot, haha. Once I made the offending nested snapshot writable, I was able to delete it. I also forgot that the subvolume location is /mnt/disk2/rosnapdel2 and not /mnt/user/rosnapdel2. Thanks for the help, tips, and guidance. (A sketch of the nested-snapshot cleanup is at the end of this list.)
  2. I was able to delete rosnap1del and rosnap3del; I had to make the entire path writable (/mnt/disk1/backup/snaps/rosnap2del). However, when I try to make rosnap2 writable just like the others (/mnt/disk1/backup/snaps/rosnap2del), it tells me it is not a BTRFS file system. Very strange. (A quick way to check what filesystem a path is on is sketched after this list.)
  3. Thank you. That worked for TimeMachine. I've still got the read-only snapshots I can't delete, though. I tried that with the btrfs subvolume delete command and still wasn't able to remove them.
  4. I think I may have figured it out. After another reinstall, I had the Docker window open when a warning popped up telling me CrashPlan was exceeding inotify's max watch limit. I followed these instructions to increase that limit: https://support.code42.com/CrashPlan/6/Troubleshooting/Linux_real-time_file_watching_errors For now, it seems to have fixed the issue. (The sysctl change is sketched after this list.)
  5. Hi. I've been trying to get this to work consistently for over a week, and I'm not sure what I am doing wrong. At first, the initial login worked as expected and I set up an initial test backup. After that first login, I was not able to get into the WebUI again. I tried everything I could think of, so I uninstalled the Docker container and reinstalled it, and then it allowed me to log in again. I tried both setting it up as a new device and replacing an existing device; neither option makes it all the way through the setup process. The app just gets stuck on 'connecting' no matter what I do. So I tried restarting the container, and it either shows a black screen or sometimes the 'connecting' screen, which hangs indefinitely. So I uninstalled it, removed the image, manually deleted the settings, and then it allowed me to log in again. I then go through the setup process, and either at the end or toward the end it gets stuck on 'connecting' again. Any tips? I've tried allocating anywhere from 8 GB to 64 GB to the container, and I currently have 16 cores allocated to it. I've set permissions as root and assigned a priority of -20. Not sure what else I can do. Correction: here are the three different screens I get stuck/hung on. I was able to back up a large portion on day 1, but after that it hasn't backed up anything. I even tried removing the initial device in the CrashPlan for Business main UI and starting over from scratch, but that didn't work either.
  6. Thank you. That was my mistake in stating /mnt/TimeMachine; it was /mnt/user/TimeMachine. I couldn't delete it in place, but I was able to move the folder to /mnt/disks/cache and delete it there. As far as the BTRFS snapshots go, I created the snapshots as read-only (as root). chmod doesn't work on a read-only snapshot, unfortunately.
  7. Hi guys, I have two read-only snapshots that I need to delete. Also, an old TimeMachine share somehow found its way to /mnt/TimeMachine. I need to delete all three items. As far as I know, the only way to delete a read-only item is to make it writable and then delete it, and the only way I know to do that is by executing the following command: btrfs property set -ts /path/to/snapshot ro false. However, even though I ran that command as root, when I then try to delete the snapshot, the following is returned: rm: cannot remove '/path/to/snapshot': Read-only file system. What other methods can I use to delete these two snapshots? My assumption is that I should be able to use the same approach to delete /mnt/TimeMachine. Help is much appreciated. Thank you. (The snapshot-deletion commands are sketched after this list.)
  8. Did you ever find a solution? I'm having the same problem, but it happens when using Krusader or just transferring a file over the network. I transferred folder X with the following specs: 445 GB, 625,432 files, 415 folders, from an NVMe drive on a client machine to the unRAID cache over SMB. The end result appears to be a successful transfer, but upon trying to access the folder within the unRAID share, it shows a total of 832 MB, 125,212 files, 108 folders. Obviously something is wrong. However, when I drill down one level to X\Folder1, it shows Folder1 as containing 36 GB, 1,600 files, 25 folders, which is pretty close to correct. I can't tell whether it transferred all the files or not. It's highly unacceptable, and I'm not sure why it happens. I will only transfer .rar or .7z archives for storage on unRAID now; anything that isn't a compressed file, I don't trust to transfer to unRAID without error. When I transfer the same folder to my FreeNAS server via SMB, the end result is 445 GB, 625,432 files, 415 folders, exactly what it's supposed to be. Same thing when I transfer the same folder to a Red Hat server. What could be going on here? Let me know if you found a solution. Thanks! (A quick way to verify a transfer is sketched after this list.)
  9. This happened once before, and I had the ability to restore the cache from a very recent backup, which was faster than troubleshooting, so I'm unsure how to correct it now. I have 3 HBAs in this particular server. One HBA failed and I replaced it. Once booted again with the replacement HBA, four disk numbers had been reassigned, so the cache pool shows 4 drives in their correct slots and 4 slots with no device. I put the drives back in the correct corresponding order, and unRAID informs me it will wipe my drives upon spinning up the array. How do I correct this?
  10. I can post diagnostics shortly. Is there anything I can do here? I tried a different port; no difference. When I bring the array up, the system says there is no mountable file system on the data array and on the first cache drive, prompting me to format both. I think I need to create a new config and then reassign the drives, but I'm not sure.
  11. unRAID keeps randomly losing the configuration of both the array and cache disks. It's usually just frustrating, but this time it is telling me all existing data on my cache array will be overwritten upon starting the array. It's a 12-disk RAID 10; all drives are fine and online, yet of the 12 disks, unRAID only shows that it remembers one of them. This happens after every few restarts. Typically I just reassign the drives it doesn't have allocated and start the array. I've seen this once before, where it tells me it will format all drives, but I can't remember how I solved it. I have a small amount of data on the cache array that I can't lose. What's my process for getting the array online without reformatting?
  12. I've posted on this topic before, but the thread got a bit sidetracked (by me). Are there any known issues with using NVMe drives in a RAID 5 cache pool? One of my unRAID servers currently has six 2 TB NVMe drives in RAID 5. Writes to the cache pool are very slow at 350 MB/s, with reads at 750 MB/s. As an unassigned drive, a single NVMe reads and writes at 900+ MB/s. I then tried six 2.5" 1 TB SSDs in RAID 5: 1.4 GB/s read and write, same server, same settings. There must be some sort of bug or incompatibility to be getting only 350 MB/s on the pool when the disk speed test shows each drive capable of 2000+ MB/s, and a pool of the same number of 2.5" drives fully saturates the line. I had a similar issue before when testing 40GbE, but the read/write speed was capable of saturating 10GbE, just not 40GbE. I rebalanced the pool too; no difference. Any ideas here? (A couple of quick checks are sketched after this list.)
  13. I was going through the Chelsio manual to tune a T580. The next step was to unload the drivers, which I did. The step after that was to reload them, but that is not working for me; the NIC no longer shows up. I've done everything I can think of. Can someone tell me how to get it back? (A driver-reload sketch is after this list.)
  14. Send me a PM, I can help you out. Has anyone had luck getting 10GbE to work in High Sierra? I spun up a few macOS VMs and can't get networking active. I've only tried in HS; I'm trying Mojave next. I may have to plug in gigabit Ethernet to get it to work in HS. Curious if anyone else has the same issue?
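
Sketch for post 1 (nested read-only snapshots): a rough outline of the cleanup, assuming hypothetical nested snapshots named inner and innermost under /mnt/disk2/rosnapdel2 (only the outer path comes from the post). The idea is to list the subvolumes below the target, clear the read-only flag on each, then delete from the deepest one outward:

    btrfs subvolume list -o /mnt/disk2/rosnapdel2                          # show subvolumes nested below the target
    btrfs property set -ts /mnt/disk2/rosnapdel2/inner/innermost ro false  # hypothetical deepest snapshot
    btrfs property set -ts /mnt/disk2/rosnapdel2/inner ro false            # hypothetical middle snapshot
    btrfs property set -ts /mnt/disk2/rosnapdel2 ro false
    btrfs subvolume delete /mnt/disk2/rosnapdel2/inner/innermost
    btrfs subvolume delete /mnt/disk2/rosnapdel2/inner
    btrfs subvolume delete /mnt/disk2/rosnapdel2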
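
Sketch for post 2 (the "not a BTRFS file system" error): a quick way to confirm what filesystem actually backs a path and whether the path is really a subvolume, using the path from the post:

    stat -f -c %T /mnt/disk1/backup/snaps/rosnap2del         # prints the filesystem type backing the path
    btrfs subvolume show /mnt/disk1/backup/snaps/rosnap2del  # errors out if the path is not a btrfs subvolume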
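
Sketch for post 4 (the inotify watch limit): a minimal version of the change, with 1048576 as an example value; applied this way it only lasts until the next reboot, so on Unraid it would need to be re-applied at boot (for example from the go file):

    sysctl fs.inotify.max_user_watches               # check the current limit
    sysctl -w fs.inotify.max_user_watches=1048576    # raise it for the running system (example value)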
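
Sketch for post 7 (deleting the read-only snapshots): assuming the items really are btrfs subvolumes, btrfs subvolume delete is the native way to remove one once the ro flag is cleared, and a "read-only file system" error after clearing the flag usually means a nested snapshot below is still read-only, which is what post 1 eventually found. The placeholder path is the one from the post:

    btrfs property get -ts /path/to/snapshot ro        # confirm the current read-only flag
    btrfs property set -ts /path/to/snapshot ro false   # make the snapshot writable
    btrfs subvolume delete /path/to/snapshot             # remove the subvolume itself, rather than rm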
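
Sketch for post 8 (verifying a large transfer): assuming both the source and the unRAID share are reachable from a Linux shell, comparing file counts and byte totals on each side, or running a checksum-only rsync dry run, shows whether anything actually went missing (both paths are placeholders):

    find /path/to/source/X -type f | wc -l       # file count on the source
    du -sb /path/to/source/X                     # total size in bytes on the source
    find /mnt/user/share/X -type f | wc -l       # file count on the unRAID share
    du -sb /mnt/user/share/X                     # total size in bytes on the unRAID share
    rsync -rcn --itemize-changes /path/to/source/X/ /mnt/user/share/X/   # checksum-only dry run; lists differences without copying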
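
Sketch for post 12 (the slow NVMe pool): two quick checks, assuming the pool is mounted at /mnt/cache; first confirm the data and metadata profiles the pool is really using, then measure raw sequential write speed locally so network effects are out of the picture:

    btrfs filesystem df /mnt/cache       # shows the data and metadata profiles in use
    btrfs filesystem usage /mnt/cache    # per-device allocation overview
    dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=10240 oflag=direct status=progress   # ~10 GiB write, bypassing the page cache
    rm /mnt/cache/ddtest                 # clean up the test file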
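
Sketch for post 13 (getting the Chelsio NIC back): assuming the stock in-kernel driver rather than Chelsio's Unified Wire package (in which case the module names may differ), cxgb4 is the usual module for T5-series cards:

    lspci | grep -i chelsio    # confirm the card is still visible on the PCI bus
    modprobe -r cxgb4          # unload the driver if it is partially loaded
    modprobe cxgb4             # load it again
    ip link                    # check whether the interfaces reappeared
    dmesg | tail -n 50         # look for driver errors during the reload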