
Dr. Ew

Members
  • Posts: 45

Posts posted by Dr. Ew

  1. Ah, I got the last one. I had to go deeper down to make a snapshot writable. It was a snapshot of a directory that included a snapshot of a directory of a snapshot of a directory, haha. Once I made the offending directory writable, I was able to delete it.

     

    I also forgot that the subvolume location is /mnt/disk2/rosnapdel2, not /mnt/user/rosnapdel2.

     

    Thanks for the help, tips, and guidance.
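
    For anyone who runs into the same nested situation, the rough shape of what I ended up doing was something like this (the paths here are placeholders, not my real ones):

    # Clear the read-only flag on each nested snapshot in the chain
    btrfs property set -ts /mnt/disk2/outer ro false
    btrfs property set -ts /mnt/disk2/outer/inner ro false
    btrfs property set -ts /mnt/disk2/outer/inner/innermost ro false
    # With everything in the path writable, delete the snapshots innermost-first
    # (a plain rm -r also works at that point)
    btrfs subvolume delete /mnt/disk2/outer/inner/innermost
    btrfs subvolume delete /mnt/disk2/outer/inner
    btrfs subvolume delete /mnt/disk2/outer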

  2. 17 hours ago, johnnie.black said:

    That's the correct command to delete btrfs snapshots, post a list of current snapshots:

     

    
    btrfs sub list /path

    There are very many of them, but here is an excerpt that includes the three I am trying to get rid of (rosnap1del, rosnap2del, and rosnap3del):

     

    root@nAR1:~# btrfs sub list /mnt/cache
    ID 395 gen 243944 top level 5 path loops.samples
    ID 397 gen 234007 top level 5 path mixes
    ID 399 gen 167630 top level 5 path domains
    ID 400 gen 232037 top level 5 path media
    ID 402 gen 243868 top level 5 path gDrive
    ID 645 gen 167635 top level 397 path gDrive/Snaps/gDrive.11.28.2019
    ID 646 gen 167640 top level 395 path loops.samples/snaps/loops.samples.12.21.2019
    ID 647 gen 167684 top level 400 path media/snaps/media.12.28.2019
    ID 775 gen 232626 top level 5 path media

    root@nAR1:~# btrfs sub list /mnt/disk1
    ID 3100 gen 6598 top level 5 path Backup
    ID 5384 gen 6429 top level 3100 path backup/snaps/rosnap2del
    ID 5386 gen 6599 top level 5 path archive
    ID 5387 gen 6442 top level 5386 path archive/pre.2020.archive/snaps/nARarchive.12.28.2019

    root@nAR1:~# btrfs sub list /mnt/disk2
    ID 3470 gen 3720 top level 5 path rosnap1
    ID 3471 gen 3239 top level 3470 path rosnap1/snaps/rosnap1del

    ID 3472 gen 3796 top level 5 path archive
    ID 3473 gen 3245 top level 3472 path archive/pre.2020.archive/snaps/rosnap3del
    ID 3474 gen 3806 top level 5 path Backup

     

    I was able to delete rosnap1del and rosnap3del. I had to make the entire path writable (/mnt/disk1/backup/snaps/rosnap2del).

     

    However, with rosnap2del, when I try to make it writable just like the others (/mnt/disk1/backup/snaps/rosnap2del), it tells me it is not a BTRFS file system. Very strange.
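
    In case it makes the question clearer, this is the kind of check I mean for that path (same path as in the listing above):

    # Double-check what filesystem the path actually resolves to
    df -T /mnt/disk1/backup/snaps/rosnap2del
    # Confirm it is really a btrfs subvolume and show its details
    btrfs subvolume show /mnt/disk1/backup/snaps/rosnap2del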

  3. On 1/5/2020 at 7:22 AM, Djoss said:

    Do you have anything interesting in the container's log?

    I think I may have figured it out. After another reinstall, I had the docker window open and a warning popped up telling me CrashPlan is exceeding inotify's max watch limit.

     

    I followed these instructions to increase that limit: https://support.code42.com/CrashPlan/6/Troubleshooting/Linux_real-time_file_watching_errors

     

    For now, it seems to have fixed the issue. 
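
    The gist, as I understand it, is raising fs.inotify.max_user_watches. Roughly like this (the value is just an example; the Code42 article has their recommended number):

    # Raise the inotify watch limit immediately (example value)
    sysctl -w fs.inotify.max_user_watches=1048576
    # Persist it across reboots on a typical Linux install
    echo "fs.inotify.max_user_watches=1048576" >> /etc/sysctl.conf
    # On unRAID, I believe it instead needs to go in /boot/config/go to survive a reboot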

  4. Hi. I've been trying to get this to work consistently for over a week, and I'm not sure what I am doing wrong. At first, the initial login worked as expected, and I set up an initial test backup. After the initial login, I was not able to get into the WebUI again. I tried everything I could think of, so I uninstalled the docker and reinstalled it, and then it allowed me to log in again. I tried both setting up as a new device and replacing an existing device; neither option makes it all the way through the setup process. The app just gets stuck on connecting no matter what I do.

     

    So I tried to restart the docker, and it either shows a black screen or sometimes shows the 'connecting' screen, which it hangs on indefinitely.

     

    So I uninstalled, removed the image, and manually deleted the settings, and then it allowed me to log in again. I then went through the setup process, and either at the end or towards the end, it got stuck on 'connecting' again.

     

    Any tips? I've tried allocating anywhere from 8 GB to 64 GB to the container. I've currently got 16 cores allocated to the docker. I've set permissions as root and assigned a priority of -20. Not sure what else I can do.

     

     

    *Correction: here are the three different screens I get stuck/hung on. I was able to back up a large portion on day 1, but after that it hasn't backed up anything. I even tried removing the initial device (in the CrashPlan for Business main UI) and then starting over from scratch, but that didn't work either.

    (screenshots of the three screens attached)

  5. 3 hours ago, Squid said:

    Don't think that's the correct path, as that's in RAM, and a simple reboot would remove it.

     

    Assuming that the path is /mnt/user/... and you're getting read-only file system, are you sure the file system on the disk(s) aren't mounted read-only?  I don't use BTRFS, but doesn't a simple chmod work?

    Thank you. That was my mistake in stating /mnt/TimeMachine; it was /mnt/user/TimeMachine. I couldn't delete it in place, but I was able to move the folder to /mnt/disks/cache and delete it there.

     

    As far as the BTRFS snapshots go, I created them as read-only (as root). chmod doesn't work on a read-only snapshot, unfortunately.
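
    To illustrate what I mean: the read-only bit is a subvolume property rather than a file mode, so checking and clearing it looks roughly like this (placeholder path) rather than chmod:

    # Show the subvolume's read-only property (prints ro=true for a read-only snapshot)
    btrfs property get -ts /path/to/snapshot ro
    # Clear it so the snapshot can be deleted
    btrfs property set -ts /path/to/snapshot ro false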

  6. Hi guys,

     

    I have two read-only snapshots which I need to delete. Also, somehow an old TimeMachine share found its way to /mnt/TimeMachine. I need to delete all three items.

     

    As far as I know, the only way to delete a read-only item is to make it writable and then delete it. The only way I know how to do that is by executing the following command: btrfs property set -ts /path/to/snapshot ro false

     

    However, I have executed the command as root, and when I then run the command to delete the snapshot, the following is returned: rm: cannot remove '/path/to/snapshot': read-only file system.

     

    What other methods can I use to delete these two snapshots? My assumption is that I should be able to use the same approach to delete /mnt/TimeMachine.

     

    Help is much appreciated. Thank you.
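
    For reference, this is roughly the sequence I've been attempting (the path is a placeholder):

    # Flip the read-only property, then try to remove the snapshot
    btrfs property set -ts /path/to/snapshot ro false
    rm -r /path/to/snapshot
    # Deleting the subvolume directly is presumably the other option:
    # btrfs subvolume delete /path/to/snapshot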

  7. On 5/27/2019 at 9:32 AM, bounsky said:

    Hi jonathanm, 

    /mnt/disk4/Backup4/_2019-01-03-D/gerry/Documents/__IOT/__ Projects/T11_RFID_Tutorial_Sketch/Arduino-EEPROMEx-master/extras/Documentation/html
    141 characters including backslashes

    After a successfully finished media/Disk4 to NTFS 4TB copy with Krusader, there were 25 files missing under \html on the 4TB disk.

    Did you ever find your solution?

     

    I'm having the same problem, but it happens when using Krusader or just transferring a file over the network.

     

    I transferred folder X (445 GB, 625,432 files, 415 folders) from an NVMe drive on a client machine to the unRAID cache, over SMB.

     

    The end result is what appears to be a successful transfer. Upon trying to access the folder within the unRAID share, it shows a total of 832 MB, 125,212 files, 108 folders. Obviously something is wrong. However, when I drill down one level to X\Folder1, it shows Folder1 as containing 36 GB, 1,600 files, 25 folders, which is pretty close to correct. I can't tell whether it transferred all the files or not. That's highly unacceptable, and I am not sure why it happens. For now I will only transfer .rar or .7z archives for storage on unRAID; anything that isn't a compressed file, I don't trust to transfer to unRAID without error.

     

    When I transfer the same folder to my FreeNAS server via SMB, the end result is 445 GB, 625,432 files, 415 folders, exactly what it's supposed to be. Same thing when I transfer the same folder to a Red Hat server.

     

    What could be going on here? Let me know if you found a solution. Thanks!
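
    For comparison purposes, this is roughly how the counts could be checked from the unRAID shell (the share path is just an example):

    # Count files and total size of the transferred folder on the unRAID side
    find /mnt/user/share/X -type f | wc -l
    du -sh /mnt/user/share/X
    # A checksum-based dry run against the source would also surface anything that was dropped
    rsync -rcn --stats /path/to/source/X/ /mnt/user/share/X/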

  8. This happened once before, and I was able to restore the cache from a very recent backup, which was faster than troubleshooting, so I am unsure how to correct it now.

     

    I have three HBAs in this particular server. One HBA failed, and I replaced it. Once booted again with the replacement HBA, four disk assignments had changed, so the cache pool shows four drives in their correct slots and four slots with no device. I put the drives back in the correct corresponding order, and unRAID informs me it will wipe my drives upon starting the array.

     

    How do I correct this now?
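
    For reference, I can pull the pool membership from the command line with something like this (standard commands, nothing here is from my actual output):

    # List the btrfs pool members by UUID, independent of unRAID's slot assignments
    btrfs filesystem show
    # Map the /dev names back to serial numbers and models to match physical slots
    lsblk -o NAME,SIZE,SERIAL,MODEL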

  9. UnRAID keeps randomly losing the configuration of both array and cache disks. It's usually just frustrating, but this time it is telling me that all existing data on my cache array will be overwritten upon starting the array. It's a 12-disk RAID10, and all drives are fine and online. However, of the 12 disks, unRAID only shows that it remembers one of them.

     

    This happens after every few restarts. Typically I just re-assign the drive it has lost and start the array. I've seen this once before, where it tells me it will format all the drives, but I can't remember how I solved it. I have a small amount of data on the cache array that I can't lose.

     

    What's my process for getting the array online without reformatting?

     

  10. I’ve posted on this topic before, but the topic got a bit sidetracked (by me). 

     

    Are there any known issues with using NVMe drives in a RAID5 cache pool?

     

    In one of my unRAID servers right now there are six 2 TB NVMe drives in RAID5. Writes to the cache pool are very slow at 350 MB/s; reads are at 750 MB/s.

     

    As an unassigned drive, a single NVMe reads and writes at 900 MB/s+.

     

    I then tried six 2.5” 1 TB SSDs in RAID5: 1.4 GB/s read and write.

     

    Same server, same settings. There must be some sort of bug or incompatibility to be getting only 350 MB/s on the array when a disk speed test shows each drive capable of 2,000+ MB/s, and an array of the same number of 2.5” drives fully saturates the line. I had a similar issue before when testing 40GbE: the r/w speed was capable of saturating 10GbE, just not 40GbE. I rebalanced the array too; no difference.

     

    Any ideas here? 
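
    If anyone wants to reproduce the comparison, a simple direct-I/O test along these lines shows the same kind of numbers (file path and size are just examples):

    # Sequential write straight to the cache pool, bypassing the page cache
    dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=8192 oflag=direct
    # Sequential read of the same file
    dd if=/mnt/cache/ddtest.bin of=/dev/null bs=1M iflag=direct
    rm /mnt/cache/ddtest.bin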

     

     

  11. 39 minutes ago, TheBlueKingLP said:

    Where can I find apfs-patched.efi?

    Send me a PM and I can help you out.

     

    Has anyone had luck getting 10GbE to work in High Sierra? I spun up a few macOS VMs and can't get networking active. I've only tried in HS; trying Mojave next.

     

    I may have to plug in gigabit Ethernet to get it to work in HS.

     

    Curious if anyone else has the same issue?

  12. This has been something on my to-do list for a while, and I feel like knocking it out now.

     

    I want to be able to send clients files from cloud storage. Instead of using gDrive or Dropbox, I want to use ownCloud, Nextcloud, or an alternative to create shareable links with a custom domain.

     

    So let's say either a domain I own or a DDNS domain. I want to send a client files to download with something like: "Dr. Ooh has sent you a file to download: www.nARctic.cloud/xxxxxx.rar".

     

    That's the bare minimum. Ideally, the landing page the client reaches is custom as well. I'm thinking that's something I could set up in a web server on one of my servers. Yeah?

     

    How do I go about it?
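
    My rough starting assumption for the Nextcloud side is something like the below; the port and paths are just what I'd pick on unRAID, and the custom domain still needs DNS plus a reverse proxy with SSL in front of it:

    # Run the official Nextcloud image, keeping its data under appdata
    docker run -d --name nextcloud \
      -p 8080:80 \
      -v /mnt/user/appdata/nextcloud:/var/www/html \
      nextcloud
    # Then point www.nARctic.cloud at this box and proxy it through to port 8080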

  13. Oh, and also, along the same lines, I jumped down a rabbit hole searching for VNC or virtual desktop software that can display multiple monitors from one VM; couldn't find much yet. Anyone know of software to accomplish this? (It looks like another feature of GRID/MxGPU.)

  14. GPU pass-through is no doubt a great tool for unRAID, but I am very curious about virtualized GPUs. I am just starting to research it right now, but it seems NVIDIA GRID and AMD MxGPU are the available options. I could have sworn I heard of other, open-source ways to do it.

     

    The use case would be remote VMs. Is it possible to use my installed GPUs to provide graphics acceleration for virtual machines? In my layman's mind it seems it should be possible to do with one VM, without advanced software, or by rerouting the output of the GPU somehow.

     

    MxGPU and GRID do this, but they are geared toward multiple users running lots of VMs. For me, purchasing Tesla or Quadro GPUs and the software to do this would be an option, if unRAID supports it. But it would be even better if I could use my current GPUs to accelerate VM graphics. In my various unRAID servers I have RX 580s, 1080 Tis, 1060s, and an RTX 2080. Wondering if I can use any of those for this purpose.

  15. I couldn't find anywhere in the documentation whether unRAID supports a direct connection and a connection through a switch at the same time, and I've not been able to get that setup working either.

     

    For instance, a dual-port NIC connecting to two separate subnets:

     

    Port 1 -> 10GbE Switch On Subnet 1 

    Port 2 -> Direct attach to client or server on Subnet 2. 

     

    I’ve tried with no success. Is that possible too?
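
    Conceptually, all I'm after is the two ports living on different subnets, something like the following at the shell level (interface names and addresses are made up; on unRAID this would normally be configured through Network Settings):

    # Port 1: on the 10GbE switch's subnet
    ip addr add 192.168.1.10/24 dev eth0
    ip link set eth0 up
    # Port 2: direct-attached link on its own subnet
    ip addr add 192.168.2.10/24 dev eth1
    ip link set eth1 up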

     

  16. 41 minutes ago, itimpi said:

    UnRAID does not support passing connections through to another server in the manner your diagrams suggest :(

     

    What are you actually trying to achieve?    If we understood better the problem you are actually trying to solve we might be able to give some sensible guidance.   For instance is it a single unified view of all the data regardless of the server it is on, or are you looking at some sort of workload balancing across the servers.

    Now that you mention it, a single unified view of all data is a desired goal. I didn't know this was possible, but the general idea is coming up on my list. I'd like to learn more.

     

    The reason behind the initial post is twofold. I'm out of ports on both my 40GbE and 10GbE switches, and I need to add a few more servers to my network. The Chelsio NIC provides a spider mode, allowing four 10GbE connections to a single server.

     

    Secondly, I've found that I am able to achieve faster transfer speeds when they are initiated from VMs. For instance, I can get a 2.5 gigabyte-per-second transfer from a VM on Server 1 to any of my clients, whereas initiating a transfer to the same client, from the same share, from outside the VM results in less than 500 megabytes per second. No idea why. But a connection like that for select servers would allow me to create a separate network for machines that require that throughput but not that level of bandwidth.

     

     

  17. Is it possible to create a bridge between multiple physical servers? If so, how do I achieve it?

     

    This is what I am looking to do with a couple of servers, either A or B. 

     

    I want all clients to connect to a switch and then to a bonded NIC on the first server. All clients connect to Server 1 through the switch, and they reach Servers 2 and 3 through Server 1. Servers 2 and 3 would ideally share Server 1's internet connection. The only difference between Option A and Option B is that in A, Servers 2 and 3 connect directly to Server 1, while in B, Server 3 connects to Server 2 and Server 2 connects to Server 1. I assume there is no benefit to B, and probably more complexity.

     

    Alternatively, it would actually be better for me to remove the switch from the equation. Could I connect a single client directly to Server 1, instead of the switch, and then still have either Option A or Option B?

     

    I tried setting this up with not much luck. But I likely wasn't approaching it in the right way. 
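
    To show roughly what I mean by bridging at the OS level, on a plain Linux box it would look something like this (interface names are placeholders; on unRAID the equivalent would be the bridging/bonding options in Network Settings):

    # Create a bridge and add the NIC facing the clients/switch plus the NIC facing Server 2
    ip link add name br1 type bridge
    ip link set eth0 master br1   # port toward the clients/switch
    ip link set eth1 master br1   # port toward Server 2
    ip link set br1 up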

     

    (diagrams of Option A and Option B attached)
