
Normand_Nadon

Members
  • Posts: 65

  • Joined

  • Last visited

Posts posted by Normand_Nadon

  1. 2 hours ago, primeval_god said:

    That is the expected behavior per my previous comment. It is the way btrfs snapshots work for nested subvolumes.

    I will admit that I did not understand it all at first... My brain is not cooperating this week, I am sick!
    Re-reading it a couple of times helped and it made a lot of sense!

    Do I understand it right that Unraid shows me:
    [screenshot]

    But in reality, this is all smoke and mirrors created by the sorcery of symlinks, and in fact it is structured like this behind the scenes:
    [screenshot]

     

    Am I right to see it that way?

  2. 4 minutes ago, primeval_god said:

    It won't matter if you do. If you have nested subvolumes and snapshot the outermost one, the snapshot will not contain the contents of the nested subvolumes, because BTRFS snapshots are non-recursive with respect to subvolumes. And of course the key piece of info is that a snapshot is a type of subvolume.

     

    To snapshot a snapshot you would have to specifically target the snapshot in the command.


    To save time, I activated the "root" option and it works fine.
    I created a subvolume called ".snapshot".
    For some reason, it works... The full volume gets its snapshot inside the .snapshot subvolume, and the .snapshot folder exists inside the snapshot, but is empty... maybe there is something in Unraid or BTRFS that keeps it from making snapshots of snapshots... can't tell!
    [screenshot]

  3. 7 minutes ago, SimonF said:

    An option to snap the root was added, but you need to toggle the root to show with the switch. It was at a user request.

     

    [screenshot]

    Ooooooooh! Thanks!

    Will the snapshot exclude the snapshot folder? (I don't want to snapshot the snapshot!)

  4. Oooooooh! Thank you @primeval_god and @JorgeB !!!

     

    That is what I was telling you... Snapshots have been around for millennia, so all the documentation I could find assumed I already knew what a snapshot was, from other types of filesystems! You can clearly see that I don't :D

     

    50 minutes ago, JorgeB said:

    In any case I would still recommend creating shares as subvolumes, to make things cleaner, and for the plugin to work.


    Would you mind pointing me to a clear procedure on how to do that?

    My gut tells me this: rename all the shares as [SHARE'S NAME]_OLD, create subvolumes named [SHARE'S NAME], move the data inside those subvolumes (should be near instantaneous as it is just re-mapping the location), then delete the old shares...
    Does that make sense?

    EDIT:
    Oh... and how will FUSE manage that? I have shares that are split between the cache and the array
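    My gut procedure, written out as a hedged sketch (the share name and pool path are hypothetical, it targets the cache pool directly rather than /mnt/user, and it only prints the commands by default instead of running them):

```shell
#!/bin/sh
# Sketch of converting an existing share directory into a subvolume.
# Hypothetical names; DRY_RUN=1 by default so it only prints commands.
POOL=${POOL:-/mnt/cache}   # btrfs pool holding the share
SHARE=${SHARE:-photos}     # hypothetical share name
DRY_RUN=${DRY_RUN:-1}

run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

run mv "$POOL/$SHARE" "$POOL/${SHARE}_OLD"
run btrfs subvolume create "$POOL/$SHARE"
# mv cannot rename() across a subvolume boundary, so it would fall back
# to copying; cp with reflinks clones extents (metadata only), which is
# the near-instantaneous part.
run cp -a --reflink=always "$POOL/${SHARE}_OLD/." "$POOL/$SHARE/"
run rm -rf "$POOL/${SHARE}_OLD"
```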

  5. 5 minutes ago, primeval_god said:

    I am confused as to what you are doing / trying to do here. From the snapshot plugin it looks like you have a subvolume called "snapshots" on /mnt/cache and you have several snapshots of that subvolume, which are located within that subvolume at /mnt/cache/snapshots/2023-*

    I can't help you get less confused! I have no idea what I am doing! :D

     

    Here is what I want to do: I want to take hourly snapshots of the entire drive, and be able to recover if I ever make a stupid mistake...
    I know snapshots don't replace backups, but I don't want to make as many backups as I take snapshots, if that makes sense

  6. First of all, thank you @primeval_god, in 12 or so lines, you made this a million times simpler to understand for me!
    I have been trying to understand the concept for days!
    (I will admit I need to google CoW and what it means... but 95% of your explanation was clear to me!)

     

    11 minutes ago, primeval_god said:

    Snapshots are not "based on a previous snap" they are a Copy on Write copy of a subvolume. For the purpose of restoration there are no dependencies between them, (all of the data sharing stuff is handled by the CoW nature of the FS).


    But what is the meaning of this, then? Does it apply only to remote sends?

    [screenshot]

     

     

     

    14 minutes ago, primeval_god said:

    The simplest way of restoring is to delete the live file or folder and then copy it from a snapshot directory back into place.

    Amazing! I will experiment on this and report if I still need help!
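    For future readers, here is the plan I will try, as a minimal sketch (the paths are hypothetical, and it only prints the commands by default instead of running them):

```shell
#!/bin/sh
# Sketch of restoring one folder from a read-only snapshot.
# Hypothetical paths; DRY_RUN=1 by default so it only prints commands.
SNAP=${SNAP:-/mnt/cache/.snapshots/2023-01-01_0200}  # snapshot to restore from
LIVE=${LIVE:-/mnt/cache/appdata}                     # live subvolume
ITEM=${ITEM:-config/broken-folder}                   # what to roll back
DRY_RUN=${DRY_RUN:-1}

run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

# Delete the damaged live copy, then copy the snapshot's version back.
run rm -rf "$LIVE/$ITEM"
# --reflink=auto clones extents on btrfs, so even big folders come back fast.
run cp -a --reflink=auto "$SNAP/$ITEM" "$LIVE/$(dirname "$ITEM")/"
```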

  7. First of all, thanks for the amazing work on this plugin!

    Here is my question now (with context):
    I have been using Unraid for several years now, but I just got interested in BTRFS snapshots (after destroying a lot of data due to an error I made... A noob error that should never have happened!)
    All the documentation I could find is aimed at people who already know what filesystem snapshots are and how they work... I don't!

    My use-case is:
    To be able to roll back a few hours in case I screw up... the rest is managed by daily replication of data with rsync to another server and some other backup strategies. There is a lot of movement on this server during the week as it syncs with work-related stuff, so hundreds of files might change or be created per week.

    1. When I create snapshots on a schedule, the default value is to base them on the previous snap.
      What am I supposed to do if I want to keep only a week's worth of hourly snapshots, with auto-deletion of older snaps? Won't deleting a previous snap destroy data?
       
    2. How does one roll back a file, folder, or a state of the file system on unraid? We don't have Timeshift or other cool GUI stuff like that of which I am aware. Can it be done while the server is running? Do I need to mount the disk on a live boot and use Timeshift?

    Feel free to send me obvious guides that I might have missed (especially in the form of videos, if available)

     

    Have a nice day

    EDIT: All my snapshots are directly on the filesystem from which they originated for the moment. I don't send them to a remote. The goal is to roll back files if I screw something up, not back up the entire drive to a server on the moon for preservation.
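    Something like this rotation is what I have in mind, as a hedged sketch (the paths are hypothetical, it only prints the commands by default, and — as I understand CoW — deleting an old snap only frees data that no other snapshot still references):

```shell
#!/bin/sh
# Hedged sketch of hourly snapshot rotation (not the plugin's internals).
# Hypothetical paths; DRY_RUN=1 by default so it only prints commands.
SRC=${SRC:-/mnt/cache/appdata}       # subvolume to snapshot
DEST=${DEST:-/mnt/cache/.snapshots}  # flat directory of read-only snaps
KEEP=${KEEP:-168}                    # one week of hourly snapshots
DRY_RUN=${DRY_RUN:-1}

run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

# One new read-only snapshot per run, named by timestamp so names sort.
run btrfs subvolume snapshot -r "$SRC" "$DEST/$(date +%Y-%m-%d_%H%M)"

# Deleting old snapshots does not corrupt newer ones: extents still
# referenced by other snapshots (or the live data) survive, and only
# exclusively-owned data is freed.
ls -1 "$DEST" 2>/dev/null | sort | head -n -"$KEEP" | while read -r old; do
  run btrfs subvolume delete "$DEST/$old"
done
```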

    I found a way... can't tell if it is the right way, but it works!
    I compared the way networking worked on my personal workstation with virt-manager to check why it worked there but not on Unraid.
    I ended up manually typing the network interface settings into the XML and it worked!

    [screenshot]

    So, basically, a direct connection to the br0 bridge...
    The UI does not have this option.
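    In case it helps someone, the stanza I ended up with looks roughly like this (the MAC address is illustrative; libvirt generates one if it is omitted):

```xml
<interface type='bridge'>
  <!-- illustrative MAC; remove to let libvirt generate one -->
  <mac address='52:54:00:aa:bb:cc'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```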

    So, I wanted a little bit more network bandwidth from my Unraid server to the LAN.
    I added a reclaimed four-port NIC to the server and configured 2 links in bond mode 0 (balance-rr)

     

    The speed on NFS is almost 2Gbps, same for docker, works amazingly.

     

    The only thing that refuses to work, is that once I activate bonding, nothing that I do will make my VM function with br0 as it used to...
    All other services happily use br0, even docker.

    The VM is set with br0 and virtio-net, as recommended in the manual.

     

    This is not working... the VM refuses to talk to the network.

    If I set it to virbr0 it will work, but I lose access to the local network

     

    What can be done to have my VM work on br0, or any combination that would give me access to the LAN as well as the Internet?

  10. hello @guy.davis and thank you for your work.

    I was wondering: why is Machinaris using so many resources on my Unraid server?

    I am farming only, and the hard drives are in the same machine, mounted directly inside the container.
    FUSE is not used for those drives; they are directly mounted as /mnt/disks/farmer## (4 x 6TB HDDs)

     

    Machinaris takes anywhere from 6 to 8 GB of RAM and causes regular 100% spikes on CPU cores...

    What explains this behavior?

  11. Okay, I googled this one for weeks, tried many things, but can't find how to fix this issue...

     

    Since I updated to Unraid 6 this fall, from v5*, I have had issues with NFS share permissions... before, it used to be somehow anonymous or something... Every device that connected to the server (and was part of the whitelist) had full access to the files...

    Now, files are owned by "99 - user #99"

    I cannot use, or modify the files...

    [screenshot]

     

    When I create new files on the shares, they are owned by my local user... That should not happen either!

     

    I need help!

     

    Details (some info was redacted):

    Server, Unraid OS 6.11.5

    Share properties: set to private and [my-ip](sec=sys,rw,no_root_squash)

    (no_root_squash was added recently, following some guides)

     

    Clients: Linux PCs (Pop!OS 22.04)

    Mounting options:   nofail,timeo=5,hard,intr

     

    Can someone help me with this?

     

     

    *(I know, I was far behind on updates... but I never got the notification to move to v6 before v5 was EOL! And I accepted every update that I was presented with... can't tell what happened!)

  12. Hey there, I looked around and could not find an answer to this, so I am asking!

    Since I updated my server to Unraid 6.11 (I was on a much older version for some reason), I can't access my Unraid/Nextcloud drive through NFS; Nextcloud keeps locking the permissions, even if I reset them as root

     

    Setup:
    Nextcloud docker, updated weekly.
    The server has been up for almost 3 years and the shares and folders worked fine.
    Drive shared between Nextcloud and users from NFS share:

    /mnt/user/nextcloud/

     

    When accessing through an NFS client, on Linux, the share is locked; then I unlock the permissions via ssh, and they get changed back a few seconds later... I remember I did something about root squash a long, long time ago, but can't tell if it had something to do with that particular share

    Any thoughts?

  13. @jiggad369, I found the solution...

     

    By digging in the system logs, I realized that something has changed since the 6.9.x era... If the VM has spaces in the title, nginx can't forward the logs...
    I removed the spaces and special characters in the name of my VM (replaced them with - and _) and it fixed the issue.

     

    I could file a bug report and all that, but I have bigger issues at the moment, so if you have time to do it, it would be great!

  14. 1 hour ago, sorrow said:


    Well, they are usually linked to a specific URL, so they aren't housed on the server (as far as I'm aware) in any way. This is an example:

     

    Normal CDN URL:
    
    https://unraid-dl.sfo2.cdn.digitaloceanspaces.com/stable/unRAIDServer.plg
    
    Edited non-CDN URL where I just remove the cdn.:
    https://unraid-dl.sfo2.digitaloceanspaces.com/stable/unRAIDServer.plg
    
    

     

     

    The file linked here: https://unraid-dl.sfo2.digitaloceanspaces.com/unraid-api/dynamix.unraid.net.plg
    is editable; it has 2 links inside that have the cdn. prefix.
    You could copy the text, remove the cdn. parts, save the file in a directory on the server, like appdata, and install the plugin manually in Unraid.

  15. 10 minutes ago, sorrow said:


    I'm glad that you've been able to fix stuff via a reboot. That's awesome.

    Just as a note, I took every single link that had cdn in it and just removed the cdn to make it work. 
    However, the script on the back-end (or in the plugin itself, etc..) adds in the cdn later on in the process, so kinda' stuck after that.

     


    Could you edit the script (.plg file) and remove the cdn inside?

  16. On 1/2/2022 at 10:06 AM, NMGMarques said:

    I know I can play the game in offline mode, though I can't find any reference in this thread as to how to put the server in offline mode.


    The server verifies logged-in users against the Minecraft main servers to make sure they are licensed... If, for some reason, your server can't reach Mojang's servers, the user will be refused... To mitigate that, you can change this line in your "server.properties" file, in appdata, to false:

    online-mode=true

     

     

    I am new to this (been running my server for less than 2 weeks), but this source helped me a lot with setting up the server:
    https://minecraft.fandom.com/wiki/Server.properties

  17. 50 minutes ago, NMGMarques said:

    What am I missing? I am trying to connect to the server using 1.18.1. 

    [17:24:46] [Server thread/INFO]: /10.253.0.5:60477 lost connection: Failed to verify username!

     


    Your server cannot connect to Mojang to authenticate the user, if I am not mistaken... Most Minecraft docker images I have seen have an option to disable "Online mode"...
    Or you can ensure your server has access to the Internet to query for the username
