Posts posted by xylon

  1. It seems like the second NFS share I configured in the exports file doesn't use the cache. How would I reconfigure this in Unraid?

    20240303_15h33m08s_grim.png

  2. I tested it with a new file using the following command:
     

    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --bs=4k --iodepth=64 --readwrite=randrw --rwmixread=75 --size=4G --filename=/path/to/git-data/testfile

    It begins writing to the array at about 200 MB/s, then it gets really slow. It never writes to the cache.

    Do you know how I could reconfigure or debug this further?
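
    For reference, one way to check which device the test file physically landed on would be to look at the underlying paths on the server itself (a sketch; "cache" is assumed to be the pool name and <share> is a placeholder for the share the NFS export maps to):

    ```
    # Compare the pool copy with any copies on the array disks
    ls -lh /mnt/cache/<share>/testfile              # copy on the cache pool, if any
    ls -lh /mnt/disk*/<share>/testfile 2>/dev/null  # copies on the array disks
    ```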

  3. The NFS share is set to 'Use cache: Prefer'. However, when starting a stress test it only writes to the HDD array and not to the SSD pool.

     

    The following are my exports:

    cat /etc/exports
    # See exports(5) for a description.
    # This file contains a list of all directories exported to other computers.
    # It is used by rpc.nfsd and rpc.mountd.
    
    "/mnt/user/tftp" -fsid=100,async,no_subtree_check *(rw,sec=sys,insecure,anongid=100,anonuid=99,all_squash)
    "/mnt/user/tftp/arch" *(rw,no_root_squash,no_subtree_check)

    The following is my nfsmount.conf:

    cat /etc/nfsmount.conf 
    #
    # /etc/nfsmount.conf - see nfsmount.conf(5) for details
    #
    # This is an NFS mount configuration file. This file can be broken
    # up into three different sections: Mount, Server and Global
    # 
    # [ MountPoint "Mount_point" ] 
    # This section defines all the mount options that
    # should be used on a particular mount point. The '<Mount_Point>'
    # string need to be an exact match of the path in the mount 
    # command. Example:
    #     [ MountPoint "/export/home" ]
    #       background=True
    # Would cause all mount to /export/home would be done in
    # the background
    #
    # [ Server "Server_Name" ]
    # This section defines all the mount options that
    # should be used on mounts to a particular NFS server. 
    # Example:
    #     [ Server "nfsserver.foo.com" ]
    #       rsize=32k
    #       wsize=32k
    # All reads and writes to the 'nfsserver.foo.com' server 
    # will be done with 32k (32768 bytes) block sizes.
    #
    [ NFSMount_Global_Options ]
    # This statically named section defines global mount 
    # options that can be applied on all NFS mount.
    #
    # Protocol Version [3,4]
    # This defines the default protocol version which will
    # be used to start the negotiation with the server.
    # limetech - start negotiation with v4
    #Defaultvers=3
    Defaultvers=4
    #
    # Setting this option makes it mandatory the server supports the
    # given version. The mount will fail if the given version is 
    # not support by the server. 
    # Nfsvers=4
    #
    # Network Protocol [udp,tcp,rdma] (Note: values are case sensitive)
    # This defines the default network protocol which will
    # be used to start the negotiation with the server.
    # Defaultproto=tcp
    #
    # Setting this option makes it mandatory the server supports the
    # given network protocol. The mount will fail if the given network
    # protocol is not supported by the server.
    # Proto=tcp
    #
    # The number of times a request will be retired before 
    # generating a timeout 
    # Retrans=2
    #
    # The number of minutes that will retry mount
    # Retry=2
    #
    # The minimum time (in seconds) file attributes are cached
    # acregmin=30
    #
    # The Maximum time (in seconds) file attributes are cached
    # acregmin=60
    #
    # The minimum time (in seconds) directory attributes are cached
    # acregmin=30
    #
    # The Maximum time (in seconds) directory attributes are cached
    # acregmin=60
    #
    # Enable Access  Control  Lists
    # Acl=False
    #
    # Enable Attribute Caching
    # Ac=True
    #
    # Do mounts in background (i.e. asynchronously)
    # Background=False
    #
    # Close-To-Open cache coherence
    # Cto=True
    #
    # Do mounts in foreground (i.e. synchronously)
    # Foreground=True
    #
    # How to handle times out from servers (Hard is STRONGLY suggested)
    # Hard=True
    # Soft=False
    #
    # Enable File Locking
    # Lock=True
    #
    # Enable READDIRPLUS on NFS version 3 mounts
    # Rdirplus=True
    #
    # Maximum Read Size (in Bytes)
    # Rsize=8k
    #
    # Maximum Write Size (in Bytes)
    # Wsize=8k
    #
    # Maximum Server Block Size (in Bytes)
    # Bsize=8k
    #
    # Ignore unknown mount options
    # Sloppy=False
    #
    # Share Data and Attribute Caches
    # Sharecache=True
    #
    # The amount of time, in tenths of a seconds, the client
    # will wait for a response from the server before retransmitting
    # the request.
    # Timeo=600
    #
    # Sets all attributes times to the same time (in seconds)
    # actimeo=30
    #
    # Server Mountd port mountport
    # mountport=4001
    #
    # Server Mountd Protocol
    # mountproto=tcp
    #
    # Server Mountd Version
    # mountvers=3
    #
    # Server Mountd Host
    # mounthost=hostname
    #
    # Server Port
    # Port=2049
    #
    # RPCGSS security flavors 
    # [none, sys, krb5, krb5i, krb5p ]
    # Sec=sys
    #
    # Allow Signals to interrupt file operations
    # Intr=True
    #
    # Specifies  how the kernel manages its cache of directory
    # Lookupcache=all|none|pos|positive
    #
    # Turn of the caching of that access time
    # noatime=True
    # limetech - default is actually False, we want True
    noatime=True
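
    For completeness, the active exports can be compared against these files and re-applied after any edit (a sketch using standard exportfs options):

    ```
    exportfs -v    # list the currently exported paths with their effective options
    exportfs -ra   # re-export everything after editing /etc/exports
    ```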

    alfheim-diagnostics-20240303-1320.zip

  4. I have set both the system and appdata shares to 'Use cache: Yes', stopped all Docker containers and VMs and disabled them, and invoked the mover. Yet some files still remain on the cache. Could somebody point me in the right direction?

     

    These are the files remaining:

    ```

    /mnt/cache/
    ├── ArrayVdisks
    ├── Docker
    ├── appdata
    │   ├── openvpn-as
    │   │   └── lib
    │   │       ├── liblber.so -> liblber-2.4.so.2.10.7
    │   │       ├── libldap.so -> libldap-2.4.so.2.10.7
    │   │       ├── libldap_r.so -> libldap_r-2.4.so.2.10.7
    │   │       ├── libmbedtls.so.9 -> libmbedtls.so.1.3.16
    │   │       ├── libtidy.so -> libtidy-0.99.so.0.0.0
    │   │       └── pkgconfig
    │   │           ├── python.pc -> python2.pc
    │   │           └── python2.pc -> python-2.7.pc
    │   └── unms
    │       ├── cert
    │       │   ├── cert -> /config/cert
    │       │   ├── live.crt -> ./localhost.crt
    │       │   └── live.key -> ./localhost.key
    │       └── usercert
    │           └── usercert -> /config/usercert
    ├── system
    │   ├── docker
    │   │   └── docker.img
    │   └── libvirt
    │       └── libvirt.img
    └── vdisks

    ```
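
    For reference, one way to check whether anything still holds these files open (which would keep the mover from moving them) might be this sketch; it assumes lsof is installed on the server:

    ```
    # List every open file on the cache pool mount point
    lsof /mnt/cache 2>/dev/null
    ```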

     

    My current Docker configuration has the paths set as recommended:

    Docker vDisk location:

    /mnt/user/system/docker/docker.img

    Default appdata storage location:

    /mnt/user/appdata/

    alfheim-diagnostics-20240211-1049.zip

  5. I am trying to set up PXE diskless booting hosted on my Unraid server. My disk image is mounted by the server using the go file; however, NFS is flaky with files in a mounted directory, so I want to export this directory as well.

    I want to export the following folders:

    /mnt/user/tftp/

    /mnt/user/tftp/arch

     

    My current exports:

    /etc/exports

    "/mnt/user/tftp" -fsid=100,async,no_subtree_check *(rw,sec=sys,insecure,anongid=100,anonuid=99,all_squash)

     

    I was not able to find the correct way to export a subfolder of a share. Does anybody have any suggestions?
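
    One possibility might be a second export line that mirrors the first but gets its own fsid (a sketch only; the fsid value 101 is an assumption and the option set is simply copied from the existing line):

    ```
    "/mnt/user/tftp/arch" -fsid=101,async,no_subtree_check *(rw,sec=sys,insecure,anongid=100,anonuid=99,all_squash)
    ```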

  6. But it was removed, and I am still missing my vdisks and Docker containers. It still has some sort of data on it: when I mounted it using Unassigned Devices it showed up as a 480 GB drive with 835 GB used and 2.16 TB of free space. How can I put it back into the pool without losing the data, or what is the best course of action? I actually configured the pool as RAID0 for performance. Should I really run the command before trying to add the BPX unassigned drive back into the pool?
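
    The odd capacity reported by Unassigned Devices (a 480 GB drive showing 2.16 TB free) suggests the drive may still carry the metadata of the whole RAID0 pool. One way to check is the following sketch; it assumes the pool is btrfs, as multi-device Unraid cache pools are:

    ```
    # Show every btrfs filesystem visible to the system, with its UUID and member devices
    btrfs filesystem show
    ```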

    WIN_20210220_12_50_39_Pro.jpg

    WIN_20210220_12_53_15_Pro.jpg

  7. Sorry for the confusion: there was no option for me to add the drive back in without needing to format it. I neither formatted the drive nor the pool. I left the system basically untouched; no rebuild was attempted.

    15 minutes ago, JorgeB said:

    Snapshots are not automatic, and for now there's no way to do it using the GUI, only manually.

    Well, then I have no snapshots.

     

    Should I try something like that?

    https://wiki.unraid.net/UnRAID_6/Storage_Management#Repairing_a_File_System
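
    If the pool is btrfs, a read-only filesystem check along those lines might look like this (a sketch; /dev/sdX1 is a placeholder device node, and the device has to be unmounted first, e.g. with the array stopped or started in maintenance mode):

    ```
    # Read-only check: reports problems without modifying anything
    btrfs check --readonly /dev/sdX1
    ```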

  8. I formatted none of them. I rebooted into normal mode, yet the problem still persists. I hope these diagnostics are still useful.

    52 minutes ago, JorgeB said:

    Formatting would never be the thing to do but did you format that device outside the pool or the pool itself? If you still dind't reboot after that please post the diagnostics.

     

    Where were you storing the snapshots?

    I thought Unraid was configured in such a way that it would create those snapshots?

    wahkiacusspart-diagnostics-20210220-0935.zip

  9. After I cleaned my server and started it again, the main cache drive of my cache pool wasn't recognized by Unraid, and it wasn't possible for me to add it back in without formatting it in safe mode. Furthermore, some data is missing, while other data exists on both the unrecognized drive and the rest of the pool. I would like an idea of what I should do next. I would prefer not to wipe the drive, because it should have my missing files, vdisks, and Docker containers on it.

     

    Unraid community you are my only hope :)

    Thank you in advance

     

    (Edit)

    I would like to ask: would it be possible to roll back to a btrfs snapshot from before the cleaning? Would that work?
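
    If snapshots were ever taken, they would show up as subvolumes on the pool; a quick way to check might be this sketch (it assumes the pool mounts at /mnt/cache):

    ```
    # List btrfs subvolumes; snapshots appear here if any were created
    btrfs subvolume list /mnt/cache
    ```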

    WIN_20210219_21_03_05_Pro.jpg

    WIN_20210219_21_03_20_Pro.jpg

    WIN_20210219_21_06_07_Pro.jpg

    WIN_20210219_23_41_24_Pro.jpg

    WIN_20210219_23_44_59_Pro.jpg

    WIN_20210220_00_37_40_Pro.jpg

  10. On 7/31/2020 at 4:15 PM, Stupifier said:

    The developer for this script and the associated plugin hasn't been around for a long time....you're gonna have to think outside the box when you run into issues like this.

    Work around your issue. Turn off the function in the script which checks if the VM is on/off before running.....then simply create another separate script to schedule your pfsense vm to turn off PRIOR to the VM backup script running.

    This way, your pfsense will always be OFF when the VM backup script starts.....

    a VERY quick google for "unraid script turn off vm" shot out exactly how to turn on/off VMs via command-line or script. Super easy.

     

    Thanks, I didn't think of that. It is not the cleanest solution; however, it works, and that is all that counts.
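
    For anyone finding this later, the shutdown itself can be scripted with virsh (a sketch; the VM name "pfsense" is an assumption and has to match the name shown on the VMs tab):

    ```
    #!/bin/bash
    # Ask libvirt to shut the VM down cleanly before the backup script runs
    virsh shutdown "pfsense"
    ```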
