
Normand_Nadon

Members
  • Posts: 65
  • Joined
  • Last visited

Posts posted by Normand_Nadon

  1. Hello, noob question here, I need pointers to the right place for my research! 

    I have been searching for 2 days on how server mods work... Clearly, I am not using the right terminology for my search!
    I would like to know if it is possible to implement mods in the binhex Minecraft server on Unraid...

    I was able to load mods on my son's local install (based on the Fabric loader...) but I can't find how to load them on our family Unraid server...

    By the way, the vanilla Docker image works like magic... one-click deploy, and it boots fast!

     

    Mods I am interested in:
    Performance boost, like Sodium

    Resource packs

    Shaders (can that run on the server?)

    Alternate game modes for my kids and their friends to play

     

    I am basically experimenting with the kids and teaching them server stuff while learning myself! :)

    It also creates a safe "bubble" for the kids to play in without weirdos interacting with them!
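    In case it helps a future reader, here is roughly how I would wire mods into a dockerized server. This is a sketch under assumptions: the appdata path and container name below are guesses (check your container's template in Unraid), and the server itself must run the Fabric (or Forge) loader for the mods folder to do anything — the vanilla jar ignores it.

```shell
#!/bin/sh
# Sketch with assumed paths - check your container's Unraid template.
# A Fabric-based server loads any mod jar it finds in its "mods"
# folder, which lives inside the container's appdata mapping on the
# host, so copying the jars there and restarting is usually enough.
SRC="${1:-$HOME/Downloads}"   # where the downloaded mod jars sit
DEST="${2:-/mnt/user/appdata/binhex-minecraftserver/minecraft/mods}"  # assumed path
mkdir -p "$DEST"
cp "$SRC"/*.jar "$DEST"/ && echo "copied mods to $DEST"
# Then restart the container so the server picks them up:
# docker restart binhex-minecraftserver
```

    Also worth noting: Sodium and shaders are client-side mods — they run on each player's machine, not on the server.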

  2. 26 minutes ago, Vr2Io said:

    Did you try another NFS mounting method? I always transfer files through NFS without problems.

     

    Yeah, me too! I have been using NFS for years... And using it in Unraid for some time too.

    I just got a new error I have never seen before... 
    Maybe it offers a clue... It said "Splicing error" and, again, the I/O error.

     

    I remember trying to mount my NFS shares with an older version of NFS before, but can't remember if it helped...

    When I transfer files, I see that the client's buffer fills super fast before the transfer starts (about 2 GB in a few seconds)... that is why I suspect a cache or buffer issue.

  3. I searched for the answer for a couple of hours without success... So I figured I would ask here!

    Recently, I had to do a fresh install of Pop!OS on my main rig because I managed to destroy it!

    Since then, when trying to transfer large files to my Unraid server through NFS, I always get I/O errors and file corruption.

    I have a feeling it has to do with caching but I am not certain... can someone point me in the right direction?

    Here is an example of how my shares are mounted in fstab:

    [unraid server ip]:/mnt/user/[mount point]  /home/normand/NFS/NORMAND-NAS-01/[mount point] nfs defaults,timeo=14,soft 0 0
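    Not a confirmed fix, just a variant I would test: with `soft`, the client gives up after a few retries and returns I/O errors mid-transfer, which can corrupt large files; a `hard` mount retries instead. Keeping the same placeholders:

```
[unraid server ip]:/mnt/user/[mount point]  /home/normand/NFS/NORMAND-NAS-01/[mount point]  nfs  defaults,hard,timeo=600,retrans=2  0 0
```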

     

    I remember having similar issues in the past but can't remember what I did to fix it.

     

     

    Have a nice day all!

  4. I found the solution... The default path on Unraid is not supported by Postgres.

    I changed /mnt/user to /mnt/cache and it works... But I have read that it is not the safest way to do things, and I can't tell why...

    And here is a quick and dirty (and functional) letsencrypt / nginx .conf example
    (replace "server_name openproject.*" with your chosen subdomain and "upstream_app openproject" with the name of your container)
     

    # OpenProject server
    server {
        listen 443 ssl;

        server_name openproject.*;

        include /config/nginx/ssl.conf;

        client_max_body_size 0;

        location / {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app openproject;
            set $upstream_port 80;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        }
    }

     

  5. I am sorry to ask again... but I looked hard in the forums and could not find the exact source of the issue...
    Some posts mention file permissions, but nothing really conclusive.

    When you launch an instance yourself, do you let it create its own DB or do you connect it to an existing one?

     

  6. Hello, I tried to use your UNRAID docker image and I get this:

     

    -----> Starting the all-in-one OpenProject setup at /app/docker/supervisord...
    -----> Database cluster not found. Creating a new one in /var/openproject/pgdata...
    The files belonging to this database system will be owned by user "postgres".
    This user must also own the server process.
    
    The database cluster will be initialized with locale "C.UTF-8".
    The default text search configuration will be set to "english".
    
    Data page checksums are disabled.
    
    fixing permissions on existing directory /var/openproject/pgdata ... ok
    creating subdirectories ... ok
    selecting default max_connections ... 100
    selecting default shared_buffers ... 128MB
    selecting default timezone ... America/Los_Angeles
    selecting dynamic shared memory implementation ... posix
    creating configuration files ... ok
    LOG: could not link file "pg_xlog/xlogtemp.35" to "pg_xlog/000000010000000000000001": Function not implemented
    FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory
    child process exited with exit code 1
    initdb: removing contents of data directory "/var/openproject/pgdata"
    running bootstrap script ...

     

    Can you tell me where I should look before asking you guys :P Thank you!
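    In the meantime, a guess at the cause (an assumption, not a diagnosis): initdb's "could not link file ... Function not implemented" usually means the data directory is on a filesystem without hard-link support, which Postgres needs for its WAL files — and Unraid's /mnt/user FUSE layer is a known case of that. A quick probe to check a given path:

```shell
#!/bin/sh
# Probe: does this directory support hard links?
# Postgres initdb hard-links WAL files, so a failing "ln" here
# usually reproduces the "Function not implemented" error.
DIR="${1:-.}"
f="$DIR/.linkprobe.$$"
touch "$f"
if ln "$f" "$f.l" 2>/dev/null; then
  echo "hard links OK on $DIR"
else
  echo "hard links NOT supported on $DIR"
fi
rm -f "$f" "$f.l"
```

    Running it against a path under /mnt/user versus one under /mnt/cache should show the difference.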

  7. Hello there, I followed SpaceInvader's 2019 guide to set up the openvpn-as container and I have some issues...
    https://www.youtube.com/watch?v=fpkLvnAKen0

    Firstly, when using his guide, to the letter, I can connect to the VPN and browse the net as if I was home.

    BUT, I can't connect to anything on the home network... No response (I used my phone as a hotspot to test).

     

    Googling and "ducking" helped me find that the Interface should be set as HOST and the docker should be privileged.

    By doing so, I lose the ability to connect through the VPN and can't log in as an admin on the web GUI... So for now, it's unusable.

     

    Did someone else encounter similar issues and find a way to fix them?

  8. 4 minutes ago, strike said:

    Thanks for your fast response...

    I don't use Plex, but I chose btrfs for the system... I am rebuilding 3 computers at once tonight so I can't read the whole thread... did you find a solution to this issue that you can point me to?

  9. I have googled and "ducked" this issue but could not find relevant information on the subject...

     

    My array, after 27 days of uptime, had around 900 000 000 (nine hundred million) writes on the cache and about 40 000 on the long-term storage...

    If this data is real, this is going to kill my cache drives really quick!

     

    I just restarted the array and stopped every VM and Docker container (Nextcloud, MariaDB and Pi-hole containers, plus an Ubuntu VM for my HTPC setup) to check whether writes would still be high. This screen capture is after about 5 minutes of doing nothing on the array!

     

    [screenshot: Capture d'écran du 2020-05-31 20-31-41]
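    The raw counters behind those dashboard numbers live in /proc/diskstats (per-device, cumulative since boot; field 10 is sectors written, 512 bytes each). A small sketch to dump them — the device-name patterns are my assumptions for a typical array:

```shell
#!/bin/sh
# Sketch: print approximate data written per block device since boot.
# Field 10 of /proc/diskstats is sectors written; sectors are 512 bytes.
if [ -r /proc/diskstats ]; then
  awk '$3 ~ /^(sd[a-z]+|nvme[0-9]+n[0-9]+|md[0-9]+)$/ {
         printf "%-12s %10.1f MiB written\n", $3, $10 * 512 / 1048576
       }' /proc/diskstats
else
  echo "/proc/diskstats not available"
fi
```

    Running it twice a few minutes apart and comparing the numbers shows the real write rate per device.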

     

    Is this normal behavior? Am I right to be worried?

     

     - Normand

  10. Setup:
    UNRAID v.6.8.3

    Ubuntu 20.04 client (Pop!OS variant)

    Fixed IP on client and server side

    NFS share option 192.168.2.10(sec=sys,rw) - this is the IP of the client computer

    client's /etc/fstab entry:
    192.168.2.17:/mnt/user/MULTIMEDIAS      /home/normand/NFS/NORMAND-NAS-01/MULTIMEDIAS    nfs     defaults,timeo=14,soft 0 0
     

    Issue:
    It happens with all my shares. I use them, then at some random time I get a "Stale file handle" error and have to remount the share to keep working with it. The weirdest part is that sometimes a program or process is still writing to the share with no issue, but I can't make anything else communicate with it (file manager, other programs). I wait until the process finishes, then unmount/remount the share to continue working...
    Is it due to the mount options I used on the client's side? I don't understand these NFS parameters well, even after 10 years of Linux usage! :P
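    One thing I would try (an assumption, not a confirmed fix): on Unraid, stale NFS handles often show up when the mover relocates files under /mnt/user and the exported file handles change; shortening the client's attribute cache and using a hard mount sometimes reduces it. A variant of the fstab line above:

```
192.168.2.17:/mnt/user/MULTIMEDIAS  /home/normand/NFS/NORMAND-NAS-01/MULTIMEDIAS  nfs  hard,actimeo=3,noatime  0 0
```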

  11. I am comfortable with the terminal, but I kept away from ZFS all this time because I thought it was a huge monster with 78 arms and pointy teeth for us mere mortals...
    Also, I thought that one of the goals of using UNRAID was to avoid ZFS in the first place... I should stop getting my information from unreliable places :D

  12. Hello fellow UNRAIDers,

     

    We have a server running on bare metal at OVH (not cloud hosting, bare metal)

    Does anyone know how I can run UNRAID on their installation? I love the interface, and it makes it easy to deploy containers and VMs as needed.

    Could UNRAID run on a virtualised USB stick?

  13. On 4/23/2020 at 9:12 PM, jonathanm said:

    No.

    I love this answer... Short, precise... straight to the point :D

     

    But say I have some files on that slow HDD array that I would like to keep there, but when files are frequently accessed, I want them copied to the cache drive... and then, if a file has not been accessed in x amount of time, it flushes itself from the cache...
    How can I achieve that? Does anyone know?

  14. When I first started using UNRAID, I thought that recently accessed files were copied to the SSD cache for faster access...

    Now I understand that this is not the case... Newly created files land on the cache, but are moved to long-term storage on a schedule...

     

    Is there a feature that copies files to the cache when they are accessed, to speed up load times (for example, my Steam library or Kdenlive media files)?

     

    Regards
