Guido

Members
  • Content Count: 8
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About Guido
  • Rank: Newbie

  1. This is done by creating a second (or third, or more) pool device (it's a new function in the beta versions). So I have a main RAID set and another RAID set in the pool device (in my case a 4-disk btrfs RAID 1 set), and the main set is a 3-disk RAID 5 (XFS).
  2. I have, and it doesn't as far as I can see... I need a RAID set, and with the Unassigned Devices plugin I can only share a single drive... If it is possible to put a few drives into a RAID mode, I haven't found out how; a pointer in the right direction would be welcome in that case.
  3. It is on a new RAID set that will only be available as a "cache". It is set to cache-only. I need it like this because I didn't want it to be part of the main RAID setup.
  4. Yes, I did change that setting. It didn't seem to help much.
  5. When I try to use Unraid as a Veeam backup target over NFS, I always have to reboot my Veeam backup server before starting the job, or I get errors about stale file handles in the Veeam logs and the job fails. The tunable options suggested on the forum didn't work. The best possible solution would be to implement a newer version of NFS (4.x) so we can finally get rid of these long-standing errors for multiple users. I'm running beta25 at the moment (the latest at the time of posting). (A remount workaround is sketched at the end of this post list.)
  6. I'm going to have a look at Ganesha-NFS later this week. I hope I can get the Docker container running on its own IP address... that has been an issue for me so far, maybe because I have a virtual Unraid server (ESXi-based with 2x LSI cards on passthrough). Oh, and I have the same issue with the stale file handles (in the Veeam backup job logs: NFS status code: 70).
  7. I hope the devs will implement NFS 4.x soon... I was hoping to set up Unraid as a Veeam Backup Repository, but I have issues with it on both SMB and NFS. NFS requires me to reboot my proxy before running a backup just to get a connection to the share. If I don't, I get a message saying I don't have write access to the location. I already checked the rights on the share and tried setting the NFS share to public, but unfortunately that didn't work out either.
  8. Hello all, I'm having issues running lancache... I cannot seem to get any connection to the lancache container. My Unraid server is running on 192.168.102.200. I have given lancache a dedicated IP (192.168.102.202) in my LAN range (192.168.102.0/23), but connecting isn't possible. Network type is set to Custom: br0. If I ping the IP from a random system on my network I never receive a reply, but when checking with arp -a I do see the MAC address belonging to the container (verified with ifconfig from within the container). Is there anything I should check (a firewall on the Unraid server or something like that)? I didn't configure any firewall rules myself (if there even is a firewall on Unraid; I don't see anything about it in the menu). Thank you for your time. (A quick connectivity check is sketched below.)
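
A quick thing to rule out for the lancache post above: a failed ping on its own is not conclusive, because ICMP can be filtered while the service itself is reachable. The sketch below, run from another machine on the LAN (not the Unraid host itself), simply attempts a TCP connection to the container. The IP is the one from the post; the port (80, lancache's default HTTP port) and the timeout are assumptions to adjust for your setup.

    # Reachability check for the lancache container from another LAN machine.
    # A failed ping alone proves little (ICMP may be filtered), so try the
    # TCP port lancache actually serves on. The IP is from the post; port 80
    # and the timeout are assumptions to adjust for your setup.
    import socket

    CONTAINER_IP = "192.168.102.202"   # dedicated container IP from the post
    PORT = 80                          # lancache serves HTTP here by default

    def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:
            print(f"connection to {host}:{port} failed: {exc}")
            return False

    if __name__ == "__main__":
        if tcp_reachable(CONTAINER_IP, PORT):
            print("TCP port answers -- ICMP is probably just being filtered")
        else:
            print("no TCP connection either -- revisit the br0 network setup")

If the TCP connection succeeds, the container is fine and only ping is being dropped; if it fails too, the problem is more likely in the br0 custom-network configuration than in any lancache setting.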
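
On the stale-file-handle posts (5 and 7) above: instead of rebooting the whole Veeam proxy before every job, it might be enough to unmount and remount the Unraid export on the proxy first. This is only a sketch under assumptions: a Linux proxy with root access, made-up export and mount-point paths, and NFSv3 pinned explicitly since the posts indicate the server side does not offer 4.x yet.

    # Hypothetical workaround: remount the Unraid NFS export on the Veeam
    # proxy before a backup job, instead of rebooting the whole proxy.
    # Assumes a Linux proxy, root privileges, and example paths below.
    import subprocess
    import sys

    MOUNT_POINT = "/mnt/unraid_repo"             # example local mount point
    EXPORT = "192.168.102.200:/mnt/user/backup"  # example Unraid export path

    def remount() -> None:
        # Lazy unmount so an already-stale handle cannot block the unmount.
        subprocess.run(["umount", "-l", MOUNT_POINT], check=False)
        # Remount, pinning NFSv3 since that is all the server offers here.
        result = subprocess.run(
            ["mount", "-t", "nfs", "-o", "vers=3,hard,timeo=600",
             EXPORT, MOUNT_POINT],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            sys.exit(f"remount failed: {result.stderr.strip()}")

    if __name__ == "__main__":
        remount()

Running something like this as a pre-job script on the proxy would at least confirm whether the stale handles come from a long-lived mount or from the server side.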