Everything posted by ich777

  1. I don't fully understand. This is a dedicated Docker container, and if you want to install it on your server, feel free to do that. My template in the CA App is based on my Docker container for AssettoCorsa, so it's up to you to set it up on your server if you want to use that image. I usually don't create templates for containers that are out of my control.
  2. Yes, as I wrote above, the plugin will do that automatically. I'm assuming you are using ZFS as the backing storage type, correct? Why did you do that in the first place? Please don't mess with the lxc directory itself. Delete the one dataset that you've created for this specific container by hand (a sketch follows below) and it should be back to normal (hopefully). From what I can see in your Diagnostics, there is now only one container config available.
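     A minimal sketch, assuming the stray dataset follows the plugin's naming scheme; <ZFSPOOLNAME> and <CONTAINERNAME> are placeholders for your pool and container, so double-check the names before destroying anything:

       # List the datasets under the lxc path to find the stray one
       zfs list -r <ZFSPOOLNAME>/zfs_lxccontainers
       # Destroy only the dataset for this specific container
       zfs destroy <ZFSPOOLNAME>/zfs_lxccontainers/<CONTAINERNAME>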
  3. Please post your Diagnostics. It seems something is wrong with the container; did you maybe create a dataset over the container path or something similar?
  4. Are you sure that the SSH server is running? Please double check your configuration.
  5. Then it is still not configured properly... Sorry, I really can't help here; I use luckyBackup with this method from two different machines and have no issues whatsoever. Something is wrong with your authentication if you always get the prompt. I hope you connect from luckyBackup directly to Unraid with the remote Unraid IP address and have the authorized_keys file created in /root/.ssh on Unraid itself; a quick check follows below.
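     A minimal check of the key-based login from inside the container, assuming the container key is the one under /luckybackup/.ssh (see the steps in post 18 below); <UNRAID-IP> is a placeholder:

       # If this still asks for a password, sshd on Unraid is not accepting the key
       ssh -i /luckybackup/.ssh/ssh_host_rsa_key root@<UNRAID-IP> 'echo connection ok'
       # On Unraid itself: sshd ignores authorized_keys with loose permissions
       chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys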
  6. @threiner are you sure it isn't your SAS controller or HBA? This looks to me like consumer hardware paired with server hardware, right? Check whether the controller is getting too hot; these controllers out of servers need good, really good cooling. I recently swapped out my Dell H310 and replaced it with 2 x of these, and I'm super happy: https://www.amazon.de/dp/B09K4WKHKK (of course only if you don't need SAS). Have you already swapped or at least checked the cables? Did you change anything about the hardware recently?
  7. Maybe try creating a directory on the source (with an empty file in it) and also a directory on the target (without the file in it) where it can write to, and then start the sync; a tiny example follows below.
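     A tiny sketch of such a test layout; the paths are placeholders, not taken from your setup:

       # On the source machine: one test directory containing a single empty file
       mkdir -p /mnt/user/synctest-src && touch /mnt/user/synctest-src/testfile
       # On the target machine: an empty, writable test directory
       mkdir -p /mnt/user/synctest-dst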
  8. Then the path on the remote host does not exist; is this the destination or the source? You have to configure it properly. I would also recommend that you first try a test directory and not a directory which holds actual, possibly important, data. But in general this is a good sign, since the connection is now established. A quick check follows below.
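     A quick way to check whether the path exists on the remote side, assuming the SSH connection itself already works; <REMOTE-IP> and the path are placeholders:

       # This errors out with "No such file or directory" if the path is missing
       ssh root@<REMOTE-IP> 'ls -ld /path/to/sync/target'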
  9. The plugin now has (experimental) support for ZFS and BTRFS as backing storage types; the default is: directory. All already set up containers will keep using directory. To change the backing storage type to one of the two new ones, go to the Settings page from LXC and select ZFS or BTRFS (the appropriate option will show up depending on which filesystem the default path is located on). When creating a LXC container, the plugin will automatically create a dataset on the ZFS pool where the default lxc directory is located, with the path: <ZFSPOOLNAME>/zfs_lxccontainers/<CONTAINERNAME> The configuration will still be visible in the default lxc directory. If you find any bugs please report them here. A short sketch of how to inspect this follows below.
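     A short sketch for inspecting what the plugin created, assuming a pool named tank (a placeholder) and the naming scheme above:

       # One dataset per container should show up under the pool
       zfs list -r tank/zfs_lxccontainers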
  10. You can change the repository to jellyfin/jellyfin, but be aware that you then have to set up Jellyfin from scratch. I would recommend that you create a post in the Support thread for the container, since this does not seem like a Nvidia Driver issue at all. As said above, everything seems to work fine, and over here with my Nvidia T400 transcoding is working in the official container.
  11. Normally this shouldn't happen with this driver. Are you sure that your cables are fine and also up to the task? Which CAT standard do your cables use; they aren't some cheap cables, are they? You have a bunch of these messages in the log, which usually points to an unstable link:
      Sep 28 10:38:15 Unraid kernel: r8125: eth0: link up
      Sep 28 10:38:16 Unraid kernel: r8125: eth0: link down
      Sep 28 10:38:19 Unraid kernel: r8125: eth0: link up
  12. What container are you using? BTW, you've deleted the wrong duplicate image; I can't open your docker run image. This seems more like a configuration error in Jellyfin itself than an issue with the Nvidia Driver plugin, since everything is working from what I can see in your Diagnostics. Please try the official container from Jellyfin first and see if that is working.
  13. Can you please give a bit more input? Did it reset itself back to 100Mbit again? Do you maybe have screenshots or something similar? Diagnostics?
  14. That's no problem, but do it through NPM <- so to speak a Reverse Proxy. AMP tries to pull a certificate from LetsEncrypt, but you shouldn't do that because you already have a certificate in NPM; that's why you usually use a Reverse Proxy for that. Disable https, and I would recommend that you start with a new container since it already told you that AMP exists. I would also recommend that you do a bit of research on what a Reverse Proxy is and what you can do with it. @SpaceInvaderOne did good videos on that and on how to configure them. As said above, if you already have a webserver with a certificate (NPM), you route all the http/https traffic that you want to make reachable from outside through this one webserver. But as said, this is out of scope and does not belong in this thread.
  15. Sorry, I don't see that: this is a freshly installed Debian Bookworm container with the script from Cube Coders to install AMP. Can you maybe post a screenshot? You need a reverse proxy after the installation, not before; you only need the reverse proxy to forward the WebUI from AMP if you want to make it reachable from outside. Yes, but that's not necessary at all, even if it runs on port 80. Please post a screenshot of where it asks about port 80 or 8080 and of where it is not working. I'll explain that in a bit more detail: you only need to port forward for Docker, since only the forwarded ports are reachable, but for LXC it is a completely different story: each container has its own dedicated IP, it acts more like a VM than a container, and all ports assigned to that IP are reachable. Again, you don't have to forward anything... Don't mix up Docker and LXC... a LXC container acts more like a VM with its own dedicated IP, whereas Docker shares the IP with the host and you have to forward the ports from each container.
  16. On Unraid, correct? This is actually a good idea, because I do the same with AMP. I don't understand what you are saying with that; you mean NPM, correct? For what did it try port 80? You know that you can change that, and by default it is running on port 8080 IIRC. I really don't understand what you are saying here; the LXC container has its own IP, so it doesn't matter which port, because each IP can have its own port 80. I really don't know what you mean with the listener. If you want to port forward your AMP server, even if it runs on port 80, you need to set up a reverse proxy in NPM, but this is out of scope for this thread. BTW, I have something really cool coming up, but it needs approval first: this is basically a pre-set-up AMP container with Docker support so that you can install all games within AMP.
  17. By default it is dir, which basically means it will be a subdirectory of your specified lxc path IIRC. However, this is something that needs to be tried, since I think that lxc-snapshot should auto-detect the filesystem the container is on and then create a snapshot according to that filesystem, but I'm really not sure about that. The plugin uses the default, so to speak 'dir'. IIRC there are other special arguments available for the -B option. A sketch of the basic workflow follows below.
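     A minimal sketch of the plain lxc-snapshot workflow under the default dir backing store; <CONTAINERNAME> is a placeholder:

       # Create a snapshot (stored in the container's snaps directory under the lxc path)
       lxc-snapshot -n <CONTAINERNAME>
       # List existing snapshots
       lxc-snapshot -n <CONTAINERNAME> -L
       # Restore the first snapshot (lxc auto-names them snap0, snap1, ...)
       lxc-snapshot -n <CONTAINERNAME> -r snap0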
  18. I would recommend doing it the other way around, since then your Backup server is in charge and you allow access to your Main machine; I do it that way too. Usually the keys are created by the container itself and there is no need to create new ones anyway (they are initially created on the first container start and can be found in /luckybackup/.ssh). If you have a freshly installed container, do this:
      1. In the container, go to /luckybackup/.ssh and open up the file ssh_host_rsa_key.pub
      2. Copy the contents of this file
      3. On the foreign system, create the file authorized_keys in the directory /root/.ssh
      4. Paste the contents that you've copied in step 2 into the authorized_keys file and save it
      5. Go back to the luckyBackup container
      6. Set up a sync and choose the file ssh_host_rsa_key from the directory /luckybackup/.ssh as the private key file
      With this, luckyBackup should now be able to connect to the remote machine. Please note that I recommend restarting the SSH server after step 4, although this shouldn't be necessary. Also make sure that the file from step 4 is plain text! And please note that the file needs to be named authorized_keys, not authorized_key (maybe a typo in your initial post here). The same steps appear as a command-line sketch below.
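     The same steps as a hedged command-line sketch; <REMOTE-IP> stands for the machine you connect to, and <PASTED-PUBLIC-KEY-LINE> for the copied key contents:

       # Steps 1-2: inside the luckyBackup container, read the public key
       cat /luckybackup/.ssh/ssh_host_rsa_key.pub
       # Steps 3-4: on the remote machine, append that one line to authorized_keys
       mkdir -p /root/.ssh
       echo '<PASTED-PUBLIC-KEY-LINE>' >> /root/.ssh/authorized_keys
       chmod 600 /root/.ssh/authorized_keys
       # Step 6: in luckyBackup, select /luckybackup/.ssh/ssh_host_rsa_key as the private key file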
  19. Don't do that; the guide is wrong about that. The ssh key should always stay in /luckybackup/.ssh and should never be copied over to /root/.ssh. This is wrong; the authorized_keys file belongs on the remote machine you want to connect to, not on the local one.
  20. It does still exist; it's just labelled differently. @cz13 meant that you have to click on this button here: then you can create your Cache as usual.
  21. I don't know what the deal is with the RAM disk and the Docker backup, but it started at roughly that time:
      Aug 29 18:30:01 unRaid docker: Success: Backup of RAM-Disk created.
      Aug 29 18:31:35 unRaid kernel: docker0: port 1(veth15a2cf3) entered disabled state
      Aug 29 18:31:35 unRaid kernel: veth0b3ce87: renamed from eth0
      Aug 29 18:31:35 unRaid kernel: docker0: port 1(veth15a2cf3) entered disabled state
      Aug 29 18:31:35 unRaid kernel: device veth15a2cf3 left promiscuous mode
      Aug 29 18:31:35 unRaid kernel: docker0: port 1(veth15a2cf3) entered disabled state
      Aug 29 18:31:35 unRaid kernel: docker0: port 1(vetha744f63) entered blocking state
      Aug 29 18:31:35 unRaid kernel: docker0: port 1(vetha744f63) entered disabled state
      Aug 29 18:31:35 unRaid kernel: device vetha744f63 entered promiscuous mode
      Aug 29 18:31:35 unRaid kernel: docker0: port 1(vetha744f63) entered blocking state
      Aug 29 18:31:35 unRaid kernel: docker0: port 1(vetha744f63) entered forwarding state
      Aug 29 18:31:36 unRaid kernel: docker0: port 1(vetha744f63) entered disabled state
      Aug 29 18:31:38 unRaid kernel: eth0: renamed from vethdfab6c6
      Aug 29 18:31:38 unRaid kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha744f63: link becomes ready
      Aug 29 18:31:38 unRaid kernel: docker0: port 1(vetha744f63) entered blocking state
      Aug 29 18:31:38 unRaid kernel: docker0: port 1(vetha744f63) entered forwarding state
      Aug 29 19:00:01 unRaid docker: Success: Backup of RAM-Disk created.
      On top of that, two additional Docker containers were started around that time; are you sure those aren't causing this high I/O wait?
  22. Yes, simply create a dataset and then point LXC to this dataset (a sketch follows below). I think I'm not following. To create a snapshot, simply go to the LXC tab in Unraid, click on the icon of the container that you want to snapshot and select Create Snapshot. I don't know how it could be done more easily...? Please note that this uses the built-in snapshot function from LXC and not the ZFS snapshot feature, but I'm pretty sure it will create a proper snapshot <- I'm not entirely sure about that because I'm not a huge fan of ZFS, at least not more than of BTRFS, XFS and so on.
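     A minimal sketch for the dataset part, assuming a pool named tank (a placeholder; Unraid mounts pools under /mnt/<poolname>):

       # Create a dedicated dataset for the LXC containers
       zfs create tank/lxc
       # Then set the default lxc path in the LXC plugin settings to /mnt/tank/lxc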
  23. No worries, both of your cards are detected:
      01:00.0 VGA compatible controller [0300]: NVIDIA Corporation AD107 [GeForce RTX 4060] [10de:2882] (rev a1)
        Subsystem: Gigabyte Technology Co., Ltd Device [1458:4116]
        Kernel driver in use: nvidia
        Kernel modules: nvidia_drm, nvidia
      01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:22be] (rev a1)
        Subsystem: Gigabyte Technology Co., Ltd Device [1458:4116]
      02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA102GL [RTX A4500] [10de:2232] (rev a1)
        Subsystem: NVIDIA Corporation Device [10de:163c]
        Kernel driver in use: nvidia
        Kernel modules: nvidia_drm, nvidia
      02:00.1 Audio device [0403]: NVIDIA Corporation GA102 High Definition Audio Controller [10de:1aef] (rev a1)
        Subsystem: NVIDIA Corporation GA102 High Definition Audio Controller [10de:163c]
      However, you have a lot of ACPI errors in your syslog:
      Sep 27 02:45:52 Tower kernel: ACPI Error: AE_ALREADY_EXISTS, CreateBufferField failure (20220331/dswload2-477)
      Sep 27 02:45:52 Tower kernel: ACPI Error: Aborting method \_SB.PC00.PEG1.PEGP._DSM due to previous error (AE_ALREADY_EXISTS) (20220331/psparse-529)
      Sep 27 02:45:52 Tower kernel: ACPI BIOS Error (bug): Failure creating named object [\_SB.PC00.PEG1.PEGP._DSM.USRG], AE_ALREADY_EXISTS (20220331/dsfield-184)
      This usually indicates an error/bug in your BIOS, which I can do nothing about. Please make sure that you've enabled Resizable BAR and Above 4G Decoding in your BIOS (it could also be called Extended Address Space or similar in your PCI sub-menu in the BIOS). In which slots do you have the cards? Maybe try to swap the cards between their slots. May I ask why you need two cards for that? It seems a bit overkill, since the A4500 has unlimited transcodes IIRC (you can also check here). Are you sure that you've assigned the right card with the right UUID in the Plex Docker template (see the sketch below)? As said, I think the message that spams your syslog is the cause of the issue, and that indicates a bug in the BIOS. Please note that you're using consumer hardware, which in most cases is not designed for multiple GPUs anymore (with the fall of SLI) and so on.
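     A quick way to list the UUIDs so you can double-check the value in the Plex template (nvidia-smi is included with the Nvidia Driver plugin):

       # Prints each detected GPU with its UUID; put the UUID of the intended card in the template
       nvidia-smi --query-gpu=index,name,uuid --format=csv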
  24. This is a bit difficult, since you also need the sound drivers so that it will even work. Theoretically I have a plugin for that, which is not released to the public since it can be a bit difficult to set up, and not all sound cards will work, depending on how it is implemented on the motherboard. Users have already been successful using my sound-driver plugin (sometimes with the onboard sound chip, sometimes with a USB audio adapter) in combination with a Docker container to achieve what you are trying to do. My point here is to use a Docker container instead of a plugin.
  25. Yes, did you enable the service/module as mentioned in the description of the plugin?