Leaderboard

Popular Content

Showing content with the highest reputation on 08/06/22 in Posts

  1. Actually this is not the Plex helpdesk, but let's take a look. I assume you mean these icons in Plex which are missing, that you use XEPG mode, and that they are working (mapped) there. If that is what you are trying to solve, some info is missing ... Are xteve and Plex on the same machine? Is xteve running in host mode as recommended (or on a custom br0)? Is image caching in xteve enabled or not? Is the buffer in xteve enabled or not? Which Docker container are you using exactly? ... As a hint to start looking into it: xteve generates an xteve.xml file in the data dir, and that is where the links given to Plex as the icon source live, so take a look at what you have there and whether those links are reachable from the Plex machine ...
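If you want a quick way to test those links, here is a rough sketch (the /mnt/user/appdata/xteve path is only an assumption for the data dir; run it from the Plex machine, or anywhere that can see the file and reach the URLs):
# pull every icon URL out of xteve.xml and print the HTTP status code for each
grep -o 'src="[^"]*"' /mnt/user/appdata/xteve/xteve.xml | cut -d'"' -f2 | sort -u | while read -r url; do
    printf '%s %s\n' "$(curl -s -o /dev/null -w '%{http_code}' "$url")" "$url"
done
Anything that does not come back as 200 is a candidate for the missing icons.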
    2 points
  2. The bash command history is lost after a reboot. Instead I'd like to see a symlink to the USB flash drive so the history is persistent. My workaround in the Go-File until this is implemented:
# -------------------------------------------------
# Persistent bash history
# -------------------------------------------------
if [[ ! -L ~/.bash_history ]]; then
    rm ~/.bash_history
    ln -s /boot/config/ssh/.bash_history ~/.bash_history
fi
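An alternative sketch without the symlink: point HISTFILE at the flash drive instead. Since /root/.bash_profile also lives in RAM, these lines would still have to be appended from the Go-File (same path as above; untested here):
# appended from the Go-File so it survives a reboot
echo 'export HISTFILE=/boot/config/ssh/.bash_history' >> /root/.bash_profile
echo 'shopt -s histappend' >> /root/.bash_profile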
    1 point
  3. Hello All! As I'd alluded to in my earlier SR-IOV guide, I've been (...slowly...) working on turning my server config/deployment notes into something that'd at least have the opportunity to be more useful to others as they're using UnRAID. To get to the point as quickly as possible: The UnRAID Performance Compendium.
I'm posting this in the General section as it's all eventually going to run the gamut, from stuff that's 'generically UnRAID' to container/DB performance tuning, VMs, and so on. It's all written from the perspective of *my* servers though, so it's tinged with ZFS throughout. What this means in practice is that, while not all of the information/recommendations provided will apply to each person's systems, at least some part of them should be useful to most, if not all (all is the goal!). I've been using ZFS almost since its arrival on the open source scene, starting back with the release of OpenSolaris in late 2008, and using it as my filesystem of choice wherever possible ever since. I've been slowly documenting my setup as time's gone on, and as I was already doing so for myself, I thought it might be helpful to build it out a bit further in a form that could be referenced by others (if they so choose).
I derive great satisfaction from doing things like this, relishing the times when work's given me projects where I get to create and then present technical content to technical folks... But with the lockdown I haven't gotten out much, and work's been so busy with other things that I haven't much been able to scratch that itch. However, I'm on vacation this week, and finally have a few of them polished up to the point that I feel they can be useful!
Currently included guides are (always changing, updated 08.03.22):
The Intro:
- Why would we want ZFS on UnRAID? What can we do with it? - A primer on what our use case is for adding ZFS to UnRAID, what problems it helps solve, and why we should care. More of an opinion piece, but with enough backing data that I feel comfortable and confident in the stance taken here. Also details some use cases for ZFS's feature set (automating backups and DR, simplifying the process of testing upgrades of complex multi-application containers prior to implementing them in production, things like that).
Application Deployment and Tuning:
- Ombi - Why you don't need to migrate to MariaDB/MySQL to be performant even with a massive collection / user count, and how to do it.
- Sonarr/Radarr/Lidarr - This is kind of a 'less done' version of the Ombi guide currently (as it's just SQLite as well), but with some work (in progress / not done) towards getting around a few of the limitations put in place by the applications' hard-coded values.
- Nextcloud - Using nextcloud, onlyoffice, elasticsearch, redis, postgres, nginx, some custom cron tasks, and customization of the linuxserver container (...and zfs) to get highly performant app responsiveness even while using apps like facial recognition, full text search, and online office file editing. Haven't finished documenting the whole of the facial recognition part, nor elasticsearch.
- Postgres - Keeping your applications' performance snappy using PG to back systems with millions of files and tens or even hundreds of applications, and how to monitor and tune for your specific HW with your unique combination of applications.
- MariaDB - (in progress) - I don't use Maria/MySQL much personally, but I've had to work with it a bunch for work, and it's pretty common in homelabbing given how long a history it has and the devs' desire to make supporting users of the DB easier (you can get yourself into a whole lot more trouble a whole lot quicker by mucking around without proper research in PG than in My/Maria imo). Personally though? Postgres all the way. Far more configurable, and more performant with appropriate resources/tuning.
General UnRAID/Linux/ZFS related:
- SR-IOV on UnRAID - The first guide I created specifically for UnRAID, posted directly to the forum as opposed to on GitHub. Users have noted going from tens of MB/s up to 700 MB/s when moving from default VM virtual NICs over to SR-IOV NICs (see the thread for details).
- Compiled general list of helpful commands - This one isn't ZFS-specific, and I'm trying to add things from my bash profile aliases and the like over time as I use them. It will be constantly evolving, and includes things like "How many inotify watchers are in use... And what the hell is using so many?" (a rough sketch of that one is below), restarting a service within an LSIO container, bulk downloading from archive.org, and commands that let you do UnRAID UI-only actions from the CLI (e.g. stop/start the array, and others).
- Common issues/questions/general information related to ZFS on UnRAID - As I see (or answer) the same issues fairly regularly in the zfs plugin thread, it seemed to make sense to start a reference for these so it could just be linked to instead of re-typed each time lol. Also includes information on customizing the UnRAID shell and installing tools that aren't contained in the Dev/Nerd packs so you can run them as though they're natively included in the core OS.
- Hosting the Docker Image on ZFS - Squeezing the most performance out of your efforts to migrate off of the xfs/btrfs cache pool; if you're already going through the process of doing so, you might as well make sure it's as performant as your storage will allow.
You can see my (incomplete / more to be added) backlog of things to document on the primary page as well, in case you're interested. I plan to post the relevant pieces where they make sense too (e.g. the Nextcloud one to the lsio nextcloud support thread, cross-post this link to the zfs plugin page... probably not much else at this point, but just so it reaches the right audience at least).
Why GitHub for the guides instead of just posting them here in their respective locations? I'd already been working on documenting my homelab config information (for rebuilding in the event of a disaster) using Obsidian, so everything's already in markdown... I'd asked a few times about getting markdown support for the forums so I could just dump them here, but I think it must be too much of a pain to implement, so GitHub seemed the best combination of minimizing the time spent re-editing pre-existing stuff I'd written, readability, and access. Hope this is useful to you fine folks!
HISTORY:
- 08.04.2022 - Added Common Issues/general info and hosting docker.img on ZFS doc links
- 08.06.2022 - Added MariaDB container doc as a work-in-progress page prior to completion, due to an individual request
- 08.07.2022 - Linked original SR-IOV guide, as this is closely tied to network performance
- 08.21.2022 - Added the 'primer' doc, Why ZFS on UnRAID, and some example use cases
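As a taste of the commands list, here is a rough sketch for the inotify question (plain /proc spelunking, nothing UnRAID-specific, so treat it as an approximation):
# how high is the per-user watch limit?
cat /proc/sys/fs/inotify/max_user_watches
# which processes hold inotify instances, and roughly how many watches does each have?
for fd in /proc/[0-9]*/fd/*; do
    [ "$(readlink "$fd" 2>/dev/null)" = "anon_inode:inotify" ] || continue
    pid=${fd#/proc/}; pid=${pid%%/*}
    watches=$(grep -c '^inotify' "/proc/$pid/fdinfo/${fd##*/}" 2>/dev/null)
    echo "$watches $(cat "/proc/$pid/comm" 2>/dev/null) (pid $pid)"
done | sort -rn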
    1 point
  4. 1 point
  5. Just wanted to say that everything is back to normal now. Thank you again for all the support!
    1 point
  6. Have you set “override inform host” and the IP of your server?
    1 point
  7. During a shutdown, everything is always stopped. Well, crashing and a timeout are something different, though. After stopping the array, please wait about 15 minutes. I would actually expect that at some point mountpoint /mnt/remotes/nfstest reports that the mount is no longer there. Then everything is basically fine. In other words, the NFS server is still running, but the share it offers is not reachable. Not ideal, but I currently don't see a downside from that, except that Proxmox might run into a longer timeout when backing up.
    1 point
  8. This can be marked as Solved. Thanks again.
    1 point
  9. Smart-aleck mode: all I see is that /mnt/user/Backups is no longer there, which is logical because the array was stopped. I would rather suggest it like this: telnet localhost 2049. If NFS is active you get "Connected to localhost", and with NFS inactive you get "Connection refused".
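Two more checks in the same spirit, as a sketch (both tools ship with the usual NFS utilities, so whether they are present on a given box is an assumption):
rpcinfo -p localhost | grep nfs    # is the NFS service registered with the portmapper?
showmount -e localhost             # which exports are currently being offered?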
    1 point
  10. 1 point
  11. Array stopped.
root@pbs:~# dd if=/dev/zero of=/mnt/user/backups bs=1G count=1 oflag=direct
dd: failed to open '/mnt/user/backups': No such file or directory
    1 point
  12. Thanks much. I have the following to-dos:
1. Move the data from the newly added 4TB onto other disks using the Unbalance plugin and convert the filesystem to XFS.
2. Copy over the data/backups from the temporary 12TB drive that I had off the array and move it to unRAID.
3. Use the shrink-array procedure to consolidate a few disks.
4. Convert the rest to XFS.
5. Use one of the leftover 2TBs after the consolidation as a cache disk.
Will keep you posted on the progress. Thanks for all the help getting my array back in shape.
    1 point
  13. I would recommend that; there are also VPN client dockers that can do IPsec ... but I would only recommend those if you know ...
    1 point
  14. And this? https://askubuntu.com/questions/6769/hibernate-and-resume-from-a-swap-file First reply may contain some useful info.
    1 point
  15. Morning, the "current" native VPN protocol of the Fritz!Box is not OpenVPN ... so that certainly won't work. Either (if feasible) switch to the Fritz Labor firmware, where WireGuard is currently being implemented ... then go via WG, or wait a bit longer until it's official ... or set up an OpenVPN server behind the Fritz!Box on the remote side (an RPi, ... whatever). Small addendum: the standard is IPsec, and WireGuard is coming (you should bet on that then).
    1 point
  16. Morning, I've added the entry to your "Solution" post above. I hope that's OK; it's just in case someone else stumbles over this again.
    1 point
  17. @topaLE I took another look at this for you. I found the solution in the Proxmox forum. I'm completely tired now. Good night. 😒
    1 point
  18. It lets you choose: a local link for the LAN (Local Area Network), example: 192.168.10:1234; or you can also put an application on the web via a server, in which case you configure the WAN (Wide Area Network), example: jellyfin.example.com. For example, I use NginxProxyManager to give access to some containers with SSL. I hope I've been clear enough.
    1 point
  19. Well, I don't know why it is claimed here that you shouldn't use Proxmox Backup Server (PBS) with NFS. If that were the case, the Proxmox staff members wouldn't give support for it 🙂 https://forum.proxmox.com/threads/eperm-operation-not-permitted.112220/ NFS works very well, just not with Unraid. As far as Limetech support is concerned, I think the criticism is justified. 😅 If you report something in English, better not expect a reply 🙈 As for the users in this community, they are nice, helpful and obliging, at least most of them. The odd one can react a bit allergically to Proxmox, though 😂 I recommend the following Unraid workaround:
1. Create a share, e.g. "pbs", and enable NFS.
2. Install the "User Scripts" plugin, create a script and give it a name.
3. Edit the script as follows:
#!/bin/bash
PBSSHARE=$(grep "/mnt/user/pbs" /etc/exports)
if [ ! -z "$PBSSHARE" ]; then
sed -i '/\/mnt\/user\/pbs/d' /etc/exports
cat << MOUNTINSERT >> /etc/exports
"/mnt/user/pbs" -fsid=108,async,no_subtree_check *(rw,sec=sys,insecure,anongid=34,anonuid=34,all_squash)
MOUNTINSERT
exportfs -r
fi
Please note that /mnt/user/pbs is the share in this case.
4. Set a schedule for it, e.g. "At Startup of Array" or "At First Start Only".
Then it also works with the Proxmox Backup Server. Maybe someone here can explain why it doesn't work with Unraid otherwise. Edit: I wrote the script in a hurry; there are surely nicer scripts out there 🙈
So, here is the solution. We were wrong about anongid=34 and anonuid=34. We don't need scripts or anything else. The share in Unraid has to look like this:
Export: Yes
Security: Private
Rule: *(rw,no_wdelay,crossmnt,no_root_squash,insecure_locks,sec=sys,anonuid=99,anongid=100)
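Two quick sanity checks after applying the rule, as a sketch (the <unraid-ip> placeholder is obviously to be replaced, and showmount has to be available on the PBS host):
exportfs -v                 # on Unraid: shows the active export options for /mnt/user/pbs
showmount -e <unraid-ip>    # on the PBS host: the pbs share should be listed here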
    1 point
  20. It should be an issue within the guest OS, not an issue with QEMU or KVM. The guest agent in the guest is needed to send some commands from the host to the guest; there should be no need for the qemu-kvm package in the guest. It seems that the swap disk cannot be found. See if this helps with your issue: https://bbs.archlinux.org/viewtopic.php?id=247036 It's for Arch, but you can treat it and search for it as a general Linux issue. Or this: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1962359 Or this: https://forum.manjaro.org/t/some-clarification-on-hibernate-needed/108659/4
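If the guest is supposed to resume from a swap file rather than a swap partition, the usual recipe looks roughly like this; a sketch for an ext4 root with GRUB, where /swapfile is only an example path:
# UUID of the filesystem holding the swap file
findmnt -no UUID -T /swapfile
# physical offset of the swap file's first extent
filefrag -v /swapfile | awk '$1=="0:" {print $4}' | sed 's/\.\.//'
# put both values on the kernel command line, e.g. in /etc/default/grub:
#   resume=UUID=<uuid from above> resume_offset=<offset from above>
# then run update-grub and reboot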
    1 point
  21. I used the search and didn't see anyone mention the Broadcom 9500-8i HBAs. I recently purchased a 9500-8i for use with Unraid and it works really well under 6.10.3. It's completely plug and play, since the kernel has the appropriate driver, and the performance is as you would expect: extremely fast. Just thought I'd post this, as even on Google I didn't see more than one topic where someone said they had this card combined with Unraid, so I thought I'd chime in with a recommendation.
    1 point
  22. I posted what I hope will be a single reference for performance-related tuning/information over in the guides/general section. It had a couple of pieces I thought were applicable, so I figured I'd cross-post the relevant bits here. It goes into a great deal of the performance tuning one might wish to undertake with LSIO's Nextcloud+Postgres+Nginx containers, and I'm posting here in hopes that it helps some folks and reaches the right audience:
- Postgres - My recommendation for Nextcloud's backing database: a guide to keeping your applications' performance snappy using PG to back systems with millions of files and tens or even hundreds of applications, and how to monitor and tune for your specific HW with your unique combination of applications.
- Nextcloud - Using nextcloud, onlyoffice, elasticsearch, redis, postgres, nginx, some custom cron tasks, and customization of the linuxserver container (...and zfs) to get highly performant app responsiveness even while using apps like facial recognition, full text search, and online office file editing (among many others). I've collaborated some with the Ibracorp folks on their work towards an eventual Nextcloud guide to help with some performance stuff, with my notes as something of a 'first draft' for their written version, so hopefully there'll be something of a video version of this eventually for those who prefer that. I haven't finished documenting the whole of the facial recognition part, nor elasticsearch.
Just for some context: I've migrated my entire family off of Google Drive using Nextcloud, including my dad's contracting business and my wife's small WFH custom clothing shop. That's about 1.1m files across 11 users (all of whom are doing automatic phone photo and contacts backups), 16 linked devices between phones and computers, and 17 calendars. The time to the initial login page is less than a second, and scrolling through even the photos in gallery mode is almost instantaneous. Super impressed with the Nextcloud devs' work!!!
    1 point
  23. @S80_UK I just wanted to let you know that it works now. I got 16 GB of RAM and it runs smoothly. Thank you again!
    1 point
  24. Add the User Scripts plugin and create a script with the relevant command:
zpool scrub tank
Twice a month for consumer drives and once a month for professional drives.
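A minimal sketch of such a User Scripts entry ("tank" is just the example pool name from above):
#!/bin/bash
# start a scrub of the pool; check progress later with: zpool status tank
zpool scrub tank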
    1 point
  25. @horphi one more small tip: try passing the stick through by its ID; then it doesn't matter which USB port it is plugged into. It is described very well here! Regards, Pierre
    1 point
  26. That worked. I added the below to my /boot/config/go:
sudo curl -L https://raw.githubusercontent.com/Realtek-OpenSource/android_hardware_realtek/rtk1395/bt/rtkbt/Firmware/BT/rtl8761b_config -o /tmp/rtl8761bu_config.bin
sudo mv /tmp/rtl8761bu_config.bin /lib/firmware/rtl_bt/rtl8761bu_config.bin
sudo curl -L https://raw.githubusercontent.com/Realtek-OpenSource/android_hardware_realtek/rtk1395/bt/rtkbt/Firmware/BT/rtl8761b_fw -o /tmp/rtl8761bu_fw.bin
sudo mv /tmp/rtl8761bu_fw.bin /lib/firmware/rtl_bt/rtl8761bu_fw.bin
However, I have to manually unplug and plug in the USB dongle so the module reloads. Any way to do this programmatically? Also, where does the device appear? I'm trying to pass this through to a Home Assistant container, but can't find it!
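One sketch of a programmatic replug (the 1-4 bus path is hypothetical; find the real one under /sys/bus/usb/devices or with lsusb -t):
# unbind and rebind the dongle so the driver reloads with the new firmware files
echo '1-4' > /sys/bus/usb/drivers/usb/unbind
sleep 2
echo '1-4' > /sys/bus/usb/drivers/usb/bind
# or reload the Bluetooth USB driver instead
modprobe -r btusb && modprobe btusb
As for where it appears: a Bluetooth adapter usually shows up as an hci device (e.g. hci0, visible with bluetoothctl list) rather than under /dev, which may be why it can't be found there.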
    1 point
  27. I have an Unraid server and I'm trying to set up ZoneMinder. I have a generic USB webcam, JIGA something or other. I'm just trying to monitor my room when I'm not around; I think the roommates are snooping. I can't get the camera to activate. I am not a Linux user, I am a total noob. I built both my main rig and my server, but I am still learning this operating system.
    1 point
  28. So I tried some settings on my MSI B460 Torpedo motherboard which look like they fixed the issue. I have tried multiple reboots from the webUI and Unraid is starting up correctly without dropping into the BIOS.
1. Click Settings
2. Access Advanced
3. Power Mgmt Setup
4. DTM - I changed it to Enabled
5. Save changes and reboot
This did the trick for me. Why this approach works I don't know; maybe someone can throw some light on it.
    1 point
  29. A stupidly easy way to have this happen is to have a local terminal (or SSH session) logged in with the current working directory set to /mnt/disk1 or /mnt/cache. But you can try to figure out what's holding it up via:
lsof /mnt/disk1
lsof /mnt/cache
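A related sketch: fuser gives a similar picture and also shows the type of access (assuming the tool is present on the box):
fuser -vm /mnt/disk1
fuser -vm /mnt/cache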
    1 point
  30. How do I remove a pool device?
BTRFS
A few notes:
- unRAID v6.4.1 or above required.
- Always a good idea to back up anything important on the current pool in case something unexpected happens.
- You can only remove devices from redundant pools (raid1, raid5/6, raid10, etc.), and make sure to only remove one device at a time, i.e., you cannot remove 2 devices at the same time from any kind of pool; you can remove them one at a time after waiting for each balance to finish (as long as there's enough free space on the remaining devices).
- You cannot remove devices past the minimum number required for the profile in use, e.g., 3 devices for raid1c3/raid5, 4 devices for raid6/raid10, etc. The exception is removing a device from a two-device raid1 pool; in this case Unraid converts the pool to the single profile.
Procedure:
- Stop the array.
- Unassign the pool disk to remove.
- Start the array (after checking the "Yes, I want to do this" box next to the start array button).
- A balance and/or a device delete will begin, depending on the profile used and the number of pool members remaining. Wait for pool activity to stop; the stop array button will be inhibited during the operation, and this can take some time depending on how much data is on the pool and how fast your devices are.
- When the pool activity stops, or the stop array button becomes available, the removal is done.
ZFS
A few notes:
- unRAID v6.12-rc2 or above required.
- Always a good idea to back up anything important on the current pool in case something unexpected happens.
- Currently you can only remove devices from 3- or 4-way mirrored pools (raid1), and make sure to only remove one device at a time, i.e., if you start with a 4-way mirror you can remove two devices, making it a 2-way mirror, but they must be removed one at a time.
- Currently removing a complete vdev from a mirrored pool is not supported. Removing a device from a 2-way mirror will work but leave the mirror degraded, i.e., a new replacement device should then be added.
Procedure:
- Stop the array.
- Unassign the pool disk to remove.
- Start the array (after checking the "Yes, I want to do this" box next to the start array button).
- The removed device will be detached at array start.
- Check the "pool status" page to confirm everything looks good (CLI equivalents are sketched below).
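For reference, a sketch of CLI equivalents of those status checks (/mnt/cache and the pool name "tank" are only example names):
btrfs balance status /mnt/cache    # BTRFS: is the balance / device delete still running?
btrfs filesystem show /mnt/cache   # BTRFS: which devices are still part of the pool
zpool status -v tank               # ZFS: pool health after the device is detached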
    1 point
  31. First off, welcome! It's actually even easier than that... (give or take) No need to stop the array to change a disk share to cache only, but you will need to move the files yourself. Also, no need to fix the mapping to the container... Regardless of which disk the files are on, you can still point it to /mnt/user/appdata, and if the share is set to cache only, it will only be on your cache drive! So...
(1) unRAID Main > Shares > appdata: select Use cache disk: Only and remove any include/exclude, it's not needed. Cache only is cache only. Apply, done.
(2) Move the files from disk 1 to the cache drive (this can be done in multiple ways, however you ABSOLUTELY don't want to copy from a user share to a disk share (or whichever way round that is)). The easiest way is to just use another computer and copy the appdata folder from disk 1 to the cache drive (if in Windows, use the basic File Explorer copy/paste). If you do not currently export the disk shares, click on each disk (1, and cache), set export to Yes for either SMB or NFS, then use SMB or NFS from another PC to copy it to cache, and when finished delete the copy from disk 1.
----- (If you don't feel comfortable using SSH, please don't; it shouldn't be needed) -----
I have noticed some issues at times attempting to copy in-use or permission-locked files, so if that's the case and you know how to use Putty to SSH in, just type this:
cp -avr /mnt/disk1/appdata /mnt/cache/appdata
Once it's successfully transferred (please verify the contents prior to continuing by checking your cache share for the appdata folder; one way to check is sketched below):
rm -rvf /mnt/disk1/appdata
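One way to do that verification from the same SSH session, as a simple sketch (plain diff, nothing Unraid-specific):
# compare the two trees; no output from diff means the copy matches
diff -rq /mnt/disk1/appdata /mnt/cache/appdata && echo "copies match"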
    1 point
  32. Same logs on both modified/default templates (unable to connect to server); I'll need to get connected first before I go any further since it seems that's half my battle. Thanks for the information so far.
    0 points
  33. It's kind of funny: if you read the Plex forum, they say that since build 1.28.1 HA is broken because of an update of the Intel iHD driver (Plex Forum), and yet for us 1.28.1 is better than 1.28.0. I don't get it.
    0 points