Leaderboard

Popular Content

Showing content with the highest reputation on 07/23/22 in all areas

  1. Since this is still essentially the most efficient and therefore very popular platform, just a quick note that the board is once again orderable at a reasonable price (€172.26) on Amazon UK. However, the delivery time is 4-6 weeks.
    3 points
  2. Hello all,

Background: I have an 8-HDD, 40TB array in an Unraid server running v6.8.2 DVB. I have 40GB of memory, of which only about 8GB is used; I don't use memory for caching, for now. I have a 2TB SSD mounted as cache, hosting docker appdata and VM domains, and until now I was using the Unraid cache system to store only new files from a data share, with a script moving them to the array when the SSD was 90% full. With this method only the most recently written files were on the cache, so I rethought the whole thing, see below. I use Plex to stream to several devices on LAN (gigabit ethernet) or WAN (gigabit fiber internet), and also seed torrents with transmission. Here is my share setup:

So I wanted to dynamically cache files from the data share to the SSD. The main file consumers are Plex and transmission, which keep their data in a data share. As a fail-safe, I set the mover to only move files if cache usage is above 95%. I wrote a script to automagically handle caching of the data share, using the SSD up to 90% (including appdata and VMs).

What the script needs:
- an RPC-enabled transmission installation (optional)
- access to the Plex web API (optional)
- the path to a share on the cache
- the path to the same share on the array

What the script does: when you start it, it makes basic connection and path checks, then three main functions are executed:
- Cleans the selected share on the cache to keep at least 10% free (configurable). To free space, the oldest data is copied back to the array and then deleted from the cache.
- Retrieves the list of active torrents from the transmission-rpc daemon and copies them to the cache without removing them from the array. (Note: active torrents are those downloading and seeding during the last minute, but also those starting and stopping; that's a caveat if you start/stop a batch of torrents and launch the script within that minute.)
- Retrieves the list of active playing sessions from Plex and copies (via rsync, same as mover or unbalance) movies to the cache without removing them from the array. For series, there are options to copy either the current and next episode, or all episodes from the current one to the end of the season.
- Cleans again.

Notes:
- rsync is used to copy, like mover or unbalance, so it syncs data (doesn't overwrite existing files); in addition, hard-links, if any (from radarr, sonarr, etc.), are recreated on the destination (cache when caching, array when cleaning the cache).
- If you manually send a file to the share on the cache, it will be cleaned when it gets old; you may write a side script for that (for working files, libraries, etc.).
- Because of the shfs mechanism, accessing a file from /mnt/user will read/write from the cache if it exists there, otherwise from the array. Duplicate data is not a problem and globally speeds things up.

The script is very useful when, like me, you have noisy/slow mechanical HDDs for storage and a quick, quiet SSD to serve files.

Script installation: I recommend copy/pasting it into a new script created with User Scripts.

Script configuration: no parameters are passed to the script, so it's easy to use with the User Scripts plugin. To configure it, the relevant section is at the beginning of the script; the parameters are pretty much self-explanatory:

Here is a log from execution:

Pretty neat, huh?

Known previous issues (updates may come to fix them later):
- At the moment, the log can become huge if, like me, you run the script every minute. This is the recommended interval because the transmission-RPC active torrent list only contains torrents from the last minute. Edit 13-02-2020: corrected in latest version.
- At the moment, an orphan file (only on the cache) being played or seeded is detected, but not synced to the array until it needs to be cleaned (i.e. fresh torrents, recent movies fetched by *arr and newsgroups, etc.). Edit 13-02-2020: corrected in latest version: it syncs back to the array during the configured day (noisy) hours.
- I don't know if/how shfs will handle the file on the cache. I need more investigation/testing to see whether it efficiently reads the file from the cache instead of the array. I guess transmission/Plex need to close and reopen the file to pick it up from the new location? (My assumption is that both read chunks, so caching should work.) Edit 13-02-2020: yes, after checking with the File Activity plugin, that's the case, and Plex/transmission pick up the file on the cache as soon as it is available!

Conclusion, disclaimer, and link:
The script has run successfully in my configuration since yesterday. Before using rsync I was using rclone, which has a cache back-end, a similar Plex caching function, plus a remote (I used it for transmission), but it's not as smooth and quick as rsync.
Please note that, even though I run it 1440 times a day (User Scripts, custom schedule * * * * *), this script is still experimental and can:
- erase (or more likely fill up) your SSD. Edit 13-02-2020: I have not experienced this, and error handling has improved
- erase (not likely) your array. Edit 13-02-2020: technically, this script never deletes anything on the array, so it won't happen
- kill your cat (sorry)
- make your mother-in-law move into your home and stay (I can do nothing about that)
- break your server into pieces (you can keep these)
Thanks for reading to this point, you deserve to get the link to the script (<- link is here). If you try it or have any comments, ideas, recommendations, questions, etc., feel free to reply.
Take care, Reynald
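For illustration, a minimal sketch of the kind of rsync calls described above, assuming the usual Unraid path conventions (/mnt/cache for the pool, /mnt/user0 for the array-only view); the share layout and file names are placeholders, not taken from the actual script:

# cache an actively played/seeded file without removing it from the array;
# -H recreates hard-links on the destination, --ignore-existing skips files already there
rsync -aH --ignore-existing /mnt/user0/data/movies/example.mkv /mnt/cache/data/movies/

# when cleaning, the oldest cached file is synced back to the array first, then removed from the cache
rsync -aH --ignore-existing /mnt/cache/data/movies/old.mkv /mnt/user0/data/movies/
rm /mnt/cache/data/movies/old.mkv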
    1 point
  3. Hi, The server is running perfectly! Not much to say about it, actually. I ripped all my 1350+ movies, partly using the built-in Pioneer UHD drive and the MakeMKV docker. Also added another WD drive, so it's 8 in total now. I'm not using VMs, so I can't comment on that. BTW, also very satisfied with the Define 7 XL case. If you are looking for something with space, this is a very nice one.
    1 point
  4. Can someone post a screenshot of the leave and rejoin buttons? Also post the diagnostics zip file?
    1 point
  5. I've used ghost82's method before and it does work. You can also set most newer versions of Windows 10 to mirror between the basic display adapter and the real video card. The nice part of this setup is that you can get video out from the VM even if it doesn't boot successfully, unlike with VNC or RDP running in the guest. However, some 3D software doesn't play nice in this configuration because it gets confused by the primary adapter not supporting features it needs.
    1 point
  6. It's also worth mentioning that this issue is flooding the syslog with errors, generating over 5MB of log messages per day, which probably is not good for the health of the system.
    1 point
  7. I've only taken a quick look at it and gave up since my priorities shifted a bit. I also don't think this repo is actively maintained anymore, and it may be a dead end too...
    1 point
  8. Newly arriving files are distributed according to the new rules. Your old files stay where they are; changed rules do not touch existing files. If you want, you can copy/move those files afterwards. I use krusader/ich777 or MC for that, depending on whether there are many or few files. It makes the most sense to let the hard drives rest as long as possible, and for write accesses you achieve that by putting a sufficiently large to very generous cache in front of them. Preferably an SSD cache, because SSDs quickly go back to sleep once their write work is done. Only the mover then reads the SSD at the scheduled time and stores the data on the hard drives designated by your rules. This way the hard drives can sleep until you want to read something from them or the mover writes.
    1 point
  9. This mod aims to disable the external firmware in the flash chip.
    1 point
  10. I've had the exact same problem for 2 months now. Unraid 6.10 wouldn't boot, a BIOS update fixed that, but that's when all my GPU problems started. The second GPU became the primary GPU. I was able to get a VM to start and give video output on the primary GPU, but it would stop after a minute and the VM log would say "could not power on, device stuck in D3". I think I have fixed it. In the BIOS menu, APM (Advanced Power Management) was set to disabled. I suspected that the GPU was turning off for some reason, but with APM disabled I assumed it wasn't APM powering it down. After trying everything else I randomly tried enabling APM, and now my primary GPU outputs the BIOS and boot screen. So I think setting APM to enabled was the fix. Just gotta redo my cooling loop to be 100% sure. It's counterintuitive, because you'd think the default state of APM would be to leave everything powered on.
    1 point
  11. With continuous medium to high CPU load the CPU cannot reach a C-state; in your case it's about 30% load. Disk4 is spun down, Disk7 is spinning. In the Disk Settings you can configure an automatic spindown. The power supply can make a difference of between 5 and 30W. Titanium PSUs are the most efficient, but also the most expensive. At your current consumption a Corsair RM550x (2021) won't gain you anything, because it only draws less than most other Gold PSUs when below 30W. Do you have that many HDDs, or why the SAS card? With ITX it really is hard to find something good. An ITX board with 8x SATA and 2x M.2 doesn't exist, for example, except maybe the Asrock Rack E3C256D4ID2, but that has no monitor outputs and nobody knows what it consumes either. That would then only work with a graphics card, because on the board's own output you only see Unraid. No idea what you need, but I don't need more than Nextcloud and Plex.
    1 point
  12. Are you sure the names are correct? From what I found they should be:
kvm-amd.avic=
kvm-amd.npt=
kvm-amd.nested=
Check your syslog when booting with your arguments; if they are wrong you should see some logged messages about invalid parameters or something similar. If parameters are wrong, they are simply ignored by the kernel.
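For reference, a minimal sketch of where such parameters typically go on Unraid, on the append line of /boot/syslinux/syslinux.cfg (the values shown are placeholders, not a recommendation):

label Unraid OS
  kernel /bzimage
  append kvm-amd.avic=1 kvm-amd.npt=1 kvm-amd.nested=0 initrd=/bzroot

After rebooting, 'cat /proc/cmdline' shows exactly what the kernel received, which helps confirm the spelling.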
    1 point
  13. I think it's probably time Unraid supported more boot methods. Now that it has a lot of advanced functionality like Docker and VMs and is becoming more critical to people's setups, more redundancy should be available. I'm not suggesting that the USB method should be done away with, but it should be just one way to boot Unraid. As a new user to Unraid and someone who has built many redundant systems and writes server software (including resilient databases and cluster software), a few design choices do seem wrong to me but can be corrected. Please give me the benefit of the doubt with that statement, as I don't mean to come in as the new guy and say you're doing it all wrong!

I do like that Unraid runs from memory; it keeps it fast and keeps developers constrained, since they know it has to fit in memory and cannot be a resource hog. But the 32GB USB limit isn't great: it's becoming more difficult to get high-quality name-brand USB drives in this capacity, not impossible but difficult. It's also becoming harder to get USB 2.0-only drives, high-quality ones anyway. These things can be mitigated by supporting USB 3.0 drives and higher capacities, but then we move onto USB flash quality. As the capacities have increased, the NAND quality has decreased. We're in QLC territory now, often with bargain-barrel flash at extremely high transistor densities, which lends itself to unreliability. Combined with that, many of the good sticks run very hot and aren't at all intended to be left plugged into a USB port, powered on, for years at a time. I echo some earlier posters' statements about heat and how some of the sticks are so hot, even when just left idle without any reads/writes occurring, that you cannot pull them from the USB port without being burned or feeling that they're uncomfortably hot to the touch. Probably the final nail in the USB situation is that you're performing writes to it when configuration changes are made but can't easily verify they worked, and if you then later reboot or cold start the system, that is when the USB drive is most likely to fail, leaving you in a precarious situation. There's no RAID1 capability; we don't even have file checksums.

What I'd really like is the capability to install Unraid on normal drives with RAID1, just for the added redundancy and flexibility. It would also allow those of us who are building more critically depended-upon servers to spend an amount commensurate with our systems. I'm currently building a wildly expensive Unraid server and the linchpin is a $15 USB stick. If I had the option of using a SATA DOM with high-quality SLC flash, or a SATA or NVMe SSD with SLC flash, then I would, and I would likely use RAID1.

Also, I recently discovered after using Unraid that you cannot enable or start VMs unless the array is started. Some on Discord alluded to this being tied to the license, which, as you are all aware, is tied to the USB stick's GUID. It seems very odd to me that anything should stop VMs from being started independently of the Unraid disk array. They seem so separate to me; what if I have all my VMs on a pool device or a ZFS array? If I'm doing some kind of disk operation on my main array that means it has to be offline for several days, then I cannot start up any VMs? That seems like bad design to me, and it's probably why some people on the forums and on Discord are running Unraid under a more full-featured hypervisor. I know this issue isn't strictly USB-drive related, but it's all part of the same tangled mess.

The licensing is tied to the USB drive, which is currently the only boot method; the drives are prone to failure; and you can't run VMs without the array enabled, which requires the USB drive for licensing, etc. I fully intend to build a whole bunch of Unraid servers for myself and friends, and this is probably the biggest sore point: when you're building in so much redundancy, be it dual PSUs, dual NICs, dual-parity-protected disk arrays, dual SSDs in RAID1 for your cache and/or VMs, it all still comes down to such an unreliable boot method. That just seems to me like something that needs to be corrected for the project to keep progressing and improving, just as TrueNAS is adding better VM support and offering Linux with better drivers, and just as pfSense now supports ZFS booting with RAID1 disks, etc. All these projects in the same sphere are examining things that made sense years ago and saying, you know, today that doesn't make sense and needs to evolve so we can better serve the needs of our users. I hope my post comes across with the positive intent that I meant; I'm definitely not saying the sky is falling, I just want to express how important I feel more boot options are. This topic started in 2014, it's now 2022. I'd love to see this happen before the end of 2024, really.
    1 point
  14. No, as you need to map it as a serial port and the dev name is different.
    1 point
  15. There is no solution for the moment. The only thing you can't get is the temperature. For now we can't get Autofan control; I just set my fan curve in the BIOS, which is the only way to control them at the moment.
    1 point
  16. @MVLP In a console, change the ownership of the files in /mnt/user/appdata/ftbdirewolf20_118:
chown -R nobody:users /mnt/user/appdata/ftbdirewolf20_118
@Z-Server I don't know which container you're running, but try fixing the ownership as above, using the directory that is configured for whichever container you're running. Longer explanation: all the original containers were running the server inside using a user with UID 1001. For anyone who had created a new account on their Unraid server, this would have matched up with a user on the Unraid host. Containers should be running as the nobody user (UID 99), so all the containers now map to the correct user on the Unraid server when running. This is really only an issue when you have to write out persistent files like the Minecraft files.
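As a quick check that the ownership change took effect (using the same example path as above), list the files with numeric IDs; they should now show UID 99 / GID 100 (nobody:users) rather than 1001:

ls -ln /mnt/user/appdata/ftbdirewolf20_118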
    1 point
  17. It's caused by an update to the UD Preclear plugin. Update to the latest UD Preclear and then reboot. It will clear up the issue.
    1 point
  18. YASSSSSS!!! I love this community!! I have 3 "Unraid servers" in my house; one is my "walltop" PC in my office. OpenRGB is 100% needed; sometimes I just want ALL RGB off, and up until now I couldn't do that! Here is a shot of my "Walltop" Unraid setup =D Thank you SO much @capt.asic and @ich777
    1 point
  19. I can't seem to find this container within the Apps section in Unraid. It's still available from DockerHub. Has the template for Unraid been removed for some reason?
    1 point
  20. That depends on how you use it. If you have little enough data that it all fits on one hard drive, it makes sense for power-consumption reasons to put everything on that one drive (with parity). My amounts of data are too large, which is why I span individual shares across several disks. If you would rather keep things separate for other reasons, you can of course do that / leave it as it is. You can simply change the setting and it will then use all the disks according to the allocation rules you have configured in the shares.
    1 point
  21. If you haven't figured it out, I was able to backfill all of my Tautulli data into influxdb using their backfill script on the 'develop' tag. Unfortunately, it does have some side effects from what seem like data-structure changes, but it fills out enough that it's usable. I've mainly noticed some fields acting up, like 'Time', 'Plex Version', and 'Stream Status'. Sometimes the 'Location' would show as 'None - None' as well, but it usually got it right. To get started, change your varken container to the 'develop' tag. From there, open its console and run these commands. This moves the script to a location where it can recognize the varken python import:
mv /app/data/utilities/historical_tautulli_import.py /app
From there, run the script with a few arguments:
python3 /app/historical_tautulli_import.py -d '/config' -D 365
-d points to your appdata/varken.ini
-D is the number of days to run it for
That should backfill everything into your influxdb. I tried to run this on the 'latest' tag, but unfortunately the script is broken there and doesn't fill any data, so you have to use the develop tag.
    1 point
  22. Plugin is now released and available in the CA App:
    1 point
  23. Usually not needed. It takes longer, but it unmounts. But I will maybe update the script so it's done through UD as well. Of course umount (or whatever) is responsible for this, and it has been known for a decade, and I do not blame UD for that, but only UD can solve it as long as this bug is present. My apologies if it came across differently. Note: I never asked for a default solution, only an optional UD setting that does a similar thing to the SMB unremounter. But the script works just as well; I'm absolutely fine with that. Nothing can guarantee this, and SMB shares are not only provided through permanently online servers. Every Windows client supports SMB shares. Not unusual. Not a good idea to mount them? Maybe. But it happens. I don't see any, except for the offline hanging, which is "solved" by the script. Why do you think that? I'm using the usual recent version and calling it through wget; otherwise it wouldn't make sense to provide this script, as nobody could use it. But I found a bug in my script: I forgot to parse the server name, so the URL would be wrong for most users. I will fix that in the next release. It will contain your suggestion as well. Thanks for that!
    1 point
  24. Suggestions:
Change this: /boot/config/plugins/unassigned.devices/samba_mount.cfg
To this: /tmp/unassigned.devices/config/samba_mount.cfg
A copy of the unassigned devices configurations is moved to the /tmp file system when Unraid is started and updated whenever a change is made in UD. It is also copied back to the flash when a change is made, so the flash is kept current. The idea is to cut down on flash file accesses, which are slower than RAM.
Also add -fl (force, lazy unmount) to the umount command. The unmount will be forced and will return immediately.
And some criticism:
It almost appears that you are blaming UD for the issues with commands like 'lsof'. 'lsof' and 'df' will both hang badly when a CIFS share is off-line. What you are running into is what I found with these commands. If a CIFS share goes off-line, you can type 'df' and it will hang indefinitely and not time out. To get around this, UD only queries the remote shares when a user is on the UD page; nothing is done in the background. The commands will also time out so UD does not appear to hang.
I am not a fan of mounting and unmounting a remote share like this. It seems a bit brute force and prone to problems, when a much cleaner approach would be to keep the remote server on-line. Unfortunately that depends on a solid, reliable network. You have said you like remote shares with a local mount point for backups, and I guess I get that, but you are running into the disadvantages of CIFS mounts. You are probably pushing it a bit farther than is practical.
You apparently have made changes to UnassignedDevices.php. Because of this, I can offer no support to anyone using your script.
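For reference, a minimal sketch of the suggested unmount call; the mount point is an example (UD typically mounts remote shares under /mnt/remotes), not taken from the script itself:

umount -fl /mnt/remotes/SERVER_share
# -f forces the unmount, -l detaches lazily so the command returns immediately even if the share is off-line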
    0 points