Leaderboard

Popular Content

Showing content with the highest reputation on 01/09/23 in all areas

  1. I've done the same thing using an unassigned SSD drive. Just stop the containers, move the appdata folders over to the new location, update the appdata location in each container, and start containers back up. As long as you make the root path consistent, it's easy.
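A minimal sketch of those steps, assuming example container names (plex, radarr) and an example SSD mount point — adjust both to your system. DRY_RUN defaults to 1 here, so it only prints each command for sanity-checking; set DRY_RUN=0 to actually run it.

```shell
#!/bin/bash
# Sketch of the appdata move described above. Container names and the SSD
# mount point are examples, not from the post; DRY_RUN=1 (default here)
# only prints each command so you can review the plan first.
OLD=/mnt/user/appdata
NEW=/mnt/disks/ssd/appdata          # assumed unassigned-device mount point
CONTAINERS="plex radarr"            # example container names

run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

for c in $CONTAINERS; do run docker stop "$c"; done   # stop containers first
run rsync -a "$OLD"/ "$NEW"/        # copy appdata while nothing is writing
# ...update each container's appdata host path in its template, then:
for c in $CONTAINERS; do run docker start "$c"; done
```

Keeping the container-side paths identical (the "consistent root path" the post mentions) means only the host path in each template changes.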
    3 points
  2. 3/1/20 UPDATE: TO MIGRATE FROM UNIONFS TO MERGERFS, READ THIS POST. New users, continue reading.
13/3/20 Update: For a clean version of the 'How To', please use the github site https://github.com/BinsonBuzz/unraid_rclone_mount
17/11/21 Update: Poll to see how much people are storing. I've added a Paypal.me upon request if anyone wants to buy me a beer.

There have been a number of scattered discussions around the forum on how to use rclone to mount cloud media and play it locally via Plex, Emby etc. After discussions with @Kaizac, @slimshizn and a few others, we thought it'd be useful to start a thread where we can all share and improve our setups.

Why do this? Well, if set up correctly, Plex can play cloud files regardless of size, e.g. I play 4K media with no issues, with start times of under 5 seconds, i.e. comparable to spinning up a local disk. With access to unlimited cloud space available for the cost of a domain name and around $510/pm, this becomes a very interesting proposition, as it reduces local storage requirements, noise, etc. At the moment I have about 80% of my library in the cloud, and I struggle to tell if a file is local or in the cloud when playback starts. To kick the thread off, I'll share my current setup using gdrive. I'll try and keep this initial post updated.

Update: I've moved my scripts to github to make it easier to keep them updated: https://github.com/BinsonBuzz/unraid_rclone_mount

Changelog
- 6/11/18 – Initial setup (updated to include rclone rc refresh)
- 7/11/18 – Updated mount script to fix rc issues
- 10/11/18 – Added creation of extra user directories (/mnt/user/appdata/other/rclone & /mnt/user/rclone_upload/google_vfs) to the mount script. Also fixed a filepath typo
- 11/11/18 – Latest scripts added to https://github.com/BinsonBuzz/unraid_rclone_mount for easier editing
- 3/1/20 – Switched from unionfs to mergerfs
- 4/2/20 – Updated the scripts to make them easier to use and control. Thanks to @senpaibox for the inspiration

My Setup

Plugins needed:
- Rclone – installs rclone and allows the creation of remotes and mounts. New scripts require v1.5.1+
- User Scripts – controls how mounts get created

How It Works
- Rclone is used to access files on your Google Drive and to mount them in a folder on your server, e.g. mount a gdrive remote called gdrive_vfs: at /mnt/user/mount_rclone/gdrive_vfs
- Mergerfs is used to merge files from your rclone mount (/mnt/user/mount_rclone/gdrive_vfs) with local files that exist on your server and haven't been uploaded yet (e.g. /mnt/user/local/gdrive_vfs) in a new mount, /mnt/user/mount_unionfs/gdrive_vfs
- This mergerfs mount allows files to be played by dockers such as Plex, or added to by dockers like Radarr, without the dockers even being aware that some files are local and some are remote. It just doesn't matter
- The use of an rclone vfs remote allows fast playback, with files streaming within seconds
- New files added to the mergerfs share are actually written to the local share, where they stay until the upload script processes them
- An upload script uploads files in the background from the local folder to the remote. This activity is masked by mergerfs, i.e. to Plex, Radarr etc. the files haven't 'moved'

Getting Started

Install the rclone plugin and, via the command line, run rclone config and create 2 remotes:
- gdrive: – a drive remote that connects to your gdrive account. I recommend creating your own client_id
- gdrive_media_vfs: – a crypt remote that is mounted locally and decrypts the encrypted files uploaded to gdrive:
It is advisable to create your own client_id to avoid API bans.

Mount Script – see https://github.com/BinsonBuzz/unraid_rclone_mount for the latest script
- Create a new script using the User Scripts plugin and paste in the rclone_mount script
- Edit the config lines at the start of the script to choose your remote name, paths etc.
- Choose a suitable cron job. I run this script on a 10-minute */10 * * * * schedule so that it automatically remounts if there's a problem
The script:
- Checks if an instance is already running, and (if a cron job is set) automatically remounts if the mount drops
- Mounts your rclone gdrive remote
- Installs mergerfs and creates a mergerfs mount
- Starts dockers that need the mergerfs mount, e.g. Plex, Radarr

Upload Script – see https://github.com/BinsonBuzz/unraid_rclone_mount for the latest script
- Create a new script using the User Scripts plugin and paste in the rclone_upload script
- Edit the config lines at the start of the script to choose your remote name, paths etc. – USE THE SAME PATHS
- Choose a suitable cron job, e.g. hourly
Features:
- Checks if rclone is installed correctly
- Sets bwlimits. Google caps uploads at 750GB/day. I have added bandwidth scheduling to the script so you can e.g. set an overnight job to upload the daily quota at 30MB/s, have it trickle up over the day at a constant 10MB/s, or set variable speeds over the day
- The script now stops once the 750GB/day limit is hit (rclone 1.5.1+ required), so there is more flexibility over upload strategies
- I've also added --min-age 10mins to stop any premature uploads, and exclusions to stop partial files etc. getting uploaded

Cleanup Script – see https://github.com/BinsonBuzz/unraid_rclone_mount for the latest script
- Create a new script using the User Scripts plugin and set it to run at array start (recommended) or array stop

In the next post I'll explain my rclone mount command in a bit more detail, to hopefully get the discussion going!
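The moving parts described above can be condensed into a rough sketch; the paths mirror the guide's examples, and the real, maintained scripts live at the github link — this is only to show the shape of the flow. DRY_RUN defaults to 1 here, printing each command instead of executing it.

```shell
#!/bin/bash
# Rough shape of the mount flow -- use the github scripts for real use.
# DRY_RUN=1 (default here) prints each command instead of executing it.
REMOTE=gdrive_media_vfs                      # the crypt remote from rclone config
LOCAL=/mnt/user/local/gdrive_vfs             # new files land here first
MOUNT=/mnt/user/mount_rclone/gdrive_vfs      # rclone vfs mount
MERGED=/mnt/user/mount_unionfs/gdrive_vfs    # what plex/radarr actually see

run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run mkdir -p "$LOCAL" "$MOUNT" "$MERGED"
run rclone mount --daemon --vfs-cache-mode full "$REMOTE:" "$MOUNT"
# merge local + cloud so dockers can't tell which files are which
run mergerfs "$LOCAL:$MOUNT" "$MERGED" -o cache.files=partial,dropcacheonclose=true
```

The guide's scripts add the instance check, rc refresh, bandwidth options and docker starts on top of this skeleton.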
    1 point
  3. Goal: Easy remote WOL of my UNRAID server. There are a thousand ways to do this, but this worked for me and was pretty easy (some steps might not even be necessary, but doing them anyway does no harm, while leaving them out might...).
HowTo: Set up WOL for UNRAID (assuming an onboard NIC and WOL activated in the motherboard's BIOS)
*Put to sleep*
1. Open the web terminal or SSH into the server
2. Type "ifconfig" and note the IP address and MAC address (ether) of the NIC in use
3. Type "ethtool -s eth0 wol g"
4. Type "echo -n mem > /sys/power/state"
*Wake up* using one of:
1. MacOS / Linux / Windows
a) Download "MiniWOL2" from https://www.tweaking4all.com/home-theatre/miniwol2/ and install it
b) Click the mini icon, push the "Add" button and name the device to wake (Alias in Menu)
c) Set "IPv4 Address" (manually or select from the ARP list) and "MAC-Address" (manually or click 'detect') in the appropriate fields
d) Set "Broadcast" to 255.255.255.255
e) Push "Test" to wake the device (it needs to be in sleep mode: see above)
2. Windows (alternative)
a) Download "wolcmd.exe" from https://www.depicus.com/wake-on-lan/wake-on-lan-cmd and unpack it
b) Open a command prompt and change to the download directory
c) Type "wolcmd.exe <ether> <ip-dest> 255.255.255.255"
3. Linux (alternative)
a) Type "wakeonlan <MAC-Address>" OR
b) Type "wol <MAC-Address>"
I did not include any explanations on purpose, so if you need any, feel free to google :-) The only intent of posting this guide was hopefully sparing you guys some time if you just want to have it up and running. Feel free to question and comment.
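The "put to sleep" half of the steps above can live in a single user script. This is a sketch: the NIC name is an example (check yours with ifconfig), and DRY_RUN defaults to 1 so the commands are printed rather than executed; set DRY_RUN=0 and run as root to actually suspend.

```shell
#!/bin/bash
# Sleep half of the WOL steps above, as one script (run as root).
# DRY_RUN=1 (default here) prints the commands instead of executing them.
NIC=eth0                                    # example; check with ifconfig
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run ethtool -s "$NIC" wol g                 # arm wake-on-magic-packet
run sh -c 'echo -n mem > /sys/power/state'  # suspend to RAM
# wake it later from another box, e.g.: wakeonlan <MAC-Address>
```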
    1 point
  4. Got it. One small extra tip: in Nextcloud, under "Folder name" you can also select subfolders, e.g. "Bilder/Familie". That way you can neatly group several shares under one folder.
    1 point
  5. If you want to pass through the Unraid share "Bilder", you have to click "Add another Path, Port, Variable, Label or Device" at the bottom of the Nextcloud template, then select "Path" and enter:
Name: Bilder <here you can pick any name you'll recognize, free choice>
Container Path: /mnt/Bilder/ <this is the path under which we want to find the share inside the container>
Host Path: /mnt/user/Bilder/ <this is the path of our Unraid share, i.e. here the share "Bilder" that lives on the array>
Default Value:
Access Mode: Read/Write
Now head over to Nextcloud. There, enter the container path as the path.
    1 point
  6. I'm not entirely sure about that yet either. So far I've set it to read/write here.
    1 point
  7. Thanks! It seems that the X12 will be fine here. I have IPMI and at the same time also video output via the monitor. ls /dev/dri also gives hope now...
    1 point
  8. Without it, nothing is available inside the Docker container either. The path you enter there is also the one you then have to specify inside Nextcloud.
    1 point
  9. Other than shfs using a lot of CPU, I'm not seeing much wrong. Try updating to v6.11.5 and post new diagnostics if there are still issues.
    1 point
  10. That is always possible. Passing memtest is not always a definitive indication that there is no RAM issue (whereas a failure is always an issue).
    1 point
  11. You don't need to do a preclear on each drive, but doing one can give you some confidence, and usually if a drive is going to fail, it does so at the very beginning of its usage or after a few years (the so-called bathtub curve, as described by musclecups in the first post). So if you do a preclear, what are the benefits?
- You find out if a drive is bad straight away, before you expand or rebuild your array.
- You find out it's bad during an easy return window, where you can get an exchange directly from the seller, as opposed to dealing with an RMA with the drive's manufacturer (which could take more than a week, vs. a few hours to a day depending on where you bought the drive).
Personally, I do the pre-clear write and the post-clear read; I skip the pre-read, which I feel is unnecessary. For the 18TB drives I buy, the write + post-read generally takes around 46 hours in total. That seems like a while, doesn't it? But you can do multiple drives at once. In fact I did 4 x 18TB drives at once without issue. Get a strong HBA and you can easily preclear 10+ drives at the same time without performance degradation. Patience is key, though, especially with unRAID; I don't mind waiting for the extra confidence that a drive is good. One added benefit of a preclear is that you get to hear the drive running flat out for a few hours. Sometimes drives will pass but generate an annoying high-pitched tone or an audible clicking noise, things that may give you pause about introducing them to your array.
    1 point
  12. I replaced the memory modules and there are no reboots now, and no kernel errors either. I'm still testing the old modules via memtest and they don't show any errors, so possibly some hardware incompatibility... Therefore I must admit that in my case this is not related to Unraid.
    1 point
  13. Hi, I love this plugin and have a suggestion for improvement. It would be nice if the plugin would also support compressing and decompressing files and folders.
    1 point
  14. Hi JorgeB, thank you for taking a look at the logs for me and confirming my thought that there's nothing in them that indicates the reboot. I've had the system running on a single 8GB stick overnight; it's finishing a parity check before I play around with it more. I believe one of the 8 x 8GB sticks could have a fault that wasn't picked up by memtest, as so far it seems to be stable, just with high RAM usage! I'll add a second stick later on and look at replacing them as well.
    1 point
  15. Yeah, it's easy to do, it just takes some piecing together of info scattered around. I've added a quick guide to my docs site, mostly for future reference: https://docs.phil-barker.com/posts/upgrading-ASM1166-firmware-for-unraid/
    1 point
  16. Unfortunately there's nothing relevant logged, this usually suggests a hardware problem, or some power issue.
    1 point
  17. Jan 7 08:02:34 Tower kernel: ata11.00: qc timeout (cmd 0xec)
Jan 7 08:02:34 Tower kernel: ata11.00: failed to IDENTIFY (I/O error, err_mask=0x4)
Jan 7 08:02:34 Tower kernel: ata11.00: revalidation failed (errno=-5)
Jan 7 08:02:34 Tower kernel: ata11: hard resetting link
Jan 7 08:02:34 Tower kernel: ata12.00: failed to IDENTIFY (I/O error, err_mask=0x4)
Jan 7 08:02:34 Tower kernel: ata12.00: revalidation failed (errno=-5)
Jan 7 08:02:34 Tower kernel: ata12: hard resetting link
Jan 7 08:02:35 Tower kernel: ata12: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 7 08:02:35 Tower kernel: ata11: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jan 7 08:02:46 Tower kernel: ata12.00: qc timeout (cmd 0xec)
Jan 7 08:02:46 Tower kernel: ata11.00: qc timeout (cmd 0xec)
Jan 7 08:02:46 Tower kernel: ata12.00: failed to IDENTIFY (I/O error, err_mask=0x4)
Jan 7 08:02:46 Tower kernel: ata12.00: revalidation failed (errno=-5)
Jan 7 08:02:46 Tower kernel: ata12: limiting SATA link speed to 3.0 Gbps
Jan 7 08:02:46 Tower kernel: ata12: hard resetting link
Jan 7 08:02:46 Tower kernel: ata11.00: failed to IDENTIFY (I/O error, err_mask=0x4)
Jan 7 08:02:46 Tower kernel: ata11.00: revalidation failed (errno=-5)
Jan 7 08:02:46 Tower kernel: ata11: limiting SATA link speed to 3.0 Gbps
Jan 7 08:02:46 Tower kernel: ata11: hard resetting link
Jan 7 08:02:47 Tower kernel: ata12: SATA link up 6.0 Gbps (SStatus 133 SControl 320)
Jan 7 08:02:47 Tower kernel: ata11: SATA link up 6.0 Gbps (SStatus 133 SControl 320)
Jan 7 08:02:52 Tower kernel: ata13: COMRESET failed (errno=-16)
Jan 7 08:02:52 Tower kernel: ata13: limiting SATA link speed to 3.0 Gbps
Jan 7 08:02:52 Tower kernel: ata13: hard resetting link
Jan 7 08:02:58 Tower kernel: ata13: COMRESET failed (errno=-16)
Jan 7 08:02:58 Tower kernel: ata13: reset failed, giving up
Jan 7 08:02:58 Tower kernel: ata13.00: disable device
Jan 7 08:02:58 Tower kernel: ata13: EH complete
There are problems with all 3 disks connected to the Marvell controller, including some dropping offline. Those controllers are not recommended; if you can, I suggest replacing it with a recommended model.
    1 point
  18. That's from the crash, right? Unfortunately the syslog server probably gave up before the kernel panic came. Please try booting in legacy mode instead of UEFI, in case you have UEFI set as the default in your BIOS. What I notice from your diagnostics: NerdPack is no longer supported on 6.11, please delete it. I see you have a Gigabyte W480M VISION W; quite a few people here use that one, or am I mistaken? Please delete this from your go file:
#Initialize Fans
modprobe it87 force_id=0x8628
...and also this from your syslinux.config:
acpi_enforce_resources=lax
and instead install this plugin (note that this option is to be used with caution): ...please reboot afterwards; you may have to reconfigure your fans in the Auto Fan plugin. The plugin should be compatible with your ITE chip as far as I know. Is the RAM okay, or rather, have you already run the integrated memtest? Can you please connect a monitor and add this to your go file:
setterm --blank 0
This prevents the monitor from switching off, but the monitor has to stay connected! EDIT: I wouldn't do that yet until the problem is solved.
    1 point
  19. Sorry for my late feedback but it works like a charm. Thank you so much for sharing your solution. Great!
    1 point
  20. It's definitely unexpected that this path has a space in it, and I really don't know how to solve that; I'll try it later on my machine. I can't help here because I've simply never done it like that. I can only try whether the same thing happens with the Linux directory if I also put a space at the end. Please also keep in mind that the path /mnt/cache/appdata/... is not chosen by accident: you should use /mnt/cache/appdata/... instead of /mnt/user/appdata/..., since some games won't run on the latter because it's a FUSE file path, and it is set to /mnt/cache/appdata/... to avoid other weird behavior (please also make sure that you've set your appdata share in the Share settings to Use Cache "Only" or "Prefer"). This path is set by the developers and there's nothing I can do about it. Basically this container does the same as if you ran the dedicated server on bare metal and installed everything by hand, but everything is automated.
    1 point
  21. Nope, they run individually, so you can run them in parallel. Make sure your script runs and doesn't exit, for example in a while loop... Here's a sample of how it looks while one script runs permanently and others just run on a schedule, if that's the question.
    1 point
  22. Config will survive reboot
    1 point
  23. I have its predecessor, the J4105, at 6-7W with SSDs only. You have 2 HDDs, which account for the 2W extra consumption. You won't get deeper than C6 with that board. Regards, Joerg
    1 point
  24. I'm a big novice when it comes to dockers and Unraid, but I'm trying to figure out if there is a way to periodically run, in an automated fashion, a shell command that I need in order to give Deluge access to a private tracker I'm a member of (because of my dynamic IP). I can open a console by clicking on Deluge-VPN in the Docker tab of the Unraid UI and type this curl command [edited for anonymity]:
curl -c ~/foo.cookies -b ~/foo.cookies https://a.site.net/json/dynamicSeedbox.php
I've already created the foo.cookies file with a prior command; I just want to figure out a way to execute this command automatically every few hours. If that's possible, could someone give me specific steps on how to accomplish this? Thanks!
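One way to do this (a sketch, not tested against that tracker): install the User Scripts plugin, create a script like the one below, and give it a custom cron schedule such as 0 */4 * * * (every 4 hours). The container name and the in-container cookie path are assumptions — adjust them to match your setup. DRY_RUN defaults to 1 and just prints the command; set DRY_RUN=0 to run it for real.

```shell
#!/bin/bash
# User Scripts entry: re-run the tracker's dynamic-seedbox URL inside the
# Deluge container every few hours. Container name and cookie path are
# assumptions; DRY_RUN=1 (default here) prints instead of executing.
CONTAINER=binhex-delugevpn                 # assumed container name
COOKIES=/root/foo.cookies                  # assumed ~/foo.cookies inside the container
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run docker exec "$CONTAINER" \
    curl -c "$COOKIES" -b "$COOKIES" \
    "https://a.site.net/json/dynamicSeedbox.php"
```

Running the curl via docker exec keeps the cookie jar inside the container, the same place it lives when you run the command from the container's console.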
    1 point
  25. Evening all. Everything has been working without any issues for over a week and a half without powertop's auto-tune being run. Thanks for your assistance.
    1 point
  26. Neither the name nor the physical location matters; it's just that if you had the previous drive assigned on the "disk3" line of the Main page, then the new one needs to be on that same line, replacing the failed one.
    1 point
  27. The firmware update may have solved the issue. An hour in to a preclear without any issues yet. The help is much appreciated. Still unsure why it was working in the past.
    1 point
  28. Well, I can't believe I'm saying this, but that worked. I went from F11 to F12 on the BIOS of my Z390 AORUS PRO WIFI. I also made sure to set the iGPU to enabled and have it be the default GPU (I also have a GTX 1650 in there). Something about those actions seems to have worked. Thanks for the suggestion.
    1 point
  29. No, this container doesn't support nvenc.
    1 point
  30. Welcome to the friendliest server community around!
Tl;dr version: Be nice and cordial. Don't be a jerk or post anything illegal.
This forum is where our users can collaborate, learn, and provide input on new developments from Lime Technology and its partners. We have a strong team of community moderators, devs, and Lime Technology employees who strive to help as many people as possible. Participating in this forum means agreeing to the following community guidelines and rules. These guidelines and rules must be agreed to and adhered to in order to use this forum. Moderators and Lime Technology staff will enforce the community guidelines at their discretion. Anyone who feels a posted message doesn't meet the community guidelines is encouraged to report the message immediately. As this is a manual process, please realize that it may take some time to remove, edit or moderate particular messages.
Rules and Community Guidelines
To ensure a safe, friendly and productive forum, the following rules and guidelines apply:
Be respectful. Respect your fellow users by keeping your tone positive and your comments constructive and courteous. Respect people's time and attention by providing complete information about your question or problem, including product name, model numbers and/or server diagnostics if applicable.
Be relevant. Make sure your contributions are relevant to this forum and to the specific category or board where you post. If you have a new question, start a new thread rather than interrupting an ongoing conversation.
Remember this is mostly user-generated content. You'll find plenty of good advice here, but remember that your situation, configuration, or use case may vary from that of the individual sharing a solution. Some advice you find here may even be wrong. Apply the same good judgment here that you would apply to information anywhere on the Internet. The posted messages express the author's views, not necessarily the views of this forum, the moderators, or Lime Technology staff. As the forum administrators and Lime Technology staff can't actively monitor all posted messages, they are not responsible for the content posted by users and do not warrant the accuracy, completeness, or usefulness of any information presented.
Think before you post. You may not use, or allow others to use, your registration membership to post or transmit the following:
- Content which is defamatory, abusive, vulgar, hateful, harassing, obscene, profane, sexually oriented, threatening, invasive of a person's privacy, adult material, or otherwise in violation of any international, US or state-level laws. This includes text, information, images, videos, signatures, and avatars.
- "Rants", "slams", or legal threats against Lime Technology, another company or any person.
- Hyperlinks that lead to sites that violate any of the forum rules.
- Any copyrighted material, unless you own the copyright or have written consent from the owner of the copyrighted material.
- Spam, advertisements, chain letters, pyramid schemes, and solicitations. (Note: we have an Unraid Marketplace Board that includes a Good Deals section and a Buy, Sell, Trade section, and they have their own rules.)
You remain solely responsible for the content of your posted messages. Furthermore, you agree to indemnify and hold harmless the owners of this forum, any websites related to this forum, its staff, and its subsidiaries. The owner of this forum also reserves the right to reveal your identity (or any other related information collected on this service) in case of a formal complaint or legal action arising from any situation caused by your use of this forum.
Please note: When you post, your IP address is recorded. Repeated rule violations or egregious breach of the rules will result in accounts being restricted and/or banned at the IP level.
The forum software places a cookie, a text file containing bits of information (such as your username and password), in your browser's cache. Cookies are ONLY used to keep you logged in/out. The software does not collect or send any other form of information to your computer.
Lime Technology may, at its sole discretion, modify these Rules of Participation from time to time. For Unraid OS software, website and other policies, please see our policies page! If you have any questions, please contact support.
    1 point
  31. Do you have MergerFS set up? You should be able to see the rclone mount there and all the files unencrypted.
    1 point
  32. There are two types of snapshots available in QEMU: internal and external.
Internal – Requires QCOW2 images. Doesn't support OVMF, but I have seen a workaround: change pflash to rom, snapshot, but you have to do a manual revert.
External – Supports only --disk-only; no revert or delete options at present, and a manual revert needs the VM to be stopped.
Which VM types are people looking to snapshot? Does anyone know if proxmox supports OVMF? I have asked the libvirt team if they have any ideas on a timeline for full support for external snapshots.
Testing so far.
VM SeaBIOS, 2 x QCOW2 disk files:
root@computenode:~# virsh domfsinfo DebianSB
 Mountpoint           Name   Type   Target
--------------------------------------------
 /                    vda1   ext4   hdc
 /media/snapb/disk2   vdb    ext4   hdd
root@computenode:~# virsh snapshot-create DebianSB
Domain snapshot 1672498342 created
root@computenode:~# virsh snapshot-list DebianSB
 Name         Creation Time               State
---------------------------------------------------
 1672498342   2022-12-31 14:52:22 +0000   running
root@computenode:~# virsh snapshot-revert DebianSB
error: --snapshotname or --current is required
root@computenode:~# virsh snapshot-revert DebianSB --current
root@computenode:~#
root@computenode:~# virsh snapshot-info DebainSB
error: failed to get domain 'DebainSB'
root@computenode:~# virsh snapshot-info DebianSB
error: --snapshotname or --current is required
root@computenode:~# virsh snapshot-list DebianSB
 Name         Creation Time               State
---------------------------------------------------
 1672498342   2022-12-31 14:52:22 +0000   running
root@computenode:~# virsh snapshot-revert DebianSB 1672498342
root@computenode:~# virsh snapshot-list DebianSB
 Name         Creation Time               State
---------------------------------------------------
 1672498342   2022-12-31 14:52:22 +0000   running
root@computenode:~# virsh snapshot-list DebianSB
 Name         Creation Time               State
---------------------------------------------------
 1672498342   2022-12-31 14:52:22 +0000   running
 1672498947   2022-12-31 15:02:27 +0000   running
root@computenode:~# virsh snapshot-revert DebianSB 1672498947
root@computenode:~# virsh destroy DebianSB
Domain 'DebianSB' destroyed
root@computenode:~# virsh snapshot-create DebianSB
Domain snapshot 1672500774 created
root@computenode:~# virsh snapshot-list DebianSB
 Name         Creation Time               State
---------------------------------------------------
 1672498342   2022-12-31 14:52:22 +0000   running
 1672498947   2022-12-31 15:02:27 +0000   running
 1672500774   2022-12-31 15:32:54 +0000   shutoff
VM OVMF, 2 x QCOW2:
root@computenode:~# virsh snapshot-create DebianSO
error: Operation not supported: internal snapshots of a VM with pflash based firmware are not supported
root@computenode:~# # change pflash to rom
root@computenode:~# virsh snapshot-create DebianSO
Domain snapshot 1672502423 created
root@computenode:~# virsh snapshot-list DebianSO
 Name         Creation Time               State
---------------------------------------------------
 1672502423   2022-12-31 16:00:23 +0000   shutoff
    1 point
  33. 100% agree with the stubborn attitude of some people.
    1 point
  34. I would love to have built-in Wi-Fi support. Linux has had decent Wi-Fi support for years, so it's not like you have to completely build it from the ground up. I know the workarounds, but they aren't as elegant or streamlined as having it built in. I don't want to have to manage yet another device (a Wi-Fi bridge). The stubbornness of a number of people suggesting this band-aid as the answer, in lieu of just adding it, really bothers me.
    1 point
  35. Funny to get a reply to this so many months later. That's what I was doing, but I wanted to get minor revisions automatically because I spent way too much time flipping through the webgui of my server looking for routine maintenance to do. I want to tinker with the services I'm running, not the server they're running on. No one really seemed to care as the immediate reply was from a user who clearly didn't read what I wrote, and I got nothing else until this. Which (no offense) isn't what I asked either. I've since solved the problem on my own, in a way that I will not specify as multiple users will undoubtedly descend to tell me how wrong I'm using Unraid even though I paid for it and I can do what I like. Cheers!
    1 point
  36. After a ton of research I have figured it out. Under <os> put:
<smbios mode='host'/>
Directly below that, under <features>, put:
<kvm>
  <hidden state='on'/>
</kvm>
Under <cpu mode='host-passthrough'...> put:
<feature policy='disable' name='hypervisor'/>
I also deleted any lines pertaining to Hyper-V. Hope this helps anyone in the future.
    1 point
  37. I'm not sure why this was so difficult to find or answer, but here is how to correct the error without restarting the machine. From the console, log in to the terminal using the root account. Once logged in, type the following command:
/etc/rc.d/rc.php-fpm restart
Done. You can now log back in via the WebUI. Other useful Unraid commands can be found here: https://selfhosters.net/commands/
    1 point
  38. Just found it, you can grab it from here: Klick
cd ~
wget https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz
tar -xvf ffmpeg-release-amd64-static.tar.xz
cd ffmpeg-release-amd64-static
If you run this in an unRAID terminal, a static FFMPEG build is downloaded into your root home directory and you can then do whatever you want with it.
    1 point
  39. If I hadn't chatted so much, it would have gone faster, but I wanted to convey as much information as possible, and I just barely came in under the 10 minutes ^^
    1 point
  40. Thanks a lot Johnnie! I will look into it. The NVMe drive also goes offline if I copy huge files (40G) back and forth. I'll report back when I'm done.
EDIT: Performed a BIOS update, appended the line you mentioned and cleared the btrfs stats. Will monitor whether the problem still exists.
EDIT2: Still had some errors after doing that, but it was better. In the end I formatted my cache drives and made a new btrfs cache pool. After that I still had a lot of "PCIe Bus Error: severity=Corrected, type=Physical Layer" errors. That seems to be resolved by booting Unraid like this:
Unraid OS
kernel /bzimage
append initrd=/bzroot nvme_core.default_ps_max_latency_us=0 pcie_aspm=off
    1 point
  41. To be honest, I argued that when a core is isolated, the GUI should not let you select it for a container, as I knew this question was going to come up over and over again (which it has).
    1 point
  42. How about replacing it with a new feature related to shares rather than hardware ports? IOW, if you check this setting on a share, then as soon as a file in the share is accessed, all drives that participate in that share are immediately spun up as well? That setting combined with spin down timers could cover almost any scenario I can think of.
    1 point