Leaderboard

Popular Content

Showing content with the highest reputation on 01/31/23 in all areas

  1. I am surprised that there are not more comments on this new episode of Uncast, so I'll share my thoughts. To put it simply, I was disappointed by this episode. I am not looking to be mean or anything, but I think it is important to acknowledge that and explain why. My goal is to provide feedback and improve things if my opinion is shared by others. Let's start by saying that I do appreciate Ed's work for the Unraid community and that I would not be capable of doing what he is doing on his channel or on this first Uncast. My disappointment principally comes from the content of this episode. It felt to me that there was a lot of retro-gaming content and only a little Unraid content sprinkled here and there. I have nothing against retro-gaming, but when I launch Uncast, I am expecting to hear mainly about Unraid (70-80%). Sure, when there is a guest, I understand that there would be talk about what the guest is doing, how it interacts with Unraid, etc. Here, it felt like it was more 20-30% Unraid. The episode was also quite long; that's not an issue by itself when there is a lot of content I like, but here it seemed to never end since it felt mostly off-topic for me. Anyway, I am still eager to listen to the next episode; we will see what direction it goes in the future. 🙂
    3 points
  2. Two of the most excellent, polite, and helpful humans I have never met. Thank you both for your great work on ZFS over the years.
    3 points
  3. You are absolutely correct. He took my manual build process and automated it so well that I have not had to think about it at all anymore! He really took this plugin to another level, and now we just wait for the next Unraid release so we can deprecate it.
    3 points
  4. It's described at the very bottom of your GitHub link: Unraid Template Note: A template for Unraid can be found here: https://raw.githubusercontent.com/DeBaschdi/docker.solaranzeige/master/Templates/Unraid/my-Solaranzeige.xml Please save it into \flash\config\plugins\dockerMan\templates-user; after that you can use this template in Unraid's WebUI: Docker > Add Container > Select Template and choose Solaranzeige. \flash means the Unraid USB stick; that's where you'll find the Config folder etc. A screenshot of the template is also included.
    2 points
  5. HIGHLY recommended NOT to patch your docker files manually and to use the plugin instead. Patching manually means that if/when you update the OS to 6.12, any manual patches that you are applying automatically will potentially interfere with the OS and be a big pain to troubleshoot.
    2 points
  6. Hello, I came across a small issue regarding the version status of an image that apparently was in OCI format. Unraid wasn't able to get the manifest information file because of wrong headers. As a result, checking for updates showed "Not available" instead. The docker image is the linuxGSM docker container and the fix is really simple. This is for Unraid version 6.11.5, but it will work even for older versions if you find the corresponding line in that file. SSHing into the Unraid server, in file /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php, change line 448 to this:
$header = ['Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json,application/vnd.oci.image.index.v1+json'];
And the version check worked after that. I suppose this change will be reverted upon server restart, but it would be nice if you can include it in the next Unraid update 😊 Thanks
    1 point
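For anyone who wants to poke at this by hand, here's a minimal shell sketch of the same Accept list; the curl call at the end is illustrative only (the registry endpoint and image path are assumptions, and a real pull token would be needed):

```shell
# The three media types the update check needs to accept, including the OCI
# index type that was missing (same list as the DockerClient.php change above)
ACCEPT='application/vnd.docker.distribution.manifest.list.v2+json'
ACCEPT="$ACCEPT,application/vnd.docker.distribution.manifest.v2+json"
ACCEPT="$ACCEPT,application/vnd.oci.image.index.v1+json"
echo "$ACCEPT"

# Hypothetical manual manifest check against a registry (commented out
# because it needs a bearer token and network access):
# curl -sI -H "Accept: $ACCEPT" -H "Authorization: Bearer $TOKEN" \
#   "https://registry-1.docker.io/v2/<image>/manifests/latest"
```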
  7. Hello All! As I'd alluded to in my earlier SR-IOV guide, I've been (...slowly...) working on turning my server config/deployment notes into something that'd at least have the opportunity to be more useful to others as they're using UnRAID. To get to the point as quickly as possible: The UnRAID Performance Compendium I'm posting this in the General section as it's all eventually going to run the gamut, from stuff that's 'generically UnRAID', to container/DB performance tuning, VMs, and so on. It's all written from the perspective of *my* servers though, so it's tinged with ZFS throughout - what this means in practice is that, while not all of the information/recommendations provided will apply to each person's systems, at least some part of them should be useful to most, if not all (all is the goal!). I've been using ZFS almost since its arrival on the open source scene, starting back with the release of OpenSolaris in late 2008, and using it as my filesystem of choice wherever possible ever since. I've been slowly documenting my setup as time's gone on, and as I was already doing so for myself, I thought it might be helpful to build it out a bit further in a form that could be referenced by others (if they so choose). I derive great satisfaction from doing things like this, relishing the times when work's given me projects where I get to create and then present technical content to technical folks... But with the lockdown, I haven't gotten out much, and work's been so busy with other things, I haven't much been able to scratch that itch. However, I'm on vacation this week, and finally have a few of them polished up to the point that I feel like they can be useful! Guides currently included are (always changing, updated 08.03.22): The Intro Why would we want ZFS on UnRAID? What can we do with it? - A primer on what our use-case is for adding ZFS to UnRAID, what problems it helps solve, and why we should care. 
More of an opinion piece, but with some backing data enough that I feel comfortable and confident in the stance taken here. Also details some use cases for ZFS's feature sets (automating backups and DR, simplifying the process of testing upgrades of complex multi-application containers prior to implementing them into production, things like that). Application Deployment and Tuning: Ombi - Why you don't need to migrate to MariaDB/MySQL to be performant even with a massive collection / user count, and how to do so Sonarr/Radarr/Lidarr - This is kind of a 'less done' version of the Ombi guide currently (as it's just SQLite as well), but with some work (in progress / not done) towards getting around a few of the limitations put in place by the application's hard-coded values Nextcloud - Using nextcloud, onlyoffice, elasticsearch, redis, postgres, nginx, some custom cron tasks, and customization of the linuxserver container (...and zfs) to get highly performant app responsiveness even while using apps like facial recognition, full text search, and online office file editing. Haven't finished documenting the whole of the facial recog part, nor elasticsearch. Postgres - Keeping your applications performance snappy using PG to back systems with millions of files, 10's or even hundreds of applications, and how to monitor and tune for your specific HW with your unique combination of applications MariaDB - (in progress) - I don't use Maria/MySQL much personally, but I've had to work with it a bunch for work and it's pretty common in homelabbing with how long of a history it has and the dev's desire to make supporting users using the DB easier (you can get yourself in a whole lot more trouble a whole lot quicker by mucking around without proper research in PG than My/Maria imo). Personally though? Postgres all the way. Far more configurable, and more performant with appropriate resources/tuning. 
General UnRAID/Linux/ZFS related: SR-IOV on UnRAID - The first guide I created specifically for UnRAID, posted directly to the forum as opposed to in github. Users have noted going from 10's of MB/s up to 700MB/s when moving from default VM virtual NICs over to SR-IOV NICs (see the thread for details). Compiled general list of helpful commands - This one isn't ZFS specific, and I'm trying to add things from my bash profile aliases and the like over time as I use them. This one will be constantly evolving, and includes things like "How many inotify watchers are in use... And what the hell is using so many?", restarting a service within an LSIO container, bulk downloading from archive.org, and commands that'll allow you to do unraid UI-only actions from the CLI (e.g. stop/start the array, others). Common issues/questions/general information related to ZFS on UnRAID - As I see (or answer) the same issues fairly regularly in the zfs plugin thread, it seemed to make sense to start up a reference for these so it could just be linked to instead of re-typing each time lol. Also includes information on customization of the UnRAID shell and installing tools that aren't contained in the Dev/Nerdpacks so you can run them as though they're natively included in the core OS. Hosting the Docker Image on ZFS - squeezing the most performance out of your efforts to migrate off of the xfs/btrfs cachepool - if you're already going through the process of doing so, might as well make sure it's as highly performant as your storage will allow. You can see my (incomplete / more to be added) backlog of things to document as well on the primary page in case you're interested. I plan to post the relevant pieces where they make sense as well (e.g. the Nextcloud one to the lsio nextcloud support thread, cross-post this link to the zfs plugin page... probably not much else at this point, but just so it reaches the right audience at least). 
Why Github for the guides instead of just posting them here to their respective locations? I'd already been working on documenting my homelab config information (for rebuilding in the event of a disaster) using Obsidian, so everything's already in markdown... I'd asked a few times about getting markdown support for the forums so I could just dump them here, but I think it must be too much of a pain to implement, so github seemed the best combination of minimizing amount of time re-editing pre-existing stuff I'd written, readability, and access. Hope this is useful to you fine folks! HISTORY: - 08.04.2022 - Added Common Issues/general info, and hosting docker.img on ZFS doc links - 08.06.2022 - Added MariaDB container doc as a work-in-progress page prior to completion due to individual request - 08.07.2022 - Linked original SR-IOV guide, as this is closely tied to network performance - 08.21.2022 - Added the 'primer' doc, Why ZFS on UnRAID and some example use-cases
    1 point
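As a taste of the "helpful commands" category mentioned above, here's a sketch of how the inotify question can be answered on a generic Linux box using standard /proc interfaces (nothing here is taken from the linked compendium itself):

```shell
# Current system-wide inotify watch limit
cat /proc/sys/fs/inotify/max_user_watches

# Rough count of inotify instances per PID: each open inotify fd shows up
# as an anon_inode symlink under /proc/<pid>/fd, so counting those per
# process shows who is holding the most instances
find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null \
  | cut -d/ -f3 | sort | uniq -c | sort -nr | head
```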
  8. If you prefer video: https://www.youtube.com/@uncastpod
    1 point
  9. Thanks for the feedback. As they say, different strokes for different folks; We do plan to have a wide range of guests and topics and it won't always be 100% or even majority Unraid-centric but will include topics and discussions relevant to Unraid users yet perhaps slightly adjacent to Unraid itself. With that said, if you have topic or show ideas, questions or comments, we are always open to them here and on speakpipe! https://www.speakpipe.com/uncast Thanks for listening!
    1 point
  10. Thank you very much for the guidance, I have created a new post where you suggested.
    1 point
  11. I'm an idiot. It was a cable issue. I'd checked all the SATA cables going into the drives but not the SAS cable in the LSI card itself. I must not have pushed it in all the way and it had come loose. Two days troubleshooting over a loose cable. Thanks for the help.
    1 point
  12. Using '/mnt/cache/Games/' solved my issue of most games not launching and throwing up memory and other errors. Just tested Psychonauts, will try a much more recent game in the future. Thanks @Josh.5 and @ich777 My previous post is here
    1 point
  13. That's what I thought, thanks!
    1 point
  14. The HBA appears to be initializing correctly, though the firmware is quite old. Since it has a BIOS installed you can check there: if the drives are not detected in the HBA BIOS they also won't appear in Unraid, and in that case it's either a cable/connection problem or some problem with the HBA.
    1 point
  15. sorry to have troubled you all ... issue seems to have been resolved as soon as I modified the extra samba settings to be as I indicated in my last post. My PDC is now the master browser and I did not have to reboot all my boxes ... the change was immediate right after making changes & re-starting the UNRAID array. WOOOHOO! Hopefully this helps someone else save time. 🙂 thanks, E
    1 point
  16. I've since updated my smb additional settings to ...
[global]
domain master = no
preferred master = no
local master = no
will let you know if this works, thanks, E
    1 point
  17. Thank you. Once I get my internal USB 2.0 header extension cable (so I don't block a PCIe slot, due to the horrible design of my SuperMicro server motherboard), I'll submit a request to transfer it.
    1 point
  18. Dude!!! I have been trying to solve why I cannot stream from my VM without insane amounts of lag. I tried literally everything and was about to quit UNRAID until I saw your post. My IRQ was doubled on both my GPU and the audio. Tried the MSI util, checked the box to switch it, and my VM is running flawlessly now. Thank you so much.
    1 point
  19. We have been running the server for a month now, including a successful scheduled parity check. Something so simple that I didn't even think of it. I have just finished configuring the backup server using the same disks and, of course, setting the parity drive to not spin down. About to start the data transfer for the first backup. Many, many thanks for the solution.
    1 point
  20. It was mentioned before in this thread by Tom that for v6.12 it will still be required, but it won't be for a future release, when multiple arrays are supported.
    1 point
  21. Purely in terms of timing, this already looks very good (6 am = son, 7 am = mom, 8 am = dad): When I put my daughter to bed yesterday, 3 minutes seemed too short to me, by the way, since it still took relatively long before warm water came out of the sink. So I've extended it to 5 minutes.
    1 point
  22. See the discussion: you can either put individual ZFS disks in the unraid array, protected by unraid parity just as if they were xfs or btrfs, or make an actual raidz ZFS pool as an unraid pool, with its own protection. If in the array it's business as usual; if you make a raidz pool it will likely be able to spin down, but any access will require all the drives of the pool to spin up, since the data is scattered across them.
    1 point
  23. Just FYI for everyone: I've built a new PowerTOP 2.15 package that leaves out the translations (it doesn't use any anyway) and saves a few KB, so installing it isn't strictly necessary. The package can be fetched directly via un-get with my Slackware repository added (description here) or downloaded directly from the repository here. If you install it via un-get, please uninstall first with `un-get remove powertop` and then reinstall with `un-get install powertop`. If you download it directly, please uninstall the old package with `removepkg` first, put the new one in /boot/extra, and install it with `installpkg`. You can safely ignore the warnings when quitting PowerTOP; it still works flawlessly:
powertop: /lib64/libncursesw.so.6: no version information available (required by powertop)
powertop: /lib64/libtinfo.so.6: no version information available (required by powertop)
modprobe cpufreq_stats failed
Loaded 0 prior measurements
RAPL device for cpu 0
RAPL device for cpu 0
Devfreq not enabled
glob returned GLOB_ABORTED
    1 point
  24. Because I know just enough to be dangerous and don't recall who makes which GPUs. My bad. I used to obsess over this stuff, but I've got too many other priorities in life to worry about keeping up. Now I just dabble when I can. Tucks tail and scurries away
    1 point
  25. @trurl Thank you so much for pointing me in the right direction. I don't know in what fever dream state I decided to add those port forwarding options in my router, but they have been nuked. I don't need remote access to my server other than through the My Servers plugin, and that is now set up properly. I reset my DHCP by deleting config/network.cfg on my flash drive. I then reset my port numbers by editing the config/ident.cfg file on my flash drive. Reading through the list of security best practices, I thankfully have done/follow most of them (with the obvious exception of my huge port forwarding mistake). I now have access to my web GUI and have a lot more knowledge in my tool belt. Thanks again!!!
    1 point
  26. Just wanted to chime back in on progress info. I've replaced one NVME SSD (the most recent one with errors) and switched to a different adapter card for the other NVME SSD. It's been 48 hours and not a single error has occurred yet. Crossing fingers it was just hardware that needed swapping! If I don't reply again, assume everything is fixed!
    1 point
  27. This, as long as zfs is on partition #1, at least initially.
    1 point
  28. P.S. I find it mildly amusing that the UPS cable setting for slave is "ether", and the UPS type is "net" ethernet, get it? Yeah, yeah, small things for small minds.
    1 point
  29. I wouldn't recommend a P400 or a P2000 for 2-3 streams, since you can get a T400 (Turing based) cheap nowadays and it is more power efficient than a P400 and even more so than a P2000... If you only want to transcode 2-3 4K streams, an Intel CPU with integrated iGPU (Skylake+ <- of course depending on what the source material is) is enough; you can get a full list of which codecs are supported over here. Also keep in mind that at idle the iGPU only consumes a few microwatts, where a Nvidia GPU draws at least about 4-6 Watts, and at full blast the iGPU only consumes about 10-12 Watts depending on the model, so it's really power efficient. Even my J4105 is capable of 2 simultaneous 4K transcodes, and this is a really low power chip, not clocked as high as a desktop Intel CPU. In your case your CPU is Broadwell based, which is a little bit unfortunate because this is the last generation where the iGPUs don't support h265 (HEVC).
    1 point
  30. Based on my experience of what I've seen the Unraid team doing (behavioral thinking), I can actually provide some kind of an answer. Most of the betas go to number 20 to 30 before an RC is published. They are at number 5 currently. I've never tracked the time associated with those, and I think that would be a misleading way to think about it; it depends on the features being implemented. If you are asking for a specific date, you are out of luck, so Soon™️.
    1 point
  31. 1 point
  32. I updated my docker and it didn't come back up properly. I restarted it. The log now just shows:
[migrations] started
[migrations] no migrations found
-------------------------------------
(linuxserver.io logo)
Brought to you by linuxserver.io
-------------------------------------
To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------
**** Server already claimed ****
No update required
[custom-init] No custom files found, skipping...
Starting Plex Media Server. . . (you can ignore the libusb_init error)
[ls.io-init] done.
When I go to https://ipadress:32400/web/index.html I just get:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Response code="503" title="Maintenance" status="Plex Media Server is currently running database migrations."/>
Any ideas where to go from here?
    1 point
  33. This is what I did to fix it:
Stop Swag docker
Go to \\<server>\appdata\swag\nginx folder
rename original nginx.conf to nginx.conf.old
copy nginx.conf.sample to nginx.conf
rename ssl.conf to ssl.conf.old
copy ssl.conf.sample to ssl.conf
restart swag docker
This worked for me
    1 point
  34. I have included your update for the next Unraid version. Thanks
    1 point
  35. Thanks for this. You saved me some time. The PR is already accepted and the fix is applied on github.
    1 point
  36. Hey everyone, head over to the Plugins tab and check for updates. My Servers plugin version 2023.01.23.1223 is now available, which should resolve many of the issues folks are reporting. This release includes major architectural changes that will greatly improve the stability of My Servers; we highly encourage everyone to update.
## 2023.01.23.1223
### This version resolves:
- My Servers client (Unraid API) not reliably connecting to My Servers Cloud on some systems
- Server name not being shown in the upper right corner of the webgui
- Cryptic "Unexpected Token" messages when using a misconfigured URL
- DNS checks causing delays during boot if the network wasn't available
- Some flash backup Permission Denied errors
### This version adds:
- Internal changes to greatly improve connection stability to My Servers Cloud
- More efficient internal plugin state tracking for reduced flash writes
- PHP 8 compatibility for upcoming Unraid OS 6.12
    1 point
  37. The patch allows you to manipulate the BAR size through sysfs. See the example of how to use it in this reddit post. This is the User Script I created to set the bar size to 16GB. You would obviously need to tweak this to the proper bar size, PCIe address, folder path, device id, etc.
#!/bin/bash
echo -n "0000:0d:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
echo 14 > /sys/bus/pci/devices/0000\:0d\:00.0/resource1_resize
echo -n "10de 2782" > /sys/bus/pci/drivers/vfio-pci/new_id || echo -n "0000:0d:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
# Bit Sizes
# 1 = 2MB
# 2 = 4MB
# 3 = 8MB
# 4 = 16MB
# 5 = 32MB
# 6 = 64MB
# 7 = 128MB
# 8 = 256MB
# 9 = 512MB
# 10 = 1GB
# 11 = 2GB
# 12 = 4GB
# 13 = 8GB
# 14 = 16GB
# 15 = 32GB
As stated, the patch and the script work, but it was ultimately unneeded with my 4070 Ti, since passthrough works when I have Above 4G and Resize Bar enabled in my BIOS. With those enabled the 4070 Ti defaults to a 16GB bar size, so there is no need to manually change it. Other setups/GPUs will give a code 43 or not pass through at all with Resize Bar enabled in the BIOS, which is where this patch would be helpful to manually resize the bar. In that case I'm not sure what GPU-Z would report. The only way to know if it's making a difference is to benchmark a game known to benefit from resizable bar with and without manipulating the bar size.
    1 point
  38. I was curious, so I did a few benchmarks passing all 32 cores/threads to the VM.
Forza Horizon 5 1440p Extreme (No DLSS):
VM Rebar OFF 20cpu: 116 fps
VM Rebar ON 20cpu: 129 fps
VM Rebar ON 32cpu: 134 fps
Bare metal Rebar ON: 144 fps
Cyberpunk 1440p RT Ultra (DLSS):
VM Rebar OFF 20cpu: 81.07 fps
VM Rebar ON 20cpu: 95.29 fps
VM Rebar ON 32cpu: 98.26 fps
Bare metal Rebar ON: 102.21 fps
That's pretty dang close to bare metal performance with full resizable bar, given the extra overhead from unraid and vfio. Hitting 129 fps in the VM in Forza is amazing when with the 7900XTX I could never beat 114 fps with identical settings.
    1 point
  39. I ditched the 7900 XTX for a 4070 Ti and I'm seeing encouraging results.
Forza Horizon 5 1440p Extreme (No DLSS):
VM 256MB Bar: 116 fps
VM 16GB Bar: 129 fps
Baremetal Rebar Off: 127 fps
Baremetal Rebar On: 144 fps
Cyberpunk 1440p RT Ultra (DLSS):
VM 256MB Bar: 81 fps
VM 16GB Bar: 95 fps
Baremetal Rebar Off: 94 fps
Baremetal Rebar On: 102 fps
Of note, I went from testing in bare metal Windows (with rebar on), rebooted into unraid, and the bar size was still set to 16GB. I'm not sure what negotiated that or if it remembers the last setting. Setting the bar to 16GB basically brings it to parity with bare metal when rebar is OFF. There's still a big performance delta, which I think is largely due to fewer cpu cores in the VM. Time Spy Graphics scores are identical in VM and bare metal (23k).
    1 point
  40. Some interesting testing and a roadblock. When resize bar is enabled and I boot Windows on bare metal, the bar is being sized to the max 32GB/256MB. This sees HUGE performance gains in Forza Horizon 5.
Resize bar off: 113 fps
Resize bar on: 139 fps
With the resize bar patch I can easily resize the bars to the max; however, passthrough fails when bar 0 is set to 32GB. I don't get any errors, it just doesn't work. I can set bar 0 to any other size and passthrough works. With the bars set to 16GB/256MB I'm getting 114 fps and improved latency. Really frustrated I can't get the VM to boot when set to 32GB to match bare metal.
    1 point
  41. Because the house was on fire after being struck by lightning?
    1 point
  42. My wife unintentionally deleted a whole year's worth of photos from my server. She has no idea how it happened because she did not intend to do it. Since I keep three backups of all the important data on unRAID (external USB drives, second unRAID server with weekly backup, in the cloud), restoring that folder for a whole year of photos was pretty easy. There is no way parity was going to save me (her) from that. Now that I know better, I have told my wife to never accidentally or intentionally delete something again because that has to be a deliberate action even if she claims she does not know how she did it!
    1 point
  43. To be honest I'm really getting discouraged around trusting my data to Unraid. I updated to 6.11.1 because the SMB performance in 6.10.x was absolutely horrible compared to earlier versions. Then 6.11.2 has a severe bug that seems inexcusable and may cause data loss. And, after 4 years of many reporting the issue and even YouTube videos made on the subject, the USB installer still hangs for a great many of us at "syncing file system." And the list goes on with lots of issues that should have never been released or been fixed long ago. Unraid isn't open source and we're at the mercy of the devs. We pay for one or more licenses yet the overall experience is we're all on our own for support as if it's open source. I haven't seen anyone from Unraid participating in 99% of the threads here. Given the many recent issues I have to wonder if Unraid shouldn't just be moved to open source? We're not getting the sort of support, or quality, one can usually expect from paid software.
    1 point
  44. Hello, after a long time I got the Unraid API up and running. I have no idea why it works now. However, what does not work 100% is the transfer to Home Assistant using mqtt. Parts like the array or the dockers are displayed in HA, but, and this would be the more important thing, the VMs are not displayed at all. I hope you have a tip for me. Greetings Maggi
    1 point
  45. thanks. I thought this "Pre sales" forum was the place where the developers of this software/license would respond (vs. other forums where end-users help each other). Guess I was mistaken.
    1 point
  46. @ich777 was just telling me that apparently some games do not like some of the magic that exists in our Linux file systems, and one possible solution is to use a direct path to the disk rather than the array. E.g.: instead of `/mnt/user/games/` use `/mnt/diskX/games/`. I see that a few people have run into that XFS error. I have been unable to reproduce it on my Steam library. But if someone here is seeing that, perhaps you could give this suggestion by ich777 a shot and report back?
    1 point
  47. Hoopster, Thank you for your post. It helped me a lot. After taking a close look at the Device Manager tree I noticed it was showing some missing drivers in the "OTHER DEVICES" node on the tree as shown in the JPG below. As you advised I just did an "Update Driver" selection and then I pointed to the "virtio-win-0.1.1" CD Drive and it automatically installed the missing drivers. I have to say that setting up a VM for the first time is a great learning experience. Again thanks for your help.
    1 point
  48. The 16 -16 etc isn't really telling you the slot speed but what it is currently running at. To get the correct speed you really need to put a load onto the card. It has to do with power saving of the GPU when it's not being used. It's quite easy to see this in a Windows VM using GPU-Z. You will see the speed there that the card can run at under bus speed. Hover over that and it will tell you the speed the card is currently running at. Then to the right, if you click on the question mark, you get the option to put some stress on the GPU and you will see the number change.
    1 point