Leaderboard

Popular Content

Showing content with the highest reputation on 02/24/24 in all areas

  1. On this very special episode of the Uncast Show, we are joined by the Founder and CEO of Lime Technology, Tom Mortensen @limetech, and Co-CEO Tiffany Jones @tiffanyj to talk about the origins of Unraid OS, the evolution of the product over the past 18+ years, and some upcoming changes to the company to better align the product with users' desires in Unraid 7 and beyond. Tom and Tiffany share their personal career journeys and take us down memory lane as Unraid incorporated VMs, Docker containerization, and GPU passthrough (a decision that dramatically shifted the course of the company with Linus Tech Tips), made key hires to expand the team, and they discuss some key projects and initiatives that got the company where it is today.
    The conversation then shifts gears to the future and goes into the upcoming changes to Unraid's pricing model for new licenses and why this change is crucial and necessary for sustaining the company and improving the product for all in the years to come. Please refer to our blog for full details on the upcoming pricing change.
    Wrapping up the episode, we'll get a first look at Unraid 7 and explore some exciting future OS features and projects such as making the unRAID array optional, integrating and maintaining plugins in the OS, VM snapshots and cloning, the "Hardware Database," the "My Friends Network," a new Unraid website, a public, open-source Unraid API and much, much more. We truly hope you enjoy this episode.
    2 points
  2. Thank you very much for the information :)
    2 points
  3. Unsurprising that this is leading to some concern and misconceptions. It's good that we long-time license holders are grandfathered, but at the end of the day I paid $100 or whatever 10 years ago and have had many thousands of dollars of development for that money. That sadly does not pay the bills at Limetech! With this new plan UnRAID keeps working, albeit without updates, which means you haven't lost functionality; you technically got what you paid for, which was the use of UnRAID in perpetuity and updates for a year. I'd imagine quite a few basic users don't even remember to update that regularly anyway! Security is an interesting topic, and I'm not sure which side of the fence I sit on: if you want to be up to date, should you pay up, or should security updates for an additional year be covered? From LT's PoV, the latter can only be done by maintaining multiple branches, and I don't think LT is big enough for that, so I wouldn't blame them for choosing the former... It wasn't so long ago that you'd pay regularly for new software versions; I used to buy every new version of Lightroom... Long story short, a balance needs to be struck and I don't blame LT for going down this route; it seems a reasonable balance. If I was buying again I'd just get the top tier and be done with it.
    2 points
  4. I have the desire to keep my cache drive full-ish of new media, so that:
    * Newly downloaded torrents seed from NVMe rather than disk
    * Users watching media get better performance (and more recent downloads are more likely to be watched)
    * My disks stay spun down more, for power consumption and noise
    * In-flight torrent/usenet downloads that are happening when mover is triggered do not get moved mid-download
    * When new media is downloaded that would make the cache too full, the mover moves off the oldest files to stay under the threshold (keep the newest files on cache up to the threshold).
    The advantage of my script over what you can do with Mover Tuning natively is this: the settings on age and size will delay moves, but once you reach those triggers, it will move everything that matches. This could result in completely emptying your cache, or moving more than it needed to. For example, let's say you start with an empty cache and have it set to move files older than 90 days at 50% full, then download enough content all on the same day to reach 49% of your cache space. Nothing happens. 90 days go by. Nothing happens. You download 2% additional content. All 49% of the 90-day-old content will be moved at once, leaving you at 2% cache. My script will only move the oldest content, just enough to drop your usage below the desired threshold. What's oldest could be from yesterday, or from a decade ago.
    WARNING: ONLY USE THIS SCRIPT IF YOU HAVE A MIRRORED CACHE, OR REALLY DON'T CARE IF YOU LOSE ITEMS HELD IN CACHE. (Without a mirrored cache, items kept in cache are not protected from drive failure.)
    The Mover Tuning plugin supports two features we can use to make this happen: 1) run a script before a move, and 2) don't move any files which are listed in a given text file. So all we have to do is make a script that puts the newest files into a text file (a rough sketch of such a script follows below).
    Setup: Copy the script into your appdata or somewhere (preferably someplace that won't get moved by mover). You may need to chmod the file to make it executable. I recommend appdata, because if you put it in data it may either get moved by the mover, or reading the file will spin up a drive when maybe there is nothing to actually move. Modify the variables at the top of the file as needed. Add the script to Mover Tuning to run before moves. Set Mover Tuning to ignore files contained in the output filename.
    If you run the script by hand (bash moverignore.sh) you can see the files that it will keep on the cache. You can then run the following command to test that mover will not move the files in question (any files in the ignore file should not appear in the output of this command):
        find "/mnt/cache/data" -depth | grep -vFf '/mnt/user/appdata/moverignore.txt'
    You can also run mover on the command line and verify that it does not move any of the listed files (but it should continue to move unlisted files). Additionally, if you want to manually move a directory off of cache you can run the following command:
        find /mnt/user/cache/DirectoryNameGoesHere* -type f | /usr/local/sbin/move&
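    A rough sketch of what such a pre-move script could look like (illustrative only, not the poster's actual script; the paths match the test commands above and the 80% threshold is an arbitrary example):
        #!/bin/bash
        # Keep the newest files on the cache up to a usage threshold by listing
        # them in an ignore file that the Mover Tuning plugin is told to skip.
        CACHE_PATH="/mnt/cache/data"                      # as in the test command above
        IGNORE_FILE="/mnt/user/appdata/moverignore.txt"   # as in the test command above
        THRESHOLD_PCT=80                                  # illustrative threshold

        total_bytes=$(df -B1 --output=size "$CACHE_PATH" | tail -1)
        target_bytes=$(( total_bytes * THRESHOLD_PCT / 100 ))

        : > "$IGNORE_FILE"   # start with an empty ignore list
        kept=0
        # newest files first: mtime, size, path (NUL-separated to survive spaces)
        while IFS= read -r -d '' rec; do
            rest=${rec#* }        # drop the mtime field
            size=${rest%% *}      # file size in bytes
            file=${rest#* }       # full path
            (( kept + size > target_bytes )) && break
            kept=$(( kept + size ))
            printf '%s\n' "$file" >> "$IGNORE_FILE"
        done < <(find "$CACHE_PATH" -type f -printf '%T@ %s %p\0' | sort -zrn)
    Mover Tuning then runs this before each move and skips everything listed in the ignore file, so only older files beyond the threshold remain eligible to be moved.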
    1 point
  5. Build’s Name: My Customised SFF Home Server Build
    Full Spec: PartPicker Link
    Usage Profile: Unraid Server for Containers, VMs and Storage Server
    Time to upgrade my home server again; I decided to downsize from my current Fractal Design XL to something smaller and more power efficient. I went with the Fractal Node 304 case. I'm moving from 12x 8TB disks with dual parity to 6x 20TB disks with single parity, so everything will fit. Using a smaller case will mean taking up a lot less space in my small home office! Here's a finished picture, with a rather long build thread below:
    ________________________________________________________________________
    So then, I started off by ordering the case. I managed to swap the white HDD caddies for black ones with a colleague, which looks much better. Then I swapped out the Fractal fans for black Noctua A9 and A14 Chromax fans. These look slightly better and are much quieter.
    Reused one of the LSI 9207-8i HBAs from the old server. The brackets are powder coated black and have a Fractal R3 40mm fan on the heatsink. The fans are mounted using black nylon nuts and bolts, reusing the same holes as the heatsink to keep things neat and tidy. Updated the firmware while I was at it.
    Installed a Corsair RM650 (2021) PSU and realised the motherboard ATX cables were pretty long. Time consuming, but I re-pinned and braided the connectors to make them the optimal length. The USB cables from the front IO to the motherboard were also pretty long, so I found some shorter ones on AliExpress, soldered these onto the original PCB and braided the cable. Also shortened and braided a few other cables, like the on/off and restart buttons and the HDD activity cable. I also removed the front panel audio cables as they're not needed for a server.
    Not a massive problem, but I noticed that the power cable orientation meant the cable at the PSU end pointed up and needed to loop round, which looked messy. So I found a left-angle IEC cable (that's a thing), braided it and re-terminated it at the case end. Now it points down and runs along the bottom of the case, much tidier.
    Next up was the motherboard's silver IO shield, which didn't look brilliant on the black case. I couldn't find a black one and thought about 3D printing one, but ended up just powder coating the original. Came out really well and looks much better.
    Installed everything on the motherboard and made a custom-length, braided cable for the CPU fan. Did the same for the two front fans and the exhaust fan.
    The case takes 6x 3.5" HDDs; these would be filled with my 20TB disks, so I needed somewhere to install the 2x 2.5" drives I use as cache drives. The easy option would be to mount them on the outside of the HDD caddies, but where's the fun in that! I decided to make my own bracket to mount them both on the side of the case. Fabricated these out of aluminium sheet and powder coated them black. Drilled two holes in the bottom of the case, then used some black, low-profile bolts to secure the bracket. These are hidden by the plastic feet mounting covers that run round the bottom edge of the case, so they can't be seen. Inside view of the bottom of the case, where the bracket is secured: I used black nylock nuts and black washers to keep it looking original. Drilled two more holes at the top of the case and secured the bracket using rivnuts and some more low-profile bolts, with black washers. These were needed to make sure the case top fitted without snagging.
    Made some custom-length SATA power cables for the HDDs to keep things tidy, then connected the HBA's SATA data cables. I forgot to take a picture with the cables tied together, but it looks tidy. Swapped the remaining PCI cable to a black one and all done! All sealed up; shame no one will ever see the hard work that went into the build! A fun project though, so worth it for me!
    ________________________________________________________________________
    Well done for making it to the end of this post! I'm pleased with how the build turned out. My old server used to average 190W power draw; this one uses 110W. To be honest, I was hoping for a little lower, so I need to do some troubleshooting when I have some time. I think the HBA is stopping the system from getting to lower C-states, so I may swap it out for an ASM1064-based card in the future and check things out with powertop.
    1 point
  6. Data Source: Type: JSON API
    Data Source Config: my server is on 192.168.2.254 and the port of the API is 3005, so the URL must be http://192.168.2.254:3005/api/getServers
    Grafana Panel Config for Docker Containers: Panel https://grafana.com/grafana/plugins/dalvany-image-panel/ (use your own IP!)
        $.['servers']['192.168.2.254']['docker']['details']['containers'][*].name
        $.['servers']['192.168.2.254']['docker']['details']['containers'][*].imageUrl
        $.['servers']['192.168.2.254']['docker']['details']['containers'][*].status
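    To sanity-check the endpoint before wiring up the panel, a quick sketch with curl and jq (the jq path mirrors the JSONPath expressions above and assumes the response has the same layout):
        # assumes the response shape matches the JSONPath expressions above
        curl -s http://192.168.2.254:3005/api/getServers \
          | jq '.servers["192.168.2.254"].docker.details.containers[] | {name, imageUrl, status}'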
    1 point
  7. TSV stands for Tab Separated Value. A TSV file is a flat file which uses the Tab character to delimit data and reports one time-series per line. CSV stands for Comma Separated Value. A CSV file is a flat file which uses the comma (,) character to delimit data and reports one observation per line. https://wikis.ec.europa.eu/display/EUROSTATHELP/Which+are+the+available+formats#:~:text=TSV stands for Tab Separated,reports one observation per line. So: yes, it is TSV.
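    For illustration (not from the linked page), a naive TSV-to-CSV conversion with awk, assuming the data contains no embedded commas or quoting; the filenames are just examples:
        # swap the tab delimiter for a comma (no quoting handling; filenames are examples)
        awk 'BEGIN { FS = "\t"; OFS = "," } { $1 = $1; print }' data.tsv > data.csv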
    1 point
  8. I think this is an important point in all this discussion. Limetech did not have all the details of the new pricing and license structure set in stone before they felt the need to announce early because of all the hand wringing sparked by a Reddit thread. Some of the details, as indicated by Tom, are still being discussed. Not knowing what new license and upgrade pricing will look like, I opted to go with the "sure thing" on one of my Plus licenses and upgraded it to Pro at current pricing. I still have one Plus license left (and two Pro), which I will likely never need to upgrade as it is a small special-purpose server currently using only 5 of the 12 devices the license allows.
    1 point
  9. Your "problem" is that you want to copy stuff (ACLs) that UNIX does not honor. The target file is not capable to attach Window's security settings to that file. That is pretty much normal, the SAMBA server on UNRAID uses it's own security settings (that you can control from the GUI), they cannot be overwritten from the LAN. So, change your robocopy command line, NOT to include security stuff (for instance use /copy:DAT not /copy:DATS, not /COPYALL and no /SEC too) Keep in mind that UNRAID and Windows use different users. So the windows security for user X has no meaning to linux user X.)
    1 point
  10. And set the Mover action appropriately, depending on whether you are moving to the array before changing the cache, or moving back to the cache after changing it.
    1 point
  11. Thanks gottoesplosivo, I did the postgres update prior to updating immich and it went as smooth as butter.
    1 point
  12. That is the smaller sibling of the card I use. And just so it doesn't stand uncorrected: that is a SAS card, not a SATA card. My card works for me; it ran without problems until recently, when I broke my system with a careless rebuild. Fundamentally, the Series 7 from what used to be Adaptec is a good series, and its firmware can also be switched to a plain JBOD mode (= IT mode). Yes, every now and then Unraid reports that it is disabling the (write) cache, but for me that is fixed with one command per drive at system start in the go file. Note: the controller (like all SAS controllers) draws quite a bit of power, and the heatsink hints at it: some airflow there is mandatory; pure convection might be a bit too little. Unfortunately I don't know which motherboard you have, which slots are free, etc., but as a rule SATA controllers are more frugal, and 16 SATA ports can often be had more efficiently with standard SATA controllers and onboard ports.
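    The exact go-file command isn't quoted in the post; one common way to re-enable a drive's write cache at boot (an assumption, not necessarily what is used here) is hdparm:
        # in /boot/config/go -- device name is illustrative, repeat per affected drive
        hdparm -W 1 /dev/sdb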
    1 point
  13. Nothing obvious. Are you doing this from another computer or from a monitor and keyboard attached to your server? If another computer, what browser? Adblockers or anything else that might interfere?
    1 point
  14. Without rebooting from that condition, attach diagnostics to your NEXT post in this thread.
    1 point
  15. Delete the config/shares/system.cfg file from the flash drive. Probably not necessary, but I would suggest rebooting. Since it appears you got corruption on the flash drive, you may want to plug it into a PC/Mac and run chkdsk on it. When you now go to the Shares tab it should have default settings; you need to change anything there that you do not want to be the default value.
    1 point
  16. óhaòeExportNFSFsid="p# It is corrupt, delete that file and a new share cfg will be automatically created, but note that it will use the default options, so make any adjustments if needed
    1 point
  17. Thank you for the fix and the fast update. No issues after updating the docker.
    1 point
  18. Attach Diagnostics to your NEXT post in this thread.
    1 point
  19. I concur, I did this a year ago and it worked flawlessly. Just make sure that all files were indeed moved to the array before replacing your cache drive.
    1 point
  20. If I remember correctly, roughly 2.5GbE LAN. Which bottleneck? Writing, the array with parity by default only reaches between 33 and 50% of what the individual disk could manage at that point (leaving reconstruct write aside). If you use a sufficiently large SSD cache (with a suitably fast SSD) and write to or read from it over the network/SMB, then with your 2.5GbE about 280 MB/s writing to the SSD cache is possible and should also be achieved (2.5 Gbit/s is roughly 312 MB/s raw, so around 280 MB/s after protocol overhead). That the mover then only writes from the cache to the array at the array's usual speed should be obvious. If the cache SSD copes poorly with this double load, the mover process can even negatively affect the availability/speed of data arriving from the network at the same time. (That a generous RAM cache can also help here is something I'll leave out in detail.) In that case the only advice, again, is: use a big enough cache and run the mover less often. I generally recommend a fairly large SSD cache precisely to prevent such issues (and because of the higher TBW and performance figures of the larger SSDs). It's no accident that the intended workflow is to work on the cache during the day and have the mover kick in only at night, when you don't need the NAS anyway (because you're lying in bed dreaming of cheap 32TB NVMe or U.2 SSDs at 100 euros apiece) 😁
    1 point
  21. Parity in Auto / Standard mode ... have you tried that? https://docs.unraid.net/de/unraid-os/manual/storage-management/#turbo-write-mode
    1 point
  22. I don't think @DataCollector meant it that way, and I don't read it that way either. If you are short on ports and working with expansion cards, then connect the fast devices (e.g. SSDs) primarily to the onboard ports and the somewhat slower ones (HDDs) to the add-on card. I think this was about priority rather than exclusion, at least that's how I read it. The onboard ports are usually fully connected and less critical in terms of speed and throughput.
    1 point
  23. I finally find out where the right documentation was. Got my automatic flash back up downloaded and reinstall on a new (temporary) flash. This is my first time doing this, I will see if I discover new issue after this. Thanks for the help.
    1 point
  24. I have that disabled even now. Initially I had it set to the 90, but I would see it hit 100 sometimes. Now the cooler has 50% more cooling headroom but it’s still hot. I had always planned to undervolt a little. I just went i9 cause I wanted a few more cores and maybe swap it with my i7 desktop if needed. But dang it’s hot.
    1 point
  25. Has anyone with a 12th or 13th gen CPU done anything like this to control process placement across efficiency/performance cores? I notice Unraid by default doesn't seem to be distributing tasks in a manner geared for low power. https://superuser.com/questions/1758813/how-do-i-emulate-intel-thread-director-behaviour-in-linux-not-supporting-it-yet
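    For experimenting, a rough sketch (the CPU range is a placeholder and differs per chip; on hybrid Intel CPUs the sysfs nodes below usually list which logical CPUs are P- and E-cores, and the task path is hypothetical):
        # which logical CPUs are performance vs efficiency cores (hybrid Intel)
        cat /sys/devices/cpu_core/cpus
        cat /sys/devices/cpu_atom/cpus
        # pin a background job to the E-cores (range and command are illustrative)
        taskset -c 16-23 /usr/local/bin/some_background_task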
    1 point
  26. Swapped to LSI card and problems are gone. Full 200+MB/s sync and formatting worked way faster too. I wouldn't recommend anyone run an Adaptec card after my experience.
    1 point
  27. Sorry, I found the reason. I found the appdata folder in Array Devices. I deleted it and that solved it.
    1 point
  28. 640K should be enough for anybody 😁
    1 point
  29. Hi 👋 I am using Nextcloud-ffmpeg and I do not spot any errors/warnings that, in my opinion, I should be worried about. I followed the instructions from container support, but I think you are already aware of that. So what are those errors you mentioned?
    1 point
  30. I can passthrough with my HBAs, but I think the trick would be only passing through the drives I'm going to offline and degrade from the current array. Thanks again!
    1 point
  31. Do you want to keep the data on the disk that is currently assigned to the shared pool?
    1 point
  32. The only difference between eSATA and SATA is the plug on one end of the cable. There are externally powered eSATA enclosures, and eSATA brackets you can use to bring an eSATA connection out from a SATA port, e.g. the StarTech.com 2 Port SATA to eSATA Slot Plate Bracket.
    1 point
  33. You've got emhttp running twice in the go file:
        #!/bin/bash
        # Start the Management Utility
        /usr/local/sbin/emhttp &
        /usr/local/sbin/emhttp &
        . . .
    Does it work if you remove one of those lines?
    1 point
  34. That's odd.... if you can post Tailscale diagnostics from both servers, that would be helpful. (There's a diagnostics button in the Tailscale plugin settings -- use that, not the normal system diagnostics.)
    1 point
  35. If it's a one-time thing you can ignore it; if it keeps happening, try limiting the RAM for VMs and/or Docker containers further. The problem is usually not just about not enough RAM but more about fragmented RAM. Alternatively, a small swap file on disk might help; you can use the swapfile plugin: https://forums.unraid.net/topic/109342-plugin-swapfile-for-691/
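    If you'd rather not use the plugin, a minimal manual sketch of the same idea (assuming an XFS-formatted pool mounted at /mnt/cache; the size and path are illustrative, and this won't persist across reboots unless you re-run it, which is part of what the plugin handles for you):
        # create and enable a 4 GiB swap file (path/size are examples)
        dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=4096
        chmod 600 /mnt/cache/swapfile
        mkswap /mnt/cache/swapfile
        swapon /mnt/cache/swapfile
        free -h   # confirm the swap shows up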
    1 point
  36. I think this is fine, just don't announce you're being acquired by Broadcom next week!!
    1 point
  37. One of the main reasons I suggest users upgrade is so the community can support them. We can't keep running old versions just so we can help people who won't upgrade.
    1 point
  38. I've corrected it. It isn't unambiguous in English either, though, since it sounds more like a status indication than the name of a button that triggers an action.
    1 point
  39. Exactly, I could have said that straight away: ignore the readme. The readme is copied/forked from the original repository.
    1 point
  40. There is not currently, as this is my personal preview build, and I still need to sanitize the code. If enough people are interested, and when I have time, I'll put together the release notes and post it as an official release.
    1 point
  41. Alright, everyone. Here is the UUD 1.7 preview. It looks pretty different from 1.6, and since I made this version, I haven't really changed it much. It is the most fine-tuned and refined version that I have developed, and there isn't much more that I need for me personally. I'm pretty pleased with the way it turned out. Let me know your thoughts and/or if you have any questions! @SpencerJ ULTIMATE UNRAID DASHBOARD Version 1.7 (Click Each Image for 4K High Resolution)
    1 point
  42. Fixed it myself. In case this helps others, here's my template, though you'd want to verify the network type and paths match what you want.
        <?xml version="1.0"?>
        <Container version="2">
          <Name>NextPVR</Name>
          <Repository>nextpvr/nextpvr_amd64:stable</Repository>
          <Registry>https://hub.docker.com/r/nextpvr/nextpvr_amd64</Registry>
          <Network/>
          <MyIP/>
          <Shell>bash</Shell>
          <Privileged>false</Privileged>
          <Support/>
          <Project/>
          <Overview>NextPVR TV recording and video player software.</Overview>
          <Category>MediaApp:Video</Category>
          <WebUI>http://[IP]:[PORT:8866]/index.html</WebUI>
          <TemplateURL/>
          <Icon>https://hub.docker.com/r/nextpvr/nextpvr_amd64</Icon>
          <ExtraParams>--security-opt=no-new-privileges</ExtraParams>
          <PostArgs/>
          <CPUset/>
          <DateInstalled>1695046806</DateInstalled>
          <DonateText/>
          <DonateLink/>
          <Requires/>
          <Config Name="Recordings" Target="/recordings" Default="" Mode="rw" Description="Recordings Folder" Type="Path" Display="always" Required="false" Mask="false">/mnt/user/media/recordings</Config>
          <Config Name="Buffer" Target="/buffer" Default="" Mode="rw" Description="Live TV Folder" Type="Path" Display="always" Required="false" Mask="false">/mnt/user/appdata/nextpvr/buffer</Config>
          <Config Name="Configuration" Target="/config" Default="" Mode="rw" Description="Configuration Path" Type="Path" Display="always" Required="false" Mask="false">/mnt/user/appdata/nextpvr/config/</Config>
        </Container>
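    If it helps, Unraid picks up user templates from the flash drive; a hedged example of dropping the saved XML in place (the filename is arbitrary, and the path is where user templates are normally stored as far as I know):
        # copy the saved template where the Docker page lists user templates
        cp NextPVR.xml /boot/config/plugins/dockerMan/templates-user/my-NextPVR.xml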
    1 point
  43. You're on 6.12, click the green padlock on the top right row of icons to unlock reordering.
    1 point
  44. Should be available in the next few hours in the CA App.
    1 point
  45. Settings, display settings - color codes in hex
    1 point
  46. I have a special use case which may apply to few unRAID users, but I will document it here in case it is useful to others. I want to write my syslog to a location other than those officially supported by the syslog server. Although I am using a card reader, this should work with any Unassigned Devices location.
    Caveat: Officially, unRAID does not support card readers as boot flash devices. Don't tell Limetech, but I and others have been doing this for years 😁. In fact, here is a forum discussion of card readers that have unique GUIDs and work with unRAID. This solution involves using an available card slot in the card reader and a flash card as an unassigned device. Theoretically, this will work with any card reader with an empty slot, not just the particular Kingston card reader I am using.
    By default, the syslog must be written to an existing share on unRAID, with the option to mirror it to the boot flash drive if you need to do some troubleshooting. Alternatively, you can write it to a remote syslog server. It is not recommended to have the syslog mirrored to the boot flash drive constantly, as that could result in a lot of writes to the flash drive, and you don't want unnecessary wear and tear on your boot flash. If your server will not boot, writing to a share won't do you much good, as you will not be able to access the syslog. With some tweaking of the config files, you can get the syslog mirrored to an unassigned device. This has already been documented.
    What I want to accomplish: use the unused Micro SD card slot in my Kingston MobileLite G2 card reader as a location to which to write the syslog. The SD card slot is currently used as the flash drive from which unRAID boots. If I can do this, I never need to mirror the syslog to the boot flash drive. NOTE: this card reader is no longer available (perhaps on eBay), but it has a unique GUID and is USB 2.0, which is preferred for unRAID. I also have the MobileLite G3 USB 3.0 version in another server and it works great. Other versions of this card reader do not have a unique GUID. Keep in mind, this flash drive will count against the drive total in your unRAID license, as it will be an Unassigned Device that is attached at the time the array starts. It counts even if the Micro SD slot is empty! Might as well use it for something.
    I have been using the Kingston reader as my boot flash drive for 10 years. It's great, as the GUID is associated with the reader and I can swap flash cards to my heart's content to test different unRAID configurations and troubleshoot. If the flash card fails, I just copy a backup onto a new flash card with no need to license a new flash drive. The Micro SD slot has always been unused and has shown up as an unassigned device.
    Steps to make this work:
    1. Format a Micro SD card with an NTFS partition (I gave the card the label "SYSLOG"). You could use FAT32, like the boot drive, as well. I had a spare 32GB Micro SD card and chose NTFS as it is supported by Unassigned Devices and, like FAT32, makes the card directly readable in Windows. If the server won't boot, I can just remove the card from the reader and view the syslog in Windows or post it to the forums for help. Of course, you could also do this with the boot flash drive if you mirrored the syslog there, but I want to avoid that.
    2. Put the card in the reader and mount it as an Unassigned Device in unRAID. As seen below, the boot flash is in the SD slot and I now have the SYSLOG card in the Micro SD slot. (Yes, the flash drive was public — not recommended — while I did this.)
    3. Edit the Unassigned Device (UD) settings to enable automount and sharing.
    4. Modify the syslog config files on the boot flash drive to point to the UD location (a command-line sketch of these edits follows below). Change the server_folder line in /boot/config/rsyslog.cfg to point to the UD location: server_folder="/mnt/disks/SYSLOG/". In rsyslog.conf, change the $template remote variable to point to the UD location: $template remote,"/mnt/disks/SYSLOG//syslog-%FROMHOST-IP%.log"
    5. If you want to preserve your existing syslog info, copy syslog-[ip address of server].log from the current location to the UD location.
    6. Disable and then enable the syslog server to restart it and force it to read the new config. The local syslog folder should now show <custom>.
    If you want to test that syslog is now writing to the new location, log out of the GUI and log back in. This should be recorded in the syslog in the UD location.
    UPDATE: To make all of the above work properly, I found it necessary to run a user script that restarts the syslog server after the array starts on a server reboot. This script contains the following:
        #!/bin/bash
        /etc/rc.d/rc.rsyslogd restart
    Setting it up to run on array start makes it all automatic, so you can set it and forget it. Celebrate your success in writing the syslog to a removable and replaceable UD location that does not hammer your boot flash drive and does not use an unRAID share.
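    If you prefer to make the step 4 edits from a shell, a rough sketch (assuming both files live under /boot/config/ on the flash drive, as the post implies; back them up first):
        # point the syslog server folder at the Unassigned Device
        sed -i 's|^server_folder=.*|server_folder="/mnt/disks/SYSLOG/"|' /boot/config/rsyslog.cfg
        # point the rsyslog template at the same location
        sed -i 's|^\$template remote,.*|$template remote,"/mnt/disks/SYSLOG//syslog-%FROMHOST-IP%.log"|' /boot/config/rsyslog.conf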
    1 point
  47. Try running Tools - New Permissions against the particular share. If in doubt about what you're doing, then run Tools - Docker Safe New Permissions if you have Fix Common Problems installed
    1 point