Leaderboard

Popular Content

Showing content with the highest reputation since 01/03/23 in all areas

  1. Hello, I came across a small issue with the version status of an image that is apparently in OCI format. Unraid wasn't able to fetch the manifest information because of wrong headers, so checking for updates showed "Not available" instead. The Docker image is the linuxGSM container and the fix is really simple. This is for Unraid version 6.11.5, but it will work for older versions too if you find the corresponding line in the file. SSH into the Unraid server and, in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php, change line 448 to this: $header = ['Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json,application/vnd.oci.image.index.v1+json']; The version check worked after that. I suppose this change will be reverted upon server restart, but it would be nice if you could include it in the next Unraid update 😊 Thanks
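As a one-liner, the same edit could be scripted like this (a sketch: it assumes line 448 is still the header assignment on your build, and since /usr/local/emhttp lives in RAM on Unraid the change won't survive a reboot):

```bash
# Overwrite line 448 of DockerClient.php with the OCI-aware Accept header.
# Assumption: line 448 is the $header assignment on this Unraid version; check first.
FILE=/usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php
sed -n '448p' "$FILE"   # eyeball the current line before replacing it
sed -i "448s#.*#\$header = ['Accept: application/vnd.docker.distribution.manifest.list.v2+json,application/vnd.docker.distribution.manifest.v2+json,application/vnd.oci.image.index.v1+json'];#" "$FILE"
```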
    32 points
  2. Our plan is to release a public beta soon(tm) which includes OpenZFS support and changes which plugin authors need to be aware of. Posting this now as a sneak peek; more detail will follow. That said.... ZFS support: this will let you create a named pool similar to how you can create named btrfs pools today. You will have a choice of various ZFS topologies depending on how many devices are in the pool. We will support single devices as well as 2-, 3-, and 4-way mirrors, plus groups of such mirrors (a.k.a. raid10). We will also support groups of raidz1/raidz2/raidz3, and expansion of pools by adding an additional vdev of the same type and width to an existing pool (see the sketch below for what these topologies look like in stock OpenZFS terms). We will also support raid0. It's looking like in the first release we will support replacing only a single device of a pool at a time, even if the redundancy would support replacing 2 or 3 at a time; that support will come later. Initially we'll also have a semi-manual way of limiting ARC memory usage. Finally, a future release will permit adding hot spares and special vdevs such as L2ARC, LOG, etc., and draid support. webGUI change: there are several new features, but the main change for plugin authors to note is that we have upgraded to PHP v8.2 and will be turning on all errors, warnings, and notices. This may result in some plugins not operating correctly and/or spewing a bunch of warning text. More on this later... By "public release" we mean that it will appear on the 'next' branch but with a '-beta' suffix. This means only run it on test servers, since there may be data integrity issues and config tweaks, though we're not anticipating any. Once any initial issues have been sorted, we'll release -rc1.
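For readers less familiar with the ZFS topology names above, here is roughly what those layouts mean in stock OpenZFS command terms (illustrative only; Unraid will build these through the webGUI, and the device names are placeholders):

```bash
# 2-way mirror (a single vdev)
zpool create tank mirror sdb sdc
# group of 2-way mirrors, a.k.a. raid10
zpool create tank mirror sdb sdc mirror sdd sde
# raidz2: six devices, any two may fail
zpool create tank raidz2 sdb sdc sdd sde sdf sdg
# expansion: add another vdev of the same type and width to an existing pool
zpool add tank raidz2 sdh sdi sdj sdk sdl sdm
# semi-manual ARC limit, e.g. 8 GiB (a runtime OpenZFS module parameter)
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```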
    25 points
  3. Some clarification... Currently: We have a single "unRAID" array(*) and multiple user-defined "cache pools", or simply "pools". Data devices in the unRAID array can be formatted with xfs, btrfs, or reiserfs file system. A pool can consist of a single slot, in which case you can select xfs or btrfs as the file system. Multi-slot pools can only be btrfs. What's unique about btrfs is that you can have a "raid-1" with an odd number of devices. With 6.12 release: You will be able to select zfs as file system type for single unRAID array data disks. Sure, as a single device lots of zfs redundancy features don't exist, but it can be a target for "zfs receive", and it can utilize compression and snapshots. You will be able to select zfs as the file system for a pool. As mentioned earlier you will be able to configure mirrors, raidz's and groups of those. With future release: The "pool" concept will be generalized. Instead of having an "unRAID" array, you can create a pool and designate it as an "unRAID" pool. Hence you could have unRAID pools, btrfs pools, zfs pools. Of course individual devices within an unRAID pool have their own file system type. (BTW we could add ext4 but no one has really asked for that). Shares will have the concept of "primary" storage and "cache" storage. Presumably you would assign an unRAID pool as primary storage for a share, and maybe a btrfs pool for cache storage. The 'mover' would then periodically move files from cache to primary. You could also designate maybe a 12-device zfs pool as primary and 2-device pool as cache, though there are other reasons you might not do that.... * note: we use the term "unRAID" to refer to the specific data organization of an array of devices (like RAID-1, RAID-5, etc). We use "Unraid" to refer to the OS itself.
    18 points
  4. It is HIGHLY recommended NOT to patch your files manually; use the plugin instead. Patching manually means that if/when you update the OS to 6.12, any manual patches that are applied automatically can potentially interfere with the OS and be a big pain to troubleshoot.
    15 points
  5. This is what I did to fix it (shell equivalent in the sketch below):
      1. Stop the Swag docker
      2. Go to the \\<server>\appdata\swag\nginx folder
      3. Rename the original nginx.conf to nginx.conf.old
      4. Copy nginx.conf.sample to nginx.conf
      5. Rename ssl.conf to ssl.conf.old
      6. Copy ssl.conf.sample to ssl.conf
      7. Restart the Swag docker
      This worked for me.
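The same steps from a terminal, for anyone SSHed in (a sketch assuming the default appdata share path and container name; adjust if your SWAG config lives elsewhere):

```bash
docker stop swag                      # container name may differ on your system
cd /mnt/user/appdata/swag/nginx
mv nginx.conf nginx.conf.old && cp nginx.conf.sample nginx.conf
mv ssl.conf ssl.conf.old && cp ssl.conf.sample ssl.conf
docker start swag
```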
    12 points
  6. Dear Unraid Community, I wanted to take a moment to wish farewell to Eric Schultz @eschultz, who is leaving the company for other adventures. Eric has been an invaluable member of our team and played a critical role in the growth and success of the company and Unraid. He has always been willing to go above and beyond to ensure that our technology and systems run smoothly and efficiently. On behalf of everyone at the company, I would like to thank Eric for all his hard work, dedication, and contributions to the team and company. All the best, Spencer
    12 points
  7. Welcome to the friendliest server community around! This forum is where our users can collaborate, learn, and provide input on new developments from Lime Technology and its partners. We have a strong team of community moderators, devs, and Lime Technology employees who strive to help as many people as possible. Participating in this forum means agreeing to the following community guidelines and rules, which must be agreed to and adhered to in order to use this forum. Moderators and Lime Technology staff will enforce the community guidelines at their discretion. Anyone who feels a posted message doesn't meet the community guidelines is encouraged to report the message immediately. As this is a manual process, please realize that it may take some time to remove, edit, or moderate particular messages.
      Rules and Community Guidelines. To ensure a safe, friendly, and productive forum, the following rules and guidelines apply:
      Be respectful. Respect your fellow users by keeping your tone positive and your comments constructive and courteous. Respect people's time and attention by providing complete information about your question or problem, including product name, model numbers, and/or server diagnostics if applicable.
      Be relevant. Make sure your contributions are relevant to this forum and to the specific category or board where you post. If you have a new question, start a new thread rather than interrupting an ongoing conversation.
      Remember this is mostly user-generated content. You'll find plenty of good advice here, but remember that your situation, configuration, or use case may vary from that of the individual sharing a solution. Some advice you find here may even be wrong. Apply the same good judgment here that you would apply to information anywhere on the Internet. The posted messages express the author's views, not necessarily the views of this forum, the moderators, or Lime Technology staff. As the forum administrators and Lime Technology staff can't actively monitor all posted messages, they are not responsible for the content posted by users and do not warrant the accuracy, completeness, or usefulness of any information presented.
      Think before you post. You may not use, or allow others to use, your registration membership to post or transmit the following: content which is defamatory, abusive, vulgar, hateful, harassing, obscene, profane, sexually oriented, threatening, invasive of a person's privacy, adult material, or otherwise in violation of any international, US, or state-level laws; this includes text, information, images, videos, signatures, and avatars. Also prohibited: "rants", "slams", or legal threats against Lime Technology, another company, or any person; hyperlinks that lead to sites that violate any of the forum rules; any copyrighted material unless you own the copyright or have written consent from the owner of the copyrighted material; and spam, advertisements, chain letters, pyramid schemes, and solicitations. (Note: we have an Unraid Marketplace Board that includes a Good Deals section and a Buy, Sell, Trade section, and they have their own rules.) You remain solely responsible for the content of your posted messages. Furthermore, you agree to indemnify and hold harmless the owners of this forum, any websites related to this forum, its staff, and its subsidiaries.
      The owner of this forum also reserves the right to reveal your identity (or any other related information collected on this service) in case of a formal complaint or legal action arising from any situation caused by your use of this forum. Please note: when you post, your IP address is recorded. Repeated rule violations or egregious breaches of the rules will result in accounts being restricted and/or banned at the IP level. The forum software places a cookie, a text file containing bits of information (such as your username and password), in your browser's cache. Cookies are ONLY used to keep you logged in/out; the software does not collect or send any other form of information to your computer. Lime Technology may, at its sole discretion, modify these Rules of Participation from time to time. For Unraid OS software, website, and other policies, please see our policies page! If you have any questions, please contact support.
    12 points
  8. Confirming this worked for me too. Not sure I needed to replace both, but I did anyway, and Swag and Nextcloud are both back up and running. For noobs like me, here's what I did: 1. Stop the Swag container 2. Go to the /mnt/appdata/swag folder 3. Rename your ssl.conf to ssl.conf.old and nginx.conf to nginx.conf.old (just in case we need to restore them) 4. Copy ssl.conf.sample to ssl.conf and nginx.conf.sample to nginx.conf 5. Start the container and you should be good.
    11 points
  9. I have included your update for the next Unraid version. Thanks
    9 points
  10. I replaced both the ssl.conf and nginx.conf files with the sample ones to update them, since I did not make any custom modifications to either of those, and this resolved my issue.
    9 points
  11. 8 points
  12. That's the joke. Similar to Duke Nukem Forever, Soon™️ has become the lighthearted way of dealing with the seemingly interminable delays between releases. Soon™️ has no time scale attached; it's some date in the future with no way to make a prediction. Even the developers don't have a hard timeline; "when it's done" is the official answer. 1 month is NOTHING on the historical scale of Soon™️. That said, it could happen any time now. I'm beginning to think the Unraid community is unwittingly participating in a variable interval reinforcement schedule study: https://open.lib.umn.edu/intropsyc/chapter/7-2-changing-behavior-through-reinforcement-and-punishment-operant-conditioning/#stangor-ch07_s02_s02_t01
    6 points
  13. More clarifications: in Unraid OS only user-defined pools can be configured as multi-device ZFS pools. You can select ZFS as the file system type for an unRAID array disk, but it will always be just a single device. The best way to think of this: anywhere you can select btrfs you can also select zfs, including 'zfs - encrypted', which does not use ZFS built-in encryption but simply LUKS device encryption (the distinction is sketched below). Also note that ZFS hard drive pools will require all devices in a pool to be 'spun up' during use. IMO where ZFS will shine is in large flash-based pools (SSD, NVMe, etc.).
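To illustrate that encryption distinction, here is a sketch in plain OpenZFS/LUKS terms (not the commands Unraid runs internally; device and pool names are placeholders):

```bash
# Native OpenZFS encryption is a per-dataset property:
zfs create -o encryption=on -o keyformat=passphrase tank/secure

# 'zfs - encrypted' in Unraid instead layers LUKS under the device,
# then puts an ordinary (unencrypted) zfs on top of the mapper device:
cryptsetup luksFormat /dev/sdX1
cryptsetup open /dev/sdX1 crypt_sdX1
zpool create tank /dev/mapper/crypt_sdX1
```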
    6 points
  14. When It's Done™️ 😅
    5 points
  15. Nope, it's down, and will be down for a couple of hours until I can see what the hell happened. Currently AFK.
    5 points
  16. We just moved to TrueNAS Core (virtualised on Unraid) in September to support our bandwidth needs... Looking like it won't be long before we move back (Core sucks for the unfamiliar). As a side note, having support for 30+ drives would be nice for us. Our ZFS pool is 24 drives and we have a JBOD case to add another 36 drives over the next 12 months. We can manage otherwise though.
    5 points
  17. Shut down the PC. 😁
    5 points
  18. The 6.12 beta is on kernel 6.0.15 as I type this. OpenZFS is not yet listed as good to go on kernel 6.1, though it looks like that is imminent, at which time we'll upgrade to 6.1.
    5 points
  19. @SpencerJ sent me a homing pigeon with the following message "VmVyc2lvbiA2LjEyLjAtYmV0YTY="
    4 points
  20. Uncast Episode XIV: Return of the Uncast with Bob from RetroRGB. Season 2 of the Uncast pod is back and better than ever, with new host Ed Rawlings, aka @SpaceInvaderOne 👾 On this episode, Bob from RetroRGB joins the Uncast to talk about all things retro gaming, his discovery and use cases for Unraid, a deep dive into RetroNAS, and much more! Check out the show links below to connect with Bob or learn more about specific projects discussed.
      Show Topics with ~Timestamps:
      Intro from Ed, aka Spaceinvader One 👾
      ~1:20: Listener participation on the pod with speakpipe.com/uncast. Speakpipe will allow you to ask questions to Ed about Unraid, ask questions directly to guests, and more.
      ~2:50: Upcoming guests
      ~3:30: Bob from RetroRGB joins to talk about Unraid vs. prebuilt NAS solutions, use cases, and RetroNAS VMs.
      ~6:30: Unraid on a laptop?
      ~9:30: Array protection, data recovery, New Configs, new hardware, and client swapping.
      ~11:50: Discovering Unraid, VMs, capture cards, user error.
      ~17:30: VMs, Thunderbolt passthrough issues, Thunderbolt controllers, Intel vs. AMD, motherboard hardware, and BIOS issues/tips.
      ~21:30: All about Bob and RetroRGB.
      ~23:00: Retro games on modern TVs, hardware, and platforms.
      ~24:34: MiSTer FPGA project
      ~27:15: RetroNAS
      ~30:30: RetroNAS security: creating VLANs, best practices, and networking tips.
      ~37:15: Using Virtiofs with RetroNAS on Unraid, VMs vs. Docker, and streamlining the RetroNAS install process.
      ~43:13: Everdrive console cartridges and optical drive emulators.
      ~46:50: Realistic expectations and advice for new retro gaming enthusiasts.
      ~51:05: MiSTer setup how-tos and retro gaming community demographics.
      ~55:45: Retro gaming, CRTs, emulation scaling, wheeled retro gaming setups, and how to test components and avoid hardware scams.
      ~1:05: Console switches, scalers, and other setup equipment. In the end, it all comes down to personal choice.
      Show Links:
      Connect and support Bob: https://retrorgb.link/bob
      Send in your Uncast questions, comments, and good vibes: https://www.speakpipe.com/uncast
      Spaceinvader One interview on RetroRGB
      MiSTer FPGA Hardware: https://www.retrorgb.com/mister.html
      RetroNAS info: https://www.retrorgb.com/introducing-retronas.html
      Other ways to support and connect with the Uncast: Subscribe/Support Spaceinvader One on YouTube: https://www.youtube.com/@uncastpod
    4 points
  21. Thanks so much! Got Nextcloud working again. What I did after reading the Swag support thread and after stopping Swag: 1. Went to my Swag folder in /mnt/appdata 2. Went to the nginx sub-folder 3. Renamed ssl.conf to ssl.conf.old and nginx.conf to nginx.conf.old (in case something went wrong) 4. Made a copy of ssl.conf.sample and named the new file ssl.conf 5. Made a copy of nginx.conf.sample and named the new file nginx.conf. 6. Restarted Swag. NOTE: I didn't have any customisations in the ssl.conf and nginx.conf files. (I can't claim any credit for this - all taken from the Swag support thread)
    4 points
  22. I updated my docker and it didn't come back up properly. I restarted it. The log now just shows:
      [migrations] started
      [migrations] no migrations found
      -------------------------------------
      [linuxserver.io ASCII banner]
      Brought to you by linuxserver.io
      -------------------------------------
      To support LSIO projects visit:
      https://www.linuxserver.io/donate/
      -------------------------------------
      GID/UID
      -------------------------------------
      User uid: 99
      User gid: 100
      -------------------------------------
      **** Server already claimed ****
      No update required
      [custom-init] No custom files found, skipping...
      Starting Plex Media Server. . . (you can ignore the libusb_init error)
      [ls.io-init] done.
      When I go to https://ipadress:32400/web/index.html I just get:
      This XML file does not appear to have any style information associated with it. The document tree is shown below.
      <Response code="503" title="Maintenance" status="Plex Media Server is currently running database migrations."/>
      Any ideas where to go from here?
    4 points
  23. Ok- I *THINK* this should fix things. For some reason, rich text pasting was enabled (maybe a system change from a recent forum update?) but I changed it to "Paste as Plain text" which I believe was the default before. Please let me know if this issue persists.
    4 points
  24. The Unraid webGUI is actually open source. Since you found the solution, if you are interested you can submit a PR here: https://github.com/limetech/webgui
    4 points
  25. Hey everyone, head over to the Plugins tab and check for updates. My Servers plugin version 2023.01.23.1223 is now available, which should resolve many of the issues folks are reporting. This release includes major architectural changes that will greatly improve the stability of My Servers; we highly encourage everyone to update.
      ## 2023.01.23.1223
      ### This version resolves:
      - My Servers client (Unraid API) not reliably connecting to My Servers Cloud on some systems
      - Server name not being shown in the upper right corner of the webgui
      - Cryptic "Unexpected Token" messages when using a misconfigured URL
      - DNS checks causing delays during boot if the network wasn't available
      - Some flash backup Permission Denied errors
      ### This version adds:
      - Internal changes to greatly improve connection stability to My Servers Cloud
      - More efficient internal plugin state tracking for reduced flash writes
      - PHP 8 compatibility for upcoming Unraid OS 6.12
    4 points
  26. First month of 2023 is almost over and this still does not work. At this point it's just freaking ridiculous. Obviously nobody cares about fixing it. I used to be pretty enthusiastic about Unraid and used it a lot. That enthusiasm is now pretty much gone. Important fixes like this are ignored, obviously nobody cares, while other features like a native ZFS implementation, which people have requested for years, are still in the "maybe in a few years" category. Just use an old OS version or download some stuff from a random Chinese dude, because who the f cares anyways.... How is this still an issue after almost a year...
    4 points
  27. My nginx.conf had the include /etc/nginx/conf.d/*.conf; directive inside the http block. I moved it outside, to the beginning of the file, and all worked fine (see the sketch below).
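Roughly what that change looks like (an illustrative sketch, not a complete config; line positions will differ in your file):

```nginx
# before: stream.conf gets pulled in inside http { },
# where its "stream" block is not allowed
http {
    include /etc/nginx/conf.d/*.conf;
}

# after: the include sits at the top level, alongside http { },
# where a "stream" block is valid
include /etc/nginx/conf.d/*.conf;
http {
}
```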
    4 points
  28. Can't wait to try 6.12 with my 7 x 10TB HDDs; I just received 2 x 2TB NVMe drives for the ZFS cache too. I'm checking 4 times per day to see if it's out 😁 Thanks to the devs for their time.
    4 points
  29. So I also had this problem; for the time being I've reverted all the way back to Unraid 6.9.2, which exhibits none of these issues. I have both ESXi running as a guest on Unraid and a Windows 10 VM that runs VMware Workstation (where my vCenter is installed). I went through the trouble of spinning up a Windows 11 VM and testing compatibility in there as well. The primary behavior noticed is that the error message when running ESXi 7 and VMware Workstation 16.5 is along the lines of "vcpu0: invalid VMCB". I tested VMware Workstation 17 as well and got "AMD-V is supported by the platform, but is implemented in a way that is incompatible." After some searching, it turns out that pre-2011 AMD versions of AMD-V botched the VMCB flags and didn't include the proper virtualization parameters. My best guess at the moment is that the QEMU version in Unraid 6.11.x is for some reason implementing an extremely outdated version of AMD-V that is getting passed through... when they were doing it properly before. No amount of XML flags seems to fix the issue (a few host-side checks are sketched below). Can anyone chime in on QEMU regression changes?
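A few host-side checks that might help narrow down where nested AMD-V breaks (a diagnostic sketch; these only confirm what the host exposes, not why QEMU's CPU model changed between releases):

```bash
grep -c -w svm /proc/cpuinfo                  # how many host threads report AMD-V
cat /sys/module/kvm_amd/parameters/nested     # 1 or Y means kvm_amd allows nested SVM
qemu-system-x86_64 --version                  # compare across Unraid 6.9.2 and 6.11.x
```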
    4 points
  30. I never buy flash drives or other small electronics (chargers, etc.) from Amazon now. It doesn't matter what store or seller you choose; your odds of getting a fake are just too high. I've had fake USB drives, fake chargers, even fake water filters show up despite trying my best to buy from the official store on Amazon. They simply don't care and mix inventory, so fake and real units end up together. Now I go to Best Buy or Costco. I've never had any issues with anything I bought at a real store.
    4 points
  31. Since we are discussing massive changes in array handling, I'd like to submit a harebrained idea. Use the same address space that the preclear signature occupies, or something similar, to put a couple of ID kilobytes that would allow Unraid to recognize and ID drives that should participate in the classic unRAID parity array. If there is enough space, it could contain ID hashes of the rest of the drives in that set, so Unraid could easily determine which drives should be in which slots for a pool to have valid parity. That way a fresh Unraid install could prepopulate any detected unRAID pools. Maybe it could even do other pool types this way too. It would be really nice to be able to download a fresh Unraid install and have it instantly recognize all the drives.
    4 points
  32. This is exactly where we are headed! Can I recruit you to rewrite the wiki? (kidding, actually only somewhat kidding) It won't all happen in the 6.12 release.
    4 points
  33. Shut down the PC. Pulling the plug can cut consumption by another 0.1-5W...
    4 points
  34. I was curious, so I did a few benchmarks passing all 32 cores/threads to the VM.
      Forza Horizon 5, 1440p Extreme (no DLSS):
      VM, ReBAR off, 20 CPUs: 116 fps
      VM, ReBAR on, 20 CPUs: 129 fps
      VM, ReBAR on, 32 CPUs: 134 fps
      Bare metal, ReBAR on: 144 fps
      Cyberpunk, 1440p RT Ultra (DLSS):
      VM, ReBAR off, 20 CPUs: 81.07 fps
      VM, ReBAR on, 20 CPUs: 95.29 fps
      VM, ReBAR on, 32 CPUs: 98.26 fps
      Bare metal, ReBAR on: 102.21 fps
      That's pretty dang close to bare-metal performance with full Resizable BAR, given the extra overhead from Unraid and VFIO. Hitting 129 fps in the VM in Forza is amazing, when with the 7900 XTX I could never beat 114 fps with identical settings.
    4 points
  35. When available, ZFS will be an option for both the array and pools.
    4 points
  36. WWF5ISBCZXRhNyEhISEgTGVzIEdPT09PTyEhISEhISE=
    3 points
  37. Two of the most excellent, polite, and helpful humans I have never met. Thank you both for your great work on ZFS over the years.
    3 points
  38. You are absolutely correct. He took my manual build process and automated it so well that I have not had to think about it at all any more! He really took this plugin to another level, and now we just wait for the next Unraid release so we can deprecate it.
    3 points
  39. Based on my experience of what I've seen the Unraid team doing (behavioral thinking), I can actually provide some kind of an answer. Most of the betas go up to number 20 to 30 before an RC is published. They are at number 5 currently. I've never tracked the time associated with those, and I think it would be a false way to think about it; it depends on the features being implemented. If you are asking for a specific date, you are out of luck, so Soon™️.
    3 points
  40. Here is the 1.6 version. If you email SuperMicro support they will send it to you. A2SDi-4C-HLN4F.BIOS.1.6-A2SDICH1.zip
    3 points
  41. Writing to the array using an NVMe SSD as a cache drive is nice and serves well on LANs with 10Gbit/s and more. But once the Mover has done its job, read performance drops down to ridiculous drive speeds. What I'd like to see is another pool, designated as a read cache for all shares (or configurable, but it does not really matter):
      * If a file is requested, first check whether it is on the cache already.
        * If yes, check whether it is still recent (size / time of last write and so on).
          * If recent (the last cache write is younger than the file creation time), reading continues from the cache drive (exit here).
          * If not, delete the file from the cache SSD (no exit; continue with the next step as if the file had not been on the cache at all).
        * If no, check the free space of the cache to see whether the requested file would fit.
          * If it doesn't fit, but the cache COULD hold the file, delete the oldest file from the cache and redo the check (loop until enough space is freed up).
        * Read the file from the array, write it to the LAN, but also write it to the cache drive, recording the current time on the cache too.
      * If a file is closed and it came from the cache: update the "time of last write" on the cache. (This lets it "bubble up" to avoid early deletion when space is needed: often-used files stay on the cache longer, whereas files that were only asked for once are preferred for cleanup.)
      It's a fairly straightforward and simple approach (see the sketch after this list). The last part could be optimized by reading ahead and writing asynchronously, but with current LAN and SSD speeds it does not matter; the SSD is in any case faster than the LAN. This would not speed up the first access to a file, but the second and later accesses would be greatly improved. And if the designated read-cache SSD is large (like 2TB or more), a lot of files will fit before the first delete becomes necessary. This feature could be added at the high level of Unraid's vfs file-system overlay. (The cache disk itself is disposable: even if the content gets lost due to errors it does not matter, since it is just a copy, and it also needs no backup. So Unraid should not look for shares or allow creating folders on that designated cache SSD.)
      Update: yeah, I know, it will make file reading a bit slower (because of the additional write to the read cache), but this is almost not measurable. Reading from real disks is about 290MB/s under the best conditions; writing to SATA SSDs should be almost twice as fast, and writing to NVMe SSDs will be five or more times faster. So this really does not count.
      Update 2: I would like to add two config settings for fine-tuning: a) minimum file size: files smaller than this are never put on the cache (default 0); b) maximum file size: files larger than this are never put on the cache (default: not set, or 100M or so).
      Update 3: additionally, there could be a cron-driven "garbage collection" to free files from the cache that have not been accessed for a certain period of time (this should be a piece of cake: since the read/close updates the file time, it is always recent, and a simple find /mnt/readcache -atime -XXX -exec ... is enough for cleaning up).
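For concreteness, here is a minimal shell sketch of the lookup logic described above (hypothetical: /mnt/readcache does not exist in Unraid, serve_file is an invented helper, and a real implementation would live inside the vfs layer rather than a script):

```bash
CACHE=/mnt/readcache              # hypothetical dedicated read-cache pool
serve_file() {
    local rel="$1" src="/mnt/user/$1" hit="$CACHE/$1"
    if [ -f "$hit" ] && [ ! "$src" -nt "$hit" ]; then
        touch "$hit"              # bump mtime so hot files bubble up, as proposed
        cat "$hit"                # recent copy: serve straight from the cache
    else
        rm -f "$hit"              # stale or missing: drop and repopulate
        mkdir -p "$(dirname "$hit")"
        tee "$hit" < "$src"       # serve from the array while writing the cache copy
    fi
}
# cron-driven garbage collection, as in Update 3 (the age threshold is a placeholder):
find "$CACHE" -type f -atime +30 -delete
```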
    3 points
  42. Hi all. Just updated Swag and am now getting this: nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/conf.d/stream.conf:3. Does anyone know how I can solve this? Until then I cannot access anything from outside my network. Thanks.
    3 points
  43. Down the line, could there be a way to assign priority values for the mover, maybe 1-5 per share?
      1 = move files as normal (daily, or a low value the user defines)
      3 = move every 7 days
      4 = move monthly, or a value the user defines as high
      5 = skip mover unless space is needed
      We have the Mover Tuning plugin, but we could still use a little more wiggle room: downloaded media content stays on the drive as long as possible before moving to the array, while archive folders etc. are cached and then moved on to the array sooner. You could do it with custom scripts (a rough stand-in is sketched below), but I'm sure others would enjoy this too.
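Something like this per-share age map is what a custom-script stand-in might look like today (a sketch: the share names and ages are made up, and /mnt/user0 is assumed as the array-only view of a share on Unraid 6.x):

```bash
declare -A AGE_DAYS=( [downloads]=30 [archive]=2 )   # hypothetical per-share ages
for share in "${!AGE_DAYS[@]}"; do
    cd "/mnt/cache/$share" || continue
    # move files older than the share's threshold from the cache to the array
    find . -type f -mtime +"${AGE_DAYS[$share]}" -print0 |
        rsync -a --remove-source-files --files-from=- --from0 . "/mnt/user0/$share/"
done
```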
    3 points
  44. Not sure if it's just an issue for me or for everybody else as well. Since the zigbee2mqtt docker updated to 1.29.1, I am seeing the dreaded orange broken chain "not available" under version in the Docker tab?!? Does anybody else have this? Running Unraid version 6.11.5.
    3 points
  45. Is it possible to combine both ZFS and XFS in a single array? If not, will there be a tool/plugin to migrate existing XFS disks (which probably most Unraid users are using) to ZFS?
    3 points
  46. Awesome! But it would be great to have array sizes beyond the 30-drive limit, or multiple arrays.
    3 points
  47. I've done the same thing using an unassigned SSD. Just stop the containers, move the appdata folders over to the new location, update the appdata location in each container, and start the containers back up. As long as you keep the root path consistent, it's easy (see the sketch below).
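In shell terms the move might look like this (a sketch: the Unassigned Devices mount point and the appdata share path are assumptions, and each container's /config path still needs updating in its webGUI template):

```bash
docker stop $(docker ps -q)                          # stop all running containers
rsync -a /mnt/user/appdata/ /mnt/disks/ssd/appdata/  # copy appdata to the unassigned SSD
# repoint each container's appdata path to /mnt/disks/ssd/appdata/<app>, then:
docker start $(docker ps -aq)                        # starts every container; adjust to taste
```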
    3 points
  48. @b3rs3rk I've opened a PR on GitHub to fix the issue with AMD GPUs. They'll now be shown properly in the GPU Settings. https://github.com/b3rs3rk/gpustat-unraid/pull/50
    3 points