primeval_god

Community Developer

Posts posted by primeval_god

  1. 45 minutes ago, Thunderhead said:

    Maybe I could do without some new feature updates and MAYBE do without some security updates for a bit, but would my Plugins and Dockers still get updates from the Apps page?  Wouldn't that inevitably lead to Plugins and Dockers being incompatible with the older OS as time goes forward?

    Keep in mind that the "Apps Page" is actually a third-party plugin called Community Applications (though it is developed more closely with Limetech than most). It acts as a convenient way to discover and install containers and plugins, but it is not the only way to do so.

     

    Also, speaking from personal experience, I have typically not run the latest version of unRAID (I usually stay a version back). The lack of security updates for older versions (which was how it used to work before the licensing change) was rarely something that concerned me. unRAID is designed to run from within a home network, with minimal exposure to the outside world.

  2. 4 hours ago, Napoleon said:

    Personally, I think its unreasonable to hold the security of our systems hostage to a subscription. This is an OS, security should be a priority.

    This is a NAS appliance OS; security has never been a priority. OK, that's not entirely true (said mostly for effect), but the fact is that the unRAID OS has always been slow to get security updates (except for the occasional critical issue), has a fairly lax security model, and is intended to run on a home network with no exposure to the outside internet.

     

    4 hours ago, Napoleon said:

    Truenas, Proxmox, Xigma, Snapraid and OMV are all free for consumers, some with higher prices for the enterprise sector.

    unRAID is a consumer-only product with no enterprise tier.

  3. 4 hours ago, csimpson said:

    Not really, I just didn't know if the cache drive was tied to the parity drive at all. Almost like if I have the 8TB parity, two 4TB spinning discs and then another 120GB SSD, would the entire array be 8,120TB including the SSD.

    The term "Array" is old nomenclature that refers to the single required drive pool that uses unRAID's proprietary parity scheme. The "Array" pool can contain mixed drive sizes and types (though mixing SSDs and HDDs is not recommended), as well as up to 2 parity drives to maintain redundancy for the pool. unRAID also supports optional secondary drive pools (previously called "cache" pools, though they were not a cache in the typical sense of the word). These secondary pools can be single disks or have redundancy via ZFS or BTRFS software RAID levels, and they are separate from the "Array" pool in terms of redundancy. In terms of file access, unRAID has user shares, which present a combined view of folders from all the various drive pools. There are options to set which pool data gets written to when writing to a user share, and when and if data gets moved between pools. One common usage of this is to direct all writes to an SSD pool and then have unRAID move the files onto the Array pool later.
    Finally, to the point above: the general recommendation is to have an SSD-based pool to store appdata on, separate from the Array. Whether or not the pool used to store appdata has redundancy is up to you, but it does not affect the configuration of the "Array" pool.
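
    As a rough illustration of how user shares overlay the pools (the share name and pool paths here are made-up examples, not from your system):

      # A user share named "media" presents a merged view of that folder from every pool
      ls /mnt/disk1/media     # files physically on array disk 1
      ls /mnt/cache/media     # files still sitting on an SSD pool named "cache"
      ls /mnt/user/media      # the combined user share view, which is what clients normally use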

  4. 17 hours ago, AngryPig said:

    That's good to know. Are they required to be in a Subvolume for Send to work or because they are Subvolumes themselves they will automatically work with Send?

    Yes, send and receive work only on subvolumes (snapshots are just a type of subvolume).

     

    17 hours ago, AngryPig said:

    Thank you for clarifying this. Does the other file system have to BTRFS to be able to Send to it? ie I would not be able to send to my array as it is XFS?

    Yes, the other filesystem has to be BTRFS for send and receive to work; the sent subvolume becomes a subvolume on the receiving filesystem.

     

    17 hours ago, AngryPig said:

    Does this plugin allow for me to Snap on a daily basis and Send on a weekly basis?

    I am not entirely sure about the capabilities of this plugin with regards to scheduling.

     

    17 hours ago, AngryPig said:

    So if I was Sending off-site it would not use my bandwidth as much? Or is it simply that since it maintains the BTRFS CoW, it would not be that much bandwidth after the initial send?

    BTRFS send and receive does a sort of differential send when the subvolume/snapshot being sent is based on a subvolume/snapshot that is present in both filesystems (assuming you use the option to specify the parent). This reduces the amount of data sent for subsequent snapshots of the same subvolume. I am not sure whether this plugin actually makes that option available though, as I do my snapshot sending via the command line.
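
    From the command line an incremental send looks roughly like this (paths and snapshot names are made-up examples; both snapshots are read-only and the day1 snapshot already exists on both filesystems):

      # day2 is sent as a diff against the day1 parent, so only changed data travels
      btrfs send -p /mnt/cache/.snaps/appdata-day1 /mnt/cache/.snaps/appdata-day2 | btrfs receive /mnt/backup/.snaps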

    • Like 1
  5. 14 hours ago, AngryPig said:

    This will convert that share to a subvolume. Since this is a file system level of change, do I need to go in and update Docker mappings and the Share Settings (Primary storage, move settings, SMB settings etc) as if a new Share had been created in UNRAID? Or do I now have two Shares of the same name and I need to configure the new one and remove the old one?

    You will want to stop any Docker containers that have a mapping to or within the share you are operating on while you make the changes. Aside from that though, you don't need to make any changes, since you are creating the subvolume with the original path. Likewise you shouldn't need to make any changes to share settings, since as far as unRAID is concerned the new subvolume (which has the same path as the original user share) is the existing user share.
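
    I am not certain of the exact steps the plugin runs for the conversion, but done by hand it is roughly the following (share and pool names are made-up examples; assumes the share lives entirely on one BTRFS pool):

      mv /mnt/cache/appdata /mnt/cache/appdata_old                            # set the original folder aside
      btrfs subvolume create /mnt/cache/appdata                               # new subvolume at the original path
      cp -a --reflink=always /mnt/cache/appdata_old/. /mnt/cache/appdata/     # reflink copy, near-instant and space-free on BTRFS
      rm -rf /mnt/cache/appdata_old                                           # remove the old folder once everything is verified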

     

    15 hours ago, AngryPig said:

    I can then enable Snapshots of a Subvolume and either store it within the Subvolume (preferred?) or to a different Subvolume as long as it is on the same pool. ie I cannot make a snap to /mnt/cache-single/some-snaps as 'some-snaps' is not a Subvolume

    Where you store snapshots doesn't really matter; they can be anywhere within the same pool (they don't have to be within a subvolume). Snapshots themselves are just subvolumes anyway.
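
    For example (hypothetical paths), a read-only snapshot dropped into a plain directory on the same pool:

      mkdir -p /mnt/cache-single/.snapshots                 # an ordinary directory, not a subvolume
      btrfs subvolume snapshot -r /mnt/cache-single/appdata /mnt/cache-single/.snapshots/appdata-2024-03-20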

     

    15 hours ago, AngryPig said:

    I cannot send to /mnt/cache-double/... as 'cache-double' is a different pool.

    This is not entirely true, but it requires some explanation. When you snapshot a subvolume, the snapshot must be made somewhere on the same filesystem (pool), as it is a CoW copy of the subvolume (and a new subvolume itself). You can however send subvolumes from one BTRFS filesystem to another using btrfs send and receive (which are available in this plugin). Doing this copies the subvolume to the other filesystem, so it is no longer a CoW copy but a full copy taking up space on the other filesystem. Once a subvolume is sent to the other filesystem, there is a way to send subsequent snapshots of that subvolume between the two filesystems in a way that maintains the CoW relationship between the subvolume and its snapshots.
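
    Using your pool names (cache-double would need to be BTRFS as well; the snapshot path is a made-up example), the initial transfer from the command line looks roughly like this:

      # Full copy of a read-only snapshot from one pool onto another
      btrfs send /mnt/cache-single/.snapshots/appdata-day1 | btrfs receive /mnt/cache-double/.snapshots

    Subsequent snapshots of the same subvolume can then be sent with the parent option so that only the differences travel and the CoW relationship is preserved on the destination pool.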

    • Like 1
  6. 4 hours ago, unraid_fk34 said:

    So I'm having issues displaying the custom network name in the docker containers tab. It will only show me the ID. I found a thread discussing this particular issue on the forums. Is that something that has to do with unraid or the plugin? Is it something that could be fixed in the future or is there a solution maybe?

    The issue is with unRAID, not this plugin. Last I was aware, there was an open PR fixing the issue.

  7. 10 hours ago, kiwijunglist said:

    This doesn't seem to be available anymore under community apps.

     

    I installed using docker-compose-manager plugin.

     

    Add new stack, name it vorta.

    Is there a question here? I don't really understand the post.

  8. 5 hours ago, Lebowski89 said:

    Getting some label UI oddities here and there. For example, with Traefik, I've put the correct URL (http://192.168.##.##:port) in the Web UI column but it opens the localhost IP without a port. Then for Whisparr the WebUI is also correct, but when you open it the IP you've entered is doubled (http://192.168.##.##/192.168.##.##:port). Have tried stack down, up, updated, deleted containers, etc.

    The changes you are making are likely not taking effect. There is a long-standing issue with how dockerman handles the webui and icon labels: basically it caches them the first time a container is seen, and subsequent changes don't have any effect. Try restarting your server; that may fix the webui label issues.

    • Like 1
  9. 7 hours ago, Lebowski89 said:

    I notice only .png are working for the icons. Using an .svg displays a blank icon. This differs from Docker Folders which accepts both. Is this intentional?

    That is a Dockerman question. The compose plugin only assigns the labels to the containers; unRAID's built-in Dockerman is what actually reads the label and caches/displays the icon.

    • Like 1
  10. On 3/20/2024 at 5:23 PM, mbush78 said:

    Are we like the only weird-o's that were building docker images in unraid? :) Or are we just to only two that somehow broke their system lol. 

    I haven't tried it yet but was thinking maybe I'll create a fresh install of the trial build on some random usb stick just to see if buildx works there and it's really just my issue or if its truly busted 😕  really dunno what else to do.  Is there someplace to submit bug tickets that I've totally overlooked somewhere? 

     

    I was blind, https://account.unraid.net/troubleshoot going there allowed me to submit a support ticket. 

    Did you get any response on your support ticket?

  11. 1 hour ago, TRusselo said:

    1. USB flash is likely to die within a year.  My USB dies every year.

    If this is the case then you are doing something very wrong. I have been using the same flash drive for 5+ years without issue. On a normal unRAID system the flash drive incurs minimal reads and writes, as the OS runs completely from RAM. The only writes should be on the occasions that settings are changed or plugins or the OS are updated.

  12. 2 hours ago, BIGFAT said:

    Since the last 2 minor unraid updates it seems that mkswap can't grab the created swapfile anymore (on xfs), resulting in a massive CPU load (50 to over 100 load on a 16 thread CPU) ending in the system being unresponsive.

    Logs (specifically anything that mentions the swapfile) and the specifics of your settings would be helpful. Also the actual unRAID version, as I am not certain what the last 2 minor releases were.
     

  13. 2 hours ago, blueink said:

    I appreciate any insight anyone can provide!

    This thread is not really the right place to ask about issues with specific compose stacks, unless the problem is with the compose manager plugin rather than the stack itself. Having lots of people post whole compose files in this thread makes it harder for others to find info about the plugin itself.

    I am not really an expert on compose files in general and I don't use that specific application, so I don't know how much help I can be. One general observation is that you should remove the version: "3.8" line at the top of the file. Compose no longer requires it and actually recommends against calling out a specific compose version in compose files, to better support mixing and matching syntax from various versions of the compose spec.
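
    If you would rather do that from the terminal than an editor, something like this works (run from the directory containing the stack's compose file):

      # Delete the obsolete top-level "version:" line in place
      sed -i '/^version:/d' docker-compose.yml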

    • Thanks 1
  14. 2 hours ago, kcossabo said:

    Now if there was a way to expose the webUI to unRAID for a container built by Portainer, that would ROCK, and I might eliminate my 'homer' launch page.

    I think what you are looking for is the label "net.unraid.docker.webui". You can apply this label to a container created by Portainer (or any other container creation method) and set its value to the URL of your container, and unRAID will show a WebUI button for the container. There is also "net.unraid.docker.icon" for setting an icon (a URL to the icon) and "net.unraid.docker.shell" for setting the shell option (bash for example).
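
    As an illustration with plain docker run (the container name, image, and URLs are made-up examples; in Portainer you would add the same labels through its UI):

      docker run -d --name someapp \
        --label net.unraid.docker.webui='http://192.168.1.50:8080' \
        --label net.unraid.docker.icon='https://example.com/someapp.png' \
        --label net.unraid.docker.shell='bash' \
        someuser/someapp:latest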

     

    Important to note: there is a bit of buggy behavior with these labels, in that unRAID caches the value and subsequent changes are not reflected in the UI. So essentially you get one shot at setting these things; make sure they are correct before spinning up a container with one (it is possible to wipe the cached value, but it is not as simple as a page refresh).

    • Upvote 1
  15. 3 hours ago, Jehoshua said:

    But it is extremely disappointing that Unraid still does not differentiate between commercial and private licenses.

    And now, it get even worse: Unraid only allows rich users or commercial users to buy a lifetime license.

    unRAID has always been a purely home solution and I would never recommend it for business purposes.

    • Upvote 1
  16. 23 minutes ago, NominallySavvyTechPerson said:

    3. The two sellers that I see talked about most are GoHardDrive and ServerPartDeals. I think they have reputation around respecting the warranty. Idk if they have any rep around screening and testing.

    I don't know about GoHardDrive, but since you mention ServerPartDeals I will say that I have had great success with their "Manufacturer Recertified" drives. In my experience, and from what I have read, those "manufacturer recertified" drives are basically as good as new. I use them in my main array without any additional worry.

  17. 2 hours ago, maTTi said:

    I know there is a container with Mkvtoolnix but this won't work for me as I don't want to run script in the container.

    Why not? Using containers to run scripts is actually quite convenient. See my comments in this thread for more info on how to do it.
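
    A minimal sketch of the idea (the image name and paths are hypothetical; the various MKVToolNix images differ in their entrypoints):

      # Mount the script and the media into the container, then run the script with the container's tools
      docker run --rm \
        -v /mnt/user/scripts/remux.sh:/remux.sh:ro \
        -v /mnt/user/media:/media \
        --entrypoint bash \
        someuser/mkvtoolnix:latest /remux.sh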

      

    • Thanks 1
  18. 1 hour ago, Kees Fluitman said:

    Is it possible to auto update stacks in compose.manager?

    No. I am not a fan of auto-updates for anything, but especially not container stacks. If you are looking to auto-update containers, I would suggest looking into Watchtower.

  19. On 3/11/2024 at 7:55 AM, Kiara_taylor said:

    We're a small crew of about 20 people and our data is currently sitting on a Windows Server DC with some backup measures in place. We don't have a ton of data, but what we do have is super important.

    I will start with a boilerplate opinion of mine, which is that unRAID should not be used for business purposes, especially without someone who is deeply familiar with the OS. unRAID is designed for home usage and (in my opinion) the security and support guarantees are not at a level that meets the needs of a business. If you have no experience with NAS devices, my recommendation would be to look into a totally off-the-shelf solution like something from Synology or a similar company. They are more expensive, but the support and reliability are more in line with business needs.

     

    On 3/11/2024 at 7:55 AM, Kiara_taylor said:

    Do we need a NAS?

    It sounds like that may be what you are looking for. A NAS is just a dedicated network storage device, not unlike the Windows server you are already using, though typically focused on file storage only.

     

    On 3/11/2024 at 7:55 AM, Kiara_taylor said:

    Does it have to run on Windows?

    No, most NAS devices don't run Windows, and they happily co-exist with and support Windows networks. One word of caution though: typically you should have someone familiar with Linux if you intend to integrate a Linux machine into a Windows network. This is less important if you are using a NAS appliance solution that just happens to be Linux-based (Synology, unRAID, etc.), but if a generic Linux server distro (Debian, Ubuntu, etc.) is something you are considering, then you want someone who knows about integrating Linux into a Windows network environment.

     

    On 3/11/2024 at 7:55 AM, Kiara_taylor said:

    Our needs are simple:

     

    - Not a lot of storage required (even 10TB would do)
    - Protection against drive failures (daily backups are a must)
    - Files need to be accessible on Windows via UNC Path

     

    Any advice would be greatly appreciated!

    Here are a few more bits of advice for you (I am not looking to argue with any of the other replies above). It is important to understand the difference between RAID (which in this case includes RAID-like solutions such as the unRAID array) and the different forms of backup. RAID IS NOT BACKUP. RAID is meant to protect against hardware (specifically disk) failure and keep your data available if a disk should fail (downtime costs money). There are many things that it does not protect against, including corruption, accidental file deletion, filesystem problems, intentional (malicious) file deletion, and others. That is why a good backup strategy is crucial regardless of your hardware redundancies.

    Some things to consider for backup: you must have multiple physically separate copies of your data. Typically a local copy (on another machine or a removable disk) and an offsite copy (cloud based, or the old disk in a safety deposit box) are recommended. Retention strategies are also important, i.e. how often backups are done and how long they are kept. You might for instance have filesystem snapshots taken of your data hourly, which get kept for a week, daily backups to your local and offsite solutions which get kept for a month, and weekly backups that are retained longer (note this is an off-the-wall example, not advice on a specific solution). Finally and very importantly, you must periodically test your ability to restore from all your various backup locations. You never want to be in a position where you need to restore from a backup only to find that it hasn't been working for some reason.

     

    Another thing that you should consider is the type of files you are storing and how they need to be accessed. For instance, your NAS solution could look very different depending on whether you are storing mostly text files or media files like video. Also important is how many people/machines need access to the files at once, and at what kind of speed.

    • Upvote 1