Everything posted by primeval_god

  1. I will start with a boilerplate opinion of mine: unRAID should not be used for business purposes, especially without someone who is deeply familiar with the OS. unRAID is designed for home usage, and (in my opinion) the security and support guarantees are not at a level that meets the needs of a business. If you have no experience with NAS devices, my recommendation would be to look into a totally off-the-shelf solution like something from Synology or a similar company. They are more expensive, but the support and reliability are more in line with business needs. It sounds like that may be what you are looking for.

     A NAS is just a dedicated network storage device, not unlike the Windows server you are already using, though typically focused on file storage only. No, most NAS devices don't run Windows, and they happily co-exist with and support Windows networks. One word of caution though: you should typically have someone familiar with Linux if you intend to integrate a Linux machine into a Windows network. This is less important if you are using a NAS appliance solution that just happens to be Linux based (Synology, unRAID, etc.), but if a generic Linux server distro (Debian, Ubuntu, etc.) is something you are considering, then you want someone who knows about integrating Linux into a Windows network environment.

     Here are a few more bits of advice (I am not looking to argue with any of the other replies above). It is important to understand the difference between RAID (which in this case includes RAID-like solutions such as the unRAID array) and the different forms of backup. RAID IS NOT BACKUP. RAID is meant to protect against hardware (specifically disk) failure and keep your data available if a disk should fail (downtime costs money). There are many things it does not protect against, including corruption, accidental file deletion, filesystem problems, intentional (malicious) file deletion, and others. That is why a good backup strategy is crucial regardless of your hardware redundancies.

     Some things to consider for backup: you must have multiple physically separate copies of your data. Typically a local copy (on another machine or a removable disk) and an offsite copy (cloud based, or the old disk in a safety deposit box) are recommended. Retention strategies are also important, i.e. how often backups are done and how long they are kept. You might, for instance, have filesystem snapshots taken of your data hourly and kept for a week, daily backups to your local and offsite solutions kept for a month, and weekly backups retained for longer (note this is an off-the-wall example, not advice on a specific solution; a rough sketch of such a schedule is below). Finally, and very importantly, you must periodically test your ability to restore from all your various backup locations. You never want to be in a position where you need to restore from a backup only to find that it hasn't been working for some reason.

     Another thing you should consider is the type of files you are storing and how they need to be accessed. For instance, your NAS solution could look very different depending on whether you are storing mostly text files or media files like video. Also important is how many people/machines need access to the files at once, and at what kind of speed.
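     To make the retention example concrete, here is a minimal cron-style sketch. The three scripts and their --keep flags are hypothetical placeholders for whatever snapshot/backup tooling you actually use.

         # Hypothetical crontab; snapshot.sh, backup-local.sh and
         # backup-offsite.sh are placeholders, not real tools.
         0 * * * * /boot/scripts/snapshot.sh --keep 168       # hourly snapshots, kept ~1 week
         0 2 * * * /boot/scripts/backup-local.sh --keep 30    # daily local backup, kept ~1 month
         0 3 * * 0 /boot/scripts/backup-offsite.sh --keep 52  # weekly offsite backup, kept ~1 year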
  2. I think somewhere on the forums someone has scripted something to this effect, but my advice is that it is not worth the time. I believe that an upcoming version of unRAID will fix the display of update status for non-Dockerman containers.
  3. You cannot update containers created via compose using the built-in unRAID Docker page. Also the Update status displayed in the unRAID webui is not valid for containers that were created through means other than Dockerman.
  4. Take a look at these plugins: https://github.com/dcflachs/compose_plugin and https://github.com/dcflachs/swapfile_plugin, particularly pkg_build.sh as an example of how to build a .plg. It's something I asked for a long time ago, but it never happened.
  5. It's a known issue with the way Dockerman shows networks.
  6. I don't think, generally speaking, that unRAID users are going for ZFS. Its inclusion seems mainly like an attempt to bring in people from TrueNAS who see ZFS as essential (mainly due to a perceived need for speed and bitrot protection).
  7. For the compose plugin there is currently no way to control the order in which stacks start, but that shouldn't matter, as you should not have dependencies between stacks. As for containers within a stack, compose files have syntax (depends_on) to control which containers wait for others to start. An issue some people have is with stacks that depend on external networks. Specifically, the Docker networks that unRAID creates change on every boot, so containers must be recreated (rather than restarted) on boot. There is an option in the plugin settings to recreate containers on startup, but my recommendation is to just use custom Docker networks and ignore the ones unRAID creates. A rough sketch of both ideas is below.
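     Here is a minimal compose sketch illustrating depends_on and an external custom network. The service names and the "mycustomnet" network are hypothetical; the network would be created once beforehand with "docker network create mycustomnet" and then survives reboots.

         # Hypothetical stack; adjust names and images to your setup.
         services:
           db:
             image: postgres:15
             networks:
               - mycustomnet
           app:
             image: myorg/myapp:latest
             depends_on:
               - db              # start db before app
             networks:
               - mycustomnet
         networks:
           mycustomnet:
             external: true      # pre-created custom network, not one unRAID generates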
  8. What services exactly are you referring to?
  9. Going to have to respectfully disagree here. Many unRAID users, myself included, routinely run systems a version or two behind the current latest release (some much farther); I just this weekend finally upgraded from 6.11 to 6.12. For us the lack of security updates is not new. An unRAID NAS is meant to run internally on a home network, where its exposure to potential threats should be minimal. It would be nice to get ongoing security updates for a particular OS version (without the introduction of features or potentially destabilizing changes), but at this point I am not too worried about it.
  10. Well, ZFS is not the future for my unRAID servers. I have no interest in it and no intention of using it. As for something like SnapRAID, I also have no interest; realtime parity protection is a must for me. The unRAID array type is what brought me here, and my filesystem of choice is BTRFS. For the purposes of single-disk filesystems with unRAID's parity driver, I see no advantages of ZFS over BTRFS. Really my only wish for improvement to the unRAID-style array (that isn't already in the works) would be leveraging filesystem checksums along with parity for bitrot protection (which I have accepted really isn't as big a threat to data integrity as it's made out to be).
  11. Before you waste your time on this, please note that any "fixes" are unlikely to be merged. I don't consider it to be an issue with this plugin. The problem is the same for any containers on the system created by means other than Dockerman (CLI, Portainer, composeman, etc.), thus the fix should be in Dockerman. Update tracking is not a feature of the compose manager plugin, and is in many cases irrelevant, as locally built containers are a more common use case with compose.
  12. Try uninstalling the plugin and reinstalling.
  13. Did you create the folder called "swap"? If so, delete it and recreate it with the command "btrfs sub create /mnt/ssd/swap". Swapfiles need to be in a subvolume on btrfs. The plugin should create it automatically if it is allowed to create the folder as well as the swapfile. The manual steps look roughly like the sketch below.
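      For reference, a rough sketch of the manual equivalent of what the plugin does (the /mnt/ssd path is from your setup; the 4G size is just an example). On btrfs a swapfile must have copy-on-write disabled, hence the chattr on the new subvolume before the file is created.

          btrfs subvolume create /mnt/ssd/swap
          chattr +C /mnt/ssd/swap                 # new files in here are created NOCOW
          dd if=/dev/zero of=/mnt/ssd/swap/swapfile bs=1M count=4096
          chmod 600 /mnt/ssd/swap/swapfile
          mkswap /mnt/ssd/swap/swapfile
          swapon /mnt/ssd/swap/swapfile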
  14. It should be the first option on the settings page.
  15. @darkside40 If you're wondering why this request hasn't gotten much attention, it's probably because it has been discussed on the forums for a very long time and is not likely to be added. There are several reasons. As you know, there is already a plugin that adds this feature, but it often causes people trouble. The big problem is that S3 sleep is a much harder problem to sort out than most people think, because hardware support for sleep, and to a greater extent waking from sleep, is all over the place. This is especially true on Linux, for server-grade hardware (which includes HBA cards), and for booting from USB drives. People have all sorts of problems on different hardware setups, ranging from won't go to sleep, to won't wake, to wakes up in a weird state. It's a support nightmare, and even when it works, many people come to realize that S3 sleep is not as useful as it seems for a device like a NAS, which is typically expected to be available on demand.

      Personally, I used the S3 plugin for a long time on the previous iteration of my NAS. It was a constant headache. Wake-on-LAN for on-demand stuff never worked properly. Waking on a schedule took forever to figure out because, as it turned out, the realtime clock on my motherboard could only set alarms for something like 12 hours in the future (anything longer just wouldn't wake). And most annoyingly, disk spin state was always problematic on wakeup. I abandoned the concept of a sleeping NAS long ago.

      P.S. This is in response to your post in the announcement thread. I am just a user like you; I don't speak for Limetech. My comments above are based on information I have gleaned from the forums over the years, not any official word from Limetech.
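      For anyone who wants to experiment with scheduled sleep/wake anyway, the standard Linux tool is rtcwake (a general illustration, not necessarily what the plugin uses); it makes it easy to test whether your board's RTC alarm actually behaves:

          # Suspend to RAM and have the RTC wake the machine in 1 hour
          rtcwake -m mem -s 3600

          # Or just program the wake alarm without suspending; some boards
          # only honor alarms a limited time in the future.
          rtcwake -m no -t "$(date +%s -d 'tomorrow 06:00')"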
  16. I agree that I would prefer the existing base tier, but if I had to speculate (and this is only wild speculation), the new base tier may be a response to the demand for an application-host-only version. For a while there have been a lot of people who want a cheaper tier without an "array", to use as a Docker/application host without NAS functionality.
  17. It's more like Pro with 1 year of updates and an optional subscription for future updates. After the year you can continue to use the OS at the last version that was available while your update subscription was live (a key distinction in an era where a lot of software requires a subscription for continued use).
  18. They will have to wait until the new update utility is integrated into unRAID; until then they will have to use the old update utility, without any of the new features.
  19. Create a Dockerfile something like this (basically copied off of the Docker Hub page for the official Python image):

          FROM python:3.10
          COPY requirements.txt ./
          RUN pip install --no-cache-dir -r requirements.txt
          WORKDIR /scripts
          # Uncomment below if you want to include your script inside the container
          #COPY . .
          #CMD ["python", "./yourscript.py"]

      Put your requirements file in the same directory. If you want to bundle your script into the container, put that in the same directory as well and uncomment the bottom two lines in the Dockerfile. Build a local image with this command (from the same directory):

          docker build -t local/mypythonimage:latest .

      When finished you can run the script in one of the following ways. If you included your script in the image, just run the following from anywhere on the system:

          docker run --rm -it local/mypythonimage:latest

      If you did not include the script in the image, run:

          docker run --rm -it -v /mnt/user/path/to/your/script/dir:/scripts local/mypythonimage:latest python yourscript.py

      If your script needs access to other files on your system, you will need to bind mount those directories into the container with additional -v flags.
  20. Yeah, LXC is a bit like running a VM without the overhead. You can get a full-featured Linux distro within the container, much like a VM. There is no need for an NFS share to get access to files on the host system; LXC containers allow you to pass folders from the host in a similar manner to Docker. As for the specific error you posted, it would be best to ask about that on the LXC plugin support page. I am not an expert in LXC, and I have never had such an error setting up a container on unRAID.

      Regarding Python in Docker, it can actually be quite simple to set up if you are only using the container to hold the environment. A Dockerfile, based on one of the official Python images, that installs your env file should be pretty simple to write. Build the image locally, then you can use it to run scripts from the unRAID command line or from a user script with a command something like this:

          docker run --rm -it -v /host/scripts:/container/scripts -w /container/scripts local/python:3.10 python myscript.py -some-arg

      This launches a container, bind mounts the scripts directory from the host into the container, sets the container working directory to the scripts directory, and then runs the command python myscript.py -some-arg.
  21. Trying to run Python scripts directly on unRAID is just asking for a headache. Save yourself some trouble and run your scripts in a Docker container or an LXC container (LXC requires a plugin). For Docker there are several ways to handle it: you could build your whole script into a container, or you could create a very simple container from one of the official Python base images that holds only your venv, and keep the script external. You could then run it on a schedule from a script in the User Scripts plugin, using a docker run command to launch the container, pass through your Python script directory, and run your Python script (see the sketch below).
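      A minimal sketch of such a User Scripts entry, assuming the local/mypythonimage:latest image from post 19 above and a hypothetical script directory; adjust the paths to your setup:

          #!/bin/bash
          # Run myscript.py inside a disposable container on a schedule.
          # (no -it flags: scheduled runs are non-interactive)
          docker run --rm \
            -v /mnt/user/appdata/myscripts:/scripts \
            -w /scripts \
            local/mypythonimage:latest \
            python myscript.py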