gyto6

Everything posted by gyto6

  1. Well, I agree with you. I'll get in touch with the container's developer to find out why I can't get the variable as usual on Unraid.
  2. Hi everyone, I'm having trouble with one container. I need to pass a variable into it, but the same variable is already set in the container's docker-compose.yml file, so I have to edit the variable at the source. The problem is that I'm using the Docker vDisk file, and with all these Docker layers, I'd like to know the best way to edit this file. I'm not experienced enough with Docker, and using a vDisk might lead me to a dead end, but I'd still be very interested in any explanation. Thank you!
  3. According to the IPMI, I'm running at 90 W idle... I expected a better ratio from this 1400 W PSU:
  4. A quick note for ZFS users with NVMe devices. While scrolling through the OpenZFS GitHub page, I discovered this PR from Brian Atkinson concerning the Direct I/O implementation within ZFS. Not applicable for databases, but many other situations should see a real benefit from its implementation. You must understand that it bypasses the ARC, and Direct I/O is very specific. So it's not a "performance booster" but a property that should be set according to your needs. https://docs.google.com/presentation/d/1f9bE1S6KqwHWVJtsOOfCu_cVKAFQO94h/edit#slide=id.p1
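If that PR lands, Direct I/O would presumably be toggled per dataset; a hypothetical sketch (the `direct` property name and its values are taken from the PR discussion, not from any released Unraid build):

```shell
# Hypothetical: enable Direct I/O on an NVMe-backed dataset,
# bypassing the ARC for reads/writes to that dataset.
# Values discussed in the PR: standard | always | disabled
zfs set direct=always tank/scratch

# Verify the property took effect
zfs get direct tank/scratch
```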
  5. The ZFS "plugin" downloads the latest ZFS version from the OpenZFS GitHub, so the functionality is there.
  6. First, you'll find a lot of information on Aaron Toponce's website: https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/ Second: for an in-situ scenario, I wrote this tutorial to optimize ZFS for another application. I tried my best to explain ZFS, so it might be worth a shot. https://github.com/photoprism/photoprism-contrib/tree/main/tutorials/zfs Third: the ZFS/Unraid Compendium from @BVD https://github.com/teambvd/UnRAID-Performance-Compendium/blob/main/README.md
  7. No need to come back to Unraid if you're only planning to use Plex and store personal data on your server, I think.
  8. Nope. If Unraid adds ZFS support, it should be a bit more graphical. But probably with many caveats, as there are tons of properties to customize on a zpool and a dataset.
  9. First thing that comes to mind: depending on the required space, set up a raidz or raidz2 and dedicate your 18 TB drive as a spare. https://docs.oracle.com/cd/E53394_01/html/E54801/gpegp.html https://docs.oracle.com/cd/E19253-01/820-2315/gcvcw/index.html (French one) Run a safe test, with no data in the array, to check that in case of a drive failure the 18 TB drive is automatically mounted. Just remove the SATA cable to simulate a drive failure. When creating the raidz, set the autoreplace property.
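A minimal sketch of that layout, assuming four data disks plus the 18 TB spare (device names here are placeholders; in practice use /dev/disk/by-id paths so devices survive reordering):

```shell
# raidz2 pool with autoreplace enabled at creation time,
# so a faulted disk is automatically replaced by the hot spare.
zpool create -o autoreplace=on tank raidz2 sdb sdc sdd sde spare sdf

# The spare should be listed as AVAIL at the bottom of the pool status
zpool status tank
```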
  10. ZFS is not more pertinent per se. But if you have the required hardware in your server, you can customize the filesystem this way. Anyway, if you're meeting performance issues, I don't think ZFS will magically solve the trouble.
  11. For those wishing to use a ZVOL formatted with XFS for an XFS docker disk image:
      zfs create -V 20G yourpool/datasetname # -V creates a ZVOL; 20G = 20 GB
      cfdisk /dev/zvol/yourpool/datasetname # to easily create a partition
      mkfs.xfs /dev/zvol/yourpool/datasetname-part1 # to format it with the desired filesystem
      mount /dev/zvol/yourpool/datasetname-part1 /mnt/yourpool/datasetname/ # the expected mount point
  12. To explain: "zfs create" is used to create a dataset, so any path following this command refers to a ZFS pool (already created) and a dataset (or sub-dataset) to be created. Do not mix up "zfs create" and "zpool create". BTRFS or XFS: at the time I posted it, I first tried to make a docker disk image run in a ZVOL, so I began with BTRFS. Several specifications tend to prefer XFS over BTRFS, so use an XFS docker disk image in a ZVOL partitioned with XFS. "Error accessing specified device": /dev/pool/docker is a ZFS "file"; /dev/pool/docker-part1 is the relative path to the partition included in the ZVOL. Once your partition is configured with the appropriate filesystem, you'd better use a script to mount the filesystem at boot in a different path to enable writing. My own, actually: mount /dev/zvol/pool/Docker-part1 /mnt/pool/Docker/
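The mount-at-boot script mentioned above can be sketched as a User Scripts entry set to run at array startup (paths match the example; the idempotency guard is my addition, not part of the original post):

```shell
#!/bin/bash
# Mount the XFS partition living inside the ZVOL before Docker starts.
# Adapt pool/dataset names to your own setup.
mkdir -p /mnt/pool/Docker

# Only mount if it is not already mounted (safe to re-run)
mountpoint -q /mnt/pool/Docker || mount /dev/zvol/pool/Docker-part1 /mnt/pool/Docker
```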
  13. I've got a similar issue, but with the Nvidia Driver plugin. Anyway, the missing plugin was automatically downloaded right after reboot.
  14. I had the same trouble. Anyway, I proceeded as you did and got this "false positive" solved. I don't like having some errors unsolved. Thanks
  15. As an image: MYDOMAIN.COM -----> WAN ROUTER IP -----> PORT 80/443 FORWARDED TO NGINX -----> PRIVATE HOST FOR MYDOMAIN.COM or PRIVATE HOST FOR MYDOMAIN2.COM
  16. If you want to host several websites behind one WAN IP, you must forward the dedicated ports to a reverse proxy like Nginx, then assign a different domain name to each reachable host. Once the hosts are named with a public domain name, declare them in Nginx, which will redirect requests according to the requested domain name.
  17. If I can help you concerning full dataset snapshot deletion: zfs destroy tank/docker@% The % character in ZFS works like the more common * wildcard and matches any value, so every snapshot of the docker dataset will be removed.
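Since that command is destructive, a dry run first is a sensible habit; OpenZFS's `zfs destroy` supports `-n` (no-op) and `-v` (verbose) for exactly this:

```shell
# Preview which snapshots would be destroyed, without deleting anything
zfs destroy -nv tank/docker@%

# If the list looks right, run it for real
zfs destroy tank/docker@%
```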
  18. Add the User Scripts plugin and create a script with the relevant command: zpool scrub tank Twice a month for consumer drives and once a month for professional drives.
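As a sketch, the User Scripts entry is just the one command; the twice-monthly cadence can be expressed with a custom cron schedule in the plugin (the schedule string here is an example, not from the original post):

```shell
#!/bin/bash
# User Scripts entry: scrub the pool "tank".
# Custom cron schedule for twice a month, e.g. "0 3 1,15 * *"
# (03:00 on the 1st and 15th).
zpool scrub tank
```

`zpool status tank` afterwards shows scrub progress and any repaired errors.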
  19. Indeed, ZFS is a "bit" more complex, and I don't see it perfectly included within Unraid the way BTRFS/XFS are. Using a plugin, as is the case with "My Servers", is not something I expected, but it might indeed do the work. Allowing its integration with the disk management GUI would be the least, and we finally wouldn't require a dead USB key or disk to start the array anymore. Indeed, words have a meaning. What do you mean by "that's happened in a few places already"? That other functionalities have been announced but not released, or at least only in a later upgrade?
  20. ZFS support is official now for Unraid 6.11: This release includes some bug fixes and update of base packages. Sorry no major new feature in this release, but instead we are paying some "technical debt" and laying the groundwork necessary to add better third-party driver and ZFS support. We anticipate a relatively short -rc series. @ich777 You'll finally have some rest from this thread in the approximate future. Notice: current ZFS beginner users should audit their configuration in order to be able to recreate their zpool with the incoming Unraid 6.11. Save your "zpool create" command in a txt file, as well as those for your datasets, to keep a trace of all the parameters you used when creating your pool. BACK UP ALL YOUR DATA ON EXTERNAL STORAGE. A SNAPSHOT IS NOT A BACKUP. A COPY IS. The ZnapZend or Sanoid plugins will help, or a simple copy of your data to backup storage with the cp or rclone commands. Don't expect the update to happen with no trouble.
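If you never saved your original creation commands, the pool layout and every locally-set property can still be recovered from the live system; a sketch (writing to /boot so the files survive on the Unraid flash drive):

```shell
# Pool layout (vdev structure, spares, etc.)
zpool status tank > /boot/zpool-layout.txt

# All pool properties, to spot non-default settings
zpool get all tank > /boot/zpool-props.txt

# Only dataset properties that were set locally (i.e. by you),
# recursively for every dataset in the pool
zfs get -r -s local all tank > /boot/zfs-props.txt
```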
  21. Please provide your HDD references and your zpool properties for better support.
  22. @Jack8COke Please, if you're using ZnapZend or Sanoid, DO NOT USE the script to remove your empty snapshots. These solutions manage snapshot retention for archive purposes and would keep some snapshots longer. If you use this script, a snapshot meant to be kept for a month can simply get removed...
  23. Sure, select yourpool/yourdataset instead of yourpool to apply specific commands and snapshots to that dataset. Some scripts seem to exist, but for safety reasons, "we don't do that here", as empty snapshots take (nearly) no space. This script does the job, but stay cautious; it can break with an upgrade: https://gist.github.com/SkyWriter/58e36bfaa9eea1d36460 Have you heard about Sanoid?