188pilas

Posts posted by 188pilas

  1. 3 hours ago, ich777 said:

    The answer is rather simple: I noticed that you are using 'mlxconfig'; instead use 'mstconfig' (if you type in 'mst' and then hit tab twice it shows which files are installed)

     

    I think mlxconfig is part of the closed-source package and mstconfig is part of the open-source package.

    Thanks, you are correct!! Let me test changing the config on one of the 10Gb cards.

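For reference, a query/set session with the open-source Mellanox tools looks roughly like this; the /dev/mst device path and the LINK_TYPE_P1 parameter are illustrative examples, not values from the thread:

```shell
# List the Mellanox devices the tools can see
mst start
mst status                          # prints the /dev/mst/* device files

# Query the current firmware configuration (device path is an example)
mstconfig -d /dev/mst/mt26448_pci_cr0 query

# Change a setting; the parameter name/value here are illustrative
mstconfig -d /dev/mst/mt26448_pci_cr0 set LINK_TYPE_P1=2
```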
  2. 5 minutes ago, ich777 said:

    No because I don't use ZFS personally... :P

     

    But you could easily do that. ;)

    I think I got a build somewhere lying around (6.9.0beta30) with ZFS and iSCSI if you haven't built one yourself feel free to contact me. ;)

    I am going to build one with Unraid 6.8.3 and ZFS 0.8.4. I also have a Mellanox Technologies MT26448 ConnectX EN 10GigE card and will try to build with the Mellanox Firmware Tools. Going to back up my pool and boot drive... I do not have a cache drive, and I read that the builds go to /mnt/cache/appdata/kernel/output-VERSION by default... can we manually change that, or do I need to put in a cache drive temporarily?

  3. @juan11perez I saw the Stacks option in Portainer, but it appears to create dockers from an already existing docker-compose file. I would like to do the reverse: create a docker-compose file from all my dockers running on Unraid with their respective values.

     

    Thanks for the suggestion though.
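There are community tools built for exactly this (e.g. docker-autocompose), but the core idea can be sketched in a few lines: pull the relevant fields out of `docker inspect` output and map them onto a compose service. The field names below follow the docker inspect JSON (Config.Image, Config.Env, HostConfig.Binds, HostConfig.PortBindings); the example dict is hand-written to mimic that shape, not real output:

```python
def inspect_to_service(info: dict) -> dict:
    """Map a docker-inspect-style dict onto a compose service definition."""
    cfg = info.get("Config", {})
    host = info.get("HostConfig", {})
    service = {"image": cfg.get("Image", "")}
    env = cfg.get("Env") or []
    if env:
        service["environment"] = env
    binds = host.get("Binds") or []       # "host_path:container_path" strings
    if binds:
        service["volumes"] = binds
    ports = []
    for cport, mappings in (host.get("PortBindings") or {}).items():
        for m in mappings or []:
            # "32400/tcp" -> "32400"; compose wants "host:container"
            ports.append(f"{m.get('HostPort')}:{cport.split('/')[0]}")
    if ports:
        service["ports"] = ports
    return service

# Hand-written example mimicking `docker inspect` output (not real data):
example = {
    "Config": {"Image": "linuxserver/plex", "Env": ["TZ=UTC"]},
    "HostConfig": {
        "Binds": ["/mnt/user/appdata/plex:/config"],
        "PortBindings": {"32400/tcp": [{"HostPort": "32400"}]},
    },
}
print(inspect_to_service(example))
```

Looping this over `docker ps -q` and dumping the result as YAML would give a starting-point compose file, though restart policies, networks, and labels would still need the same treatment.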

  4. 9 hours ago, juan11perez said:

    The easiest way is to install the portainer docker.

    You can then use the built-in docker compose feature, which is under the Stacks heading.

    Thanks!!! Let me give this a try.

  5. Hey all - does anyone have any suggestions on auto-generating a docker-compose file from an existing Unraid setup? I have several dockers running with custom port mappings along with volume mappings and would like to have a docker-compose file of that setup. Thanks!!!

  6. Anyone having an issue with ZFS on the latest beta, 6.9.0-beta25? I tried to import and got the below error due to an unsupported feature.

     

    root@Omega:/# zpool import omega
    This pool uses the following feature(s) not supported by this system:
            com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
    All unsupported features are only required for writing to the pool.
    The pool can be imported using '-o readonly=on'.
    cannot import 'omega': unsupported version or feature
     
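As the error message says, the pool uses a feature flag (com.delphix:log_spacemap) that the running ZFS module doesn't support, but the flag is only needed for writes, so a read-only import should work. A sketch using the pool name from the post:

```shell
# Import read-only to reach the data despite the unsupported feature
zpool import -o readonly=on omega

# List the feature flags the *running* ZFS build supports, for comparison;
# a read-write import needs a build that lists log_spacemap here
zpool upgrade -v
```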

  7. Ok, still running into some issues: -bash: /usr/local/bin/znapzend: /usr/bin/perl: bad interpreter: No such file or directory when I run znapzend --logto=/var/log/znapzend.log --daemonize. Below are some logs.

     

    root@Omega:~# cat /var/log/syslog | head
    Mar  9 20:41:30 Omega kernel: microcode: microcode updated early to revision 0x1d, date = 2018-05-11
    Mar  9 20:41:30 Omega kernel: Linux version 4.19.107-Unraid (root@Develop) (gcc version 9.2.0 (GCC)) #1 SMP Thu Mar 5 13:55:57 PST 2020
    Mar  9 20:41:30 Omega kernel: Command line: BOOT_IMAGE=/bzimage vfio-pci.ids=8086:2934,8086:2935,8086:293a, vfio_iommu_type1.allow_unsafe_interrupts=1 isolcpus=6-7,14-15 pcie_acs_override=downstream initrd=/bzroot
    Mar  9 20:41:30 Omega kernel: x86/fpu: x87 FPU will use FXSAVE
    Mar  9 20:41:30 Omega kernel: BIOS-provided physical RAM map:
    Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009dfff] usable
    Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009f378fff] usable
    Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f379000-0x000000009f38efff] reserved
    Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f38f000-0x000000009f3cdfff] ACPI data
    Mar  9 20:41:30 Omega kernel: BIOS-e820: [mem 0x000000009f3ce000-0x000000009fffffff] reserved
    root@Omega:~# cat /var/log/znapzend.log | head
    cat: /var/log/znapzend.log: No such file or directory
    root@Omega:~# ls -la /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
    -rw------- 1 root root 0 Mar  9 20:58 /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on
    root@Omega:~# ps aux | grep -i znapzend
    root     18502  0.0  0.0   3912  2152 pts/1    S+   21:06   0:00 grep -i znapzend
    
    

     
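The "bad interpreter: No such file or directory" above means the interpreter named on the script's shebang line (/usr/bin/perl) doesn't exist on the system, i.e. Perl is missing; the znapzend script itself is fine. A self-contained reproduction (the /nonexistent/perl path is deliberately fake):

```shell
# A script whose shebang points at a missing interpreter fails the same way
printf '#!/nonexistent/perl\nprint "hi";\n' > /tmp/bad-shebang-demo
chmod +x /tmp/bad-shebang-demo
/tmp/bad-shebang-demo 2>&1 || true    # shell reports: bad interpreter

# On the Unraid box the equivalent checks would be (not run here):
#   head -1 /usr/local/bin/znapzend   -> shebang line, e.g. #!/usr/bin/perl
#   ls -la /usr/bin/perl              -> missing => install the Perl package
```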

  8. Hello steini84 - I have this plugin installed and a couple of jobs for some datasets; however, I have noticed that the jobs are not running automatically. I usually have to run the job using the "znapzend --debug --runonce=zpool/dataset" command and it will run successfully. Below is an example of one of the schedules that I have set up:

     

    znapzendzetup create --recursive SRC '1week=>12hour' zpool/dataset DST:a '1week=>24hour' [email protected]:zpool/dataset DST:b '1week=>24hour' [email protected]:zpool/dataset

     

    I can schedule a user script to run the znapzend --debug --runonce command on a schedule; however, I'm just wondering if there are any other steps. I did touch the auto_boot_on file.

     

    Thanks!
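For anyone hitting the same symptom (--runonce works but schedules never fire): znapzend only executes schedules from its daemon, so check that the daemon is actually running and, if not, start it with the same flags used earlier in the thread:

```shell
# Is the daemon up? (the [z] bracket trick keeps grep from matching itself)
ps aux | grep '[z]napzend'

# If nothing is listed, start the daemon and confirm the schedules load
znapzend --logto=/var/log/znapzend.log --daemonize
tail /var/log/znapzend.log
```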