
Posts posted by zxhaxdr

  1. I have been experiencing the same issue since updating to 6.12-rc5. I originally updated from 6.11.* to rc1 and have gone through every rc up to rc6; the issue still persists in rc6 as well. However, after clearing some logs, I now have 1GB of free space remaining on my 32GB USB drive.

    Edit: Last night I updated to rc7, and it seems to have fixed itself, as the space has cleared up. Currently the used space is 1.3GB, leaving me with 30GB free. The used amount hasn't increased in the last 8 hours.

  2. 12 hours ago, SimonF said:

    Did a test on my 6.9.2 machine and it worked OK. Must be an issue with support for my Alder Lake CPU or something else, as I cannot delete any. Will have to look into that.

    According to this method from rtslib-fb (https://github.com/open-iscsi/rtslib-fb/tree/master/rtslib):

    def _gen_attached_luns(self):
            '''
            Fast scan of luns attached to a storage object. This is an order of
            magnitude faster than using root.luns and matching path on them.
            '''
            isdir = os.path.isdir
            islink = os.path.islink
            listdir = os.listdir
            realpath = os.path.realpath
            path = self.path
            from .root import RTSRoot
            from .target import LUN, TPG, Target
            from .fabric import target_names_excludes
    
            for base, fm in ((fm.path, fm) for fm in RTSRoot().fabric_modules if fm.exists):
                for tgt_dir in listdir(base):
                    if tgt_dir not in target_names_excludes:
                        tpgts_base = "%s/%s" % (base, tgt_dir)
                        for tpgt_dir in listdir(tpgts_base):
                            luns_base = "%s/%s/lun" % (tpgts_base, tpgt_dir)
                            if isdir(luns_base):
                                for lun_dir in listdir(luns_base):
                                    links_base = "%s/%s" % (luns_base, lun_dir)
                                    for lun_file in listdir(links_base):
                                        link = "%s/%s" % (links_base, lun_file)
                                        if islink(link) and realpath(link) == path:
                                            val = (tpgt_dir + "_" + lun_dir)
                                            val = val.split('_')
                                            target = Target(fm, tgt_dir)
                                            yield LUN(TPG(target, val[1]), val[3])

     

    it scans the subfolders under the /iscsi/ folder to delete LUNs. Everything is treated as a folder except the entries in:

    excludes_list = [
        # version_attributes
        "lio_version", "version",
        # discovery_auth_attributes
        "discovery_auth",
        # cpus_allowed_list_attributes
        "cpus_allowed_list",
    ]

    where cpus_allowed_list is excluded, so it shouldn't try to access it as a folder. The reason is that, in the targetcli you are using, this cpus_allowed_list is not in the excludes_list.

     

    By checking the latest commit for the fabric module

    https://github.com/open-iscsi/rtslib-fb/commit/8d2543c4da62e962661011fea5b19252b9660822

    I see this in the commit message:

    handle target kernel module new attribute cpus_allowed_list

    target has been added cpus_allowed_list attribute in sysfs. Therefore, the rtslib should handle the new attribute: 1. add cpus_allowed_list item in target_names_excludes 2. add cpus_allowed_list feature in ISCSIFabricModule.

     

    So this might indicate that the targetcli shipped in Unraid 6.10.3 is outdated for systems that have cpus_allowed_list.

     

    Thank you

     

    Edit: targetcli 2.1.54 is already the newest version, and no newer package has been built. We need to rebuild targetcli against the latest rtslib-fb.
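
    As a quick sanity check (a minimal sketch, assuming the installed rtslib_fb package is importable, which the egg paths in the crash traceback suggest), you can ask the installed copy whether it already excludes the new attribute:

    # Hypothetical one-off check, not part of any plugin: prints True on an
    # rtslib-fb build that already excludes cpus_allowed_list during scans.
    from rtslib_fb.fabric import target_names_excludes

    print("cpus_allowed_list" in target_names_excludes)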

  3. 33 minutes ago, SimonF said:

    targetcli also crashes, so I need to make sure the name starts with an alpha character.

     

    /backstores/fileio> ls
    o- fileio ..................................................................................................... [Storage Objects: 6]
      o- 000test ..................................................... [/mnt/user/VM2s/VM2s/000test.img (1.0MiB) write-thru deactivated]
      | o- alua ....................................................................................................... [ALUA Groups: 1]
      |   o- default_tg_pt_gp ........................................................................... [ALUA state: Active/optimized]
      o- test ....................................................... [/mnt/user/VM2s/VM2s/testtest.img (5.0MiB) write-thru deactivated]
      | o- alua ....................................................................................................... [ALUA Groups: 1]
      |   o- default_tg_pt_gp ........................................................................... [ALUA state: Active/optimized]
      o- test2 ......................................................... [/mnt/user/VM2s/VM2s/test2.img (1.0MiB) write-thru deactivated]
      | o- alua ....................................................................................................... [ALUA Groups: 1]
      |   o- default_tg_pt_gp ........................................................................... [ALUA state: Active/optimized]
      o- test3 ......................................................... [/mnt/user/VM2s/VM2s/test3.img (1.0MiB) write-thru deactivated]
      | o- alua ....................................................................................................... [ALUA Groups: 1]
      |   o- default_tg_pt_gp ........................................................................... [ALUA state: Active/optimized]
      o- test4 ......................................................... [/mnt/user/VM2s/VM2s/test4.img (1.0MiB) write-thru deactivated]
      | o- alua ....................................................................................................... [ALUA Groups: 1]
      |   o- default_tg_pt_gp ........................................................................... [ALUA state: Active/optimized]
      o- test5 ......................................................... [/mnt/user/VM2s/VM2s/test5.img (1.0MiB) write-thru deactivated]
        o- alua ....................................................................................................... [ALUA Groups: 1]
          o- default_tg_pt_gp ........................................................................... [ALUA state: Active/optimized]
    /backstores/fileio> delete 000test
    Traceback (most recent call last):
      File "/usr/bin/targetcli", line 4, in <module>
        __import__('pkg_resources').run_script('targetcli-fb==2.1.54', 'targetcli')
      File "/usr/lib/python3.9/site-packages/pkg_resources/__init__.py", line 665, in run_script
        self.require(requires)[0].run_script(script_name, ns)
      File "/usr/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1470, in run_script
        exec(script_code, namespace, namespace)
      File "/usr/lib/python3.9/site-packages/targetcli_fb-2.1.54-py3.9.egg/EGG-INFO/scripts/targetcli", line 329, in <module>
      File "/usr/lib/python3.9/site-packages/targetcli_fb-2.1.54-py3.9.egg/EGG-INFO/scripts/targetcli", line 317, in main
      File "/usr/lib/python3.9/site-packages/configshell_fb-1.1.29-py3.9.egg/configshell_fb/shell.py", line 900, in run_interactive
        self._cli_loop()
      File "/usr/lib/python3.9/site-packages/configshell_fb-1.1.29-py3.9.egg/configshell_fb/shell.py", line 729, in _cli_loop
        self.run_cmdline(cmdline)
      File "/usr/lib/python3.9/site-packages/configshell_fb-1.1.29-py3.9.egg/configshell_fb/shell.py", line 843, in run_cmdline
        self._execute_command(path, command, pparams, kparams)
      File "/usr/lib/python3.9/site-packages/configshell_fb-1.1.29-py3.9.egg/configshell_fb/shell.py", line 818, in _execute_command
        result = target.execute_command(command, pparams, kparams)
      File "/usr/lib/python3.9/site-packages/configshell_fb-1.1.29-py3.9.egg/configshell_fb/node.py", line 1406, in execute_command
        return method(*pparams, **kparams)
      File "/usr/lib/python3.9/site-packages/targetcli_fb-2.1.54-py3.9.egg/targetcli/ui_backstore.py", line 309, in ui_command_delete
      File "/usr/lib/python3.9/site-packages/rtslib_fb-2.1.74-py3.9.egg/rtslib_fb/tcm.py", line 269, in delete
      File "/usr/lib/python3.9/site-packages/rtslib_fb-2.1.74-py3.9.egg/rtslib_fb/tcm.py", line 215, in _gen_attached_luns
    NotADirectoryError: [Errno 20] Not a directory: '/sys/kernel/config/target/iscsi/cpus_allowed_list'
    root@computenode:/usr/local/emhttp/plugins/usb_manager# 

     

    Mine works OK. In your case it says cpus_allowed_list is not a directory. Obviously, that is a file, not a directory, and I don't have this file under that directory.

     

    That must be something else, because I don't think this cpus_allowed_list file is related to the name of any fileio. Why does it try to access that file? Hopefully it's only causing issues in the frontend.

     

    [attached screenshot: 2022-07-24.png]
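
    For anyone comparing systems, a quick check (a minimal sketch; the path comes straight from the NotADirectoryError above) shows whether your kernel exposes the attribute that an older rtslib then mistakenly tries to scan as a directory:

    import os

    # Path copied verbatim from the NotADirectoryError in the traceback above.
    p = "/sys/kernel/config/target/iscsi/cpus_allowed_list"

    # On kernels that expose the attribute this is a regular file; an rtslib
    # without the exclude calls listdir() on it and crashes as shown.
    print(os.path.exists(p), os.path.isdir(p))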

  4. 14 minutes ago, SimonF said:

    @zxhaxdr It is working fine on all of my machines.

     

    You can edit this line (around line 170 in /usr/local/emhttp/plugins/unraid.iSCSI/include/ISCSI.php):

    echo '<input id="removeFileIO" disabled type="submit"  value="'._('Remove Fileio').'" onclick="removeFIO();" '.'>';

    and remove the disabled word to get it to work. If you run your browser in debug mode (F12), do you see any errors in the console?

    I have found the problem. In this handler:


    $("#ft1 input[type='checkbox']").change(function() {
      var matches = document.querySelectorAll("." + this.className);
      for (var i=0, len=matches.length|0; i<len; i=i+1|0) {
        matches[i].checked = this.checked ? true : false;
      }
      $("#removeFileIO").attr("disabled", false);
    });
     

    the querySelectorAll() call throws an exception:

    Uncaught DOMException: Failed to execute 'querySelectorAll' on 'Document': '.000test' is not a valid selector.
        at HTMLInputElement.<anonymous> (<anonymous>:3:26)

     

    See, the fileio name is 000test, and it seems the selector doesn't like digits at the front (a CSS class selector cannot start with an unescaped digit). Other name formats are fine, e.g., test or test_test.

     

    So I think we need to sanitize the class name, by adding a prefix or escaping it; for example:
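
    A sketch of one possible fix (not the plugin's actual code): the standard CSS.escape() function makes a class name like 000test valid inside a selector:

    $("#ft1 input[type='checkbox']").change(function() {
      // CSS.escape("000test") yields "\30 00test", a valid class selector,
      // so names that start with a digit no longer throw in querySelectorAll.
      var matches = document.querySelectorAll("." + CSS.escape(this.className));
      for (var i = 0; i < matches.length; i++) {
        matches[i].checked = this.checked;
      }
      $("#removeFileIO").attr("disabled", false);
    });

    Alternatively, document.getElementsByClassName(this.className) sidesteps selector parsing entirely.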

    Cheers

  5. On 7/23/2022 at 12:53 PM, SimonF said:

    The button enables as soon as you click a checkbox; it doesn't check any physical files/locations. This is processed entirely by the local browser. Maybe close your browser and try again?

    Thank you for the reply. It still doesn't work, even in incognito mode. Maybe the frontend isn't rendering properly. I removed the plugin and reinstalled it, but that doesn't help. At some point it broke, and it stays broken. I will look into it to get more info sometime.

  6. 11 minutes ago, SimonF said:

    Working OK on my machine, but I will continue to see if I can recreate it.

     


    From what I remember: at first it did work, but it didn't actually delete the .img under /mnt. So I manually removed the .img and created another fileio. Since then, the button has been permanently greyed out.

     

    BTW, I didn't create the fileio under /mnt/user but under a ZFS pool path like /mnt/**pool.

  7. I found a strange behaviour in ZFS Master. I have three pools that use lz4 as the compression algorithm. I changed one of them to gzip-9, and ZFS Master stopped showing dataset info for it. Only one row, which has SHOW DATASETS, SCRUB POOL, EXPORT POOL and CREATE DATASET, is showing. The other pools are displayed correctly. Also, I checked with the command line, and the pool with gzip is normal.

    Does anyone else have this issue?

  8. I found a strange behaviour. I have three pools that use lz4 as the compression algorithm. I changed one of them to gzip-9, and ZFS Master stopped showing dataset info. Only one row, which has SHOW DATASETS, SCRUB POOL, EXPORT POOL and CREATE DATASET, is showing. The other pools are displayed correctly. Also, I checked with the command line, and the pool with gzip is normal.

     

    I'm not sure if this is the correct place to ask, but I can't seem to find the support topic for ZFS Master.

  9. I just tried to pass through another USB controller, and it works flawlessly with 102400MB of RAM. The differences between these two controllers are:

    1. the working one is an on-board USB controller: ASMedia Technology Inc. ASM2142 USB 3.1 Host Controller.

    2. the bad one is a PCIe add-on USB controller: VIA Technologies, Inc. VL805 USB 3.0 Host Controller (rev 01).

    They belonged to the same IOMMU group before being broken apart by the multifunction override.

  10. 39 minutes ago, testdasi said:

    Your motherboard only supports up to 64GB RAM.

    I think only the X570 chipset supports 128GB RAM.

     

    If you run your RAM at 2133MHz or 1866MHz, you may get away with 128GB on X370, but it's unlikely.

    Hi, thanks. The mobo says the maximum is 64GB, but with 4 x 32GB dual-rank RAM sticks it's actually working. I think there are cases where people have tried X370 with 128GB of RAM, and it even works on first-gen Ryzen 1800 chips. I mean, the host machine is completely fine, and I can also run multiple VM instances that together use 128GB of RAM, all fine. However, when I assign around 64GB to a single VM, the USB controller stops working. The RAM is running at 3200MHz now, which is the stock speed of Corsair Vengeance LPX, and it's very stable, but I can try 2133MHz later to see if it changes anything.

    Also, I will see what happens if I exclude that particular USB controller (just for testing purposes; I need that USB controller passed through anyway). Thank you very much.

  11. Here are my specs:

     

    Model: Custom

    M/B: MSI X370 GAMING PLUS (MS-7A33) Version 3.0

    BIOS: American Megatrends Inc. Version 5.JQ. Dated: 11/29/2019

    CPU: AMD Ryzen 7 3700X 8-Core @ 4200 MHz

    HVM: Enabled

    IOMMU: Enabled

    Cache: 512 KiB, 4096 KiB, 32768 KiB

    Memory: 128 GiB DDR4 (max. installable capacity 128 GiB)

    Network: eth0: 1000 Mbps, full duplex, mtu 1500

    Kernel: Linux 4.19.98-Unraid x86_64

    OpenSSL: 1.1.1d

  12. I'd been using the VM normally without any issue until yesterday, when I increased the RAM for the VM to 65024MB (and above). With the larger RAM allocation, the USB controller stopped working, and my Ubuntu VM threw errors like

    "device descriptor read *** error - 110", and "xhci host not responding **" etc.

    When this happened, I reduced the RAM, but the VM always froze at the BIOS screen, and I had to reboot the host machine to make the VM work again. I tried a Windows VM, but had the same problem.

    I tried searching for an answer, but no luck.

    Could someone help please? Thanks.
