localh0rst

Members · 15 posts

Everything posted by localh0rst

  1. Hi All, I hope my question makes sense, but I can't get my head wrapped around it: I am moving files with unbalance from one drive of the array to another: disk3 -> disk2. So far so good. However, this is the R/W activity Unraid shows. To my understanding, the only activity should be: READ from disk 3, WRITE to disk 2 and WRITE on the two parity drives. Did I miss something? I recently removed a failed disk (disk 4, it was emulated by parity), made a new configuration, left the disk 4 slot empty (I read that you should not change the order of the drives when dual parity is present) and rebuilt the parity from scratch, as described in the wiki. No parity errors. Now, why does disk 2 show READ activity? And shouldn't the parity drives show only WRITE activity? Is there still something emulated? Thanks for your help, localh0rst
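      For context on why the destination disk and the parity drives show reads at all: with Unraid's default read/modify/write method, updating a block on disk 2 first reads the old data block from disk 2 and the old parity block, then writes both back. A simplified formula for single parity (the second parity uses a different calculation but follows the same read-before-write pattern):

        P_new = P_old XOR D_old XOR D_new

      So during such a move you would expect reads on disk 3 (source), reads and writes on disk 2, and reads and writes on both parity drives.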
  2. Hi all, since I upgraded to the 6.12 branch, I get random freezes with Unraid. Sometimes the server does not even respond to a ping, sometimes the GUI just doesn't show any information. I attached the syslog export from after the reboot, when the server had become unresponsive. To be honest, I'm not sure what to look for, so I hoped you experts could have a look! Things I want to mention, although I don't know if they are in any way responsible for the freezes: I hooked up an AURGA Viewer (in response to the freezes) and applied a task from spaceinvader to minimize the power consumption of NVIDIA cards (there is an RTX 3070 for Plex encoding). Edit: My hardware: Asus H97 Plus, Intel i7-4790, Zotac RTX 3070, 24GB RAM, 7x 8TB, 3x SSD (128, 250, 250), 1x 3.2TB HPE NVMe (SAS to PCIe). I appreciate your help! Thanks, localh0rst localserver-diagnostics-20230819-1545.zip
  3. This is not a "dedupe" app. It searches for duplicate files and deletes them.
  4. Hi All, I'm in the situation where I tried the plugin - it worked very well (thanks for that!) - but I wanted to go back and pass the GPU (RTX 3070) through to a VM. Unfortunately, I don't see any PCIe devices available for passthrough - I am very sure the GPU showed up before. Is this something the driver might have caused, or do I have to look somewhere else? I tried appending vfio-pci.ids=10de:2484,10de:228b to syslinux.cfg, but no luck so far. Any help would be appreciated! Thanks! localserver-diagnostics-20220201-1925.zip
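      A quick sanity check for the binding (a minimal sketch, using the device IDs from the post above - adjust them if your GPU or its audio function differ) is to ask which kernel driver currently owns each function:

        # show numeric IDs and the kernel driver in use for the GPU and its HDMI audio device
        lspci -nnk -d 10de:2484
        lspci -nnk -d 10de:228b
        # for passthrough you want "Kernel driver in use: vfio-pci";
        # if it still says "nvidia", the installed NVIDIA driver has claimed the card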
  5. Okay, thanks. I'll try to free up the cache one Docker image at a time and find the image that uses more space than is displayed in MC / Krusader. Thanks for your help!
  6. Sure, something is using it. What I don't understand: even if I move "everything" - in this case the folder "appdata", which is the only folder on it - where else can I look? Some root folder / mount point where data is still sitting on the btrfs? I wanted to avoid moving all the Docker appdata. I appreciate your help. Edit: This is the only folder in the pool, with 71GB
  7. Hi JorgeB, thanks for your reply.

        Overall:
            Device size:                 231.03GiB
            Device allocated:            223.58GiB
            Device unallocated:            7.45GiB
            Device missing:                  0.00B
            Used:                        209.07GiB
            Free (estimated):              9.67GiB   (min: 9.67GiB)
            Free (statfs, df):             5.94GiB
            Data ratio:                       2.00
            Metadata ratio:                   2.00
            Global reserve:              512.00MiB   (used: 0.00B)
            Multiple profiles:                  no

                      Data       Metadata  System
        Id Path       RAID1      RAID1     RAID1     Unallocated
        -- ---------  ---------  --------  --------  -----------
         1 /dev/sde1  108.76GiB   3.00GiB  32.00MiB      1.02MiB
         2 /dev/sdd1  108.76GiB   3.00GiB  32.00MiB      7.45GiB
        -- ---------  ---------  --------  --------  -----------
           Total      108.76GiB   3.00GiB  32.00MiB      7.45GiB
           Used       102.81GiB   1.72GiB  16.00KiB

     So here it's clear. What I don't understand: the only share on it is this folder with 71GB in it. Is there a way I can check where the "missing" 40GiB are located on this btrfs RAID1? Unraid says 113GB used.
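      A hedged sketch for tracking down space that no file manager shows - /mnt/cache is assumed as the pool's mount point here (substitute your pool's actual name), and remember that loopback images such as docker.img live outside the user shares:

        # per-directory totals directly on the pool mount, bypassing the user-share view
        du -sh /mnt/cache/* 2>/dev/null

        # btrfs-aware accounting, including data referenced by snapshots/subvolumes
        btrfs filesystem du -s /mnt/cache/*

        # list subvolumes - a stray subvolume or snapshot can hold data no folder shows
        btrfs subvolume list /mnt/cache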
  8. At least the free / used space shown in Unraid and in Krusader can't both be correct (71GB vs 113GB) - I've searched every corner of Unraid and I can't explain why this pool should have 113GB on it - it only has 71GB, in exactly one folder - same situation as @Swagnoor19, I guess. Edit: Maybe this helps - there must be some data on it, but I can't find it.
  9. Hi, I don't want to hijack this thread, but my problem seems exactly the same - what do you mean by "balancing to the intended one"? In my case there is only one share in the pool, it says ~6GB free, and only 71GB of data: I can't find any other data on this pool - some ~40GB are missing. Krusader: localserver-diagnostics-20211222-1725.zip Thanks a lot!
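      For reference, "balancing" in this context means a btrfs balance, which rewrites partially filled allocation chunks and can return allocated-but-unused space to the pool. A minimal sketch, assuming the pool is mounted at /mnt/cache:

        # show how much space is allocated versus actually used
        btrfs filesystem usage /mnt/cache

        # rewrite data chunks that are less than 75% full, compacting them
        btrfs balance start -dusage=75 /mnt/cache

        # check progress of a running balance
        btrfs balance status /mnt/cache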
  10. I'm not, but I wish I could contribute something to this really great project! But good to know that you're on it! I just checked, and there are two entries with plots_dir. I added this in a previous release and probably didn't notice that there was already a variable in the config. It states that I have 650 plots (which is correct), so it is just capped at 150 on this page. Maybe you could link to a separate page, or only display all plots after clicking a button, to avoid listing them all right from the start. I really appreciate all your help! Keep up the good work! If there's a way to virtually buy you a beer, let me know
  11. Thanks! This helped, it's working again! Quick note: the "Farming" section only shows the first plot directory; dirs manually added via the command line don't show up anymore. This worked in previous releases.
  12. Thanks for the suggestion; unfortunately it didn't help. It seems something is wrong with the plotman.yaml?
  13. Hi @guy.davis, I just updated to the latest test build, but unfortunately the web server doesn't respond:

        Internal Server Error
        The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

      The Docker log shows this (don't know why the first characters are cut off in the web UI):

        ing Chia...
        ey at path: /root/.chia/mnemonic.txt
        ot directory "/plots".
        ot directory "/plots02".
        ot directory "/plots03".
        ot directory "/plots04".
        ot directory "/plots05".
        ot directory "/plots06".
        ot started yet daemon
        vester: started
        mer: started
        l_node: started
        let: started
        ing Plotman...
        k (most recent call last):
        achinaris/scripts/plotman_migrate.py", line 67, in <module>
        yaml.load(pathlib.Path(PLOTMAN_CONFIG))
        hia-blockchain/venv/lib/python3.9/site-packages/ruamel/yaml/main.py", line 431, in load
        elf.load(fp)
        hia-blockchain/venv/lib/python3.9/site-packages/ruamel/yaml/main.py", line 434, in load
        onstructor.get_single_data()
        hia-blockchain/venv/lib/python3.9/site-packages/ruamel/yaml/constructor.py", line 122, in get_single_data
        elf.construct_document(node)
        hia-blockchain/venv/lib/python3.9/site-packages/ruamel/yaml/constructor.py", line 132, in construct_document
        my in generator:
        hia-blockchain/venv/lib/python3.9/site-packages/ruamel/yaml/constructor.py", line 1617, in construct_yaml_map
        struct_mapping(node, data, deep=True)
        hia-blockchain/venv/lib/python3.9/site-packages/ruamel/yaml/constructor.py", line 1501, in construct_mapping
        check_mapping_key(node, key_node, maptyp, key, value):
        hia-blockchain/venv/lib/python3.9/site-packages/ruamel/yaml/constructor.py", line 295, in check_mapping_key
        plicateKeyError(*args)
        aml.constructor.DuplicateKeyError: while constructing a mapping
        t/.chia/plotman/plotman.yaml", line 3, column 1
        plicate key "user_interface" with value "ordereddict([('use_stty_size', True)])" (original value: "ordereddict([('use_stty_size', False)])")
        t/.chia/plotman/plotman.yaml", line 19, column 1
        ess this check see:
        aml.readthedocs.io/en/latest/api.html#duplicate-keys
        e keys will become an error in future releases, and are errors
        lt when using the new API.
        ing Chiadog...
        Chiadog...
        Machinaris API server...
        Machinaris Web server...
        d startup. Browse to port 8926.
      I rolled back to :latest:

        Internal Server Error
        Traceback (most recent call last):
          File "/chia-blockchain/venv/lib/python3.8/site-packages/plotman/configuration.py", line 40, in get_validated_configs
            loaded = schema.load(config_objects)
          File "/chia-blockchain/venv/lib/python3.8/site-packages/marshmallow/schema.py", line 714, in load
            return self._do_load(
          File "/chia-blockchain/venv/lib/python3.8/site-packages/marshmallow/schema.py", line 896, in _do_load
            raise exc
        marshmallow.exceptions.ValidationError: {'directories': {'log': ['Missing data for required field.']}, 'logging': ['Unknown field.'], 'commands': ['Unknown field.'], 'version': ['Unknown field.']}

        The above exception was the direct cause of the following exception:

        Traceback (most recent call last):
          File "/chia-blockchain/venv/bin/plotman", line 8, in <module>
            sys.exit(main())
          File "/chia-blockchain/venv/lib/python3.8/site-packages/plotman/plotman.py", line 137, in main
            cfg = configuration.get_validated_configs(config_text, config_path)
          File "/chia-blockchain/venv/lib/python3.8/site-packages/plotman/configuration.py", line 42, in get_validated_configs
            raise ConfigurationException(
        plotman.configuration.ConfigurationException: Config file at: '/root/.chia/plotman/plotman.yaml' is malformed

      Rolled back again to :test - still no difference, the first internal server error shows again. Your help is highly appreciated. Thanks!
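      The key part of the :test log above is the ruamel.yaml DuplicateKeyError: plotman.yaml apparently contains the user_interface section twice (once at line 3, once at line 19). A hedged way to confirm and fix this from the Unraid console - the path below assumes the Machinaris appdata share maps to /root/.chia inside the container, so adjust it to your setup:

        # locate both occurrences of the duplicated section
        grep -n "user_interface" /mnt/user/appdata/machinaris/plotman/plotman.yaml
        # keep one of the two blocks (including its use_stty_size line), delete the other,
        # then restart the container so the Plotman migration script can parse the file again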
  14. Hey, first of all I want to thank you for doing this hard work! Keep it up! One question though: I tried to copy the data from an existing installation, as you described:

        cp -r /mnt/user/appdata/chia /mnt/user/appdata/machinaris

      But then you end up with this:

        root@localserver:/mnt/user/appdata/machinaris# ls
        chia/  machinaris/  mainnet/  plotman/

      In the chia directory there is all the information, including mainnet. Shouldn't the content of chia be in the root folder of machinaris? With my very limited knowledge of Linux, shouldn't it look like this?

        cp -r /mnt/user/appdata/chia/. /mnt/user/appdata/machinaris

      Edit: yes, it worked - the '.' was missing (see the small demo below). Looks good so far! More questions to come, for sure. One thing would make this perfect: integration with Chiadog! It already has a Docker integration as well. My knowledge is super limited and I'm not confident enough to mess with a Docker image via the CLI on Unraid. But if you say it's no problem, I'll give it a shot with some guidance - there is even a howto for when chia itself runs in a Docker container: https://github.com/ajacobson/chiadog-docker

        docker volume create chia-home
        docker run --name <container-name> -d -v chia-home:/root/.chia:ro artjacobson/chiadog-docker:latest
        docker run --name <container-name> -d -v chia-home:/root/.chia ghcr.io/chia-network/chia:latest <additional args for keys and plots>
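      For anyone else hitting this: the trailing "/." makes cp copy the contents of the source directory instead of the directory itself. A small, safe-to-run demo with hypothetical paths under /tmp:

        mkdir -p /tmp/src/sub /tmp/dst1 /tmp/dst2
        touch /tmp/src/sub/file

        cp -r /tmp/src /tmp/dst1     # result: /tmp/dst1/src/sub/file (the directory itself is copied)
        cp -r /tmp/src/. /tmp/dst2   # result: /tmp/dst2/sub/file    (only the contents are copied)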