Ymetro

Everything posted by Ymetro

  1. 😅 Ah, now I see why you needed this: I also needed my own notes for the latest Docker update to version 28.0.2.5, so let me show the steps:
     - Just click on the Nextcloud icon on the Docker page.
     - Click "Console" in that menu.
     - First, before you use a command, learn what it can do and what options it has with: php ./occ --help
     - Then use: php ./occ status to check what is going on. Probably the "maintenance:" and "needsDbUpgrade:" lines have the status "true".
     - Then do: php ./occ app:update --all and some package will be mentioned for upgrading - it was the Deck app in my case.
     - After that do: php ./occ upgrade - just like an "apt update" and "apt upgrade" in a Linux CLI.
     - After that, turn off maintenance mode with: php ./occ maintenance:mode --off The prompt will mention that maintenance mode is disabled.
     - You can check the status again with: php ./occ status
     All should be fine, and the web interface should work as advertised. The whole sequence is condensed below.
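     For reference, the sequence condensed - a sketch assuming the official Nextcloud image, where occ lives in /var/www/html (the container console drops you there):
         php ./occ --help                  # see what occ can do and which options it has
         php ./occ status                  # check "maintenance:" and "needsDbUpgrade:"
         php ./occ app:update --all        # update all installed apps
         php ./occ upgrade                 # run the core/database upgrade
         php ./occ maintenance:mode --off  # turn maintenance mode off
         php ./occ status                  # confirm everything is back to normal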
  2. Thanks for your answer @itimpi! Learning all the time. It might be wise to use nohup or screen, because then the long-running process won't get stopped if the PC you are logged in from crashes or disconnects. I stopped the dd process with Ctrl + C and restarted it inside a screen session just to be sure - roughly as sketched below.
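     A minimal sketch of that, where /dev/md5 is only a placeholder for the device being cleared (double-check the device name before running dd!):
         # Start a named screen session so the clear survives a lost connection
         screen -S clear-disk
         # Inside the session: zero the disk (device name is a placeholder - verify it first!)
         dd if=/dev/zero of=/dev/md5 bs=1M status=progress
         # Detach with Ctrl+A then D; reattach later with: screen -r clear-disk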
  3. I am also busy clearing a disk (Disk 5 in the array) for removal without having to rebuild parity. Disk 5 is being cleared by the dd command, but does it matter that parity is being updated in the meantime? Does it affect clearing speed, or strain the system unnecessarily?
  4. I probably should have mentioned a "Docker image upgrade of Nextcloud" and a "Nextcloud app update" in the OP title for better clarity... but it seems one cannot edit an OP title, AFAIK. I am also not quite sure if this topic is in the right forum folder. "8 months later...": Hey, it turns out I could edit the title after all! Nice.
  5. After the upgrades/updates mentioned in the OP title, Nextcloud was stuck in maintenance mode. Letting it do its thing for a whole night did not resolve the problem. I need to mention that I did not try a reboot of the server, because I did not want to interrupt the services to my network. My server runs only the official Docker images of Nextcloud, the OnlyOffice Document Server and MariaDB. I also wanted to help others and my future self, because of earlier frustrations with Nextcloud Docker updates breaking on my Unraid server, while not being able to make heads or tails of it then or now, while it fails at its main instant-use convenience at home.
     I entered the Nextcloud Docker with: docker exec -it Nextcloud /bin/bash
     and it gave the prompt: "I have no name!@9ece3e0eb99f:/var/www/html$"
     Note that we are in the /var/www/html directory where the occ program is.
     I tried to turn maintenance mode off with: php ./occ maintenance:mode --off
     and it gave:
     "Nextcloud or one of the apps require upgrade - only a limited number of commands are available
     You may use your browser or the occ upgrade command to do the upgrade"
     I tried the same line again and it added this line below the message mentioned above:
     "Maintenance mode disabled"
     So I tried to upgrade all apps with: php ./occ app:update --all
     Still the same "Nextcloud or one of the apps require upgrade - only a limited number of commands are available / You may use your browser or the occ upgrade command to do the upgrade" response.
     Then I used: php ./occ upgrade
     and it mentioned:
     "Nextcloud or one of the apps require upgrade - only a limited number of commands are available
     You may use your browser or the occ upgrade command to do the upgrade
     Setting log level to debug
     Turned on maintenance mode
     Updating database schema
     Updated database
     Updating <recognize> ...
     Repair error: RecursiveDirectoryIterator::__construct(/var/www/html/custom_apps/recognize/node_modules/ffmpeg-static/): Failed to open directory: No such file or directory
     Updated <recognize> to 4.0.1
     Starting code integrity check...
     Finished code integrity check
     Update successful
     Turned off maintenance mode
     Resetting log level"
     I checked maintenance mode with: php ./occ maintenance:mode
     and it mentioned: "Maintenance mode is currently disabled"
     So, I want to help others who hit the "Maintenance Mode" wall after an update/upgrade. I am not entirely sure which of the above fixed it, but it now runs again. It seems something did go wrong with the Recognize app: "Repair error: RecursiveDirectoryIterator::__construct(/var/www/html/custom_apps/recognize/node_modules/ffmpeg-static/): Failed to open"
     That directory does not appear to exist when listing the files and directories in the bash CLI. Curious.
     EDIT: Update: Made an issue request at its GitHub page. I disabled Docker from within Settings and re-enabled it again just to make sure maintenance mode remains off after a restart, and all is well. Again without compromising any other services to the network, like network shares.
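     For completeness, the same commands can also be run from the Unraid host shell without opening an interactive container shell - a sketch, assuming the container is named Nextcloud; the official image documents running occ as www-data, but on Unraid templates the web user may differ, so adjust as needed:
         docker exec -u www-data Nextcloud php /var/www/html/occ status
         docker exec -u www-data Nextcloud php /var/www/html/occ upgrade
         docker exec -u www-data Nextcloud php /var/www/html/occ maintenance:mode --off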
  6. Just found out that commenting out the following lines in sensors.conf made everything work:
     # chip "kraken2-hid-3-6"
     # label "fan1" "Array Fan"
     # chip "kraken2-hid-3-6"
     # label "fan2" "Array Fan"
     Sheesh. And it is still showing the Kraken fans?? Now, to make this persistent after a boot, it seems I have to edit the /boot/config/plugins/dynamix.system.temp/sensors.conf file - see the sketch below. Alas, I cannot reboot the system to test this, as it is currently moving a lot of data from an emulated drive to the others, and that takes days. All the sensors are visible now after running the probes and "sensors -s" commands: temps and fans. Not sure if it is because of an Unraid update, but it is working without the "acpi_enforce_resources=lax" boot option.
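     What I plan to do to make it persistent - a sketch, assuming the Dynamix System Temp plugin restores /etc/sensors.d/sensors.conf from its copy on the flash drive at boot (that is my reading of it, not verified yet):
         # Already done on the live config (takes effect immediately):
         nano /etc/sensors.d/sensors.conf     # comment out the two kraken2-hid-3-6 chip/label pairs
         # Mirror the same change in the flash copy so it survives a reboot:
         nano /boot/config/plugins/dynamix.system.temp/sensors.conf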
  7. Thanks for the excellent step-by-step guide, @PeterDB! 👍 My guess is that my setup worked without the workaround in 6.11.5 on my ASRock Z97 Extreme4: after a fresh boot the first scan detects all sensors, but they disappear after selecting or rescanning and do not come back after further unloads and rescans - only after another reboot of the server. So I applied the workaround, but it did not fix it in my case. It seems my problem has something to do with the Kraken X42 AIO CPU cooler I have installed, as the sensors command shows:
     Linux 5.19.17-Unraid.
     root@UPC:~# sensors
     Error: File /etc/sensors.d/sensors.conf, line 25: Undeclared bus id referenced
     Error: File /etc/sensors.d/sensors.conf, line 27: Undeclared bus id referenced
     sensors_init: Can't parse bus name
     and the content of /etc/sensors.d/sensors.conf shows:
     root@UPC:~# cat /etc/sensors.d/sensors.conf
     # sensors
     chip "nct6791-isa-0290"
     ignore "temp8"
     chip "nct6791-isa-0290"
     ignore "temp9"
     chip "nct6791-isa-0290"
     ignore "temp10"
     chip "nct6791-isa-0290"
     ignore "fan1"
     chip "nct6791-isa-0290"
     ignore "fan5"
     chip "nct6791-isa-0290"
     ignore "fan6"
     chip "coretemp-isa-0000"
     label "temp1" "CPU Temp"
     chip "nct6791-isa-0290"
     label "temp1" "MB Temp"
     chip "nct6791-isa-0290"
     label "fan2" "Array Fan"
     chip "nct6791-isa-0290"
     label "fan3" "Array Fan"
     chip "nct6791-isa-0290"
     label "fan4" "Array Fan"
     chip "kraken2-hid-3-6"
     label "fan1" "Array Fan"
     chip "kraken2-hid-3-6"
     label "fan2" "Array Fan"
     I don't know what this means or how it works, but any idea how to fix these "declarations" of "kraken2-hid-3-6", or where I can find a solution?
  8. You can see the process in the Main tab when you click on the first hard drive in the SSD pool, at the "Balance Status" part. Not sure if this applies to older Unraid versions; back then I wasn't able to see it other than by watching the disks in the cache pool being active on the Main page, and I just checked the day after whether the command line had returned to the prompt. I have seen another thread about the BTRFS replace commands, but Bungy's step-by-step plan seemed more easily doable. Cheers!
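     If you'd rather check from the command line, a couple of read-only commands that should show comparable information - a sketch, assuming the pool is mounted at /mnt/cache:
         # Per-device sizes and usage of the pool; usage shifting between devices indicates progress
         btrfs filesystem show /mnt/cache
         # If a balance is running, report how far along it is
         btrfs balance status /mnt/cache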
  9. The command in step 6 is not complete: it should state btrfs device delete /dev/sdX# /mnt/cache, or else it gives an error. sdX# needs the partition number on that failing drive; usually it is 1. So for example: if you want to remove drive sdf, then you should input sdf1 - spelled out below. Still seems to be relevant on Unraid 6.11.5 in almost-2023.
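     As a concrete example (the device name is hypothetical - substitute your own failing drive):
         # Remove partition 1 of the failing drive sdf from the pool mounted at /mnt/cache
         btrfs device delete /dev/sdf1 /mnt/cache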
  10. Because of a large library of Steam games installed on the Unraid server at home, and the problems between NFSv3 not supporting UTF-8 and the UTF-16 translation that the Windows NFS client (to my understanding) does, there are Asian characters in folder names that prevent those folders from being accessed (read) by the Windows client, so Steam fails to run the games directly from the NFS shares. I have been applying what is said in the forum thread below, but the users there are talking about a Linux client. No matter whether I change the /etc/nfsmount.conf (changing "Defaultvers=" to 4) and /etc/exports (adding "V4:" in front of every line) files, and make them survive reboots by copying them over from the flash drive before the emhttp line, "nfsstat -s" still answers with NFSv3 statistics - none for v4. I would like to know: is Unraid indeed running NFSv4+ (v4 was mentioned in the 6.10 release notes)? Do I need to add those lines after the "emhttp" line instead, or how do I force it to use v4? And does Windows 10 Pro even support NFSv4+? Because if it doesn't, I'll have to find support somewhere else. Update: It seems Windows NFS clients only support NFS up to v3. I might have to build my own Windows NFS 4.1 client; I have never done this. Got some instructions from https://github.com/kofemann/ms-nfs41-client (9 years old? Oh boy...). Now to build it with the MS Visual Studio 2022 SDK and WDK or whatever it's called... or drop NFS for SMB/CIFS...
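      For reference, roughly the edits I have been making - a sketch with placeholder share names, options and paths (whether these edits are even the right way to force v4 is exactly my question):
          # /etc/nfsmount.conf - ask for NFSv4 by default
          Defaultvers=4
          # /etc/exports - prefix each existing export line with "V4:"; the share name and
          # options here are placeholders, keep whatever your line already has
          "V4:/mnt/user/games" -async,no_subtree_check *(rw)
          # /boot/config/go - copy the edited files back from flash before the emhttp line
          # so they survive a reboot (the /boot/config/custom path is my own choice)
          cp /boot/config/custom/nfsmount.conf /etc/nfsmount.conf
          cp /boot/config/custom/exports /etc/exports
          # Then check which NFS versions are actually serving requests
          nfsstat -s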
  11. Sigh, this will never be read, will it? I'm not even sure the report I sent through Unraid's internal feedback system will be picked up (no confirmation whatsoever). I get it - it's not a full-time job for the developers, but still... So I got back 2x 16TB Seagate IronWolf Pros for my RMA'd 2x 14TB Seagate IronWolfs (🤔 can't remember if those were Pros or not) - a big thank you to Seagate! 👍 (and a FU to Amazon 😡). I decided to copy the disk that isn't recognized by 6.9.2 to the new 16TB disk (precleared first), and to copy it back after upgrading to 6.9.2 and formatting it. So the 1st replaces my 2x 6TB and the 2nd one is to secure the array with parity. Now I am in the process of copying the almost 14TB of data to the 16TB drive via Midnight Commander in a detachable screen console, as rsync with the '--info=progress2' option didn't finish and broke in the terminal (too much data?). This takes days...
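      For next time, a sketch of how the same copy could be run so a dropped terminal doesn't kill it (source and destination paths are placeholders):
          # Named screen session so the copy survives a dropped SSH connection
          screen -S big-copy
          # Inside the session: archive copy with a single overall progress line
          rsync -avh --info=progress2 /mnt/disk3/ /mnt/disks/new16tb/
          # Detach with Ctrl+A then D; reattach later with: screen -r big-copy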
  12. Got the same problem, so I landed here. See:
      Only now, when I upgrade, the BTRFS cache pool is broken too; in detail, from the first cache SSD the file system status shows:
      Somehow on 6.8.3, in the Docker tab, I cannot update the Dockers anymore: ** placeholder until I can revert to 6.8.3 and make a screenshot **
      Hexdump on Disk1:
      hexdump -C -n 512 /dev/sdc
      00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
      *
      000001c0 02 00 00 ff ff ff 01 00 00 00 ff ff ff ff 00 00 |................|
      000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
      *
      000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.|
      00000200
      Hexdump on 1st cache drive:
      hexdump -C -n 512 /dev/sdd
      00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
      *
      000001c0 00 00 83 00 00 00 40 00 00 00 70 6d 70 74 00 00 |......@...pmpt..|
      000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
      *
      000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.|
      00000200
      Hexdump on 2nd cache drive:
      hexdump -C -n 512 /dev/sdb
      00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
      *
      000001c0 00 00 83 00 00 00 40 00 00 00 70 6d 70 74 00 00 |......@...pmpt..|
      000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
      *
      000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.|
      00000200
      Anonymized system report added. In the quoted post it is mentioned that this will be fixed; will this be done in 6.9.4? I am a bit nervous about not being able to update and thus missing the latest security patches and such. I'll have to look for a few reasonably working hard disks (most of them were in this server and have some errors) to back up Disk1 to, format it afterwards, and restore the backups to it. Still, this feels like a big hurdle for upgrading to the latest version. I don't mind reformatting the BTRFS cache pool, as I have done so more than once because my system seems rather sensitive. I am still saving for an upgrade to a new AMD system (a Ryzen 7 3700X on an ASRock Fatal1ty X370 Pro Gaming with ECC RAM: a Corsair Vengeance RGB Pro Black 32GB DDR4-3200 CL16 ECC quad kit - I don't need RGB as I like to keep the room dark, but it was the only kit I could find with ECC), because I'm thinking about running Plex and a few VMs in addition to the Dockers it is running at the moment. And my current build (which I swapped in from another system, because the PCIe add-in card seemed unreliable) only has SATA300 ports, so it can't run the SSDs at full speed. -- this might be for another thread... is there one?
      pcus-diagnostics-20210518-0901.zip
  13. +1 / bump. Yes it would 😉 I am using: expr [sector size value] \* [start sector value] in Windows Terminal when SSH-ing into my Unraid server. I double-click on the values, right-click to quickly copy, and right-click again to quickly paste at the cursor spot, so with all those quick clicks I can calculate and copy the result for the offset part of the ``mount`` command line. It's the quickest way I know - and I sound like a duck right now 🤣 It would be nice to have a GUI to mount an img file and select the disks in the vdisk img without doing the procedure above.
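      A worked example of that calculation, assuming a 512-byte sector size and a start sector of 2048 (values and paths are placeholders - read the real ones from fdisk -l on the vdisk image):
          # Offset in bytes = sector size * start sector
          expr 512 \* 2048
          # -> 1048576
          # Mount the partition inside the vdisk image read-only at that offset
          mkdir -p /mnt/tmpdisk
          mount -o ro,loop,offset=1048576 /mnt/user/domains/MyVM/vdisk1.img /mnt/tmpdisk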
  14. I submitted the bug report, including the findings above. I will await their recommendations before attempting any upgrade to 6.9. Thank you @JorgeB and @trurl for helping me out! 🙏👍 I'll report back when this is fixed.
  15. It gives for Disk1:
      00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
      *
      000001c0 02 00 00 ff ff ff 01 00 00 00 ff ff ff ff 00 00 |................|
      000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
      *
      000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.|
      00000200
      What does this say about this disk?
  16. I did what you said and went ahead making a backup of the entire USB drive, just in case. Now my server is back up and running 6.8.3 AND it shows Disk1 again!! Awesome! 👍🥳 I didn't know changing versions was this easy. 😎 Oh: I only had to reactivate the cache pool - good thing I made a screenshot earlier of the right order. Should I stay on 6.8.3 or try again with 6.9.2? 🤔
  17. I have already updated to 6.9.1, and according to the Tools section I can only roll back to 6.9.0. Is there a way to roll back any further, to 6.8?
  18. Disk check gave this:
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
              - scan filesystem freespace and inode maps...
              - found root inode chunk
      Phase 3 - for each AG...
              - scan (but don't clear) agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              ...
              - agno = 12
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              ...
              - agno = 12
      No modify flag set, skipping phase 5
      Phase 6 - check inode connectivity...
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify link counts...
      No modify flag set, skipping filesystem flush and exiting.
  19. Since I updated/upgraded the Unraid server to version 6.9.0, Disk1 of the array says "Unmountable: Unsupported partition layout" and the partition format says "factory-erased", but that was not the case: it was formatted, running fine and filled to the brim with data when it was running under Unraid 6.8! I did a maintenance mode check on it as recommended on this forum, but it didn't seem to fix the problem. 😢 Sadly I have no parity-secured array to recover the disk with, because two of my disks have errors and were disabled by the system, and I have yet to RMA them. One of them was the parity drive. 🍀 Luckily I still have warranty on both those drives. I think both disks came from Amazon, and I am not so confident that Amazon handles hard drives well enough. 🤔 🙏 Hopefully only the Steam game library was on there, which can be redownloaded. I would like to know if there is anything that can be done to fix this before I throw in the towel and format the drive for the array. Anyone got any idea if this can be fixed?
  20. Thanks for this guide @Nxt97! I had the same problem with 2 SATA SSDs as a BTRFS cache pool. For Unraid 6.8.3 I recommend the Unassigned Devices plugin to remove the old formatted partitions on the cache disk(s). So you:
      - Do all 3 steps Nxt97's post mentions (disable the VM Manager and Docker, and start the rsync backup command).
      - Stop the array and deselect the SSD(s) under Main -> Cache; it/they will then appear in the Unassigned Devices list below.
      - Click the plus sign [+] on the disk(s) in that UD list and it will fold out, showing the partition on it, which you can remove by pressing the Remove button. Confirm removal of the partition by typing "Yes" and pressing Confirm.
      - Reassign the disk(s) to the cache pool and start the array. (You might need to tick the "Yes" checkbox on the right side.)
      - Format the cache disk/pool by clicking the Format button that becomes available after the array has started.
      - Restore with the rsync command mentioned in Nxt97's post after the formatting.
      A sketch of the backup and restore step follows below.
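      A hedged sketch of that backup and restore step (I don't remember Nxt97's exact command, so the flags and paths here are placeholders - the pool is assumed at /mnt/cache and the backup lands on an array-only share):
          # Back up the cache pool to the array before wiping it
          rsync -avh /mnt/cache/ /mnt/user0/cachebackup/
          # ...remove the partitions, reassign and format the pool as described above...
          # Restore the backup onto the freshly formatted pool
          rsync -avh /mnt/user0/cachebackup/ /mnt/cache/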
  21. Thank you for the info, I appreciate it. I even got the 10GbE MTU set to 9000 on the server and main PC NICs; it seems to give faster transfer speeds. The reformat of the cache pool has been done and the backup has been restored to it, and it seems to run fine. Now I am a bit worried about my parity disk, as its Reported Uncorrect count went up to 17. It seems I either got a bad batch of 2 Seagate IronWolf 14TB (non-Pro) drives and/or my PSU is not up to them; the latter I read somewhere on this forum. I am curious though: 2 of the 3 non-Pro IronWolfs came from Amazon. I hope they are kind to HDDs... Maybe I should put this in a new topic?
  22. Maintenance mode didn't do much, because the disks aren't mounted. Just restarted the server with Docker and Virtual Machines disabled and started another run of copying files with MC to the backup folder.
  23. Thanks for the reply. I stopped all the VMs and Dockers, and also activated the mover. The latter didn't seem to do much. I just copied the /mnt/cache/ folder contents to my backup array share /mnt/user0/backup/cache backup/cache/ in Midnight Commander. There were some file errors. I will try again when I put the array in maintenance mode. Before that works and I format the SSDs again: is BTRFS still the most reliable option for an SSD cache pool nowadays?
  24. I am not sure if the topic title matches the content; I am not so good at that. I am open to suggestions.