hawihoney

Members
  • Posts: 3497
  • Days Won: 7

Everything posted by hawihoney

  1. Please have a look at the two screenshots. As you can see, there's a device sdb with the partition AW11 shown on the Main tab. Two historical devices/partitions, U33 and R15, are visible as well. On the console you can see a partition ELEMENTAL on sdb. This is an old, already removed USB device. I can't umount that device. The AW11 USB stick is currently attached and shown on the Main tab, BUT it's missing in the mount list. ELEMENTAL is shown instead. Needless to say, if I navigate to AW11 on the Main tab, the folder/file list is empty. This is the only free USB port I'm using. There's no way to swap USB ports, because the rest are blocked with USB license sticks passed through to VMs. What's happening here? And is there a way to fix that without rebooting the server? Any help is highly appreciated. Diagnostics attached. tower-diagnostics-20201223-1107.zip
  2. I'm not quite sure whether this is an Unraid or an Unassigned Devices problem. As a Linux noob, it looks like an Unraid thing to me. Please have a look at the two screenshots. As you can see, there's a device sdb with the partition AW11 shown on the Main tab. Two historical devices/partitions, U33 and R15, are visible as well. On the console you can see a partition ELEMENTAL on sdb. This is an old, already removed USB device. I can't umount that device. The AW11 USB stick is currently attached and shown on the Main tab, BUT it's missing in the mount list. ELEMENTAL is shown instead. Needless to say, if I navigate to AW11 on the Main tab, the folder/file list is empty. This is the only free USB port I'm using. There's no way to swap USB ports, because the rest are blocked with USB license sticks passed through to VMs. What's happening here? And is there a way to fix that without rebooting the server? Any help is highly appreciated. Diagnostics attached. tower-diagnostics-20201223-1107.zip
  3. Thanks in advance. IMHO: Don't modify user content in an entry field automatically. It becomes a can of worms deciding and filtering what's unwanted and what's intended. If you need to, check for unwanted characters and simply pop up a small dialog, "Non-ASCII characters detected. Are you sure?", and let the user decide.
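     The check suggested above could be as simple as scanning for code points outside the 7-bit ASCII range. A minimal sketch (the function name is my own, not from any plugin):

```python
def has_non_ascii(text: str) -> bool:
    """Return True if the text contains any character outside 7-bit ASCII."""
    return any(ord(ch) > 127 for ch in text)

# A line with German umlauts would trigger the confirmation dialog:
print(has_non_ascii("python3 ExportData.py -l Hörbücher"))   # True
print(has_non_ascii("python3 ExportData.py -l Hoerbuecher")) # False
```

     The plugin would then show the dialog only when the function returns True, leaving the text itself untouched either way.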
  4. [BUG] German umlauts [äöüÄÖÜß] disappear during save. I have some scripts that require German umlauts (see above). I can put them into the big edit field, but they disappear after saving the user script. For example, within a user script I call a Python script to export data. I enter something like this (shortened):
         python3 [path]/ExportData.py -l Hörbücher
     After saving and reopening the script, this is what got stored:
         python3 [path]/ExportData.py -l Hrbcher
     Thanks.
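     A guess at the cause, not a confirmed diagnosis: the corruption reported above (ö and ü silently dropped rather than replaced) is exactly what a lossy ASCII round-trip produces somewhere in the save path:

```python
# The [path] placeholder from the bug report is omitted here for brevity.
line = "python3 ExportData.py -l Hörbücher"

# Encoding to ASCII with errors="ignore" silently drops every non-ASCII
# character, which reproduces the exact corruption from the bug report:
saved = line.encode("ascii", errors="ignore").decode("ascii")
print(saved)  # python3 ExportData.py -l Hrbcher
```

     If the save path did a UTF-8-safe round-trip instead (or used errors="replace"), the characters would survive or at least leave a visible marker rather than vanishing.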
  5. Just went there to look. That's what I found: It took me 30 seconds to find out that this is nothing I can consider right now. It will be ready in the same timeframe as multiple array pools in Unraid, I guess.
  6. IMHO this becomes messy. May I suggest collecting all mount points of Unassigned Devices under one unique root? Call it UD or whatever, but please put them together, e.g. /mnt/UD/remotes and /mnt/UD/disks. It then becomes obvious that these mount points belong to your plugin.
  7. Sorry for buttin' in: I know that Unraid has no typical RAID write pattern, but what about disk replacements? If a CMR array disk is replaced by an SMR disk, the SMR disk gets written completely, with several TBs. What about the write penalty in this specific case? Will the resync then take a multiple of the usual timeframe?
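     A back-of-envelope estimate of what such a rebuild could look like. The throughput figures below are illustrative assumptions, not benchmarks of any particular drive: CMR disks often sustain sequential writes in the low hundreds of MB/s, while an SMR disk that has exhausted its CMR-like staging area can drop far below that.

```python
TB = 1e12  # bytes per terabyte (decimal, as drive vendors count)

def rebuild_hours(capacity_tb: float, mb_per_s: float) -> float:
    """Hours needed to write the full capacity at a sustained rate."""
    return capacity_tb * TB / (mb_per_s * 1e6) / 3600

# Assumed sustained rates - placeholders, not measurements:
cmr = rebuild_hours(8, 150)  # CMR disk, steady sequential writes
smr = rebuild_hours(8, 40)   # SMR disk after its write cache is exhausted
print(f"CMR: {cmr:.0f} h, SMR: {smr:.0f} h")  # CMR: 15 h, SMR: 56 h
```

     Under these assumed numbers the resync would indeed take a multiple of the usual timeframe, which is the concern raised in the question.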
  8. Did it and it worked this time. Thanks for your support.
  9. "There's an update available." I'm afraid to go through this again. I won't use the GUI updater in the future.
  10. Rolled back successfully. So I tried the steps you did suggest. Here's the result:
         root@2cca0e75b737:/# occ update:check
         Nextcloud 19.0.5 is available. Get more information on how to update at https://docs.nextcloud.com/server/19/admin_manual/maintenance/upgrade.html.
         1 update available
         root@2cca0e75b737:/# occ upgrade
         Nextcloud is already latest version
      Is this the expected result? Thanks again.
  11. Thanks for your answer. It seems there's no occ anywhere; I used find over the complete environment. I'm currently rolling back completely. I store nightly backups of the (stopped) mariadb and of the Nextcloud /config and /data directories. I guess I'll be up and running again in an hour or so. My first decision was to never again update Nextcloud, as every update through the GUI was a pain here. Somewhere in this thread I found the manual steps to upgrade via CLI. If my nerves are back, I'll try that way. Thanks for taking the time to respond.
  12. I know that a single disk cache with XFS is possible, but I want more safety: dual disk parity for the array and a dual disk cache. The latter is only possible with BTRFS. The last Docker on cache here is Plex, with its trillion files spread over 215 GB in our case. I'm currently rethinking that. Perhaps I'll move Plex to the array as well, drop the cache completely, and live with that performance degradation until further features are built into Unraid - I hope.
  13. Just my personal two cents: After migrating three Unraid servers (over 30 disks) from ReiserFS to XFS years ago, I never had a single problem with that file system. BTRFS has been running a cache pool of two disks here since that same time, and BTRFS gave me lots of problems during that period. There are still workarounds one needs to know (cache settings to avoid massive writes on some NVMe disks, balance needs to run twice to work properly in some configurations, etc.). Whenever one BTRFS-formatted disk fails within a cache pool, this results in massive writes on the remaining healthy disk before you can even replace the failing disk (who designs that?). This year I had two cases where a BTRFS disk dropped from the cache pool. Unraid didn't notice that and worked as if nothing had happened - the disk was simply gone, silently. I'm trying hard to get a BTRFS-less world here. In fact, I have already moved all non-performance-critical Dockers and VMs to the array. And I hope the day will come when an XFS or other FS driven cache pool is possible. I consider BTRFS under construction, not ready at all. As I said, this is my personal opinion, built from my personal experience.
  14. I need urgent help to fix a failed update of Nextcloud. It seems this update trashed my instance completely: the update via GUI hung several times, needed restarts, and stopped at some point during 'Delete old files'. So maintenance mode is currently active.
      1.) I tried to switch off maintenance mode and edited the config.php file at /config/www/nextcloud/config/config.php. I changed maintenance mode from 'true' to 'false' and restarted the container - no joy. The GUI still says 'Update in process.'.
      2.) I tried to switch off maintenance mode with this command. It says there's no occ file. In fact there's no occ file anywhere.
          root@Tower:/mnt# docker exec --user abc nextcloud php /config/www/nextcloud/occ maintenance:mode --off
          Could not open input file: /config/www/nextcloud/occ
      3.) The updater, called manually within the container at /data/updater-octcdtkazv4p/downloads/nextcloud/updater, returns:
          Nextcloud Updater - version: v19.0.3-8-gbfdc40b
          PHP Notice: Undefined variable: CONFIG in phar:///data/updater-octcdtkazv4p/downloads/nextcloud/updater/updater.phar/lib/Updater.php on line 59
          Could not read data directory from config.php.
      So I'm stuck with an activated maintenance mode and a backup directory containing three partly filled backups.
          root@Tower:/mnt/disk17/NextCloud/updater-octcdtkazv4p/backups# ls -lisa
          total 8
          4820253502 0 drwxr-x--- 5 nobody users 133 Nov 20 05:49 ./
          2177489188 0 drwxr-xr-x 4 nobody users 100 Nov 20 05:50 ../
          8925770799 4 drwxr-x--- 13 nobody users 4096 Nov 20 05:34 nextcloud-19.0.4.2-1605846684/
          10904276397 4 drwxr-x--- 13 nobody users 4096 Nov 20 05:35 nextcloud-19.0.4.2-1605846907/
          5674653091 0 drwxr-x--- 5 nobody users 150 Nov 20 05:49 nextcloud-19.0.4.2-1605847799/
      I looked through the Nextcloud backup/restore docs, but I didn't find instructions on what to do with these different backup files that are all from the same previous release level. Any help is highly appreciated.
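      The three backup folders in the listing above differ only in their numeric suffix, which looks like a Unix timestamp (seconds since the epoch) taken when each backup attempt started. Assuming that reading is correct, a small sketch to order the attempts and pick the most recent one:

```python
# Folder names copied from the ls output in the post above; the assumption
# that the suffix is a Unix timestamp is mine, not from the Nextcloud docs.
backups = [
    "nextcloud-19.0.4.2-1605846684",
    "nextcloud-19.0.4.2-1605846907",
    "nextcloud-19.0.4.2-1605847799",
]

def epoch_suffix(name: str) -> int:
    """Extract the trailing numeric timestamp from a backup folder name."""
    return int(name.rsplit("-", 1)[1])

newest = max(backups, key=epoch_suffix)
print(newest)  # nextcloud-19.0.4.2-1605847799
```

      This only tells you which attempt ran last; which of the three folders is actually the most complete backup can't be determined from the names alone and would need a look at their contents.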
  15. I answered somebody who wants to run NVIDIA drivers on stable Unraid releases only. The latest stable release of Unraid is 6.8.3, so I pointed him to the correct way to add current NVIDIA drivers to that release.
  16. 6.8.3 is a stable release, and there's a prebuilt image with NVIDIA 450.66 available. Just copy a handful of files over to the stick and reboot. That's all.
  17. Just a follow-up - just out of curiosity: I can block PCI devices and pass them through to VMs like so (two HBAs):
          # on flash
          kernel /bzimage
          append xen-pciback.hide=(06:00.0)(81:00.0) initrd=/bzroot
          # for VM
          [...]
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
            </source>
            <alias name='hostdev0'/>
            <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </hostdev>
          [...]
      Is this possible with USB sticks as well? I mean, these sticks wouldn't count towards the limited devices and could then be used as boot devices for the VMs? If yes, how? Here are the two USB sticks for example:
          # How to block these USB sticks???
          Bus 002 Device 004: ID 0930:6544 Toshiba Corp. TransMemory-Mini / Kingston DataTraveler 2.0 Stick
          Bus 002 Device 005: ID 8564:1000 Transcend Information, Inc. JetFlash
          # on flash --> no idea???
          # for VM
          [...]
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x0930'/>
              <product id='0x6544'/>
              <address bus='2' device='4'/>
            </source>
            <alias name='hostdev1'/>
            <address type='usb' bus='0' port='1'/>
          </hostdev>
          [...]
  18. Ah, thanks. These two are license sticks for my two Unraid VMs. These sticks are passed thru to the VMs. Good find.
  19. The removed device was part of the cache pool. As you can see in the images above the cache pool is not degraded, the device is not emulated. The device is no longer there. It's not listed in the SCSI devices on the System Devices page. I usually don't care because I hold 5 or 6 Pro licenses for 3 machines. It's just something I'm interested in. Even with the missing device counted in, the result would be 24 array devices + 2 cache devices = 26 and not 27. Just curious.
  20. I'm running the prebuilt images, e.g. NVIDIA for 6.8.3, from here:
  21. I'm just curious about the way Unraid counts the attached devices. The registration page tells "27 attached devices". My main page shows 24 hard drives plus 1 cache and 1 flash device. In the past I had a cache pool of two devices, but after a crash one of the pool devices is dropped off and no longer available. I did reboot after the crash and now I'm running a single cache device. Is Unraid still counting the removed cache disk? Thanks in advance. tower-diagnostics-20201117-1001.zip
  22. I don't use user shares at all. Everything is diskX or cache. I want to give individual Dockers individual config folders located at /mnt/cache/system/appdata or /mnt/disk1/system/appdata. This would create a 'system' user share, but I wouldn't use it at all. I just want to be sure that the mover does not move these folders. cache=no for 'system' seems wrong, because part of it is stored on cache. And cache=only seems wrong as well, because part of it is stored on disk1. Any idea?
  23. For example: /mnt/cache/system/appdata and /mnt/disk1/system/appdata. Some Docker apps write to the first disk, other Docker apps write to the second disk. All files should stay where they are written - the mover shouldn't move anything. What would the setting for 'Use cache' on the system share be then (yes, no, only, prefer)? Thanks in advance.
  24. Yes, in the meantime I saw 600 TB written for the 'Samsung SSD 970 EVO Plus 1TB' model. Hmm, the disk says everything's OK, but it's failing from time to time. It's screwed to a PCIe x4 card. Hmm, I need to rethink that.