Progeny42

  1. Unraid version is 6.12.4. I've created a new Ubuntu 20.04 VM, and I'm passing through one of my shares using the Unraid Share Mode "Virtiofs Mode". I've mounted it in the VM with the following /etc/fstab entry:

         Repository /mnt/repository virtiofs defaults 0 0

     If I make a change to a file on the Repository network share from my Windows desktop and save it, the updated file is not immediately available within the VM. For example, I can add a new line to the end of a file, and if I 'cat' or 'nano' the file in the VM, the change is not there. After randomly using 'cat', 'nano' or 'ls' on the /mnt/repository directory for a minute or so, the changes eventually appear, though I'm not convinced any of those commands are actually helping. If I watch an example file with 'watch -n1 cat /mnt/repository/test_file.txt' and change its contents on the remote machine, I still haven't seen the change appear after more than 2 minutes. Is this an artifact of using virtiofs? In the past I've used 9p and not had the same issue; I've not tried that here yet to see if it is any better.

     ---- Update
     In the event anyone else also suffers this issue: I've reverted to using 9p pass-through instead. In the VM settings on Unraid, I changed the Unraid Share Mode to 9p Mode and updated /etc/fstab by replacing "virtiofs" with "9p". This resolves my issue. A sketch of the resulting fstab entry is below.
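     For anyone following along, a 9p entry along these lines should work; the trans=virtio and version=9p2000.L mount options are the usual ones for QEMU/KVM 9p shares and may need adjusting for your setup (they are my assumption here, not confirmed from the VM XML):

         # /etc/fstab entry for the same share mounted via 9p instead of virtiofs
         Repository /mnt/repository 9p trans=virtio,version=9p2000.L,_netdev,rw 0 0

     After editing, 'sudo mount -a' (or a reboot) should pick the entry up.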
  2. The latest version (2023.09.25.68.main) has introduced a regression whereby Datasets with spaces in their names are no longer detected by the plugin. I had installed version 2023.07.04 for the first time, and they did show, as per the release notes. Additionally, with destructive mode enabled, I'm unable to destroy a Dataset with the plugin and am presented with the error "Device or resource busy". However, if I run the command from the Unraid terminal, for example:

         zfs destroy tank/appdata/filebrowser

     it works fine.
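     For anyone who hits the same "Device or resource busy" error, it usually means something still has the dataset's mountpoint in use. Stock commands can help narrow down what; the /mnt/tank/... path below assumes the dataset's default mountpoint, so substitute whatever 'zfs get mountpoint' reports for yours:

         # see where the dataset is mounted
         zfs get mountpoint tank/appdata/filebrowser
         # list processes holding files open under that mountpoint
         lsof +D /mnt/tank/appdata/filebrowser
         # alternatively, show processes using the mount itself
         fuser -vm /mnt/tank/appdata/filebrowser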
  3. As of 7 hours ago, I'm getting the following alert emailed to me by Unraid:

         /bin/sh: line 1: 28292 Killed    /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null

     This is occurring every 25-30 minutes or so. The Unraid GUI is a bit broken now too; I can navigate between pages, but the syslog won't load, sometimes the Main tab won't load areas, the Docker and VMs tabs won't load at all, and I cannot update Plugins. The day before, Unraid became unresponsive around 1:00am and was no longer displaying an output, so I had to hard reset it. It is currently finishing up the parity check it initiated due to this. It's obviously a dynamix plugin causing this issue, but I'm not sure which one uses the "monitor" script. I have not added, updated or removed any plugins in over 4 weeks, so it seems odd that this is occurring now. The only thing that has changed in my machine is that I added some new sticks of RAM, but that was 2 weeks ago. The more I write this, the more it sounds like a hardware fault rather than software, seeing as the OS runs in RAM. Unfortunately, I tried to get Diagnostic logs, but it's hanging (it's been running for over an hour now). I've attached all of the Dynamix plugins that I currently have. Running on Unraid 6.9.1. I'll try pulling the extra RAM I added, and I guess I'll have to monitor it for a few weeks to see if this happens again.
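     In case it helps anyone else chasing the same alert, a quick way to see what actually invokes the monitor script is to grep the cron and plugin directories for it (stock tools only, nothing plugin-specific; directory list is a guess at the likely places):

         # find which plugin or cron entry calls the dynamix monitor script
         grep -r "scripts/monitor" /etc/cron.d /usr/local/emhttp/plugins 2>/dev/null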
  4. I am also having issues of late, as identified by the File Activity tool, where Nextcloud is accessing an /appdata_{instanceId} directory and a /files_encryption directory under /mnt/diskX/Nextcloud. This is causing a few of my disks to remain spun up. This was not previously an issue. I use Nextcloud as a target for backups, so I cannot change the share to use the cache. The Container is correctly configured with /data => /mnt/user/Nextcloud and /config => /mnt/user/appdata/nextcloud.
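     For clarity, those two mappings are equivalent to bind mounts along these lines; the image name is an assumption (the /data and /config paths match the linuxserver image's conventions), so adjust to whichever Nextcloud image you run:

         # rough equivalent of the template's path mappings, assuming the linuxserver image
         docker run -d --name nextcloud \
           -v /mnt/user/Nextcloud:/data \
           -v /mnt/user/appdata/nextcloud:/config \
           lscr.io/linuxserver/nextcloud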
  5. *Bump* If 9p is a bad way to mount a drive, then would anyone care to explain what the correct way is for best performance?
  6. I reseated the LSI card, the breakout SAS to SATA cable, and the SATA Power cable to the HDD. I'm still having the same issues as described, with the same behaviour shown in the log etc.
  7. Okay, I'll double check all the connections tomorrow. Apologies; I've attached them to this response if they are still necessary.

     I did some further investigation pending my hardware checks tomorrow. If, when the drive is spun down (according to Unraid), I go to spin it up by selecting the grey circle on the Main tab, it "spins up" in 1 second. On the other hand, if I then spin it down and up again, each operation takes ~8 seconds.

     I've tried to demonstrate this in the Unraid terminal. If I issue a spindown command to a different disk that is spun down (not on the LSI card), using the time command, it takes 0.005s. If I then issue the same command to this HDD, /dev/sdc, it takes ~0.600s. Below is a snippet of the log again which shows my spindown command and the drive spinning itself up unbeknownst to Unraid. Each time the block starting with "attempting task abort! scmd(0000000016cce60b)" occurs, the drive is spun up again, as the spindown command will take ~0.6s. I did this twice in succession, at 22:53:27 and 22:54:05. If I issue the command again quickly (at 22:54:10), the drive has not yet spun up, the operation takes 0.003s, and there is no "Power-on or device reset occurred" message.

         Jan 1 22:52:38 Elysia kernel: sd 9:0:1:0: attempting task abort! scmd(0000000016cce60b)
         Jan 1 22:52:38 Elysia kernel: sd 9:0:1:0: [sdc] tag#2945 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
         Jan 1 22:52:38 Elysia kernel: scsi target9:0:1: handle(0x0009), sas_address(0x4433221104000000), phy(4)
         Jan 1 22:52:38 Elysia kernel: scsi target9:0:1: enclosure logical id(0x5842b2b057bb1000), slot(7)
         Jan 1 22:52:38 Elysia kernel: sd 9:0:1:0: task abort: SUCCESS scmd(0000000016cce60b)
         Jan 1 22:53:27 Elysia kernel: mdcmd (969): spindown 29
         Jan 1 22:53:27 Elysia kernel:
         Jan 1 22:53:28 Elysia kernel: sd 9:0:1:0: Power-on or device reset occurred
         Jan 1 22:53:46 Elysia kernel: sd 9:0:1:0: attempting task abort! scmd(000000006a31cdd3)
         Jan 1 22:53:46 Elysia kernel: sd 9:0:1:0: [sdc] tag#1081 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
         Jan 1 22:53:46 Elysia kernel: scsi target9:0:1: handle(0x0009), sas_address(0x4433221104000000), phy(4)
         Jan 1 22:53:46 Elysia kernel: scsi target9:0:1: enclosure logical id(0x5842b2b057bb1000), slot(7)
         Jan 1 22:53:46 Elysia kernel: sd 9:0:1:0: task abort: SUCCESS scmd(000000006a31cdd3)
         Jan 1 22:54:05 Elysia kernel: mdcmd (970): spindown 29
         Jan 1 22:54:05 Elysia kernel:
         Jan 1 22:54:06 Elysia kernel: sd 9:0:1:0: Power-on or device reset occurred
         Jan 1 22:54:10 Elysia kernel: mdcmd (971): spindown 29
         Jan 1 22:54:10 Elysia kernel:

     Also, while the GUI shows the drive as spun down, it reports an average temperature of 35 C (which is the temperature of that drive). It only changes to a * when I then issue the spindown command.

     elysia-diagnostics-20210101-2303.zip
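     For anyone wanting to reproduce the timing check, this is roughly what I mean (mdcmd is Unraid's array control helper; disk number 29 and the grep pattern are from my setup):

         # time a spindown request; ~0.6s suggests the drive was actually spinning
         time mdcmd spindown 29
         # watch the syslog for the drive resetting / powering back on afterwards
         tail -f /var/log/syslog | grep -E "mdcmd|Power-on|task abort"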
  8. My concern isn't the GUI so much as it is that the LSI card is keeping the drive spun up near constantly
  9. Unraid Version: 6.8.3
     LSI SAS 2008: Firmware: 20.00.07.00-IT, Product ID: SAS9211-8i, BIOS: 07.11.10.00, NVDATA: 14.01.00.08

     I recently purchased the above LSI card to add additional SATA drives to my server. At the moment, it has 1 HDD and 1 SSD attached. For some reason, the HDD is being spun up, but Unraid does not know about it. Note I have 5 disks + 2 parity. The LSI-attached drives are shown below:

         SCSI Devices:
         [9:0:0:0] disk ATA INTEL SSDSC2BF24 LH6i /dev/sdb 240GB
         [9:0:1:0] disk ATA WDC WD80EDAZ-11T 0A81 /dev/sdc 8.00TB

     I've attached the full syslog from Unraid, where the start of the log is when the new LSI card was installed. Here is a snippet from the logs showing this spinup behaviour (Disk 29 is the HDD attached to the LSI card):

         Jan 1 10:25:34 Elysia kernel: mdcmd (881): spindown 0
         Jan 1 10:25:35 Elysia kernel: mdcmd (882): spindown 1
         Jan 1 10:25:35 Elysia kernel: mdcmd (883): spindown 2
         Jan 1 10:25:36 Elysia kernel: mdcmd (884): spindown 3
         Jan 1 10:25:36 Elysia kernel: mdcmd (885): spindown 4
         Jan 1 10:25:37 Elysia kernel: mdcmd (886): spindown 5
         Jan 1 10:25:37 Elysia kernel: mdcmd (887): spindown 29
         Jan 1 10:26:12 Elysia kernel: sd 9:0:1:0: attempting task abort! scmd(000000007fcbcbd9)
         Jan 1 10:26:12 Elysia kernel: sd 9:0:1:0: [sdc] tag#435 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
         Jan 1 10:26:12 Elysia kernel: scsi target9:0:1: handle(0x0009), sas_address(0x4433221104000000), phy(4)
         Jan 1 10:26:12 Elysia kernel: scsi target9:0:1: enclosure logical id(0x5842b2b057bb1000), slot(7)
         Jan 1 10:26:12 Elysia kernel: sd 9:0:1:0: task abort: SUCCESS scmd(000000007fcbcbd9)

     As you can see, about 45 seconds after the spindown command, the disk gets spun up again, but Unraid does not show the disk as Active (presumably because it looks at the last command it issued). I noticed this behaviour because the drive was constantly reporting temperature statistics in Grafana, and I could hear it spin up. I'm entirely new to the world of SAS / HBA cards, and the above specs are those taken from the listing - I have not verified they are true (I'd have to look into how to do that). Does this look like a fault of the LSI card? Is this a firmware issue? Thanks in advance.

     elysia-syslog-20210101-1057.zip
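     For anyone curious, it looks like the card's firmware/model can be verified either from what the mpt2sas driver logs at boot, or with LSI's sas2flash utility if it's installed (it is not bundled with Unraid):

         # firmware/BIOS versions as reported by the mpt2sas driver at boot
         dmesg | grep -i mpt2sas
         # or, with LSI's sas2flash tool installed, list all detected controllers
         sas2flash -listall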
  10. @santiman717 Can you post a screenshot of the Container's configuration? (Remove or blur any emails or sensitive info)
  11. I've been trialling Nextcloud as a cloud backup service for myself and, if successful, for my family who live remotely. I'm using Duplicati to perform the backups, but that's not the point of this guide. The point is that when I back up files to Nextcloud, the Docker image slowly fills up. I've never had the image reach 100%, but it's probably not a fun time. After searching the internet for hours trying to find anything, I eventually figured out what was required.

      For context, I'm running Nextcloud behind a reverse proxy, for which I'm using Swag (Let's Encrypt). Through trial and error, the behaviour I observed is that when uploading files (via WebDAV in my case), they get put in the /tmp folder of Swag. Once they are fully uploaded, they are copied across to Nextcloud's /temp directory. Therefore, both paths need to be added as bind mounts for this to work.

      What To Do

      Head over to the Docker tab, edit Nextcloud, and add a new Path variable:
          Name: Temp
          Container Path: /tmp
          Host Path: /mnt/user/appdata/nextcloud/temp

      Next, edit the Swag (or Let's Encrypt) container and add a new Path variable:
          Name: Temp
          Container Path: /var/lib/nginx/tmp
          Host Path: /mnt/user/appdata/swag/temp

      And that's it! Really simple fix, but no one seemed to have an answer. Now when I back up my files, the Docker image no longer fills up.
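      If you want to confirm which container is actually eating the Docker image before and after making the change, the stock docker commands below show per-container writable-layer usage and overall image/volume usage:

          # writable-layer size per running container
          docker ps -s
          # overall image, container and volume usage
          docker system df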
  12. Hmm, not sure I can help you with that. It's not my software; I just made a template for it in Unraid. Try contacting Snipe-IT directly about the issue.
  13. That'll likely be your problem. That App Key is too long. Make sure you use this command to generate a key in the Unraid terminal:

          openssl rand -base64 32

      Then in the Container Edit menu, put:

          base64:yourappkeyhere
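      If it helps, the two steps can be combined into one line that prints the value in exactly the form the Container Edit menu expects:

          # generate a 32-byte key and prepend the base64: prefix in one go
          echo "base64:$(openssl rand -base64 32)"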
  14. @Korshakov what is your App Key? This is the only time I've seen that issue.
  15. @Dimtar Hey, glad you figured it out. Yes, you will experience an error if the App Key is not valid as far as the Container is concerned. There's also nothing in the log file to indicate that the App Key is the cause of the problem, so it's mostly just a trial and error thing. I'll add it to the original post as an update to warn people.