dcruff


Posts posted by dcruff

  1. The parity drive was cleared and formatted as part of the process of setting it up as a data drive.  I only had 2 data disks before, so when I replaced the 14TB parity disk with a new 16TB disk, I added the former parity drive (14TB) to the data array, making it an array of 1 parity disk (16TB) and now 3 data disks (14TB each).

  2. That does appear to have fixed the issue, as my files appear to be intact (what few of them had actually been written to the drive).  I'll watch it to see if this recurs.  What scares me is that it happened within a day of getting the drive up and running as an array data drive instead of the parity drive.

    Thanks, the filesystem check function in the GUI was painless, and it was essential to know about removing the -n flag.
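
    For reference, a minimal sketch of what that GUI check appears to run under the hood, assuming the array is started in maintenance mode and the affected disk is Disk 3 (the /dev/md3 device name is an assumption; substitute your own disk number):

      # Dry run: -n reports problems but modifies nothing
      xfs_repair -n /dev/md3

      # Actual repair: the same command without -n
      xfs_repair /dev/md3

    Running against the /dev/mdX device rather than the raw /dev/sdX partition is what keeps parity in sync while the repair writes to the disk.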

  3. Yep, the drive was showing as mountable, and I mounted it, formatted it, and started using it.  I loaded files onto it, which got stored there because my other two drives were getting full.  Then the next day, it was unmountable and unusable.  I didn't put much new data on it, but I'm very concerned that this could happen at all; it makes me nervous thinking it could happen to one of my full drives.  I went looking through the Alerts, Warnings, and Notices to see if there was any indication of a problem.  None that I could see.
    I have seen that there are options to try to repair a faulty structure on a disk, but I would like to know what happened and whether it can happen again.  Bear in mind that I have been using this particular disk in this Unraid computer as the parity drive for over a year with absolutely no problems.  But now that it's been cleared, re-mounted, formatted, and had data stored on it, it is having problems.  (A quick syslog search is sketched below.)
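
    For anyone digging the same way, a quick search of the live system log for related messages (the pattern assumes Disk 3; adjust to your array):

      # Look for XFS or Disk 3 related errors in the syslog
      grep -iE 'xfs|disk3|md3' /var/log/syslog | less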
     

    embyserver-diagnostics-20240409-0707.zip

  4. 3 hours ago, itimpi said:

    It is not clear to me why you think there would be any files on a drive you have just added? Have I missed something?
     

    Normally when you add a new drive to the array, Unraid clears it (i.e. sets it to all zeroes) to avoid invalidating parity. When that completes, you then format the drive to create an empty file system and make it ready for use. This is the process for adding drives documented in the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.

    After I successfully added the 14TB drive to the array (or so I thought), Unraid showed me that I had a new 14TB of available, usable space.  I then went about my business adding movie files and other stuff to the array - a couple of hundred GB of data.  I did this yesterday.  Then today, the drive is inaccessible and I don't know why.  My Emby Server still shows the movies that I added to the library, but now shows that all of the data is missing.
    What really concerns me is why there is no way of reclaiming that data when it is on a parity-protected drive array.  What's the point of the parity drive, then?  (See the sketch below.)
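
    A toy sketch of the XOR arithmetic behind parity helps answer that (the byte values are made up for illustration): parity can rebuild a disk that fails outright, but filesystem corruption arrives through normal writes, so parity is updated right along with it and faithfully preserves the corruption.

      # Parity is the bitwise XOR of the data disks
      d1=0xA5; d2=0x3C; d3=0x0F
      parity=$(( d1 ^ d2 ^ d3 ))

      # If disk 2 dies, XOR of the survivors and parity recovers it
      recovered=$(( d1 ^ d3 ^ parity ))
      printf 'parity=%#x  recovered d2=%#x\n' "$parity" "$recovered"   # recovered d2 = 0x3c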

     

  5. Well, now I have another problem - the 14TB drive that I added to the array (Disk 3) is now showing 'Unmountable: Unsupported or No File System'.  I don't have any errors that I can find, and files that I stored on that drive are now non-existent.  I rebooted Unraid and nothing changes.  The file system is listed as XFS, with 470 READS, 0 WRITES, and 0 ERRORS.

    The movie files that I added are nowhere to be found, presumably having been stored on Disk 3.  Shouldn't there be some kind of recovery available, since I have a parity drive associated with my array?

  6. I finally figured it out, with the help of ChatGPT.  It was a lengthy process to get everything working the way I wanted.

    The new 16TB drive took about 22 hours to set itself up as the new parity drive once I cleared it.  I had to take the array down; then, on the drive screen, I picked the top option to assign the new drive as 'Parity 1'.  It was not intuitive, because the option didn't show up until I selected that drive slot and a drop-down showed my unassigned drive, which I selected.
    After the parity drive was set up, I took my 14TB disk (the previous parity disk) and added it to the array.  It took another 18 hours to clear and prepare the disk.  After that, I had to format the drive, and then it was ready.  (A rough sanity check on those times is below.)
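
    Those durations are about what sequential throughput predicts, since a parity sync or clear touches every sector once.  A back-of-the-envelope check, assuming an average of roughly 200 MB/s across the platters (an assumption; real drives vary):

      # duration ≈ capacity / average throughput
      echo $(( 16 * 10**12 / (200 * 10**6) / 3600 ))   # 16TB parity sync: ~22 hours
      echo $(( 14 * 10**12 / (200 * 10**6) / 3600 ))   # 14TB clear: ~19 hours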

    Now everything works.

  7. Actually, I may have figured it out.  I put the new unformatted 16TB disk back in, started up Unraid, and then stopped the array.  From there, I was able to see an option to assign the drive to the array.  I believe that it is now rebuilding the parity device (the new 16TB disk).  I'll update when it is completed.

  8. I have three 14TB drives, with one of them acting as a parity drive.  Since I'm about 90 percent full on the two data disks, I ordered a new 16TB drive.  I didn't realize at the time that I needed to make sure that the new drive was no bigger than the parity drive, so I attached the new 16TB drive and formatted it as XFS while it was in 'unassigned devices'.  After I realized my error, I decided to use the new 16TB drive as the parity drive (and forfeit the extra 2TB).

    I stopped the array and removed the original parity drive (14TB).  Then I attached the new drive (16TB) using the same cables as the former drive (14TB), but Unraid still just showed it as a formatted drive in the Unassigned section, and I could not find any way to assign it as the parity drive.  Then, while consulting ChatGPT, I did a 'Clear disk' on the new 16TB drive to remove the formatting, so that Unraid would see it as a new unformatted drive, prompt me to assign it as the parity drive, and go through its normal process of setting it up as parity.  No such options presented themselves, and I could not find any other way to assign it as such.
    So, I then stopped the array, put the original parity drive back in, and took the new drive out so that I could start over again.
    All this time, each time I restarted Unraid, my array (now only two drives) was intact with all of the data.
    Now, with the old drive back in its original bay and cables, it just shows as 'formatted' and I can't find any options to add it as the parity drive, so I'm back to where I started.

    Now I have an unprotected array.
    I would love some suggestions as to how I can get my parity back.

  9. Here's what I see - almost every line is an error.

     

    Feb  9 08:52:25 EmbyServer nginx: 2024/02/09 08:52:25 [error] 10584#10584: nchan: Out of shared memory while allocating message of size 13913. Increase nchan_max_reserved_memory.
    Feb  9 08:52:25 EmbyServer nginx: 2024/02/09 08:52:25 [error] 10584#10584: *7552400 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Feb  9 08:52:25 EmbyServer nginx: 2024/02/09 08:52:25 [error] 10584#10584: MEMSTORE:01: can't create shared message for channel /disks
    Feb  9 08:52:26 EmbyServer nginx: 2024/02/09 08:52:26 [crit] 10584#10584: ngx_slab_alloc() failed: no memory
    Feb  9 08:52:26 EmbyServer nginx: 2024/02/09 08:52:26 [error] 10584#10584: shpool alloc failed
    Feb  9 08:52:26 EmbyServer nginx: 2024/02/09 08:52:26 [error] 10584#10584: nchan: Out of shared memory while allocating message of size 13913. Increase nchan_max_reserved_memory.
    Feb  9 08:52:26 EmbyServer nginx: 2024/02/09 08:52:26 [error] 10584#10584: *7552409 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Feb  9 08:52:26 EmbyServer nginx: 2024/02/09 08:52:26 [error] 10584#10584: MEMSTORE:01: can't create shared message for channel /disks
    Feb  9 08:52:27 EmbyServer nginx: 2024/02/09 08:52:27 [crit] 10584#10584: ngx_slab_alloc() failed: no memory
    Feb  9 08:52:27 EmbyServer nginx: 2024/02/09 08:52:27 [error] 10584#10584: shpool alloc failed
    Feb  9 08:52:27 EmbyServer nginx: 2024/02/09 08:52:27 [error] 10584#10584: nchan: Out of shared memory while allocating message of size 13913. Increase nchan_max_reserved_memory.
    Feb  9 08:52:27 EmbyServer nginx: 2024/02/09 08:52:27 [error] 10584#10584: *7552420 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Feb  9 08:52:27 EmbyServer nginx: 2024/02/09 08:52:27 [error] 10584#10584: MEMSTORE:01: can't create shared message for channel /disks
    Feb  9 08:52:30 EmbyServer nginx: 2024/02/09 08:52:30 [crit] 10584#10584: ngx_slab_alloc() failed: no memory
    Feb  9 08:52:30 EmbyServer nginx: 2024/02/09 08:52:30 [error] 10584#10584: shpool alloc failed
    Feb  9 08:52:30 EmbyServer nginx: 2024/02/09 08:52:30 [error] 10584#10584: nchan: Out of shared memory while allocating message of size 13913. Increase nchan_max_reserved_memory.
    Feb  9 08:52:30 EmbyServer nginx: 2024/02/09 08:52:30 [error] 10584#10584: *7552450 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Feb  9 08:52:30 EmbyServer nginx: 2024/02/09 08:52:30 [error] 10584#10584: MEMSTORE:01: can't create shared message for channel /disks
    Feb  9 08:52:32 EmbyServer nginx: 2024/02/09 08:52:32 [crit] 10584#10584: ngx_slab_alloc() failed: no memory
    Feb  9 08:52:32 EmbyServer nginx: 2024/02/09 08:52:32 [error] 10584#10584: shpool alloc failed
    Feb  9 08:52:32 EmbyServer nginx: 2024/02/09 08:52:32 [error] 10584#10584: nchan: Out of shared memory while allocating message of size 13913. Increase nchan_max_reserved_memory.
    Feb  9 08:52:32 EmbyServer nginx: 2024/02/09 08:52:32 [error] 10584#10584: *7552475 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Feb  9 08:52:32 EmbyServer nginx: 2024/02/09 08:52:32 [error] 10584#10584: MEMSTORE:01: can't create shared message for channel /disks
    Feb  9 08:52:34 EmbyServer nginx: 2024/02/09 08:52:34 [crit] 10584#10584: ngx_slab_alloc() failed: no memory
    Feb  9 08:52:34 EmbyServer nginx: 2024/02/09 08:52:34 [error] 10584#10584: shpool alloc failed
    Feb  9 08:52:34 EmbyServer nginx: 2024/02/09 08:52:34 [error] 10584#10584: nchan: Out of shared memory while allocating message of size 13913. Increase nchan_max_reserved_memory.
    Feb  9 08:52:34 EmbyServer nginx: 2024/02/09 08:52:34 [error] 10584#10584: *7552516 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Feb  9 08:52:34 EmbyServer nginx: 2024/02/09 08:52:34 [error] 10584#10584: MEMSTORE:01: can't create shared message for channel /disks
    Feb  9 10:58:57 EmbyServer emhttpd: spinning down /dev/sdc
    Feb  9 11:10:44 EmbyServer emhttpd: read SMART /dev/sdc
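
    The log itself points at the culprit: nchan, the nginx module the Unraid GUI uses to push live disk updates to the browser, has exhausted its shared-memory pool.  A sketch of the tweak the message suggests, assuming the stock config at /etc/nginx/nginx.conf (note that Unraid rebuilds /etc in RAM on every boot, so a manual edit would not survive a reboot):

      # In the http{} block of /etc/nginx/nginx.conf:
      #   nchan_max_reserved_memory 64M;   # raise nchan's shared-memory pool
      # then reload nginx to pick the change up:
      nginx -s reload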

  10. It's interesting - I refrained from rebooting my Unraid server so that I could preserve the information that might help me find a cause.  I noticed that on the Dashboard, under the System section (which shows live bar graphs of memory usage), it showed that I was over 90% on Log.  It stayed there for several days.  Now it is showing only about 21% utilization.  I'm not sure how that relates to the '/var/log' directory, but now my /var/log only shows 26M.  (See the df sketch below.)
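
    Those two numbers line up if the Log gauge simply tracks the /var/log mount, which on Unraid is a RAM-backed tmpfs (the 128M size in the illustrative output below is an assumption; check your own system):

      df -h /var/log
      # Illustrative output: 26M used of a 128M tmpfs is ~21%, matching the gauge
      # tmpfs           128M   26M  103M  21% /var/log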

     

     

    Sorry, I thought it was a 1 (the digit).

    -rw------- 1 root   root        0 Dec  1 12:44 btmp
    -rw-r--r-- 1 root   root        0 Apr 28  2021 cron
    -rw-r--r-- 1 root   root        0 Apr 28  2021 debug
    -rw-r--r-- 1 root   root      515 Jan 29 17:36 dhcplog
    -rw-r--r-- 1 root   root    62666 Jan 29 17:36 dmesg
    -rw-r--r-- 1 root   root     1048 Jan 29 17:36 docker.log
    -rw-r--r-- 1 root   root        0 Feb 13  2021 faillog
    -rw-r--r-- 1 root   root        0 Apr  7  2000 lastlog
    drwxr-xr-x 4 root   root      140 Jan 29 17:36 libvirt/
    -rw-r--r-- 1 root   root        0 Apr 28  2021 maillog
    -rw-r--r-- 1 root   root        0 Jan 29 17:36 mcelog
    -rw-r--r-- 1 root   root        0 Apr 28  2021 messages
    drwxr-xr-x 2 root   root       40 Aug 10  2022 nfsd/
    drwxr-x--- 2 nobody root      100 Feb  4 04:40 nginx/
    lrwxrwxrwx 1 root   root       24 Dec  1 12:43 packages -> ../lib/pkgtools/packages/
    drwxr-xr-x 5 root   root      100 Feb  4 07:17 pkgtools/
    drwxr-xr-x 2 root   root      280 Feb  4 07:17 plugins/
    drwxr-xr-x 2 root   root       40 Jan 29 17:36 pwfail/
    lrwxrwxrwx 1 root   root       25 Dec  1 12:45 removed_packages -> pkgtools/removed_packages/
    lrwxrwxrwx 1 root   root       24 Dec  1 12:45 removed_scripts -> pkgtools/removed_scripts/
    lrwxrwxrwx 1 root   root       34 Feb  4 07:17 removed_uninstall_scripts -> pkgtools/removed_uninstall_scripts/
    drwxr-xr-x 2 root   root       40 Jan 30  2023 sa/
    drwxr-xr-x 3 root   root      320 Jan 29 17:36 samba/
    lrwxrwxrwx 1 root   root       23 Dec  1 12:43 scripts -> ../lib/pkgtools/scripts/
    -rw-r--r-- 1 root   root        0 Apr 28  2021 secure
    lrwxrwxrwx 1 root   root       21 Dec  1 12:43 setup -> ../lib/pkgtools/setup/
    -rw-r--r-- 1 root   root        0 Apr 28  2021 spooler
    drwxr-xr-x 3 root   root       60 Sep 26  2022 swtpm/
    -rw-r--r-- 1 root   root     5012 Feb  4 15:43 syslog
    -rw-r--r-- 1 root   root  1519616 Feb  3 04:30 syslog.1
    -rw-r--r-- 1 root   root 62058496 Feb  2 04:40 syslog.2
    -rw-r--r-- 1 root   root        0 Jan 29 17:36 vfio-pci
    -rw-r--r-- 1 root   root        0 Jan 29 17:36 vfio-pci-errors
    -rw-rw-r-- 1 root   utmp     6912 Jan 29 17:36 wtmp

  12. Do I need another command line switch to get deeper details?

     

    btmp
    cron
    debug
    dhcplog
    dmesg
    docker.log
    faillog
    lastlog
    libvirt/
    maillog
    mcelog
    messages
    nfsd/
    nginx/
    packages@
    pkgtools/
    plugins/
    pwfail/
    removed_packages@
    removed_scripts@
    removed_uninstall_scripts@
    sa/
    samba/
    scripts@
    secure
    setup@
    spooler
    swtpm/
    syslog
    syslog.1
    syslog.2
    vfio-pci
    vfio-pci-errors
    wtmp

  13. I don't know what the actual available capacity is, so I don't know if 109M is a problem.  (A quick way to check is sketched after the listing.)

    root@EmbyServer:~# du -h /var/log
    0       /var/log/pwfail
    0       /var/log/swtpm/libvirt/qemu
    0       /var/log/swtpm/libvirt
    0       /var/log/swtpm
    0       /var/log/samba/cores/rpcd_winreg
    0       /var/log/samba/cores/rpcd_classic
    0       /var/log/samba/cores/rpcd_lsad
    0       /var/log/samba/cores/samba-dcerpcd
    0       /var/log/samba/cores/winbindd
    0       /var/log/samba/cores/nmbd
    0       /var/log/samba/cores/smbd
    0       /var/log/samba/cores
    640K    /var/log/samba
    0       /var/log/sa
    0       /var/log/plugins
    0       /var/log/pkgtools/removed_uninstall_scripts
    0       /var/log/pkgtools/removed_scripts
    4.0K    /var/log/pkgtools/removed_packages
    4.0K    /var/log/pkgtools
    48M     /var/log/nginx
    0       /var/log/nfsd
    0       /var/log/libvirt/qemu
    0       /var/log/libvirt/ch
    0       /var/log/libvirt
    109M    /var/log
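
    Two quick follow-ups that answer the capacity question and show the biggest offenders first (plain coreutils; nothing Unraid-specific assumed):

      df -h /var/log                            # total size and percent used of the log tmpfs
      du -ah /var/log | sort -rh | head -n 5    # the five largest files or directories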

  14. 13 hours ago, klepel said:

    This is only the right support channel when the issue pertains to unRAID itself; you're asking for Docker support in the general unRAID space.  I don't have one of the screenshots the mods have, but click on Emby on the Docker tab and then click on Support.  That will take you to the support thread for Emby; ask your question in that continuous thread.

    It wasn't just the Emby docker - the problem was between the router and Unraid.  I still want to know how to access Unraid through the WAN from my cell phone, and so far I haven't been able to find support for that.

  15. OK, no responses.  Now I've been down for days.
    Should I just reinstall Emby and restore my settings?

    Or, could I go back a version in hopes of refreshing some setting?

    Does anyone have any ideas whether this might be tied to the Unraid operating system rather than the EmbyServer docker?  It would seem to be Emby, because it gets temporarily fixed each time I restart the docker.

  16. Emby has been working great for months; now it only works on the internal LAN.  Each time I restart the Emby docker, external access works for only about an hour or so before shutting down and leaving the external users offline.
    There are no noticeable errors in Emby or Unraid.  The way I know it is down externally is by shutting off the WiFi on my cell phone and opening the Emby app.  If it can't connect, I know I have to restart the Emby docker again.  (A quick reachability check is sketched below.)
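
    A faster way to test reachability, run from any machine off the LAN (the WAN address is a placeholder and 8096 is Emby's default HTTP port; both are assumptions about your setup):

      curl -sS -o /dev/null -w '%{http_code}\n' http://YOUR-WAN-IP:8096/
      # Any HTTP status code means the port is reachable; a timeout means it isn't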

    I'd like to be able to access Unraid from my cell phone, and I just installed an Unraid monitor app, but I don't know how to use it.  The rest of my network throughout the home never fails, and my son is a heavy-duty gamer in the evenings; he hasn't experienced any hiccups.

  17. embyserver-diagnostics-20220914-1505.zip

    The frustrations come from sometimes waiting days for a response.  I have many family members who log into my server nightly, and I wasn't able to get the server back up as quickly as I wanted.  The responses on these threads are helpful but often not timely.  I don't feel that anyone here owes me anything, which is why I was looking for someone who could give more immediate service for a fee.

    Part of my problem was just a basic understanding of how the cache drive was being used.  Once I gained a little more understanding, I chose not to try to recover any data on it and replaced the SSD with a new one.  Then I realized that it didn't mount automatically - again, neophyte issues.  Once I mounted it, it became purely an Emby issue, because Emby no longer saw its configuration files and thought that it was a new install.  I quickly found the backup files on Disk 2 of the array and was able to restore the most recent configuration files.

    I thought that I was home free and informed my users of such, but the Emby docker quickly consumed 100% of the processing resources.  I made several more attempts at clearing the SSD (reformatting it) and then doing the restore again, with similar results: the Emby server locked up within an hour.
    Final solution: a partial restore of the configuration (media library locations and user account names; 24 listed users).

    Emby Server is now at 100% functionality with over 4,500 films, almost 80 series, hundreds of music CDs, and much more.

     

    I've looked for dockers in the past that would help with file management and backups, and tried some out that are no longer updated.  They were inconsistent and locked up often.  I haven't looked at that for a while because I've settled into a routine of using the LAN and mapped drives from this Windows computer.  Maybe I should also look for some kind of tutorial on scripting.  I know that there is a lot of power in scripting, but I'm not there yet.  (A tiny starter sketch follows.)
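
    As a gentle starting point, a tiny backup script of the sort the User Scripts plugin can run on a schedule (every path here is an assumption; point it at your own shares):

      #!/bin/bash
      # Copy Emby's appdata into a dated folder on the array
      src="/mnt/cache/appdata/emby"                   # hypothetical source path
      dest="/mnt/user/backups/emby-$(date +%Y%m%d)"   # hypothetical destination share
      mkdir -p "$dest"
      rsync -a "$src/" "$dest/"                       # -a preserves permissions and timestamps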