Ascii227

Members · 67 posts

  1. Hi, my server has been up and running fine for years. Today it is down. Please help!

     Clicking around the UI is slow, and 50% of the time it gives me this error message in the browser:

     Warning: shell_exec(): Unable to execute 'logger error: '/webGui/include/DeviceList.php': missing csrf_token' in /usr/local/emhttp/plugins/dynamix/include/local_prepend.php on line 18

     I went to the syslog and it has been throwing this error every 5 mins for the last 2 weeks:

     Sep 14 12:45:06 Server_name ntfs-3g[26405]: Failed to read vcn 0x0: Input/output error
     Sep 14 12:45:06 Server_name kernel: Buffer I/O error on dev sdt1, logical block 36, async page read
     Sep 14 12:50:06 Server_name ntfs-3g[26405]: ntfs_attr_pread_i: ntfs_pread failed: Input/output error
     Sep 14 12:50:06 Server_name ntfs-3g[26405]: Failed to read vcn 0x0: Input/output error

     Then last night it started throwing this error every 5 mins:

     Sep 18 01:50:29 Server_name emhttpd: Pro key detected, GUID: ***REDACTED*** FILE: /boot/config/Pro.key
     Sep 18 01:50:54 Server_name emhttpd: Unregistered - flash device error (ENOFLASH7)
     Sep 18 01:56:22 Server_name emhttpd: Unregistered - flash device blacklisted (EBLACKLISTED2)

     And now all I am getting in the logs is this over and over:

     Sep 18 14:24:50 Server_name emhttpd: read SMART /dev/sdf
     Sep 18 14:24:50 Server_name emhttpd: error: device_read_smart, 7977: Cannot allocate memory (12): device_spinup: stream did not open: sdf

     and

     Sep 18 14:17:57 Server_name php-fpm[5669]: [ERROR] fork() failed: Resource temporarily unavailable (11)

     I have tried exporting diagnostics but the server just hangs. My basic knowledge suggests that my flash drive is toast, but I don't want to go down that route without some confirmation from somebody who knows a bit more. Any help or guidance on how to troubleshoot or fix this issue is much appreciated. Thanks in advance!
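     For reference, here is roughly what I can still check over SSH while the webGUI hangs. This is a minimal sketch assuming the stock Unraid paths and that the console is still reachable; adjust as needed:

     free -m                            # is the box genuinely out of memory (matches the fork()/ENOMEM errors)?
     ps aux --sort=-%mem | head -n 15   # what is eating the RAM
     ls /boot/config                    # if the flash drive has dropped offline, this will error out
     diagnostics                        # CLI diagnostics; should write a zip under /boot/logs if the flash is still writable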
  2. Hi, apologies to necro this thread but it is the only one on the forum about this specific issue. Please let me know if it is more appropriate to create a new thread, but I thought as it is related it should stay here.

     Last night I put a new disk into my server and started the disk clear. This morning when I came to the server the syslog was full of errors identical to the OP's:

     Dec 9 08:11:31 AsQ-NAS nginx: 2020/12/09 08:11:31 [crit] 4885#4885: ngx_slab_alloc() failed: no memory
     Dec 9 08:11:31 AsQ-NAS nginx: 2020/12/09 08:11:31 [error] 4885#4885: shpool alloc failed
     Dec 9 08:11:31 AsQ-NAS nginx: 2020/12/09 08:11:31 [error] 4885#4885: nchan: Out of shared memory while allocating message of size 10229. Increase nchan_max_reserved_memory.
     Dec 9 08:11:31 AsQ-NAS nginx: 2020/12/09 08:11:31 [error] 4885#4885: *97557 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
     Dec 9 08:11:31 AsQ-NAS nginx: 2020/12/09 08:11:31 [error] 4885#4885: MEMSTORE:00: can't create shared message for channel /disks

     I followed this thread through and I can see it was an issue with Safari; however, I use Windows and Firefox. Also, I do not leave my browser open. All browsers and connections to the server were closed whilst this check was going on and the error messages occurred. I can't find anything else related to these nginx memory errors in my logs or on the forum.

     I am using Unraid 6.8.3 and have had no other issues. Diagnostics attached, taken while the new disk clear is still running. Any advice would be much appreciated. Thanks very much.

     asq-nas-diagnostics-20201209-0923.zip
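     In case it helps anyone else landing here, the nchan limit the error points at can be located and nginx bounced from the terminal. A rough sketch, assuming the stock config layout and the Slackware-style rc script path Unraid uses; the exact file holding nchan_max_reserved_memory may differ:

     grep -Rn "nchan" /etc/nginx/       # find where nchan_max_reserved_memory is set
     /etc/rc.d/rc.nginx restart         # restart the webGUI's nginx to clear the exhausted shared pool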
  3. Thank you for the suggestion, I will swap a few drives about and see if the error follows the drive or stays with the same controller/port.
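     To keep track of which physical disk ends up on which letter after the swap, I'll note the serial-to-device mapping before and after. A simple sketch:

     ls -l /dev/disk/by-id/ | grep -v part   # serial number -> sdX mapping, so the error can be tied to a drive rather than a slot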
  4. Hi all, I used to run a Marvell-based HBA which constantly stalled, crashed, and gave me BTRFS errors in my cache. I just upgraded to an LSI SAS 9300-4i. Initially all was good: the VM-related errors are all gone, read/write speeds have increased, etc. However, occasionally (maybe once every 2 days) I am getting I/O errors for one of my SSD drives attached to the HBA:

     Oct 23 19:08:19 AsQ-NAS kernel: sd 7:0:1:0: [sdi] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
     Oct 23 19:08:19 AsQ-NAS kernel: sd 7:0:1:0: [sdi] tag#0 CDB: opcode=0x28 28 00 05 fe 18 18 00 05 e0 00
     Oct 23 19:08:19 AsQ-NAS kernel: print_req_error: I/O error, dev sdi, sector 100538392
     Oct 23 19:08:19 AsQ-NAS kernel: sd 7:0:1:0: [sdi] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Oct 23 19:08:19 AsQ-NAS kernel: sd 7:0:1:0: [sdi] tag#1 Sense Key : 0x2 [current]
     Oct 23 19:08:19 AsQ-NAS kernel: sd 7:0:1:0: [sdi] tag#1 ASC=0x4 ASCQ=0x2
     Oct 23 19:08:19 AsQ-NAS kernel: sd 7:0:1:0: [sdi] tag#1 CDB: opcode=0x28 28 00 05 fe 1d f8 00 09 e0 00
     Oct 23 19:08:19 AsQ-NAS kernel: print_req_error: I/O error, dev sdi, sector 100539896

     The drive gave no I/O errors using the previous Marvell controller. I am getting no functionality problems; the drive still works fine and all the data on it seems OK.

     Some context: this LSI HBA has 3 SSDs attached to it and that is all. One of the SSDs is my cache drive, which is drive sdh, whereas the other 2 SSDs (sdi and sdj) are unassigned devices used as shares for my Windows 10 VM. I only get errors for drive sdi. The erroring drive is only being used to download a few torrents at a time onto.

     Some help on where to start troubleshooting this would be much appreciated, to find out whether it is really a drive failure or a controller problem, etc. Thank you in advance. Diagnostics attached.

     asq-nas-diagnostics-20191024-1017.zip
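     As a first pass I can pull SMART data and run a self-test on the suspect drive from the terminal. A sketch (device letters can change across reboots, so I'll confirm sdi against its serial first):

     smartctl -a /dev/sdi        # full SMART attributes and error log for the drive throwing the errors
     smartctl -t short /dev/sdi  # kick off a short self-test; results show up in the -a output a few minutes later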
  5. Wow, that was fast! Thank you for your response; indeed it seems you are correct. I have the netdata docker installed, and to test I just went and restarted it. Immediately those nginx errors were entered in the log again. This is reliably reproducible.

     I have found somebody else on the forum who has these errors after upgrading to 6.7.2. I can't guarantee that's when I started seeing the errors, but as I said before I'm pretty sure they are a recent thing. I am also running 6.7.2. Should we raise this as a bug? Does this look like more of a netdata error or an Unraid one?
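     For anyone wanting to reproduce it, this is roughly all it takes from a terminal (container name assumed to be "netdata", whatever yours is called in the docker tab):

     docker restart netdata
     tail -f /var/log/syslog | grep nginx   # the FastCGI/nginx errors appear within seconds of the restart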
  6. Hi, recently I have been getting about 1-2 errors a day from nginx in my Unraid syslog:

     Oct 21 09:34:55 AsQ-NAS nginx: 2019/10/21 09:34:55 [error] 5003#5003: *878192 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "127.0.0.1"
     Oct 21 09:34:55 AsQ-NAS nginx: 2019/10/21 09:34:55 [error] 5003#5003: *878194 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "localhost"

     I am not sure when this started, but I don't remember seeing these errors before, so I don't think it has always done it. Some googling and searching on this forum has thrown up a few results, but those always seem to result from a troublesome plugin, whereas my issue seems to be more system related. For example, in this post the errors quite obviously come from the Dynamix sleep plugin:

     If anybody can help track down what is causing these nginx errors it would be much appreciated. Thank you very much. Diagnostics attached.

     diagnostics-20191021-0914.zip
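     The request in the log can be replayed by hand to confirm it is something local polling the webGUI for a script that simply isn't there (which is what "Primary script unknown" implies). A quick sketch:

     curl -is 'http://127.0.0.1/admin/api.php?version' | head -n 5   # each manual request logs the same "Primary script unknown" error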
  7. Unraid makes this easy. Just go into Settings and there is an option for CPU pinning. After you have created your VM you can choose which cores you want to pin it to, and then I would also recommend isolating the same cores. Isolating them allows only the pinned application to use the cores and stops Unraid from using them at all. If you don't isolate them, then Unraid will use them for Plex and other things, slowing your Windows VM down. Both options are on the CPU pinning settings page.

     Leaving the dockers unpinned just leaves Unraid to manage them as it normally would, using the unpinned cores and any available RAM. I have included a screenshot of my settings for clarity. If you get stuck, check out the Spaceinvader One videos. They are a great help for things like this and explain it better than I ever could:
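     If you want to double-check that the pinning and isolation actually took effect, both can be read back from the terminal. A rough sketch ("Windows 10" is just an example VM name, and the syslinux path is the usual Unraid default):

     grep -o "isolcpus=[^ ]*" /boot/syslinux/syslinux.cfg         # the cores Unraid has been told not to schedule on
     virsh dumpxml "Windows 10" | grep -E "vcpupin|emulatorpin"   # the host cores the VM is actually pinned to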
  8. My 2 cents on your build, as I have almost exactly the same setup and have been running it for over a year. You won't need to separate the RAM between Plex and Unraid. If you have 32GB in your system and you assign 16GB to your Windows VM, then Unraid will just see the other 16GB as available and dynamically dish it out to the Plex docker as it sees fit (assuming you will be running Plex in a docker?).

     I would advise CPU pinning, but again no need to pin Plex, as you may be opening yourself up to problems in the future. I have a 6-core i7 8700K and I pin 3 of the cores with their hyperthreading pairs to my Windows VM, and just allow Unraid to manage the other 3 cores between itself and the other dockers, including Plex. I went down the path of trying to pin Plex to its own CPU cores with its own separated RAM, however I found that sometimes the Unraid OS would need those resources more (for example in a parity rebuild) whereas other times Plex would need the resources more (when transcoding/scanning/generating thumbnails etc). In the end it was easier to just let the OS manage the docker resources and separate off hardware for the VM only.

     Transcoding Plex through Intel Quick Sync (onboard) whilst gaming on the separate GPU works really well, no problems there at all!
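     If you ever want to sanity-check how that dynamic sharing works out in practice, the live per-container usage is easy to read off rather than guessing. A quick sketch:

     docker stats --no-stream   # actual CPU/RAM each container is using right now
     free -g                    # whatever is left after the VM's allocation is what Unraid hands out dynamically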
  9. I'm sorry that you seem to have taken it this way. Due to the way both Unraid and dockers have developed over time, the standard way now advised in all tutorials and YouTube videos to get these systems set up is to use automated setup scripts and plugins. Unfortunately, in-depth documentation, or even signposts to the correct locations of the docs, has not kept up with this pattern and remains on developer-centric sites such as GitHub. As Squid himself has said, he foresaw this as a potential issue and specifically developed a feature in his app store to combat it.

     If you look at my post history you will see I have spent quite some time researching how to use Unraid properly, and never just whined and asked for people to solve the problems for me. All I suggested was that on the first post of this thread you mention that the readme and all potential troubleshooting info can be found on the GitHub page. No spoonfeeding necessary, just a simple signpost to where the docs are so people can find them. Considering this thread is the official support for this docker and system, I think that's quite a constructive suggestion, and not really one that requires a condescending reply about how little I am doing to help myself.
  10. Thanks for the calm and considered responses from you guys. I appreciate that documentation is hard to keep updated in many different locations; I was just pointing out that the signposting in this instance rather expects the user to think to go to the Docker Hub or GitHub repo. Squid's feature in CA looks awesome; it would have nailed my issue with tag confusion upfront, before it even appeared.
  11. If you want to go down this path of pointing out where I should have looked then that's fine, but I did check the first post of this thread several times and had no reason to go to the GitHub link, as I had no interest in the code and already had a path of downloading the docker directly from Community Apps. If you want to avoid other people making these same mistakes, may I suggest you make it much more obvious everywhere that the readme and possible troubleshooting options are all on the GitHub site, and to check that first before posting. Better still, copying and pasting the entire readme into the first post would be even better, as the thread is listed as the place to get support, not the GitHub repo.
  12. Fair enough. When I created the docker from the template in Community Apps the tag field was prefilled with latest, so I assumed that was the default and I was doing the right thing by leaving it like that. I have just been through the linuxserver.io docker image info in the Unraid Community Apps plugin and there is no mention of tags whatsoever. The only reference I can find is in the template overview, where it states:

      VERSION Set to either latest,public or a specific version e.g. "1.2.7.2987-1bef33a"

      Nothing to indicate that latest is a beta build and public should be used for stability. For users like me who are just installing the docker straight through Community Apps there is little in the way of explanation as to what the different tags mean. In fact, in all my weeks posting in the forums this is the first I have ever heard of the public tag. I have now changed the tag in my docker to public and all is well. Apologies for polluting this issue with my misunderstanding of the tag system.
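      For anyone else confused by this, the setting boils down to what gets passed into the container. A rough sketch of the equivalent plain docker run, with the volume path as a placeholder rather than my real mapping; as discussed above, VERSION=public tracks the public Plex release instead of the beta builds:

      docker run -d --name=plex --net=host \
        -e VERSION=public \
        -v /mnt/user/appdata/plex:/config \
        linuxserver/plex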
  13. No offence, but I really don't think this is much help. It's not like I am manually trying to upgrade to some bleeding-edge beta version. As soon as I fire up the standard Plex docker the logs show 'Attempting to upgrade to 1.15' and then all the trouble begins. Only by reverting to an old version of the docker am I able to get anything working.

      Me too, which is why I am using all the default settings on the docker and just letting it do its thing. It is attempting the upgrade to 1.15 on its own and causing me to not be able to watch anything.

      Just for info, here is the Plex docker log from the first time I install and run it directly from the app store. As you can see, I am doing nothing; the docker is doing everything itself the first time it starts up from a fresh install.

      -------------------------------------
      [linuxserver.io ASCII banner]
      Brought to you by linuxserver.io
      We gratefully accept donations at:
      https://www.linuxserver.io/donate/
      -------------------------------------
      GID/UID
      -------------------------------------
      User uid: 99
      User gid: 100
      -------------------------------------
      [cont-init.d] 10-adduser: exited 0.
      [cont-init.d] 40-chown-files: executing...
      [cont-init.d] 40-chown-files: exited 0.
      [cont-init.d] 50-gid-video: executing...
      [cont-init.d] 50-gid-video: exited 0.
      [cont-init.d] 60-plex-update: executing...
      Atempting to upgrade to: 1.15.0.659-9311f93fd
      2019-02-20 07:46:24 URL:https://downloads.plex.tv/plex-media-server-new/1.15.0.659-9311f93fd/debian/plexmediaserver_1.15.0.659-9311f93fd_amd64.deb [81547030/81547030] -> "/tmp/plexmediaserver_1.15.0.659-9311f93fd_amd64.deb" [1]
      2019-02-20 07:46:24 URL:https://downloads.plex.tv/plex-media-server-new/1.15.0.659-9311f93fd/debian/plexmediaserver_1.15.0.659-9311f93fd_amd64.deb [81547030/81547030] -> "/tmp/plexmediaserver_1.15.0.659-9311f93fd_amd64.deb" [1]
      (Reading database ... 10486 files and directories currently installed.)
      Preparing to unpack .../plexmediaserver_1.15.0.659-9311f93fd_amd64.deb ...
      (Reading database ... 10486 files and directories currently installed.)
      Preparing to unpack .../plexmediaserver_1.15.0.659-9311f93fd_amd64.deb ...
      Unpacking plexmediaserver (1.15.0.659-9311f93fd) over (1.14.1.5488-cc260c476) ...
      Setting up plexmediaserver (1.15.0.659-9311f93fd) ...
      Setting up plexmediaserver (1.15.0.659-9311f93fd) ...
      Installing new version of config file /etc/init/plexmediaserver.conf ...
      Processing triggers for libc-bin (2.27-3ubuntu1) ...
      [cont-init.d] 60-plex-update: exited 0.
      [cont-init.d] done.
      [services.d] starting services
      Starting Plex Media Server.
      [services.d] done.
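      A quick way to confirm which Plex version a container actually ended up on after that update script runs, in case it helps with comparisons (container name "plex" is whatever your template called it):

      docker exec plex dpkg-query -W plexmediaserver   # prints the installed plexmediaserver package version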
  14. I deleted my codec folder from the location you specified, then watched as Plex redownloaded the required codecs for every file I pressed play on, but it still failed to play them all with the same error as before.

      I took gacpac's advice, as it seems the last thing I can possibly do. So I deleted the docker and image, and reinstalled from the Community Apps plugin. Remapped and scanned all my library files. After all that I get exactly the same results: nothing plays, with a transcoder crash error. Changing the docker tag back to 168 (PMS 1.14.1) and restarting the docker makes everything play fine again. I'm hoping this at least rules out anything to do with my Unraid setup.

      I did manage to find an mp4 file in my library which will direct play just fine in latest/1.15, but as soon as I try to convert or transcode it I get the same error. I guess I will just sit on tag 168 forever; it's not causing me any issues to stay there, so as gacpac inferred - if it's not broke, don't fix it!

      EDIT: I noticed from poking in the codec folder that the two versions of Plex (1.14.1 and 1.15.0) seem to be keeping their codecs in differently named folders. Could be nothing, but could mean something to someone:

      1.15.0 stores them in codecs/a22632d-2034-linux-x86_64
      1.14.1 stores them in codecs/531e313-1328-linux-ubuntu-x86_64
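      For anyone wanting to stay on the same known-good build, pinning the image tag rather than leaving it on latest is all I am doing. A sketch; in the Unraid template this is just the Repository field:

      docker pull linuxserver/plex:168   # tag 168 corresponds to PMS 1.14.1, the last version that transcodes for me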
  15. Thanks very much for such a comprehensive list of steps! A couple of differences in my usage though: I never use subtitles and only ever use the web client, never any apps or players. Also, I am unable to play anything, both locally and remotely. The only thing I can see to try is to delete my codecs folder and see if Plex redownloads the correct ones. I will give this a go and report back. Thanks again.
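      For the record, deleting the codec cache boils down to removing one folder from the container's appdata and letting Plex fetch it again on the next play. A rough sketch; the path below is the usual linuxserver appdata layout and may differ on another setup:

      rm -rf "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Codecs"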