
enmesh-parisian-latest

Members
  • Posts: 121
  • Joined
  • Last visited

Everything posted by enmesh-parisian-latest

  1. Just did a reboot and got this within 5 mins:

     May 11 17:47:11 tobor-server nginx: 2020/05/11 17:47:11 [error] 8916#8916: *2941 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "127.0.0.1"
     May 11 17:47:11 tobor-server nginx: 2020/05/11 17:47:11 [error] 8916#8916: *2943 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: ::1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "localhost"

     Different errors, but related to nginx & PHP?

     EDIT: OK, ignore those errors; they're related to netdata as per this post. Any ideas about the initial problems?
  2. I'm having some serious ongoing problems with nginx; I'm on 6.8.3. The log was filling up, mainly with nginx errors, eg:

     May 10 05:30:02 tobor-server kernel: Code: Bad RIP value.
     May 10 05:30:02 tobor-server nginx: 2020/05/10 05:30:02 [alert] 7919#7919: worker process 21090 exited on signal 11
     May 10 05:30:02 tobor-server nginx: 2020/05/10 05:30:02 [crit] 21187#21187: ngx_slab_alloc() failed: no memory
     May 10 05:30:02 tobor-server nginx: 2020/05/10 05:30:02 [error] 21187#21187: shpool alloc failed
     May 10 05:30:02 tobor-server nginx: 2020/05/10 05:30:02 [error] 21187#21187: nchan: Out of shared memory while allocating channel /disks. Increase nchan_max_reserved_memory.
     May 10 05:30:02 tobor-server nginx: 2020/05/10 05:30:02 [error] 21187#21187: *1848775 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"

     I needed to generate the diagnostics using the terminal, as the UI is not responsive for certain pages. I'm planning on doing a memtest when I'm next near my server; any other ideas I should be looking at?

     server-diagnostics-20200510-1035.zip
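     The "Out of shared memory" line names its own remedy. A minimal sketch of what raising that limit might look like, assuming Unraid's bundled nginx reads the nchan directive from the http block of its nginx.conf (the file location and the 32m figure are illustrative assumptions, not Unraid defaults):

     ```
     # Hypothetical fragment for nginx.conf (http context).
     # nchan_max_reserved_memory is the directive named in the error;
     # 32m is an illustrative value, not a measured requirement.
     http {
         nchan_max_reserved_memory 32m;
     }
     ```

     Note that on Unraid this file may be regenerated on reboot, so a manual edit might not persist.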
  3. I love that Unraid runs so well on old hardware. In 2020 I'd like array operations such as parity swap & failed-disk replacement to be made smoother/more foolproof. I always feel like I'm doing something wrong when I perform any array changes.
  4. Downgrading Unraid version until I can get my hands on a monitor for a memtest. Back to corrupted databases it seems ¯\_(ツ)_/¯
  5. Two errors popped up in the logs this afternoon:

     Dec 17 18:07:54 tobor-server kernel: traps: lsof[19693] general protection ip:154f19058b8e sp:32f7692d41180102 error:0 in libc-2.30.so[154f19039000+16b000]
     Dec 17 18:11:17 tobor-server kernel: traps: lsof[20832] general protection ip:1468052f9b8e sp:8e5fceff2847ec2f error:0 in libc-2.30.so[1468052da000+16b000]

     A search for "kernel: traps: lsof general protection" turns up this article, which suggests it may be bad RAM. I'll start a memtest as soon as I can get my hands on a monitor.
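     To track how often these faults recur while waiting for a memtest window, something like the following can tally them from the syslog. A minimal sketch; the log path and the helper name are illustrative, not an Unraid convention:

     ```shell
     # Count general-protection traps per binary in a syslog file.
     # Usage: count_gpf /var/log/syslog
     count_gpf() {
         grep -o 'traps: [a-z]*\[[0-9]*\] general protection' "$1" \
             | sed 's/traps: \([a-z]*\)\[.*/\1/' \
             | sort | uniq -c
     }
     ```

     Run against the two lines above it would report a count of 2 for lsof; a rising count across binaries tends to point at hardware rather than one buggy program.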
  6. Hey, I'm having issues with the latest release (6.8.0):

     • Docker seems to crash every 24h
     • UI unreachable every morning
     • Can access the log through the UI
     • Couldn't generate diagnostics today (needed to hard reboot to generate the diagnostics below)
     • Docker now won't start (tried deleting docker.img, still won't start)

     Any ideas would be fantastic, I'm not sure where to go from here.

     ==================================================================
     M/B: Supermicro X9SRL-F Version 0123456789 - s/n: ZM16BS015017
     BIOS: American Megatrends Inc. Version 3.2. Dated: 01/16/2015
     CPU: Intel® Xeon® CPU E5-1650 v2 @ 3.50GHz
     HVM: Enabled
     IOMMU: Enabled
     Cache: 384 KiB, 1536 KiB, 12288 KiB
     Memory: 32 GiB DDR3 Multi-bit ECC (max. installable capacity 512 GiB)
     Network: eth0: 1000 Mbps, full duplex, mtu 1500
     eth1: interface down
     Kernel: Linux 4.19.88-Unraid x86_64
     OpenSSL: 1.1.1d
     ==================================================================

     tobor-server-diagnostics-20191217-0919.zip
  7. Having the same issue; I set mover tuning to "very low / idle". I'm assuming the settings aren't applied to an existing mover session, as Plex is still inoperable. Thanks for the fast update to mover tuning; I noticed the "help" info for the two new settings hasn't been added yet.
  8. Any idea when v1.48 will be available?
  9. Thanks for the response, I'll quit panicking. Any benchmark tools you'd recommend to track drive performance over time?
  10. On reboot today I was alerted by FCP that three of my disks have "Write Cache is disabled" errors:

      Write Cache is disabled on disk4
      You may experience slow read/writes to disk4. Write Cache should be enabled for better results. Post your diagnostics for other users to confirm this test and advise.
      NOTE: If this drive is connected to your server via USB, then this test and the fix may or may not work / be accurate as USB support for smartctl and hdparm is hit and miss

      (The same warning appeared for disk5 and disk6.)

      I ran a check on disk4:

      hdparm -W /dev/sdl
      /dev/sdl:
      write-caching = 0 (off)

      then:

      sudo hdparm -W1 /dev/sdl
      /dev/sdl:
      setting drive write-caching to 1 (on)
      write-caching = 0 (off)

      Seems like I can't set it to "on". Should I be concerned about the drives?

      server-diagnostics-20190528-0256.zip
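      To check all three drives in one go, a small loop over the affected devices can report each one's state using the same hdparm flag as above. A sketch only; the device names are placeholders for whatever disk4/disk5/disk6 map to on your system:

      ```shell
      # Report write-cache state for a list of drives.
      # Device names are hypothetical examples; substitute your own.
      check_wcache() {
          for dev in "$@"; do
              echo "== $dev =="
              hdparm -W "$dev"    # write-caching = 0 (off) or 1 (on)
          done
      }
      # Example usage:
      # check_wcache /dev/sdl /dev/sdm /dev/sdn
      ```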
  11. More than likely it's the chroma plugin, disable it (it does nothing) and try again. 90% sure it'll fix it.
  12. Regarding the multiple-library thing, there are a few ways of doing it. You could run two separate beets containers and just call them beet1 & beet2. The neater way would be to have two separate config files and use the -c flag to specify them:

      docker exec beets beet -c /config/config-v1.yaml import /downloads
      docker exec beets beet -c /config/config-v2.yaml import /downloads

      Inside the config files you can specify different musiclibrary.blb databases. I'm not sure how the state.pickle file would be affected by the two libraries, although you may be able to specify the state.pickle inside the config.
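      The two config files would differ mainly in where they keep their database and state. A sketch of what config-v1.yaml might contain (the paths are hypothetical; `library` and `statefile` are standard beets options, so the state.pickle concern above should be coverable the same way):

      ```yaml
      # config-v1.yaml - hypothetical paths; give config-v2.yaml its own set
      directory: /music/library-v1
      library: /config/musiclibrary-v1.blb
      statefile: /config/state-v1.pickle
      ```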
  13. You're on your own with this one, I don't use a local MB server. If beets is being run as users:nobody and you've run DSNP, then there's no way it can be changing permissions outside its own user group. Check that your container actually has the PUID and PGID set correctly. Given the volume of questions whose answers are already out there, I suggest you consult the docs. I had many similar questions when I was starting out with beets and received the RTFM suggestion many times; every time, I'd overlooked the clearly written answers already available to me. Really, the beets docs are very well written, although you may need to read them a few times for everything to sink in. Given the breadth of the possibilities, we could be here for days.
  14. Oh right, sure, you can point beets at any folder you've mapped to it. So in this case you could set:

      copy: no
      move: no

      then run:

      beet import /libraryfolder/Abba-Album2

      to specify the album you want included.

      OK, sure, I had my own MB server running at one point. I ended up removing it as I could never be sure whether it was causing problems or not. Once the initial library scan is complete, the scan times aren't that bad. Besides, I find Discogs provides me with better matching anyway. I'd suggest putting the local MB server on the back burner until you have everything else working smoothly.

      You should only run DSNP if a container (or user) has been editing files as a user other than users:nobody; if beets is set correctly, then check the app which is creating the files in the first place (downloader or file browser).

      As mentioned in the docs, beets has its limitations when it comes to perfect matching of releases. You can raise or lower the threshold before beets requires user input; I have mine set at 82% of a confident match, and anything below that requires me to match manually. Too many or too few tracks always triggers a manual import, where I look up the ID on Discogs or MB and enter it manually.

      Look up the documentation on the fetchart plugin; you can be very specific about artwork sourcing. I don't use it myself, relying instead on existing artwork in the folder.

      Not wanting to pass you off onto someone else, but I generally find the beets Google Groups to be very helpful technically when it comes to beets support. That's where I go for assistance.
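      For reference, the threshold mentioned above lives under beets' match section. A sketch of the relevant config: beets expresses it as a distance rather than a similarity, so roughly "82% confident" would correspond to a strong_rec_thresh around 0.18 (the exact value here is illustrative, not my verified setting):

      ```yaml
      # Hypothetical snippet: require manual review below ~82% similarity.
      match:
          strong_rec_thresh: 0.18
      ```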
  15. Any data in the files' ID3 tags, folder names, or any matched data from Discogs, MusicBrainz (or any other source via plugin) can be sent anywhere you like, be it a tag or a file/folder name. Just find the ID3 tag and apply it to the folder name.

      Check out the docs: "The -L (--library) flag is also useful for retagging. Instead of listing paths you want to import on the command line, specify a query string that matches items from your library. In this case, the -s (singleton) flag controls whether the query matches individual items or full albums. If you want to retag your whole library, just supply a null query, which matches everything: beet import -L" I think that's what you mean?

      MBZ mirror? Make sure you're running beets as users:nobody, usually PUID/PGID of 99:100 in your beets container settings. Then make sure your data permissions are correct; the easiest way to do this is to install the Fix Common Problems plugin and then run Tools > Docker Safe New Perms.

      YW
  16. Chroma is a pretty dodgy and unnecessary plugin in my opinion. Switch it off by deleting or commenting out the 'chroma' line under plugins in your config file.
  17. Was a while ago, but HBA was my choice I believe. Good card, but it squeals when it gets too hot
  18. Again, I'm really no expert on the plugin, however common sense would suggest you isolate what the problem is by making sure you can convert to other formats, then work out why only m4a doesn't work (if that's the case).
  19. I'm no expert in using the convert plugin, however I'd recommend trying different formats other than m4a. If others work but m4a doesn't then you may want to look at ffmpeg.
  20. Hey Zozat, I suggest you experiment with trial and error. The config.yaml file can become very complicated, so I'd recommend starting with no plugins or any modifications. Make sure you can get a simple import from a source to a destination before adding the convert plugin. Once you're happy with the way the data is being handled with a VERY simple config file, then add the plugin and fiddle with the settings. Please read the manual several times, there's always something to miss on first read: https://beets.readthedocs.io/en/v1.4.7/plugins/convert.html . Keep in mind that by default the plugin depends on FFmpeg to transcode the audio, so you might want to install it. Here's a simple example for the convert plugin:

      convert:
          auto: no
          dest: C:/Transcodes
          never_convert_lossy_files: yes
          quiet: false
          formats:
              flac:
                  command: ffmpeg -i $source -y -vn -aq 2 $dest
                  extension: mp3
  21. I assume this looks familiar to you? I certainly don't miss chroma after disabling, it was pretty useless and only slowed down imports. Beets can run through several TBs of data now without issues.
  22. Do you have the chroma plugin active? I had a similar problem until I disabled chroma.
  23. I'd also love to know how to do this, there are several plugins I'd like to add
  24. Hi zandrsn, while I can't provide a technical answer of why chroma isn't working, I disabled the plugin a while back after I discovered it was crashing my imports. When I had it running it very rarely (basically never) provided any useful matching and I haven't missed it since. Chroma is only useful if there are no usable ID3 tags or path names to identify the music. My imports match the ID3 tags with discogs and musicbrainz and while they sometimes require some manual input, I can match 99% of albums using just these two sources.
  25. Is there any way I can manually add a plugin to beets running in a container? I've tried installing NerdTools > installing php 7 > installing pip, then running pip install requests requests_oauthlib as suggested in the docs: https://beets.readthedocs.io/en/v1.3.19/plugins/beatport.html

      Unfortunately I get the following error:

      root@server:/# docker exec -ti beets beet import /downloads
      ** error loading plugin beatport:
      Traceback (most recent call last):
        File "/usr/lib/python2.7/site-packages/beets/plugins.py", line 270, in load_plugins
          namespace = __import__(modname, None, None)
        File "/usr/lib/python2.7/site-packages/beetsplug/beatport.py", line 25, in <module>
          from requests_oauthlib import OAuth1Session
      ImportError: No module named requests_oauthlib

      I've contacted one of the developers, who suggested it may be due to a different Python installation. I'm stuck now.

      https://groups.google.com/forum/#!topic/beets-users/NdKdgYzvokk
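      That "different Python installation" diagnosis fits the traceback: the host's pip installs into the host Python, not the container's. If the container ships a pip of its own (an assumption; it may not), installing the modules inside it would put them where the container's beets can import them. A hypothetical sketch, not a verified fix:

      ```shell
      # Hypothetical: install the beatport plugin's Python deps inside the
      # 'beets' container itself. Assumes pip exists in that container.
      install_beatport_deps() {
          docker exec beets pip install requests requests_oauthlib
      }
      # Example usage (then retry the import):
      # install_beatport_deps
      # docker exec beets beet import /downloads
      ```

      One caveat: anything installed this way would likely be lost when the container is recreated, so it would need repeating (or baking into a custom image).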