[Support] selfhosters.net's Template Repository


Recommended Posts

On 7/19/2021 at 1:33 AM, davidjmorin said:

 

Did you ever get this working? I have the same issue.

 

Oh God, I did...

I can't remember what I did...

I know I changed to MS SQL Server as the database... I think that was part of it... I can't remember, and searching my OneNote docs, I didn't write anything down, sorry. If I remember, I'll let you know.

 

Link to comment

Gday kind people!

 

I just now found out you have a Shinobi template that pulls the original image.

However, it's stated that you can enable hardware acceleration by adding --runtime=nvidia and the proper parameters.

I have done this, but I don't get CUVID as the accelerator at startup.

Any idea what's happening?

 

(P.S. I've checked spelling and all that; the parameters are correct.)

Link to comment

Hello All,

    I have been toying with the youtubeDL-material docker and I have had some issues with the media naming. It seems the upload dates are not replicating properly from YouTube, or perhaps the date I see under the YouTube video isn't the field the docker references (maybe the actual upload date and not the published date, or something). I was hoping to use the "custom file output" function, but after reading the documentation I'm still a bit lost. I'll link to it below; if anyone can advise me on what I should input, I would greatly appreciate it.

 

https://github.com/ytdl-org/youtube-dl/blob/master/README.md#output-template     (under the "format selection" segment about halfway down)

 

Thank you in advance. 

Link to comment
On 8/5/2021 at 3:51 PM, Aerodb said:

Hello All,

    I have been toying with the youtubeDL-material docker and I have had some issues with the media naming. It seems the upload dates are not replicating properly from YouTube, or perhaps the date I see under the YouTube video isn't the field the docker references (maybe the actual upload date and not the published date, or something). I was hoping to use the "custom file output" function, but after reading the documentation I'm still a bit lost. I'll link to it below; if anyone can advise me on what I should input, I would greatly appreciate it.

 

https://github.com/ytdl-org/youtube-dl/blob/master/README.md#output-template     (under the "format selection" segment about halfway down)

 

Thank you in advance. 

 

If you're using %(release_date), this isn't gonna work, as YT apparently doesn't have that info. I did just try %(upload_date) and it worked without issue; perhaps you should put this in the default file output setting:

 

%(upload_date)s - %(title)s
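For illustration: youtube-dl fills output templates with Python percent-style string formatting, so the template above behaves roughly like this sketch (the metadata dict here is invented for the example, not real youtube-dl output):

```python
# youtube-dl output templates are Python %-style format strings filled
# from the video's metadata. "demo_info" is made-up example metadata.
template = "%(upload_date)s - %(title)s"
demo_info = {"upload_date": "20210805", "title": "Example Video"}

filename = template % demo_info
print(filename)  # 20210805 - Example Video
```

Note the trailing `s` after each field name is required; it's the printf-style string conversion.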

 

Let me know if this helps! Feel free to open an issue or discussion on the GitHub page too if you experience any other problems.

Edited by tzahi12345
Link to comment

Problems with the Cloudflare DDNS docker image.

 

If I set a subdomain like 'www' and a zone like "mydomain.com"

 

It will only update the A record for 'www' (i.e. the subdomain), but it won't add and update a second A record for the actual domain itself. This means if my IP changes, https://www.mydomain.com works fine, but plain https://mydomain.com does not, and neither does any https://subdomain.mydomain.com

 

This needs to be fixed ASAP. All of my subdomains break when my IP changes. I can work around it by removing the 'www' subdomain and leaving the subdomain option in the docker config empty, but then I lose automatic updates for the 'www' subdomain when my IP changes.
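To illustrate what the fix would need to do: the updater would have to maintain two A records per IP change, one for the apex zone and one for the subdomain. A minimal sketch of the record payloads (field names follow the Cloudflare v4 API dns_records schema; the IP and domain below are placeholders):

```python
# Sketch of the two A records a Cloudflare DDNS updater would keep in
# sync so both the apex domain and 'www' follow IP changes.
# "new_ip" and the domain names are placeholders for this example.
new_ip = "203.0.113.7"

records = [
    {"type": "A", "name": "mydomain.com", "content": new_ip, "proxied": True},      # apex
    {"type": "A", "name": "www.mydomain.com", "content": new_ip, "proxied": True},  # subdomain
]

for record in records:
    print(record["name"], "->", record["content"])
```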

Link to comment
11 hours ago, JonathanM said:

@Glassed Silver, if you have time, would you be willing to write up a quick guide specific to Unraid on setting up the mongodb container and linking youtubedl to it?

 

Thanks!

I just played around with some settings until it all fitted.

 

Basically, I took the official mongoDB container as posted in the CAs, removed the values from the template for the admin user and password, set networking to host and set the port to my desired value.

Then in ytdl-m (make sure you're on a nightly build that supports mongoDB), do these things:

  1. Making backups of your existing database is always a must, preferably before shutting down the container. Especially pre-mongoDB, the database stored in its JSON files is flaky if you have a large dataset or downloads and/or subs currently in progress.
  2. Go to Settings > Database and set the mongoDB connection string to match the example, filling in the IP and port of your mongoDB instance. We set mongoDB to host networking, so your unRAID IP address and the mongoDB port should do.
  3. Hit save.
  4. Click test connection string. (hitting save was required before testing in a previous nightly build. Not anymore though)
  5. If the connection is successful, hit the button worded "Transfer DB to mongoDB" or similar (it says "... to Local" for me now, so I can't check without ctrl+F'ing through the code, which I'm too lazy to do atm :P)
  6. Shut down the container safely
  7. I added these keys to my template (I'm using a DIY template, not the one from the CAs, build should work regardless!):
    variable 'ytdl_use_local_db' set to false
    variable 'write_ytdl_config' set to true
    This is probably optional, but that's what I did. So... ymmv
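As a concrete example of step 2, the connection string is just mongodb:// plus the host and port. The IP below is hypothetical; substitute your unRAID IP and whatever port you gave the mongoDB container:

```python
# Hypothetical values: replace with your unRAID IP and the port you set
# on the mongoDB container (mongoDB's default port is 27017).
host = "192.168.1.100"
port = 27017

connection_string = f"mongodb://{host}:{port}"
print(connection_string)  # mongodb://192.168.1.100:27017
```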

Cheers, hope this helps! :)

Link to comment
10 hours ago, Glassed Silver said:

I just played around with some settings until it all fitted.

Well, I'm in the middle of trying to migrate, and it's crushing my server. The initial "Transfer DB" took only a few seconds, but it only imported half of my files. (I had 32K showing in the local DB, only 15K showing after the transfer said it was done.)

When I restarted the ytdl container, it immediately started finding files and adding them to the DB (good!)

But the server is groaning, very unresponsive (bad)

 

Hopefully it recovers without crashing.

 

Progress, sort of. It added about 7K more files to the DB, then "recovered", as in loaded the GUI and became responsive. I restarted the server using the button in settings, and once more it's crushing the server while it adds files to the DB. Hopefully I only have a few more cycles to go before it catches all 32K files.

Link to comment
16 hours ago, JonathanM said:

Well, I'm in the middle of trying to migrate, and it's crushing my server. The initial "Transfer DB" took only a few seconds, but it only imported half of my files. (I had 32K showing in the local DB, only 15K showing after the transfer said it was done.)

When I restarted the ytdl container, it immediately started finding files and adding them to the DB (good!)

But the server is groaning, very unresponsive (bad)

 

Hopefully it recovers without crashing.

 

Progress, sort of. It added about 7K more files to the DB, then "recovered", as in loaded the GUI and became responsive. I restarted the server using the button in settings, and once more it's crushing the server while it adds files to the DB. Hopefully I only have a few more cycles to go before it catches all 32K files.

This is kind of expected at the moment. I'll tell you this much: big datasets definitely don't perform smoothly right now, but Tzahi is aware of it, and after I showed him my setup's performance he got very motivated to work on it; he hadn't realized how bad performance on big datasets was.

There's no need to multi-cycle this unless your container crashes or something. The application will keep scanning for a long time; restarts of my container, for example, take a long time as well. Right now it adds any files missing from the db before loading the UI. This will be changed in the future and made a background task.

Beyond that, there's some probability you had duplicates in your old db, so you'll end up with fewer entries in the database once everything is said and done. That happened to me, but the fact that ytdl-m pretty reliably finds missing files gives great peace of mind.

There will be a lot of improvements in the future, but it's definitely worth running this application already today: preservation of data has to happen today, while presentation can happen today, tomorrow, or next week. You can't do the same with a video that got taken down for whatever reason. :P So I definitely recommend running the app, but there's a reason why I haven't added my template to my unRAID CA repo yet.

In any case, glad you've got it working now and I hope you enjoy the app.

I've also added an unRAID section to the Wiki over at the YoutubeDL-Material repo, so the quick little writeup from above will be there as well.

Link to comment
On 8/20/2021 at 3:46 AM, Glassed Silver said:

it's definitely worth running this application already today: preservation of data has to happen today,

How much RAM do you have? I can't seem to keep more than 1 or 2 subs unpaused simultaneously, so I'm stuck walking through the list 2 at a time, or manually checking on youtube for content and unpausing that channel to catch it.

 

Leaving 3 or more channels running on auto causes crashes within minutes of the code starting to look for content.

 

This server has 32GB of RAM, but I've got quite a few things running constantly, Unraid shows around 65% RAM committed on the dashboard normally, but it seems this container wants 8GB or more at times.

 

Also, I'm currently running jti989 / robo3t-docker to deal with the mongodb when needed, several times I've had to stop the youtubedl container and change all the subs back to paused in order to get back in to the youtubedl GUI, otherwise the container would just keep crashing. The robo3t app works well enough, but I haven't figured out how to make the connection entry persistent. No big deal, just have to put the server IP in after accepting the license each time. You mentioned running a mongodb tool, which one are you using? I suppose I could just run the tool on my desktop, but having it as a container on the same server is much more convenient.

Link to comment
2 hours ago, JonathanM said:

How much RAM do you have? I can't seem to keep more than 1 or 2 subs unpaused simultaneously, so I'm stuck walking through the list 2 at a time, or manually checking on youtube for content and unpausing that channel to catch it.

 

Leaving 3 or more channels running on auto causes crashes within minutes of the code starting to look for content.

 

This server has 32GB of RAM, but I've got quite a few things running constantly, Unraid shows around 65% RAM committed on the dashboard normally, but it seems this container wants 8GB or more at times.

 

Also, I'm currently running jti989 / robo3t-docker to deal with the mongodb when needed, several times I've had to stop the youtubedl container and change all the subs back to paused in order to get back in to the youtubedl GUI, otherwise the container would just keep crashing. The robo3t app works well enough, but I haven't figured out how to make the connection entry persistent. No big deal, just have to put the server IP in after accepting the license each time. You mentioned running a mongodb tool, which one are you using? I suppose I could just run the tool on my desktop, but having it as a container on the same server is much more convenient.

48GB, but yeah, having a lot of subscriptions makes the GUI take forever to load back up, it seems. I don't get crashes from it at all; what I do get is that it just takes a long time to load back up. I had subscriptions not working for a month because a bug introduced sometime in mid-July meant subs were never checked unless you ran in multi-user mode. Now I'm catching up with many subs active, and I think it'll take quite a while.

If you see a lot of unhandled promise rejections while catching subs, that's (probably) nothing to worry about; it's just very verbose about the way downloads are handled atm until they are fully downloaded. Basically, the app repeatedly tries to import a download into the db, and once the final file is fully present the import succeeds and the errors stop. Not sure if I explained that well, but for anyone reading this: it'll very likely get reworked, and the logs will eventually be a bit more sane as a result. It's a known issue.

If your instance is legitimately crashing from a lot of subs, though (as in, more than the UI not loading for a while after a restart of the container), that'd be an unknown issue, so I'd suggest you leave a bug report over at the git repo. :) (Though it's likely to be caught and resolved by the new download management module that is currently being worked on.)

 

Ah yes, a central way to manage a mongoDB. mongoDB Express definitely didn't meet my expectations, but mongoDB Compass (a desktop client indeed) at least works nicely and that's also what Tzahi uses, in case you're curious. :)

 

Benefit: it's the official tool, so well tested and sanctioned. https://www.mongodb.com/products/compass

Link to comment
4 hours ago, Glassed Silver said:

If your instance is legitimately crashing from a lot of subs, though (as in, more than the UI not loading for a while after a restart of the container)

No GUI, and lots of this

Spoiler

<--- Last few GCs --->

[19:0x561850a7c620] 7483982 ms: Mark-sweep 2038.3 (2050.9) -> 2037.8 (2050.6) MB, 1653.7 / 0.0 ms (average mu = 0.154, current mu = 0.005) allocation failure scavenge might not succeed
[19:0x561850a7c620] 7484958 ms: Mark-sweep 2038.4 (2050.6) -> 2037.9 (2050.6) MB, 969.0 / 0.0 ms (average mu = 0.103, current mu = 0.007) allocation failure scavenge might not succeed


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x347a740008d1 <JSObject>
0: builtin exit frame: parse(this=0x347a7401ee79 <Object map = 0x166468983639>,0x0fc578700119 <Very long string[268643]>,0x347a7401ee79 <Object map = 0x166468983639>)

1: getJSONByType [0x27d9d8be3dd9] [/app/utils.js:~141] [pc=0x2ab41dcb21ce](this=0x3e1222d42cd9 <JSGlobal Object>,0x35b480f426e9 <String[5]: video>,0x2349a9fa4bf1 <String[44]: /The Countach Union Negotiating Top Salaries>,0...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
error: Forever detected script was killed by signal: SIGABRT
error: Script restart attempt #1

<--- Last few GCs --->

[10642:0x55b69fb2a620] 3883551 ms: Mark-sweep 2040.6 (2061.8) -> 2030.8 (2052.1) MB, 821.8 / 0.0 ms (+ 0.5 ms in 1 steps since start of marking, biggest step 0.5 ms, walltime since start of marking 922 ms) (average mu = 0.194, current mu = 0.185) alloca

<--- JS stacktrace --->

==== JS stack trace =========================================

0: ExitFrame [pc: 0x55b69eae60ed]
Security context: 0x22d2eb9c08d1 <JSObject>
1: slice [0x2b346b5dabd9] [buffer.js:~605] [pc=0xdf0bf4cd846](this=0x2b346b5d8439 <Object map = 0x35de02926459>,0x0a0a58cc4659 <Uint8Array map = 0x35de029259b9>,0,268882)
2: toString [0x2150990723d1] [buffer.js:~773] [pc=0xdf0bf3dc2f9](this=0x0a0a58cc4659 <Uint8Array map = 0x35de029259b9>,0x215099069fc9 <String[#4]: utf8>,0x16c816d804b1 <undefined>,0...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
error: Forever detected script was killed by signal: SIGABRT
error: Script restart attempt #2
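For what it's worth, the "Ineffective mark-compacts near heap limit" lines suggest the Node process is hitting its default heap ceiling (roughly 2GB, judging by the ~2050 MB figures in the log). One possible workaround, not a guaranteed fix, is raising the heap via an environment variable on the container; the 4096 MB value below is just an example, size it to your free RAM:

```shell
# Unraid Docker template, "Extra Parameters" field (Advanced View).
# Raises the Node.js old-space heap limit; 4096 MB is an example value.
-e NODE_OPTIONS=--max-old-space-size=4096
```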

 

Link to comment
4 hours ago, Glassed Silver said:

Ah yes, a central way to manage a mongoDB. mongoDB Express definitely didn't meet my expectations, but mongoDB Compass (a desktop client indeed) at least works nicely and that's also what Tzahi uses, in case you're curious.

I see. I searched, but haven't found a containerized version of Compass yet. I wonder if it even exists. The robo3t allows me to explore and edit, so I'm good there for now I guess.

Link to comment
3 hours ago, JonathanM said:

I see. I searched, but haven't found a containerized version of Compass yet. I wonder if it even exists. The robo3t allows me to explore and edit, so I'm good there for now I guess.

Yeah, sadly not. That being said, you could consider setting up a dedicated Linux VM for all the tools that may not have containers available.

 

I'm considering doing the same for my FMD2 application, that'd be for Windows though. Oh well.

 

As for the crashes you're experiencing, I'd suggest filing a bug report over at https://github.com/Tzahi12345/YoutubeDL-Material/issues

Link to comment
32 minutes ago, Glassed Silver said:

As for the crashes you're experiencing, I'd suggest filing a bug report

Probably not going to be relevant, I'd expect the amount of changes needed to accomplish

7 hours ago, Glassed Silver said:

the new download management module that is currently being worked on

will significantly change what I'm seeing right now.

 

Given the ongoing development pace, I'll wait to see if the new improvements change things for me, rather than muddy the waters with a report for code that probably won't even exist soon.

Link to comment
5 hours ago, JonathanM said:

Probably not going to be relevant, I'd expect the amount of changes needed to accomplish

will significantly change what I'm seeing right now.

 

Given the ongoing development pace, I'll wait to see if the new improvements change things for me, rather than muddy the waters with a report for code that probably won't even exist soon.

Fair enough!

Yeah we're definitely not starved for tickets, I mean just look at my submissions over there... :D

 

Either way, maybe you wanna try out the latest nightly. It really helped on my end even without the new download management backend, as the GUI loading is now done in parallel with the phase that imports on-disk files. Also, the latter function has (hopefully) been proofed against importing duplicates. :P

 

Cheers, and thank you a lot for your interest in the project and your patience. :)

Link to comment

I am running zwave2mqtt, and every so often the serial port for Z-Wave will drop out and I will lose the ability to control my devices. I just have to go in and reset it. Is there something to prevent this from happening? It is a pain to have to reset this a couple of times a week...
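One common mitigation (not a guaranteed fix) is to pass the stick to the container by its stable /dev/serial/by-id/ path rather than /dev/ttyUSB0, which can renumber after a disconnect. A sketch for the Unraid template; the id string below is a made-up example:

```shell
# Unraid Docker template, "Extra Parameters" or the Device field.
# The by-id path is an invented example; find your real one with:
#   ls -l /dev/serial/by-id/
--device=/dev/serial/by-id/usb-0658_0200-if00-port0:/dev/zwave
```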

Link to comment

I have been using zwave2mqtt for a couple of months now. So far there have been 3 updates for the container, and each time I've applied one it erases all of my Z-Wave settings. This seems abnormal, as none of the other 5 containers I'm running do that. Any tips as to why that might be the case?

Or you know, I could actually edit the container and set the app data path... In my defense, they typically set themselves ;)
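For anyone hitting the same thing, the appdata path fix alluded to above is a host path mapping so the settings survive container updates. A sketch, assuming zwavejs2mqtt's default store directory of /usr/src/app/store (check your image's docs for the actual container-side path):

```shell
# Unraid Docker template path mapping (equivalent to docker run -v ...).
# Container path assumes zwavejs2mqtt's default settings store location.
-v /mnt/user/appdata/zwave2mqtt:/usr/src/app/store
```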

Edited by Celsian
Link to comment
On 5/29/2021 at 5:27 AM, mkono87 said:

Draw.io... Are we able to save the drawings on a share, or is the only option to save/open them to/from our computers? Sorry, I'm a bit confused about how to have draw.io see a mapped share.

Hello, did anyone ever find an answer for this one? I too want to figure out how to present a path through draw.io so I can save my diagrams onto the server. I've tried to add a path, but it doesn't seem to show up within draw.io. Does anyone have a method for doing this?

Link to comment
