Goldmaster

Members
  • Posts: 169
Everything posted by Goldmaster

  1. Hi there, I am looking at migrating my linuxserver.io Nextcloud docker container to yours. I have a Nextcloud share set up, as I was following this video https://www.youtube.com/watch?v=id4GcVZ5qBA, and it gets annoying having to manually update Nextcloud inside the container. Apparently with this one you don't need to manually update the application before updating the docker image. The question is: how do I migrate to this container? I have used the linuxserver.io default values where possible.
  2. So, after my fiasco with Plex: at first I thought it was a major bug in the plexpass build, which then carried over to the plex docker. I had followed along with the video on fixing a corrupt Plex database, and it got me wondering if I could write a userscript that automatically guides a user through fixing a Plex database. It currently only works with @binhex Plex containers, but it may expand to other Plex docker containers. This is the userscript link: https://gitlab.com/Goldmaster/plex-database-fixer/-/blob/main/plex-database-fixer.sh I wrote this with the help of ChatGPT, so there may be some issues.
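     For anyone curious, the gist of the script is roughly the standard Plex database repair procedure. A minimal sketch (not the script itself), with paths that are assumptions based on the binhex container layout, so check your own first:

        # back up the library database, check its integrity, and if the check
        # fails, dump what is readable and rebuild a fresh database
        # (paths are assumptions for the binhex container; verify before running)
        DB_DIR="/config/Plex Media Server/Plug-in Support/Databases"
        PLEX_SQLITE="/usr/lib/plexmediaserver/Plex SQLite"
        cd "$DB_DIR"
        cp com.plexapp.plugins.library.db com.plexapp.plugins.library.db.bak
        "$PLEX_SQLITE" com.plexapp.plugins.library.db "PRAGMA integrity_check;"
        "$PLEX_SQLITE" com.plexapp.plugins.library.db ".dump" > dump.sql
        mv com.plexapp.plugins.library.db com.plexapp.plugins.library.db.broken
        "$PLEX_SQLITE" com.plexapp.plugins.library.db ".read dump.sql"

     Plex ships its own SQLite build, so the repair has to go through that binary rather than the stock sqlite3.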
  3. Well, I had a similar issue as well. The system works fine and, like you said, I end up with that. I also use /tmp for Plex, but what the heck does Plex have to do with Unassigned Devices that would cause such an issue? It's a minor thing, but when you need to use the web UI, it's not always practical to reboot.
  4. Well, sod's law, I sorted it AFTER posting. I used the File Manager plugin and just deleted the docker folder bit by bit until I had the btrfs folder and one other folder left, then deleted those too. It was a bit-by-bit kind of situation, but now my cache drive has gone from around half full to just under a fifth full. I'm marking this as solved.
  5. Hi there, I had used the docker folder option for a brief period of time. While the advantage is not needing to worry about a docker image filling up, there is a risk that an incorrect docker mapping could fill up your entire cache drive. If I need to make the docker.img file bigger because I keep adding more docker programs, I can do so manually, plus I can copy the img file as and when I need to and back it up if needed. So I have switched back to docker.img, but I still have the old docker folder inside the docker folder, so the path is /mnt/user/system/docker/docker/ To clear it out, I am trying to delete that folder, but it is taking forever. I tried the terminal and it takes ages; Midnight Commander also takes ages, and the File Manager plugin for the unraid web GUI is also taking forever. How can I just delete the folder and be done, rather than it taking hours? I don't see an option in docker settings, and I also did not see an option to delete the old docker folder when changing back to docker.img. Any ideas please, or do I have to keep deleting as and when I can? So yeah, don't use the docker folder option.
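     One idea I might try myself: deleting through /mnt/user goes via unraid's FUSE user-share layer, which is slow with huge numbers of small files, so targeting the disk path directly should be much faster. A sketch, assuming the folder sits on the cache pool:

        # assumption: the old folder lives on the cache pool; confirm the real
        # disk path first, e.g. ls /mnt/cache/system/docker
        rm -rf /mnt/cache/system/docker/docker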
  6. Well, I have pulled down the binhex plex docker and set the version to public. I just renamed the binhex-plexpass appdata folder to binhex-plex and it works fine. I will now clean up the mess.
  7. Thank you, I will switch to stable or use the official docker image.
  8. That doesn't fix the issue. Where do I specify the correct version to use?
  9. The only thing to do is migrate your Plex appdata to the official docker image. You may need to change the docker template to match your appdata; for example, in the official template, set the container path to /media and also set it to read-only to protect your library files. Just pull down the container, then stop it. Then delete the Plex Media Server folder in the official Plex appdata folder and copy your binhex Plex Media Server folder across (see the sketch below). Start up the official Plex container and you should find it all works. Also be sure to change the version to beta to get beta updates if you are a Plex Pass holder. Update: It looks to be a bug in the Plex beta, which would have been pulled into the docker image. I wonder if @binhex could temporarily roll back the docker image to the last working stable version?
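     Roughly, the appdata move looks like this. A sketch only; the container and folder names are assumptions, so match them to your own templates:

        # stop both containers, swap the Plex Media Server folder over,
        # then start the official container
        docker stop plex binhex-plexpass
        rm -rf "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server"
        cp -a "/mnt/user/appdata/binhex-plexpass/Plex Media Server" \
              "/mnt/user/appdata/plex/Library/Application Support/"
        docker start plex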
  10. I don't know why, but Plex has stopped working. I am getting this in the logs:

         2023-04-21 17:35:17,664 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 22928360935568 for <Subprocess at 22928360934704 with name plexmediaserver in state RUNNING> (stdout)>
         2023-04-21 17:35:17,664 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 22928360939648 for <Subprocess at 22928360934704 with name plexmediaserver in state RUNNING> (stderr)>
         2023-04-21 17:35:17,664 WARN exited: plexmediaserver (exit status 255; not expected)
         2023-04-21 17:35:17,664 DEBG received SIGCHLD indicating a child quit
         2023-04-21 17:35:18,666 INFO spawned: 'plexmediaserver' with pid 131
         2023-04-21 17:35:19,667 INFO success: plexmediaserver entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
         2023-04-21 17:35:24,372 DEBG 'plexmediaserver' stdout output: Error: Unable to set up server: sqlite3_statement_backend::prepare: no such table: metadata_agent_providers for SQL: update metadata_agent_providers set online = 0 (N4soci10soci_errorE)

      Appdata restore is not working. I did try the linuxserver Plex docker, which worked fine when I copied the appdata across. When I go to the web page, I am presented with "Plex Media Server is currently running database migrations", which has been going for several hours.
  11. This has solved the problem; I will remove the old docker folder. I did try just deleting it in Krusader, but it seemed to do nothing. I guess I run sudo rm -rf /mnt/user/docker/docker/ or something?
  12. Using my previous settings, it now seems to take around 10 minutes total. What I will do is remove one excluded folder at a time, and each run should take just over the same amount of time. It dawned on me that I used the docker folder option in the past rather than the image file; the system folder contains a docker folder, and inside that are both the old docker folder and the docker img file. So after I removed that exclusion, luckyBackup was trying to back up the old docker folder, hence the long run time. I will remove one excluded folder at a time and see.
  13. Firefox user since 2017. I had been using Chrome since 2009, but later on Chrome changed the new tab page and then started changing other things for no reason, so I switched to Firefox and never looked back, as it feels much lighter and just as fast.
  14. That's absurd, as I know I don't have any hard or soft links. I haven't changed anything in the docker config. The only thing I changed was removing previously excluded directories, as I now have a bigger backup drive. I have gone about adding files normally. I could exclude hard and soft links if that is possible. It's still a little amusing in a way, but I did think something was stuck in some form of scanning loop.
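      For what it's worth, this is how I would double-check that; a sketch, assuming everything being backed up lives under /mnt/user:

         # list any symbolic links under the user shares
         find /mnt/user -type l
         # list regular files with more than one hard link
         find /mnt/user -type f -links +1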
  15. This is crazy: 16 hours later and this is as far as it gets. I will have to cancel; I'm sure something is wrong. I'm backing up 2.98TB of content, and this is the number of files and folders in my /mnt/user folder. The only thing I can do is see how the backup performs on a second, nearly full 2TB portable drive that I have, using the last profile I used for it; if that takes much less time, I could restore the appdata settings, as I must have removed a folder I had excluded. But again, none of this makes any logical sense. It seems to be going on forever. I have not changed the docker settings, only moved from the original 2TB drive onto the new 8TB drive.
  16. Yeah, got it sorted; now to save the seed. This video does explain 2FA well. Also, I have enabled 2FA, but I still have the yellow banner telling me to enable 2FA when I already have.
  17. Two things: 1. I quite like the new login screen; it looks less cluttered. 2. I had 2FA enabled, and now it is asking me to re-enable it.
  18. There's no way it could run out of RAM; my signature shows I have 128GB of RAM.
  19. It's currently at 31,779,000 files and still going. I'm fairly certain something is wrong, but the docker mappings and all that are correct. It's running as a dry run anyhow, and the /mnt/ the container sees is actually /mnt/user/, set to read-only. Well, if I did not have that bug I mentioned earlier, it would have done the job fine. I might leave it, and if it is still going by the time I go to bed, I will have to turn the server off (the S3 sleep plugin doesn't work).
  20. For some reason I have never been able to get this to work. I remember following the Spaceinvader One video on how to remotely start and connect to a VM, and it worked fine when I had the X299 motherboard, but since then I have never been able to get waking a VM to work. I have the Wake On Lan plugin installed along with the Python 2 module, and it does say it is running. I have port forwarded port 7 to unraid. I know the MAC address and my public IP (I will want to get DuckDNS working, but first I need to pin down the issue). Wake-on-LAN is enabled in the BIOS. The only thing I can think of for why wake-on-LAN is not working is having the network model set to virtio so I can access network shares from my other laptop. I wonder if I could do a local internal wake-up, rather than going through Depicus, to pin down the cause.
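      A local test from another machine on the LAN would rule out the internet path entirely. A sketch, assuming the etherwake tool is installed; the interface name and MAC address are placeholders:

         # send a wake-on-LAN magic packet on the local network
         sudo etherwake -i br0 AA:BB:CC:DD:EE:FF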
  21. I haven't synced for quite some time. I had a 2TB drive that was full and couldn't take any more, so the temporary solution was to exclude less important folders, until I could not do much else. I have now got an 8TB drive, moved the copied stuff over from the old 2TB drive, and let it sync.
  22. Yeah, thank you. I think I just need to be patient. The trouble is that I can't leave my server running overnight, as it's next to where I sleep. So I am wondering if I could somehow resume building the file list? It's not practical to leave it running all the time, and the S3 sleep plugin doesn't put the server to sleep, only shuts it down.
  23. What does this mean in luckyBackup? I am syncing just over 2TB of stuff and had to leave it overnight. I came to take a look at the progress after leaving luckyBackup running for several hours and saw this:

         2238700 files... The process reported an error: "Crashed"

      So I have to start building the file list all over again. I never knew luckyBackup to take this long when syncing around the same amount. I'm wondering whether to switch to DirSync Pro, as that can scan and then compare, similar to GoodSync.
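      Since luckyBackup is just a front-end for rsync, one workaround would be to run the equivalent rsync by hand, so a GUI crash cannot take the transfer down. A sketch with placeholder paths; because rsync skips files it has already copied, re-running it after a crash effectively resumes the job:

         # dry run first to sanity-check the file list, then the real sync
         rsync -avh --dry-run --stats /mnt/user/ /mnt/disks/backup8tb/
         rsync -avh --progress /mnt/user/ /mnt/disks/backup8tb/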
  24. Hi there, I came across this post https://perfectmediaserver.com/01-overview/overview and had known of the website for quite some time. Essentially, this FAQ entry https://perfectmediaserver.com/01-overview/faq/#what-about-unraid-openmediavault-or-freenas is saying that the perfect media server should be something open source, and it includes a comparison with unraid, OpenMediaVault, and FreeNAS. What I don't want is for something to happen to unraid in 30 years' time, such that it shuts down or something like that; I hope it lasts until the 24th century. I know it is built atop open source, but I have heard some people say unraid is not open source. Is unraid open source with only the payment system closed source? If unraid is not open source, going open source would be a great feature to have, considering it's all about self-hosting and people want things to last.