Everything posted by Harro

  1. I guess that is what I am confused about. If I use DuckDNS for a domain pointing to my static IP, any connection to that domain name will open the DVR viewer. How to make the other internal IPs on the LAN reachable through the static IP is where I am confused, whether that be on port 80 or whatever.
  2. I would like to set this docker up but have some issues involving conflicting ports. I have a standalone DVR recording 18 security cameras; this DVR uses ports 80, 8082, and 443 for outside access, and I have a static IP which forwards to it. I think I can change the ports on the DVR and then set the router to forward those ports, but port 80 for web access gets me messed up: how do I distinguish between the DVR's internal IP and my other computers on the network? Any help or direction would be appreciated. (A reverse-proxy sketch for this is at the end of this page.)
  3. I have had no problems with linuxserver/plex for the past year or so. I was just thinking that most of the media info is probably in those folders (cache, media, and metadata) and might shorten the scan time if ashman70 were to switch to the linuxserver/plex container. I know that scanning in a large library can take a looooooooong time.
  4. You should copy your cache, media, and metadata folders to another location, set up the new container, then copy those folders back into the new container; there is no need to re-index the media. (A copy-command sketch is at the end of this page.)
  5. Yes, I am surprised also; that is why my initial question, hoping someone would know. But I get this error in the log when testing the connection:

         System.AggregateException: One or more errors occurred. ---> System.NullReferenceException: Object reference not set to an instance of an object
           at Ombi.Api.SickrageApi+d__6.MoveNext () <0x4106a950 + 0x007de> in :0
           --- End of inner exception stack trace ---
           at System.Threading.Tasks.Task.ThrowIfExceptional (Boolean includeTaskCanceledExceptions) <0x40bf7a70 + 0x0003f> in :0
           at System.Threading.Tasks.Task`1[TResult].GetResultCore (Boolean waitCompletionNotification) <0x41ad9a60 + 0x0008b> in :0
           at System.Threading.Tasks.Task`1[TResult].get_Result () <0x40a12530 + 0x00027> in :0
           at Ombi.Core.TvSenderOld.SendToSickRage (Ombi.Core.SettingModels.SickRageSettings sickRageSettings, Ombi.Store.RequestedModel model, System.String qualityId) <0x41069f10 + 0x00223> in :0
           at Ombi.Core.TvSenderOld.SendToSickRage (Ombi.Core.SettingModels.SickRageSettings sickRageSettings, Ombi.Store.RequestedModel model) <0x41069eb0 + 0x0002f> in :0
           at Ombi.Services.Jobs.FaultQueueHandler.ProcessTvShow (Ombi.Store.RequestedModel tvModel, Ombi.Core.SettingModels.SonarrSettings sonarr, Ombi.Core.SettingModels.SickRageSettings sickrage) <0x41069b50 + 0x0012f> in :0
         ---> (Inner Exception #0) System.NullReferenceException: Object reference not set to an instance of an object
           at Ombi.Api.SickrageApi+d__6.MoveNext () <0x4106a950 + 0x007de> in :0 <---
  6. Crap, no go on that. I thought it was pulling from my CouchPotato and SickBeard.
  7. Well, it is indexing TV and movies. This will take a while; I will report back tomorrow.
  8. Will this also work with SickBeard instead of SickRage? I have over 350 TV shows and I don't want to rebuild a database again.
  9. I started with the drive holding the least data and used unBALANCE to spread it out among the other drives, choosing which share folders on that drive to move. I do not use disk shares, only user shares. With that said, I set the reserved-space setting in unBALANCE to 50% instead of the 450 MB size, calculated the plan, and could see where all the files would go and to which drives. Happy with that, I hit the move button. It took about 4-5 hours per TB, depending on how much other activity was going on with the server (e.g. Plex or Kodi streaming), so I would normally start a move at night and it was complete by morning.

     Once the move in unBALANCE was done, I went to the Main array tab and looked at the disk I had just emptied to make sure no files were left (a quick console check for this is sketched at the end of this page). My user share folders were still on the disk but nothing was inside them. Make sure to check them, since at one point the mover had moved files into those share folders. Once satisfied that everything was empty and only the user share folders remained, I stopped the array, set the format on the empty disk to XFS, and restarted the array. Once the array was started, the Fix Common Problems plugin pops up a message in red; ignore that. Down by the array stop/start button you will see the empty disk on the left side with a format box. Check the format box and the disk will format to XFS; it takes a few minutes. Once done, I re-ran Fix Common Problems and made sure all was good.

     Now you have an empty disk formatted in XFS, and you can use unBALANCE again to continue until your array is done. Parity was valid throughout the whole process. My conversion took roughly two weeks to finish 17 drives, though I did not stay at it all the time. Good luck.
  10. 16GB for me, running some dockers and plugins, and I have never run into any problems so far with lack of memory. The highest usage I have seen is 3GB.
  11. It did not work. I tried both the stable and dev releases, and removed and reinstalled the docker 4 times. Each time I would get an error in PuTTY saying that no socket was available, and I could not kill the process. Each time I looked at advanced.yaml, nothing had been written to it. So I took the settings from the plexreport config.yaml and entered that info into advanced.yaml. Once I did that, the emails would go out, but the webpage would only show the default page stating "once the report has run this page will show", etc. So I then added the home_page: www/index.html line and it popped right up with all the info.
  12. I am happy to report that I have finally finished converting all my disks to the XFS format. This thread has been of great use, along with buying new 8TB drives. I did not use the recommended rsync command (sketched at the end of this page for reference) but opted for the unBALANCE plugin. My transfer speeds were in the neighborhood of 55-65 MB/s, all while keeping a valid parity, so I am happy. Thank you all for the help.
  13. I have it installed and it is working great. I had to modify advanced.yaml; it seems that nothing in the docker's settings gets written to advanced.yaml. These are the lines I modified or added to make it work:

          web:
            home_page: www/index.html             # <-- added this line
            title_image: 'img/nowshowing.png'
            logo: 'img/logo.png'
            headline_title: 'Just added:'
            headliners: 'Laughs, Screams, Thrills, Entertainment'
            footer: 'Thanks for watching!'
            language: 'en'
          plex:
            server: 192.168.xxx.xxx               # <-- added this line
            api_key: XXXXXXXXXXXXXXX              # <-- added this line
            plex_user_emails: 'yes'
          mail:
            address: smtp.gmail.com               # <-- added
            port: 587                             # <-- added
            username: xxxxxxxxxxxxxx              # <-- added
            password: xxxxxxxxxxxxxxxx            # <-- added

      Once I had added all that, it works great: the web page displays and the emails are sent. I just re-ran the report, this time at interval 7 instead of 1, and the docker will need to be restarted for the webpage to refresh the updated info. I have been testing this out with PuTTY; the command I use is:

          docker exec nowshowing combinedreport -d -t

      Thanks
  14. That sounds excellent. The report as it is now works fine, it is just that I have too much crap. With the new features I could set it to deliver an email every 2 days, and that would sort of keep everything fresh for my family members to see. Thanks.
  15. I just set this up and sent a test email to myself; it had something like 30 movies, around 40 episodes, plus the new TV series. Sending that much info in one email is too much. I thought maybe to limit both to around 5 each; this would cut down the amount of info and make sending the email quicker.
  16. Was wondering if there is a way, besides editing the .erb file, to limit the new-release info that is shown?
  17. If you are using the plugin with Unassigned Devices, there should be a red X next to the preclear that will stop it.
  18. I had done 3 pre-clears on that drive. Now ready to install.
  19. Just finished preclearing an 8TB Seagate Archive. Results:

          Cycle elapsed time: 45:07:31 | Total elapsed time: 45:07:33
          Step 1 of 5 - Pre-read verification:                 [15:03:56 @ 147 MB/s] SUCCESS
          Step 2 of 5 - Zeroing the disk:                      [14:58:37 @ 148 MB/s] SUCCESS
          Step 3 of 5 - Writing unRAID's Preclear signature:   SUCCESS
          Step 4 of 5 - Verifying unRAID's Preclear signature: SUCCESS
          Step 5 of 5 - Post-Read verification:                [15:04:50 @ 147 MB/s] SUCCESS

      Thought that may help you on the time. (If you run the script from the console instead of the GUI, see the sketch at the end of this page.)
  20. Read through that. It will help.
  21. Everything went smoothly; I must have had some excess gas with a lot of brain farts the other day. I have the drive converted to XFS now and will continue with the rest. Thank you all once again.
  22. I will let everyone know tomorrow. I should have another drive empty sometime tonight, ready for XFS. Thank you all.
  23. This is exactly what I had done after removing all data from that disk. The problem was that the format option was grayed out and would not let me choose XFS; I could only add the disk back to the array with the ReiserFS format. That is why I went through the other steps, thinking the disk was holding some info that prevented me from formatting to XFS. Like I previously stated, only empty share folders remained on the disk. Would that prevent the disk from formatting? I am moving data from another drive now, but will post a screenshot if I run across that again.
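
A few sketches for the questions above, gathered here so the post list reads straight through.

For the port-80 question in posts 1-2: one common approach is a reverse proxy, so that the single forwarded port 80 can route to the DVR or to other LAN machines based on the hostname requested. A minimal nginx sketch, assuming two hypothetical DuckDNS names (dvr.example.duckdns.org and files.example.duckdns.org, both pointing at the same static IP) and illustrative LAN addresses; none of these names or addresses come from the original posts:

    # route incoming port-80 traffic by the Host header
    server {
        listen 80;
        server_name dvr.example.duckdns.org;        # requests for this name...
        location / {
            proxy_pass http://192.168.1.50:80;      # ...go to the DVR's internal IP
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
    server {
        listen 80;
        server_name files.example.duckdns.org;      # a second name for another LAN box
        location / {
            proxy_pass http://192.168.1.60:80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

With this, only the machine running the proxy needs port 80 forwarded; the DVR keeps its ports unchanged, and each extra internal machine just gets another server block and another DuckDNS name.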
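
For the container switch in posts 3-4: a shell sketch of the copy-out / copy-back idea. It assumes the appdata layout used by common Plex images (a Library/Application Support/Plex Media Server tree inside the container's /config volume); the exact paths and container names vary by image and setup, so treat them as placeholders:

    # stop the old container so the library databases are not in use
    docker stop plex

    # save the folders that hold the already-built library index
    PMS="Library/Application Support/Plex Media Server"
    mkdir -p /mnt/user/backup/plex
    cp -a "/mnt/user/appdata/plex-old/$PMS/Cache" \
          "/mnt/user/appdata/plex-old/$PMS/Media" \
          "/mnt/user/appdata/plex-old/$PMS/Metadata" \
          /mnt/user/backup/plex/

    # after creating (and stopping) the new container, copy them back
    cp -a /mnt/user/backup/plex/Cache \
          /mnt/user/backup/plex/Media \
          /mnt/user/backup/plex/Metadata \
          "/mnt/user/appdata/plex-new/$PMS/"
    docker start plex-new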
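
For the "make sure no files were left" step in post 9: a quick console check, assuming the emptied disk is disk1. Empty share folders are fine; what matters is that no regular files remain:

    # should print nothing if the disk is truly empty of files
    find /mnt/disk1 -type f

    # optional: show what directories are still there (the empty user shares)
    ls -la /mnt/disk1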
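
For reference, the rsync route that post 12 skipped in favor of unBALANCE: the usual filesystem-conversion write-ups copy disk to disk through the array mount points so parity stays valid. A sketch with the disk numbers as placeholders (disk1 being the ReiserFS source, disk2 an already-XFS target with enough free space):

    # copy everything, preserving attributes, with progress output
    rsync -avPX /mnt/disk1/ /mnt/disk2/

    # dry-run checksum comparison before emptying the source
    rsync -navc /mnt/disk1/ /mnt/disk2/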
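
And for the console-driven preclear mentioned in post 19: the invocation is along these lines, assuming Joe L.'s preclear_disk.sh with -c as the cycle count (check your copy's help output, since flags have varied between script versions) and /dev/sdX as a placeholder. Preclearing wipes the disk, so be certain of the device name:

    # list disks that are candidates for preclearing
    /boot/preclear_disk.sh -l

    # run three full cycles (pre-read, zero, post-read) on the target disk
    /boot/preclear_disk.sh -c 3 /dev/sdX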