Everything posted by lionelhutz

  1. There is a sticky about pinning a specific core to a docker, but I think that's about it. They just use the RAM they need.
  2. unRAID will still recognize the drives and current array even after hardware changes assuming you have a valid configuration. No need to start over. Typically, just plug the USB into the new server with the same drives and it will boot and work just like nothing was changed.
  3. After first adding my movies share, the library scan would just hang every time. The docker doesn't hold all the information and images, so of course it's scraping data from outside sources, and a first scan where the images don't exist yet will take time to complete. But, as I've already noted twice now, in my case the first scan of the library would hang every time, so it had nothing to do with bandwidth. It was simply hanging while trying to identify a movie or movies.
  4. When I first installed it, likely about a year ago, the library scanning would just keep hanging and I eventually had to manually identify a lot of my movies. They were all identified by the old MediaBrowser, but the new server still had issues.
  5. I thought Plex pretty much needed to use its defined ports for any clients to connect. Does the server have a second network connection available? If yes, I think you could make a Plex docker use it. It requires some manual command-line work to build the docker, since the unRAID interface doesn't allow what I think you have to do. If you can create another IP address some other way, like that Pipeworks mentioned should in theory do, then you could also run 2 instances attached to 2 different IP addresses. Otherwise, here are the ports. If you use bridge mode in a second docker, you can keep the standard ports on the container side and put different ports on the host side. Also set the correct protocol for each. With a free ethernet IP address: stop your production docker and do the above using direct 1:1 port mapping to create the test one. "Save" so it makes the image, and the docker run command that creates the container will then be shown in the "command" window. You will see the port mappings like -p 32400:32400/tcp etc., one for each port. Copy the command and change every port map to look like -p IP:32400:32400/tcp instead, where IP is the spare IP address the server has. Delete the existing container only, not the image you just created, then run the modified command from the command line. The docker will then appear on the unRAID interface so you can stop and start it.
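A minimal sketch of that modified command, assuming a spare address of 192.168.1.51 (the IP, container name, appdata path, and repository here are all placeholders, not your exact setup):

```shell
# Sketch only: bind a second Plex container to a spare IP address.
# The address, container name, appdata path, and repository are assumptions.
IP=192.168.1.51
CMD="docker run -d --name plex-test \
  -p ${IP}:32400:32400/tcp \
  -p ${IP}:32469:32469/tcp \
  -p ${IP}:1900:1900/udp \
  -v /mnt/cache/appdata/plex-test:/config \
  limetech/plex"
echo "$CMD"   # inspect the command before running it on the server
```

The IP: prefix on each -p flag is what tells docker to bind that published port to only one of the host's addresses instead of all of them.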
  6. You're right, just edit it again and I can delete it. It's still rather dumb that it can't be deleted at first, but you can't help that. I did find a change that was odd: I can't find any way to put the channels into a Channels group like before; instead they all appear individually.
  7. I could add the template no problem, the WebUI link works fine and it seems to work fine. I just stopped the old docker and added this one to see what would happen, and it all seems fine. It's annoying that there is now a /mnt volume that can't be deleted, which I had to point read-only at a low-hanging directory so I could make my own media volumes. Does your Dockerfile force that, or is this the new way unRAID does it?
  8. You basically put the script in your GitHub repository and then put commands like this at the end of the Dockerfile to install and run it:

     ADD install.sh /tmp/
     RUN chmod +x /tmp/install.sh && /tmp/install.sh && rm /tmp/install.sh

     The above copies the script install.sh into the docker, runs it and then deletes it. But then, I'm guessing you really want to do it without creating your own repository, right?
  9. Here is a post with an example using the GUI of the server. Basically, you click "Add Container" but instead of picking a template you add the info manually. http://lime-technology.com/forum/index.php?topic=37732.msg349938#msg349938
  10. It's trying to find the file here:

      /downloads/downloaded/torrents/Rectify.S03E01.Hoorah.1080p.WEB-DL.DD5.1.H.264-NTb[rarbg]

      Where on your server is that file? For all those applications to work together, /downloads should be mapped to /mnt/user/downloads in each of them. Don't get fancy and point one program to /mnt/user/downloads and another to /mnt/user/downloads/complete/TV
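The point is that every container gets the identical host-to-container mapping, so a path one app hands to another means the same thing in both. A sketch, where the host share and repository names are placeholders:

```shell
# Sketch only: every app maps the SAME host path to the SAME /downloads
# container path. HOST_DL and the repository names are assumptions.
HOST_DL=/mnt/user/downloads
OUT=$(for APP in some/torrent-client some/sonarr; do
  echo "docker run -v ${HOST_DL}:/downloads ${APP}"
done)
echo "$OUT"
```

With that, a path like /downloads/downloaded/torrents/... recorded by the download client resolves to the same file when Sonarr looks for it.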
  11. unRAID doesn't cause the stall, but the band-aid fix on the unRAID side is to use spin-up groups. It's a Windows problem. Windows basically creates one connection to the server, so when a second program tries to access a spun-down disk, the connection hangs until it gets a response.
  12. It reads like you copied the flac directory from disk4 to a flac directory on disk3. If you did, then that directory also automatically becomes part of the same flac share. But then, maybe you copied the flac directory from disk4 into a different directory on disk3. That would be fine, because that other user share would not contain the contents of the flac directory. I really don't know what you did because what you wrote wasn't clear. The rule is that any top-level directory on any of the disks with the same name is part of the user share of the same name. It doesn't matter how you set up the share; even if you use include and exclude disks, it doesn't change the way all same-named directories are combined into user shares.
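That merging rule can be demonstrated with plain directories; this sketch uses a temp directory to stand in for /mnt/disk1, /mnt/disk2, etc., so it doesn't touch a real array:

```shell
# Simulate the user-share rule with a temp dir standing in for /mnt.
MNT=$(mktemp -d)
mkdir -p "$MNT/disk3/flac" "$MNT/disk4/flac" "$MNT/disk3/backup"
touch "$MNT/disk3/flac/a.flac" "$MNT/disk4/flac/b.flac"
# Any top-level directory with the same name on any disk joins one share,
# so the "flac" share sees files from both disks (but not disk3/backup):
FLAC_FILES=$(find "$MNT"/disk*/flac -type f | wc -l | tr -d ' ')
echo "files visible in the flac share: $FLAC_FILES"
rm -r "$MNT"
```

Copying disk4's flac into disk3/flac merges the files into the one share; copying it into disk3/backup would not.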
  13. Spin-up groups don't just solve an IDE drive issue. They also eliminate playback pauses when the same Windows PC is playing back a file and tries to access a file on a spun-down disk at the same time. It's a workaround for Windows.
  14. The share will be gone if you delete the share directory on EVERY disk - both array and cache. Obviously, someone read selectively and didn't follow the first part of my instructions.

      rm -r /mnt/disk1/flac
      rm -r /mnt/disk2/flac
      rm -r /mnt/disk3/flac
      etc

      Actually, if you do the above you don't really need to delete the cfg file, but you might as well delete it to get rid of a lingering file that no longer has any use.
  15. If the share is empty, then you're supposed to be able to delete the name to get rid of it. The most foolproof way is to go to each disk share and delete any share directories that exist. Then go to the flash drive under config\shares to delete the config file. Then stop and start the array, and the share will be permanently and completely gone.
  16. If you do an Add, see if you find the my-xxx templates in the list. If you pick those, they will re-install as they were. FYI, the appdata is not the dockers; it's just stored data the dockers use. You should have a docker.img file, and that holds the actual dockers.
  17. Technically, there is no such thing as unRAID dockers. The repositories give the templates, which fill in the setup fields. Once you understand the setup fields, you can fill in all of the fields yourself almost as easily as using a template. The dockers used by the templates are built a certain way to maintain a consistency that reduces the amount of stuff downloaded, since docker will re-use common parts like the base image if it's already downloaded. That's about all that is special about them, assuming that much was done.
  18. Here is how to make the install look pretty. Click/slide the advanced button located in the grey preferences bar to the right, and a new box will appear. Put the Docker Hub address for the docker into the URL line. Put the web address you use to access the docker into the WebUI line. Put a link to a banner and/or icon image to represent the docker into those lines, and a description if you want. I got the image by going to the MediaBrowser home page, right-clicking and opening the icon at the top left of that page, then copying the link and sticking it in there. It seems to require a web-hosted image, so host a picture on a sharing site and link it if you want. The minimum to put in is the WebUI link and the banner or icon link, depending on which view you use on the unRAID docker tab.
  19. Here is a quick example of running any docker, using the official MediaBrowserServer Docker: https://hub.docker.com/r/jacobalberty/mediabrowser/~/dockerfile/ Between the information and dockerfile tabs you can find the info you need. - The description tab says to use a volume called /config and a variable called TZ. - Clicking the dockerfile tab and looking at the Dockerfile will also help if you know what to look for. In the following image you can see the VOLUME line that lists /config and the EXPOSE lines that list the port numbers and protocols. So, these are the settings for the unRAID Docker setup screen. See how I gave it a name; anything will do. I copied the repository from the "pull this repository" box, using only the name and not the "docker pull" command. I put /config for the container volume and pointed it to the cache on my server where I wanted to store the config data. I added the /tv and /movies container volumes and pointed them to the correct server locations. I put the ports in and the protocol. I put the TZ variable and picked a location nearby. The :latest after the repository name is not required, but it's a good idea to ensure you get the latest stable docker. Without it, I was getting both the "latest" (latest stable release) and "daily" (daily-built beta) downloads. The /tv and /movies could be almost anything; I could also add /music or /games in the future. You can create container volumes that are not listed as VOLUME lines in the dockerfile. BUT, you must use the VOLUMEs that are in the dockerfile. Just one warning: if you use a common Linux path such as /mnt or /etc, you will over-write a path and files that exist inside the Docker and it won't work. /tv and /movies are not paths in the root of a Linux install, so they are safe. You could put different ports on the host side if you wanted to, but be aware that using different ports on this docker would break the client applications that access the server.
I also believe that if I didn't map the ports, they'd just work with the numbers inside the Dockerfile, but I'm not positive of this. It wasn't hard to just map them 1:1, so I did. That's about it. Hit Apply and install the Docker.
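For comparison, the same GUI settings expressed as a single docker run command would look roughly like this. The host paths, the 8096 port, and the timezone are assumptions standing in for your own values:

```shell
# Rough command-line equivalent of the GUI settings described above.
# Host paths, port 8096, and America/New_York are assumptions.
CMD="docker run -d --name mediabrowser \
  -v /mnt/cache/appdata/mediabrowser:/config \
  -v /mnt/user/TV:/tv \
  -v /mnt/user/Movies:/movies \
  -p 8096:8096/tcp \
  -e TZ=America/New_York \
  jacobalberty/mediabrowser:latest"
echo "$CMD"   # review before running on the server
```

Each -v flag is one container-volume row on the setup screen, each -p is one port row, and -e TZ=... is the variable row.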
  20. Thanks for the add, but those were all at the end of last year; they are in service now, and I won't need another one for a while. I sold my old ones off except one I put in a USB case, so I have no drives left to test with. If I had a drive to test, I'd give it a run and help you out. I'm just following along because the preclear is so heavily used that any improvement is most excellent to see happening.
  21. You guys do know the preclear script creates a report on the flash drive giving the 3 times and speeds? Here are examples from 2 different Toshiba 3T drives and a Hitachi 3T:

      == Last Cycle's Pre Read Time : 6:43:49 (123 MB/s)
      == Last Cycle's Zeroing time : 5:48:03 (143 MB/s)
      == Last Cycle's Post Read Time : 13:52:32 (60 MB/s)
      == Last Cycle's Total Time : 26:25:32

      == Last Cycle's Pre Read Time : 6:33:50 (126 MB/s)
      == Last Cycle's Zeroing time : 5:38:29 (147 MB/s)
      == Last Cycle's Post Read Time : 13:35:50 (61 MB/s)
      == Last Cycle's Total Time : 25:49:19

      == Last Cycle's Pre Read Time : 6:38:14 (125 MB/s)
      == Last Cycle's Zeroing time : 5:36:23 (148 MB/s)
      == Last Cycle's Post Read Time : 13:49:19 (60 MB/s)
      == Last Cycle's Total Time : 26:05:05
  22. It sounds great, and there will be many people who will love the quicker speed you've found. Now I wonder if there was a good reason the randomized I/O was disabled that has since been forgotten?
  23. Saving 10+ hours would be for a 4T disk? Interesting. The last 3T drive I precleared took 13h 35m to post-read. The drive wrote zeros at 147 MB/s, and my typical parity check speeds are this fast too (I have all the same drives), so I suspect this is about the fastest average sequential read/write speed for covering the whole drive. Clearing took 5h 40m, so I'd expect any improved post-read to take a similar read time as a minimum. It would appear just under 8 hours could be saved at most, which is a good improvement. However, if you add random seek testing, I'd expect the average read speed to go down and the time saved to be less. FYI, pre-read was 6h 33m.
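That estimate can be checked with a quick bit of shell arithmetic, using the figures from my drive above (3 TB capacity, 147 MB/s average, current 13h 35m post-read):

```shell
# Check the time estimate above: 3 TB read at a 147 MB/s average,
# versus the current 13h 35m (48900 s) post-read.
BYTES=3000000000000          # 3 TB drive capacity
RATE=147000000               # 147 MB/s average sequential rate
SECS=$((BYTES / RATE))       # best-case full-drive read time in seconds
printf 'best-case post-read: %dh %dm\n' $((SECS / 3600)) $((SECS % 3600 / 60))
SAVED=$((48900 - SECS))      # versus the 13h 35m post-read observed
printf 'potential saving: ~%dh %dm\n' $((SAVED / 3600)) $((SAVED % 3600 / 60))
```

The best case comes out around 5h 40m, matching the zeroing pass, so the maximum saving is roughly 7h 54m, i.e. just under 8 hours.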
  24. No problem if you do reads only. Just do a non-correcting check. If it's failing, put the drives back onto the motherboard.
  25. I can't recall if it was in this thread, but there have been a few cases of SiL3132 cards not reading consistently when reading from both ports at the same time. I suspect counterfeit chips, but have no proof. I'd test a card like that before trusting it further. You could just move 2 drives from motherboard ports to the card and start a non-correcting parity check to test it. It's likely bad if you start getting errors.