Everything posted by lionelhutz

  1. Spin-up groups aren't useful just with certain controllers. They're also useful if you use a single PC to access multiple things on the server at the same time. A PC using SMB makes a single connection to the server, so the whole network connection hangs while the PC waits for a drive to spin up, even if the hard drive controllers in unRAID can keep responding. If you're playing a video on one monitor and try to access any other file using the second monitor, you could get a pause. Of course, this means the spin-up group could include all disks if you truly had a need to fix this SMB issue. The original intent was for the master/slave IDE channels, but it was found that, even with controllers that didn't pause, a PC would have hang-up issues if it was multitasking. In my testing, I could access spun-down drives all I wanted from another PC, but if I did it from my media PC while it was playing a video file, the video file would pause unless I used a player that could be set to use a big buffer. I have no idea if NFS also has the issue, but I suspect not. If your PC is using SMB and WMC requests a file that is on a disk which takes time to respond, then all network traffic pauses until the disk is spun up and the file is made available. Whether this is an issue for you is something you'd have to test, unless you're not using SMB for the connection.
  2. My bays have power buttons and I don't see any reason to be concerned about swapping disks if they are not mounted.
  3. I have added, pre-cleared and assigned drives without rebooting the server. I just had to stop the array when I assigned each drive then start it again. Preclear and the unassigned devices plugin will work without stopping the array. When upgrading a drive, I would first hot-plug the new drive into a spare slot to preclear it. Then, I would stop the array, pull the drives involved and put the new drive into the slot for the old drive before starting the array again. If I was selling the old drive, after a week or two I'd plug it into a spare slot again to clear it before selling it. I've also upgraded the cache with VM's and Dockers turned off and then plugged the old cache into a spare slot to copy the application data to the new cache before turning the VM's and Dockers back on. I then unmounted the old cache drive and pulled it without stopping the array. You can't actually hot-swap existing array drives with the array started, but having bays and hot-plugging drives is handy.
  4. A little hint:
     docker exec -t -i 37fcc75e886d /bin/bash
     You can just put the container name where "37fcc75e886d" is.
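     For example, with a container named "plex" (the name here is just a placeholder):
     docker exec -t -i plex /bin/bash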
  5. You're right. Install W10 directly and then record the key and uuid combo for future reference.
  6. I think I tried changing that too and W10 stays activated. I've got the W7 to W10 upgrade pretty much nailed down:
     - Use the Microsoft tool to create a W10 iso.
     - Create a VM and install W7. Turn off automatic updates during the install.
     - Shut down the VM and change the install iso to the W10 iso.
     - Restart and run the W10 install. I've been picking to not keep anything.
     Once you have the VM created, if you make a note of the uuid, you can delete it and then create a new one using the W10 install iso directly. When creating the new VM, uncheck the "start after creation" box and edit the xml after it's created. Paste in the uuid, then start the VM and do the W10 install.
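     One way to grab the uuid from the old VM before deleting it is from the console (the VM name here is just an example):
     virsh dumpxml Win7 | grep uuid
     That prints the line you carry over into the new VM's xml, something like:
     <uuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</uuid>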
  7. Nice, using the uuid is the key to keeping the activation. I screwed up my W10 install with a bad driver so I deleted it and created a new VM but copied the uuid from the old VM into the xml. The new W10 VM install came up activated. Dan - I would upgrade them all and record the uuid for each install. Then you can switch some back to W7 installs. In the future, to "upgrade" one of the W7 installs to W10 you delete the W7 VM and create a new W10 VM using the same uuid. W10 activation is stored online keyed to the machine, so once the machine has been upgraded and registered at Microsoft, you can switch it back and forth between W10 and any other OS as much as you want. A new install of W10 on the machine will be activated as soon as it goes online. In my test example above, the original W10 install was seabios and I used OVMF for the new install and it still activated fine. You can only change the bios by creating a new VM so it's not a huge concern. I have also tried changing the machine type and the number of cores and the activation has stuck so far.
  8. I've been playing with W10 upgrades to see what happens. I've found it seems to activate only on the virtual disk that you are using. Microsoft says that W10 activation is tied to hardware, but the VM's don't seem to work right. I set up one activated W10 VM, then created a second identical VM, but it would not activate. I can at least somewhat understand this one, but in theory the ID of both VM's should be the same. I set up an activated W10 VM and copied the virtual disk. I created a new VM using the copied virtual disk, and when I started it that version of W10 was not activated. I'm not sure why it works this way since one was a copy of the other. So, I have this feeling that you can upgrade W7 to W10, but if you lose or damage the VM beyond repair you likely can't recreate it without buying W10.
  9. Volume map to a server location where the file is stored and simply copy it inside the docker?
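     As a rough example (the paths and container name are made up): map /mnt/user/stuff on the host to /stuff in the container, then copy the file to wherever it needs to live inside it:
     docker exec -t -i mycontainer /bin/bash
     cp /stuff/somefile /destination/somefile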
  10. There is a sticky about pinning a specific core to a docker, but I think that is about it. They just use the ram they need.
  11. unRAID will still recognize the drives and current array even after hardware changes assuming you have a valid configuration. No need to start over. Typically, just plug the USB into the new server with the same drives and it will boot and work just like nothing was changed.
  12. After first adding my movies share, the library scan would just hang every time. The docker doesn't hold all the information and images, so of course it's scraping data from outside sources, and this will take time to complete on a first scan where the images don't exist. But, as I already noted twice now, in my case the first scan of the library would hang every time, so it had nothing to do with bandwidth. It was simply hanging on trying to identify a movie or movies.
  13. When I first installed it, likely about a year ago, the library scanning would just keep hanging and I eventually had to manually identify a lot of my movies. They were all identified by the old MediaBrowser, but the new server still had issues.
  14. I thought Plex pretty much needed to use their defined ports for any clients to connect. Does the server have a second network connection available? If yes, I think you could make a Plex docker use it. It requires some manual command line use to build the docker, since the unRAID interface doesn't allow what I think you have to do. If you can create another way to have another IP address, like the Pipeworks mentioned earlier should in theory do, then you could also run 2 instances attached to 2 different IP addresses. Otherwise, here are the ports. If you use bridge mode in a second docker then you can put the ports on the container side and put different ports on the host side. Also set the correct protocol for each.
      With a free ethernet IP address - stop your production docker and do the above using direct 1:1 port mapping to create the test one. "Save" so it makes the image, and then the docker run command creating the container will be given in the "command" window. You will see a port mapping like -p 32400:32400/tcp etc., 1 for each port. Copy the command and change it so every port map looks like this instead: -p IP:32400:32400/tcp, where IP is the spare IP address the server has. Delete the existing container only, not the image you just created, then run the modified command via the command line. Then the docker will appear on the unRAID interface so you can stop and start it.
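      A rough sketch of what the modified run command could end up looking like (the IP, names, paths and image are all just placeholders, and only some of Plex's ports are shown):
      docker run -d --name="PlexTest" --net="bridge" \
        -p 192.168.1.51:32400:32400/tcp -p 192.168.1.51:32469:32469/tcp \
        -p 192.168.1.51:1900:1900/udp -p 192.168.1.51:32410:32410/udp \
        -v /mnt/user/appdata/plextest:/config -v /mnt/user/Movies:/movies:ro \
        yourrepo/plex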
  15. You're right, if I just edit it again I can delete it. It's still rather dumb that it can't be deleted at first, but you can't help that. I did find a change that was odd, though. I can't find any way to put the channels into a Channels group like before, so they all appear individually.
  16. I could add the template no problem, the WebUI link works, and it seems to work fine. I just stopped the old docker and added this one to see what would happen, and it all seems fine. It's annoying that there is now a /mnt volume that can't be deleted, which I had to point, read-only, at a low hanging directory so I could make my own media volumes. Does your docker file force that, or is this the new way unRAID does it?
  17. You basically put the script in your GitHub repository and then put commands like this at the end of the Dockerfile to install and run it:
      ADD install.sh /tmp/
      RUN chmod +x /tmp/install.sh && /tmp/install.sh && rm /tmp/install.sh
      The above copies the script install.sh into the docker, runs it and then deletes it. But then, I'm guessing you really want to do it without creating your own repository, right?
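      For context, a bare-bones Dockerfile using those lines might look like this (the base image is just an example):
      FROM phusion/baseimage
      ADD install.sh /tmp/
      RUN chmod +x /tmp/install.sh && /tmp/install.sh && rm /tmp/install.sh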
  18. Here is a post with an example using the GUI of the server. Basically, you click "Add Container" but instead of picking a template you add the info manually. http://lime-technology.com/forum/index.php?topic=37732.msg349938#msg349938
  19. It's trying to find the file here - /downloads/downloaded/torrents/Rectify.S03E01.Hoorah.1080p.WEB-DL.DD5.1.H.264-NTb[rarbg] - where on your server is that file? For all of those applications to work together, /downloads should be mapped to /mnt/user/downloads in each of them. Don't get fancy and point one program to /mnt/user/downloads and another to /mnt/user/downloads/complete/TV.
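      For example (the app names are just placeholders for whatever downloader and manager you run):
      downloader docker:  /mnt/user/downloads (host)  ->  /downloads (container)
      TV manager docker:  /mnt/user/downloads (host)  ->  /downloads (container)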
  20. unRAID doesn't cause the stall, but the band-aid fix on the unRAID side is to use spin-up groups. It's a Windows problem. Windows basically creates one connection to the server, so when a second program tries to access a spun-down disk the connection hangs until it gets a response.
  21. It reads like you copied the flac directory from disk4 to a flac directory on disk3. If you did, then that directory also automatically becomes part of the same flac share. But then, maybe you copied the flac directory from disk4 into a different directory on disk3. That would be fine, because the copy would not become part of the flac user share. I really don't know what you did because what you wrote really wasn't clear. The rule is that any top level directory on any of the disks with the same name is part of the user share of the same name. It doesn't matter how you set up the share; even if you use include and exclude disks, it doesn't change the way all same-name directories are combined into user shares.
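      A made-up example of the rule (the album names are just placeholders):
      /mnt/disk3/flac/Album-A and /mnt/disk4/flac/Album-B both appear together in the user share as
      /mnt/user/flac/Album-A and /mnt/user/flac/Album-B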
  22. Spin-up groups don't just solve an IDE drive issue. They also eliminate playback pauses when the same Windows PC is playing back a file and then tries to access a file on a spun-down disk at the same time. It's a workaround for Windows.
  23. The share will be gone if you delete the share directory on EVERY disk - both array and cache. Obviously, someone read selectively and didn't follow the first part of my instructions.
      rm -r /mnt/disk1/flac
      rm -r /mnt/disk2/flac
      rm -r /mnt/disk3/flac
      etc
      Actually, if you do the above you really don't need to delete the cfg file, but you might as well delete it to get rid of a lingering file that no longer has any use.
  24. If the share is empty, then you're supposed to be able to delete the name to get rid of it. The most foolproof way is to go to each disk share and delete any share directories that exist. Then go to the flash drive under config\shares and delete the config file. Then stop and start the array and the share will be permanently and completely gone.
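      From the console, the config file part would look something like this (the flash drive is normally mounted at /boot, and "flac" is just the example share name from the post above):
      rm /boot/config/shares/flac.cfg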
  25. See if, when you do an Add, you find the my-xxx templates in the list. If you pick those, they will re-install as they were. FYI, the appdata is not the dockers, it's just stored data the dockers use. You should have a docker.img file, and that holds the actual dockers.