Everything posted by bungee91

  1. Is anyone running Active Streams in 6.1 without issue? I haven't had a lot of time to pinpoint it, but I believe it is causing my VMs to power off after some time. I may just be crazy, but removing it seems to have fixed my issue.
  2. I had this issue today and wasn't sure what caused it; I had just rebooted after updating to 6.1 RC6. I did the edit/save routine and it worked correctly right after. For some reason it was as if it had stopped listening on port 3389.
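     A quick sanity check I'd try the next time it happens (assuming the RDP port is mapped straight through to the host; the grep targets are placeholders): from the unRAID console, confirm the container still maps the port and that something is actually listening on it.
        docker ps | grep 3389          # is the 3389 port mapping still shown?
        netstat -tlnp | grep 3389      # is anything actually listening on it?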
  3. Coming back to this... I haven't RDP'd into my myth docker in a while, so I just tried in order to help bungee find the log files. RDP failed, when it had always worked in the past for me. So I shut down every docker I have that uses RDP connections (even though they are on different ports)... still no go. I restarted the myth docker and that didn't help either. I eventually ran a "force update" on the myth docker and I was immediately able to RDP again. Not sure what happened. Anyway bungee, I'm trying to find the logs now. John
     Thanks, and I'm sorry you're having the issues RDP'ing! I was able to find them, but no clue how to do that from outside of the Docker..? I RDP'd in, went under the file system, and they're right there! I have what I needed, and I know what's going on now! The question is: how do you make MythTV use another tuner (I have 4) if another computer (a Windows VM I run) is using the first tuner that MythTV wants to use? Basically I'm recording the same thing in two places, and both want the same tuner. WMC grabs it; Myth tries, fails, tries again, fails, gives up. I would think it would just naturally go to the next tuner, but apparently it doesn't! I can fix this easily by removing the redundancy for this show (it was a thing I did at first), however I will have this issue again soon when I allow someone to stream football from my house to another state, meaning I will have the same thing recording twice (once in WMC, using ServerWMC and Plex to transcode/send to another state, and once in Myth for what I'm watching here at home).
     edit: Found this... Apparently Myth doesn't play nice with others! "First, MythTV does not share tuners well. Never has (after all, it started with tuners inside the PC it was running on, and sharing was not even considered), and no one has contributed fixes to make it do so for potentially sharable network devices such as the HDHR. You should not plan to share tuners with MythTV (unless you are going to be writing and submitting the patches to perform the required sharing and locking), as you will find that experience poor as other solutions (WMC and DLNA clients) pull the tuners out from under the use of MythTV." https://www.silicondust.com/forum/viewtopic.php?f=21&t=19537
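     For what it's worth, SiliconDust's hdhomerun_config tool (run from wherever you have it installed) can at least make the WMC-vs-Myth tuner contention visible. A sketch; the device ID below is a placeholder, yours comes from the discover output:
        hdhomerun_config discover
        hdhomerun_config 1040XXXX get /tuner0/target    # "none" means the tuner is idle
        hdhomerun_config 1040XXXX get /tuner0/lockkey   # shows whether something has it locked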
  4. Edit: I found the files through RDP in the docker using its file manager, and copied them back to the server. Can someone tell me where I could find the log file for MythTV to diagnose an issue I am having? I looked in the MythTV wiki and it says it should be /var/log/mythtv/, however that doesn't seem to apply to the docker here. I tried to look for it in the /home/mythtv path I set at install (which for me is /mnt/cache/docker/Programs/MythTV), however I didn't find it there either. -- My issue (for those who want to read, and maybe have a solution, which would be very helpful): basically, I have a daily recording that keeps failing. I get two recordings of nothing for the same thing, then it gives up. I can record manually, or make a new recording of something else, and it works fine. The tuner in question works when I test live TV or use the HDHomeRun view option, so it's not the tuner. I'm assuming the log will tell me something; MythWeb doesn't seem to have anything helpful listed in it. I don't think this issue is anything with this Docker, more something with MythTV in general, and fixing it would be great! I had this issue previously, so I deleted everything and set this back up from scratch. It worked for about a week, and then started failing again. I assume after it fails the first time it tries again, fails, and gives up. Both files are unplayable, I believe around 8KB in size or something like that.
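     In case it helps anyone else hunting for these: since the logs live inside the container's own filesystem, you should also be able to reach them from the unRAID console with docker exec, without RDP'ing in at all. A sketch; the container name and exact log filename are guesses, check docker ps and the directory listing first:
        docker ps                                  # find the MythTV container's name
        docker exec mythtv ls /var/log/mythtv      # list the logs inside the container
        docker cp mythtv:/var/log/mythtv/mythbackend.log /mnt/cache/   # copy one out to the server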
  5. You can try passing it the ROM file in order to help with this situation. It's a pretty easy thing to do: basically download the ROM (look in the guide), place it somewhere on your server, add the location to the XML, and start. Detailed here: http://lime-technology.com/wiki/index.php/UnRAID_Manual_6#Edit_XML_for_VM_to_supply_GPU_ROM_manually Unfortunately (in my experience) if this doesn't work, you don't have too many other options. You can see if that card has a firmware update from the manufacturer that may help alleviate this. You can also try OVMF instead of SeaBIOS (I didn't look specifically, but I assume that is what you're using), as it may help too. This change may require you to make a new XML, as (for example) Windows can't be switched from SeaBIOS to OVMF all willy-nilly. If none of this reliably fixes the issue, there's not much more that I know of to help. Some cards work very well; others, not so much.
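     For reference, the XML addition from that wiki page ends up looking roughly like this inside the GPU's hostdev block (the bus/slot address and the ROM path below are placeholders; use your card's address and wherever you saved the ROM):
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </source>
          <rom file='/mnt/cache/vbios/gpu.rom'/>
        </hostdev>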
  6. Yes, this is done automatically when using the VM manager; I don't see it referenced in this guide as something that needs to be done manually.
  7. A NIC is a device that unRAID will bind; the stub effectively tells unRAID to leave it alone, so it is not allowed to be assigned to it. This way unRAID won't put it into use, and you can then pass the device through to your requested VM. If you do not use this, it will be assigned to unRAID and will not be available for pass-through. Basically, you stub it, and then add the proper line to your XML for the device slot that is listed. There is more magic going on in the background of the VM manager, as it will silently assign that device to vfio-pci (I think that's still what it uses) for use in the VM.
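     To make the stub part concrete: on unRAID 6 this is typically done by adding the device's vendor:product ID to the append line in syslinux.cfg on the flash drive (the 8086:10d3 ID below is just an example NIC; get yours from Tools -> System Devices or lspci -n):
        label unRAID OS
          menu default
          kernel /bzimage
          append pci-stub.ids=8086:10d3 initrd=/bzroot
     After a reboot the NIC is left untouched by unRAID and can be handed to the VM via a hostdev entry in its XML.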
  8. That's good to know, thanks (also I think you're following me.. )
  9. I'm not sure if this is needed anymore, however I am also just guessing. For instance if I go into the share page and click compute on a share, it will show me all the drives with that share on it. If I delete that folder from a drive, and hit compute again, it is now gone and no longer listed.
  10. Not necessarily, but I'm not sure... If you haven't written anything to the appdata folder since the change, I would guess it would not. As soon as you did, though, it would be there. However, just create a folder called appdata on the cache drive and you would then have a folder there. That mapping is a physical disk mapping (/mnt/cache/appdata), as opposed to a user share mapping (/mnt/user/appdata). So, as an example, if you added a folder called "media" on disk1, it is there as soon as you make the folder. The path would be /mnt/disk1/media; no "share" needs to be created, as it is a path. In that example, if you wanted it as a user share, you would make "media" a user share, and then you could point to /mnt/user/media and it would find the files that are in a "media" folder at the root of any drive. No, see below.
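     A minimal illustration from the console, if it helps (assuming a "media" user share exists as in the example above):
        mkdir /mnt/disk1/media     # create the folder directly on disk 1
        ls /mnt/user/media         # the same contents show up under the user share view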
  11. First off, welcome! It's actually even easier than that (give or take). No need to stop the array to change a disk share to cache-only, but you will need to move the files yourself. Also, no need to fix the mapping to the container: regardless of which disk the files are on, you can still point it to /mnt/user/appdata, and if the share is set to cache-only, it will only be on your cache drive! So:
     (1) unRAID Main > Shares > appdata: select Use cache disk: only, and remove any include/exclude; it's not needed. Cache-only is cache-only. Apply, done.
     (2) Move the files from disk 1 to the cache drive. This can be done in multiple ways, however you ABSOLUTELY don't want to copy from a user share to a disk share (or whichever way that is; mixing the two is the problem). The easiest way is to just use another computer and copy the appdata folder from disk 1 to the cache drive (if in Windows, use the basic file explorer copy/paste). If you do not currently export the disk shares, click on each disk (1, and cache), set export to yes for either SMB or NFS, then use SMB or NFS from another PC to copy it to cache, and when finished delete the copy from disk 1.
     (If you don't feel comfortable using SSH, please don't; it shouldn't be needed.) I have noticed some issues at times attempting to copy in-use or permission-locked files, so if that's the case and you know how to use PuTTY to SSH in, just type this:
     cp -avr /mnt/disk1/appdata /mnt/cache/appdata
     Once it's successfully transferred (please verify the contents before continuing by checking your cache share for the appdata folder):
     rm -rvf /mnt/disk1/appdata
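     One extra hedge before running the rm, if you want it: a recursive diff should come back silent when the copy is complete and identical (diff ships with unRAID's base system as far as I know):
        diff -r /mnt/disk1/appdata /mnt/cache/appdata   # no output means the copies match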
  12. edit: never mind; following the video now to a "T", and my question was answered in there!
  13. Looking good! I assume a notification will be made if/when triggered?
  14. This is true, but as you state, it'd be a fringe case. I think this in general would be a valuable addition as a feature in whatever way it is implemented. The idea as posed is mainly for a critical temp, where off is much better than continued heat build-up. However, I also think it could be done better to help avoid getting to that point, and thus requiring a completely off state.
  15. My only question would be: is it a good idea to turn off a disk in a hot state? I know that sounds weird, but off means no fan spinning (if the fan is broken it's a moot point), so it is not being actively cooled. I think of an overheating engine and shutting it off... You do not want the temp to rise, however turning it off kills the water pump, radiator fan, etc., and the temp just sits there until it passively cools. So, if you're going to implement this, it may warrant some consideration. Also, what about a staged response? If all we were to do is shut off the server, why not just shut down that disk? It is effectively the same, however the fans (the ones that are still working) are still moving air. This can go the opposite way, as there are still devices generating heat, but I think it is still something that should be pondered. A rough sketch of the staged idea is below.
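     Just to make "staged" concrete; this is pseudocode more than a polished script. The device name, array slot, and thresholds are all made up, smartctl does the temperature read, and mdcmd/powerdown are the unRAID-side calls I'd expect to use (verify both exist on your build before trusting this):
        #!/bin/bash
        DEV=/dev/sdb    # hypothetical hot disk
        SLOT=1          # its array slot number
        TEMP=$(smartctl -A $DEV | awk '/Temperature_Celsius/ {print $10}')
        if [ "$TEMP" -ge 60 ]; then
          /usr/local/sbin/powerdown               # stage 3: clean shutdown, last resort
        elif [ "$TEMP" -ge 50 ]; then
          /usr/local/sbin/mdcmd spindown $SLOT    # stage 2: spin down just that disk
        elif [ "$TEMP" -ge 45 ]; then
          echo "Disk $DEV at ${TEMP}C" | logger   # stage 1: warn only, fans keep moving air
        fi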
  16. One more for you, and the obvious question of... how's it going? (The point of this thread, not you; however, I suppose you also... )
     Avermedia A188C:
     06:00.0 Multimedia controller: Philips Semiconductors SAA7160 (rev 01)
     It is also defined as a "multimedia controller".
  17. "I couldn't reproduce, but will investigate this soon." K, thanks!
  18. Any thoughts on this? I haven't tried again, so I'm not sure if it's a fluke or reproducible. Also, after a reboot the drives were listed again, however they showed 2.2TB and they are 4TB drives; the preclear script properly listed them both as 4TB.
  19. Think I may have found a bug. Powered on the server with 2 new unassigned drives, array set to auto-start. Both drives showed up in the unassigned devices area initially; I hit spin down for all disks, and the drives are no longer listed, showing "No unassigned disks available". As I was planning to do with the new drives anyway, I ran preclear_disk.sh -l and both unassigned disks show up fine; both are currently pre-clearing as expected. I tried to refresh, etc., however they still don't show up.
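     For anyone else who lands here, the preclear usage I'm referring to (sdX is a placeholder; the -l listing tells you which device letters are safe to use):
        preclear_disk.sh -l            # list unassigned disks eligible for preclear
        preclear_disk.sh /dev/sdX      # start a preclear on one of them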
  20. Tuner Card: AverMedia M780 (Combo PCI-E)
     06:00.0 Multimedia video controller: Micronas Semiconductor Holding AG nGene PCI-Express Multimedia Controller
     (I technically have this uninstalled from my server right now, but I have this output from a previous lspci run; I would assume it'd report exactly the same under Tools -> System Devices once I install it again.)
  21. Sure thing, attached... Image built fine, just had to be patient (read: go do something else!). syslog.txt
  22. *I just read that others are having this issue... Again, likely not expected, but if you are patient it will finish building the new image. Watch the image size growing through the cache directory share (or wherever it lives; it grows slowly) and you'll know it's working and not frozen. For some reason (IDK, maybe it's normal, but I doubt it!) when I delete docker.img and recreate a new docker.img with exactly the same name, it takes a LONG time to create, like hours! This previously happened when I had a corruption issue (ran out of space), so I deleted my 10GB one and made a new 15GB one with the exact same name. The UI just spins and spins, however I can see that it is still going by looking at the cache drive and seeing the image size growing. The exact same issue I had when doing this previously (10GB to 15GB) is happening now, after I deleted and recreated for the new No-COW image. I'd assume this isn't normal; thought I'd chime in and see if it's fixable. It will eventually finish, but no lie, almost an hour to build a 15GB image on an SSD cache drive.
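     If you'd rather watch from a console than through the share, something like this works (assuming the image lives at the root of the cache drive; adjust the path to wherever yours is):
        watch -n 10 ls -lh /mnt/cache/docker.img    # size should keep creeping up while it builds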
  23. Thank you, I think the data will be preserved just fine; also, I just wouldn't have had a clue this happened without you mentioning it (thank you BTW!). I think, for ease in regards to the learning curve, I will just mount the drive in a spare PC, load a live version of Linux, and copy the data over. Unless this is not recommended, it seems pretty straightforward.
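     For the copy itself, something along these lines from the live session should do it; mounting read-only keeps the old drive safe (sdb1 and the destination are placeholders, and the live distro needs the reiserfs module, which most still ship):
        mkdir /mnt/old
        mount -t reiserfs -o ro /dev/sdb1 /mnt/old    # the old RFS data drive
        cp -av /mnt/old/. /mnt/destination/           # copy everything across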
  24. So that would be a "yes"... Dammit! OK, I will need to load the old drive outside of the array and copy my files back to the new drive. I would have left it as RFS if I had known this to be the case; honestly, I don't think this is well described in the messages on the GUI (or I am dumb, but I think I am a tad more technical than the average "noobie").