bungee91

Posts posted by bungee91

  1. I think there's a bit of confusion here, so I want to make sure you're pointing yourself in the right direction.

    You used to use a Mac at this location to connect to, and then control the WebUI of UnRaid, correct?

     

    The current proposal is to get VPN access to the network where your UnRaid server is located, and therefore on the same subnet/network.

    If this is the case, then perfect: from wherever you are, far far away, you use your main computer, connect to the VPN, and then type the IP of your UnRaid server. Success!

    You can then get to the WebUI, and SSH in for whatever commands or access you may need.

     

    If this is all you want to do, then no hardware is involved. I assume if you messed something up (updated UnRaid and it hangs, etc.) you could contact your friend and tell him to hit reset, pull power, etc. However, you could certainly cover that "phone a friend" issue by having IPMI (or maybe even vPro, I'm not sure of its limitations) to control lower-level things, as suggested.
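
    As a minimal sketch of that (the IP is just an example; substitute your server's actual LAN IP), once the VPN is up it's simply:

    # WebUI: browse to http://192.168.1.100 from the VPN-connected machine
    # console access over SSH:
    ssh root@192.168.1.100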

  2. 1. Can I do it?  - Yes, no issues.

    2. Should I do it? Up to you. Would performance suffer? From VNC'ing into a passthrough machine, no, not at all. For office and graphic design work with no games or video, RDP would likely work better than VNC, but you can try either (I'd recommend RDP if we're talking about a Windows machine).

    3. Is there a better solution? This should work fine, but you likely have some options. Controlling a VM from another passthrough VM should be about as close as you can get to a "bare metal" machine accessing a VM, and I doubt you'd notice any difference (I certainly don't).
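
    For example, from another Windows machine (or a Windows passthrough VM) the built-in RDP client is all you need; the IP here is hypothetical:

    mstsc /v:192.168.1.50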

     

  3. To answer your question directly, no, 6.2B21 does not directly resolve this issue with an Nvidia GPU as the only card installed and used for passthrough to a VM. There is a thread that supposedly resolves this by backing up the ROM, or loading it in some way that allows an Nvidia card to be used when in the primary slot and "stolen" from the console (a rough sketch of that approach is below). I can't say for sure this will work for you though.

    If you have the expansion slot to add another cheap card, it may be the easiest option. I'm not certain I'd call this a defect, but IDK just my thoughts.
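
    For reference, the technique in that thread boils down to dumping the card's vBIOS from sysfs and pointing the VM at the dumped file. The PCI address and path below are examples, and I can't promise the dump comes out clean while the card is the primary/console GPU:

    # dump the vBIOS of the GPU at 01:00.0 (substitute your card's address from lspci)
    echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
    cat /sys/bus/pci/devices/0000:01:00.0/rom > /mnt/user/isos/gpu.rom
    echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom

    The dumped file is then referenced with a <rom file='/mnt/user/isos/gpu.rom'/> line inside the card's <hostdev> section of the VM XML.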

  4. Mythlink would give you proper show names that point to the MythTV numeric file names.

     

    Does TVHeadend support the automatic expiration of shows? Like Show A, keep 10 recordings, delete the oldest to make room for new ones? Also, is it simple to just give it my Schedules Direct creds and have it grab my EPG and keep it updated? Also, does it have scripts that can remove commercials?

     

    I wouldn't mind getting off MythTV too, if I can find something that takes care of all those rules, looks at the disk space, and makes automated judgments on what to delete based on what I've watched and the number of shows I defined to keep.

     

    Thanks to this Docker it is painless to get going, but anything lighter, simpler, and faster that can match those aforementioned features is worth a look.

    Hope that helps.

     

    BTW - What is the best Docker implementation of TVH out there?

     

    That's a lot of questions, I only had 1... LOL

    While this is a bit of thread derailment, I can answer some of them. For the others, hop over to the TvH thread and ask in there.

    The best Docker/thread is located here: http://lime-technology.com/forum/index.php?topic=37671.0 . Install it (preferably) from Community Applications.

    Does it support Schedules Direct? Yes; easily, no. Meaning it does support it natively, but setting it up is a bit of a kludge. For that specifically, look here (quoting myself): http://lime-technology.com/forum/index.php?topic=37671.msg442261#msg442261 (more info above that post).

     

    Remove commercials: yes, but I have not used that as of yet.

    Expire recordings: I'm pretty certain yes, though it wasn't spelled out exactly as "keep 10 recordings of XXX"; the specifics would be better asked in that thread, or better yet on the TvH forum or even the TvH subforum on the Kodi forums.

  5. Check out HandOfMyth. You can set up a User Job to be called when a recording is complete. That way, MythTV passes the data to the script and you are then free to do what you want with it. You can place the script in /home/myth and have that bind-mounted to a path on your host so it doesn't need to be in the container. The User Job can be set up in MythWeb.

     

    Thanks!

    My only question is: I just want to have this run once to export recordings, as I'm moving over to camp TvHeadend (nothing particularly wrong with Myth/this Docker, but channel-change speed and LiveTV loading are blazing fast in TvH). Therefore I don't need it to run after every recording, just the one time. Sounds like it will do what I need though, and I'll check it out.
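
    (For anyone following along: a MythTV User Job is just a command line with substitution tokens, set up in MythWeb or mythtv-setup. The script name below is made up; %DIR%, %FILE%, %TITLE% and %SUBTITLE% are standard job tokens.)

    /home/myth/export-recording.sh "%DIR%/%FILE%" "%TITLE%" "%SUBTITLE%"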

  6. Question hoping someone can answer (someone here knows, I'm certain  ;) ).

     

    I'd like to export my MythTv recordings with (useful) proper file names.

    There are instructions on how to do this here http://parker1.co.uk/blog/?p=182 for a normal Myth/Ubuntu install.

    The part about old/new database is not needed.

    It's basically run the perl script, and Myth will rename the files from the info in its database, and make a copy somewhere.

     

    Can this be completed with this Docker? I fumbled around in the Docker with RDP but got nowhere.

    Not certain if this can be done natively, or if running a perl script is even supported without adding a package.
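
    In case it helps whoever answers: my rough understanding of the shape of it (the container name and paths are guesses on my part) is something like the below, since mythlink.pl ships with MythTV's contrib scripts and creates readable symlinks that you can then copy out:

    docker exec -it MythTV bash
    # inside the container, assuming mythlink.pl is on the PATH:
    mythlink.pl --link /home/myth/pretty --format '%T/%T - %S'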

  7. Hey bungee91,

    do you think this might fix my issue?

    http://lime-technology.com/forum/index.php?topic=39954.msg467996#msg467996

     

    I wonder if this mod will cause issues when no graphics card is plugged in the PEG slot?

    I just need the PCI card to work.

    I have no clue what this may do without an actual card installed. I suppose it wouldn't hurt to try, however I take no responsibility for that assessment. If you do attempt it, report back with the outcome.

  8. If it's 6.2 related, increasing the image does not seem like a reasonable solution.

    Did you try to find out which container consumes the space, to see if it's a beta-related issue?

     

    If not, I would suggest installing "cAdvisor" from Community Applications and then having a look at the resource monitor in Community Applications.

     

    I ended up sticking at 20GB, as I only use ~6GB from initial Docker install with my standard Docker apps installed.

    I haven't been monitoring the free space for this, so I'm unsure which Docker would have caused this, but I do understand this may not be directly 6.2B related (however, it never happened prior to now).

    I had run the script from a Docker thread (I don't have the link right now) to remove cache/left-behind items in the Docker image, and at that time (months ago) I did not have any offenders. However, since then I have installed Plex, and I bet it consumed all kinds of space for no good reason (some hate for Plex there, and others have had this issue too).

     

    What surprises me is this:

    My Docker.img was (almost certainly) full. OK, fine, Docker should become "broken".

    Why does a full disk image cause a read-only/write-failed condition for libvirt.img, or for the QEMU XML failing to be written?

    This may have made the rest of the cache drive's writes fail as well, however I am uncertain.

     

    Now knowing this is a bad condition to get into (sorry, I haven't looked, as I never had this issue before), can we set a notification for Docker image utilization at a given %?

    I know I get this for my array disks.
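
    A quick-and-dirty way to see which container is the offender, for anyone in the same spot:

    docker ps -a -s          # per-container size, including the writable layer
    df -h /var/lib/docker    # overall utilization of the mounted docker.img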

     

  9. A lot of hate on the use of MySQL for a shared library in Kodi....

    It's not really THAT hard to set up, even for a novice (and I am by no means an expert).

    The biggest thing I hated was waiting for OE to update; however, I've replaced it with Kodibuntu, so that's no longer an issue.

     

    While Kodi does not transcode, it has more available options for a lot of things.

    I use Kodi for LiveTV/PVR, and even though it is far from perfect, there are many backend options that work with Kodi; last I checked this was not the case with Plex or Emby.

    Emby has a new PVR for use with HDHR devices; however, when it was being developed it only supported the newer DLNA HDHRs (HDHR4US for OTA), and some PVR actions required you to pay Emby for some sort of subscription or something... To me that was a ride on the fail boat. Maybe all of this has changed, IDK, but it was unfortunate at the time. I do pay for Schedules Direct, so I'm not that cheap, but having to pay for PVR to fully work (was it series recordings?) was just weird.
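
    For anyone curious how little the MySQL setup actually involves: it's basically one advancedsettings.xml per Kodi box pointing at the database host (the IP and credentials below are placeholders):

    <advancedsettings>
      <videodatabase>
        <type>mysql</type>
        <host>192.168.1.10</host>
        <port>3306</port>
        <user>kodi</user>
        <pass>kodi</pass>
      </videodatabase>
    </advancedsettings>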

     

     

  10. Just had a nasty BTRFS loop error, with all VMs and Docker no longer working at all. Grabbed diagnostics right after, rebooted, same issue. Ran a scrub on the BTRFS cache drive with no errors. I'd really appreciate help in getting this fixed.

    Diagnostic attached.

     

    Edit: I think it all started here, however I have over 200GB free on my cache drive.

    Apr 25 21:00:21 Server shfs/user: shfs_write: write: (28) No space left on device
    Apr 25 21:00:25 Server kernel: loop: Write error at byte offset 2902630400, length 4096.
    Apr 25 21:00:25 Server kernel: blk_update_request: I/O error, dev loop0, sector 5669200
    Apr 25 21:00:25 Server kernel: loop: Write error at byte offset 2902760960, length 512.
    Apr 25 21:00:25 Server kernel: blk_update_request: I/O error, dev loop0, sector 5669455
    Apr 25 21:00:25 Server kernel: loop: Write error at byte offset 2902891520, length 1024.
    Apr 25 21:00:25 Server kernel: blk_update_request: I/O error, dev loop0, sector 5669710

     

    When this happened, all VMs went into a paused state; Docker looked to be on, but I think it was dead too.

    After the reboot Docker has disappeared and I cannot start VMs; they error out (I forget the message). If I edit a VM, a read-only (or write failed, I can't recall) error pops up.
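
    (For what it's worth, matching the "dev loop0" in those errors to an actual image file is just a matter of listing the loop devices and their backing files:)

    losetup -a
    # lists each /dev/loopN alongside the image file behind it, e.g. docker.img or libvirt.img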

     

    So, this is certainly related to the write errors I was getting, however interesting to diagnose.

    I decided to disable my VMs and delete the 1GB libvirt.img located at /mnt/user/system/libvirt/libvirt.img (from the console). The rm went fine, and the image was deleted.

    From the WebUI I increased the size to 4GB (I have the space) and re-enabled it; however, a write error appeared in the log and the image was never created:

    Apr 25 22:50:02 Server root: Creating new image file: /mnt/user/system/libvirt/libvirt.img size: 4G
    Apr 25 22:50:02 Server shfs/user: cache disk full
    Apr 25 22:50:02 Server shfs/user: shfs_create: assign_disk: system/libvirt/libvirt.img (28) No space left on device
    Apr 25 22:50:02 Server root: touch: cannot touch '/mnt/user/system/libvirt/libvirt.img': No space left on device
    Apr 25 22:50:02 Server shfs/user: cache disk full
    Apr 25 22:50:02 Server shfs/user: shfs_create: assign_disk: system/libvirt/libvirt.img (28) No space left on device
    Apr 25 22:50:02 Server shfs/user: cache disk full
    Apr 25 22:50:02 Server shfs/user: shfs_create: assign_disk: system/libvirt/libvirt.img (28) No space left on device
    Apr 25 22:50:02 Server root: failed to create image file

    I then decided to disable Docker under Settings/Docker by setting Enable to No.

    After doing this I was able to create the new 4GB libvirt.img, and edit my primary VM XML (from the VM manager) and it booted and is working as it should.

     

    So I assume my 20GB docker.img filled up (somehow) and caused a lot of havoc on the cache drive, leading to the VM (and libvirt.img) not properly functioning.

    I will now plan to delete my Docker.img, increase it to 30GB (which seems extremely excessive for my Docker usage) and see how it goes.

    A new diagnostic of my adventure in figuring this out is attached.

     

    Edit/Update: Fixed, all is well  8)

    server-diagnostics-20160425-2306.zip

  11. Just had a nasty BTRFS loop error, with all VMs and Docker no longer working at all. Grabbed diagnostics right after, rebooted, same issue. Ran a scrub on the BTRFS cache drive with no errors. I'd really appreciate help in getting this fixed.

    Diagnostic attached.

     

    Edit: I think it all started here, however I have over 200GB free on my cache drive.

    Apr 25 21:00:21 Server shfs/user: shfs_write: write: (28) No space left on device
    Apr 25 21:00:25 Server kernel: loop: Write error at byte offset 2902630400, length 4096.
    Apr 25 21:00:25 Server kernel: blk_update_request: I/O error, dev loop0, sector 5669200
    Apr 25 21:00:25 Server kernel: loop: Write error at byte offset 2902760960, length 512.
    Apr 25 21:00:25 Server kernel: blk_update_request: I/O error, dev loop0, sector 5669455
    Apr 25 21:00:25 Server kernel: loop: Write error at byte offset 2902891520, length 1024.
    Apr 25 21:00:25 Server kernel: blk_update_request: I/O error, dev loop0, sector 5669710

     

    When this happened, all VMs went into a paused state; Docker looked to be on, but I think it was dead too.

    After the reboot Docker has disappeared and I cannot start VMs; they error out (I forget the message). If I edit a VM, a read-only (or write failed, I can't recall) error pops up.

    server-diagnostics-20160425-2101.zip

  12. I have been coming across a weird issue lately which seems to have crept in with this new version. Everything was working fine for ages, but now my VMs keep locking up with the following error message:

     

    Apr 23 15:49:00 Archangel kernel: pcieport 0000:00:03.0: AER: Uncorrected (Non-Fatal) error received: id=0018
    Apr 23 15:49:00 Archangel kernel: pcieport 0000:00:03.0: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, id=0018(Requester ID)
    Apr 23 15:49:00 Archangel kernel: pcieport 0000:00:03.0:   device [8086:2f08] error status/mask=00004000/00000000

     

    I'm unsure how to trace what the requester ID actually points back to, but when this error appears both my VMs become paused in the GUI and cannot be resumed. On an attempted resume I get the message below:

    internal error: unable to execute QEMU command 'cont': Resetting the Virtual Machine is required

     

    Not sure what to do with this, as it is rendering my VMs unusable; without a force stop they will not do anything. VM 2 (the one named "Cat - SeaBios") will start up after the failure but refuses to see any USB devices attached to it.

     

    No idea what to do with this one at all. I was able to take a diagnostic after this error occurred.

     

    I've seen this error on my system after a nasty motherboard death issue. Anyhow, mine was related to memory timings, and I was able to change some settings and have never seen it again. Even if you haven't changed anything hardware related, I'd still run Memtest to be certain.
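
    As for tracing the requester ID: 0018 decodes to bus 00, device 03, function 0, which is the same 00:03.0 root port already named in the message. To see what actually sits behind that port, something like this should do it:

    lspci -tvnn          # tree view shows which devices hang off root port 00:03.0
    lspci -s 00:03.0 -vv # verbose details (including AER status) for the port itself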

  13. You have a btrfs formatted cache drive.  Are you running the VMs from your cache drive?

     

    This is from the KVM documentation at http://www.linux-kvm.org/page/Tuning_KVM

    Don't use the linux filesystem btrfs on the host for the image files. It will result in low IO performance. The kvm guest may even freeze when high IO traffic is done on the guest.

     

    Could this be the problem with those having VM lockups?

     

    Edit: I don't think this is specific to the "lockup" as UnRAID becomes very laggy and awful also, not just the VM's.

     

    Hmm... Yes, my cache drive is BTRFS, and it has 5 VMs (concurrently running) plus regular writes from caching recorded TV (I don't actually record that much).

    The image file I was writing to as a 2nd vdisk was on an array drive, and those are all XFS.

     

    That statement sounds a little overreaching to me (wouldn't you say?). I'm not questioning their guidance, but that's basically saying (for UnRAID here) don't have a BTRFS cache drive with VM images on it. I'm pretty sure this is the opposite of the current "recommended" use case, as BTRFS seems to be recommended primarily for that exact use case (cache, with apps/VMs). I'm pretty certain most of JonP's testing is with this exact setup, and users with cache pools have no option other than BTRFS.

     

    Anyhow, it could have been the issue, but this is the only time I've noticed it.

    I've been stable since stopping the game download.

    My old AMD C60 netbook handled the download just fine  ;D

     

    I may update the virtio drivers on that VM as I have not done so in a while (as I have had no reason to honestly).
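
    One commonly suggested mitigation (not UnRAID-specific, and it only affects files created after the flag is set) is to mark the directory holding the vdisks NOCOW so BTRFS skips copy-on-write for them; the path below is just an example:

    mkdir -p /mnt/cache/domains/nocow
    chattr +C /mnt/cache/domains/nocow
    # vdisk images created inside this directory afterwards are not copy-on-write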

  14. I'm going to hop into the "moving data breaks things in 6.2B21" camp.

    I didn't expect to be, but so be it.

     

    However I move things from a vdisk to array from Windows regularly and have not had an issue until this point.

     

    I have a Win8 VM (headless) that I use for torrents; it has a 2nd vdisk backed by a raw image that I use as an HDD in the VM to store stuff.

    I wanted to download a game I had just purchased using the GOG downloader; this was ~40GB across a bunch of files.

    Keep in mind I have had zero issues with 100+ GB of torrents down/up as this is a normal occurrence for this VM.

    Anyhow, downloading this game leads to the VM acting all kinds of weird: slow, laggy, etc. My torrent app crashes, and the VM hangs at restart.

    I left it alone for the night; however, this seems to cause network traffic/Dockers to become inaccessible (SSH works fine), as I couldn't get to my MythTV backend, and its webpage/port was unreachable.

    Try to shut down the VM, nope. Try to kill the VM, nope, an error about being unable to sigkill something something.

    SSH .. virsh destroy VMname (same error). Grab diagnostics. Powerdown -r (powerdown initiated), nada... Hard reset.

    Comes back up normal, dirty bit detected on USB, however no parity check "parity is valid".

     

    Thinking this was a fluke, I reload the VMs and restart my game download (and torrent app).

    5 minutes later, the same exact thing (I had been up for ~9 days prior to downloading this game).

    Powerdown -r (powerdown initiated), nada. Shutdown -r (going down) nada.

    SSH still works. Hard reset, back up, dirty bit detected, no parity check "parity is valid".

     

    I've decided that downloading games is bad for me, so I gave up (and have my netbook doing it now).

    I don't think the diagnostics are going to show you anything good, but attached.

     

    Everything is back up without the attempt to download this game in the VM (torrent app running, nothing downloading), and all is well again.

     

    I pulled 3 diagnostics while this was all happening, but nothing in the syslog that seems to point to anything.

    Also, when this is happening I cannot stop Dockers either; it is as if a process/thread (whatever) is holding everything else up.

    When I tried to access my flash drive through SMB during this (from within my primary VM), it barely loaded; it just kept waiting.
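
    If it happens again, it might be worth checking for processes stuck in uninterruptible sleep (the usual suspect when stop/kill commands go nowhere), something like:

    ps -eo pid,stat,wchan,cmd | awk '$2 ~ /D/'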

     

     

    server-diagnostics-20160420-1648.zip

  15. Is this just Haswell or Haswell-E as well? How did you get the information in the "code" section?

     

    This does not affect Haswell-E; my 5930K does not have this issue.

    My previous 4790S very much had this issue, and intel_pstate=disable was the best way to "fix" it.

    It is well known (from people with the issue, as others will tell you differently) that this is not a reporting issue, but a real issue with the CPU not properly throttling.

    I confirmed this with both temp and wattage at the plug.
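
    For anyone wanting to try the same workaround: it's just an extra flag on the append line in syslinux.cfg on the flash drive (your boot entries may differ from this sketch), and you can watch the effect afterwards:

    label unRAID OS
      kernel /bzimage
      append initrd=/bzroot intel_pstate=disable

    watch grep MHz /proc/cpuinfo    # clock speeds should now drop at idle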

     

     

  16. So I have to set the BIOS when I install the VM? Am I limited by my AMD CPU? If I go to edit the VM, I see the choices as Q35 or i440fx. The BIOS choice is greyed out.

     

    So technically I only lost the video?  I could use the keyboard blind if my network ever locks up?

     

    I.e.  hit enter a couple times then type "Powerdown" to cleanly turn off so I could reboot?

     

    I installed the Powerdown app.  It seems also if I tap the Power button once, the drive activity increases and it turns off after a bit.  When I Power on, I just have to start array.

     


     

    You have to set OVMF or SeaBIOS at creation, yes. If you could change it, it wouldn't boot (without intervention).

    You're not limited by having an AMD CPU, it shouldn't matter.

    I'm pretty sure the console no longer works; blindly or not, it doesn't work.

    When I needed this function previously while having issues, I could have sworn it did nothing for me, but I'm willing to be wrong on this.

     

    The 6.2B21 GUI boot option does NOT have this issue, and it stays working when another VM is started up (I'm guessing GUI boot uses OVMF; if not, it is clearly magic!).
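
    If you're ever unsure which firmware a VM was created with, the XML will tell you (the VM name here is just an example):

    virsh dumpxml Windows10 | grep -i loader
    # an OVMF VM shows a <loader ... type='pflash'> line; a SeaBIOS VM typically shows nothing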

  17. Looks to me as though the third setting down applies to USB2... May be worth a tinker?

     

    If you're referring to legacy USB support: if you disable it, UnRAID doesn't boot (for non-UEFI boot methods), and the boot entry for legacy boot devices (UNRAID) will no longer be an option.

    While I do not have his issue, I have played extensively with the USB options (Smart Auto/Auto/Enabled/Disabled, and XHCI & EHCI hand-off), and in my testing this did not stop UnRAID from booting; it just set which controller it would use (EHCI or XHCI).

     

    Upgrading from 6.1.9 to 6.2B21 led to a "boot failed" message on boot (not a BIOS "no boot device", but an actual statement of "boot failed").

    I fixed this by removing the drive, popping it into a Windows computer, and re-running make_bootable.bat (as admin); it finished correctly and fixed the booting issue (this is clearly not the fix for everyone).

     

    All this said, and given all of the posts we've seen relating to this, I do think there is some issue with the USB device or with the way UnRAID is doing something, and not with the BIOS USB 2 options. IDK, it brings to mind the issue we had in the 6.0 beta days of ASRock boards not being able to boot from a reboot command (they would just loop at POST), requiring a complete power-down or it wouldn't work. JonP found some magical way to fix that, but it was odd regardless.

  18. If you RDP into a Windows VM, just hit Alt+F4 on the desktop and the shutdown options will appear.

     

    edit:

    I'm also confused about the need (even though I do this for my "seedbox" VM, as it takes a long time for the program to close).

    Even if I just stop the array, all VMs will gracefully shut down without me doing anything.

     

    I am OCD though, so:

    I hit the stop button in the VM manager on each VM, then when they're off, hit the stop array button (or better yet the reboot/power off button from the System Buttons plugin).

    If you're having an issue with the VM just sitting at the shutdown screen (which you've mentioned), there has to be something hanging inside the VM, not something directly related to UnRAID.

    I do not think this is common, so let's try to fix that rather than just changing the way you shut the VM down to work around it (though that's still better than nothing).
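
    For reference, the stop button in the VM manager is essentially an ACPI shutdown request, which you can also issue from the console (the VM name is an example):

    virsh list --all               # confirm the VM's name and state
    virsh shutdown "Windows 10"    # graceful ACPI shutdown, same idea as the stop button
    virsh destroy "Windows 10"     # hard power-off, only if the guest is truly hung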