Nezra

Members · 18 posts



  1. Nezra

    Watch Folders

    trurl, you are indeed correct. I wasn't using CA correctly. I spent the day digging into it and found the HandBrake option. I'll take a look into it. Thanks!
  2. Nezra

    Watch Folders

    trurl, I do have CA installed, and I tried adding the GitHub reference for the HandBrake docker into templates, but I don't get a template I can add for it. When I go to the GitHub repo, there's a Dockerfile. Unless there's a different one, or a different way I'm supposed to approach it, or a different HandBrake docker I should be referencing. This is the post in question: http://lime-technology.com/forum/index.php?topic=39624.0 I installed CA as a result, but still couldn't get the template to show up in my docker templates.
  3. Nezra

    Watch Folders

    Not exactly an unraid question, but in a way it is. Looking for some expertise from some of the gents here.

    What I'd like to do ultimately is set up something, either a VM or a docker, to watch a folder on my share and re-encode videos. I have a pair of decently heavyweight processors on one of my 2 unraid boxes that are, for lack of a better term, underutilized because I can't slap a video card in it. Here's what I'm wondering:

    1. Is there a prebuilt solution anyone is aware of to facilitate this easily and out of the box, via a docker or a VM? Anyone else doing this have a preference?
    2. If I go the VM route with a lightweight command-line Linux distro, its sole purpose would be to watch a few different folders on the share and, depending on the folder, run HandBrake via the CLI with a specified preset. I've looked into inotify, but I still can't quite wrap my head around it. Any suggestions on a distro as well? I have some experience with Ubuntu and Debian (the RPi version) but am a tad clueless overall, if that helps with suggestions. Any help with the inotify stuff would also be useful.
    3. I looked a bit into the docker for HandBrake, but I couldn't figure out how to get it to load into the template. I'm unfamiliar with how dockers in general work, despite best efforts at getting it to work. I assume the point I'm missing is that I'd have to create a custom template, but I couldn't find the "dummies" version of the instructions for getting it into a template.

    If there is a prebuilt variant, or if not, a useful explanation of how and why it works would be amazing as well, as I'd love to actually learn why and how it works. Sometimes the journey is more educational and helpful than the final product, as it'd allow me to understand why and how to overcome future oddities. I have a friend who does some content creation and asked me if this was possible so he can convert raw footage into a few different formats.

    My use case would be:

    1. A video file is placed in 1 of 3 user share folders.
    2. Depending on which folder I put it in, a different HandBrake preset would be run and the file re-encoded.
    3. The output file would go into a destination folder depending on the initial folder.
    4. A cleanup would be done to remove the original file. If it can't be re-encoded, or it fails, said file goes into a rejected folder so I can look at it later. If at all possible, I'd love some sort of log from the HandBrake failure. Way back in my early DOS days, I'd just pipe the output to a log; I'm assuming that's possible as well.

    If anyone has any insight or ideas, or even a where-to-begin suggestion, I'd appreciate anything from snarky to helpful. It's more of an educational experience than anything.
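    The workflow above can be sketched in a few lines. The folder names, preset names, and paths here are hypothetical placeholders; the only real interface assumed is HandBrakeCLI's -i/-o/--preset flags, and inotifywait (or pyinotify) could replace the polling loop:

    ```python
    # Sketch of the watch-folder workflow: route files by folder, encode with a
    # per-folder preset, clean up or reject, and keep a per-file log.
    import shutil
    import subprocess
    import time
    from pathlib import Path

    # Hypothetical mapping: watch-folder name -> HandBrake preset to apply.
    PRESETS = {
        "movies": "Fast 1080p30",
        "tv": "Fast 720p30",
        "web": "Very Fast 480p30",
    }

    def preset_for(folder):
        """Return the preset for a watch folder, or None if it isn't watched."""
        return PRESETS.get(folder)

    def encode(src, out_dir, reject_dir, log_dir, preset):
        """Encode one file; on failure move it to the reject folder, keeping a log."""
        src, out_dir, reject_dir, log_dir = map(Path, (src, out_dir, reject_dir, log_dir))
        for d in (out_dir, reject_dir, log_dir):
            d.mkdir(parents=True, exist_ok=True)
        log = log_dir / (src.stem + ".log")
        with open(log, "w") as lf:  # like piping output to a log in the DOS days
            rc = subprocess.call(
                ["HandBrakeCLI", "-i", str(src),
                 "-o", str(out_dir / (src.stem + ".mkv")), "--preset", preset],
                stdout=lf, stderr=subprocess.STDOUT)
        if rc == 0:
            src.unlink()                                       # remove the original
        else:
            shutil.move(str(src), str(reject_dir / src.name))  # keep failures for review

    def watch(root="/mnt/user/encode", interval=60):
        """Poll each watch folder and route new files by folder name."""
        root = Path(root)
        while True:
            for folder, preset in PRESETS.items():
                for f in (root / folder).glob("*"):
                    encode(f, root / "out" / folder, root / "rejected",
                           root / "logs", preset)
            time.sleep(interval)
    ```

    Running watch() inside a minimal VM or container would cover steps 1 through 4; a failed HandBrakeCLI exit code is what sends a file to the rejected folder, and its full output lands in the log file.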
  4. GPU passthrough is handled by passing the entire device through to a single RUNNING VM. You cannot share the same card between 2 RUNNING VMs and pass one port through to each. The same card can be assigned to two different VMs, but only one can run at a time. Using the integrated graphics port on your motherboard, if it has one, and adding a second card would be the only way.
  5. My use case was I didn't need another unraid server at the time I poked around with this. I actually accomplished this myself by slapping together an Ubuntu server with SMB and CIFS (the rest of my machines are Windows, so I wanted to be able to transfer stuff there that way). On the unraid side of things I used the Unassigned Devices plugin and mounted a network share in it. This actually creates a share on the unraid server itself (albeit I could never access that share from unraid's shares on a Windows PC due to permission issues). The Unassigned Devices plugin will allow you to add individual drives, like USB external drives, as well as SMB or NFS network shares. I'd recommend giving it a look: fire up a trial of unraid and install the amazing Unassigned Devices plugin to see if it fits your use case.

    From there, because it's actually showing as a share like a normal Linux share, I could import that share folder into any docker, and my VMs would access it directly via the Linux box. I've tacked on both Windows and Linux shares this way at times to simply cloudify my storage. This proved useful because I run Plex as a docker and needed a temporary storage fix; Plex was able to access the share through unraid.

    I stopped using it because the Unassigned Devices plugin and unraid didn't like it when the Linux server went down for whatever reason, or was disconnected and tried to reconnect. I had to reboot the unraid server before it'd work properly again from the console, as the web GUI became unresponsive. I'll also add I stopped going this route about 6 months ago, once I resolved the storage issue. I have a new unraid build I'm working on, whereby a separate machine more capable of adding drives will simply be used as long-term storage. I plan on pairing the two unraid servers with the Unassigned Devices plugin simply for ease of use.

    I can work with Linux and the CLI, but I prefer the ease of use and the additional functions/stability I can easily achieve with unraid. Sometimes I want things to just work, and this worked best for me.
  6. Sadly, it appears I cannot even get to that point. Attempting to log in via the terminal results in some sort of trace error and brings me right back to the server login. Assuming I am hosed. Let me know before I ungracefully reset this machine.
  7. Need some assistance with UD. First off, I LOVE it. But I have one small issue. If I mount a network drive, and for whatever reason the network location becomes unavailable and then available again, it hangs the whole unraid UI/SSH; it completely stops working. It's happened twice now, once on 6.1.9 and now on 6.2. The last time, I force rebooted the server. This time I posted elsewhere in hopes someone could advise how to reboot the machine and safely shut down the array.

    The issue only occurs when the remote location becomes unavailable and available again. It hangs everything up once it starts attempting to remount it. The folders it creates for the mount points persist until reboot, when they are cleared, leading me to believe UD is getting stuck, and by association the rest of emhttp/SSH, when it tries to recreate the folder or when it is remounting. The other thing is it only happens if the remote location becomes unavailable before I unmount it from the GUI. If I manually unmount it, then make the remote location unavailable, then manually remount it, everything is fine. I have auto-remount OFF, as I hoped that would fix it, but it has no bearing. I can't always stop the remote locations from being unavailable, or time when it will happen, as they may crash, install updates, reboot, lose network connectivity, etc.

    My current setup has a separate NAS I use to back important files up to. It's been flaky lately from a connectivity standpoint, and will be something I'm looking to replace, but when it becomes momentarily unavailable, UD appears to break, and takes both emhttp and SSH with it. I found this out by going to the terminal and killing each of the processes one by one (I know, REAL bad) trying to reboot the machine. Once the UD process that indicated it was trying to mount something was killed, the server was running fine again. Is there a band-aid fix, or some other way I should look into it?
  8. Basically, accessing the server from the web GUI results in a blank screen. Tried on several different browsers on several different machines. I tried to SSH into the machine, but after entering the password, it simply closes PuTTY. I tried telling it to keep the window open, and while I see it successfully logged in, the server simply closes the connection. The dockers are still running, the VMs were (I shut them down hoping that was the issue, since I could still SSH into those), and all the shares are still accessible, so the server is still running. I have no clue how to shut down the docker image (Plex), as the last time this happened, every time I tried killing the process, it simply restarted. I'm assuming I can still access the terminal.

    What I'd like to do is SAFELY stop the array and reboot the machine. I have heard you can't simply restart the HTTP server, and since SSH seems to be failing, rebooting seems the next best logical course of action. I just hate having to put the system through a parity check. I'd be willing to bet the emhttp hang is a result of the Unassigned Devices plugin, as it really seems to break when a mounted network share becomes unavailable and available again; that's a post for a different thread. I could probably kill the Unassigned Devices process, but I would still be concerned with needing to reboot, and if I can't guarantee the web UI would return, I'm still interested in knowing how to shut down the array safely.

    I've searched endlessly, and the wiki is a bit out of date (most of it is still written for 5.0), so I'd appreciate any advice/direction. If I could figure out how to get logs from the terminal, I'd be willing to provide those, but everything I could find says to grab them from the web UI, which is broken right now. In short, please help!
  9. I believe you need to delete all of the USB controllers relating to ich9; it's what I had to do. Effectively, delete this:

    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
  10. Waited a few days; this repository appears to still be broken. Try this one: http://linorg.usp.br/slackware/slackware64-current/slackware64/l/parted-3.2-x86_64-2.txz You'll have to forgive me, but I have no idea how to change it. When I hit the update button next to the version in the plugin tab, I don't exactly have the option of changing the mirror it's pulling from; it just keeps pulling from the same one.
  11. Can't upgrade to 2016.03.16:

    plugin: downloading: http://mirrors.slackware.com/slackware/slackware64-current/slackware64/l/parted-3.2-x86_64-2.txz ... failed (Network failure)
    plugin: wget: http://mirrors.slackware.com/slackware/slackware64-current/slackware64/l/parted-3.2-x86_64-2.txz download failure (Network failure)

    Except the network is fine. I tried going there manually, and it appears there is a host-resolution error: mirrors.kingrst.com's server DNS address could not be found. Is there another way to update it?
  12. @jonp, this solved so many of my VM headaches with trying to figure out how to keep the keyboard and mouse working, along with the virtual CD-ROM drives, simply to install Windows. Any chance we could at least get a check box to have it added to the VM XML instead of needing to rewrite the XML after every edit? My motherboard and BIOS settings won't allow me to split apart the USB hubs, so short of purchasing a PCIe card, shutting down the VM to swap USB devices in and out is my only choice, which requires editing the XML manually every time.
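    For reference, the kind of stanza that has to be re-added by hand after each edit looks roughly like this. It's a sketch of libvirt's USB hostdev syntax, not anything unraid generates; the vendor/product IDs are placeholders (find yours with lsusb):

    ```xml
    <!-- Hypothetical USB passthrough entry; goes inside <devices> in the VM XML. -->
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>   <!-- placeholder vendor ID from lsusb -->
        <product id='0xc52b'/>  <!-- placeholder product ID from lsusb -->
      </source>
    </hostdev>
    ```

    A check box that preserved a block like this across edits would save rewriting it every time.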
  13. When the 4TB has passed its current high-water mark, half of its free space, then the next drive will be used until it passes half of its free space, and so on. Then when the last drive the share includes has passed its high-water mark, it will go back to the first with a new high-water mark of half of its remaining free space, and so on. Now if I'm not mistaken, since the setup here is 4TB, 1TB, 1TB, 1TB, it'll keep using the 4TB disk after the first high-water mark, since half of 2TB is equal to or greater than 1TB.

    Edit for a more complete explanation. Your array:

    Disk1: 4TB
    Disk2: 1TB
    Disk3: 1TB
    Disk4: 1TB

    High-Water doesn't work the way people seem to assume it does (IMO this means the help/documentation/naming needs to be looked at so that people can understand it better...). The first high-water mark is 1/2 the size of the largest disk. Since the largest array disk is the 4TB disk, the first high-water mark is 2TB. When writing to the high-water share, unRAID will pick the disk with the least free space greater than the high-water mark, in this case 2TB. (This of course assumes no other conflicting logic with split levels or excludes.) The only disk that qualifies is Disk1, since Disk2-4 don't have more than 2TB of free space.

    Once you exceed 2TB on that disk, the high-water mark will recalculate by dividing the old high-water mark by 2. In this case the new high-water mark will be 1TB. Again the only disk that qualifies is Disk1, since Disk2-4 don't have greater than 1TB of free space. Once that new high-water mark is met, the mark recalculates using the same approach; in this case the new high-water mark will be 500GB. At this point Disk1 has 3TB of data and 1TB free, and Disk2-4 each have 1TB free; these all have an equal amount of space above the high-water mark, so it'll pick a disk, likely Disk1 (not sure how exactly it picks when they are equal), and fill that first. It'll continue to fill that until that disk no longer has the least free space above the high-water mark (so writing 500GB), then it'll move to the next disk and write 500GB to that one, and so on...

    And it turns out this is exactly what happened. I incorrectly assumed after reading the documentation that it would have been the reverse. I proceeded to start copying files to the array one at a time. Once it hit 498GB free (3.5TB used) on the 4TB drive, it proceeded to start filling the others. I incorrectly assumed that because the wording mentioned half of its free space, it would start splitting, when in reality, like you said, it's 1/2 of the free space of the SMALLEST drive. In my case that turned out to be 500GB. All fixed and understood now, working as intended.
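    The selection rule described above can be sketched in a few lines. This is a rough model of the behavior as explained in this thread, not unRAID's actual implementation; the disk names and GB sizes are the example array from the post:

    ```python
    def pick_disk(free, mark):
        """Pick the disk with the least free space above the current high-water
        mark; if none qualifies, halve the mark and try again. Rough model of
        the behavior described above, not unRAID's real code."""
        while mark > 0:
            candidates = {d: f for d, f in free.items() if f > mark}
            if candidates:
                return min(candidates, key=candidates.get), mark
            mark //= 2
        # Every disk is at or below even the smallest mark: fall back to most free.
        return max(free, key=free.get), mark

    # The example array (free space in GB): a fresh 4TB plus three 1TB disks.
    fresh = {"Disk1": 4000, "Disk2": 1000, "Disk3": 1000, "Disk4": 1000}
    print(pick_disk(fresh, 2000))   # -> ('Disk1', 2000): only Disk1 clears the 2TB mark

    # After 3.1TB written to Disk1, the mark halves twice (2TB -> 1TB -> 500GB)
    # and Disk1, at 900GB free, still has the least free space above the mark.
    later = {"Disk1": 900, "Disk2": 1000, "Disk3": 1000, "Disk4": 1000}
    print(pick_disk(later, 2000))   # -> ('Disk1', 500)
    ```

    The model reproduces what was observed: the 4TB disk keeps winning until the mark drops to 500GB, at which point writes start splitting across the drives.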
  14. I can confirm this solution. Exactly my issue as well with my Win 10 VM. The account I logged into the VM as was administrator; I couldn't play anything Steam-related without adding this DWORD. Off to play my missing Steam games!