Everything posted by Energen

  1. So when you click on the ? up top to display the page help, it says this: And it links to https://www.linux-kvm.org/page/FAQ#Is_dynamic_memory_management_for_guests_supported.3F which says: So basically... if you're running a Windows VM it will automatically assign the "max" memory. If you were running a Linux VM and set the initial memory to 8GB and the max to 16GB, it would initially use 8GB when it starts up and expand to 16GB as needed. That's my understanding. Does it work in practice? Don't know. In your picture you are giving the VM 20GB of RAM. Depending on what the VM is and what you are doing, you may not need that much. And how much RAM does your Unraid server have? This is not a fictional number for the VM... it uses your Unraid RAM... so if you have 16GB of RAM in your system, setting it to 20GB will most likely cause some kind of problem.
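     A quick way to sanity-check both numbers from the Unraid console; the VM name below is just a placeholder for whatever yours is called:
     free -g                      # how much RAM the Unraid host actually has, and how much is free
     virsh list --all             # list your VMs to get the exact name
     virsh dominfo "Windows 10"   # shows that VM's Max memory and Used memory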
  2. This sort of idea was mentioned in another thread: basically building a database of compatible hardware. While it's technically possible, it's unrealistic. First of all, you'd actually need the thousands of Unraid users (however many there are) to respond. Second, since Unraid isn't exactly hardware dependent, you could quite literally have every user with a unique combination, and therefore no clear answer, just a list of 500 motherboards, 500 CPUs, etc. Then the choice of hardware is personal preference on top of that. Consumer or server board? ECC or non-ECC RAM? IPMI or not? 1Gb/2.5Gb/10Gb LAN? Would the networking gear to support those options have to be included too? SSD/NVME/HDD/whatever else? Low-power media server or the ability to run 20 VMs? Every use case is different and everyone's preference is their own, so there's no good way to produce a generic list of hardware that works, because generally speaking, all hardware in existence works, with few limitations.
  3. Unless you specifically created a share for the archive stuff and limited it to a particular drive, you'll never really be able to limit read/write/delete. Maybe that's what you had planned anyways? But by default the way Unraid works is you could potentially have your archive tv shows spread across 4 disks and not even realize it. Shares don't 'need' to be mapped in Windows.. but it's the most convenient way to access them... in my opinion. If they're accessed infrequently then I guess it doesn't really matter. I access multiple shares frequently and therefore have them mapped as network drives.
  4. Windows. If you wanted to use Unraid for creating folders you'd have to use the command line, Krusader, mc, or some other file management tool... Windows is just easier since you're using it anyway.
     You can do it that way if you want... it's personal preference. Unraid doesn't care what share you have files on. In practice though, do you want to have to map 6+ network drives in Windows? Unless you just access everything through \\unraid... but personally I have all my media on a media share, and that share is mapped to a single drive in Windows. Again, all personal preference.
     So this is where your personal preference makes things sloppy... I personally don't see a reason to separate current/archive TV shows. Sonarr doesn't care what you're watching/not watching, and any media player you're using should show you what you are currently watching, what you have left to watch, and be able to hide things that you've already watched ("archived").
     Here's what I would probably do... Set Sonarr's media path to /mnt/user/TV SHOWS/current, and add another path for archive. Then in Sonarr add all the shows from the archive using "Import existing series on disk"... which would be accessible under /archive in my example... and check that everything adds correctly, which it should. Then do the same thing for current shows, using the default /media location.
     This method will never automatically update anything in archive. If you move a show from current to archive, Sonarr will still think it's in current. You'll have to manually change the path in Sonarr to reflect that it's been moved to archive. Other than that, this should give you what you want...
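     A minimal sketch of that layout, assuming the share is called "TV SHOWS" and the container-side paths /media and /archive from my example:
     # On the Unraid console: create the two folders on the user share
     mkdir -p "/mnt/user/TV SHOWS/current" "/mnt/user/TV SHOWS/archive"
     # Then in the Sonarr docker template, map them as two container paths:
     #   /media   -> /mnt/user/TV SHOWS/current
     #   /archive -> /mnt/user/TV SHOWS/archive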
  5. I see this is going to be never ending... just take my point and accept it and move on. You said "I've had to go an remove the container, delete the database and redownload."... when you ONLY need to delete the database, you don't have to remove or redownload the container. Stop container, delete database, start. Finished. So no, not "exactly" what you said you were doing. And my mistake, I don't look at github just to prove a point. The last time I looked at the github page nothing had been updated in quite some time. You certainly wouldn't know of any development changes given the lack of communication or responses from the dev regarding the multitudes of bugs/suggestions in this thread. However, the last docker update, which is the only thing that actually matters, was 6+ months ago. Let me rephrase for accuracy so you don't point it out to try and prove me wrong: the "latest" tag is still 6 months old. Anyways, enjoy.
  6. I agree, but there hasn't been any development on this in over a year so we have to make do with what we have...
  7. So if that's what you'd rather do then ok, but it's certainly more hassle than setting the very few options there are. Also, you can use your imagination and only delete the unmanic.db file and retain the settings.json file, so you accomplish both deleting the history and retaining your settings. I'm sure you could have figured that out had you looked at the directory first. Sent from my SM-G981U1 using Tapatalk
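     A minimal sketch of that, assuming the container is named unmanic and its database sits under /mnt/user/appdata/unmanic (check that folder for the actual location of unmanic.db):
     docker stop unmanic                        # stop the container first
     rm /mnt/user/appdata/unmanic/unmanic.db    # delete only the history database
     # settings.json in the same folder is left alone, so your settings survive
     docker start unmanic                       # start back up with a clean history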
  8. You have to turn off subtitles encoding. Sent from my SM-G981U1 using Tapatalk
  9. You only need to delete the contents of the appdata folder for unmanic to clear out any history. Sent from my SM-G981U1 using Tapatalk
  10. Start here. https://www.osboxes.org/android-x86/ Might not work very well though.
  11. So while I'm not an expert with Unraid's encryption routines... I've had my own share of troubles using it... for 2020 the entire encryption thing should be worked on... that'd be my request. Somehow improve the entire operation. So here are a couple of thoughts on it.
     You said you saved the key to 2 files... on Windows? In what file format? If it added any Windows line endings (invisible to you, but not to the computer) it can mess things up, because the line ending basically becomes part of the key.
     When you open a command window and go to /root/, is a keyfile present? If so, 'nano keyfile'... does it contain your key? If there is no keyfile present after trying to start the array, then welcome to the world of Unraid encryption bugs... (in my view)
     Instead of using the key file you can try entering it as a passphrase instead... not sure if the input will take 64 characters though.
     So, you can try back on the command line... if the keyfile is there, delete it. This is what I've found to force restarting the array with a keyfile, without rebooting the entire system:
     echo -n 'PASSWORD' > /root/keyfile && CSRF=$(cat /var/local/emhttp/var.ini | grep -oP 'csrf_token="\K[^"]+') && curl -k --data "startState=STOPPED&file=&csrf_token=${CSRF}&cmdStart=Start&luksKey=/root/keyfile" http://localhost/update.htm
     If that works then you know your passphrase is good and it's just a problem with your keyfile
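     A quick way to check for the Windows line-ending problem mentioned above, assuming the keyfile is at /root/keyfile:
     cat -A /root/keyfile    # shows non-printing characters; a trailing ^M or $ means a line ending got appended to the key
     wc -c /root/keyfile     # byte count should match your key length exactly (echo -n adds no newline)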
  12. I don't know the specific quality settings it uses. I suppose if I really wanted to figure it out I could dig through the source code, but I'm not sure that would really gain me anything other than just knowing. My experience is that for 95% of my conversions the output file was smaller with no perceivable visual differences on a quick check on my 24" 1080p LCD using PotPlayer on my PC. The other 5% resulted in a larger file than the original, but no quality loss. When I refer to quality loss I mean that I don't see any artifacts or blurring that would indicate a low-bitrate encode. I do not mean that I'm taking a 4K source, converting it to a fraction of the size, and it still looks like 4K, know what I mean? Sent from my SM-G981U1 using Tapatalk
  13. Ah, it has been quite a while since I actually used RealVNC; I changed to TeamViewer to get around the ports issue with RealVNC. The last time I used it you needed open ports. Sent from my SM-G981U1 using Tapatalk
  14. If the Pi is connected to the PC, and it is behind a router without port forwarding, how would you connect to it? RealVNC would require an open port. @gfjardim just posted this on the forum, and seems very much like what you want to do... have not looked into it deeply but I bookmarked it for future reading because it was interesting. https://pikvm.org/
  15. @housewrecker have you changed anything with the RSS feed output? As of late I haven't been able to import the missing list in Radarr...
  16. Those would all be wrong to start... because Linux is very specific about the format of things... there's no such thing as mnt/user; it would actually be /mnt/user/ with a leading forward slash. Maybe you just left that out while writing and actually know that already, but I figured I'd point it out just in case...
     No, everything exists already. No, /mnt/ is not under /root/. Open a new console window and you should see root@TOWER:~# (assuming your user name is root and TOWER is your Unraid server name).
     Type ls -- you probably only see mdcmd@, right? This is your root home directory... it's not really used for anything at this point, that I'm aware of.
     Type cd /boot and then ls -- that's your flash drive. You shouldn't really do anything here, I'm just showing you a location.
     Now type cd /mnt and ls -- now you see your disks. Type cd user and ls -- now you see your shares. From /root you'd type cd /mnt/user to get to your shares.
     So in the case of creating a VM, I'm not actually sure what your problem is... in the template you'd select an existing iso, which would look like /mnt/user/isos/ubuntu-20.04.1-desktop-amd64.iso
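     A condensed version of that walk-through as it might look in the console (the server name and the listed contents are just examples):
     root@TOWER:~# ls                  # root's home directory, nothing of interest here
     root@TOWER:~# cd /boot && ls      # the flash drive -- look, but don't change anything
     root@TOWER:/boot# cd /mnt && ls   # your disks and pools (disk1, disk2, cache, user, ...)
     root@TOWER:/mnt# cd user && ls    # your shares (isos, appdata, your media shares, ...)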
  17. So this goes back to what I originally said about not knowing what quality settings Unmanic is hard-coded with. It doesn't matter what codec or format is used if the quality settings are different. If you take two original videos and encode one at x265 4K and one at x265 1080p, and then use an automated process like Unmanic to convert those videos, the logic Unmanic uses says that both of those files are already in the correct format and they are ignored. Do you get what I mean by quality settings? In the meantime that 30GB 4K video could have been reduced to 2GB. In the case of Unmanic, this B (2008).mkv file is ignored... it's a 1.6GB file. If Unmanic actually processed it at the quality settings it's configured for, would the resulting file be smaller? That's what I'm driving at... the question still remains what 'quality' setting Unmanic uses for the codec it uses. Or are the files skipped because Unmanic actually compares the encoder settings and sees that they're the same? Another example: in Handbrake, if you select x265 1080p but encode one file with an average bitrate of 3000 and another with 500, they are technically the same format/codec but obviously different quality. A comparison of codec alone would treat both of these files as the same.
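     As an aside, a quick way to see exactly what you're comparing (codec vs. bitrate) for any given file, assuming ffprobe is available (it ships with ffmpeg; the path below is just a placeholder):
     # Print the video codec of the first video stream and the overall bitrate of the file
     ffprobe -v error -select_streams v:0 -show_entries stream=codec_name:format=bit_rate -of default=noprint_wrappers=1 "/mnt/user/Movies/example.mkv"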
  18. Read everything here... and try again..
  19. So despite the lack of ability to actually configure any encoder settings / resolutions / etc., I've been running this between my Unraid docker and Windows with Docker Desktop (it does not work 100% accurately, but it works well enough to make use of it) and feeding it a ton of media files that I wanted to see the results on. I'm fairly impressed with the results. I've converted around 300GB+ of media files to a fraction of that and the quality appears to be very good. I have not figured out a way to force a conversion of media files that Unmanic thinks are already in the correct format. Just because a file might be in the same format does not mean that I don't want to try to convert it for space-saving reasons. @Josh.5 would an option to force this be possible?
  20. That makes more sense.. I had not tried to access it from outside the network... Sorry for the confusion!! My fault!
  21. What is needed, or at least what would be extremely nice, is to be able to just install the template and run it... out of the box, so to say. Having to track down and then configure your own servers to run speed tests against is not good... why have a working docker template that doesn't do anything? I have no need for a speed test docker... so I installed it just to check it out because it was new, found that it doesn't work. Delete. That's what most people are going to do. Scanning through the librespeed doc file, apparently you need to add files to the server (appdata folder?) that are not included with this docker but are on GitHub. Wait a minute... hold the horses... I was under the impression that this was a normal speed test docker, but after some more reading this is not a standalone speed test application... it's a server/client setup. That changes everything. So this app is only for speed tests between multiple servers that you have access to for installing/configuring the backend stuff, or that you have the IP information for from other people running the backend servers. Perhaps the description needs to be changed to reflect that you can't just install this and run speed tests like Speedtest.net (my impression when seeing it).
  22. pureftpd is available if you can configure it.
  23. Might take some customization and getting used to if you're used to using Krusader. My first impression is "wow, that's a busy interface". Here's the default screen as soon as I start it up...
  24. Tried it out... it seems nearly useless/pointless in its current state. To make use of this docker it would seem that you should configure it with more arguments, as seen here: https://github.com/librespeed/speedtest/blob/docker/doc.md Also, it seems to take some amount of customization to get working in the first place... there appear to be no servers configured by default to run a speed test against. So hmm... more work and effort involved than it might be worth just for a speed test. The privacy policy leaves much to be desired, and it's unclear whether or not the TELEMETRY option turns off all the data collection it mentions.
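     For anyone who wants to poke at it anyway, a rough sketch of the kind of arguments the linked doc.md describes. The image name, port, and variable names (MODE, TELEMETRY) are my reading of that doc, so verify them there before relying on this:
     # MODE=standalone should make the container serve the test page and act as its own backend,
     # which is the closest thing to a "just install and run" setup.
     # TELEMETRY=false is supposed to turn off result logging; how much of the data collection
     # from the privacy policy that actually covers is the open question.
     docker run -d --name=librespeed -p 8888:80 \
       -e MODE=standalone \
       -e TELEMETRY=false \
       adolfintel/speedtest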