Everything posted by Energen

  1. That's odd.. not sure what the problem could have been in that case. For FileBot you'll have /storage mounted in the docker configuration to /mnt/user/Media Movies -- or to just /mnt/user so that all shares below it are accessible (tv shows, etc.) if you use different shares for each type of media. Mine is mounted to /mnt/user/Media because everything is contained within subfolders of the Media share, so I can access /Media/Movies, /Media/Television, etc. In my case (adjust for yours) the FileBot destination directory would be /storage/Media/Movies/{n} ({y})/{n} ({y}). As for the directory with the blank name: it should show up in the Shares tab if it was created within /mnt/, and you should be able to delete it that way. If not, you'll have to figure out its "name" to remove it. If the name is a single space you can remove it from the console with rm -r ' ' (a quoted space), run from the parent directory.
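To illustrate the removal, here's a sketch using a throwaway directory under /tmp as a stand-in for /mnt/user (the /tmp/demo path is just for the demo; quoting the space is a bit safer than escaping it):

```shell
# stand-in for /mnt/user -- a "share" whose name is a single space
mkdir -p "/tmp/demo/ "
ls -la /tmp/demo/        # the odd blank entry shows up here

# remove it by quoting the name instead of backslash-escaping the space
rm -r "/tmp/demo/ "
```

On the real array you'd run the equivalent rm -r ' ' from inside /mnt/user, after double-checking with ls -la that the blank entry is really the one you mean.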
  2. You didn't need to rebuild, stop, or unmount the array. It sounds like all that happened was that you used the wrong naming convention and created an 'invalid' folder, which showed up as a share -- any folder created at that level will. So it sounds like you were trying to rename a movie from (making up names here) /mnt/user/Media Movies/Terminator.avi to /mnt/user/Media Movies/Terminator (2000)/Terminator (2000).avi -- something like that, right? In FileBot the naming would be /mnt/user/Media Movies/{ny}/{ny}, or however you want it named...
  3. Is the network IP of your Unraid server 192.168.15.3, or is it 192.168.1.201? You opened the ports on the router to .1.201, but the logs show the server address as .15.3. I'd suggest making sure you're using the correct IP address for the port forwarding.
  4. I don't know if this is the cause without knowing the Windows error number, but you should change your virtual hard drive from VirtIO to SATA. Aside from that, you could check what device is in group 31 from the error in the system log and see what hardware it's referring to. If it happens to be a hard drive then yeah, change to SATA. (In the Unraid WebGUI, go to Tools > System Devices and see what's in group 31.)
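A quick way to list what's in a given IOMMU group from the console (the group number comes from the syslog error; lspci is standard on Unraid):

```shell
# print the PCI devices in IOMMU group 31
group=31
for dev in /sys/kernel/iommu_groups/$group/devices/*; do
    [ -e "$dev" ] || continue          # skip if the group doesn't exist on this box
    lspci -nns "${dev##*/}"            # device ID is the directory name
done
```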
  5. As far as file uploading, yeah, any of the dockers would work.. Nextcloud, Owncloud, whatever... but yes, anything is going to require some form of end-user software. You can't just magically upload something unless it's website-based, which might not be the best solution for large files. You could host a VM with everything contained inside it and allow remote access to that VM (with an app like TeamViewer) for whatever else you want to do -- video editing software, etc. You could also set up an FTP server in the VM to upload the files to, which might be easier and more straightforward than Nextcloud/whatever. I wouldn't expect great performance or ease of use editing videos in a VM though. Video editing is an intensive process even on high-end hardware; doing it in a VM just makes things harder. Not impossible, but not great.
  6. Or perhaps you're just not using Plex correctly / optimally. If you have Plex set up to do a partial scan on detected changes (or whatever the option is exactly called) there would never really be a need to perform a full scan. And even with a full scan, why would you need to do it so often where that would make Unraid unusable for you? Seems like you are the problem, not Unraid. I personally never have to force a full library update because Plex always picks up my changes as they happen, and maybe I have a full scan scheduled by default but day to day it just works as expected and my library is never out of date.
  7. I wouldn't really call that a bug of Unraid... if you are using a file system tool to delete an Unraid share, why would you expect Unraid to know that you deleted it on purpose? You have to remove the share from the Shares tab in Unraid.
  8. You can also run a User Script (User Scripts plugin) to back up whatever you want. I run this daily, to a Dropbox share.

#!/bin/bash
/usr/bin/rsync -avXHg --delete /boot/ /mnt/user/Dropbox/unraid_BootUSB_Backup
chmod 777 -R /mnt/user/Dropbox/unraid_BootUSB_Backup
  9. Then why are you asking? I don't mean to come off as "trolling" you, but if you're used to backing up from Synology and as far as you know you need a backup... then just do it. You were wondering if you need to do it, so stop wondering about it. I mean... if the baby was ugly... kidding, kidding.
  10. Do you need to? Probably not, but if you want to then you can. You could use any of the backup dockers to do it, or however you prefer to.
  11. Seems to be specific to some of these Asrock boards.. you can try a couple things.. http://forum.asrock.com/forum_posts.asp?TID=204&title=am1hitx-p150-cant-boot-without-a-monitor
  12. So I don't know if this is a bug or if I'm just doing something wrong, but I set up my encrypted array (with a passphrase) to automatically download a keyfile from FTP when the array starts, using techniques found on this forum involving the go file. Sometimes it works, sometimes it doesn't -- especially when the FTP server holding the keyfile is not running. In that case I'm at the Array Operation screen, where the encryption status is "Missing key". I attempt to start the array by entering my encryption key (passphrase) and hitting Start. The result: it continues to tell me my key is missing. I can't do anything at that point except hit Delete and try again. I put in the passphrase, hit Start... key is missing. Rinse and repeat. I cannot start the array with the passphrase at all. Checking /root/keyfile in the console after hitting Start, the keyfile has been created with 0 file size -- no data written to it. <-- bug? I switch the mode to keyfile and supply my downloaded keyfile -- it still won't start. Missing key. I'm about to restart the server... again... and hope it starts the array this time with the keyfile. But what the hell is going on? Is Unraid's encryption routine completely screwed up, or what am I doing wrong here?
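For reference, a guarded version of the go-file fetch I'm describing (server address, credentials, and path are placeholders), which deletes a zero-byte keyfile so a failed download can't leave an empty file behind:

```shell
# /boot/config/go fragment: pull the LUKS keyfile before the array starts
wget -q -O /root/keyfile "ftp://user:pass@192.168.1.50/keyfile"

# if the ftp server was down, wget leaves a 0-byte file behind;
# remove it so Unraid prompts for a key instead of trying an empty one
if [ ! -s /root/keyfile ]; then
    rm -f /root/keyfile
fi
```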
  13. I don't think you're on the right track with your plans.. I think you're missing the point of using, and the features of, Unraid. While Unraid itself is not really a RAID system, you don't really "need" RAID because of the parity drive: you won't have a RAID-1 copy of a drive, but if a drive failed you'd be able to rebuild it from parity. I believe you can pass a RAID-1 drive to Unraid, but I'm not sure how that would work, and it might be more complicated than it's worth. Basically, if you want to use RAID, you don't want Unraid.. that's my opinion. Also, while you 'could' run a Windows VM to install the apps you were talking about, that's not efficient. You'd be creating unneeded overhead by running a VM and then relying on that VM to work properly at all times; if something were to happen to it, such as a VM file getting corrupted, there goes everything running inside it. Running any OS inside a VM is never going to be the most "capable" system.. it's usable for a lot of things, but it's not a 1-to-1 comparison against running the actual OS on your hardware. So while you could probably use the NVMe drive for a Windows installation, it's still running as a VM and you're limited to that "experience". If that makes sense... You want Plex -- run it in a docker. Ubiquiti -- you probably want the UniFi Video docker. Roon (music server?) -- you might be able to find a docker that works, or another solution. A file server would be through SMB shares, no problem there. So let's say you drop the RAID portion of things and go fully Unraid.. get the new drives and then you could run your system like this: 1x 512GB SSD cache drive (buy one, or use the Samsung NVMe drive), 1x 6TB parity, 2x 6TB data drives, 1x 3TB WD Green data drive, 1x 1TB WD Black data drive = 16TB of storage, all protected by parity. But since you already have one bad drive from each existing disk set, I'd be somewhat concerned about the other disks too.
Either dump them, or double up on the parity drive: 2x 6TB parity, 1x 6TB data drive. If one drive fails at a time you can restore from parity, but if 2 drives fail at the same time you cannot rebuild 2 disks from 1 parity drive (read the wiki about parity drives). Hope this helps a little..
  14. You are making multiple assumptions that don't really matter. If someone wants cloud backup then that's the only conversation. 1) What if you have no family members? 2) What if you have family without internet access? 3) How do you ensure that the backup drive will always be plugged in, turned on, and accessible? 4) How do you ensure network connectivity, router ports, firewall access, etc so that #3 applies? The list goes on. Sure it's an offsite backup if you can make everything work but that's not what is being talked about. If you want cloud storage (which has access, redundancy, security, etc) you expect that there will be a recurring cost.. it doesn't matter if there are cheaper ways to do it.
  15. I think Harro was thinking about Backblaze B2, which appears to be a different product/package and cheaper. https://www.backblaze.com/b2/cloud-storage-pricing.html Either way this thread is not about which cloud provider to use, so in that regard, excellent write-up on implementing Google Cloud. As for pricing on Google, it appears that 2TB of storage would be around $2.50/mo USD, and retrieval of the same would be $103 USD. If that's accurate then it would appear that even the Backblaze B2 comparison is against a different Google service, as storing 2TB there seems to cost $10 USD/month. The only concerns I'd have with using Google are 1) privacy -- what sort of encryption does it use on your files, and are you really sure that Google/anyone can't access them? And 2) longevity -- Google is famous for shutting down services just when people actually get used to them. jameson_uk, I'd be interested in whether you could add any extra layers of encryption to your storage?
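On the extra-encryption question, one approach (my suggestion, not something from the write-up) is rclone's crypt backend, which encrypts files client-side before they ever reach Google. A rough sketch, where the remote names and bucket are made up and an existing Google Cloud Storage remote called "gcs" is assumed:

```shell
# wrap the existing "gcs" remote in a crypt remote; the password would
# normally be set interactively via `rclone config`
rclone config create gcs-crypt crypt \
    remote "gcs:my-backup-bucket" \
    password "$(rclone obscure 'my-secret-passphrase')"

# anything copied through gcs-crypt is encrypted before leaving the server
rclone copy /mnt/user/Backups gcs-crypt:
```

That way Google only ever stores ciphertext, which addresses the privacy concern regardless of what Google does server-side.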
  16. FreeNAS does use ZFS, so you can go that route if you want to and are up for an adventure. If you have everything from the FreeNAS already backed up to an external drive (assuming FAT32/NTFS, as most people don't change the format of a regular store-bought external drive), then *I* would just not bother with the ZFS drives from FreeNAS: get all the drives up and running on Unraid and copy the data back from the external drive later. As for the Plex metadata.. they say most of the data should be compatible across systems, so it might be worth trying to back up at least some of it, like your watched-status database. On FreeNAS the entire metadata/database location is ${JAIL_ROOT}/var/db/plexdata/Plex Media Server/ -- so you could potentially move that entire folder to Unraid and restore everything. I would archive the folder somehow -- whatever you can do on FreeNAS -- into a zip/rar/lzma/whatever, because there are a LOT of files that would take forever to transfer by ftp/smb/whatever. Otherwise, since the cache/metadata isn't really that important and can be redownloaded in a couple of hours by Plex, I'd concentrate on just the databases in /Plug-in Support/Databases -- or none of it, if you don't really care.
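A sketch of archiving just the databases on the FreeNAS side before the move (the JAIL_ROOT path and jail name are assumptions, adjust for your system):

```shell
# compress only the Plex databases -- far fewer files than the full metadata tree
JAIL_ROOT="/mnt/tank/iocage/jails/plex/root"
cd "$JAIL_ROOT/var/db/plexdata/Plex Media Server"
tar -czf /tmp/plex-databases.tar.gz "Plug-in Support/Databases"
```

One tarball transfers over smb/ftp far faster than the thousands of small files inside it; on the Unraid side you'd extract it into the new container's equivalent Plex Media Server folder.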
  17. That's the one I mean.. there's no point in reinventing the wheel if the wheel already works.. I plan to get one that is known to work out of the box.. 9205, 9211, 9207, something like that. https://wiki.unraid.net/Hardware_Compatibility#PCI_SATA_Controllers
  18. I don't know what games you plan on, and honestly I don't know what it takes to host games in a VM, but your CPU and RAM might be / could be / probably are too low to expect any good performance for gaming. Can't help you with the LSI card as I plan on getting one from the list of "works out of the box".
  19. I have not seen anything that does these things exactly as you want them. The problem as I see it is that you want to use one program (like Thunderbird) for day-to-day use and then disconnect your backed-up archive from Thunderbird into another program to view/search as an independent, separate archive. Too complicated! You'd be better off setting up your day-to-day Thunderbird to store messages offline (thereby creating your own backup archive within Thunderbird), and then backing up your entire Thunderbird profile to Unraid (not for viewing, just for backup purposes). This is basically how I have mine set up. Everything still exists within Thunderbird, but I have a backup of my data. You can have Thunderbird run your profile directly off of Unraid so that it's always in somewhat of a backed-up state (parity), but I've found that having your profile actively running on an SMB share causes Thunderbird to run a bit slower. I'm about to move my profile back to my local drive and only use Unraid for backup; I tried running it this way so that I could access my email data from any computer, not just my main computer running Thunderbird. Another thing I've tried, which works generally fine, is running Thunderbird in a docker container and accessing that instance remotely. I didn't use a specific Thunderbird docker though; I used VNC Web Browser, manually installed Thunderbird on it, and copied in my existing profile data. Then I can access Thunderbird from a web browser. This works fine also, but again is not exactly what you're looking to do. I suppose you could configure two instances of Thunderbird to semi-accomplish what you want: your day-to-day Thunderbird on the PC and this docker Thunderbird to keep an archive of your stuff -- but if you were to do that, you might as well just keep everything in your PC Thunderbird to begin with.
I guess my question is why don't you guys just want to keep everything in your Thunderbird? It really makes everything easier and just works. I'm open to any other thoughts, input, or ideas about this.. I myself am still trying to find what works best for me as far as running Thunderbird and maintaining data archives / accessibility on multiple pcs.
  20. Perhaps VS Code? https://code.visualstudio.com/docs/containers/quickstart-python Not sure if code-server in Community Applications is worth looking at.
  21. That is awesome. Very nice work. At first I was like "where is it!?!" then I found that you have to re-scan for missing movies to repopulate the information.
  22. Should be good to go. When you start up with the new machine assign your disks to the same locations and there shouldn't be any problems.
  23. Benchmark results on the Sabrent Rocket Q 8TB NVMe, the only 8TB drive that popped up as the first result. So on a drive like this, I guess it would depend on how fast the write cache is flushed out to the NAND. I'm not an SSD expert, but if you're writing 8TB to an 8TB drive and the write cache fills up at 2TB before slowing to 276MB/s, that sounds like the remaining 6TB is going to happen at the slow speed, since I'd guess the cache only drains at around that same 276MB/s -- same speed out, same speed in.
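Rough math on that slow phase, using the numbers above (2TB of fast cache, ~276MB/s afterwards):

```shell
# back-of-the-envelope: time to write the remaining 6 TB at ~276 MB/s
remaining_mb=$((6 * 1000 * 1000))      # 6 TB expressed in MB
seconds=$((remaining_mb / 276))        # ~21739 s
echo "$((seconds / 3600)) hours"       # prints "6 hours"
```

So filling the whole drive would mean roughly six hours stuck at the throttled speed, on top of the fast first 2TB.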
  24. I wouldn't be against AMD at all since you're using consumer hardware. Since I "want" to use server grade stuff it's a little more difficult, if not impossible, to do an AMD platform. There are plenty of folks here using AMD. You would just have to research if there are any current issues with recent AMD chips to resolve for using with Unraid. I remember watching one of SpaceInvaderOne's videos and he used some flag in one of the config files for his AMD processor, don't know which video it was, or why it was needed, but things like that you'd need to know.
  25. The problem with either SSD or NVMe is that once you fill up the onboard write cache (and/or hit the thermal throttle), your write speeds drop significantly, defeating the purpose of what you're trying to accomplish by using them. This has been a hot topic of debate lately, and there are enough articles out there about the issue that it can't be ignored. I don't know if there are better types of drives made for corporate use vs. home consumers, but your average consumer SSD/NVMe drive is only going to have less than a couple hundred GB of fast write cache before it starts to slow down. Google it. My only suggestion is to reconsider whether you actually need those backups.