Caldorian

Everything posted by Caldorian

  1. I have an APC Back-UPS Pro 1500 connected to and configured on my Unraid server. The UPS is now reporting that the battery has "failed", even though it looks like it's working fine. As a result, my Unraid server is pinging me every 9 hours about the failed battery. Is there any way that I can disable this notification from happening, while keeping the other notifications intact?
  2. Since I cut over from a Windows 10 system to Unraid, using the linuxserver.io Plex image, some of my users have complained that when they try to stream to a Chromecast, they sometimes have to kick it off 2 or 3 times before it goes through successfully. Any thoughts on what the issue might be, and/or where I can look to find out?
  3. Tried playing with this docker over the weekend to see if I could use it to replace my delugevpn docker (one thing Deluge is missing that I'd like to see is information about which blocks/pieces are downloading and/or missing). Ran into 2 major issues: 1) Ports being reported as not open. I figured out how to modify the rtorrent.rc file to set custom ports, but it still didn't work. Admittedly, this could be an issue with my VPN provider, but I haven't seen anything from Deluge indicating that it is. 2) And this is the bigger issue for me: I couldn't get magnet links to work. With Deluge, I just need to copy the link, paste it as a URL, and the torrent kicks off without issue. Trying a similar process with rtorrent, it just hangs there with the torrent labelled as the magnet hash value. It wasn't until I downloaded the full .torrent file and imported it that things would start to work. The other thing bugging me was that my test torrents were listed as having errors, but I couldn't find anything that would tell me what the actual errors were. The logs tab only shows general data about the whole rtorrent environment, and nothing else I poked at seemed to show anything.
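For reference, the rtorrent.rc change I mean looks roughly like this (the port number is just an example; whatever you pick has to match the port your VPN provider actually forwards):

```
# rtorrent.rc -- pin the listening port instead of letting rtorrent pick one
network.port_range.set = 49160-49160
network.port_random.set = no
dht.port.set = 49160
```

On the older 0.8.x-style config syntax the equivalent lines are `port_range = 49160-49160` and `port_random = no`.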
  4. Running 6.5.2 on my system. Use SSL is set to Auto, but when I access my server I'm not redirected to an HTTPS connection. I guess as a result(?), the "Open Web UI" link in UnBALANCE is left as a regular http:// link.
  5. I'm running into the same issue using the latest version of Chrome. Manually setting the URL in the browser to https://<ServerIP>:6237 fixed it for me. So it sounds like the plugin is trying to use SSL by default now, even if I'm accessing my unRAID server over unencrypted http. In a separate issue, I was moving a few hundred GB from one disk to 2 different ones in preparation to remove the disk from my server. Not sure what happened, but around the end of the move, my unRAID server seemed to lock up. The unBALANCE GUI wasn't updating, my unRAID GUI wasn't updating, and I couldn't get in with PuTTY either. Ended up giving the server a hard reboot. No big deal. A parity check later (no issues found), I continued with the data move. However, it appears that things died at a point where it had completed a large data copy to the new disk(s) and was in the middle of removing the moved files from the old disk. Thus, it left me with many files that were duplicated on 2 different disks. Running the Scatter/Move option again in unBALANCE seemed to make it recognize these were duplicate files, and it just deleted the files on the old disk (I'm making this assumption given that it was reporting processing speeds in the 3000+ MB/s range). Not complaining that it seems to have worked out in the end. But I'm curious if this is expected behavior.
  6. Did you find a solution for this? I'm considering spinning up some services myself, and I'm in the same situation (Bell Fibe customer?). Additionally, I want to get things entirely up and tested before I set up the port forwards on the router, so being able to do split DNS would be useful.
  7. Did you find a solution for this? I just started running it myself, and I'm finding the same thing. At its current rate, it's going to take over 80 hours to clear my 640GB disk. Edit: Well, it's getting even worse. It started off reporting around 8.0MB/s. 1500s in though, it's only at 2.5GiB complete, and is now reporting 1.7MB/s. Seems like it's losing about 0.1MB/s for every 100MB written or so.
  8. Getting the same issue. 2 of my disks are being detected, but aren't getting benched. Where can I get you the diags/logs?
  9. So I've been playing with Duplicati to back up my everyday machine to my unRAID server (using the local directory option to go direct to an SMB share). Working as expected. My next step there will be to convert the local install to a service so it'll run without anyone logged in. After that will be adding Duplicati to my unRAID server and backing that up to the cloud, probably following @gridrunner's guide. So I'd be curious if anyone has any experience with Duplicati backing up its own dblock/dindex/etc. files. That said, has anyone had any experience with Duplicacy? The few reviews I'm seeing look decent. The licensing model looks strange (the CLI is free for personal use, but you need to pay for the GUI and/or commercial use), though it's not particularly expensive. I'd also be curious if people have gotten it up and running on unRAID.
  10. Agreed. I just started using Duplicati myself to test it out. You can back up to shares on unRAID without using Minio. You just need to use the local path option, and then manually set the path to the UNC path. So Minio is only required if you want to be able to back up remotely. However, I could imagine that using Minio might make things a little easier. That'll be my next test.
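For anyone trying this, the destination setup I mean looks something like the following (the server and share names are just examples from my setup, not defaults):

```
# Duplicati backup destination, typed in manually
Storage Type: Local folder or drive
Folder path:  \\tower\backups\duplicati
```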
  11. Had an interesting issue crop up last night, curious if anyone else has seen it. I was copying about 1TB of data from Windows 10 via SMB to my unRAID server, which has the Plex docker running on it. It was getting near the end of the data set, and I tried to open up Plex on my iPhone. It couldn't get a response from the server. I tried opening the web GUI, and it just sat on the spinning circle. So I restarted the docker. The docker was exceptionally slow to boot, taking something like 5 minutes to get to the point where it says "Server startup is complete. Host name is <blah>." Even once it got there, I still couldn't get a response from the web GUI. Went to my Windows box, and it had paused the upload of data, prompting me with a dialog stating that some of the file properties couldn't be copied, and asking if it should continue. I said yes, the copy finished in another couple of seconds, and then Plex came back to life. The array share I was copying to has its cache setting set to No, and my Plex docker's config is on the standard appdata share (with cache set to Prefer). Anyone have any thoughts? Edit: Here's an example of the dialog I'm talking about: https://superuser.com/questions/548221/file-copy-stops-to-ask-about-properties-that-cant-be-copied-to-new-location
  12. I believe it's because Duplicati doesn't support backing up to SMB shares. So you install Minio on unRAID to present a share as an Amazon S3 compatible service that Duplicati can target against.
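If it helps anyone, the Minio container setup is roughly this (the host path, keys, and port here are all examples to adapt, not anything official):

```
# Minio sketch -- exposes /mnt/user/backups as an S3-compatible endpoint
docker run -d --name minio \
  -p 9000:9000 \
  -e MINIO_ACCESS_KEY=backupuser \
  -e MINIO_SECRET_KEY=use-a-long-random-secret \
  -v /mnt/user/backups:/data \
  minio/minio server /data
```

In Duplicati you'd then point an S3-compatible destination at http://<ServerIP>:9000 with those same keys.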
  13. Can't decide if you're serious or not, but in case: 1) So it's not a 3-phase process (create the user in the GUI, modify the files by hand, then restart the server <restarting the array does allow SMB to work, but it takes a full array restart for the GUI to recognize the modified name>) 2) Prevent typos while hand-modifying 5 different config files 3) Allow you to delete the users without having to go back and re-edit the files back to non-email-based names
  14. Interesting. Which plugin is that? That said, it still means you have to know which config files are involved. In my case, it was 5 different ones. Being able to do it right from the GUI would simplify the whole process.
  15. Getting through my first unRAID installation, it would be really helpful if there were a way to create new users/SMB users whose username is an email address. This let me properly connect to my shares from my Windows 10 machine that uses a Microsoft account. I'd also think it would be useful for machines that are domain-joined and use UPNs for login. I followed @geekazoid's instructions for converting a username, and it worked well. It would be nice if you didn't have to SSH into the server and hand-edit files, though.
  16. So I've been playing with unRAID for the last 10 days now, after spending way too long researching various options before that. And after using it, I've come to the conclusion that it's probably the perfect system for my desired usage (personal home storage server, media server, etc.). However, I really wish that some of the ways it works were far more prevalent in the documentation. If I had found this stuff earlier, I wouldn't have spent a month looking at and debating options. I would have gone to unRAID right away. So here's my list of things I wish I could have found more easily (in no particular order).

Parity occurs in real time
You'd think this would be obvious, but when I first started looking at unRAID, I couldn't find it stated anywhere. I knew files were stored whole on disks, and that parity information was written out to the parity disk, but nowhere was it obvious whether this was done in real time to maintain data integrity, or on a schedule (like SnapRAID). Also, discovering that unRAID calculates parity at the block level, not the file level, was helpful. Once I figured this piece of information out, the inner workings of the data integrity system made a lot more sense.

unRAID doesn't have data caching, it has data tiering
Calling them cache devices really is a disservice to the capability of the setup and the control that you have over how you use them. One of my hesitations, and what I spent a lot of time trying to research, was how I could create virtual machines that had their main storage on SSDs, since I knew that using an SSD as an array device was frowned upon. It never occurred to me that a share could be set to use the cache pool as its main form of storage (and either not use the main array at all, or only use it when the cache is full). The difference between "Prefer" and "Yes" for the share cache option could also be clearer in the help dialog. That one took me a while to figure out.

You have a lot of control over which disks your files are stored on
Yes, unRAID can be used in a set-and-forget manner. But if you've got OCD about where your files are actually located on disk, it fully supports that too. Between the include/exclude disk options on the shares and the split-level settings, the user has a lot of control in telling unRAID which disks should be used to store which files. And if unRAID puts a file on a disk you don't like, just go into the terminal and move it like you would any file in Linux.

Dockers remove a lot of the need for VMs
Admittedly, this one isn't so much about unRAID as about my lack of exposure to dockers/containerization. When I first started conceptualizing my home server, I was thinking that I'd have the base OS as my NAS layer, then I'd need one virtual machine to run Plex from, a second virtual machine as my "media acquisition" server so I could isolate it behind a VPN, etc. But once I got the server running and read a couple of install guides, the power of dockers became very apparent. I'm really glad I made this first build on an old system, rather than going out and buying new hardware first, as I would have drastically over-provisioned the amount of RAM I'd need.

So this is my list from my first 10 days. I'm curious what things others have learned using the system that they wish they knew earlier.
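On the "Prefer" vs "Yes" point, here's my own summary of the cache options (my understanding from testing, not the official help text):

```
# unRAID share "Use cache" settings, as I understand them (6.x-era behavior):
#   No     -> writes go straight to the array; the cache is never used
#   Yes    -> new writes land on the cache; the mover later moves them to the array
#   Prefer -> files live on the cache; the mover moves them array -> cache when it can
#   Only   -> files exist only on the cache, never on the array
```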
  17. Hey @geekazoid, I finally managed to get this working. Your instructions were pretty good. However, the biggest thing I had to do was turn off public access to all my exported shares. Once I did this, access worked as expected. And no, I didn't end up having to fix the account on my local system so that the Windows username matches the local part of my Microsoft account name. Just make sure no shares are published publicly, so that Windows never accesses unRAID in an unauthenticated manner. Now, if only the Create User dialog were amended to allow the creation of email-like usernames so you don't have to manually edit files. Just tried this on a QNAP SAN too, and it worked flawlessly (again, after first disabling guest access on all shares).
  18. Already had all my credentials removed, and tried connecting to the secured shares first. Played around with it some more today. I'm wondering if the issue is that my Windows "username" is different from the local part of my email address (i.e. Windows says via whoami/"echo %username%" that my username is "john", but my email address is "[email protected]"). I think I'll try clearing all the users off my unRAID server and setting things up again clean on a VM to see if I can a) get things working, and b) re-create the failure once it works, which shouldn't be hard
  19. Hi @geekazoid, I tried using your instructions, but without success. Most of my shares are publicly available, but the one share that I tried to restrict to myself prompts me for credentials. After entering my credentials, it fails to connect. Any thoughts on where I can keep troubleshooting this?
  20. Hi @binhex, trying to get this up and running, and the docker refuses to start. I'm trying to run it with KeepSolid VPN Unlimited. I set VPN_PROV to 'custom' and put the ovpn file in the openvpn folder. However, every time I try to start the docker, it gives me the same error: "VPN_PORT not found". I've attached the ovpn file (with cert/key data removed) and the supervisord log. supervisord.log config.ovpn Edit: Seems like I managed to get past this now. I had to edit the ovpn file using WordPad and add the port (1194) to the end of the remote line. I was trying to do that in Notepad before, but would get the same error (VPN_PORT not found). Not sure why, but with WordPad it works.
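For anyone hitting the same thing, the change was making sure the remote line in the .ovpn file carries an explicit port (the hostname below is a placeholder, not KeepSolid's actual server):

```
# config.ovpn -- the remote directive needs the port spelled out
# so the container's startup script can detect VPN_PORT
remote vpn.example.com 1194
proto udp
```

If I had to guess, the Notepad/WordPad difference was down to line endings in the file, but I haven't confirmed that.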