
danioj

Members
  • Posts: 1,530
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by danioj

  1. Good mod. I like it. I also like the extra fans mod. Do you have a post detailing what you did there? Assuming that they are on intake!?
  2. Quote

    Live as if you were to die tomorrow; learn as if you were to live forever - Mahatma Gandhi

     

  3. Back in the day it was much harder than it is now; unRAID 6+ gives access to SMART data in the GUI and can highlight attributes which have degraded or are failing. Also, I'd like to make 2 points first (and I know this is not quite in response to what you asked) which are important:

     First and foremost, keep a backup. Parity is NOT backup.

     Second, the addition of a 2nd parity drive in your system gives you additional protection in the event of drive failure(s)*.

     * You might question the need for this if availability is not an issue (e.g. you have a mirrored backup server which you can switch to).

     Note: TRIM is a tool for SSDs only and not for array (mechanical) devices. Unless of course your array is made up of SSDs, but AFAIK unRAID does not officially support this (although I know some users have played with it), nor (I believe) does the TRIM plugin (which I assume you are using).

     As for preventative actions, I do the following, and I believe it is sufficient and consistent with the practice the majority of unRAID users follow:

     - Parity check once per month.
     - Turn on notifications so you are notified (via whatever method you prefer) when SMART attributes get iffy.
     - Manually check SMART attributes once per month (a quick console way to do this is at the end of this post).

     There is also the extended "keep disks spinning vs spin down" argument (which I will not re-ignite here), which (depending which side of the fence you sit on) could extend / reduce the life of your disks: https://forums.lime-technology.com/topic/10200-to-spin-down-or-not-to-spin-down-that-is-the-question/

     In summary, if you keep doing what you are doing PLUS set up your notifications and add a bit of proactive checking, I think you will be as proactive as the majority of unRAID users I know.

     EDIT: I'd like to add a comment on your mover execution frequency. Unless you're running a protected BTRFS pool as your cache device, I don't think running your mover once per week is sufficient (even if you are using a protected BTRFS pool as cache, I feel this is still not good enough, as BTRFS is IMHO not yet widely considered stable enough and I don't believe there are unRAID GUI tools to support restoring cache data). This is because your files are not protected by parity while they are not on the array, and any cache device failure could result in data loss. My advice would be to run the mover at least once per day (assuming daily use of the array, of course). If the data is REALLY sensitive (and you wish to keep using cache) then I'd even suggest running the mover more frequently (e.g. every 6 hours). Of course, if your disks never spin down, you could even run it every hour as there would be no spin-up penalty.
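     If you prefer the console for that manual SMART check, something along these lines does the job (the device name is just an example, substitute your own; the same data is visible in the GUI anyway):

     smartctl -a /dev/sdb
     smartctl -A /dev/sdb | grep -iE "reallocated|pending|uncorrect"

     The first gives the full report; the second just pulls out the attributes most worth watching.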
  4. Your output indicates a successful operation. Based on the steps I believe you have taken, I feel your Parity and data are fine. As the instructions in the wiki indicate: https://lime-technology.com/wiki/index.php/Check_Disk_Filesystems#Running_xfs_repair
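     For anyone reading this later, the non-destructive check I'd run first (with the array started in Maintenance mode; the disk number here is just an example) is:

     xfs_repair -n /dev/md1

     The -n flag only reports problems without changing anything; drop it once you're ready to let it actually repair.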
  5. For convenience, here is a link to a post I made earlier with my timings: https://forums.lime-technology.com/topic/37847-seagate-8tb-shingled-drives-in-unraid/?page=10#comment-500321
  6. I don't think you are going to get an answer to that question. My understanding is that it depends on many things, including the drive specification, the extent of the file system damage, how the file system has been used, etc. Reports online suggest that it can take days. Therefore my advice would be to "set and forget". Check every now and then, but prepare for the long haul and let it do its thing. It will finish when it finishes.
  7. I am feeling there is a corrupt filesystem here. Then again, I have seen these errors when there is a bad disk. Some parts of the log "might" suggest memory issues too (although I'm not clear on this). In any case, now you have grabbed diagnostics I would say that it is safe to do a clean shutdown.

     If it was me I would work through the potential causes to try and diagnose. I have only had the chance to skim your diagnostics file as I am at work, but here are my initial findings: HUA723020ALA641 is showing as having a reallocated sector, which "might" indicate issues with that disk, although that is a stretch I think, as strictly speaking having a reallocated sector is not a bad thing. There are no pending sectors, which I would be more worried about.

     You could run xfs_repair to check the FS and fix any issues (if this is in fact due to a corrupt FS), but I would wait for further input before doing this. In the meantime, while you wait on others to pipe in, run a memtest overnight. You can do this from the initial unRAID boot menu.
  8. I only ever had pfSense working in a VM when I created bridges for each interface rather than using passthrough, and assigned each bridge to the VM (in the past you had to do this manually in the go script, but now unRAID can manage this for you). There was no performance penalty for doing this in my testing (although some other users have reported one).
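     From memory, the manual go script version looked something like this (interface and bridge names are examples only; on current unRAID you would just create the bridge in Network Settings instead):

     brctl addbr br1
     brctl addif br1 eth1
     ip link set eth1 up
     ip link set br1 up

     Each bridge is then assigned to the VM as its network source instead of passing the NIC itself through.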
  9. Don't reboot. Can you please post your diagnostics file? Type diagnostics from the command line and grab the file from the USB, OR use the GUI to grab it.
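     i.e. from the console just run:

     diagnostics

     and the zip it creates should end up in /boot/logs/ on the flash drive.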
  10. I "think" this is a case of a dead / corrupt USB stick, and that this is telling us that the kernel cannot communicate with the USB stick and therefore cannot mount the root file system. Either way, I'm inclined to advise you to upgrade the server to the latest version first anyway. Undertaking that process you will quickly determine if there is an issue with the USB stick AND you will also get yourself on the latest version, allowing you to capture more detailed diagnostics if there are any further issues. Back up your existing USB first (if you can) if you haven't already (see below for a quick way to do this).

     If you need to change your USB key: https://lime-technology.com/replace-key/

     To aid you in upgrading: http://lime-technology.com/wiki/index.php/Upgrading_to_UnRAID_v6

     First of all, however, let's see if anyone else can offer something else that I haven't seen ....
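     As a quick way of doing that backup from the console (assuming the stick is still readable and the array will start; the destination path is just an example):

     mkdir -p /mnt/disk1/flash-backup
     cp -r /boot/* /mnt/disk1/flash-backup/

     Otherwise, pull the stick and copy its contents on another machine.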
  11. Good luck, nice time to come back and get involved again!
  12. Upgraded again. No issues. This new focus on Security Vulnerabilities is EXCELLENT LT! Awesome!
  13. So subjective. I would say that if there are apps available via Docker to meet your requirements it is more efficient / simpler etc to run the Dockers vs a VM. JM2C.
  14. Dude, I think the following meets your requirement (although NOT a Docker): install @dmacias' Command Line Tool plugin, which allows you to run a terminal session from a browser: https://lime-technology.com/forum/index.php?topic=42683.0 Then, once you have logged in, run Midnight Commander (see below). For your consideration.
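     i.e. once the terminal session is open, just type:

     mc

     and Midnight Commander's two-pane file manager comes up.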
  15. To connect to your server: https://lime-technology.com/forum/index.php?topic=43317.0

     To maintain a VPN connection to a VPN provider: http://www.pfsense.org

     You can run pfSense via a VM in unRAID, but it requires some configuration. The best solution is to run it on dedicated hardware I guess (although I run it in a VM hosted by unRAID just fine). Remember, if you're encrypting a connection upwards of 30Mb/s with today's recommended encryption settings, your standard commercial routers are not powerful enough to handle it (although they "support" VPN), hence the need to run it in a VM or on dedicated hardware. An Intel CPU is recommended.

     In addition, if you use pfSense you can route some IPs via VPN and some via Clearnet (normal ISP). Plus you can assign physical interfaces, some to VPN and some to Clearnet. Then you can attach wireless access points to those interfaces, meaning you can have VPN wireless and Clearnet wireless.

     In closing, it has a great firewall suite, meaning you can also easily apply rules to block DNS leaks and easily apply Policy Based Routing! My setup incorporates all of the above. It's not hard.
  16. I'm inclined to think the same as Squid: not a clue how to fix, TBH. I don't think it is a SR issue; more a compatibility issue between SR and RFS, if we agree with Squid. If we do, you have 2 options IMHO: talk to the SickRage devs re this issue and stopping SR from writing attributes to file which RFS can't deal with, OR talk to LT about the mover and how it deals with these. The thing both of you seem to have in common is RFS. I'd prefer not to advise a move to XFS as a "fix" but, are you planning on moving to XFS any time soon? Either way, I don't think this is a SR Container issue.
  17. "I feel like I'm missing something REALLY simple" You're not. I know what the issue is. Sent a PM but you haven't seen it so I'll just post it here. Switch the Container to Bridge Mode in the Docker tab. Add an additional port (e.g. 1194 UDP), call it whatever you want. Then hit save. Then it will work. The connectivity issue has to do with the Container running in Host Mode. I am going to co-ordinate some changes to the guidance centrally. It's a bit of a pain as it was originally set up to run in Bridge Mode but was changed (due to issues experienced by users) to run as Host. Now it appears it has to go back again. "Yet...post 3 or so says things NEED to be HOST and PRIVILEGED" I understand, we shall clear the guidance up as required. Just trust me, give it a go. I'm 99.9% sure this is the issue .... Confirmed via IRC that the move to Bridge mode worked.
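     For anyone setting this up from the command line rather than the Docker tab, the equivalent is roughly the following (the image name, ports and paths shown are the usual linuxserver/openvpn-as defaults; adjust to match your own template):

     docker run -d --name=openvpn-as \
       --net=bridge --privileged \
       -p 943:943 -p 9443:9443 -p 1194:1194/udp \
       -v /mnt/user/appdata/openvpn-as:/config \
       linuxserver/openvpn-as

     The important bit is that once the Container is no longer in Host mode, 1194/udp has to be published explicitly.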
  18. "I feel like I'm missing something REALLY simple" You're not. I know what the issue is. Sent a PM but you haven't seen it so I'll just post it here. Switch the Container to Bridge Mode in the Docker tab. Add an additional port (e.g. 1194 UDP), call it whatever you want. Then hit save. Then it will work. The connectivity issue has to do with the Container running in Host Mode. I am going to co-ordinate some changes to the guidance centrally. It's a bit of a pain as it was originally set up to run in Bridge Mode but was changed (due to issues experienced by users) to run as Host. Now it appears it has to go back again. "Yet...post 3 or so says things NEED to be HOST and PRIVILEGED" I understand, we shall clear the guidance up as required. Just trust me, give it a go. I'm 99.9% sure this is the issue ....
  19. "I feel like I'm missing something REALLY simple" You're not. I know what the issue is. Sent a PM but you haven't seen it so I'll just post it here. Switch the Container to Bridge Mode in the Docker tab. Add an additional port (e.g. 1194 UDP), call it whatever you want. Then hit save. Then it will work. The connectivity issue has to do with the Container running in Host Mode. I am going to co-ordinate some changes to the guidance centrally. It's a bit of a pain as it was originally set up to run in Bridge Mode but was changed (due to issues experienced by users) to run as Host. Now it appears it has to go back again.
  20. "Thanks! I have been using linux for years, but never really played with Docker. I was under the impression that it was connecting to the LDAP server in unRAID (unRAID is using LDAP, right???)" Wow - what a nice quote I made. I just noticed that you couldn't connect to the interfaces of the Container either. I'm watching the video. Inspired by your video my friend, I am producing a small video myself which walks you through how to set this up! Standby!
  21. Yes, that's what my test above was. No cache. Disk to disk. No slowdown irrespective of file size from 10GB onwards ....
  22. "Thanks! I have been using linux for years, but never really played with Docker. I was under the impression that it was connecting to the LDAP server in unRAID (unRAID is using LDAP, right???)" Wow - what a nice quote I made. I just noticed that you couldn't connect to the interfaces of the Container either. I'm watching the video.
  23. Hi mate and welcome to unRAID. I think you have the wrong end of the stick. The user system in unRAID has no relationship with the user system within the OpenVPN-AS Container.

     I am not sure how familiar you are with Docker Containers, but (as I don't have a heap of time) you can think of them as mini sandboxed Linux installations, in that the Container has its own little filesystem, user control etc. and does not communicate with the unRAID OS (well, that's technically wrong, but it gets my point across). This is why you map paths / network ports between unRAID and each Container. unRAID users are for user shares etc. IF you enable security in unRAID. The root user is still required to log into the console or the webGUI (if you set a password).

     Therefore, you NEED to change the admin password and add a user IN THE CONTAINER via the command line as follows:

     command:
     docker exec -it openvpn-as passwd admin

     sample output:
     root@main:~# docker exec -it openvpn-as passwd admin
     Enter new UNIX password:
     Retype new UNIX password:
     passwd: password updated successfully
     root@main:~#

     command:
     docker exec -it openvpn-as adduser newuser

     sample output:
     root@main:~# docker exec -it openvpn-as adduser newuser
     Adding user `newuser' ...
     Adding new group `newuser' (1004) ...
     Adding new user `newuser' (1004) with group `newuser' ...
     Creating home directory `/home/newuser' ...
     Copying files from `/etc/skel' ...
     Enter new UNIX password:
     Retype new UNIX password:
     passwd: password updated successfully
     Changing the user information for newuser
     Enter the new value, or press ENTER for the default
         Full Name []:
         Room Number []:
         Work Phone []:
         Home Phone []:
         Other []:
     Is the information correct? [Y/n] Y
     root@main:~#

     Then you can access the OpenVPN-AS GUIs (as specified in your setup) using admin to begin with (and then, as I do, specify the user you've just added as an admin user) and configure as required. Remember, it is because of the above that each time the Container is updated / reinstalled etc. this will have to be re-done.

     EDIT: if you accidentally add a user called "newuser" (rather than specifying a username of your choice) via a copy and paste of the above, use this to remove it:

     sample command:
     docker exec -it openvpn-as deluser newuser

     sample output:
     root@main:~# docker exec -it openvpn-as deluser newuser
     Removing user `newuser' ...
     Warning: group `newuser' has no more members.
     Done.
     root@main:~#