Healadin

Posts posted by Healadin

  1. 7 minutes ago, pluginCop said:

    Because of the monster nature of this particular thread, it's rather hard to gather what exactly is happening.

     

    Is it fair to say that Dynamix S3 Sleep is for all intents and purposes non-functional (and has been since 2018)?  In looking through the code change that was posted earlier, it would appear that the routine that is determining whether or not the drive(s) are idle is completely broken?  Right / Wrong?

     

    Is there still a usage case for this plugin putting your server to sleep even if the drive(s) are still active?

    The sleep button works for me, so I think the routine that checks drive inactivity when autosleep is set is broken, and the fix from darkside40 should solve it (I didn't have time to try it again today).

  2. 7 minutes ago, darkside40 said:

    From what I know, it works for many others. Maybe an editing or packaging failure?

    I should go into

    /config/plugins/dynamix.s3.sleep

    on the flash drive and copy dynamix.s3.sleep.txz to my local machine, right?

     

    Extract it and go into

    dynamix.s3.sleep/usr/local/emhttp/plugins/dynamix.s3.sleep/scripts/

    modify the s3_sleep file (comment out the 3 lines and add the condition at lines 232-236), and modify the version at the top,

     

    then compress the top dynamix.s3.sleep folder again and put it back into

    /config/plugins/dynamix.s3.sleep

    on the flash drive, take the checksum that I get from "md5sum dynamix.s3.sleep.txz", and change it in the header of the

    /config/plugins/dynamix.s3.sleep.plg

    file, and that is all?

     

    I have just written down what I did; hopefully I didn't miss any step? Thanks for all the feedback, I will give it another try tomorrow.
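
    For reference, here is roughly what those steps look like from a Linux shell. This is only a sketch; the archive layout and paths are assumed from the posts above, so verify them against the original package before repacking:

    # with dynamix.s3.sleep.txz copied off the flash drive
    mkdir s3fix && cd s3fix
    tar -xJf /path/to/dynamix.s3.sleep.txz        # a .txz is just a tar archive compressed with xz
    # edit the s3_sleep file under .../dynamix.s3.sleep/scripts/ as described above
    tar -cJf ../dynamix.s3.sleep.txz .            # repack so the layout matches what was extracted
    md5sum ../dynamix.s3.sleep.txz                # new checksum for the .plg header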

  3. 30 minutes ago, darkside40 said:

    You must calculate the MD5 hash of the new txz and edit that in the plg. Then reboot your server and you should be using the edited script.

    Btw I can download the attached file without any issues.

    OK, now there is no longer even a sleep button... so I guess it is safe to assume it would not work for me, right? :/

  4. 4 minutes ago, darkside40 said:

    You must calculate the MD5 hash of the new txz and edit that in the plg. Then reboot your server and you should be using the edited script.

    Oh, it's just the md5sum command on Linux? That should be easy then.
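
    For reference, the output is just the 32-character hash followed by the file name, something like this (the hash below is made up purely for illustration):

    md5sum dynamix.s3.sleep.txz
    3f2c9a7e0b1d4c6e8a9f0b1c2d3e4f5a  dynamix.s3.sleep.txz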

  5. On 2/23/2020 at 11:16 PM, darkside40 said:

    The sleep plugin is known to be dysfunctional for quite a few people since the last update nearly two years ago.

    You could give this a try:

     

    Hi,

    first of all, sorry, I thought I had already answered you.

     

    Thanks for the response. I tried to comment out those three lines and change the condition, then put it back into a .txz file (I zipped it and then turned it into a .txz with xz). Sadly, I haven't found what to change in that .plg file, and it doesn't show version 3.0.6.1 (I added it to the comments at the top of the s3_sleep file).
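
    For what it's worth, in Dynamix-style .plg files the plugin version and the package checksum usually sit in XML entity declarations near the top of the file, roughly like the two lines below (the exact entity names may differ, so match whatever your copy of dynamix.s3.sleep.plg actually contains); the version shown on the Plugins page most likely comes from here rather than from the comment at the top of s3_sleep:

    <!ENTITY version "the plugin version string">
    <!ENTITY MD5     "the new md5sum of dynamix.s3.sleep.txz">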

     

    The file that you attached to that post is no longer available, so if you could upload it again for me, that would be great :).

     

    Thanks once again for responding, I had already thought it wouldn't get an answer.

  6. Hello,

    I installed S3 Sleep a while ago (almost 2 months now) and, as far as I know, my Unraid server has never gone to sleep. My plugin config is in the attached screenshot. I also have duckdns and an OpenVPN docker container running (I think that might be the problem?). No idea, to be honest, but I would like to use sleep mode (the server consumes around 75 watts).

     

    I am also wondering if it's possible to wake it up over VPN, or at all? But I doubt it.

    Screenshot from 2020-02-04 12-59-59.png

  7. 1 hour ago, saarg said:

    Don't exec into the container. Just edit the file in the appdata share for openvpn-as. Then you can use nano.

     

    Ah, thx... I wasn't sure what to do, because when I connected to the Unraid console with PuTTY/the web console, "ls" showed nothing... but when I did "cd /mnt/cache/appdata" I managed to find the config file :)
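
    For anyone else looking for it, something along these lines from the Unraid console should get you there (the appdata folder name for the container is assumed here and may differ on your setup):

    cd /mnt/cache/appdata/openvpn-as   # or /mnt/user/appdata/...; the folder name depends on the container template
    nano etc/as.conf                   # as.conf sits under the container's config/etc, per the note below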

  8. 8 hours ago, jonathanm said:

    Have you read and followed the application setup guide on the github or docker hub link in the first post of this thread?

    Yes, I found it there, thx :)

     

    The "admin" account is a system (PAM) account and after container update or recreation, its password reverts back to the default. It is highly recommended to block this user's access for security reasons:


    1. Create another user and set as an admin,

    2. Log in as the new user,

    3. Delete the "admin" user in the gui,

    4. Modify the as.conf file under config/etc and replace the line boot_pam_users.0=admin with #boot_pam_users.0=admin boot_pam_users.0=kjhvkhv (this only has to be done once and will survive container recreation)


    IMPORTANT NOTE: Commenting out the first pam user in as.conf creates issues in 2.7.5. To make it work while still blocking pam user access, uncomment that line and change admin to a random nonexistent user as described above.
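
    Put together, the relevant part of as.conf would end up looking something like this on 2.7.5 ("kjhvkhv" is just the random nonexistent user name from the quote above; any nonexistent name works):

    # original default, kept here only for reference
    #boot_pam_users.0=admin
    boot_pam_users.0=kjhvkhv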

  9. 1 minute ago, trurl said:

    Have you setup any dockers / VMs yet?

    I tried some SFTP dockers (and after a little bit of tinkering with them I just removed them) and also duckdns (that one is the only one currently running). Also, assigning all my drives was the first thing I did when I started configuring Unraid, so the cache has been there from the very beginning; therefore the dockers should be there by default. I will have to check when I get home.

  10. Can someone also give me advice regarding the second part, please?

    Quote

    What is the best way to access my shares from outside of my private network? I was trying some SFTP dockers and forwarding a port on my router, with DuckDNS as the domain to access it, but unsuccessfully. Then I found an OpenVPN guide and it looked like the way to go, but I haven't tried it yet.

    Sadly, I do not have a lot of time to play with Unraid at home, but I can discuss ideas at work :).

  11. When I was deciding on an OS for my NAS, I came across similar setups, and people recommended FreeNAS -> you can make multiple "pools" there (they are called vdevs or something along those lines, I think), so you can make one pool from the 16 TB drives and another from the 4 TB drives.

     

    It depends on how many parity disks you want and how many drives you plan to have overall. If you would end up with 3x 16 TB, 3x 4 TB and 2 parity drives, you could go with one pool of 16 TB drives (with 1 of them as parity) and a second pool of 4 TB drives (1 of them as parity). In a case like this you would have to use 2 of the 16 TB drives for parity with Unraid.

     

    When you are deciding what software to use, you should consider which setup suits you best in terms of capacity lost to parity.
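
    To put rough numbers on that example (2 parity drives in both cases): a single Unraid array of 3x 16 TB + 3x 4 TB with dual parity has to use two 16 TB drives as parity, leaving 16 + 4 + 4 + 4 = 28 TB usable, while two separate pools with one parity drive each leave (16 + 16) + (4 + 4) = 40 TB usable.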

  12. 20 minutes ago, testdasi said:

    You need to set the share to cache = only, but if you follow the SpaceInvaderOne guide on YouTube you would set that up as part of the tutorial, so "by default"-ish.

    And what about plugins? Do they install straight to RAM, with just some kind of record in a config file? Or maybe to the USB stick?

  13. 48 minutes ago, itimpi said:

    The alternative is to go for the Plus license which increases the limit to 12 drives.   You may well want this anyway if you want to be able to freely attach USB drives for purposes such as backup.

    That is a good point.

     

    Also, do you know if a 120 GB SSD cache is a good idea when I am bottlenecked by gigabit LAN anyway?

     

    The only good reason to have it that comes to my mind is this: if I upload files there and edit them, I edit them on the cache disk, and then when I am finished editing them they flush to the HDD array and the blocks are written sequentially, instead of the data being interrupted by edits right after uploading.

     

    Also, for the moment I am going to be using it mainly for long-term storage, not as an editing server or anything like that.

  14. Hello,

     

    I built my NAS (an Unraid server, but mainly used as a NAS) a couple of days ago, but I still need some clarification on how things work.

     

    First of all, I have 6x 6 TB drives (2 for parity) and one 120 GB M.2 SATA SSD (25 €). From what I found, it seems that with the Basic license I can use just 6 drives, so I am considering unassigning the SSD (the license upgrade costs about as much as the drive, and I am going to be limited by 1 Gbit LAN anyway). The question is whether I can just remove it from the array (after running the mover), or do I need to do anything special? The drive seems to show 20 GB used all the time, and that is the part that bothers me a little. Also, can I use this "(soon to be) unassigned" drive for VMs or something?

     

    What is the best way to access my shares from outside of my private network? I was trying some SFTP dockers and forwarding a port on my router, with DuckDNS as the domain to access it, but unsuccessfully. Then I found an OpenVPN guide and it looked like the way to go, but I haven't tried it yet.

     

    Also, I have issues with the S3 Sleep plugin (I haven't found the support thread yet, so I am just going to ask here). So far I have left just the duckdns container running, no one was using the NAS, and I have scheduled the mover and SSD trim into the "awake hours". In the morning, when the server should have been asleep, it was still running. Also, what if I want to use the server while it's asleep? Is there any way to wake it up?
