Everything posted by vanes

  1. MC's full feature set doesn't work in the browser terminal; try an SSH client like PuTTY instead. To share my Backup folder on my pool (mounted at /mnt/zfspool) I added this to Settings > SMB > SMB Extras:

        [Backup]
        path = /mnt/zfspool/Backup
        comment =
        browseable = yes
        # Public
        writeable = yes
        read list =
        write list =
        valid users =
        vfs objects =
  2. @comet424 Go to the terminal, then type mc; you can see your pool there if it is mounted. To see the mountpoint, use the "zfs list" command:

        root@unRaid:~# zfs list
        NAME      USED  AVAIL  REFER  MOUNTPOINT
        zfspool   127G   322G   127G  /mnt/zfspool

     To add a share, go to Settings > SMB and add something like this to SMB Extras:

        [Backup]
        path = /mnt/zfspool/Backup
        comment =
        browseable = yes
        # Public
        writeable = yes
        read list =
        write list =
        valid users =
        vfs objects =

     This worked for me. I am new to ZFS; a few days ago I created my first USB mirror pool for backups. We'll see how it goes...
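     For reference, a filled-in variant of that stanza; the user name "backupuser" is just a hypothetical example, substitute an actual Unraid user that should be allowed to write to the share:

        [Backup]
        path = /mnt/zfspool/Backup
        comment = ZFS backup share
        browseable = yes
        writeable = yes
        # hypothetical user; restricts access and grants write permission
        valid users = backupuser
        write list = backupuser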
  3. The user script worked! Thanks!
  4. I need to limit the ZFS ARC cache size. I'm trying to do it as written in the first post, by adding the line to my go file, but it doesn't work =(
  5. Thank you so much for looking at this, man. I fixed it to 5 fields; I'll see how it works and will write back whether it helped or not. I'm almost sure that was the reason for this. P.S. Thanks, @itimpi, problem solved!
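     To make the "5 fields" bit concrete: a custom cron schedule is just the five standard cron fields, minute hour day-of-month month day-of-week. A hypothetical schedule that runs a script at 03:00 every Monday:

        0 3 * * 1

     An extra or missing field shifts the whole line when crond parses it, which is one way to end up with the "exit status 127" error from the post below.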
  6. Hi, I need some help. I am trying to limit the ARC to 2GB. This is my go file:

        #!/bin/bash
        #Zfs ARC size
        echo 2147483648 >> /sys/module/zfs/parameters/zfs_arc_max
        # Start the Management Utility
        /usr/local/sbin/emhttp &

     I edited the go file and rebooted, then checked cat /proc/spl/kstat/zfs/arcstats. It seems the size has not changed; it is still 4GB max:

        root@Tower:~# cat /proc/spl/kstat/zfs/arcstats
        13 1 0x01 96 26112 40241515841 4620213926208
        name                            type data
        hits                            4    702353
        misses                          4    4420
        demand_data_hits                4    94
        demand_data_misses              4    0
        demand_metadata_hits            4    700921
        demand_metadata_misses          4    3616
        prefetch_data_hits              4    0
        prefetch_data_misses            4    0
        prefetch_metadata_hits          4    1338
        prefetch_metadata_misses        4    804
        mru_hits                        4    30635
        mru_ghost_hits                  4    406
        mfu_hits                        4    670574
        mfu_ghost_hits                  4    571
        deleted                         4    258596
        mutex_miss                      4    0
        access_skip                     4    0
        evict_skip                      4    191
        evict_not_enough                4    50
        evict_l2_cached                 4    0
        evict_l2_eligible               4    34050764288
        evict_l2_ineligible             4    105383936
        evict_l2_skip                   4    0
        hash_elements                   4    32762
        hash_elements_max               4    33061
        hash_collisions                 4    7788
        hash_chains                     4    520
        hash_chain_max                  4    2
        p                               4    3998406656
        c                               4    4008861696
        c_min                           4    250553856
        c_max                           4    4008861696
        size                            4    3988219216
        compressed_size                 4    3836652032
        uncompressed_size               4    4142148608
        overhead_size                   4    139904512
        hdr_size                        4    10920384
        data_size                       4    3973056000
        metadata_size                   4    3500544
        dbuf_size                       4    410704
        dnode_size                      4    247104
        bonus_size                      4    84480
        anon_size                       4    51695616
        anon_evictable_data             4    0
        anon_evictable_metadata         4    0
        mru_size                        4    3924730368
        mru_evictable_data              4    3721777664
        mru_evictable_metadata          4    1073664
        mru_ghost_size                  4    121528832
        mru_ghost_evictable_data        4    93454336
        mru_ghost_evictable_metadata    4    28074496
        mfu_size                        4    130560
        mfu_evictable_data              4    0
        mfu_evictable_metadata          4    0
        mfu_ghost_size                  4    33736704
        mfu_ghost_evictable_data        4    3276800
        mfu_ghost_evictable_metadata    4    30459904
        l2_hits                         4    0
        l2_misses                       4    0
        l2_feeds                        4    0
        l2_rw_clash                     4    0
        l2_read_bytes                   4    0
        l2_write_bytes                  4    0
        l2_writes_sent                  4    0
        l2_writes_done                  4    0
        l2_writes_error                 4    0
        l2_writes_lock_retry            4    0
        l2_evict_lock_retry             4    0
        l2_evict_reading                4    0
        l2_evict_l1cached               4    0
        l2_free_on_write                4    0
        l2_abort_lowmem                 4    0
        l2_cksum_bad                    4    0
        l2_io_error                     4    0
        l2_size                         4    0
        l2_asize                        4    0
        l2_hdr_size                     4    0
        memory_throttle_count           4    0
        memory_direct_count             4    0
        memory_indirect_count           4    0
        memory_all_bytes                4    8017723392
        memory_free_bytes               4    3323105280
        memory_available_bytes          3    3197829120
        arc_no_grow                     4    0
        arc_tempreserve                 4    0
        arc_loaned_bytes                4    0
        arc_prune                       4    0
        arc_meta_used                   4    15163216
        arc_meta_limit                  4    3006646272
        arc_dnode_limit                 4    300664627
        arc_meta_max                    4    17378936
        arc_meta_min                    4    16777216
        sync_wait_for_async             4    0
        demand_hit_predictive_prefetch  4    0
        arc_need_free                   4    0
        arc_sys_free                    4    125276928

     Please help me to limit the ARC size...

     PS: When I run "echo 2147483648 >> /sys/module/zfs/parameters/zfs_arc_max" in the terminal, it works: the ARC max size shrinks before my eyes, along with the occupied RAM. The go file does not work for me. Can I make a script with "echo 2147483648 >> /sys/module/zfs/parameters/zfs_arc_max" and run it when the array starts?
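     A minimal sketch of what such an array-start script could look like, assuming the User Scripts plugin with its "At Startup of Array" schedule. The likely reason the go file fails is that it runs before the ZFS plugin has loaded the zfs module, so the parameter file does not exist yet; by array start it does:

        #!/bin/bash
        # Cap the ZFS ARC at 2 GiB (2147483648 bytes).
        # Only works once the zfs module is loaded, i.e. once
        # /sys/module/zfs/parameters/zfs_arc_max exists.
        echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max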
  7. Hi guys, I need some help. I am trying to run a custom cron for my scrub script, but I receive this:

        unRaid crond[1825]: exit status 127 from user root * /usr/local/emhttp/plugins/user.scripts/startCustom.php /boot/config/plugins/user.scripts/scripts/Scrubc/script > /dev/null 2>&1

     My script (\config\plugins\user.scripts\scripts\Scrubc\script) is:

        #!/bin/bash
        /usr/local/emhttp/plugins/dynamix/scripts/notify -e "start_scrub_cache" -s "Scrub cache drive" -d "Scrub of cache drive started" -i "normal" -m "Scrubbing message"
        btrfs scrub start -rdB /mnt/cache > /boot/logs/scrub_cache.log
        if [ $? -eq 0 ]
        then
            /usr/local/emhttp/plugins/dynamix/scripts/notify -e "start_scrub_cache" -s "Scrub cache drive" -d "Scrub of cache drive finished" -i "normal" -m /boot/logs/scrub_cache.log
        else
            /usr/local/emhttp/plugins/dynamix/scripts/notify -e "start_scrub_cache" -s "Scrub cache drive" -d "Error in scrub of cache drive !" -i "alert" -m /boot/logs/scrub_cache.log
        fi

     Run/Run in Background works fine. Custom cron doesn't.
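     For what it's worth, crond's "exit status 127" is the shell's "command not found" code. One way to narrow it down is to run the exact command from the log by hand (minus the redirection, so output stays visible) and check what it returns:

        /usr/local/emhttp/plugins/user.scripts/startCustom.php /boot/config/plugins/user.scripts/scripts/Scrubc/script
        echo $?

     If that works interactively, the problem is in the cron line itself rather than in the script; see the five-field note under post 5 above.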
  8. Very, very interesting! Tell us the secret of your success.
  9. It seems I made it work. I found https://rocket.chat/docs/installation/paas-deployments/aws/#5-configure-nginx-web-server-with-tlsssl , then edited the file like this:

        server {
            listen 443 ssl;
            server_name chat.my.domain.com;

            ssl_certificate /config/keys/letsencrypt/fullchain.pem;
            ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
            ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
            ssl_prefer_server_ciphers on;
            ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

            root /config/www;
            add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                proxy_pass http://192.168.0.10:3000/;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto http;
                proxy_set_header X-Nginx-Proxy true;
                proxy_redirect off;
            }
        }

        server {
            listen 80;
            server_name chat.my.domain.com;
            return 301 https://$host$request_uri;
        }

     Now the iOS app can connect! SSL Labs shows A+.
  10. Hi guys, I need some help with the Rocket.Chat and letsencrypt docker containers (linuxserver's). I have Nextcloud and Emby running through the letsencrypt reverse proxy and everything is working fine, but I can't reach my server from the Rocket.Chat iOS app, even though I can connect to Rocket.Chat in the browser... Please help me configure the nginx file in site-confs of the letsencrypt appdata folder to get it working with Rocket.Chat. I tried this:

        server {
            listen 80;
            server_name chat.vanes.mydomain.com;
            return 301 https://chat.vanes.mydomain.com$request_uri;
        }

        server {
            listen 443 ssl;
            server_name chat.vanes.mydomain.com;

            root /config/www;
            index index.html index.htm index.php;

            include /config/nginx/ssl.conf;

            client_max_body_size 0;
            add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

            location / {
                proxy_pass http://192.168.0.10:3000/;
                proxy_max_temp_file_size 2048m;
                include /config/nginx/proxy.conf;
            }
        }
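      The piece this attempt is missing, compared to the config that later worked (post 9 above), is the websocket upgrade: the Rocket.Chat mobile apps talk over websockets, which nginx only proxies when told to, while the browser client can presumably fall back to plain HTTP, which would explain why the browser worked:

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";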
  11. So I don't have to do anything, just wait for the update?
  12. Updated to RC6; everything works fine, VMs, dockers, but I see a strange warning! All shares are in their places, and Fix Common Problems says that everything is good. Do I have to worry? Do I need a parity check? Rebooted twice, the warning is still here. unraid-diagnostics-20180416-0001.zip
  13. @nautarch try clearing your browser's cache
  14. Thanks, @johnnie.black. What about the system share? Does it make sense to enable COW for my system share (docker img)? It is set to "No". Or does it not matter?
  15. Do I understand correctly that it should be set to "No" for vdisk and docker img shares located on BTRFS-formatted HDDs? Should I recreate the shares if I use an SSD cache pool? Will it be better? To switch the shares to "No" I should copy the img files somewhere else, recreate the shares with the "No" setting, and then copy the img and appdata files back to these shares... is this the right way? Or do I need to recreate my VMs and docker img after recreating the shares? I'll do as @johnnie.black says. =))
  16. Hi, please can someone explain the share setting "Enable Copy-on-write"? For my shares located on the SSD pool I found it set like this: the system share (docker img) is set to "No", the appdata share is set to "Auto", and the domains share (vDisks) is set to "Auto". Is this correct, or should I recreate the shares and change them all to "No"? Would I have to recreate my VMs or docker img? Does this affect performance and stability? Thanks!
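      A way to check what a share actually ended up with, rather than guessing from the GUI: on btrfs the no-copy-on-write attribute shows up as a "C" flag in lsattr. The path below is just an example, adjust it to your own pool and share:

        lsattr -d /mnt/cache/system

      A "C" in the flags string means copy-on-write is disabled for new files created in that directory; no "C" means the share is using normal COW behavior.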
  17. Replaced a slow WD Green with two Samsung 850 120GB in RAID 1. Some users report low write performance on these disks, freezes, etc. In my case I do not see any problems, only good performance. I am running a Win10 VM on a 50GB vdisk, and this is a copy from the array to the vdisk, which is on the RAID 1 cache pool. The WD Green's speed was only 50 MB/sec.
  18. I can confirm this issue; I have seen this too...
  19. unraid-diagnostics-20180321-1906.zip
  20. Mine works fine in Safari, Chrome, and Edge.
  21. Sorry guys, I can't test this anymore! I changed my UPS to an APC Back-UPS 700 and everything is perfect now! Maybe somebody else can test it...
  22. I use the Time on Battery method. I do not know how to install this file =)) I'm new to Linux =)) If it's not too difficult, could you write out the steps one by one? Thanks.