
About Can0nfan

  1. Just remove the copy and execute commands from your go file, then run /etc/rc.d/rc.pulseway stop. To remove the config and files, run these from the command line:
     rm -rf /etc/pulseway
     rm -rf /var/pulseway
     rm -rf /boot/pulseway
     rm /boot/extra/pulseway.tgz
     then reboot the server.
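The steps above, collected into one script. ROOT points at a scratch directory here so the removals can be rehearsed safely; for a real run, set ROOT empty and uncomment the stop line (the rc script name and package path are taken from the post and assumed to match your install):

```shell
#!/bin/sh
# Pulseway cleanup rehearsal: a scratch tree stands in for the real
# filesystem so the removals can be verified before running for real.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/pulseway" "$ROOT/var/pulseway" "$ROOT/boot/pulseway" "$ROOT/boot/extra"
touch "$ROOT/boot/extra/pulseway.tgz"    # stand-in for the real package

# /etc/rc.d/rc.pulseway stop             # real run: stop the daemon first
rm -rf "$ROOT/etc/pulseway" "$ROOT/var/pulseway" "$ROOT/boot/pulseway"
rm -f "$ROOT/boot/extra/pulseway.tgz"
# ...then reboot the server so nothing re-extracts the package
```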
  2. Thank you for the 6.8 update! I was following a video tutorial here on YouTube, and I'm running 6.8.1. After everything was set up, Pulseway saw the server, but when I rebooted to test, it never saw the server come back up. The daemon wasn't running, I couldn't start it, and I was getting an error. I then found that config.xml was gone, so I redid the setup and also made some manual backups to the array for safekeeping.
  3. After doing some research online, I found that some drives have it disabled in firmware, and you need to contact the drive maker for updated firmware to enable it. I'll eventually replace that old HGST, since I bought it used anyway, but I've had no issues with write speeds with it disabled.
  4. Me too. I'm leaving the third one (my main one) on 6.8 for now in case of issues.
  5. There is not a single cheap drive out there I can back all my critical data up with. I currently use three unRAID servers for various tasks, and the sheer amount of data I want to back up is over 100TB. The absolutely-must-not-lose portion is approximately half of that (business data for a home-based business); the other half is stuff I could easily download again if I needed to.
     Also, not all the data would fry, since my main server is also connected to an external disk shelf, where I plan to grow my array pool to about 24 drives and then start stacking my cache pool in a decent RAID mode that is supported by btrfs and unRAID. I have a total of 36 bays available for that one server (12 in the server and 24 in the disk shelf). The servers use server-grade cases and PSUs, not some cheap off-the-shelf PSU, and power is managed by the server's backplane, so I would probably lose the backplane before the drives.
     I am totally aware I need an off-site solution, and I'm very, very slowly getting all my data onto G Suite, but with the 7MB/sec speed cap (to stay under the 750GB/day upload limit) it's going to take a while to upload the 76TB on one server, 18 on another, and 7 or so on the last.
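As a quick sanity check, the rough arithmetic behind that upload estimate, using the numbers from the post above:

```shell
# Back-of-the-envelope for the G Suite upload estimate:
# 7 MB/s sustained stays under the 750 GB/day cap.
per_day_gb=$(( 7 * 86400 / 1000 ))      # MB/day -> GB/day, about 604
total_gb=$(( (76 + 18 + 7) * 1000 ))    # ~101 TB across the three servers
days=$(( total_gb / per_day_gb ))
echo "~$per_day_gb GB/day, roughly $days days to upload everything"
```

So at that cap the full upload runs to several months of continuous transfer.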
  6. Hi @limetech, hi Tom. I am really looking forward to the multiple pools feature in 6.10. I recently added a NetApp DS4243 (upgrading the IOM cards to SAS 6Gbps to kind of make it a DS4246). My main unRAID server already has 14 drives in it (12 hot-swap parity and array disks, as well as two SSDs for cache sitting behind the fan wall), so I will never be able to fill my NetApp for the array, since that would be a total of 36 drives: 34 for the array and 2 for parity. I am really hoping multiple pools will not be restricted to 30 drives (2 parity and 28 array) total between all pools. Are you able or willing to confirm? I am hoping we will be able to have up to 30 disks and 24 cache drives for each pool.
     I know this is a big ask, but what about more than 2 parity disks? Big arrays, 30 drives and up, will surely hit scenarios where more than 2 disks fail at one time, causing loss of data, and unless you have very deep pockets (I'm a new dad, so that's not me, lol) you would have to create secondary servers or rely on other methods to back these large arrays up against that very rare 3-disk-failure scenario. I don't think it's unreasonable to allow 3 to 5 parity disks as a new maximum, subtracting from the array-disk maximum when used so the combined limit stays at 30 (e.g. 5 parity + 25 array, or 3 parity + 27 array). I'm sure you get the picture; I'm just wondering if it's feasible or too technically challenging to implement. I guess if multiple pools allows several pools of 30 disks / 24 cache each, then it's moot, since we could have dual parity for every 28 drives per pool. Thanks for reading my ramblings; hope to hear from you.
  7. This isn't a percentage per se, but you can already get an actual readout on the Main tab, like this. To get it, go to Settings > Display Settings, scroll to the bottom, and check the Used / Free column settings to find your preference. My image is using Bar (Color).
  8. I did, like 3 years ago, lol. My post was about being able to SSH without using a password; some things need the CLI when the GUI isn't available.
  9. I have SSH keys enabled for my PuTTY, Linux, and Mac terminals; I found the method on a Linux forum. On the host:
     ssh-keygen
     ssh-copy-id root@<unRAID hostname or IP>
     On the server (you may need to run these as root):
     chmod 700 ~/.ssh
     chmod 600 ~/.ssh/authorized_keys
     Then add this to the go file in Tools > Config File Editor (it makes the key and .ssh directory persistent across reboots, since the live unRAID system lives in RAM):
     #SSH keys: copy and enable
     mkdir /root/.ssh
     chmod 700 /root/.ssh
     cp /boot/config/ssh/authorized_keys /root/.ssh/
     chmod 600 /root/.ssh/authorized_keys
     As for your questions about fail2ban and Let's Encrypt, I can't answer those. I have my own reverse proxy running in a VM on another server and don't expose my servers' host IPs to the internet (Dockers and VMs have access, since they cannot reach the host directly).
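A minimal rehearsal of the go-file persistence trick above, run against a scratch directory so it can be tried anywhere. The real paths are /boot/config/ssh and /root/.ssh; $DEMO stands in for the root filesystem here, and the key contents are a placeholder:

```shell
# Rehearse the go-file snippet in a throwaway directory tree.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/boot/config/ssh" "$DEMO/root/.ssh"
# the key you saved to the flash drive (placeholder contents)
echo "ssh-ed25519 AAAA...example user@host" > "$DEMO/boot/config/ssh/authorized_keys"
chmod 700 "$DEMO/root/.ssh"
cp "$DEMO/boot/config/ssh/authorized_keys" "$DEMO/root/.ssh/"
chmod 600 "$DEMO/root/.ssh/authorized_keys"
ls -l "$DEMO/root/.ssh/authorized_keys"
```

The 700/600 permissions matter: sshd refuses keys in an authorized_keys file that is group- or world-readable.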
  10. You can use the SSH plugin called "SSH Config Tool" on the Apps tab to enable multi-user SSH access. It works well if you don't want to use root over SSH. More info here.
  11. Network Stats is a plugin. I suspect you can shut down the server, move the USB stick to a computer, go to /config/plugins, remove the Network Stats .plg file and its folder, then move the USB back to the server and try to boot.
  12. I noticed, using br0, that the WebUI link in the context menu stopped working on unRAID 6.7.4, and binhex doesn't want to track down and fix whatever changed since then. Back when I noticed it, I downgraded and the option was there again; I updated and it was gone again. Something changed in unRAID 6.7.4 that conflicts or errors out, something other Plex Dockers don't seem to have a problem with.
  13. Yes, the binhex container won't show the WebUI link since unRAID 6.7.4. I have personally spoken to binhex, here and on GitHub, and he says support for using a custom network is not built in. It still works, though: access the WebUI via IP:32400 and you will get in. I have found that linuxserver's Plex works fine when using br0, as does the official container, but I prefer the binhex/arch-plexpass container myself. To avoid needing an IP in my browser, I added an FQDN to my Pi-hole's lan.list file in /etc/pihole. I made mine like this:
      myplex.mydomain.ca myplex
      Now I can ping myplex and get a reply, and I can bring the GUI up by typing myplex.mydomain.ca:32400.
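For context, Pi-hole's lan.list uses hosts-file format, so the full line also carries the LAN IP the names should resolve to. A sketch of a complete entry (the address below is a placeholder, not from the post):

```
192.168.1.50  myplex.mydomain.ca  myplex
```

After editing, restart Pi-hole's DNS resolver so the new record is served.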
  14. Just saw this banner come up on two of my 3 unRAID servers. Happy New Year, and may 2020 bring many more great things to everyone.