Everything posted by Can0n

  1. Hello everyone. I retired the Green server in my signature and sold the hardware, but kept my Unraid USB. I'm looking to give this USB to my brother for his build and am wondering if there is an easy way to factory reset it without the hardware to boot and reset the configs. I found the obvious things like user shares (which I removed), plugins, machine name, SSL certs, etc. Should I just back up the key file, format the USB, reflash Unraid to it, and put the key file back? Something like the sketch below is what I had in mind. Ideas and suggestions are all welcome.
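     Roughly what I'm thinking, assuming the stick mounts at /mnt/usb on another Linux box and the key lives in the config folder (paths are just examples, adjust for your setup):

       # back up the licence key (and anything else worth keeping) off the stick
       mkdir -p ~/unraid-usb-backup
       cp /mnt/usb/config/*.key ~/unraid-usb-backup/

       # reflash the stick with the USB Creator or the manual zip method,
       # then put the key back before the first boot
       cp ~/unraid-usb-backup/*.key /mnt/usb/config/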
  2. The beauty is you don't even need to install it on your VMs. You can just set your config profile in unRAID to send you an alert if pinging any of those VMs fails. That's how I have my VMs monitored without paying extra to Pulseway, since I am currently paying for five systems but only need four now that I sold the server that was running my third instance of Unraid.
  3. I have no issues accessing my br0 dockers using the FQDNs I have assigned to them internally, from my iPhone over WireGuard with a Remote LAN connection. I'm only using port forwarding (I can't seem to get static routes to work on my UniFi setup), and no VLANs are in use for br0 either.
  4. Just remove the copy and execute commands from your go file, then run /etc/rc.d/rc.pulseway stop to stop the daemon. To remove the config and files, run these from the command line:

       rm -rf /etc/pulseway
       rm -rf /var/pulseway
       rm -rf /boot/pulseway
       rm /boot/extra/pulseway.tgzp

     then reboot the server.
  5. Thank you for the update for 6.8!!!! I was following a video tutorial from YouTube while running 6.8.1 and wondering why, after everything was set up and Pulseway saw the server, it never saw it come back up after I rebooted to test. The daemon wasn't running, I couldn't start it without getting an error, and I then found the config.xml was gone. So I redid the steps and also made some manual backups to the array for safekeeping, along the lines of the sketch below.
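     In case it helps anyone, this is roughly how I keep a copy on the array (the share path is just my example):

       # assumes a share called "backups" exists; adjust the path to taste
       mkdir -p /mnt/user/backups/pulseway
       cp /etc/pulseway/config.xml /mnt/user/backups/pulseway/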
  6. After doing some research online I found some drives have it disabled in firmware, and you need to contact the drive maker for updated firmware to have it enabled. I'll eventually replace that old HGST as I bought it used anyway, but I have no issues with write speeds with it disabled.
  7. Me too, leaving my third one (the main one) on 6.8 for now in case of issues.
  8. There is not a single cheap drive out there I can back all my critical data up with. I currently use three Unraid servers for various tasks; the sheer amount of data I want to back up is over 100TB, and the absolutely-must-not-lose portion is approximately half of that (business data for a home-based business). The other half is stuff I could easily download again if I needed to. Also, not all data would fry, since my main server is also connected to an external disk shelf where I plan to grow my array pool to about 24 drives and then start stacking my cache pool in a decent RAID mode that is supported by BTRFS and unRAID. I have a total of 36 bays available for that one server (12 in the server and 24 in the disk shelf). The servers are using server-grade cases and PSUs, not some cheap off-the-shelf PSU, and power is managed by the server's backplane, so I would probably lose the backplane before the drives. I am totally aware I need an off-site solution, and I am very, very slowly getting all my data to G Suite, but the 7MB/sec speed cap (to not go over the 750GB/day upload limit) means it is going to take a while to get the 76TB in one server, 18 in the other, and 7 or so in the last server all uploaded; rough math below.
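     Back-of-the-envelope numbers, assuming the upload actually runs 24/7 at that cap:

       echo "$((7 * 86400 / 1000)) GB/day"   # ~604 GB/day, safely under the 750 GB/day limit
       echo "$((76000 / 604)) days"          # ~125 days just for the 76TB server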
  9. Hi @limetech, hi Tom. I am really looking forward to the multiple pools feature in 6.10. I recently added a NetApp DS4243 (upgrading the IOM cards to SAS 6Gbps to kind of make it a DS4246). My main Unraid server already has 14 drives in it (12 hot-swap parity and array disks, as well as two SSDs for cache sitting behind the fan wall), so I will never be able to fill my NetApp for the array, since that would be a total of 36 drives, 34 for the array and 2 for parity. I am really hoping the multiple pools will not be restricted to 30 (2 parity and 28 array) total between all pools. Are you able or willing to confirm? I am hoping we will be able to have up to 30 disks and 24 cache drives for each pool.

     I know this is a big ask, but what about more than 2 parity options??? Big arrays of 30 drives and larger will surely have scenarios where more than 2 disks fail at one time, causing loss of data, and those of us without huge deep pockets (I'm a new dad so that's not me lol) would have to create secondary servers or rely on other methods to back these large arrays up in the very rare event of that 3-disk-failure scenario. I don't think it's unreasonable to allow 3 to 5 parity as a new max; take it out of the max array disks when used, keeping the 30 array/parity limit, like 5 parity / 25 array or 3 parity / 27 array. I'm sure you get the picture; just wondering if it's feasible or too technically challenging to implement? I guess in a way, if multiple pools allows for multiple pools of 30 disks / 24 cache etc., then it's moot, as we could have dual parity for every 28 drives per pool. Thanks for reading my ramblings, hope to hear from you.
  10. This isn't a percentage per se, but you can already get an actual readout on the Main tab. To get it, go to Settings, then Display Settings, scroll to the bottom, and check the Used / Free columns setting to find your preference; my image is using Bar (Color).
  11. I did, like 3 years ago lol. This post of mine was about being able to SSH without using a password; some things need the CLI when the GUI isn't available.
  12. I have SSH keys enabled for my PuTTY and my Linux and Mac terminals; I found the steps on a Linux forum, using ssh-keygen.

     On the host:

       ssh-keygen
       ssh-copy-id root@<unraid hostname or ip>

     On the server (may need to run these as root):

       chmod 700 ~/.ssh
       chmod 600 ~/.ssh/authorized_keys

     Then add this to the go file in Tools -> Config File Editor (it will make the key and .ssh folder persistent across reboots, since the live Unraid system lives in RAM):

       #SSH Keys Copy and enable
       mkdir /root/.ssh
       chmod 700 /root/.ssh
       cp /boot/config/ssh/authorized_keys /root/.ssh/
       chmod 600 /root/.ssh/authorized_keys

     As for your questions about fail2ban and Let's Encrypt, I can't answer those. I have my own reverse proxy running in a VM on another server and don't expose my server's host IP to the internet (dockers and VMs have access since they cannot access the host directly).
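     One thing I should add: the go-file snippet above copies authorized_keys from /boot/config/ssh, so the key has to land on the flash drive first. Something like this, run once after ssh-copy-id, is what I mean:

       mkdir -p /boot/config/ssh
       cp /root/.ssh/authorized_keys /boot/config/ssh/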
  13. You can use the SSH plugin called "SSH Config Tool" on the Apps tab to enable multi-user SSH access. It works well if you don't want to use root when using SSH; more info here.
  14. Network Stats is a plugin, I suspect. You can shut down the server, move the USB to a computer, go to /config/plugins, remove the netstats .plg file and its folder, then move the USB back to the server and try to boot; roughly like the sketch below.
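     From the computer where the stick is mounted (the mount point and exact .plg name are just examples, list the folder first to confirm):

       cd /path/to/usb/config/plugins   # wherever the stick mounted
       ls *.plg                         # confirm the exact plugin file name
       rm network.stats.plg             # name may differ on your stick
       rm -r network.stats              # the plugin's folder, if present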
  15. I noticed that when using br0 the WebUI link in the contextual menu stopped working on unRAID 6.7.4, and binhex doesn't want to track down and fix whatever changed since then. Back when I noticed it I downgraded and the option was there again; I updated and it was gone again. Something changed in unRAID 6.7.4 that conflicts or errors out, something other Plex dockers don't seem to have a problem with.
  16. Yes, the binhex container won't show the WebUI link since unRAID 6.7.4. I have personally spoken to binhex here and on the GitHub link, and he says support for using a custom network is not built in... it still works, though: access the WebUI via IP:32400 and you will get in. I have found linuxserver's Plex works fine when using br0, as does the official container, but I prefer the binhex/arch-plexpass container myself. To fix the need for an IP in my browser, I added an FQDN to my Pi-hole's lan.list file in /etc/pihole. I made mine like this:

       10.0.1.124 myplex.mydomain.ca myplex

     Now I can ping myplex and get a reply, and bring the GUI up by typing myplex.mydomain.ca:32400.
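     If the new name doesn't resolve right away, reloading Pi-hole's DNS should pick up the lan.list change (that's at least how I understand it):

       pihole restartdns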
  17. Just saw this banner come up on two of my 3 unRAID servers. Happy New Year, and may 2020 bring many more great things to everyone.
  18. Hi all, I just started seeing this in FCP for my HGST HUH728080ALE600 about two months ago on one of my drives. There are no errors in SMART at all, and my other WD Reds, HGSTs, and Seagates are fine (all 8TB drives). The closest to the same model I have working is an HGST HUH728080ALE604 (note the 4 instead of 0 on the end).

       hdparm -W 1 /dev/sdg
       /dev/sdg:
        setting drive write-caching to 1 (on)
        write-caching =  0 (off)

     The weird thing is DiskSpeed actually shows this disk performing just as well as the other drives in the array in terms of read and write speeds.
  19. It does seem very laggy over the unRAID VNC client as well, so I'll try a new template and see how that works. Thank you for the suggestion.
  20. Nope, same system as pre-6.7... dual Xeon E5-2680 v1's. The VMs are on a cache array of dual 1TB SSDs, and I have plenty of RAM for unRAID and the VMs: 94GB of RAM, with 16GB allocated to Windows, 16GB to one Fedora 30 server, and 1GB to another Fedora server. It does seem most likely to be an issue introduced in the unRAID 6.7 RCs plus the Microsoft beta RDP client for my 27" 5K iMac; while still sluggish via a Windows 10 laptop I have, it's not nearly as bad as when I use the RDP client on my iMac.
  21. My three are, however my Windows 10 is awfully sluggish and slow... still trying to see if it's the Mac RDP client or the VM itself.
  22. Hi @limetech, great work team. I didn't see anything about the missing VMs issue from RC6; I'm still running RC5 because of that. Any word if this is resolved in RC7?? Edit: if I am right in my thinking, this is the fix? "Revert libvirt from 5.9.0 to 5.8.0 because 5.9.0 has bug where 'libvirt_list_domains()' returns an empty string when all domains are started instead of a list of domains."
  23. I'm getting the same issue on one of my HGST_HUH728080ALE 8TB SATA drives; it JUST showed up. I have 3 others of the same model without issue, it just won't enable. I have another similar model with no issue: HGST_HUH728080ALE604 is fine, HGST_HUH728080ALE600 is the one that won't enable with the hdparm or sdparm commands.

       root@Thor:~# hdparm -W 1 /dev/sdm
       /dev/sdm:
        setting drive write-caching to 1 (on)
        write-caching =  0 (off)

     Also tried:

       root@Thor:~# sdparm --set=WCE /dev/sdm
           /dev/sdm: ATA       HGST HUH728080AL  T7JF
       root@Thor:~# sdparm -g WCE /dev/sdm
           /dev/sdm: ATA       HGST HUH728080AL  T7JF
       WCE           0  [cha: y]

     Diagnostics attached: thor-diagnostics-20191114-1147.zip
  24. 6.8 RC4 and on corrected the database corruption that some got (I never did see it over the last few years). Try my method; since my last post I have removed all the Manjaro VMs.