landS


Posts posted by landS

  1. My rig is located on a high shelf in our bedroom closet, which has limited depth.  Swapping out internal disks requires getting it down with a ladder, with little room to maneuver, and this is too brutal on my permanently injured back and knee.  

    ….

    I am swapping out my 40# Fractal Design R4 Silent (18.27" x 9.13" x 20.59") for a unit with front hot-swap bays.   I'm also going from 8 4TB drives to 2 20TB drives.  This rig has 2 GPUs in it for 2 separate VMs, 1 full-size ODD, and an ATX mobo (X10SRA-F).

    ….

    While not 5.25 bays top to bottom, this may help someone out:

    18# InWin IW-PE689 (18.1" x 7.9" x 16.9"). 

    … Has 4 External 5.25 bays and 1 External 3.5 bay.

    Icy Dock MB155SP-B 5x3.5 (3x5.25” bay) with a Noctua NF-B9

    Icy Dock MB741SP-B 1x2.5 (1x3.5” bay)

    SilverStone Technology EPDM Sound Dampening Foam

  2. Thanks itimpi!

     

    I rather like the option for disk fault tolerance, and also think the backup is a good idea.  I'll stick to 1 parity and 1 pre-cleared spare. 

     

    I have a second unraid server with no exported network shares that I turn on quarterly and run 2 backup scripts:

    • For write once, never change: rsync -r -v --progress --ignore-existing -s /mnt/disks/TOWER_WriteOnce/WriteOnce/ /mnt/user/Media/WriteOnce
    • For write many: rsync -av --progress --delete-before /mnt/disks/TOWER_WriteMany/WriteMany /mnt/user/WriteMany
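    To illustrate the difference between the two modes, here is a throwaway demo (directories and filenames are illustrative only, not the real TOWER_* mounts):

    ```shell
    # Demo: --ignore-existing vs --delete-before, on temp directories.
    src=$(mktemp -d); dst=$(mktemp -d)

    echo v1 > "$src/file.txt"
    rsync -r --ignore-existing "$src/" "$dst/"   # write-once: initial copy lands

    echo v2 > "$src/file.txt"
    rsync -r --ignore-existing "$src/" "$dst/"   # write-once: existing file is left alone
    cat "$dst/file.txt"                          # -> v1

    rm "$src/file.txt"; echo v1 > "$src/other.txt"
    rsync -a --delete-before "$src/" "$dst/"     # write-many: mirror; stale files removed first
    ls "$dst"                                    # -> other.txt only

    rm -rf "$src" "$dst"
    ```

    Note the trailing slash on the sources: `src/` copies the directory's contents, while `src` would create an extra `src` level at the destination.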

     

    In addition, I run the Crashplan Docker on folders containing important data.

  3. Strange question for you goodly folks. 

    Is there any benefit to using 2 Parity Disks with 1 Data Disk?

     

    My array comprises 5 data disks and 2 parity drives - all 4 TB, with an average power-on time of around 7 years.

    I am replacing these with 3 20 TB disks:  2 Seagate IronWolf Pro CMR & 1 WD Red Pro CMR

     

    If no benefit to using 2 Parity Disks with 1 Data Disk, I will keep 1 of the disks as a pre-cleared hot-spare. 

     

    Thanks!

     

  4. Alas, this appears to be a troublesome beast.

     

    I updated from v6.12.1 to 6.12.2.

    In the Crashplan Docker's WebUi, the left-hand popout shows:

         Crashplan v11.1.1 & Docker Image v23.06.2

     

    Image 1 is what the Crashplan WebUi looks like when I start the docker.

    Image 2 is what the Crashplan WebUi looks like after I enter the password and click Continue.  This is where the message pops up.  If I close the WebUi and reopen it, the problem persists.

    Image 3 is what Crashplan’s Website looks like.  The only item of note here is that the computer shows as online.   

    Attachments: 1.JPG, 2.JPG, 3.JPG

  5. Thanks for the quick turn-around Djoss.

     

    Alas no, the Docker's WebGui still shows the "upgrading to new version" message after entering login credentials.

     

    Edit after 3 hours:

    Docker's WebGui still throws the "upgrading to new version" issue

    Docker's WebGui shows version 11.1.1.2

    CP's Website Portal indicates Computer is Online & Backup is 100%

    CP's Website Portal shows version 11.1.1.2

  6. Howdy Folks! I'm in need of some help.

    Part 1 ---------------- Typical behavior & problem

    Typically when storms are coming through I go into the Web Gui, stop the docker, stop the virtual machines, stop the array, and only then power down.  The Web Gui always comes back up after the server is powered back on.

    Last night a storm came in while I was away and the server shutdown via UPS due to an extended power outage.

    After reboot I can access dockers, VMs, network shares – but not the Web Gui

     

    Part 2 ---------------- Troubleshooting

    Via an IPMI KVM redirect I can access the terminal.

    Running the following immediately allows access to the Web Gui:

    … /etc/rc.d/rc.docker stop

    … /etc/rc.d/rc.php-fpm restart

    From this point:

    Diagnostics obtained (tower-diagnostics-20230507-0856)

    … Stopping the VM worked fine

    … Stopping the Array did not – stuck with /mnt/cache: target is busy – retry unmounting disk share(s) – “Log Snippet” at bottom of post

    … After about 15 minutes of this I then pressed power off in web gui
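    In case it helps anyone else stuck at "target is busy": my understanding is the unmount fails because some process still has a file open under the mount.  A rough sketch of hunting down the holder by scanning /proc (shown against a temp dir standing in for /mnt/cache; on the server, `fuser -vm /mnt/cache` or `lsof +f -- /mnt/cache` do the same job):

    ```shell
    # Sketch: list PIDs that hold files open under a given path, via /proc.
    find_holders() {
      target=$1
      for fd in /proc/[0-9]*/fd/*; do
        case "$(readlink "$fd" 2>/dev/null)" in
          "$target"*) p=${fd#/proc/}; echo "${p%%/*}" ;;
        esac
      done | sort -u
    }

    dir=$(mktemp -d)
    sleep 3 > "$dir/held.log" &   # background process keeping a file open
    find_holders "$dir/"          # prints the holder's PID
    wait
    rm -rf "$dir"
    ```

    Once the holder is known it can be stopped cleanly instead of waiting out the retry loop.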

    Part 3 ---------------- Power back on after step 2

    The Web Gui is accessible only from a Windows OS after boot, and an automatic parity check was initiated, which is atypical (tower-diagnostics-20230507-0916)

         I only use http://###.###.#.###/Main to access the GUI – Chrome or Firefox browsers

         I can now fully access it via a Windows PC (had to dust this off)

         I can no longer access it via Android or Linux device on same network (which is all we really use here)

         I typically only access via Android

    Part 4 ---------------- Questions

    1 – What can I do so that this behavior doesn’t happen again?

    2 – How can I make the Web Gui Accessible via a non-Windows device again?

    Thanks folks!

    Log snippet from part 3 above:

    May  7 08:58:33 Tower  emhttpd: Stopping services...

    May  7 08:58:33 Tower  emhttpd: shcmd (116): /etc/rc.d/rc.libvirt stop

    May  7 08:58:33 Tower root: Stopping libvirtd...

    May  7 08:58:33 Tower dnsmasq[7063]: exiting on receipt of SIGTERM

    May  7 08:58:33 Tower  avahi-daemon[5931]: Interface virbr0.IPv4 no longer relevant for mDNS.

    May  7 08:58:33 Tower  avahi-daemon[5931]: Leaving mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.

    May  7 08:58:33 Tower  avahi-daemon[5931]: Withdrawing address record for 192.168.122.1 on virbr0.

    May  7 08:58:33 Tower root: Network a4007147-6d28-4b27-8a73-0b1a1672c02b destroyed

    May  7 08:58:33 Tower root:

    May  7 08:58:37 Tower root: Stopping virtlogd...

    May  7 08:58:38 Tower root: Stopping virtlockd...

    May  7 08:58:39 Tower  emhttpd: shcmd (117): umount /etc/libvirt

    May  7 08:58:39 Tower cache_dirs: Stopping cache_dirs process 5345

    May  7 08:58:40 Tower cache_dirs: cache_dirs service rc.cachedirs: Stopped

    May  7 08:58:40 Tower Recycle Bin: Stopping Recycle Bin

    May  7 08:58:40 Tower  emhttpd: Stopping Recycle Bin...

    May  7 08:58:40 Tower  emhttpd: shcmd (119): /etc/rc.d/rc.samba stop

    May  7 08:58:40 Tower  wsdd2[7683]: 'Terminated' signal received.

    May  7 08:58:40 Tower  wsdd2[7683]: terminating.

    May  7 08:58:40 Tower  emhttpd: shcmd (120): rm -f /etc/avahi/services/smb.service

    May  7 08:58:40 Tower  avahi-daemon[5931]: Files changed, reloading.

    May  7 08:58:40 Tower  avahi-daemon[5931]: Service group file /services/smb.service vanished, removing services.

    May  7 08:58:40 Tower  emhttpd: shcmd (122): /etc/rc.d/rc.nfsd stop

    May  7 08:58:40 Tower  rpc.mountd[4443]: Caught signal 15, un-registering and exiting.

    May  7 08:58:41 Tower  emhttpd: Stopping mover...

    May  7 08:58:41 Tower  emhttpd: shcmd (123): /usr/local/sbin/mover stop

    May  7 08:58:41 Tower kernel: nfsd: last server has exited, flushing export cache

    May  7 08:58:41 Tower root: mover: not running

    May  7 08:58:41 Tower  emhttpd: Sync filesystems...

    May  7 08:58:41 Tower  emhttpd: shcmd (124): sync

    May  7 08:58:41 Tower  emhttpd: shcmd (125): umount /mnt/user0

    May  7 08:58:41 Tower  emhttpd: shcmd (126): rmdir /mnt/user0

    May  7 08:58:41 Tower  emhttpd: shcmd (127): umount /mnt/user

    May  7 08:58:43 Tower  emhttpd: shcmd (128): rmdir /mnt/user

    May  7 08:58:43 Tower  emhttpd: shcmd (130): /usr/local/sbin/update_cron

    May  7 08:58:43 Tower  emhttpd: Unmounting disks...

    May  7 08:58:43 Tower  emhttpd: shcmd (131): umount /mnt/disk1

    May  7 08:58:45 Tower  emhttpd: shcmd (141): umount /mnt/cache

    May  7 08:58:45 Tower root: umount: /mnt/cache: target is busy.

    May  7 08:58:45 Tower  emhttpd: shcmd (141): exit status: 32

    May  7 08:58:45 Tower  emhttpd: Retry unmounting disk share(s)...

     

     

     

    Attachments: tower-diagnostics-20230507-0916.zip, tower-diagnostics-20230507-0856.zip

  7. Good point JonathanM.  Our AC runs 9 months a year... with a setting of 78°F.

     

    I believe it highly likely I'll need to replace 2 of the drives in the next 2 years:  $140.

    Energy savings over the next 2 years:  $210 (lowered HDD power + 9 months of AC)

    Likely near-term cost savings:  $350.

    That makes the decision not a $700 one, but a $350 one.

     

    Mhh... is $350 worth:

    Reset of the bathtub curve / peace of mind

    Reduced Noise

     

    It very well might be.

  8. Thanks CharNoir. 

    I mainly wanted a sanity check on the electrical cost savings --- another set of eyes to see if what I’ve stated appears correct.

     

    Saving $60/year at a cost of $700 certainly doesn’t make economical sense.

    Paying $700 now when I could replace an occasional drive for $70 also doesn’t make economical sense.  

     

    However, many of the drives in my main machine are reaching the 10+ year powered-on mark, and the back side of the bathtub curve appears to be coming into play. 

     

    As such I need to weigh:

    Cost Savings

    Reset of the bathtub curve / peace of mind

    Reduced Noise (this is in my master bedroom's closet)

    Reduced Heat

  9. Howdy folks,

     

    My server has 7 OLD 4TB 7200 drives (2 parity/5 data) for 20TB of storage.

    These do not spin down because I use Crashplan (which backs up about 3 TB of the data).

     

    I estimate that these consume about 8 watts each.

     

    I believe I can reduce the array to 2 disks by moving to 2 20TB disks (1 parity / 1 data), which would save 40 watts….

    I believe this works out to be 350 kWh over the course of a year. 

     

    My utility company charges $0.173/kWh (and climbing)… so by moving from 7 4TB drives to 2 20TB drives I’d save a whopping $60/year.
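    A quick scripted check of my own arithmetic (assuming the 8-watt-per-drive estimate holds and 5 drives are eliminated):

    ```shell
    # Sanity check: 5 fewer drives x 8 W each, spinning 24/7, at $0.173/kWh.
    awk 'BEGIN {
      watts = 5 * 8                     # 40 W saved
      kwh   = watts * 24 * 365 / 1000   # kWh per year
      printf "%.1f kWh/year -> $%.2f/year\n", kwh, kwh * 0.173
    }'
    # prints: 350.4 kWh/year -> $60.62/year
    ```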

     

    Does this math look about right to you folks --- or am I missing something glaring here?

     

     

    A CMR SATA Seagate IronWolf Pro or Exos X20 looks to go for $350 new - and $700 is a heck of a lot more than the occasional replacement cost of $70. 

    Beyond saving on electricity - the server is in my bedroom closet & reducing noise would be nice.

    It would also be nice to reset the long-in-the-tooth drives. 

     

    Thanks!

  10. Howdy folks,

     

    Anytime my Windows work machine is turned on - on the same network - I'm getting CONSTANT logging showing:

    Oct 15 14:59:27 Tower nginx: 2022/10/15 14:59:27 [error] 4424#4424: *2478717 limiting requests, excess: 20.122 by zone "authlimit", client: (Redacted Work PC IP number), server: , request: "PROPFIND /login HTTP/1.1", host: "tower"

     

    Any ideas on how to remediate this?

     

    Thanks!

  11. I’d love to update from 6.10.3, however I also rely upon nerdpack to install the following:
    screen-4.8.0-x86_64-4.txz
    ncurses-terminfo-6.1.20191130-x86_64-1.txz
    tmux-3.2-x86_64-1.txz
    mcelog-161-x86_64-1.txz
    perl-5.32.0-x86_64-1.txz
    kbd-1.15.3-x86_64-2.txz

     

    I’ve downloaded from https://slackware.pkgs.org/15.0/slackware-x86_64/ the following:
    screen-4.9.0-x86_64-1.txz
    ncurses-6.3-x86_64-1.txz
    tmux-3.2a-x86_64-1.txz
    mcelog-180-x86_64-1.txz
    perl-5.34.0-x86_64-1.txz

    And I've copied the top 4 txz files to flash's /boot/extra

    I have not yet downloaded kbd-1.15.3-x86_64-2.txz, as the latest version is only available for slack 14.2.

     

    Question 1a - Do I first Uninstall the 6 tools using NerdPack, and then
    Question 1b - ...Uninstall NerdPack, and then
    Question 1c - ...Reboot the server?

     

    Question 2 - Are all txz files in /extra automatically installed at time of boot?

     

    Question 3 - Do I need to determine the version of slackware Unraid is using & manually download the matching set from https://slackware.pkgs.org/**VersionNumber**/slackware-x86_64/ every time I update unraid going forward?

     

    Question 4 - What happens if I boot into a newer version of Unraid with an older txz in /extra?

     

    Question 5 - How do I confirm what tools are already included in Unraid's OS?  Looking at the forums I *think* perl is already included - hence the reason I've not yet copied it into /boot/extra.
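    My working assumption for checking this myself (not yet tested on my box): Slackware records every installed package as a file under /var/log/packages, so a grep from the Unraid terminal should reveal what ships in the base OS:

    ```shell
    # Sketch: check whether a tool is part of the Slackware base Unraid ships.
    # Assumes the standard /var/log/packages records exist; the PKGDIR override
    # is only there to make the function easy to try elsewhere.
    pkg_installed() {
      ls "${PKGDIR:-/var/log/packages}" 2>/dev/null | grep -qi "^$1-"
    }

    for tool in perl screen tmux mcelog kbd; do
      if pkg_installed "$tool"; then
        echo "$tool: already in the base OS"
      else
        echo "$tool: not found -- candidate for /boot/extra"
      fi
    done
    ```

    For anything that installs a binary on PATH, `which perl && perl --version` is a quicker spot check.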

     

    Thanks folks!

  12. JonathanM – I loved the Joke (and the history lesson).  


    Squid – 1st, great books.  2nd, I was going to make a poor taste cat joke back to JonathanM, however this pretty much takes care of it.   3rd, pretty certain I need to get a hold of those dvds for a watch through. 


    Nomisco – that’s really cool, hadn’t come across one of those before. 
     

  13. 6 hours ago, PicPoc said:

    Does this MB really works on Passthrough ?

    Thanks ;)

    Back in 2015 it worked with XEN pass-through - - - this is when limetech was just beginning to allow VMs.   I never tested it with KVM.    

    Tower2 has always been my 1x/month backup of Tower1.  I update it before Tower1 and try new concepts on it first to minimize disruptions to Tower1. 

    Alas, I updated to faster/cooler cpu/mobo/ram a couple years ago and no longer have this to test. 

  14. I have an 860 Pro 1 TB.   2 always-on Windows 10 VMs with numerous passed-through devices.  1 workstation and 1 Steam stream for all gaming.  I also have 1 always-on docker.   In addition to this I have 2 VMs and 2 dockers that are turned on as needed. 

    I have zero issue with stability or speed. 

    This is all on a single cache drive. 

    Separately, I store all the games on a user share in the array.  This share is mounted within the VM.  Unraid lets VMs access shares within the server, not across the network.  Games load at the speed of the mechanical hard drive.