
ezhik

Members
  • Posts: 466
  • Joined
  • Days Won: 1

Everything posted by ezhik

  1. Hey guys, I've purchased a bunch of HP H240 HBAs (and set the mode to HBA with "balanced" power settings), and I was wondering: what would it take to migrate the array from one HBA (say, a SAS2108) to the HP H240? I did a quick test and the SATA drives were detected OK and assigned to their respective positions; however, the two cache drives that I have (Patriot 240GB SSDs) appeared as unassigned. When I attempted to re-assign them, the page refreshed and the drives stayed unassigned... Has anybody performed a similar operation before when migrating between HBAs? Thank you, - ezhik.
  2. I got a few LSI 9210-8i and 9211-8i if you are interested. Located in Canada.
  3. Hello, I have a few HP H240 HBAs en-route. Does unRAID 6.2 support them? If not, would the generous development team add support for them? Cheers.
  4. If you are using nginx, there is logging enabled by default. You might want to disable that - the logs grow pretty quickly.
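A minimal sketch of turning that logging off, assuming a stock nginx layout (these directives go in the `http` or `server` block of your config):

```
# Disable access logging entirely; error logging can't be switched off
# completely, but sending it to /dev/null at 'crit' keeps it quiet.
access_log off;
error_log /dev/null crit;
```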
  5. That's very odd. I have two similar setups, and one shuts down before the other - and the second server actually does more work... Can we get some debug logging on what is preventing it from shutting down?
  6. There is an advanced powerdown plugin that can handle many situations such as you are experiencing. You can find it here: https://lime-technology.com/forum/index.php?topic=31735.0 Just be sure you read the entire first post in the thread so you get the right plugin for version 6.

     Why should we even need a plugin for this? The server shouldn't take more than 30 seconds to shut down...
  7. It eventually shuts down, it just takes a *really* long time (~5-8 minutes) and my UPS won't hold a charge for that long. The array is 10 drives in total: 2 parity, 6 storage, and 2 SSDs for cache.
  8. Alright, so stopping the array is spamming the message above into syslog... over and over and over again...
  9. ACPI shutdown is taking longer than anticipated... tailing /var/log/syslog shows:

---
Aug 26 15:25:45 unraid2 emhttp: Unmounting disks...
Aug 26 15:25:45 unraid2 emhttp: shcmd (2594): umount /mnt/disk1 |& logger
Aug 26 15:25:45 unraid2 root: umount: /mnt/disk1: mountpoint not found
Aug 26 15:25:45 unraid2 emhttp: shcmd (2595): rmdir /mnt/disk1 |& logger
Aug 26 15:25:45 unraid2 root: rmdir: failed to remove '/mnt/disk1': No such file or directory
Aug 26 15:25:45 unraid2 emhttp: shcmd (2596): umount /mnt/disk2 |& logger
Aug 26 15:25:45 unraid2 root: umount: /mnt/disk2: mountpoint not found
Aug 26 15:25:45 unraid2 emhttp: shcmd (2597): rmdir /mnt/disk2 |& logger
Aug 26 15:25:45 unraid2 root: rmdir: failed to remove '/mnt/disk2': No such file or directory
Aug 26 15:25:45 unraid2 emhttp: shcmd (2598): umount /mnt/disk3 |& logger
Aug 26 15:25:45 unraid2 root: umount: /mnt/disk3: mountpoint not found
Aug 26 15:25:45 unraid2 emhttp: shcmd (2599): rmdir /mnt/disk3 |& logger
Aug 26 15:25:45 unraid2 root: rmdir: failed to remove '/mnt/disk3': No such file or directory
Aug 26 15:25:45 unraid2 emhttp: shcmd (2600): umount /mnt/disk4 |& logger
Aug 26 15:25:45 unraid2 root: umount: /mnt/disk4: mountpoint not found
Aug 26 15:25:45 unraid2 emhttp: shcmd (2601): rmdir /mnt/disk4 |& logger
Aug 26 15:25:45 unraid2 root: rmdir: failed to remove '/mnt/disk4': No such file or directory
Aug 26 15:25:45 unraid2 emhttp: shcmd (2602): umount /mnt/disk5 |& logger
Aug 26 15:25:45 unraid2 root: umount: /mnt/disk5: mountpoint not found
Aug 26 15:25:45 unraid2 emhttp: shcmd (2603): rmdir /mnt/disk5 |& logger
Aug 26 15:25:45 unraid2 root: rmdir: failed to remove '/mnt/disk5': No such file or directory
Aug 26 15:25:45 unraid2 emhttp: shcmd (2604): umount /mnt/disk6 |& logger
Aug 26 15:25:45 unraid2 root: umount: /mnt/disk6: mountpoint not found
Aug 26 15:25:45 unraid2 emhttp: shcmd (2605): rmdir /mnt/disk6 |& logger
Aug 26 15:25:45 unraid2 root: rmdir: failed to remove '/mnt/disk6': No such file or directory
Aug 26 15:25:45 unraid2 emhttp: shcmd (2606): umount /mnt/cache |& logger
Aug 26 15:25:45 unraid2 root: umount: /mnt/cache: target is busy
Aug 26 15:25:45 unraid2 root: (In some cases useful info about processes that
Aug 26 15:25:45 unraid2 root: use the device is found by lsof(8) or fuser(1).)
Aug 26 15:25:45 unraid2 emhttp: Retry unmounting disk share(s)...
---
  10. That would be nice indeed. +1
  11. And my reverse proxy is only local. I am against exposing this externally.
  12. I advise strongly against this if it is then exposed to the Internet. If it is only on the LAN, it's fine - and I guess there are use cases where HTTPS is required on your LAN too, no doubt. This can also be easily brute-forced: the user ID is always the same (root) and I don't know of a way to change that.
  13. Yep, I have mine set up that way, but you want to make sure your switch is connected to the battery as well so that the server can receive the shutdown request. -- MASTER -- -- SLAVE --
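A sketch of that master/slave arrangement, assuming unRAID's bundled apcupsd is doing the UPS monitoring (the IP address is made up; the slave polls the master for UPS status over the NIS port):

```
# -- MASTER -- (apcupsd.conf on the box the UPS is plugged into via USB)
UPSCABLE usb
UPSTYPE usb
NETSERVER on               # publish UPS status on the NIS port
NISIP 0.0.0.0
NISPORT 3551

# -- SLAVE -- (apcupsd.conf on the second server)
UPSCABLE ether
UPSTYPE net
DEVICE 192.168.1.10:3551   # master's IP and NIS port (example address)
```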
  14. Add an nginx docker container, mapping port 443 to 443. Use openssl to generate certs: https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-16-04 Create a config for the reverse proxy: site-confs/www.conf

---
server {
    listen 443 ssl;
    server_name unraid-ssl;

    ssl_certificate /certs/unraid.crt;
    ssl_certificate_key /certs/unraid.key;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://your_unraid_ipv4:80;
    }
}
---
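The cert-generation step can be sketched like this; the file names and the CN match the example config above, and the paths are just examples (the tutorial linked above covers the details):

```shell
# Generate a self-signed certificate/key pair for the reverse proxy.
# -nodes skips the passphrase so nginx can load the key unattended.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout unraid.key -out unraid.crt \
  -subj "/CN=unraid-ssl"
```

Copy the two files to wherever the container's /certs volume is mapped.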
  15. Has anybody attempted to set this up?
  16. I have the same one, no problems. Make sure it is connected to a USB 2.0 port.
  17. Go for this one I'd say: http://www.ebay.com/itm/NEW-Intel-I350-T4-PCI-Express-PCI-E-Four-RJ45-Gigabit-Ports-Server-Adapter-NIC-/262159517676 Also supports SR-IOV. Cheers.
  18. Oops, I meant to quote the other gentleman. Sorry, wasn't intended for you. @NotYetRated: Try:

      # lspci | grep -i 'network'
      # cat /etc/udev/rules.d/70-persistent-net.rules

      See if there is anything in there.
  19. "Aww pooh, that's what I was planning to use this for. Unfortunately for me I cannot see that unRAID is picking it up at all; tried 2 different slots. Going to drop it in my desktop to make sure the card's not a dud. Thanks for the replies everyone."

      Try:

      # lspci | grep -i 'network'
      # cat /etc/udev/rules.d/70-persistent-net.rules

      See if there is anything in there.
  20. No, it won't move back and forth unless you change the setting. If the cache fills up it just doesn't move the file(s).

      So what are you saying:
      1) Mover ignores data stored on a cache drive when 'Use cache disk' is set to 'Prefer'.
      2) unRAID will stop moving data as soon as the cache drive fills up.
      3) Mover will not start moving data unless 'Use cache disk' is changed to 'Yes'.

      1) Yes.
      2) Not exactly. If the copy operation of a particular file fails, the partial copy on the target is deleted but the script keeps running. For example, say there's 500MB free and it's copying a 1GB file - that will fail, but if the next file to be copied is smaller than 500MB, that will succeed.
      3) Files are possibly moved from array to cache for 'Prefer', and from cache to array for 'Yes'. The mover will skip any shares with the setting 'Only' or 'No'.

      Yes, bear that in mind when assigning the cache setting 'Prefer'. BTW, with a btrfs cache pool it's not uncommon to see very large cache disks. Once the 'mover' script decides it can move the data of a share, it completes the operation on that share without checking whether the config setting changed during the transfer. Don't do that unless you know the implications. You can look at the mover script, /usr/local/sbin/mover, and see exactly what's going on. Thank you.
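The per-file behavior described in answer 2) above (delete the partial copy on failure, keep going with the next file) can be sketched roughly like this. This is an illustration only, not the actual mover script - the real logic lives in /usr/local/sbin/mover - and temporary directories stand in for /mnt/cache and /mnt/disk1:

```shell
src=$(mktemp -d)   # stands in for a share on /mnt/cache
dst=$(mktemp -d)   # stands in for the same share on /mnt/disk1
printf 'hello' > "$src/a.txt"

for f in "$src"/*; do
  if cp "$f" "$dst/"; then
    rm "$f"                          # copy succeeded: remove the cached copy
  else
    rm -f "$dst/$(basename "$f")"    # copy failed (e.g. out of space): drop the partial file
  fi                                 # either way, continue with the next file
done
```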