mfwade

Members
  • Content Count: 24
  • Joined
  • Last visited

Community Reputation: 5 Neutral

About mfwade
  • Rank: Member
  1. Good Morning, I am in the process of swapping out my existing Supermicro server for another server, an HP ProLiant DL385 G7. I know it's dated; however, it does have more horsepower than the current Supermicro. Specs are as follows:
     - Qty. (2) AMD Opteron 6282 SE processors @ 2.6 GHz
     - 128 GB RAM
     - BIOS A18
     - Will connect to the existing external 2.5" and 3.5" SAS (EMC) enclosures
     - Drive count: 30 x 4 TB SAS SSDs, plus numerous Unassigned Devices HDDs
     - Running upwards of 15 Docker containers (Plex, Radarr, Sonarr, Minio, NGINX, etc.)
     Questions:
     - Should I continue to use the existing LSI 9211-8i SAS controllers, or should I look at something different? Currently the parity check runs at 55 MB/s, so a tad on the slow side for SSDs. I suspect the aging architecture, the cards and/or bus, and the dual parity may play a part in the slower speed (see the back-of-envelope sketch after this post).
     - Right now I am using an HP 6 Gb/s SAS expander to connect all of the internal SAS drives on the Supermicro, along with the expander's single external connector for the external shelves. I would like to move away from the expander and use a dedicated card. Are there any recommendations for a PCIe 2.0 (or backwards-compatible PCIe 3.0) card? I am looking at an LSI 9207-8e; however, I am not opposed to using something different.
     - Any opposition to bumping the memory up to 256 GB? I have it, so why not use it...
     - Does anyone out there have access to (and is willing to share) HP BIOS and firmware? My IT purchasing has centered around IBM, Dell, Cisco, etc., but never HP. Any assistance is appreciated; I would like to update the server if I can.
     - Any recommendations on settings specific to HP ProLiant gear and Unraid?
     - Any settings, other than what I am currently using (SSD TRIM plugin), to account for the all-SSD array?
     I am sure I left some things out. Feel free to ask any other questions pertinent to making recommendations. Thanks in advance for your support! -MW
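The bus bottleneck suspected above is easy to sanity-check. Below is a back-of-envelope sketch, assuming the LSI 9211-8i sits in a PCIe 2.0 x8 slot and all 30 SSDs funnel through a single 4-lane 6 Gb/s SAS wide port on the expander; the link-rate constants are standard figures for 8b/10b-encoded links, not something taken from this thread:

```python
# Rough per-drive ceiling during a parity check, where every drive
# is read simultaneously. All figures are theoretical link rates.
PCIE2_MB_PER_LANE = 500   # PCIe 2.0: 5 GT/s, 8b/10b -> ~500 MB/s usable per lane
SAS6_MB_PER_LANE = 600    # SAS 6 Gb/s: 8b/10b -> ~600 MB/s usable per lane

hba_bw = 8 * PCIE2_MB_PER_LANE      # 9211-8i in a PCIe 2.0 x8 slot: ~4000 MB/s
uplink_bw = 4 * SAS6_MB_PER_LANE    # one 4-lane wide port to the expander: ~2400 MB/s

drives = 30                          # the 30 SSDs described above
ceiling = min(hba_bw, uplink_bw) / drives
print(f"per-drive ceiling: ~{ceiling:.0f} MB/s")   # ~80 MB/s before protocol overhead
```

An ~80 MB/s theoretical ceiling, less SAS and expander protocol overhead, lands in the same neighborhood as the observed 55 MB/s, which is consistent with the expander uplink (rather than the SSDs) being the limit and supports the idea of moving to dedicated cards such as the 9207-8e.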
  2. Only the two that are showing, sdv and sdw (the ones in the picture above). -MW
  3. I guess I am the only one having this issue. Sigh... Maybe Limetech will see this and a) say yep, there is an issue, b) tell me to deal with it, take my meds, or have a beer, or c) say hey, that's a new one... I should have added that this is Unraid 6.7.0. At any rate, I do believe it's simply cosmetic. -MW
  4. Definitely a typo. It is set to 6000000, which comes out to roughly 6.29 TB (see the conversion sketch below).
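The arithmetic above checks out if the volume size field is read as mebibytes (an assumption; the post does not name the unit):

```python
# 6,000,000 MiB expressed in decimal terabytes.
volsize_mib = 6_000_000
bytes_total = volsize_mib * 1024**2       # MiB -> bytes
print(f"{bytes_total / 1e12:.2f} TB")     # -> 6.29 TB
```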
  5. Limetech, I tried that as well. It doesn't work with any of the permission variables. Maybe it just wasn't meant to be, and I will have to go out and buy an older, albeit used, Time Capsule and replace the drive. The main reason I would like to get this to work is that my existing TC is starting to fail. Again, thanks to all for the assist! -MW
  6. Good Morning, Moderators, please let me know if this needs to be moved. I posted it here because I am not having an issue with the plugin; rather, this seems to be nothing more than a GUI issue. Please advise. I am observing a weird issue with the Unraid web interface. On the main Dashboard page, at the bottom under Unassigned Devices, I see several disks, in this case four; however, on the Main tab I actually only have two mounted. See the screenshots below: the first is from the Dashboard page, the second from the Main page. This behavior isn't affecting the use of the mounted shares; it just messes with my OCD. Attaching diagnostics for your review. All in all, I am really liking the new interface. Great job, Limetech!! -MW unraid-1-diagnostics-20190514-1050.zip
  7. So, not sure if we should continue with this thread or open a new one. Moderators, please advise. I tried once again this morning to get Time Machine (via SMB) to work with two Mac computers, one running Mojave and the other running High Sierra, both with the same results. I create a share called 'Time Machine' or 'test-1', etc., and assign the following SMB attributes (export: yes/time machine, volume size: 60000 for 6TB, security: private). When looking in either Mac's Time Machine preferences and attempting to add a new disk, the new share is not visible. However, if I mount the SMB share via Finder > Go > Connect to Server, then I am able to see and use the disk in Time Machine. I have also tested using AFP. When creating the same type of share via AFP, the following settings were used (export: yes/time machine, volume size: 60000 for 6TB, volume dbpath: empty, security: private). When I browse for the share in Time Machine, I am able to see it. The issue comes when I try to mount it in the Time Machine settings: it takes upwards of 20 seconds to 'connect' to the drive, and when it does finally connect, I am prompted for my credentials and then it errors out. Unfortunately, I did not capture any screenshots of the errors this morning; I will take care of that when I get home this evening and post them. So, in summary: if I am not following the proper procedure when using an SMB share with Time Machine, please let me know. I am under the assumption that it will simply appear, much like an AFP share (see the discovery sketch after this post). If others have been able to get this to work, please share your settings for not only creating the share but also how it is exported. I am attaching my diagnostics file; maybe it will be of use to someone in the hopes of figuring this out. Thank you to everyone in advance for reading and providing commentary. The support you provide is sincerely appreciated. -MW unraid-1-diagnostics-20190514-1050.zip
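One likely explanation for the symptoms above, for anyone landing here later: Time Machine's disk picker does not browse SMB shares at all; it lists destinations advertised over mDNS/Bonjour as the `_adisk._tcp` service, which is why the share mounts fine via Connect to Server yet never appears in the picker. A minimal diagnostic sketch using the python-zeroconf package (a hypothetical check, not something posted in the thread) to see what the server actually advertises:

```python
import time
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

class AdiskListener(ServiceListener):
    """Print every host advertising Time Machine volumes via mDNS."""
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        addrs = info.parsed_addresses() if info else []
        print(f"advertised: {name} -> {addrs}")
    def update_service(self, zc, type_, name):
        pass
    def remove_service(self, zc, type_, name):
        pass

zc = Zeroconf()
# _adisk._tcp carries the per-volume TXT records the Time Machine picker reads.
browser = ServiceBrowser(zc, "_adisk._tcp.local.", AdiskListener())
try:
    time.sleep(5)   # give responders on the LAN a moment to answer
finally:
    zc.close()
```

If the Unraid box never shows up in this list, the SMB share is exported but not advertised, which would match the behavior described above; the AFP share appears because netatalk registers the advertisement itself.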
  8. Tucubanito07, Let me know how you make out with your settings. I will open a new thread for my issues with Time Machine backups. -MW
  9. I don't believe that would affect anything other than how files and folders are written to disk. -MW
  10. I used the following; however, I am unable to provide the Time Machine screenshot, as I did not configure my VPN to allow discovery. -MW
  11. Yea, I am able to back up over the wire and wirelessly with the AFP protocol. The variables are all set in Unraid (web GUI) when setting up the 'share'. -MW
  12. I am not having the same issue per se; rather, I am unable to view the SMB share in Time Machine. I can create it with all of the variables, but then in Time Machine, under Add Disk, nothing is visible. In contrast, when I create a share for Time Machine using AFP, it shows up and I am able to use it.
  13. I know this is an older thread; however, I wanted to get your thoughts on how well your solution is working. I too am working with multiple Minio containers; however, instead of Duplicati, I am testing Arq (on the Mac, as it closely mimics Time Machine) and Cloudberry (MS Server and MS Workstations). Looking forward to hearing about your experiences. -MW
  14. Please accept my apologies. I missed this section in the readme... "Certs are checked nightly and if expiration is within 30 days, renewal is attempted. If your cert is about to expire in less than 30 days, check the logs under /config/log/letsencrypt to see why the renewals have been failing. It is recommended to input your e-mail in docker parameters so you receive expiration notices from letsencrypt in those circumstances." Thanks again for all the hard work. It is truly appreciated! -MW
  15. If I may, I am setting up an S3-compatible (Minio) container and would like to use Letsencrypt to generate the certs. I found another Docker image online (not Linuxserver) that states:
      - This image runs certbot under the hood to automate issuance and renewal of letsencrypt certificates.
      - Initial certificate requests are run at container first launch, once the image responds on a specified health check url.
      - Then certificate validity is checked at 02:00 on every 7th day-of-month from 1 through 31, and certificates are renewed only if expiring in less than 28 days, preventing the container from being rate limited by letsencrypt.
      - Issued certificates are made available in the container's /certs directory, which can be mounted on the docker host or as a docker volume to make them available to other applications.
      So far everything is working great with the Linuxserver container (huge thank you, by the way). I am just wondering how the container handles auto-renewing the existing certs (see the sketch after this post). Thanks for a great container! -MW
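Both renewal policies quoted in the two posts above boil down to the same check: read the certificate's expiry and renew only inside a fixed window. A minimal sketch using the cryptography package (the cert path is hypothetical, and the 30-day window follows the Linuxserver readme quoted in post 14; the certbot-based image uses 28):

```python
from datetime import datetime, timedelta
from cryptography import x509

CERT_PATH = "/config/etc/letsencrypt/live/example.com/cert.pem"  # hypothetical path

with open(CERT_PATH, "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

window = timedelta(days=30)                          # renewal window
remaining = cert.not_valid_after - datetime.utcnow()
if remaining < window:
    print(f"{remaining.days} days left -> attempt renewal")
else:
    print(f"{remaining.days} days left -> skip (avoids Let's Encrypt rate limits)")
```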