bsim

Members
  • Content Count

    173
  • Joined

  • Last visited

Community Reputation

0 Neutral

About bsim

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed

  1. The main reason I had to build all of the variables manually was that the Unraid system can't pull in the compose file that the docker(s) came with (using the GUI)... the compose file actually has all of the separate dockers configured in one file. What differences are there between what Unraid needs and what a typical compose file looks like? Has anyone considered making a plugin to translate standard compose files into what Unraid requires? Or even the ability for Unraid to export a compose file for a configured docker, so that users can more easily add apps to Community Apps for others?
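     A rough sketch of the correspondence in question; the service name, image, and values below are made-up placeholders (not Ambar's real settings), and this is only my reading of how the pieces line up, assuming nothing Unraid-specific:
         # hypothetical compose service:
         #   services:
         #     ambar-frontend:
         #       image: example/ambar-frontend:latest
         #       ports:
         #         - "8080:80"
         #       environment:
         #         - API_URL=http://tower:8081
         #       volumes:
         #         - /mnt/user/appdata/ambar:/data
         # an Unraid template captures essentially the same pieces that a plain docker run needs:
         docker run -d --name ambar-frontend \
           -p 8080:80 \
           -e API_URL=http://tower:8081 \
           -v /mnt/user/appdata/ambar:/data \
           example/ambar-frontend:latest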
  2. Is it possible to create a compose file for each of the dockers that I have if they didn't come from compose files originally?
  3. Will docker compose pull all of the settings (variables, ports, ...) that I've set up into something that Unraid would be able to reproduce easily for others?
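     A small sketch of recovering the settings already applied to an existing container with docker inspect, so they can be copied into a compose file or template by hand (the container name "ambar-es" is an assumption):
         docker inspect --format '{{json .Config.Env}}' ambar-es              # environment variables
         docker inspect --format '{{json .HostConfig.PortBindings}}' ambar-es # port mappings
         docker inspect --format '{{json .HostConfig.Binds}}' ambar-es        # volume/path mappings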
  4. So, I've set up Ambar from Docker Hub using Community Apps... but it has several very specific dependency dockers, required to run it, that are also housed in the same Ambar docker repository as part of the project. After getting the several dockers set up (with all of their separate settings), is there a way to create an overall Unraid template that pulls all of the separate Ambar dockers together, with all of the separate settings I scoured to find?
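     A sketch of one common workaround rather than an Unraid feature as such (container, network, and image names are placeholders): keep one template per Ambar piece, but put them all on a shared user-defined network so they can reach each other by name, much as the project's compose file arranges:
         docker network create ambar-net
         docker run -d --name ambar-es  --network ambar-net example/ambar-elasticsearch
         docker run -d --name ambar-web --network ambar-net -p 8080:80 example/ambar-frontend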
  5. So... I've finally got the Ambar docker (https://ambar.cloud/docs/installation/) up and running, with a few bugs yet to work out... but during the celebration of completing the massively frustrating configuration of all of the different dockers necessary, I had a heart-drop moment. The user shares all disappeared (but disk shares and drive space didn't change), so, remembering that user shares are dynamically created, and figuring something in Unraid had just crashed in the background, I did a clean reboot (after waiting a long while for the attempt to unmount user shares). Everything came back up OK afterwards, but I'm thinking that my docker mappings had something to do with the crash. I was mapping docker locations, e.g. "/usr/share/elasticsearch/data", directly to "/mnt/user/appdata/ambar" via docker paths. At some point, as I was restarting several dockers at once a few times over, the crash occurred. (I pulled a diag before restarting, just in case anyone wants to fish out the source issue.) Is there something wrong with directly mapping the above folders that would cause the user shares to crash like that? Is there a better/safer way to point docker data at the cache drive shares?
  6. So... I've finally got the Ambar docker up and running, with a few bugs yet to work out... but during the celebration of completing the massively frustrating configuration of all of the different dockers, I had a heart-drop moment. The user shares all disappeared (but disk shares and drive space didn't change)... so, remembering that user shares are dynamically created, and figuring something in Unraid had just crashed in the background, I attempted a clean reboot (after waiting a long while for the attempt to unmount user shares). Everything came back up OK afterwards, but I'm thinking that my docker mappings had something to do with the crash. I was mapping docker locations, e.g. "/usr/share/elasticsearch/data", directly to "/mnt/user/appdata/ambar" via docker config paths. At some point, as I was restarting several dockers at once a few times over, the crash occurred. (I pulled a diag beforehand, just in case anyone wants to fish out the source issue.) Is there something wrong with directly mapping the above folders that would cause the user shares to crash like that? Is there a better way to point docker data at a cache drive (one that doesn't move certain folders to the array)?
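     A commonly suggested alternative mapping (a sketch, not a confirmed fix for the crash above): bypass the /mnt/user FUSE layer for heavy container data and point it straight at the cache disk, keeping the container path the same:
         docker run -d --name ambar-es \
           -v /mnt/cache/appdata/ambar/elasticsearch:/usr/share/elasticsearch/data \
           example/ambar-elasticsearch   # image name is a placeholder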
  7. Holy crap, huge difference! So, with only a few standoffs different (a few movable, a few permanent ones electrical-taped well), my 4U Supermicro 24-bay case works well with the Supermicro X9DR3-LN4F+ motherboard (I was worried about the standoffs and the size difference), now running dual Intel E5-2637 v2s. The parity check now runs at the drive speeds, and using the array during a parity check is just as fast; it does slow the parity check down, but only slightly. I don't notice any server lag when using the array during parity checks either.
     I wanted to share my adventure with the Supermicro X9DR3-LN4F+ motherboard upgrade. Ordered it from eBay for $180 (included the faceplate and processor heatsinks), received my processors ($120 for the pair), installed everything... and wasn't getting any POST or beeps. Pulled all but one processor... nothing... swapped processors... nothing. Previous research had shown that unless the motherboard BIOS was at least 3.0 it would not be compatible with v2 processors, so it was most likely my BIOS version (great... a BIOS upgrade without a working processor? I thought).
     Luckily, the BMC on X9 boards and above has a feature that allows remote BIOS updates without any processors or memory installed! The one catch was a required "OOB license", needed to use either the BMC's web interface or Supermicro's command-line "sum" tool. Luckily, there is a Perl script on the web from a frustrated user (who reverse-engineered the BMC license code) that lets you generate your own OOB license key; as of recently it works on all boards. So I was able to update the BMC firmware easily through the interface, then update the motherboard BIOS directly from the new BMC interface. After that it worked like a charm!
     So where does that leave me in figuring out what my issue was? I'm still left with either the old motherboard not having enough lanes to support the drive throughput, a possible bug in its BIOS, or Unraid not quite supporting that motherboard somehow (driver bugs?). Thanks guys for your help! Time to eBay!
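     For reference, a sketch of what that kind of key generator reportedly boils down to (an assumption on my part, not verified here): the OOB activation key is derived as a truncated HMAC-SHA1 of the BMC's MAC address, using a secret hex key embedded in the script. With that key and your BMC MAC:
         BMC_MAC='0025901234AB'              # your BMC MAC, hex digits only (placeholder)
         OOB_KEY='<hex key from the script>' # the embedded secret, not reproduced here
         echo -n "$BMC_MAC" | xxd -r -p \
           | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$OOB_KEY" \
           | awk '{print substr($2,1,24)}'   # reportedly the first 24 hex chars form the license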
  8. So far (running a parity check right now) the slower speeds have nothing to do with the single-thread rating of the processors... I'm now running processors with double the single-thread rating of the originals (dual Opteron 6328s in place of the dual Opteron 6272s), and there's almost no change at all. I saw a bigger jump (~15 MB/s) from migrating three 1TB drives onto one 5TB drive and pulling the 1TB drives out of the array. I'm still pulling only the mid-50s (MB/s) from a parity check. I'm still leaning towards my issue being a limitation of the motherboard OR some sort of mismatch between the motherboard and Unraid (drivers). Is there a way to tell through Unraid what driver is being used for the drives? Is there a separate driver for AHCI, or are they all rolled into one? Can I tell what AHCI driver is being used?
     UPDATE: I may have answered my own question with more research...
         dmesg | grep -i --color ahci
         ahci 0000:00:11.0: AHCI 0001.0100 32 slots 6 ports 3 Gbps 0x3f impl SATA mode
         grep -i SATA /var/log/syslog | grep --color -i 'link up'
         kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
     Are these only referencing the odd unassigned device on the motherboard, or is this being used for the entire array?
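     One way to answer the driver question from the console (standard Linux tools, nothing Unraid-specific assumed): lspci -k lists each storage controller together with the kernel driver bound to it, and the /sys/block symlinks show which controller each disk hangs off:
         lspci -k | grep -iA3 'sata\|sas\|raid'   # the "Kernel driver in use:" line shows ahci, mpt2sas, etc.
         ls -l /sys/block/sd?                     # symlink targets include the PCI path of each disk's controller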
  9. Added two 5TB drives, removed three 1TB drives... will remove three 2TB drives... and plan on adding another 5TB drive. So yes, three slow drives will be removed overall. I was primarily thinking of the older drives' speeds dragging down my averages... the replacements will bring up my average direct drive speeds by 50-75 MB/s. The question is whether this translates into faster parity speeds... I will find out!
  10. As an update, I've removed/replaced the 1TB drives from the array and my parity speeds went up to the 50s (MB/s)... still low, but better... I plan on migrating out the 2TB drives to see if it helps further. I did confirm that my Supermicro backplane is simply a passthrough. While I have the different processors on hand, I plan on swapping in dual AMD Opteron 6328s (OS6328WKT8GHK); they roughly double my single-thread rating (higher clocks, but fewer cores)... a step just to determine whether the processors are really holding me back (vs. the motherboard itself). After that, I plan on swapping in a new motherboard completely, a Supermicro X9DR3-LN4F+ with dual Intel E5-2637 v2s; I regain my built-in KVM and gain four PCIe 3.0 x16 slots. I can't find the old motherboard's (Supermicro H8DGi) bus speed/lanes anywhere, but it was PCIe 2.0; the new motherboard will have a bus speed of up to 8 GT/s and PCIe 3.0. That gives me a great upgrade path to an SSD rack in the future.
  11. That's the point: with New Config first and then unassigning the drives, the drive icons never went blue, and I never got the option to start the array. Only once I unassigned the drives before New Config did I get the blue balls and the option to start rebuilding.
  12. I did attempt to unassign them twice after "New Config"; neither took (just "invalid configuration" both times)... could the removal of three drives at once have caused a glitch? As soon as I unassigned before "New Config" it worked flawlessly. Overall, doing it before does seem to make much more logical sense for the process.
  13. It looks like the "Shrink array" page is the problem... it has you unassigning the drives after the "New Config". Using a bit of logic, I was able to figure out that unassigning the drives and then doing "New Config" works beautifully. The documentation needs to be fixed.
  14. Problems following https://wiki.unraid.net/Shrink_array
      I have a dual-parity array, have recently done a full parity check without error, and am running the latest Unraid Pro. I have a full printout of all drive assignments.
      I rsynced (with remove) all data off of three old 1TB hard drives (onto the array) and confirmed they are empty.
      I shut down the array and changed the shares' included-disks list to only include the drives I want to keep in the array (checked all but the 3 drives).
      Then Tools, New Config, retain all, yes, apply.
      On Main, I unassigned the three drives to be removed and double-checked that all other drives are listed correctly.
      I cannot start the array: it reports "invalid configuration" instead of offering to rebuild parity without the drives. What am I missing? Is this a bug? Is there a workaround? Shouldn't I be able to remove as many drives as I want and just rebuild parity?
  15. Does this conversation sound like my processors' single-thread rating is my problem? Is there anything else that might be the problem?