KcWeBBy

Everything posted by KcWeBBy

  1. I've also now noticed this is occurring with 6.8.2. Any ideas on what to look for? Thanks in advance.
  2. A feature request: could an option be added to back up one docker at a time? This would minimize the downtime of smaller dockers when a larger docker is in the same set. To expand on that, it would also be nice to be able to schedule individual dockers (or groups of dockers) on their own schedules, so that dockers with less dynamic data could be backed up weekly while the more dynamic ones are backed up daily. Something like the sketch below is the idea. Thanks in advance!
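     Something like this per-container script is what I'm picturing (a rough sketch only; the container name, appdata path, and backup destination are placeholders, not anything the plugin offers today):
        #!/bin/bash
        # Rough sketch of a per-container backup job (names and paths are placeholders).
        NAME="somedocker"                          # hypothetical container name
        APPDATA="/mnt/user/appdata/${NAME}"        # assumed appdata location
        DEST="/mnt/user/backups/appdata"           # hypothetical backup share

        docker stop "${NAME}"                      # stop only this container
        tar -czf "${DEST}/${NAME}-$(date +%F).tar.gz" -C "${APPDATA}" .   # archive its appdata
        docker start "${NAME}"                     # bring it straight back up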
  3. I have been using the letsencrypt docker to do all of my reverse proxying, including the unraid gui. While I know this is "less secure", I have some IP filtering set up on who can access the reverse proxy. My trouble is that I cannot use the noVNC viewer while logged in through the reverse proxy. Does anyone know how this mechanic works, and how I might be able to make it work? I'm assuming some configuration is required to proxy the web socket (something like the sketch below), but any help would be great. Thanks!
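     The sort of thing I have in mind is below (a sketch only; the location path and the backend address are guesses on my part, not values I've confirmed the unraid noVNC client actually uses):
        # Sketch: pass the websocket upgrade through to the unraid noVNC console.
        # The location path and upstream address are assumptions, not confirmed values.
        location /wsproxy/ {
            proxy_pass http://192.168.1.10;           # unraid box behind the proxy (example IP)
            proxy_http_version 1.1;                   # websockets require HTTP/1.1
            proxy_set_header Upgrade $http_upgrade;   # forward the Upgrade header
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }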
  4. That is consistent with what I'm seeing... not "exactly" the same free space, but depending on the free space at the beginning of each file operation, it's selecting a drive. Thanks again for the help.
  5. Ahh, yes -- I am set to Most-free -- I've edited my prior post to include that.
  6. True... stripe-like, in that for each file a different drive could be selected, especially since I'm using the "most-free" allocation and all the drives were empty when I started. I have 4x 12Gb/s-capable LSI 9305 controllers in the box; I'm just surprised I'm not getting faster disk-to-disk copy speeds. Enterprise drives all the way around (new are WD Golds, old are older WD Golds).
  7. Yes... the "user" share, but different folders underneath, and all shares include all drives... so I guess I would expect "striping"-type performance, which would be much higher, perhaps even in the GB/sec range. Thanks for your response.
  8. Thanks guys... a couple of things that may not have been clear. My source for the copy is three different unraid servers with the same share names (not the smartest, I know). Each of the servers has at least 8 drives; the biggest has 24. My new server has 10 (10TB) drives, which makes it the highest capacity (the others are 16TB and 48TB; the 16TB is full, the 48TB is about 30% full).
     In correcting my "ways", I'm looking for the fastest way to copy the data from the user shares to the new server during some downtime on the old servers. I shied away from rsync and have been using rclone instead. I installed the 8 drives from server 1 into server 3 (the new one) and mounted them using unassigned devices. I have eight copies of rclone running in screen (one against each of the drives, copying to the new /mnt/user target after creating the shares); each is launched roughly as shown in the sketch below. I'm getting about 200MB/sec overall throughput, which is way lower than I was hoping for. I have cache and parity turned off for all of the new shares.
     Each time I started a new instance of rclone against another one of the drives (no two instances share a source, but all hit the same destination), I saw a 40-80MB/sec jump in throughput, so I don't think it's an I/O limit. I have four HBA controllers in my server and have spread the disks out so there are currently 5 disks on each controller: 3x new 10TB drives (user share) and 2x 2TB drives from the old server (unassigned devices).
     Any recommendations on how I could speed this up? 12 hours in, and I'm only 7TB into the copy. I probably won't interrupt this copy, but I'll have to do it again for the other server, which has 24x 2TB drives (which I can load into the server all at once). CPU, memory, and of course network I/O are flat idle on this server. A 24-hour graph is attached; you'll notice 26 disks because of the 10 SSDs installed for cache (but not in use yet).
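     For reference, each instance is launched roughly like this (paths are examples from my setup; the exact share names don't matter):
        # One rclone instance per source disk, each in its own detached screen session.
        # /mnt/disks/old_disk1 is the unassigned-devices mount for one old drive,
        # /mnt/user/Media is the rebuilt share on the new array.
        screen -dmS copy_disk1 rclone copy /mnt/disks/old_disk1/Media /mnt/user/Media \
            --transfers 4 --checkers 8 --progress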
  9. Yeah... it's for high-speed 4K video editing... it's a big-ass server. It will be interesting when I get to the point of copying the second drive into the user shares; I imagine rsync will try to delete files, so I'm going to have to research that.
  10. Tdallen, thanks for the reply. Yes, I have 60 bays in my new server (a 45drives XL60). I was thinking of putting the old drives in, mounting them with unassigned devices, and then using rsync to duplicate the data to the new user shares, roughly as sketched below. The no-parity tip is a smart add; I would have missed that. I am already skipping the cache setup for the shares until this copy is completely done, then enabling it and transferring the dockers and VMs that access the data; hopefully that part becomes pretty transparent. My new array is 10x 10TB drives, with two for parity, and 10x 128GB SSD drives for cache. Thoughts?
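     The copy itself would be one pass per old disk, along these lines (example paths only; nothing final):
        # Copy one old disk (mounted via unassigned devices) into the new user share.
        # -a preserves ownership/timestamps; nothing on the target is deleted
        # unless --delete is added explicitly.
        rsync -avh --progress /mnt/disks/old_disk1/Media/ /mnt/user/Media/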
  11. I'm sure someone has already covered this, but I'm moving three servers to my new giant server. I have new drives and a new array. Is it possible to move the drives out of the parity protection of the old servers, drop them into the new unraid server, and move the data over to my new drives? Otherwise I'm going to have to invest in 10-gig network infrastructure. I have three servers with roughly 64TB split between them. Your response and assistance is appreciated. Thanks!!
  12. I have had this docker working for some time now, but I am now seeing an influx of "Sqlite3: Sleeping for 200ms to retry busy DB" in the log file. I've launched a console and tried to run some troubleshooting commands to see if it was a corrupted database, but it doesn't appear that the sqlite3 binary is available inside the container; what I'd like to run is sketched below. Any help would be great. This is causing some crashes/lockups, as it can't access the database for some time (until I restart the docker, which probably doesn't help the database health). Thanks.
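     What I'd like to run against the database is roughly this, from the host rather than inside the container (assuming a sqlite3 binary can be put on the host, e.g. via the Nerd Pack; the database path is a placeholder, not the real location):
        # Run the integrity check from the unraid host with a sqlite3 binary installed there.
        # /mnt/user/appdata/<docker>/path/to/database.db is a placeholder for wherever
        # this docker keeps its database in appdata.
        sqlite3 "/mnt/user/appdata/<docker>/path/to/database.db" "PRAGMA integrity_check;"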
  13. Wow, yeah, I haven't yet messed with the Nerd Pack, but I do have it installed. It deserves more exploring time than I currently have to invest in figuring it out. I have a proxmox environment where I already monitor the VMs with NRPE. I was hoping I could find someone who was doing it and copy/paste to my machine.
     unRAIDBM – 16TB unRAID 6.2.4 Pro, Dell 2900: 2x 5160-3GHz: 48GB: SAS1068E: SmartUPS8KVA
     pveUnRAID – 100GB unRAID 6.3.0-rc9 Basic, ProxMox VE VM: 6 cores: 48GB: qcow2: SmartUPS8KVA, Dockers: BindDNS: musicbrainz: mysql: nginx: phpmyadmin: plex: plexEmail: plexpy: PlexRequests: portainer: pveAD: pvePlexpy
     unRAIDSuper – 48TB unRAID 6.3.0-rc9 Pro, SuperMicro X8DT6: 2x L5630-2.13GHz: 68GB: SAS2004: SmartUPS8KVA
  14. Yes, thank you for that. I figured I would need to build a tgz file to be unpacked at startup, but what needs to be in there and how to build it is what really perplexes me at the moment (something like the sketch below is what I'm picturing). I am familiar with the ramdrive OS concept and have already integrated a couple of custom scenarios into the mix for saving log files, etc., but I do not yet understand how to build the NRPE apps into a tgz that could be reproduced on boot. Thanks for the reply!
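     To make that concrete, the build step I'm imagining is something like this (a sketch only; the archive location and file list are guesses, not a known-good recipe):
        # Capture the files a one-off NRPE build/install put down, so the archive can be
        # unpacked again at every boot. Paths are examples; the real list depends on the build.
        tar -czf /boot/custom/nrpe-files.tgz \
            /usr/local/nagios/bin/nrpe \
            /usr/local/nagios/etc/nrpe.cfg \
            /usr/local/nagios/libexec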
  15. Hi there... I'm looking for a bit of advanced help (compared to what I see on most topics here). I'd like to create a custom rc.d script that installs and runs the NRPE client as part of the boot process, so my unraid servers will be capable of reporting to my nagios installation on various conditions (the go-file sketch below is roughly the end state I'm picturing). This topic from 2008 alludes to a process, but has no actual script for installing packages into the bootable config folder: https://lime-technology.com/forum/index.php?topic=1953.60 I have checked and no plugin exists. Dockers would be counterproductive, as I want to run drive-level / process-level monitoring queries. Has anyone done this? I have several machines I'd like to run it on. I'm a good scripter, but I'm unsure how to build a script that builds the tools and runs them the same way each time (and keeps the package updated). Thanks in advance!
     unRAIDBM - Dell 2900, 8x 2TB, 48GB RAM, 1x 5TB parity drive, 1x 500GB SSD cache
     pveUnRAID - (docker deployment server) ProxMox VE machine, 8 cores, 48GB RAM, 100GB data/parity drive (running on a 64-core, 1TB-RAM proxmox host)
     unRAIDSuper - SuperMicro X8DT6, 72GB RAM, 2x 5TB parity drives, 2x 1.2TB 2.5" SSD cache drives, 24x 2TB data drives (48TB array)
     Unconfigured storage: 2x Dell EqualLogic PS100E, 14x 2TB SAS drives, dual Type 2 controllers, 28TB, 4x 1Gb iSCSI interfaces
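     The end state I'm picturing is a couple of lines in the go file along these lines (a sketch; the archive name and paths are placeholders):
        # /boot/config/go -- runs at every boot; archive name and paths are placeholders.
        # Unpack the pre-built NRPE files and start the daemon.
        tar -xzf /boot/custom/nrpe-files.tgz -C /
        /usr/local/nagios/bin/nrpe -c /usr/local/nagios/etc/nrpe.cfg -d    # -d = run as a daemon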
  16. Wow... just when I thought I had something.... Now I feel stupid, I never even saw the switch until you said something.. Thanks!
  17. Hi folks... trying some advanced docker stuff (porting some non-unraid-friendly configs to unraid). I've tried this on the current release as well as the pre-release (I have multiple servers). Currently, it is not possible to configure <ExtraParams> options via the unraid webgui. One would think this would be viewable under "Show Advanced Settings" and editable via "Add another Path, Port or Variable", but there isn't an option. Any ideas on this? I have added it to the xml file and it works, but I cannot edit the image settings via the GUI or the ExtraParams get removed. See my XML for phpmyadmin:
     <?xml version="1.0"?>
     <Container version="2">
       <Name>phpmyadmin</Name>
       <Repository>phpmyadmin/phpmyadmin</Repository>
       <Registry>https://hub.docker.com/r/phpmyadmin/phpmyadmin/</Registry>
       <Network>bridge</Network>
       <Privileged>false</Privileged>
       <Support>http://lime-technology.com/forum/index.php?topic=42423</Support>
       <Overview>The world's most popular open source database management[br]</Overview>
       <Category>Network:Other MediaApp:Other Other: Tools:</Category>
       <WebUI>http://[IP]:[PORT:8080]/</WebUI>
       <TemplateURL/>
       <Icon>https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/my$
       <ExtraParams>--link mysql:db</ExtraParams>
       <DateInstalled>1484504071</DateInstalled>
       <Description>The world's most popular open source database management[br]</Description>
       <Networking>
         <Mode>bridge</Mode>
         <Publish>
           <Port>
             <HostPort>8080</HostPort>
             <ContainerPort>80</ContainerPort>
             <Protocol>tcp</Protocol>
           </Port>
         </Publish>
       </Networking>
       <Data/>
       <Environment>
         <Variable>
           <Value>1</Value>
           <Name>PMA_ARBITRARY</Name>
           <Mode/>
         </Variable>
       </Environment>
       <Config Name="Host Port 1" Target="80" Default="80" Mode="tcp" Description="Container Port:80" Ty$
       <Config Name="Key 1" Target="PMA_ARBITRARY" Default="1" Mode="" Description="Container Variable: $
     </Container>
     Of course, this docker won't work without the --link setting (even with PMA_ARBITRARY enabled); the plain docker run it corresponds to is below for comparison. Any ideas? Thanks!
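     For comparison, the docker run that this template is trying to express looks roughly like this (container name and port mapping taken from the XML above):
        # What the template boils down to on the command line; --link is the piece
        # that has no field in the unraid webgui.
        docker run -d --name phpmyadmin \
            --link mysql:db \
            -p 8080:80 \
            -e PMA_ARBITRARY=1 \
            phpmyadmin/phpmyadmin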
  18. And success! One inode of 0 bytes lost, but no other evidence of corruption. Just to recap, in case anyone else hits this issue:
     I edited /boot/config/disks.cfg to set autostart to "no", rebooted, and started the array in maintenance mode via the gui. I ran xfs_repair -v /dev/md8; the results indicated nothing could be done because log metadata needed to be replayed. It suggested trying a mount to replay the log, and then, if that didn't work, the dreaded -L. I attempted (and failed) to mount the affected disk, in this case /dev/md8; the failure forced another hard boot (after the dump I copied in my last post). I started the array in maintenance mode again via the gui, and this time from a terminal I ran xfs_repair -L /dev/md8. Twenty minutes later it reported success, and I mounted the drive and checked files. After this check was successful, I rebooted, upgraded to 6.2.2, and am now bringing my array online for good this time. The condensed sequence is below. Hope this helps someone else. Until next time.
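     Condensed, assuming the affected disk is disk 8 (/dev/md8) as it was for me:
        # 1. Disable array autostart in /boot/config/disks.cfg and reboot.
        # 2. Start the array in maintenance mode from the gui, then from a terminal:
        xfs_repair -v /dev/md8     # stopped on the dirty log and suggested mounting first
        # 3. The mount attempt failed (hard reboot needed), so back in maintenance mode:
        xfs_repair -L /dev/md8     # zeroes the log; about 20 minutes, then the disk mounted cleanly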
  19. Quick update. I edited the disks.cfg offline (in another computer) so that it would not auto-start the array, and attempted to mount md8 (the affected drive). I get this in syslog and a "Killed" message in my terminal. After doing this, I have no apparent option but to run xfs_repair -L /dev/md8 after a fresh reboot and see if that can repair the drive... wish me luck.
  20. Hey there. First off, I committed the first mortal sin: I wasn't able to get a copy of the logs prior to my first hard reboot. My system hung while doing a mover operation, and I left it for 18 hours; I was still able to use the shares and manipulate files, but the mover was not actually moving anything and was not responding to a SIGTERM. So I hard booted.
     When it came back up, I noticed that the web interface (emhttp) was running but not accessible. I looked in htop and found it waiting on a hung "sync" process. I am unable to powerdown / reboot / shutdown -h now the machine; it posts the broadcast message but does not actually effect any change. I can only assume the sync is stuck in some kind of hardware IOWAIT and holding processes up.
     Upon reboot I did capture a set of logs, but it's the most basic (only since the last boot). You will see that disk8 is showing XFS errors, and what looks like a SEGFAULT message, but I'm not too sure about that. I booted into safe mode and attempted to run xfs_repair -nv to see what would happen, and got absolutely zero output after 2 hours of running (still on the first line of output). In fact, I could not SIGTERM the process and had to hard reboot once again. While this was running, the sync process was still there, showing "D" for status in htop (I think that means zombie?). I have a large amount of data on this drive and would like to recover most of it. Is there a way to fix the XFS on the drive without the sync process starting up when the machine does? By the way, the whole time the sync process is running there are no I/O lights on my drives, so I assume it's truly hung. I'm not a novice administrator, but this one has me baffled; I'm pretty new to XFS and unRAID. Any help would be appreciated. A couple of areas I could use help with (with no GUI, I'm limited to the command line):
     - How can I identify disk 8? If I can identify it, can I then remove it and ask the parity to rebuild its contents?
     - How can I prevent the sync process from locking up on me? What does it do, and is it essential to the functioning of the array?
     - Is there a way to fix the filesystem directly on the drive, even if it destroys the parity? I think maintaining the parity is what's causing the problems.
     - Am I on the right track? Is there an easier fix?
     Your help is appreciated. Thanks. What I've tried so far is sketched below.
     unraidbm-diagnostics-20161020-2224.zip
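     For reference, roughly what I have been trying from the console (assuming disk 8 maps to /dev/md8, which is part of what I'd like to confirm):
        # Read-only check of the filesystem on what I believe is disk 8 (writes nothing).
        xfs_repair -nv /dev/md8        # hung with no output after 2 hours

        # Watching the stuck sync process; it just sits showing "D" in the STAT column.
        ps aux | grep '[s]ync'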