KcWeBBy

Members
  • Posts: 21
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


KcWeBBy's Achievements

Noob (1/14)

1 Reputation

  1. I've also now noticed this is occurring with 6.8.2; any ideas on what to look for? Thanks in advance.
  2. A feature request: could an option be added to back up one docker at a time? This would minimize downtime for the smaller dockers when a larger docker exists in the set. To expand on this, it would also be nice to be able to schedule individual (or grouped) dockers on their own schedules, so that dockers with less dynamic data could be backed up weekly while more dynamic ones are backed up daily. (A rough per-container backup sketch appears after this list.) Thanks in advance!
  3. I have been using the letsencrypt docker to do all of my reverse proxying, including the Unraid GUI. While I know this is "less secure", I have some IP filtering set up to control who can access the reverse proxy. My trouble is that I cannot use the noVNC viewer while logged in through the reverse proxy. Does anyone know how this mechanic works and how I might be able to make it work? I'm assuming some configuration is required to proxy the WebSocket (a hedged nginx sketch appears after this list), but any help would be great. Thanks!
  4. That is consistent with what I'm seeing. Not exactly the same free space, but depending on the free space at the beginning of each file operation, it selects a drive. Thanks again for the help.
  5. Ahh, yes -- I am set to Most-free -- I've edited my prior post to include that.
  6. True... stripe-like, in that a different drive could be selected for each file, especially since I'm using the most-free allocation and all the drives were empty when I started. I have 4x 12Gb/s-capable LSI 9305 controllers in the box, so I'm just surprised I'm not getting faster disk-to-disk copy speeds... enterprise drives all the way around (the new ones are WD Gold, the old are older WD Golds).
  7. Yes... the "user" share, but different folders underneath, and all shares span all drives. So I would expect "striping"-type performance, which would be much higher, perhaps even in the GB/s range. Thanks for your response.
  8. Thanks guys... A couple of things that may not have been clear: my source for the copy is three different unRAID servers with the same share names (not the smartest, I know). Each of the servers has at least 8 drives; the biggest has 24. My new server has 10x 10TB drives, which makes it the highest capacity (the others are 16TB and 48TB; the 16TB is full, the 48TB is about 30% full). In correcting my "ways", I'm looking for the fastest way to copy the data from the user shares to the new server during some downtime on the old servers. I shied away from rsync and have been using rclone instead. I installed the 8 drives from server 1 into server 3 (the new one) and mounted them using Unassigned Devices. I have eight copies of rclone running in screen (one against each of the drives, writing to the new /mnt/user target after creating the shares); a rough sketch of this setup appears after this list. I'm getting about 200MB/sec overall throughput, which is way lower than I was hoping for. I have cache and parity turned off for all of the new shares. Each time I started a new instance of rclone against another one of the drives (no instance runs against the same source, but all hit the same destinations), I would see a 40-80MB/sec jump in throughput, so I don't think it's an I/O limit. I have four HBA controllers in the server, and I have spread the disks out so there are currently 5 disks on each controller: 3x new 10TB drives (user share) and 2x 2TB drives from the old server (Unassigned Devices). Any recommendations on how I could speed this up? 12 hours in, and I'm only 7TB into the copy. I probably won't interrupt this copy, but I have to do it again on the other server, which has 24x 2TB drives (which I can load in the server all at once). CPU, memory, and of course network I/O are flat idle on this server. Here's a 24-hour graph attached... you'll notice 26 disks because of the 10 SSDs installed for cache (but not in use yet).
  9. Yeah... it's for high-speed 4K video editing... it's a big-ass server. It will be interesting when I get to the point of copying the second drive into the user shares; I imagine rsync will try to delete files, so I'm going to have to research that.
  10. Tdallen, thanks for the reply. Yes, I have 60 bays in my new server (a 45Drives XL60). I was thinking of putting the old drives in, mounting them with Unassigned Devices, and then using rsync to duplicate the data to the new user shares (a hedged rsync sketch appears after this list). The no-parity suggestion is a smart add; I would have missed that. I am already skipping the cache setup for the shares until this copy is completely done, then enabling it and transferring the dockers and VMs that access the data; hopefully that part becomes pretty transparent. My new array is 10x 10TB drives, with two for parity, and 10x 128GB SSDs for cache. Thoughts?
  11. I'm sure someone has already covered this, but I'm moving three servers to my new giant server. I have new drives and a new array. Is it possible to move the drives out of the parity protection of the old servers, drop them into the new unRAID server, and move the data over to my new drives? Otherwise, I'm going to have to invest in 10-gig network infrastructure. I have three servers with roughly 64TB split between them. Your response and assistance is appreciated. Thanks!!
  12. I have had this docker working for some time now, but I am now seeing an influx of "Sqlite3: Sleeping for 200ms to retry busy DB" in the log file. I've launched a console and tried to run some troubleshooting commands to see if it was a corrupted database, but it doesn't appear that the sqlite3 binary is available? (A hedged integrity-check sketch appears after this list.) Any help would be great; this is causing some crashes/lock-ups, as it can't access the database for some time (until I restart the docker, which probably doesn't help the database health). Thanks.
  13. Wow, yeah, I haven't yet messed with the NerdPack, but I do have it installed. It deserves more time exploring than I currently have; I'm not sure I have the time to invest in figuring it out right now. I have a Proxmox environment where I already monitor the VMs with NRPE. I was hoping I could find someone who was already doing it and copy/paste to my machine. -=- unRAIDBM – 16TB, unRAID 6.2.4 Pro, Dell 2900: 2x 5160 3GHz, 48GB, SAS1068E, SmartUPS 8KVA | pveUnRAID – 100GB, unRAID 6.3.0-rc9 Basic, Proxmox VE VM: 6 cores, 48GB, qcow2, SmartUPS 8KVA, Dockers: BindDNS, musicbrainz, mysql, nginx, phpmyadmin, plex, plexEmail, plexpy, PlexRequests, portainer, pveAD, pvePlexpy | unRAIDSuper – 48TB, unRAID 6.3.0-rc9 Pro, SuperMicro X8DT6: 2x L5630 2.13GHz, 68GB, SAS2004, SmartUPS 8KVA
  14. Yes, thank you for that. I figured I would need to build a tgz file to be unpacked at startup, but what needs to be in there and how to build it is what really perplexes me at the moment. I am familiar with the ramdrive OS concept and have already integrated a couple of custom scenarios into the mix for saving logfiles, etc., but I do not yet understand how to build the NRPE apps into a tgz that could be reproduced on boot (a rough packaging sketch appears after this list). Thanks for the reply!
  15. Hi there... I'm looking for a bit of advanced help (compared to what I see in most topics here). I'd like to create a custom rc.d script that installs and runs the NRPE client as part of the boot process, so my unRAID servers will be capable of reporting to my Nagios installation on various conditions. This topic from 2008 alludes to a process, but actually has no script to install packages into the bootable config folder: https://lime-technology.com/forum/index.php?topic=1953.60 I have checked, and no plugin exists. Dockers would be counterproductive, as I want to run drive-level / process-level monitoring queries. Has anyone done this? I have several machines I'd like to run it on. I'm a good scripter, but I'm unsure how to build a script that builds the tools and runs them the same way each time (and keeps the package updated); a hedged boot-time install sketch appears after this list. Thanks in advance! -=- unRAIDBM - Dell 2900, 8x 2TB, 48GB RAM, 1x 5TB parity drive, 1x 500GB SSD cache | pveUnRAID - (Docker deployment server) Proxmox VE machine, 8 cores, 48GB RAM, 100GB data/parity drive (running on a 64-core, 1TB RAM Proxmox host) | unRAIDSuper - SuperMicro X8DT6, 72GB RAM, 2x 5TB parity drives, 2x 1.2TB 2.5" SSD cache drives, 24x 2TB data drives (48TB array) | Unconfigured storage: 2x Dell EqualLogic PS100E, 14x 2TB SAS drives, dual Type 2 controllers, 28TB, 4x 1GB iSCSI interfaces
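
For the per-container backup idea in post 2, here is a minimal sketch of the kind of thing I had in mind; the container names, the appdata layout, and the destination path are assumptions that would need to match your own setup, and this is not how any existing backup plugin actually behaves.

    #!/bin/bash
    # Hypothetical per-container backup: stop, archive appdata, restart.
    APPDATA=/mnt/user/appdata          # assumed appdata location
    DEST=/mnt/user/backups/docker      # assumed backup destination

    for name in "$@"; do               # e.g. ./backup.sh plex letsencrypt
        docker stop "$name"            # only this container is down at a time
        tar -czf "$DEST/${name}-$(date +%F).tar.gz" -C "$APPDATA" "$name"
        docker start "$name"           # restart before moving to the next one
    done

Separate cron entries calling this with different container names would give the weekly-versus-daily split described above.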
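
For the noVNC question in post 3, the usual missing piece when proxying VNC through nginx is the HTTP Upgrade handling for WebSockets. A hedged sketch of a location block follows; the matched path and the upstream address are assumptions, not the letsencrypt container's actual defaults.

    # Assumed location block; adjust the path and upstream to your setup.
    location ~ (/novnc|/wsproxy|/vnc) {
        proxy_pass https://192.168.1.10;          # Unraid server (assumed address)
        proxy_http_version 1.1;                   # required for WebSocket upgrades
        proxy_set_header Upgrade $http_upgrade;   # pass the Upgrade header through
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400;                 # keep long-lived VNC sessions open
    }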
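
A rough sketch of the parallel-copy setup described in post 8, purely to make it concrete; the mount points under /mnt/disks and the target share name are placeholders for whatever Unassigned Devices and the new array actually use.

    #!/bin/bash
    # One rclone copy per source disk, each in its own detached screen session.
    TARGET=/mnt/user/media             # assumed destination share

    for src in /mnt/disks/old{1..8}; do
        name=$(basename "$src")
        screen -dmS "copy-$name" \
            rclone copy "$src" "$TARGET" --transfers 4 --progress
    done

    screen -ls    # list the running copy sessions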
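
For the migration plan in post 10, a minimal rsync sketch under the same assumptions (old disk mounted via Unassigned Devices, shares already created on the new array); the paths are placeholders.

    # Preview what would be copied, then run it for real.
    rsync -av --dry-run /mnt/disks/old_disk1/Media/ /mnt/user/Media/
    rsync -av           /mnt/disks/old_disk1/Media/ /mnt/user/Media/

On the deletion worry in post 9: rsync only removes files from the destination when --delete is passed explicitly, so a plain -av copy from the second drive would add and update files but not delete anything.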
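
For the busy-database errors in post 12, the check I was trying to run looks roughly like this; the container name and database path are placeholders, and the sqlite3 binary has to come from somewhere else (the host via NerdPack, or copied into the container), since this image apparently doesn't ship it.

    # Stop the container first so nothing else holds the database open.
    docker stop my-app

    # "ok" means the file itself is sound; anything else suggests corruption.
    sqlite3 /mnt/user/appdata/my-app/app.db "PRAGMA integrity_check;"

    docker start my-app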
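
For the packaging question in post 14, one way to picture the tgz is as a snapshot of the files NRPE leaves on the ramdisk, captured once and re-extracted on every boot. The paths below are assumptions about where a compiled NRPE would land, not a tested recipe.

    # After installing/compiling NRPE once on a running system, capture the
    # binary, plugins, and config into a single archive on the flash drive.
    tar -czf /boot/config/custom/nrpe_files.tgz \
        /usr/sbin/nrpe \
        /usr/lib64/nagios/plugins \
        /etc/nrpe.cfg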
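
And for post 15 itself, a hedged sketch of the boot-time half: a few lines appended to the go script (or an rc.d-style script it calls) that unpack that archive and start the daemon. The archive name, the config path, and the assumption that the nagios user exists are all things to adapt, not a known-good unRAID recipe.

    # Hypothetical additions to /boot/config/go (runs at every boot).
    tar -xzf /boot/config/custom/nrpe_files.tgz -C /    # restore NRPE onto the ramdisk
    /usr/sbin/nrpe -c /etc/nrpe.cfg -d                   # start NRPE in daemon mode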