koyaanisqatsi

Members
  • Content Count

    54
  • Joined

  • Last visited

Community Reputation

2 Neutral

About koyaanisqatsi

  • Rank
    Newbie

Converted

  • Gender
    Undisclosed

  1. Which Docker image are you using? I'm just finally getting back to updating my cacti image (chestersgarage/cacti), and the 1.2.x versions of cacti are substantially different from 1.1.38, which completely broke my Docker build and my latest attempt at updating the image. So I'm sort of in the same boat as you right now. But I should have things figured out in the coming weeks.
  2. OK, that's what I figured, but I wanted to ask and be sure. I'd hate to assume and find out later there's some performance issue because of it. I see what you mean about shared DMI. My block diagram shows 8GT/s DMI into the CPU from the PCH and 8GT/s PCIe which is dedicated to the storage controller (x8 on a x16, but nothing is using the other 8 lanes). Thanks!
  3. Hey All, Did some searches and couldn't find any indication of whether placing the storage controller on PCIe lanes that connect directly to the CPU or to the chipset is better for unRAID. Is there any discussion on this? If not, I'd love to hear experiences. I've currently got a Core i3-7350K CPU @ 4.20GHz on a Supermicro C7Z170-SQ, with an 8-port Marvell 88SE9485 SAS/SATA 6Gb/s controller and eight 6TB WD Red EFRX disks. Right now, the controller is in a slot on CPU PCIe lanes, but I could move it to a slot with chipset PCIe lanes if that's a better setup.
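In case it helps anyone checking their own layout: these inspection one-liners (assuming pciutils is installed; the `01:00.0` address is just an example, use whatever address your controller shows up at) let you see which root port the controller hangs off and what link it negotiated:

```shell
# Tree view: devices under the CPU's root ports vs. devices under the PCH.
lspci -tv

# Find the SAS controller's bus address.
lspci | grep -i -E 'sas|sata'

# Show the slot's capability vs. the actually negotiated width/speed
# (01:00.0 is an example address -- substitute your controller's).
lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'
```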
  4. This could be dangerous. Power failure can corrupt the file system, and writing to a corrupt file system can make the corruption worse, even so far as making the file system no longer accessible. That's if it doesn't throw some other kind of error trying to deal with the corruption. Always best to let a parity sync run to completion after a power failure. The solution to power failure parity syncs is to get a UPS. Prevent the corruption in the first place and you won't have to deal with a sync.
  5. If you have recently rebooted unRAID or restarted your Cacti container, this can happen. I have worked around it so far by enabling Advanced View on the Docker screen and then force-updating the Cacti container. This causes it to go through an initial setup that reestablishes the DB connection properly. It's a bug in the image I'm working on. But I've had some life changes that tore me away from this project for a long time. I have every intention to come back to it and refine the image. To be clear, this workaround only works with my image: chestersgarage/cacti
  6. Old thread, but it seems to have not been resolved yet? I found an issue that was causing this on my server - with less impact than described here - but maybe this will help. Also, I wasn't using Cache_Dirs prior to posting this, but I just installed it through CA. I found that the disks which spun up unnecessarily had empty folders from shares I had moved to other disks. So any access to that share would spin up the unnecessary disk(s) as well as the share disks. In terminal, I made sure that any disks not (supposed to be) associated with the share had no traces of the share.
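For anyone wanting to check their own disks, here's a small sketch of that terminal check. The helper function name is mine, and on unRAID you'd point it at `/mnt/disk*` with your actual share name:

```shell
# List empty directories for a share across a set of disk mounts.
# Any hit here is a folder that can spin up a disk for no reason.
# Usage: find_empty_share_dirs <share-name> <disk-mount>...
find_empty_share_dirs() {
  share="$1"; shift
  for disk in "$@"; do
    [ -d "$disk/$share" ] && find "$disk/$share" -type d -empty
  done
}

# On unRAID, typically:
#   find_empty_share_dirs Movies /mnt/disk*
```

Review the output before deleting anything — an empty folder may just be a share you haven't written to yet.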
  7. It was too much of a rabbit hole, so I built my own container. It aims to be fully fault-tolerant, self-healing, and modular. And it's a bit smaller than the other leading brand, running on Alpine Linux rather than Ubuntu. I haven't figured out the unRAID template thing to make a Docker container work natively with unRAID yet, so please offer some tips on that. I read the docs on it a while back, and I need to go over them again. But here's how it works on the command line, for now. DISCLAIMER: This is a work-in-progress. It may have significant bugs.
  8. I should have posted this here instead:
  9. Turbo write: I turned on turbo/reconstruct-write partway through a 600GB move from one disk to another, and this is what I got in stats: roughly twice the write throughput. In the Cacti graphs, sdc is parity, sdh is the source disk, and sdi is the destination disk.
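For anyone wanting to try it, this is the commonly posted way to toggle the write method from the unRAID console (I'm assuming the usual `/root/mdcmd` path; recent unRAID versions expose the same tunable in Settings → Disk Settings):

```shell
# Turbo / reconstruct write: read all the other data disks and compute
# parity from scratch, instead of read-modify-write on the parity disk.
/root/mdcmd set md_write_method 1

# Back to the default read-modify-write method:
/root/mdcmd set md_write_method 0
```

The trade-off is that every data disk has to be spinning during writes, which is why it roughly doubled throughput for my disk-to-disk move.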
  10. Well, I have a great solution to the persistent storage, as long as you're in the Pacific timezone. I was able to grab the mysql data and configs from a container I had started, but not configured yet, and clean them up a bit. I created a nifty little starter package that you extract into your Cacti appdata folder before starting the container in unRAID. But the way the container sets the timezone when it starts up is not working with the starter data. I should be able to figure it out, though. I'm determined to either make this work the way I want, or just build my own.
  11. Awesome! Glad you got it working. I'm still messing around with mine. I had some things to do today, so haven't been working on it most of the afternoon. I want to stop MySQL before taking a copy of the initial data, just to make sure there are no data consistency issues. Then I'm going to try and package it all up so it's easy for others to use. The time zone thing seems really inconsistent. I'm not sure what to make of that.
  12. The issue with persistent storage is that the container starts up with the expectation that the mysql database already exists, because mysql is preinstalled. You can't point /var/lib/mysql at an empty folder. But I'm hoping I can point it at a folder with all the right mysql data in it already. I'm working right now on starting up a generic container, then making a copy of the mysql data location before anything is done to it. THEN, I'll attach that as a volume to a new container and configure from there. All said and done, I'd like to offer the "starter pack" as a download so others can use it.
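The copy-then-attach idea sketches out roughly like this. The `docker` steps in the comments are the standard create/cp workflow; the `seed_appdata` helper is hypothetical, just showing the "only seed an empty folder" guard so an existing database is never clobbered:

```shell
# Grab pristine MySQL data from a freshly created (never started) container:
#   docker create --name cacti-seed chestersgarage/cacti
#   docker cp cacti-seed:/var/lib/mysql ./mysql-starter
#   docker rm cacti-seed

# Seed an appdata folder from the starter copy, but only if it is empty.
seed_appdata() {
  src="$1"; dst="$2"
  mkdir -p "$dst"
  if [ -z "$(ls -A "$dst")" ]; then
    cp -a "$src/." "$dst/"
  fi
}

# Then attach the seeded folder as the database volume:
#   docker run -v /mnt/user/appdata/cacti/mysql:/var/lib/mysql ... chestersgarage/cacti
```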
  13. If you run it as documented, it's completely volatile, and you lose everything if you delete the container. unRAID always deletes containers when they are stopped. I'm still working (slowly) on addressing that in my installation. Last night I discovered the backup and restore commands are robust, though not enough to establish a persistent storage setup. But they do allow for easily not losing your data history. I've set up a cron task inside the container to run a backup every hour. And the backups are kept on persistent storage. Then when I have to restart the container or reboot, I just restore the latest backup.
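The hourly-backup idea looks roughly like this as a tiny script. The paths and the retention count are examples, not what the container actually ships:

```shell
# Tar up a data directory into a timestamped archive on persistent
# storage, and prune old copies so the folder doesn't grow forever.
# Usage: backup_data <data-dir> <backup-dir> [archives-to-keep]
backup_data() {
  data="$1"; dest="$2"; keep="${3:-24}"
  mkdir -p "$dest"
  stamp=$(date +%Y%m%d-%H%M%S)
  tar -czf "$dest/backup-$stamp.tar.gz" -C "$data" .
  # Keep only the newest $keep archives.
  ls -1t "$dest"/backup-*.tar.gz 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm -f
}

# Crontab entry inside the container (top of every hour):
#   0 * * * * /usr/local/bin/backup.sh
```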
  14. Every time parity check runs, whether manually or on the periodic schedule, I get the same result: Last check completed on Wed 08 Aug 2018 10:00:22 PM PDT (yesterday), finding 5 errors. In all cases, I've told it to correct errors, so they should have been fixed long ago. I've run the SMART tests on all array disks. None of the reports show any issues. What else can I check? Any ideas?
  15. Ah, good point! I was wondering whether what I had seen was the case - that the only disks accessed during a write are the target disk and parity. Makes sense. I wasn't aware of turbo write; I need to read up on that.