
air_marshall

Members
  • Content Count

    25
  • Joined

  • Last visited

Community Reputation

0 Neutral

About air_marshall

  • Rank
    Member


  1. Does docker-compose persist after a reboot if you install it this way?
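     (For anyone finding this later: a minimal sketch of the usual workaround, assuming docker-compose was installed as a standalone binary. Unraid's root filesystem lives in RAM, so anything installed outside /boot is lost on reboot; re-installing it from /boot/config/go at each boot is the common approach. The version number below is just an example.)

         # in /boot/config/go -- runs at every boot
         COMPOSE_VERSION=1.27.4   # example version, adjust as required
         curl -L "https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" \
             -o /usr/local/bin/docker-compose
         chmod +x /usr/local/bin/docker-compose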
  2. So I copied the emulated contents of disk5 off the array onto a USB drive; no faults or errors occurred during this process. Having completed another extended SMART test on disk5 without issue, and given that there was no critical data on disk5 and I have no spare disk (nor am I willing to spend on one), I elected to rebuild to the same drive. This completed, but with errors concurrent with read errors from disk1.

     Why has this happened during the rebuild but not during the earlier copy of the same emulated data? Realistically, how much data are we talking about being corrupt? The latest diagnostics are attached detailing the sector errors, which I should add are now increasing with drive usage, so disk1 is definitely on the way out.

     Regarding the data on disk1: if parity data is not part of the corrupted rebuild data on disk5 (I doubt there is any way of confirming this), I should be able to either a) replace disk1 and rebuild without data corruption, or b) since I am happy to shrink the array, unassign disk1, copy the emulated data to another drive without data corruption, then proceed to shrink the array using the methods in the documentation. Should I run a non-correcting parity check first?

     There is some critical data on disk1; I have already copied that off to an external source with the drive still assigned and mounted, and there were no errors during this copy process.

     tower-diagnostics-20201121-1408.zip
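     (For reference, a quick way to keep an eye on the attributes behind those sector errors from the console; smartctl ships with Unraid, and /dev/sdX is a placeholder for disk1's device:)

         # watch the attributes that matter on a failing disk
         smartctl -A /dev/sdX | grep -Ei 'reallocated|pending|uncorrect'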
  3. OK, thanks. I'm still trying to comprehend the situation here. So in all cases I should address disk5 first, regardless of the SMART errors on disk1? And errors during the rebuild of disk5 may result from the SMART issues on disk1, is that correct? Therefore, if I retain disk5 in its current state, I may be able to rescue data from it if required? If disk5 is currently mounted, what is wrong with just copying all the data off it? Or is the data actually reconstructed from parity despite the mount? If I get a successful rebuild of disk5, what do I need to do to address disk1, given that I am happy to make it redundant?
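     (Background on why disk1's read errors can bleed into disk5: with single parity, every sector of an emulated or rebuilt disk is computed from all of the other disks, roughly:)

         # single-parity reconstruction of a missing disk's sector
         #   emulated(disk5) = parity XOR disk1 XOR disk2 XOR ... (every other data disk)
         # so any sector that cannot be read from disk1 makes the matching
         # disk5 sector unrecoverable

     (That is why reading /mnt/disk5 while emulated already exercises disk1, and why a full-disk rebuild, which touches every sector rather than just the allocated files, can surface errors that a file copy did not.)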
  4. Thank you for trying to help me. I think I'm struggling for drives in order to do this, and having just spent on 2x 4TB units I'm not inclined to get another. I have a spare 1TB 2.5" spinner that hasn't been used much, which I can use; at the moment I've been using it externally to copy the critical data off disk1 and disk5. I also have a spare, slow, old 500GB 2.5" spinner that used to be my cache before I upgraded to an SSD for that purpose. I can probably also free up disk9, a new 4TB unit. Should I attempt to use ddrescue before attempting a rebuild to disk5? I will set about copying everything off disk5 now to the external 1TB I have in the meantime. At least then I have it, so if there are any issues with a rebuild I might be in a better place.
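     (A minimal ddrescue sketch for cloning the suspect drive to a spare before any rebuild, assuming ddrescue is installed, e.g. via the NerdPack plugin; /dev/sdX and /dev/sdY are placeholders for the source and destination, and neither should be part of the started array while this runs:)

         # first pass: skip the hard-to-read areas and grab everything easy
         ddrescue -f -n /dev/sdX /dev/sdY /boot/ddrescue.map
         # second pass: retry the bad areas a few times using the same map file
         ddrescue -f -r3 /dev/sdX /dev/sdY /boot/ddrescue.map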
  5. I'm not sure what you mean. Disk 5 still shows as disabled, but all data remains available due to emulation; /mnt/disk5 is also listed and available. There is only 300GB on disk 5, so I could transfer it off to a USB drive before attempting a rebuild to the same 4TB drive. Regarding issues with the rebuild, could these result from the SMART errors that have also appeared on disk1? Thank you for trying to help me understand this situation.
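     (A sketch of that copy, assuming the USB drive is mounted via Unassigned Devices; the mount point is a placeholder:)

         # copy the emulated disk5 contents to the USB drive
         rsync -avh --progress /mnt/disk5/ /mnt/disks/usb1/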
  6. Thanks @JorgeB. The extended SMART passed OK and I did a parity swap followed by a rebuild, and all was well. I now have bigger issues! See my other post! Eeeeeek
  7. The Issue - 1 Disk Disabled and 1 Disk with SMART Errors...

     Last night, when for all intents and purposes the array wasn't doing anything, I received the following push notifications:

         Tower: Warning [TOWER] - array has errors Array has 1 disk with read errors
         Tower: Alert [TOWER] - Disk 5 in error state (disk dsbl) WDC_WD40EZRX-00SPEB0_WD-WCC4ECKCLDV9 (sdb)

     Here's my first mistake: I didn't save the syslog before a reboot 😞 [idiot]. I went to the webgui with the intention of checking disk 5's SMART report, however when clicking on the disk the system was unable to retrieve any attributes. The read error count was up at 80. I checked the "Disk Log Information" and it did list some errors, IIRC related to "ata hard resets and restarting the link"; there was more than one. At this point I felt a reboot would be useful to see if the connection to the disk could be re-established.

     Upon reboot the disk was still disabled (red x) but connected, and settings and reports were available. I ran a short SMART test (an extended test had been run only a week or so prior; see the background info below if you feel it's relevant!). It was, and I believe remains, clear. However, having read the wiki, I believe I can only re-enable that disk by rebuilding to it; is that correct?

     I saw the "Read Check" option was now available and believed the system was prompting me to do one, so I ran one. I now realise it's the only option available when a drive is disabled. Nevertheless, off it went. A few hours into the "Read Check" disaster struck, with the following notifications:

         Tower: Warning [TOWER] - array has errors Array has 1 disk with read errors
         Tower: Warning [TOWER] - reported uncorrect is 47 SAMSUNG_HD154UI_S1Y6JDWZ504027 (sdg)
         Tower: Warning [TOWER] - current pending sector is 18 SAMSUNG_HD154UI_S1Y6JDWZ504027 (sdg)

     WTF! But it hasn't disabled the disk (maybe because I already have a disk disabled, I don't know). I have since run a short and an extended SMART test on that drive; the reports should be in the diagnostics file attached. It definitely has issues.

     I have been through what's on those drives; thankfully most of it, but not all, is non-critical data. I am in the process of copying as much critical data off the array as I can, however I am unsure as to the data integrity of the stuff on Disk 1, and Disk 5 is emulated. I know the array is currently unprotected.

     What are my options here? In what order do I need to go about things to minimise further risk to the data? The array is big enough to cope with dropping disk 1 altogether, and I'm happy to shrink the array when all this is done. I'm just out of my depth here.

     Background Info

     I have recently completed a re-casing and upgrading project on my long-time Unraid server. Earlier in the year I moved it from an HP N54L with a Mediasonic eSATA Probox to a Lenovo P300 with the same Mediasonic eSATA Probox. There were no issues with this phase; it was stable for a good 6 months. In the last couple of weeks I re-cased and consolidated the whole thing into a Fractal Design Node 804 case, doing away with the Mediasonic eSATA box. Again, no real issues... APART FROM DROPPING A STACK OF 4 DRIVES ON THE FLOOR!!! ARHGHGHGHG.

     When I got the system back up, one of my 2TB drives had a hard mechanical failure from the drop. It was just loudly clicking under power and the system couldn't initialise the drive. It would eventually boot; obviously the drive was disabled, but the array came up anyway.

     I had a new 4TB drive that I was adding to the array anyway, so I pre-cleared it and attempted to rebuild the disabled drive to it. However, it was slightly bigger than my 4TB parity (as that was a shucked drive), so I had to do a parity swap, followed by a data rebuild to the ex-parity drive. That drive is Disk 5, the first one in question here.

     After I had finished the data rebuild and run an extended SMART test on the drive, I left the array stable for a couple of days, in which time I believe it did a parity check, finding 0 errors. I then set about converting 4 drives from ReiserFS to XFS, as I had a mixed array and wanted to consolidate. A lengthy process: using unBALANCE to clear them down, then reformatting and moving data about to clear the next one. Some of those drives were my oldest, so during that process they probably had the most use they'd had for ages. Either way, I got through it without too much bother.

     I then added an additional 4TB drive to the array just the other day to expand it further, which did involve moving some cables about, but the system had been up for a good few days before the issues described at the top of this post began.

     tower-diagnostics-20201119-2244.zip
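     (Lesson learned for anyone else: the syslog lives in RAM on Unraid, so grab it, or a full diagnostics zip, before rebooting:)

         # copy the live syslog to the flash drive so it survives the reboot
         cp /var/log/syslog /boot/syslog-$(date +%Y%m%d-%H%M).txt
         # or capture a full diagnostics zip (written to /boot/logs)
         diagnostics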
  8. Hi Guys and Girls, looking for some advice/assistance here after a good thorough search on the issue.

     Long story short: I dropped a data disk spinner during a re-case and it seems the drive is now bad. Horrendous clicking on power; it almost results in boot failure, but we eventually get there and the drive is just missing. I haven't started the array since, due to fear of data loss given the findings below.

     It's highly likely that the drives are now plugged in to different controllers than they were before. I had 4 drives in a Mediasonic ProBox via eSATA from one of the PCIe controller cards. All drives are now internal, via mobo SATA and 2x different PCIe SATA cards. However, the 2 drives in question are now on the mobo SATA.

     No worries, I thought, I have a new (pre-cleared) Toshiba X300 4TB for the array anyway, so I'll just use that as a replacement drive instead. WRONG - parity drive must be biggest in system... Wait, my parity drive is a WDC 4TB, how can this be... Anyway:

         Disk /dev/sdb: 3.65 TiB, 4000753476096 bytes, 7813971633 sectors
         Disk model: WDC WD40EZRX-00S
         Units: sectors of 1 * 512 = 512 bytes
         Sector size (logical/physical): 512 bytes / 4096 bytes
         I/O size (minimum/optimal): 4096 bytes / 4096 bytes
         Disklabel type: gpt
         Disk identifier: F60C7E9F-92E4-4F57-86B1-1E6FB3E1DB77

         root@Tower:~# hdparm -N /dev/sdb
         /dev/sdb:
         SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 04 51 40 00 21 04 00 00 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
         SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 04 51 40 01 21 04 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
         max sectors = 7813971633/1(1?), HPA setting seems invalid (buggy kernel device driver?)

         Disk /dev/sdf: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
         Disk model: TOSHIBA HDWE140
         Units: sectors of 1 * 512 = 512 bytes
         Sector size (logical/physical): 512 bytes / 4096 bytes
         I/O size (minimum/optimal): 4096 bytes / 4096 bytes
         Disklabel type: dos
         Disk identifier: 0x00000000

     I'm just running an extended SMART test on the WDC to check it, but want to seek guidance in the meantime. I'm assuming the WDC drive is short on size / sector count, based on a couple of reports from friends of their own 4TB drive size stats. As far as I can remember the WDC drive has never been on a Gigabyte board, but it may have been shucked from an external drive (I have no record of purchase though).

     What is my best option here to ensure no data loss and minimise downtime?

     1. Accept the difference and carry out a parity swap.
     2. Investigate and try to fix the size difference, then add the Toshiba as the replacement drive for the failed unit (with your help).
     3. Do something completely different and much more robust, guided by all you smarter experts on this forum.

     First time facing this issue. Any more info required, please let me know. TIA, Dan

     tower-diagnostics-20201107-1353.zip
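     (For the record, the way an HPA-style truncation like this is usually inspected and, if confirmed, removed is with hdparm; this is a sketch only, since the sector count below assumes the WDC's native size matches the Toshiba's 7814037168, and getting it wrong can make things worse, so don't run it without someone checking the diagnostics first:)

         # show current vs native max sector count
         hdparm -N /dev/sdb
         # permanently restore the full native size ('p' makes it persistent)
         hdparm -N p7814037168 /dev/sdb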
  9. I'm wondering this too. Looking at the master docker info, you'd have to specify some more variables. I'm getting InnoDB and subsequent MySQL errors on install, so I can't yet get it up. Everything is stock as per the video, apart from eth0 set to a static IP.

         root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='shinobi_pro' --net='eth0' --ip='192.168.50.200' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TCP_PORT_8080'='8080' -e 'ADMIN_USER'='admin' -e 'ADMIN_PASSWORD'='password' -v '/mnt/disks/cctv/':'/opt/shinobi/videos':'rw,slave' -v '/mnt/user/appdata/shinobi_pro':'/config':'rw' -v '/mnt/user/appdata/shinobipro/database':'/var/lib/mysql':'rw' -v '/mnt/user/appdata/shinobipro/customautoload':'/opt/shinobi/libs/customAutoLoad':'rw' 'spaceinvaderone/shinobi_pro_unraid:latest'
         a8aa5c000266e29966658067e0cdc657becf86d88cac66537f73f209a2e4b39f
         The command finished successfully!

     Log result after install:

         Copy custom configuration files ...
         cp: cannot stat '/config/*': No such file or directory
         No custom config files found.
         Create default config file /opt/shinobi/conf.json ...
         Create default config file /opt/shinobi/super.json ...
         Create default config file /opt/shinobi/plugins/motion/conf.json ...
         Hash admin password ...
         MariaDB Directory ...
         Installing MariaDB ...
         Installing MariaDB/MySQL system tables in '/var/lib/mysql' ...
         2019-12-28 12:46:49 0 [ERROR] InnoDB: preallocating 12582912 bytes for file ./ibdata1 failed with error 95
         2019-12-28 12:46:49 0 [ERROR] InnoDB: Could not set the file size of './ibdata1'. Probably out of disk space
         2019-12-28 12:46:49 0 [ERROR] InnoDB: Database creation was aborted with error Generic error. You may need to delete the ibdata1 file before trying to start up again.
         2019-12-28 12:46:49 0 [ERROR] Plugin 'InnoDB' init function returned error.
         2019-12-28 12:46:49 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
         2019-12-28 12:46:50 0 [ERROR] Unknown/unsupported storage engine: InnoDB
         2019-12-28 12:46:50 0 [ERROR] Aborting
         Installation of system tables failed! Examine the logs in /var/lib/mysql for more information.
         The problem could be conflicting information in an external my.cnf files. You can ignore these by doing:
             shell> /usr/bin/mysql_install_db --defaults-file=~/.my.cnf
         You can also try to start the mysqld daemon with:
             shell> /usr/bin/mysqld --skip-grant-tables --general-log &
         and use the command line tool /usr/bin/mysql to connect to the mysql database and look at the grant tables:
             shell> /usr/bin/mysql -u root mysql
             mysql> show tables;
         Try 'mysqld --help' if you have problems with paths. Using --general-log gives you a log in /var/lib/mysql that may be helpful.
         The latest information about mysql_install_db is available at https://mariadb.com/kb/en/installing-system-tables-mysql_install_db
         You can find the latest source at https://downloads.mariadb.org and the maria-discuss email list at https://launchpad.net/~maria-discuss
         Please check all of the above before submitting a bug report at http://mariadb.org/jira
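     (In case it helps anyone hitting the same thing: error 95 is "operation not supported", and a common cause on Unraid is InnoDB trying to preallocate its files on the /mnt/user FUSE layer. Worth trying the database volume on a direct cache or disk path instead; the path below is a placeholder:)

         # map the database to a real disk path rather than /mnt/user, e.g.:
         -v '/mnt/cache/appdata/shinobipro/database':'/var/lib/mysql':'rw'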
  10. I believe they've moved it in the latest betas, but that should also have been incorporated into the latest stable, which isn't available as part of this container yet (still!). I copied the dzVents folder from the Windows install and fixed the permissions; it seems to work fine for most things, BUT if I try to update a virtual temp sensor with a script I get reboots every minute. This can't be replicated by others on the Domoticz forum, so I can only assume it's something weird with the container... I'm debating moving it to a dedicated Pi as my house gets more dependent on it and I need to ensure the WAF!
  11. Forgive my ignorance, but can I just use the branch on GitHub you created anyway, without the tag? If so, how?
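     (For context, the generic way to use a GitHub branch that has no published image tag would be to build it locally and point the container template at the result; the repo URL and branch name here are placeholders:)

         # build the image yourself from the branch
         git clone -b some-branch https://github.com/linuxserver/docker-domoticz.git
         cd docker-domoticz
         docker build -t local/domoticz:some-branch .
         # then set the container's Repository field to local/domoticz:some-branch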
  12. Any news on the ETA for the stable-3.8153 tag? Many thanks!
  13. Can we get a tag for the latest stable, 3.8153? I'm currently using the 'latest' tag but experiencing some of the bleeding-edge bugs, which is ruining the WAF; however, I need functionality that was added between the stable tag on the repo and the latest stable release.
  14. Is this docker actually following the develop branch? Settings > Updates shows:

         0.2.0.778 - Jun 20 2017 develop Installed

      But the develop branch is being updated all the time, and the docker was last built 5 days ago by linuxserver.io.
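      (One way to sanity-check what's actually installed, assuming the standard docker CLI on Unraid:)

         # when the local image was built
         docker inspect --format '{{.Created}}' linuxserver/domoticz
         # force a re-pull to pick up a newer build of the same tag
         docker pull linuxserver/domoticz:latest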
  15. I don't, so it's worked fine. I did have a look for a "config" file of some sort to tweak the ports, à la SABnzbd etc., but couldn't find one for Domoticz. Need to get my teeth into events, Blockly and scripting now. Great use of a box that's already on 24/7, using it for home automation. Props for the docker!