Everything posted by aim60

  1. For experimentation, I'm running two Plex dockers (one LinuxServer and one Binhex), using different appdata folders. Since you can't change the port that Plex listens on (32400), I needed to assign a different IP address to one of the dockers. So the Binhex docker is running on Custom:br0 with a fixed IP address. br0 is a Linux bridge, not a Docker bridge, so port mappings are not required. It's like Host mode. I'm still on 6.6.7, but that shouldn't matter.
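
    For anyone reproducing this, a minimal sketch of the equivalent docker CLI call; it assumes Unraid has already created the br0 custom network, and the IP address, appdata path, and image name are example values:

    docker run -d --name plex-binhex \
      --network br0 --ip 192.168.1.50 \
      -v /mnt/user/appdata/binhex-plex:/config \
      binhex/arch-plex

    With a fixed IP on br0 the container gets its own address on the LAN, so no -p port mappings are needed.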
  2. I believe that within Unraid you can only use VLANs for dockers and VMs. See if you can configure your switch port to send VLAN 10 traffic untagged, while sending the other VLANs tagged.
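
    For example, in Cisco-IOS-style syntax (a hypothetical port and placeholder VLAN numbers; other vendors' syntax will differ):

    interface GigabitEthernet0/1
     switchport mode trunk
     ! VLAN 10 leaves this port untagged (native); 20 and 30 stay tagged
     switchport trunk native vlan 10
     switchport trunk allowed vlan 10,20,30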
  3. Make sure all shares are set to COW. Any problem converting the system share containing the docker and libvirt images to COW?
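
    A quick way to verify, assuming the shares are on btrfs: lsattr shows a C attribute on NOCOW files and directories, so its absence means copy-on-write is in effect. For example:

    lsattr -d /mnt/cache/system    # 'C' in the output = NOCOW; no 'C' = COW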
  4. I'm planning on adding a new SSD to my cache pool, and was wondering if and how Unraid users test new SSDs. I give spinners a SMART long test, and 3 Preclear passes.
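
    For reference, the spinner routine expressed as commands (/dev/sdX is a placeholder):

    smartctl -t long /dev/sdX        # start the SMART extended self-test
    smartctl -a /dev/sdX             # check the self-test log once it finishes
    preclear_disk.sh -c 3 /dev/sdX   # three preclear passes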
  5. While testing the "Virtual Machine Wake On Lan" plugin, I discovered that I had two VMs using the same MAC address (sloppy editing of the XMLs). It would be useful if Fix Common Problems could scan the docker configs and VM configs for duplicate MACs, and the docker configs for duplicate IPs.
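
    In the meantime, a rough one-liner for a manual check, assuming the libvirt VM definitions live under /etc/libvirt/qemu:

    # print any MAC address that appears in more than one VM definition
    grep -ho "mac address='[^']*'" /etc/libvirt/qemu/*.xml | sort | uniq -d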
  6. Hi, thanks for the great work. Could you please add md5deep? Compiled version here: http://packages.slackonly.com/pub/packages/14.2-x86_64/system/md5deep/md5deep-4.4-x86_64-2_slonly.txz Thanks
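
    Until it's bundled, the package can be installed by hand with Slackware's installpkg; a sketch using the link above:

    wget http://packages.slackonly.com/pub/packages/14.2-x86_64/system/md5deep/md5deep-4.4-x86_64-2_slonly.txz
    installpkg md5deep-4.4-x86_64-2_slonly.txz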
  7. Safe Powerdown

    Request: Please implement Safe Powerdown so it can be activated from a console/telnet session. Reason: For the past several years I have been using only CyberPower PFC UPSs. They are reasonably affordable and designed for PFC power supplies. Unfortunately, they don't work correctly with APCUPSD when "Power Down UPS After Shutdown" is set to yes. I would like the ability to run CyberPower's UPS software in a Docker container or VM, and have it initiate a power down via a scripted telnet session to UnRAID. Thinking ahead. Still on UnRAID 5.
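
    A rough sketch of the kind of scripted session I have in mind, using expect; the host name and password are placeholders, and "powerdown" assumes the Safe Powerdown script is installed on the server:

    #!/usr/bin/expect -f
    # log in to the unRAID console over telnet and trigger a clean shutdown
    spawn telnet tower
    expect "login:"    { send "root\r" }
    expect "Password:" { send "secret\r" }
    expect "#"         { send "powerdown\r" }
    expect eof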
  8. +1 Feature Request - the option for Automatically Created (the way it works now) or Manually Created Only user shares. I would rather not have to start directory names with a "." to have unRaid ignore them. In reply to "Why not put those directories on a disk that is excluded from user shares?" - I keep the contents of each disk categorized by function. It's not a big deal either way, but the sysadmin in me would like to have the choice. I've been wanting to throw this one out there for years; I just chimed in now because I discovered that I'm not the only one who would like this. Tom, there are many other things with higher priority. I'm just requesting that you keep this on your list of things to consider for a rainy day when you have nothing else to do.
  9. dgaschk - One disk has 4 pending sectors, but they're the same 4 that have been there for years, and the parity check (with all new cables) shows no hardware errors. garycase - I will definitely look into checksumming the files. A backup server is also worth thinking about, as is segregating the really important files so a copy can be taken off-site. I've also been slowly coming to the conclusion that the only way forward is to do a correcting parity check. And I've realized that 3000 blocks with errors is only about 3MB of data, so damage may be minimal. Thanks, guys, for the input.
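
    For the checksumming, a minimal md5deep sketch (the paths are examples):

    md5deep -r /mnt/disk1 > /boot/checksums/disk1.md5      # record hashes for a disk
    md5deep -r -x /boot/checksums/disk1.md5 /mnt/disk1     # later: list files whose hashes are not in the recorded set (changed or new)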
  10. As a result of Black Friday purchases, I've been upgrading disks and retiring the oldest ones. I've had a few disk problems in the last 9 months, which turned out to be SATA cable related. My plan was to disturb things as little as possible, do all of the disk upgrades, and replace all of the SATA cables once things were stable. Bad decision.

    I replaced disk6, a ST31500341AS 1.5TB, with a 2TB WDC_WD20EARX-00PASB0, and initiated a rebuild. The result was many disk read errors on disk2. I canceled the rebuild. From the errors in the syslog I concluded that all of the errors were related to the SATA cables. I replaced all of the older SATA cables. While I was in the case, I also noticed that the power connector to disk2 was not fully seated, and fixed it. Before continuing, I successfully ran smartctl short self-tests on all disks.

    I re-initiated the rebuild on disk6. This time the result was many read errors on disk5, and unRAID marked disk5 as missing. The syslog again indicated to me that the errors were cable/power related. Disk5 still had one of the older SATA cables, and in hindsight it was on the loose side. So I replaced the remaining SATA cables in the system.

    At this point, I needed to establish confidence in the hardware. I re-installed the original disk6, and replaced super.dat with the one from before the first disk6 replacement. The array was set to not auto-start, and I powered up the hardware. I successfully read over 1GB from each disk with dd if=/dev/sdx of=/dev/null bs=65536 count=20000, then initiated a nocorrect parity check. The hardware seems stable. The results of the parity check were:

    49 sync errors within 1 second (housekeeping area?)
    1 sync error sometime later
    3000+ sync errors after sector 2930245632

    If my calculations are correct, the 3000+ errors all occurred within 16MB of the end of a 1.5TB drive (disk6). An fdisk of disk6 is attached.

    My question - since the parity disk reflects the rebuild of a 1.5TB disk6 onto a 2.0TB disk6, might the 3000+ errors all reflect the reiserfs housekeeping of increasing the size of the disk? Or do I have corrupted data? In other words, can I run a correcting parity check and be reasonably confident that I have no data corruption? I have no backups and would like as much as possible to avoid further corruption. Any suggestions on how to proceed would be greatly appreciated. I'm thinking that once things are stable, I'll run a reiserfsck on all of the data drives.

    An observation - anyone running a server without removable drive bays, who does a fair amount of moving/replacing drives, should strongly consider replacing their SATA cables regularly. The ones I just installed are Monoprice SATA3 cables, and they seem more secure than any other cables I've used.

    5.0.2RC1, C2SEE, Celeron 1400, 4GB, Corsair VX450, (1) SIL3132 PCIx SATA controller, Intel PCI NIC, 7 drives in total. Syslogs_etc.zip
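
    A quick sanity check of that boundary, assuming the reported position is a count of 512-byte sectors and the usual 2,930,277,168-sector size of a 1.5TB drive:

    echo $(( 2930245632 * 512 ))                 # 1500285763584 bytes - right at the 1.5TB mark
    echo $(( (2930277168 - 2930245632) * 512 ))  # 16146432 bytes - the errors span only the last ~16MB of the 1.5TB region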
  11. Same price on Amazon. http://www.amazon.com/Seagate-SATA-Cache-Drive-ST4000VN000/dp/B00D1GYO4S/ref=sr_1_1?ie=UTF8&qid=1385609553&sr=8-1&keywords=4tb+nas+hard+drive
  12. Hi Joe, I found a quirk. Running unRaid 5.0-rc16c and preclear 1.13: in the unRaid disk settings, set Enable Auto Start to No, then reboot. Run preclear_disk.sh -l. All disks, whether assigned to the array or not, are listed as available for clearing. Start the array: only the correct disks are listed for clearing. Stop the array: the correct disks are still listed.
  13. Quoting Joe L.: "Interesting... I don't doubt you, but I see no way for that to occur... (In other words, I'll have to test it myself.) If running with no option specified, the default will be what you've specified in the unRAID Settings page. The partition start is set prior to entering the cycle loop. It is unchanged (as far as I know) otherwise. Before I start my test, are you sure you have 4k-aligned as the default set on your server? (Please double-check, so I can duplicate your situation here.) Also, once the second cycle is complete, let me know what the output says. You might even run preclear_disk.sh -t /dev/sdc and let it tell you how the disk is partitioned. I'll be curious what it says."

    My reply: The default is 4k-Aligned. I'll run -t when the cycle is done. Also, the server isn't needed at the moment, so I'll repeat the 2-cycle test to verify. Any files that would be of use?

    Follow-up: I should have done more verification before posting. What I encountered was the bug that you fixed in version 1.9 (preclear defaults to 63-sector alignment on unRaid 5 with no -a or -A).
  14. Joe, running preclear_disk version 1.7 on unRaid 5.0 Beta6a. The default partition format is 4K-aligned. Called with "preclear_disk -c 2 /dev/sdc" (no -a or -A), cycle 1 ran with partition start 64, but cycle 2 is running with partition start 63. If it's significant, there are no disks assigned to the array.
  15. Quoting Joe L.: "You basically experienced resource contention (too much going on, too little memory for it all to happen at once). One of the preclear processes was using some resource the others needed, so they waited until it was free. You will benefit from the three parameters I added most recently that allow you to specify smaller block sizes when reading and writing, and a smaller number of blocks as well. Those parameters are:

    -w size = write block size in bytes
    -r size = read block size in bytes
    -b count = number of blocks to read at a time

    They are described in more detail in this post: http://lime-technology.com/forum/index.php?topic=2817.msg39972#msg39972"

    Thanks, Joe.
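
    A sketch of how those options combine on the command line; the sizes here are illustrative values, not recommendations:

    preclear_disk.sh -w 65536 -r 65536 -b 200 /dev/sdX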
  16. I have been burning in (2) ST32000542AS 5900RPM 2TB drives by running 2 copies of Preclear simultaneously. With version .9.8, each pass takes 30 hours! I manually spin down the array nightly from the standard web interface. During the 4th pass, pressing the Spin Down button caused the web interface to become completely unresponsive. I couldn't even launch the web interface from another PC. UnMenu worked fine, and I could access files via the disk shares or user shares. I gave up for the night. In the morning, all was well. The syslog showed a 24-minute delay between pressing Spin Down and unRAID trying to spin down the drives:

    Dec 3 20:51:09 Tower emhttp: shcmd (54): sync
    Dec 3 21:15:12 Tower emhttp: shcmd (55): /usr/sbin/hdparm -y /dev/sdg >/dev/null
    Dec 3 21:15:12 Tower emhttp: shcmd (56): /usr/sbin/hdparm -y /dev/sdc >/dev/null
    Dec 3 21:15:12 Tower emhttp: shcmd (57): /usr/sbin/hdparm -y /dev/sda >/dev/null
    Dec 3 21:15:13 Tower emhttp: shcmd (58): /usr/sbin/hdparm -y /dev/sdb >/dev/null
    Dec 3 21:15:13 Tower emhttp: shcmd (59): /usr/sbin/hdparm -y /dev/hdg >/dev/null

    There are 3 other examples in the log, while the simultaneous Preclears were running, where pressing Spin Down immediately spun down the drives. 4.5 Beta 11, md_num_stripes=5120, C2SEE, 4GB RAM, disks connected to the integrated ICH10.