Ambrotos

Everything posted by Ambrotos

  1. No. Network interface bridging and Docker container bridging are two different things, and one isn't really dependent on the other. Network interface bridging causes your unRAID box to perform Layer 2 Ethernet frame forwarding between the two bridged interfaces; you could conceptually think of it as turning your NAS into a simple unmanaged Ethernet switch. Docker container bridging has to do with the way ports are mapped and exposed between a container and the host system. If you configure a container for "host", then all ports used by any services in the container are exposed on the host. If you configure it for "bridge", then only the ports explicitly mapped in your configuration are exposed to the host. Honestly, I think the naming conventions here are a bit misleading; "host" and "bridge" mean something different in the context of virtual machines, and I've spent a lot more time with ESXi than I have with Docker. -A
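A minimal CLI sketch of the distinction described above (container/image names are illustrative, and the commands are printed rather than executed, since the point is the flags, not the deployment):

```shell
#!/bin/sh
# Sketch: the same nginx container started in Docker's two network modes.
bridge_cmd='docker run -d --network bridge -p 8080:80 nginx'
host_cmd='docker run -d --network host nginx'
echo "bridge: $bridge_cmd"  # only host port 8080 is reachable, mapped to container port 80
echo "host:   $host_cmd"    # every listening port in the container binds directly on the host
```

In bridge mode, any port not named in a -p mapping is unreachable from outside the container; in host mode, -p mappings are ignored because the container shares the host's network namespace, so there is nothing to map into.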
  2. Figured I'd respond just for the sake of completeness. That was my solution as well. For lack of other options, I wound up replacing the SASLP-MV8 with an IBM M1015 (flashed to IT mode) back in early January. It's been humming along nicely ever since. -A
  3. I'm having trouble with a new system I'm building for a friend. I've narrowed the problem down to some combination of the controller I'm using (a Supermicro AOC-SASLP-MV8) and the inclusion of a 12-bay SAS expansion chassis. With everything connected but no drives in any of the bays, everything boots properly, and I can even see via dmesg that the controller has successfully detected and discovered the SAS expander. However, as soon as I put a single drive (1TB WD Red) into the chassis, there's a 50/50 chance that I'll get the attached error. This can happen if I boot the machine with a drive already installed, or if I boot with no drives and then insert one, in which case the error occurs immediately after insertion. Unfortunately I can only take photos, since when the error occurs the system hard crashes, so I can't even grab syslog or anything. Some specifics:
     - Running unRAID 5.0.4 Plus
     - AOC-SASLP-MV8 upgraded to FW 3.0.1.21
     - unRAID boots fine with drives connected directly to the mobo's built-in SATA controller
     - Just to test, I booted from an Ubuntu 13.10 live CD and was able to insert and mount a drive without a problem. This would suggest to me it's maybe a problem in the version of libsas or mvsas included with unRAID... ??
     Unfortunately I don't have another SAS controller to test with the expansion chassis, nor do I have an SFF-8087 to SATA cable to test connecting the drives directly to the controller. Though, given how well these controllers seem to be regarded on this forum, I imagine it's not a problem with the controller itself... Does anyone have any suggestions? Recommendations? Can I provide any additional information that would be helpful? Is there a way to make syslog persistent across reboots so maybe we could see the whole debug trace? Thanks in advance for any help, -A
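On the question of making syslog persistent across a hard crash: one generic approach (a sketch and an assumption on my part, not a built-in unRAID feature; the paths are illustrative) is to periodically snapshot the live log onto the flash drive, which survives reboots:

```shell
#!/bin/sh
# Snapshot the live syslog onto the flash drive so that even after a hard
# crash the last copy taken is still readable on the next boot.
snapshot_syslog() {
  src=${1:-/var/log/syslog}        # live log (lost on crash, it lives in RAM)
  dst=${2:-/boot/logs/syslog.txt}  # flash drive location (persists)
  mkdir -p "$(dirname "$dst")"
  cp "$src" "$dst" && sync         # sync flushes the write out to the flash
}
```

Run from cron or a background loop every 30 seconds or so, the last snapshot taken before the crash is what you'd attach to a forum post.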
  4. PCRx is correct. I install my plugins (e.g. sabnzbd and sickbeard) to /mnt/cache/.plugins and it doesn't create a share. -A
  5. Just a quick update. Over the weekend I ran reiserfsck --rebuild-sb, reiserfsck --check, and reiserfsck --rebuild-tree on both /dev/md4 and /dev/md6. The issue seems to have been corrected. Only one file on each drive was found to be corrupted and removed from the trees. Both files are easily re-downloaded. As to the original cause of the failure, we had a city-wide power outage last week that lasted overnight -- long enough for my UPS to fail. I'm suspecting that as the culprit, but I'll continue to monitor, and if the same drives experience problems again I plan to replace them. Anyway, thanks for the advice everyone. -A
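For anyone finding this thread later, the sequence above amounts to the following (a sketch only: reiserfsck rewrites on-disk structures, so it should only run with the array in Maintenance mode; here the commands are printed rather than executed):

```shell
#!/bin/sh
# Print the repair sequence for each affected device: rebuild the
# superblock first, verify, then rebuild the tree only if --check says to.
repair_plan() {
  for dev in "$@"; do
    for step in --rebuild-sb --check --rebuild-tree; do
      echo "reiserfsck $step $dev"
    done
  done
}
repair_plan /dev/md4 /dev/md6   # the two devices from this thread
```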
  6. The card has been working in unRAID quite well for a couple years now. Since long before these reiserfs errors began anyway. When I first noticed that opcode error in the log, I contacted LSI and their official position is "This error can be safely ignored". If it were an issue with the card, why would the file systems on drives md4 and md6 be corrupted, but not on drives md1-3,5,7-12? -A
  7. Ah, right. Well, good thing I decided not to take that route then, huh? Here are the syslogs. -A syslogs.zip
  8. I see the following error in my system log, occurring continuously, once or twice a second:
     reiserfs_read_locked_inode: i/o failure occurred trying to find stat data
     The error is logged on both md4 and md6. I did a bit of forum research, which led me to this article: http://lime-technology.com/wiki/index.php?title=Check_Disk_Filesystems So, following the instructions, I stopped the array, started it in maintenance mode, and ran reiserfsck --check on both /dev/md4 and /dev/md6. In both cases the check resulted in the following:
     bread: Cannot read the block (244190637): (Invalid argument).
     reiserfs_open: Your partition is not big enough to contain the filesystem of (244190637) blocks as was specified in the found super block.
     Failed to open the filesystem. If the partition table has not been changed, and the partition is valid and it really contains a reiserfs partition, then the superblock is corrupted and you need to run this utility with --rebuild-sb.
     The article I linked to above explicitly states not to execute the --rebuild-sb operation without advice from an expert, so I'm here seeking that advice. What should I do? Is there any other information that would be helpful? Just for comparison I ran a --check on /dev/md5 to see what the results were, and it came back with "No corruptions found". This suggests to me that there is a legitimate problem with both /dev/md4 and /dev/md6. If it were only one drive having this problem I would just pull it out, replace it with another drive, and let parity rebuild a proper filesystem. However, because I have two questionable drives, I'm not confident in the array's integrity. I recently upgraded to unRAID Server Pro 5.0 final. Previously I had been running one of the RCs. I can't say for sure whether this problem was introduced as a result of the upgrade, or whether I'm just noticing it now and the upgrade is irrelevant. Either way, I haven't noticed any array data loss yet.
Thanks for any help you can provide, -A
  9. I know what you mean. I'm eagerly awaiting its release. I know he writes this stuff in his spare time, but I hope he hasn't lost interest. It's been a while since he said "final testing". -A
  10. It's a Supermicro X7SBL motherboard with a Core 2 Quad Q6600 and 8GB of ECC DDR2. Is anyone else who's having problems with their LSI cards using an expander? I've got a 12-bay enclosure/expander; maybe that's having an impact. -A
  11. I did 5 spin down/up cycles each at 1 minute intervals. Still don't see any errors. -A
  12. I'm running beta14 on an LSI 9690SA with 5 drives. It's not dev/test because I wasn't aware of any issues with LSI controllers until I started reading this thread. All of my drives are configured to spin down, and I've never seen a single error on any of them. I realize this isn't what most people are reporting. I'm not sure what the difference could be. Anyway, there you go: someone who's successfully run beta14 on an LSI controller. -A
  13. LOL. I read dgaschk's message first and thought to myself "pirates? wth, I paid for unraid..." Anyway, I stand chastised. I started off OT though, so that's gotta count for something... -A
  14. That's the thing though, I WANT my user shares to use the cache drive. I just don't want folders created in the cache drive's root to show up as shares. I still don't think what I'm describing is intended behaviour. If I create a folder via CLI in /mnt/cache, then it appears as a share. However, it does not have an entry in /boot/config/shares, and I cannot delete it properly via the user shares menu on the web GUI. You select delete, click Apply, and it just kicks you back to the user shares menu. Anyway, it doesn't matter. I've avoided the problem by naming the folder with a leading period. I hadn't noticed the "only" option in the user share config screen until you mentioned it. That's interesting, but I doubt I'll use it. My cache drive is intentionally rather small. -A
  15. I'm not sure I know what you mean... the cache drive is configured to Exported = no, and in the Samba share settings menu I've got 'cache' listed in the excluded disks. I would have thought that would prevent anything on the cache drive from being shared.
  16. I didn't do SSD, though I considered it. I absolutely love my Vertex 3 for my desktop machine, but something about putting them in servers still makes me uncomfortable... never claimed it was rational. I have an LSI 9690 RAID card, so I put 4 Western Digital Black 10k RPM drives in a RAID 5 array and use that as my primary ESXi datastore. All of the drives used by the unRAID VM (3x2TB and 1x250GB cache) are direct mapped via RDM. So far, I've been very happy with the performance. -A
  17. I've noticed occasionally that the system also creates a user share for folders in the root of the cache drive. I think this might be a bug though (5b14), since it doesn't do it every time, and you can't delete the folder from the web GUI once it's been created. Anyway, if you prepend the folder name with a period, it's ignored. So I just installed Sabnzbd, Sickbeard, and Logitechmediaserver in separate folders within /mnt/cache/.packages/. Sabnzbd downloads to /mnt/user/Downloads. Sickbeard processes the files from there and moves them to /mnt/user/TV. -A
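The layout described above, sketched as commands (the leading period is what keeps unRAID's user-share scanner from picking the folder up; the demo uses a scratch path standing in for /mnt/cache so it can be tried anywhere):

```shell
#!/bin/sh
# Create the hidden package tree on the cache drive. Folders whose names
# begin with a period don't get auto-created as user shares.
CACHE=${CACHE:-/tmp/cache-demo}   # stand-in for /mnt/cache in this sketch
mkdir -p "$CACHE/.packages/sabnzbd" \
         "$CACHE/.packages/sickbeard" \
         "$CACHE/.packages/logitechmediaserver"
```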
  18. Just a note for anyone trying to get this board to work with VMWare ESXi 5.0. This board is technically capable of VT-d (DirectI/O), but I struggled forever to get ESXi 5.0 to recognize it. Very long story short, the latest publicly available BIOS (1.2a) is broken and VT-d support doesn't work. After several emails, cases, and phone calls, I finally got someone at SM to send me their latest beta BIOS for the MB which seems to work very well, and completely fixes the VT-d issues I was having. If anyone's having similar problems with this MB and is interested in the BIOS, just shoot me a PM. Andrew
  19. Yeah, he helped me with some troubleshooting some time around Dec 30th or so. He's alive -A
  20. Guys, finally some good news! Tom got involved and was able to identify the issue. During boot there's a point at which it tries to mount non-root filesystems (e.g. /boot). In order to give the USB system time to initialize, there's a 5 second sleep inserted before the mount is attempted. This pause apparently wasn't enough for my MB. The mount command resulted in "device does not exist", and the startup script continued on. Tom gave me a build which actually loops until it detects that USB has completed initialization. And now my system finally boots properly. Thanks for all your assistance. It's nice to know there's such a helpful community behind this product. Andrew
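My reconstruction of what the fixed startup script does (a sketch under stated assumptions, not Tom's actual code): poll until the device node appears, rather than assuming a fixed 5-second sleep is long enough.

```shell
#!/bin/sh
# Wait until a device node exists, up to a timeout in seconds, instead of
# sleeping a fixed interval and hoping USB init has finished.
wait_for_device() {
  dev=$1
  timeout=${2:-60}
  while [ "$timeout" -gt 0 ]; do
    [ -e "$dev" ] && return 0   # device showed up; safe to mount now
    sleep 1
    timeout=$((timeout - 1))
  done
  return 1                      # gave up; device never appeared
}
# On the real system this would gate the mount, something like:
#   wait_for_device /dev/disk/by-label/UNRAID 60 && mount LABEL=UNRAID /boot
```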
  21. The system is a Lenovo M90p, details here: http://support.lenovo.com/en_US/product-and-parts/detail.page?DocID=PD005445 I know, probably not your typical server hardware. I'm using it because I had it handy, it's compact, low wattage, more than enough horsepower, and with the low-profile SATA PCI card I have (not yet installed) I can get about 6TB of storage out of it. The flash is in a front-panel USB port, so it's connected via the MB headers. Since you asked, I've tried using one of the rear ports. No change. Andrew
  22. OK, so... updates: I tried again with an 8GB Patriot XT, with the exact same results. No dice. I also (just on a whim) tried booting unRAID 4.7, since I realize the current 5.0 is technically a beta. No difference there either. The BIOS is now the most current available from Lenovo. I poked around in the BIOS; there's not much configuration available for USB. You can disable individual USB ports, or enable/disable Legacy support. I tried disabling Legacy, but that just made my keyboard fail to detect. No dice here either. Tried this too. No difference. It seems to me that once we get to "last resort", we're out of ideas. Am I wrong? What are the next steps here? If I were to give direct telnet access to my unRAID server, would someone with more experience with the software be willing to log in and poke around? Maybe I'm missing something simple? Andrew
  23. It's the first I've mentioned trying the flash in another machine because I only did the test yesterday afternoon. I'll set up another flash drive and give it a try today. I almost hope it doesn't work either though, because as I mentioned I've already purchased a Plus license for this one. I got the impression that licenses were non-transferable. Do you think Tom would be amenable to re-generating my key if a different flash works? As I understand it, the HP USB utility is used primarily when you're unable to create a bootable flash drive using the standard Windows format utility. This isn't my problem. My flash drive is bootable, in that my PC recognizes the flash drive's MBR and properly executes the boot sector. In other words, Linux loads. Just because it doesn't load properly doesn't mean my flash stick isn't "bootable". Anyway, I'll report back on the flash stick and BIOS tweaks later today. Andrew
  24. This IS a clean install. As I said in my original post, all I've done is followed these installation instructions, and nothing else. http://lime-technology.com/support/unraid-server-installation As a test, I took the flash to my father's house and booted his existing UNRAID server with my flash stick. It works perfectly fine in his machine. It boots, mounts the flash, starts emhttp. The problem is not with my flash. The problem is a result of some compatibility issue between my server hardware and the UNRAID software. Is there nothing in the syslog I attached to my first post that might give us a clue? If not, can I increase the logging level to provide more information? Is there another log that would be helpful? Andrew
  25. I unplugged all but the flash drive. Exact same behavior.
      Tower login: root
      Linux 3.1.1-unRAID.
      root@Tower:~# ls -l /dev/disk/by-label/
      total 0
      lrwxrwxrwx 1 root root 10 2011-12-30 21:52 UNRAID -> ../../sda1
      root@Tower:~#