gubbgnutten

Everything posted by gubbgnutten

  1. ...and for clarity, please include what you mean by "getting hacked" in your answer to jonp's question. If a service is insecure, it simply shouldn't be exposed to the Internet. Depending on the situation, I might possibly share such a service with select friends using OpenVPN or SSH tunnels. If I had a (supposedly) secure service exposed to the Internet and got lots of malicious access attempts from Chinese IP ranges, I would have my firewall block all known Chinese IP ranges from accessing the service, since I don't expect any legitimate connections from China. Blocking those IP ranges wouldn't actually improve security that much, of course, but it wouldn't make it any less secure, and my logs would probably get slightly less cluttered. Win.
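     For illustration, a rough sketch of how that kind of country-wide block could look with ipset and iptables (assuming an ipset-capable Linux firewall; cn-ranges.txt is a placeholder for whatever aggregated CIDR list you trust, one range per line):

       # Create a set to hold the CIDR ranges.
       ipset create china_ranges hash:net
       # Populate it from the placeholder list file.
       while read cidr; do ipset add china_ranges "$cidr"; done < cn-ranges.txt
       # Drop traffic from those ranges before it reaches the service.
       iptables -I INPUT -m set --match-set china_ranges src -j DROP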
  2. Hey, limetech, will -rc1 be out anytime soon? Actually, the forum is still telling me "Latest 64bit beta release: v6.0-beta15" despite this thread.
  3. The powerline part is most likely your number one limiting factor. As for 1080p streaming, not so ironic. It doesn't require that much bandwidth.
  4. Sure. In this case, more data has been transferred than the cache drive can hold, so speeds would certainly be lower towards the end with parity. But I have a hard time imagining that these volumes of data will be transferred very often. No biggie. More worrying, though: if the web UI in the screenshot is up to date, the cache drive isn't being used for the destination "movies". The amount of used space looks more like just a Docker image (around 10G). No user share write caching... I typically use FTP for transfers, so I can't comment on the overhead and transfer rates for SMB. Still, you might be slightly limited by the speeds of the HDD you are reading from and/or the ones you are writing to, but it isn't a life-changing difference in speed. From a network perspective it's pretty good. With gigabit networking you can't go much faster, so I'd say "just enjoy it" if it hadn't been for two things... 1. Speed tests without proper parity are not very interesting to me; they're only good for the initial transfer, not real-life use. 2. The cache drive doesn't seem to be in use for the transfer. If the web UI in the screenshot was up to date, you should probably double-check your user share cache drive settings.
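     If you want to double-check from the console rather than the web UI, something along these lines should do (assuming the cache drive is mounted at the usual /mnt/cache):

       # Run before and during a transfer to a cached share.
       # If "Used" doesn't grow, the share isn't writing to the cache drive.
       df -h /mnt/cache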
  5. I am also a bit annoyed by the lack of an ever-present power-down button in the web UI and the need for multiple steps, but just a bit. Pushing the power button works just fine for me (thanks LT for fixing that bug a bunch of betas ago!). If you need a workaround for quick and easy power-down via a browser, bookmark http://tower/update.htm?shutdown=apply (replace tower with the correct server name).
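     And if you'd rather script it than bookmark it, the same request can be sent from the command line (same caveat, replace tower with your server name):

       # Triggers the same power down as the bookmark above.
       curl "http://tower/update.htm?shutdown=apply"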
  6. That step #6 is probably a better choice than some other solutions, but if it gets to that - please give recycling a chance.
  7. I guess the easiest way to see if the cache drive is properly set up is to write data to a user share and check in the web UI whether the used space on the cache drive went up. As for configuration, you might have to check the individual shares as well and make sure that they are set to use the cache drive... I had that problem once: added a cache drive but didn't actually have it used. That was a long time and many versions ago, though. Anyways, a couple of things to consider regarding your performance evaluations: If you have plenty of RAM in your server, it will use RAM to buffer lots of writes. As long as there is RAM available, files transferred to the server over the network will move as fast as the network can shuffle data. You won't notice any difference between cache drive/no cache drive writes until you've written enough data to fill the RAM buffer. With 16GiB of RAM and a benchmark application writing a 4GiB file over the network, for example, the network will in virtually all cases be the bottleneck. It doesn't matter if the transfer is destined for an SSD or an array device with parity active. And if you performed your tests with only one data drive assigned, you hit a special case for the parity update which made it much faster than it would have been with multiple data drives.
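     If you want to see the RAM buffering for yourself, a quick local test on the server goes something like this (just a sketch - adjust the path and size to your setup, and note that dd's reported speed will look unrealistically high for exactly this reason):

       # Write 4GiB to the array; this returns quickly while data sits in RAM...
       dd if=/dev/zero of=/mnt/disk1/testfile bs=1M count=4096
       # ...and this is where the real wait is: flushing the buffer to disk.
       time sync
       rm /mnt/disk1/testfile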
  8. I think you are making it way too complicated... Let your Mac and CCC bother with HFS+ and disk images. unRAID should only provide storage of the image file created/edited via your Mac, and for that you don't need additional software.
  9. So true. I have 16GiB of RAM in my server, and most transfers clock in at well above 100MB/s (typically 112MB/s), but if I issue the "sync" command directly afterwards it takes some time to finish... The RAM buffer generally keeps the perceived array writes quick enough for me not to bother with a cache drive and the Mover, or with setting reconstruct-write manually in everyday use.
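     For the curious, you can watch how much written data is still only in RAM while that sync is running (standard Linux counters, nothing unRAID-specific):

       # Dirty = data waiting in RAM, Writeback = data currently being flushed.
       grep -E '^(Dirty|Writeback):' /proc/meminfo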
  10. Right, probably should've mentioned the connection. I wasn't just randomly changing settings, although I realise it might have sounded like that. After upgrading BIOS and verifying that all (reasonably related) settings still had decent values, I started looking up the error messages more thoroughly and the trace eventually led to a discussion about VT-d. Archedraft's suggestion of googling "dmesg vtd marvell" probably says it all, but if needed I could provide dmesg output with the problem.
  11. Is what safe to use? If you simply mean beta 8, then it is certainly safe to use if you stick to base unRAID features, assuming that by base features you do not mean disk replacement expansion. It does not support replacing a smaller disk with a larger disk. Well, sort of: you can replace a smaller disk with a larger disk, and your data will still be safe, right? You just won't get any of the extra space automatically at this time. Annoying for sure, but I'd still consider it "safe to use" in the context of that particular flaw, although it is obvious that there are different opinions regarding what "safe to use" means... Speaking of beta software, it is certainly the case that things that "used to work" in the stable release might stop working in a beta. After running betas flawlessly for a while on a dedicated lab computer, I decided to try 6b8 on a more normal server that previously ran 5.0.5 flawlessly. It didn't start too well, of course... Long story short, two disks were missing after booting 6b8, and I had to chase the problem for a bit before I could have everything up and running. I was tempted for a while to revert to my backup... So what went wrong? The server in question has a Gigabyte Z87X-UD3H motherboard, and the two missing disks were both connected to the Marvell 88SE9172 chip. Combing through the dmesg output, the kernel indeed detected the disks when booting, but then lost them after DMAR errors. Upgrading the BIOS didn't help, but disabling VT-d in the BIOS made everything work nicely.
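      In case anyone hits the same thing, this is roughly how I'd confirm it from the console before touching BIOS settings (a sketch of the kind of search I did, not the exact commands):

        # Look for DMAR faults and the Marvell controller around the time the disks drop off.
        dmesg | grep -i -E 'dmar|marvell'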
  12. Trying out a Z87 board at the moment. It is not the same model or brand, but I guess the chipset-related stuff is still somewhat relevant: 1. Connected the unRAID flash drive via an adapter to an internal USB header. So far so good, very nice. 2. Running pre-clear for 4-8 cycles on three drives in parallel. No problems for the first 66 hours (and counting). Speed as expected. 3. No luck with the I217V LAN, using a separate Intel NIC for the time being. I was considering that particular ASRock board for a brief moment, but then remembered I usually stay away from that brand. Bad experiences from years back (probably not relevant anymore of course).
  13. In what situations did you encounter data corruption and controller timeouts? I'm using a Rocket 640L on 5rc11 (no enabler script though), and am in the process of adding my second drive to it. The second drive is currently pre-clearing, and I've run a parity check at the same time to stress the system a bit. So far nothing bad reported in the system log, and no parity errors found. Still waiting for the pre-clear results, and I guess simultaneous writes to both drives on the 640L remain to be tested as well. EDIT: Two drives worked perfectly. Pre-clearing the third drive brought down the system - catastrophic failure. And perfectly reproducible: two drives fine, three drives disaster. The controller card disappears under load.