Everything posted by bnevets27

  1. What game would actually take advantage of all that power anyhow?
  2. Does this plugin currently have a way to offset/correct a temp value? I know my temp reads 30 degrees off, so I just have to remember to subtract that from the value shown, but if it's possible to add an offset that would be great. Unless I'm just one of the very few unlucky enough to have sensors that don't report correctly.
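In the meantime, a fixed offset can be applied outside the plugin. This is just a sketch, assuming the raw reading is available in a shell variable; the 30-degree offset and the value 78 are made-up examples:

```shell
# Hypothetical workaround: correct a sensor that reads 30 degrees high.
raw=78                      # example raw value as reported by the sensor
corrected=$(( raw - 30 ))   # apply the fixed offset
echo "corrected temp: ${corrected}"
# prints: corrected temp: 48
```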
  3. If an issue comes up with the cable I do have crossover cables. But as mentioned it's likely I won't need one.
  4. I wanted to try quotes, but the way UD is set up you can't enter the full path: you enter the IP and then it loads the available shares via the drop-down, unless I'm missing something. It's entirely possible this is an unRAID NFS export issue, but from what I've been able to find, it looks as though unRAID does support spaces.
  5. Should have known everyone would be curious. The reason for wanting this: my current server has all my files/media on it. I've built a new server that I want to offload all the plugin/docker work to, but all my data still resides on the current (soon-to-be archive) server. I want to use the archive server (which won't be running any dockers) as a dumb box that serves media to the new production server, which will be doing the heavy lifting (Plex/Emby etc.). With all the traffic hitting the production server (serving its own local files, plus serving files mounted via NFS from the archive server, to multiple clients), I feel it could be possible to saturate a gigabit connection during reads from multiple clients. Have I explained that well enough? I'm happy to have my logic questioned and to be told I'm crazy.
  6. How would one go about mounting an NFS share with a space in the name? Currently UD strips everything after the space and therefore won't mount it. (Of course I could remove the space from the share name, but unfortunately that's not really an option for me at the moment.)
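For anyone hitting this outside UD, quoting the remote path keeps the space intact when mounting by hand. A sketch; the server IP and share name below are made-up examples:

```shell
share="My Media"        # example share name containing a space
server="192.168.1.10"   # example server IP

# Quoting the remote path preserves the space. echo is used here to
# show the exact command that would run, without actually mounting.
echo mount -t nfs "${server}:/mnt/user/${share}" /mnt/remote
```

In an fstab entry the space would instead be written as the octal escape `\040`, e.g. `192.168.1.10:/mnt/user/My\040Media`.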
  7. I want to connect two of my unRAID servers together. I don't have a managed switch, so I can't take advantage of bonding. With 2 NICs on each server, I want one port directly connected between the two; the other port on each server will be connected to the router. Is this possible? If so, how would I set this up?
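For reference, a direct server-to-server link generally just needs its own small private subnet on the second port of each box (modern NICs auto-negotiate, so no crossover cable is required). A sketch, assuming the second interface is named eth1 on both servers; interface names and addresses are placeholders, not taken from this thread:

```shell
# On server A: give the directly-connected port a private /30 subnet.
ip addr add 10.0.0.1/30 dev eth1
ip link set eth1 up

# On server B: the other address in the same /30.
ip addr add 10.0.0.2/30 dev eth1
ip link set eth1 up

# NFS mounts or transfers addressed to 10.0.0.x then travel over the
# direct cable, bypassing the router on the other port entirely.
```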
  8. I've done so many tests now my head is spinning. garycase, I had actually tried that, but I tried it again and sure enough the drives don't fail on the onboard controller. One of my problems is that the drives failing to return zeros isn't consistent, so I've had a hard time getting good troubleshooting info. I had to change my test routine; I've now been using a batch of 250GB drives. Here's what I've tested thus far:
     • Removed all of the RAM and tested one set of 4 in another machine for 24 hrs / 8 passes: no errors. That same memory was then tested again in the server for another 24 hrs / 8 passes: no errors.
     • All drive tests from here on were post-read-verify only, with the fast post-read switch.
     • Ran 5 drives, all connected to the AOC-SATA2-MV8: 3 passed, 2 failed.
     • Ran 5 drives, all connected to the onboard controller: all passed. (Suspected it could be the card.)
     • Moved the AOC-SATA2-MV8 to another machine set up for testing and ran 5 drives on it: all passed.
     • Moved the AOC-SATA2-MV8 back to the server and removed the second AOC-SATA2-MV8, so only one card was plugged in. Ran 4 drives on it: all passed.
     • Reinstalled the second AOC-SATA2-MV8. Ran 4 drives on the first card and 4 drives on the second: all passed. (Everything looked to be working; reran to confirm.)
     • Reran the same config as above: some/most drives failed (can't remember exactly which; forgot to take a screen capture).
     • Added a few more drives for 12 in total, 6 on each controller: now all drives fail.
     I've hooked up a meter and monitored the voltages going to my backplane, and they seem fine; they didn't drop below 12.01V on the 12V rail and 4.9V on the 5V rail (there's a little voltage drop over the cables, as the voltage is slightly higher right off the PSU). I've spent weeks now trying to figure out what's going on. I'm at a loss.
     Should I forget about not getting zeros back and move on? Is that wise?
     EDIT:
     • 6 drives on one AOC-SATA2-MV8: all passed.
     • 6 drives on the other AOC-SATA2-MV8: all passed.
     • All 12 drives attached: 3 failed.
     • 6 drives on one AOC-SATA2-MV8, 1 on the other, and 5 on the onboard controller: all passed. Ran the same test again: all passed a second time.
     • 8 drives on one AOC-SATA2-MV8: all passed. Re-ran: all passed.
     • 8 drives on one AOC-SATA2-MV8 and 4 on the onboard controller: all passed.
     • 4 drives on one AOC-SATA2-MV8 and 4 on the other: 1 drive failed.
     • 7 drives on one AOC-SATA2-MV8 and 3 on the other: 3 drives failed.
  9. Well, the WD Black precleared fine on the first run. Tried to do just a post-read on one of the Greens that had previously failed, and it failed again. Suggestions?
  10. I'm having some trouble getting my drives to preclear successfully: they are returning non-zero values on the post-read. I have run memtest for 24 hrs, and the drives don't have any issues with reallocated sectors. My server specs can be found here. The oddity is that one of the drives I ran 5 times (1 pass each time) eventually passed on the fifth try. I just ran 5 more drives: 3 at once, all failed; then the final 2, and both passed. These drives are split over the two controllers, with one plugged directly into the motherboard. Results (date - failed/passed returning zero values):
      • Drive 1 - 4TB Seagate (Controller 2): 21/01 Failed, 30/01 Passed
      • Drive 2 - 4TB WD Red (onboard controller): 21/01 Passed, 11/02 Passed
      • Drive 3 - 1TB Samsung (Controller 2): 21/01 Failed, 23/01 Failed, 30/01 Failed, 05/02 Failed, 08/02 Passed
      • Drive 4 - 4TB WD Green (Controller 2): 11/02 Failed
      • Drive 5 - 4TB WD Green (Controller 1): 11/02 Failed
      • Drive 6 - 4TB WD Green (Controller 1): 11/02 Failed
      • Drive 7 - 4TB WD Green (Controller 1): 11/02 Passed
      • Drive 8 - 2TB WD Black (Controller 1): 11/02 Not run yet
      On drive 3 I think I tried a different cable with no change and went back to the original, but it still failed. All WD drives were run through garycase's test routine with no failures. Any ideas? I can post more details later if it's helpful.
  11. I just got one of these. I would also love if it could be made to work with unraid.
  12. Wow, thanks CHBMB! After trying too many different things, I went from many apps working behind a reverse proxy to none of them working at all, even locally. Nuked the docker and started from scratch; so far so good building everything back up. But the root of my problem was one little /! Thanks for the heads up on /mnt/cache as opposed to /mnt/user; while building back up I'll change to that config. I just started with v6 and dockers and have been trying to figure out best practices.
  13. Config mapped to: /mnt/user/appdata/hydra. Removed everything and reinstalled; oddly, that made it worse... now I get "530 service unavailable". Another piece of the puzzle might be that I'm trying over SSL.
  14. Still can't get it to work. Maybe this is the problem?
      2016-01-27 12:00:24,144 - ERROR - nzbhydra - Fatal error occurred
      Traceback (most recent call last):
        File "/config/hydra/nzbhydra.py", line 151, in run
          web.run(host, port, basepath)
        File "/config/hydra/nzbhydra/web.py", line 875, in run
          app.run(host=host, port=port, debug=config.mainSettings.debug.get(), threaded=config.mainSettings.runThreaded.get(), use_reloader=config.mainSettings.flaskReloader.get())
        File "/config/hydra/libs/flask/app.py", line 772, in run
          run_simple(host, port, self, **options)
        File "/config/hydra/libs/werkzeug/serving.py", line 625, in run_simple
          inner()
        File "/config/hydra/libs/werkzeug/serving.py", line 603, in inner
          passthrough_errors, ssl_context).serve_forever()
        File "/config/hydra/libs/werkzeug/serving.py", line 512, in make_server
          passthrough_errors, ssl_context)
        File "/config/hydra/libs/werkzeug/serving.py", line 440, in __init__
          HTTPServer.__init__(self, (host, int(port)), handler)
        File "/config/hydra/libs/SocketServer.py", line 420, in __init__
          self.server_bind()
        File "/config/hydra/libs/BaseHTTPServer.py", line 108, in server_bind
          SocketServer.TCPServer.server_bind(self)
        File "/config/hydra/libs/SocketServer.py", line 434, in server_bind
          self.socket.bind(self.server_address)
        File "/config/hydra/libs/socket.py", line 228, in meth
          return getattr(self._sock,name)(*args)
      error: [Errno 98] Address already in use
      I assume it's complaining about the port? I left it as the default, and I don't have anything else running on that port that I'm aware of. I still have no trouble locally, and I do get some response, as seen in the screenshot. One other thing I should mention: I did find this on Docker before it was made official and had that container installed, but it has been removed.
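Errno 98 (EADDRINUSE) means something is already bound to that port, possibly a leftover from the old unofficial container. A quick way to check; 5075 is assumed here as Hydra's default port, so substitute yours:

```shell
# List listening TCP sockets and look for the suspect port.
# Falls through to a message if nothing is bound there.
ss -tln | grep ':5075 ' || echo "nothing listening on 5075"
```

If a socket does show up, `docker ps` will reveal whether an old container is still publishing that port.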
  15. Thanks for putting this together! Anyone else having trouble with hydra displaying properly behind a reverse proxy? I have no trouble with it locally and no trouble with any other of my apps
  16. That's the one. (I was posting on my phone and didn't have the link at the time)
  17. Well I was talking to Tom via email and brought up the fact that I would prefer to use a USB card reader. And he gave me a link to one that he said a customer had used successfully. So I would assume he's still ok with us using them. But like NAS said it's hard to find any that have a unique GUID.
  18. Unless I missed it, we never got a response on what happens after 30 days. Does unRAID cease to work? I think it would be good if, after 30 days, unRAID defaulted to a storage-size limitation. Building and testing a new system can be done in 30 days if you have the free time in those 30 days, but that rarely happens in the real world. Getting all the different parts to work together (dockers talking to each other, running scripts, setting up VMs) takes longer than a month, and then there is testing after that. I bought a second license so I could work on my test server before replacing my production server, but at the new pricing levels I would be less happy having to buy a second key just for testing. The other point is that, at least the way the licensing worked before, I could suggest the free version to friends; they could figure out at their leisure how to get everything working the way they wished, and then once they outgrew the free size limitation they would upgrade. Even a free option limited to 1 parity, 1 data, and 1 out-of-array disk or cache to run dockers/VMs/plugins on, with no other options, would be good for getting people into unRAID. I understand the concern now that drive density is rising: with the old model you could have a relatively large storage size on the free license and therefore no reason to buy a license at all. So I do agree the license model needed to change, but I personally hate time trials. I never get to test a product fully in 30 days, and I suspect most others have the same problem.
  19. Yes, I wanted to post what I had before people wasted money. I have a new solution I'm working on that should satisfy all parties. I'll post when unRAID 6 is released. Since unRAID 6 is "close" to release any more info on what this new solution might be?
  20. Waiting for a date is one thing; waiting for a date and seeing it pass by time and time again is another. I would definitely rather wait and have a stable product. It's our data we're playing with here, not just some bleeding-edge software. But at the same time I have been postponing my own personal projects based around unRAID. Too many times (a few times during this 6.0 beta) I've started playing with my configuration, gotten it working (or not, in some cases), and then a week or month later the entire setup process changes, or something else changes, and I have to start from scratch. Such is the nature of a beta, but I'm sick of waiting for a beta/RC with a feature freeze, like what the Kodi (XBMC) team does: they get to a point, freeze any new features, work only on bugs for a while, then release. This works great. I would willingly start participating in the beta testing again as soon as I knew the features wouldn't change the next month.
  21. Alternatively you can edit the file on the USB stick; this way you don't have to shut down your main server. I'm not at my computer right now, but the file is in the config folder and I think it's either network.cfg or identity.cfg.
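For reference, the network settings file on the flash is a plain shell-style key/value file. A rough example of a static-IP config/network.cfg; all values below are placeholders, and it's worth backing the file up before editing since I'm going from memory:

```shell
# config/network.cfg on the unRAID flash drive (example values only)
USE_DHCP="no"
IPADDR="192.168.1.100"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
```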
  22. I feel most of the users here would agree with that. Luckily it does sound like the priority is Dynamix, with unVMs coming later, according to this: I think the implementation of hardware profile reporting is more important than a "nice to have"; unVMs shouldn't be released without it. Why? People will try unVMs when they come out, and if it fails to work, that data won't be recorded. Once an update is released that includes hardware profile reporting, I doubt many people will go back and rerun unVMs. Collecting "invalid" hardware profiles will be just as important as valid ones. Support will also be easier: if multiple people have trouble with the same hardware, it's likely that hardware just isn't going to work and it's not merely a configuration problem. This will also let people narrow down which component isn't cooperating. I'm not in a big rush, but I would like to convert my configuration to Docker containers, and I don't really want to do that until things stop changing and settle down a bit. unVMs are definitely something I would love to play with, but I think core improvements need to make it out the door first. Hopefully you guys don't get caught up adding too many cool things; we all have the temptation of just "one more thing". I do get the feeling that unRAID wants a "set" feature list once 6.0 goes final, for the purpose of showing off unRAID's capabilities. UI improvements and things the community feels should have been core functionality long ago just don't sound as cool or exciting as running VMs.
  23. The only thing I have to add is nzb360; for a while it was definitely the best Android app for the whole automated newsgroup stack. Just now seeing nzb+, and it looks like other apps have caught up, but nzb360 was, and still is, my app of choice, and it looks beautiful. Unfortunately it was pulled from the app store by Google, so it has to be side-loaded, but the developer is great and worth supporting. Thanks for your list, binhex; I've heard of most of these, but it's nice to see what other people swear by.
  24. Great idea. I regularly forget that running a new config will wipe out all assignments, which causes a headache. Sent from my SM-N9005 using Tapatalk