johnodon

Community Developer
Everything posted by johnodon

  1. I thought about doing that and then talked myself out of it. lol I'll go this direction. Thanks! John
  2. My UPS is offline ATM...need new batteries.
  3. UPDATE: OK...I see what is going on. I am ending up with 2 libraries...one called "config" which lives on my cache drive and one called "data" which lives where I store my e-books. I can switch between them, which is OK, but is there a way to permanently delete the "config" one? I tried to "remove" it but it just comes back.

     QQ... I'm not fully understanding and think I am doing something wrong with Calibre. How do you store your library and your books in two different locations? I have /data mapped to /mnt/ and /config mapped to /mnt/user/Docker/calibre/. I want to keep my books in my user share (/data/user/Books/ to Calibre) and save the library and config files on my cache drive (/data/user/Docker/calibre). I keep ending up with a copy of my e-books in the config directory (see the mapping sketch after this list). John
  4. How many watts does this beast draw idling? I have some older 24-bay X7 Supermicros that are only used on backup duty, with noisy redundant power supplies that pull 200 watts idling and 350 watts booting. Does yours pull that much power? Mine work wonderfully as backup units that only run, started via IPMI, for 2-3 hours per week. One of these days I am going to gut one of my X7 24-bay servers, put a more up-to-date MB and CPU in it, and make it work 24x7 again. http://lime-technology.com/forum/index.php?topic=26227.0

     It's funny how many people have asked me that in the past. TBH it has never even been a consideration for me. That thing could be doubling my electric bill and I would never even know it. John
  5. Someone must have rubbed Jon's head for good luck! EDIT: RE: my comment above...Sparkly and CHBMB...try to keep your minds out of the gutter!
  6. Since you've got so much CPU to spare, you should add the BOINC or Folding@Home dockers to your setup.

     Monthly bandwidth allotment is my issue. My ISP is the only one in town and they know it.
  7. I always write directly to disk shares... Perhaps that's the issue ... maybe it's only buffering writes to user shares. Have you tried that to see if it makes a difference in the apparent speed?

     I see the same speed writing to a disk share.

     I don't get it... do you remember anything else you've changed from default? Have you ever tried vanilla unRAID on that machine? (and did you get the same performance)

     I just rebuilt unRAID from scratch. Other than Docker and KVM, my system is about as vanilla as you can get. (A disk-share vs. user-share write test is sketched after this list.)
  8. Yeah. Older HW, but for what I ask my server to do it suits me very well. 10 dockers (and growing) + 4 VMs. I rarely see my CPU utilization peak higher than 15% (typical is < 5%), and memory is pretty much static at 16%. Right now I am streaming BDRips (lossless) on 2 Kodi VMs; Sonarr/CouchPotato/NZBGet/Deluge are doing their thing; MariaDB, headless Kodi, Emby Server, Calibre... And this is what I see: I could not be happier with this config! Major props to the LT team. I was really about to jump ship. I'm glad I hung in.
  9. It's always been that way. They try not to break unRAID as a guest, but when push comes to shove, they will not allocate extra time to sorting out issues that may appear. After things have calmed down a little after the 6.0 release and they are working on 6.0.1, they will probably be a little more willing to make changes if you can tell them exactly what needs to be done to fix the issues you are having. It will most likely be on the community to sort out the issue and what needs to be done; then Limetech will make the changes if it doesn't interfere with bare-metal usage. At least that's the way it's worked in the past. I'd wait a month or so and bring it up with Tom to find out if his position on unRAID as a guest has changed.

     Exactly the way I think it should be. It's nice that the guys don't just cut someone off if they do something a little different that better suits their needs.
  10. I always write directly to disk shares... Perhaps that's the issue ... maybe it's only buffering writes to user shares. Have you tried that to see if it makes a difference in the apparent speed?

      I see the same speed writing to a disk share.
  11. Definitely a nice array ... but not all that much "cabbage" when you use older technology => you can buy Xeon 5530's these days for ~1/10th of what they were when they were

      Exactly. I think I paid $70 for a matched pair of 5530s. I paid more for the f'ing heatsinks. I also really lucked out on the MB...$155 system pull.

      You did indeed luck out on the board. They're still over $400 most places, and even on eBay they're generally over $300. I assume you had to actually pay "real" prices for the memory ... or did you luck into a deal on that as well?

      12 sticks of 4GB 2Rx4 PC3-10600R Hynix HMT151R7BFR4C-H9 ECC Reg server RAM (again...a system pull) = $210 shipped.
  12. I left them at the defaults: 1800/1280/384.

      Very interesting. I wonder why your system is (apparently) buffering your entire write while ysss's isn't. He doesn't have 48GB, but he does have 24GB, which should be PLENTY to cache most files. I thought perhaps you had adjusted the number of stripes significantly upward and that accounted for the difference, but apparently that's not the issue. Have you by any chance run pauven's "tunables tester"?

      I ran his util about a year ago but wasn't floored by the results it provided, so I didn't change anything. No one setting was that much better than the rest. (The three defaults are spelled out in the tunables sketch after this list.)
  13. Definitely a nice array ... but not all that much "cabbage" when you use older technology => you can buy Xeon 5530's these days for ~1/10th of what they were when they were

      Exactly. I think I paid $70 for a matched pair of 5530s. I paid more for the f'ing heatsinks. I also really lucked out on the MB...$155 system pull.
  14. Actually, I did test that. As soon as Windows thinks the transfer is complete (writing to the array from a WIN8 desktop), I yank the Ethernet cable from the WIN8 desktop. No corruption on the array side, and the file is completely intact. John
  15. I left them at the defaults: 1800/1280/384.
  16. Correct. I do have a cache pool but do not use it for caching...only VM and container storage via cache-only user shares.

      Correct. Writes were directly to the parity-protected array. No cache drive was harmed in this example.

      Agreed. A very large write would give me a true disk/network performance benchmark. However, since I have 48GB of ECC, I have yet to find a need to transfer a file of that size. In fact, I can think of only 3 that I have...the double-Blu-ray extended editions of LOTR are each in the ~65GB range when merged into a single MKV. As most of my largest files are normal BDRips (~20GB), my RAM handles them just fine. (A sketch for watching that write-back drain is after this list.) John
  17. When I ran unRAID in a VM in ESXi, I chose to PXE boot it. That way upgrades were a snap: just drop a new bzroot and bzimage on your PXE server and reboot (a PXE entry is sketched after this list). BTW...just be aware that LT has made the decision to NOT officially support virtualized unRAID instances: http://lime-technology.com/forum/index.php?topic=40564.0 John
  18. We don't need no stinkin' cache drives. Reading from array over Gbit... Writing to array over Gbit... Prior to beefing up my server and unRAID moving to x64, my high watermark for writing to the parity-protected array was ~35MB/s.
  19. That Restart option is a nice addition. Now I just need a way to control the startup order of my VMs and containers.
  20. NM...browser cache. That should be permanently ingrained in all of our brains at this point.
  21. UPDATE: clearing the browser cache fixed it.

      I have 10 containers and none have that option. Can others please confirm? Thanks, John
  22. UPDATE: clearing the browser cache fixed it.

      Unless I am misunderstanding, I am not seeing this. Shouldn't this add a Restart option to the context menu when you click on a container icon? John
  23. Upgraded rc6 --> rc6a. I never did add the --dns switch so didn't have to bother with that. After reboot all is looking good. Thanks for the update! John
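
Re #3, a minimal sketch of the two-location Calibre mapping described there, assuming a generic dockerized Calibre (the image name is a placeholder, not the exact template used):

    # /config (settings + library metadata) lives on the cache drive;
    # the whole array is exposed as /data, so the books sit at /data/user/Books
    docker run -d --name=calibre \
      -v /mnt/user/Docker/calibre:/config \
      -v /mnt:/data \
      <calibre-image>

The "two libraries" symptom fits a library that was first created under /config: Calibre copies every imported book into whichever folder holds the active library, and "removing" a library from the switcher only unlists it rather than deleting the folder. Pointing the active library at /data/user/Books and then deleting the leftover folder from the share itself should make it stay gone.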
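
Re #7/#10, a quick way to compare disk-share and user-share writes from the server console, with the flush forced so RAM buffering can't flatter the numbers (the share name is a placeholder):

    # each dd prints its throughput when it finishes;
    # conv=fdatasync includes the flush to disk in the timing
    dd if=/dev/zero of=/mnt/disk1/test.bin bs=1M count=8192 conv=fdatasync
    dd if=/dev/zero of=/mnt/user/Movies/test.bin bs=1M count=8192 conv=fdatasync
    rm /mnt/disk1/test.bin /mnt/user/Movies/test.bin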
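
Re #12/#15, the 1800/1280/384 defaults line up with the md driver tunables that pauven's tester sweeps. Assuming the usual unRAID 6 location (an assumption, not confirmed in the thread), they can be read back like this:

    # also editable under Settings -> Disk Settings in the webGui
    grep "^md_" /boot/config/disk.cfg
    # md_num_stripes="1800"
    # md_write_limit="1280"
    # md_sync_window="384"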
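
Re #14/#16/#18, the buffering being described is the Linux page cache: a network copy that looks "complete" to the client may still be draining from server RAM to the parity-protected disks. A sketch for watching it with stock kernel knobs:

    # how much RAM may hold unflushed writes before writers are throttled
    sysctl vm.dirty_ratio vm.dirty_background_ratio
    # Dirty/Writeback fall back toward zero once the array has truly caught up
    watch -n1 'grep -E "Dirty|Writeback" /proc/meminfo'

This is also why the pulled-cable test in #14 succeeds: by the time Windows reports completion, the server already holds the whole file in RAM and finishes the flush on its own.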
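
Re #17, a sketch of what the upgrade step can look like on the PXE server (the TFTP root and menu paths are assumptions, not a recorded setup):

    # drop the new release's kernel/initrd into the TFTP root...
    cp bzimage bzroot /srv/tftp/unraid/
    # ...and point the pxelinux menu entry at them:
    printf '%s\n' \
      'DEFAULT unraid' \
      'LABEL unraid' \
      '  KERNEL unraid/bzimage' \
      '  APPEND initrd=unraid/bzroot' \
      > /srv/tftp/pxelinux.cfg/default

The licensed USB flash stick still has to be attached to the VM; PXE only changes where the kernel and initrd load from.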