chickensoup

Everything posted by chickensoup

  1. Hi guys, I recently acquired a replacement case which supports dual ATX power supplies. I'm currently at 14 drives, and a few months ago my original single-12V-rail 550W just wasn't holding up every boot any more. I swapped it out for a spare 750W I have and it's been OK, but the new case got me thinking. I'm at a place in terms of drive count where it's hard to avoid using power adapters in one form or another. I was curious how much difference running dual supplies would make in terms of power draw at the wall outlet, so I ran some tests before my case transplant (currently underway) to see whether it might be worthwhile. The difference is surprisingly negligible, and going dual would let me ditch a number of the Molex-to-SATA adapters I currently have in use. I didn't think it would be worth the extra power, but I think I may actually go down this road after all. Since I'm certain someone will ask, you can power them on together using one of these. The wall outlet power draw was measured using a TP-Link HS110 Smart Wi-Fi Plug (a rough sketch of reading it from the command line is included after these posts). I'll be posting the actual build log (case transplant + upgrades) shortly with photos of the new case, but I'm still waiting on some parts and it's not finished yet... so stay tuned if you are curious. Results are below!
  2. Bumping this thread again to see if there are any plans to implement this in the near future. I've had my server running extremely well for around a decade now (pre-4.7) and, as a result, have some smaller disks I would prefer to phase out completely, reducing my disk count, rather than replace. I'm aware there are ways to do this, but I'd currently need some assistance from the forums to make sure I don't break anything (i.e. rsync/move the data onto an alternate disk, remove the drive, rebuild parity; a rough sketch of the rsync step is included after these posts). The idea of being able to 'decommission' a disk seems like a great one.
  3. I'm actually in a similar situation in terms of drive numbers and trying to find a replacement case. I currently have 11 data drives and dual parity, and want to add a cache disk, for a total of 14 x 3.5" disks. I'm currently using a custom-modified case (an old server case, very heavily modified) which supports up to about 17 disks, but several of them are incredibly difficult to swap out. I was looking at potentially the Fractal Design Define XL, though the current revision (the XL R2) supports one fewer disk than the original. Edit: I just jumped to another post after writing this and found someone who built one of these with 17 disks. You can buy an extra 4-bay drive cage from FD as an optional extra. This means 12 x 3.5" and 4 x 5.25" even without getting creative for those last couple of disks. Add to that the fact that I live in Australia, and cases like the Norco 4224 or even the 4220 are super expensive to ship from the US and/or hard to find.
  4. I'll give you some ballpark figures based on my personal experience of using unRAID for several years. Hardware doesn't really make a lot of difference unless you are bottlenecking somewhere; performance is mostly dependent on drive selection and network performance. The use of a cache drive should, in most cases, let writes saturate gigabit Ethernet. Write speeds will vary somewhat depending on the size of the files you are writing and which drive you are writing to (newer, higher-capacity and higher-RPM drives will perform better).
     Writing to the array with a decent cache drive: 100MB/sec+
     Writing to the array without a cache drive, good drives: ~60MB/sec
     Writing to the array without a cache drive, slower drives: ~40MB/sec
     Reading from the array, good drives: limited by drive performance, single files are usually ~80MB/sec
     Keep in mind this list is very dependent on configuration, hardware, fine-tuning, file sizes, file system... there are a lot of variables (a rough way to measure your own numbers is sketched after these posts).
  5. Wow, I've never even noticed that before. It's been a while since I've been actively keeping up with everything. File is attached. unraid-diagnostics-20160912-1002.zip
  6. So I had a 'failed' parity drive for some time and, since I hadn't found the time to test it, I bought a new drive the other day. Now both seem to be displaying the same symptoms. Both drives are WD30EFRX (3TB WD Red). I've already replaced the SATA cable, but may do so again. No other drives seem to be experiencing any issues, so I'm not entirely sure what is going on. The new drive pre-cleared without issue, as far as I can tell; it was only after I added it to the array that it started to 'fail'. I'm not convinced this is drive-related. Attachments: HostReset - my most successful attempt at a SMART report via unRAID, which got to 90% then 'host reset'... and a syslog from today. I'll try and get a full SMART report for both drives, but this doesn't seem to work via the web interface either (a command-line alternative is sketched after these posts). syslog.txt
  7. I really like ESXi and considered using unRAID as my hypervisor for a long while, but I don't think it's mature enough yet. ESXi has been solid. Also, sorry to hijack, but I've just set up 5.1.0 with a couple of VMs and actually wanted to move my existing unRAID (v6) onto it; can anyone point me in the right direction of a walk-through/thread?
  8. Upgrading to v6 and converting your drives to XFS may yield an improvement, though to be honest 40MB/sec, if you are talking sustained, is already reasonable. If you want much faster writes but are concerned about the risk of using a cache drive, you can mirror two drives using traditional RAID and set the new volume as your cache drive (a generic sketch of such a mirror is included after these posts). This would likely give you over 100MB/sec all day every day and protect you from the loss of a single cache drive.
  9. I would go with what Gary suggested: try setting the spin-down time to 3 hours or something. It should stay spinning all day if it's actually being used; it will just be that first access in the morning which is delayed, but you could set something up to poll the drive in advance to spin it up (a hypothetical example is sketched after these posts). There's probably also a plugin or two that would let you set specific spin-down times (though I'm not positive about this).
  10. This is something I would love if it were possible. At the moment I have drives spin up 'seemingly randomly' when nothing is really using them, or, for example, I will open up my mapped drives to look for something to watch and in the process three drives will spin up before anything has even been opened (at the folder level too - no thumbnails or anything). Perhaps I need cache_dirs to mitigate this? It'd be awesome if there were a way for the syslog to add an entry when a disk is spun up and (if possible) the first file accessed on that disk. This would make diagnosing random spin-ups a lot easier (a stop-gap approach is sketched after these posts).
  11. Hey guys, for the last few months I've had some intermittent issues with slow copying to my unRAID box. At first I suspected my cache drive, then a particular array drive, but I'm still having performance issues. I have removed my cache drive and am writing directly to the array; copies level out at about 2MB/sec. Apart from the above, I've also tried upgrading from 5.0.6 to 6.0b14b, and while I again thought this had fixed the issue, it hasn't. No dropped packets:
      root@Tower:~# ifconfig eth0
      eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
              inet 192.168.1.5  netmask 255.255.255.0  broadcast 192.168.1.255
              ether 1c:6f:65:80:5f:a4  txqueuelen 1000  (Ethernet)
              RX packets 2759309  bytes 1519961540 (1.4 GiB)
              RX errors 0  dropped 0  overruns 0  frame 0
              TX packets 4346454  bytes 6418082330 (5.9 GiB)
              TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
      root@Tower:~#
      I did notice SMART attribute 188 (Command Timeout) when looking at my parity drive. Since this only seems to be affecting writes and doesn't appear to be specific to any particular disk, could it be my parity drive on its way out? It doesn't seem to be limited to a specific machine or operating system either: copies from my Windows 7 PC, Windows 8 PC and Linux Mint VM (hosted on the Win7 PC) to the unRAID box are all affected. These are all connected to the same switch, yet when copying between the Windows PCs I consistently get over 100MB/sec (a quick network-isolation test is sketched after these posts). Attached: syslog22-3-15.txt
  12. Your test server has more storage than my primary server :-(
  13. Following the OP's method, don't you also need to unassign the old cache drive and start the array again before shutting down?
  14. ...by boards I mean discussion boards/threads, not motherboards :-]
  15. I'd probably get on eBay and find a full ATX board for your 1090T. That way you can bump up the RAM and add controller cards. unRAID 6 supports virtualisation as a host via either KVM or Xen; you can look into the boards about each of these to learn the differences and whether there are any compatibility issues.
  16. LOL Jon, made my day. Is that the official LT response? :-P
  17. FYI, included in each release is a readme file with step-by-step instructions on how to upgrade based on your current version. 5.0.6 is very stable and mature now, and I would recommend it over 4.7. That said, v6 is a huge upgrade with a tonne of new features and the latest betas are very promising. It is likely that an RC for v6 is not far away (take that with a grain of salt...).
  18. Upgraded to 6.0b14b today from 5.0.6 - no plugins, addons, dockers etc., just plain old vanilla unRAID. The upgrade went very smoothly, but I'm noticing similar issues to other people in relation to disks not reporting their spin status correctly. It also seems like some of my disks are spinning up randomly, but I'll gather more information on this. I rebooted once after the initial reboot and then noticed I was getting the following errors: "No sensors found! Make sure you loaded all the kernel drivers you need. Try sensors-detect to find out which these are." So I ran sensors-detect and followed all the steps. It detected 'it87' and 'coretemp' (I think) but I wasn't sure what to do with this information. It said to run lm_sensors, and a quick search on the forum for this found mostly unanswered threads or stuff relating to 4.7, e.g. http://lime-technology.com/forum/index.php?topic=38153.msg353392#msg353392 http://lime-technology.com/forum/index.php?topic=36543.msg351929#msg351929 I did try running 'modprobe it87' and 'modprobe coretemp' but neither yielded any result (or error). Not really sure where to go from here (a minimal sequence is sketched after these posts). Also, what's the poll time on drive temperatures? I did see them update but it seemed to take a long time. Syslog attached. syslog.txt
  19. "For the array: XFS. For the cache: btrfs. This is also how the defaults are set up for unRAID 6." Do you still recommend btrfs for the cache drive if you are not cache pooling? Due to some of the ongoing btrfs-related issues I've noticed on the forum, I was planning on using XFS for my cache drive when I upgrade from v5 (for reference, a sketch of a mirrored cache-pool conversion is included after these posts).
  20. Sorry if this isn't a constructive post, but I'd just like to quickly say thanks to Tom & Jon for all their hard work. It's a small team delivering a big product and I just don't think they deserve the criticism at all. As stated, this is a beta and should be treated as such. LT normally has amazingly stable betas, and they shouldn't be abused over a slip in an unfinished product. Keep up the good work, guys.
  21. Hey guys, the last couple of shutdowns have resulted in the array being unable to unmount all the drives for some reason, requiring a hard power-off. I have been running a parity check overnight and from around 80% (as far as I can tell) the speed has dropped right down to around 5MB/sec. It is still moving along, just very, very slowly. I've had a quick look at the SMART reports for each drive and I can't see anything abnormal. Any suggestions? I'm thinking that after the parity check completes I might just use HDTune or something to benchmark each drive and see how they look (a console alternative is sketched after these posts). Syslog attached. The only thing I have noticed lately is "Disabled IRQ16":
      root@Tower:/dev# cat /proc/interrupts
                  CPU0        CPU1        CPU2        CPU3
        0:          23           0           0           0   IO-APIC-edge      timer
        1:           1           1           0           0   IO-APIC-edge      i8042
        9:           0           0           0           0   IO-APIC-fasteoi   acpi
       16:   402996106   403258672   403201780   403143451   IO-APIC-fasteoi   uhci_hcd:usb3, uhci_hcd:usb9, sata_mv, ahci
      syslog_24-01-15.txt
  22. The troops are getting restless again :-) I too have been holding out for the next release as I'll probably update if the current issues are addressed. For now, there are too many bugs for me.
  23. Each to their own, but personally I would use the following setup:
      [slot1] 3TB HDD - TV Shows
      [slot2] 3TB HDD - Movies, Documentaries, Downloads
      [slot3] 3TB HDD - Music Videos, Music Albums, iTunes, Photos, Home Movies, ISO images
      However, this is just based on knowing my own usage over time. TV for me is by far the biggest space eater, and all my music together barely matches the size of some TV shows. Take a look at your current setup and usage habits and base the layout on that rather than spreading the categories across the drives. You may have already planned this out though, just offering some advice :-)
  24. I'm also a little confused about what you are trying to achieve. If you are simply migrating, then I would buy a data disk for WHS and then move data off your array one physical disk at a time, unassigning each disk as it becomes empty, moving it to the WHS box and repeating until you have no more disks left. This way you maintain parity protection as you go. However, your comment about leaving the unRAID server intact implies you are intending to have two servers?
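
For the wall-outlet measurements in post 1: the HS110's energy meter can also be read from any machine on the same network, for example with the community python-kasa tool. This is an assumption on my part (the post doesn't say how the readings were taken), and the plug address below is a placeholder.
  # install the CLI on any machine with Python that can reach the plug
  pip install python-kasa
  # print the plug's realtime energy-meter readings, including power in watts
  kasa --host 192.168.1.50 emeter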
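
For the disk-removal idea in post 2, a rough sketch of the rsync step only; the disk numbers are examples and this is not an official unRAID procedure - the unassign/rebuild-parity part still follows afterwards.
  # drain disk3's contents onto disk5, preserving permissions and timestamps
  rsync -avP /mnt/disk3/ /mnt/disk5/
  # sanity-check the used space on both disks before unassigning disk3
  du -sh /mnt/disk3 /mnt/disk5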
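
To put your own numbers behind the ballpark figures in post 4, a quick sequential write test against a user share; the share name, file name and size are arbitrary.
  # write ~4 GiB and force it to disk so the reported rate is realistic, then clean up
  dd if=/dev/zero of=/mnt/user/Downloads/ddtest bs=1M count=4096 conv=fdatasync
  rm /mnt/user/Downloads/ddtest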
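
For the SMART reports in post 6, pulling them from the console sidesteps the web-interface problems; sdX is a placeholder for the actual parity/new drive device.
  # full SMART report: identity, attributes and error log
  smartctl -a /dev/sdX
  # optionally start an extended self-test and check the result later with -a
  smartctl -t long /dev/sdX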
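
Post 8 suggests mirroring two drives with traditional RAID and using the volume as a cache drive. As a concept sketch only, this is roughly what building such a mirror looks like with generic Linux software RAID (device names are placeholders). Note that stock unRAID already uses md device names for its own array, so in practice this would more likely be a hardware or controller mirror, and whether unRAID accepts the result as a cache device depends on the setup.
  # build a two-disk RAID1 mirror from two spare drives (sdX/sdY are placeholders)
  mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
  # watch the initial resync on a generic Linux box
  cat /proc/mdstat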
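
For the "poll the drive in advance" idea in post 9, a hypothetical cron entry; the time, path and read size are all made up.
  # 06:45 every day: read 8 MiB from the disk so it is already spinning before the first real access
  45 6 * * * dd if=/mnt/disk1/some-large-file of=/dev/null bs=1M count=8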
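
For the spin-up diagnosis wished for in post 10: unRAID doesn't log the first file accessed, but as a stop-gap (my suggestion, not a built-in feature) you can watch a disk mount with inotify-tools, if that package is installed, and correlate the first access with the spin-up time.
  # log opens/accesses under /mnt/disk1 with timestamps; a recursive watch on a large
  # disk can take a while to establish and may need the inotify watch limit raised
  inotifywait -m -r -e open,access --timefmt '%F %T' --format '%T %e %w%f' /mnt/disk1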
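
For the slow copies in post 11, it's worth separating the network from the disks before suspecting the parity drive. A rough sketch, assuming iperf is available on both machines (192.168.1.5 is the server address from the ifconfig output).
  # on the unRAID box
  iperf -s
  # on the client that shows the slow copies
  iperf -c 192.168.1.5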
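
On the sensor errors in post 18: the package is called lm_sensors, but the command that prints readings is just 'sensors', which is probably where the confusion in those threads comes from. A minimal sequence, assuming it87 and coretemp really are the right modules for that board.
  # load the drivers sensors-detect suggested, then read temperatures and fan speeds
  modprobe it87
  modprobe coretemp
  sensors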
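
Relating to the cache question in post 19: if you do end up pooling two cache devices, the mirroring happens inside btrfs itself. A generic btrfs sketch, assuming the pool is mounted at /mnt/cache; unRAID's GUI normally handles this for you, so this is for reference only.
  # convert data and metadata of an existing two-device btrfs volume to raid1
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
  # confirm the resulting profiles
  btrfs filesystem df /mnt/cache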
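
For the per-drive benchmarking mentioned in post 21, hdparm from the unRAID console gives a rough equivalent of an HDTune sequential read test; /dev/sdb is a placeholder, repeat for each data and parity device.
  # cached and buffered sequential read timings for a single drive
  hdparm -tT /dev/sdb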