Fireball3

Members
  • Posts: 1355
  • Days Won: 1

Everything posted by Fireball3

  1. Nice write-up, gary. If I remember correctly, you are storing your backups on single drives. How often do you check your backup drives to ensure that you don't have errors on them? Prior to using unRAID I also had my data stored all over the place on different drives. After having the server ready I found that some files were not readable when I moved them to the array. The monthly parity check ensures that the data on the array is readable, but if you store the backups on single drives and don't verify them regularly, you may be lulled into a false sense of security.
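     For illustration, a minimal way to do such a regular verification is to keep a checksum list on each backup drive and re-check it from time to time - just a sketch, the /mnt/backup1 path and file name are placeholders:
     [pre]# build the checksum list once (the list itself is excluded)
     find /mnt/backup1 -type f ! -name checksums.md5 -print0 | xargs -0 md5sum > /mnt/backup1/checksums.md5
     # on every check, re-read all files and report only failures
     md5sum -c --quiet /mnt/backup1/checksums.md5[/pre]
     Any read error or silent corruption on the backup drive then shows up as a FAILED line instead of going unnoticed.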
  2. @nacat78 I was able to flash 4 controllers with that toolset. Here also... If you follow the instructions it should work. If not, maybe there are newer/other controllers with different firmware that prevent the crossflash? If you tried the P17, did you also use the sas2flsh.exe that comes with the P17 firmware? This strongly reminds me of the D2607. Perhaps the procedure outlined below also works for your H310s? @techsolo With regard to the D2607: try the procedure in this post: http://lime-technology.com/forum/index.php?topic=12767.msg266471#msg266471 There you will also find the H200.sbr.
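     For reference, the D2607 procedure from that post boils down to roughly the following DOS commands (from memory - treat it as an outline only; the controller index, SBR and firmware file names depend on your card and on the package you downloaded):
     [pre]REM replace the SBR so the card identifies itself as an H200
     megarec -writesbr 0 h200.sbr
     REM wipe the old RAID firmware, then reboot
     megarec -cleanflash 0
     REM flash the Dell 6Gbps SAS HBA firmware, reboot again
     sas2flsh -o -f 6GBPSAS.fw
     REM flash the LSI IT firmware and BIOS
     sas2flsh -o -f 2118it.bin -b mptsas2.rom
     REM restore the SAS address printed on the card's sticker
     sas2flsh -o -sasadd 500605bxxxxxxxxx[/pre]
     Whether the same sequence applies to the H310 is exactly the open question above.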
  3. I followed this post and set up a file for myself. Unfortunately I get this error log:
     [pre]failed parsing crontab for user root: /boot/auto_s3_sleep.sh 1> /dev/null 2>&1 (Minor Issues)[/pre]
     My crontab -l looks like this:
     [pre]# If you don't want the output of a cron job mailed to you, you have to direct
     # any output to /dev/null. We'll do this here since these jobs should run
     # properly on a newly installed system, but if they don't the average newbie
     # might get quite perplexed about getting strange mail every 5 minutes. :^)
     #
     # Run the hourly, daily, weekly, and monthly cron jobs.
     # Jobs that need different timing may be entered into the crontab as before,
     # but most really don't need greater granularity than this. If the exact
     # times of the hourly, daily, weekly, and monthly cron jobs do not suit your
     # needs, feel free to adjust them.
     #
     # Run hourly cron jobs at 47 minutes after the hour:
     47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null
     #
     # Run daily cron jobs at 4:40 every day:
     40 4 * * * /usr/bin/run-parts /etc/cron.daily 1> /dev/null
     #
     # Run weekly cron jobs at 4:30 on the first day of the week:
     30 4 * * 0 /usr/bin/run-parts /etc/cron.weekly 1> /dev/null
     #
     # Run monthly cron jobs at 4:20 on the first day of the month:
     20 4 1 * * /usr/bin/run-parts /etc/cron.monthly 1> /dev/null
     #
     # Import crontab entries from file
     # Start of custom crontab entries
     00 1 * * /boot/auto_s3_sleep.sh 1> /dev/null 2>&1
     # End of custom crontab entries
     # Import finished
     root@Tuerke:~#[/pre]
     Appreciate your help!
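     For what it's worth, the parse error is most likely caused by the missing fifth time field in the custom entry above - cron expects minute, hour, day of month, month and day of week before the command. Assuming the job is meant to run daily at 01:00, the line would read:
     [pre]# min hour dom month dow  command
     00 1 * * * /boot/auto_s3_sleep.sh 1> /dev/null 2>&1[/pre]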
  4. I experienced the same behaviour. I filled my cache drive and then I tried to write directly to the share on the array (same folder name as on the cache drive) and it reported "disk full". There was definitely enough free space on the share. I created a directory with a different name on the array and copying worked.
  5. My disk10 is not used in the share although it is configured to be. See attached screenshots. To help myself, I made the dirs manually on disk10 as they are in the share, and at the moment I'm copying to disk10/../... - that works. When browsing the share, I can see the content of disk10. But when I try to copy to the share, it says "disk full". What am I missing?
  6. Thanks for the links, but I have no clue what requirements apply to use them for e.g. 1080p/DTS. Are 8 bits per channel enough? Is 1.65 Gbps per channel enough?
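     As a rough sanity check of those numbers (assuming 1080p60 with 8 bits per colour channel): 1920 x 1080 x 60 Hz x 24 bit ≈ 3.0 Gbps of active video, and the required 148.5 MHz pixel clock is below the 165 MHz single-link limit, so three channels at 1.65 Gbps each should be sufficient. A DTS audio track adds only a few Mbps on top.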
  7. Would a mod please be so kind as to split the extender posts into a dedicated thread in the hardware section? Since I have CAT7 wiring throughout my house, what would be the best ethernet-based solution? I won't be able to pull HDMI cables. These kinds of extenders are rather pricey compared to a RasPi or a Roku!
  8. Someday you will have to update your mobo... If I had the choice, I would prefer to have nothing but the screen in my lounge.
  9. If I remember correctly, it was ironicbadger who streamed video over IP. He streams the video and USB of his Windows client (running in a VM on the unRAID server hardware) to his TV in the lounge. Search for him and follow his signature to his blog to see a video. So there is no need to have the server beneath your TV. Actually, this is a very interesting way to use the GPU of the unRAID server, which is doing nothing most of the time. Get some adapters to put the video onto the ethernet and back out at your TV. Control XBMC via a smartphone app. Save yourself a media player box. Correct me if I'm wrong... Edit: Here's the mentioned blog.
  10. I haven't enabled reconstruct write mode. Is it enabled by default in 5.0.4? Is there a log entry if it is enabled? Can I check the status from the CLI?
  11. Yes, writing to a disk in the parity protected array - not to the cache. The GUI shows writes to the share disk and to the parity disk. Cache is empty. I have to do some hashes to verify the data.
  12. I'm really wondering what's going on. I'm copying from my desktop via GB LAN to the !share! - not to the cache - at ~80 MB/s for about 10 minutes now! Both drives (share and parity) are ST4000DM000. Yesterday I copied to the same share (drive) at the usual 40 MB/s. syslog_16.12.2013.txt
  13. Yes ironic, to be honest, most of my virtualization knowledge comes from your blog. But there are still many things that one is assumed to know already. Imagine you have to explain virtualization to your wife... That's where it should start.
  14. I've been running unRAID for 4 months now and I'm a Linux noob. I'm very impressed by your skills with Linux and these things. Let me describe the situation from the perspective of a noob user. When building my server I was very pleased to be able to build an inexpensive and economical machine to consolidate all my drives in one place and have some fault tolerance. I set it up with some decent hardware as suggested in the wiki. When I started reading through this forum I didn't even know what ESXi was. Never heard of KVM and Xen and all those virtualization things. I've been following the various threads and I'm wondering where this is heading if unRAID eventually has to run in a VM by default because there won't be any more plugins that run on stock unRAID, only in a dedicated VM. At the moment I can say I understand "in principle" what you're doing here, but I feel like I'm missing something in order to be able to discuss it with you. I can find very well made instructions on how to set things up, but to understand the basics of virtualization ... nope. Perhaps one of you guys can put together the "ABC of virtualization" for the noobs in here? I dare say that many of us are simply overwhelmed by this topic just because we don't know what it means - hence the sparse feedback! Either keep it simple or make sure it's simple to understand. My concerns about this move: How does virtualization affect hardware requirements for the "bare metal" users? What will the minimal hardware requirements be? Is the VM host also running from a thumbdrive? While I like the idea of being able to run plugins in another environment without interfering with unRAID, and I like the idea of having plugins from a well maintained source - what is the downside? How does the choice of OS affect this? I'm well aware of the flaws of unRAID, e.g. stability of emhttp, plugin support, GUI appearance, security... and I would really appreciate a solution to this and a more consolidated way of development. But at first sight, running it in a VM adds complexity and calls for more knowledge (ABC of VM) in setting it up and maintaining it. Just as already highlighted in this thread, at the moment many highly skilled people work on many independent projects trying to accomplish more or less the same thing. What a pity! Because it's often redundant work and it's often a one man show, and when it comes to long term availability and support it will often end up in a dead end. Edit: noticed some of you feel like me...
  15. Somebody posted his trials with link aggregation. As far as I remember, the increase in speed was not significant. In another thread I read that link aggregation is most effective with multiple connections. That means you will only benefit if you access the server from several clients at once.
  16. That is true, but somewhere I saw a chart showing failure rates over temperature. The lowest failure rate was at ~40°C, rising with higher temps. 40-45°C is OK for me, but above 50°C I feel somewhat uncomfortable, although the spec says 60°C. WD Greens and the Seagate ST4000DM000 are specified for 65°C and 60°C respectively. You will probably have 4 or 5 120mm fans. I think one appropriately rated switch should easily handle that; at least the 5V rail delivers enough current.
  17. Indeed. You won't notice any heat problems until you do a preclear of some/all adjacent drives or a parity check. In normal operation you'll be just fine, but during the monthly parity check cooling will be problematic. Fan noise is basically a matter of rpm. If you have easy access to your case you could install fans that rev at 3000 rpm on 12V. Use a switch to run them on 5V at ~1200 rpm during normal operation and on 12V during parity checks. That should be possible but requires discipline, and it won't let you schedule the parity check automatically since you always have to switch the fans yourself. A skilled electrician could assemble a relay card and use the serial interface to do this based on drive temps. That would be the ultimate solution. I have a WD Black together with 2 green drives in an SNT 2131 backplane. I need to keep rpm/noise down because the server is sitting next to me. The installed fan has a flow rate of 35.7 m³/h at 1700 rpm. During a parity check the temps of the drives in this backplane rise up to 52°C. I will have to exchange the WD Black for a "green" drive. I'm very pleased with fans from Noiseblocker.
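      The relay/serial idea could also be approximated in software if the fans hang off a PWM header - a rough sketch, where the drive list, the 45°C threshold and the hwmon path are made-up examples, not anything from my server:
      [pre]#!/bin/bash
      # check the temperature (SMART attribute 194) of each drive and
      # switch the fans to full speed when any drive runs hot
      # note: querying SMART may wake sleeping drives
      DRIVES="/dev/sda /dev/sdb /dev/sdc"
      LIMIT=45                                  # °C
      PWM=/sys/class/hwmon/hwmon0/pwm1          # hypothetical fan PWM control

      MAX=0
      for d in $DRIVES; do
        t=$(smartctl -A "$d" | awk '$1==194 {print $10}')
        [ -n "$t" ] && [ "$t" -gt "$MAX" ] && MAX=$t
      done

      if [ "$MAX" -ge "$LIMIT" ]; then
        echo 255 > "$PWM"   # parity check / preclear: full speed
      else
        echo 100 > "$PWM"   # normal operation: quiet
      fi[/pre]
      Run it from cron every few minutes and the fans only rev up when the drives actually need it.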
  18. In fact it's not an SSD. A friend of mine replaced the old drives of his business server.
      [pre]== WDC WD2502ABYS-01B7A0 WD-WCAT19878839 ==
      == Last Cycle's Pre Read Time  : 0:45:59 (90 MB/s)
      == Last Cycle's Zeroing time   : 0:44:11 (94 MB/s)
      == Last Cycle's Post Read Time : 1:44:32 (39 MB/s)[/pre]
      Usually the drive is empty, so writes are well above 100 MB/s; the preclear report is an average value. Considering that 125 MB/s (minus overhead) is the maximum gigabit speed ... I was really astonished by the throughput of this drive. I have 500 GB Seagates and Maxtors (consumer drives) that perform like this:
      [pre]== ST3500641AS 3PM0CMG3 ==
      == Last Cycle's Pre Read Time  : 3:15:47 (42 MB/s)
      == Last Cycle's Zeroing time   : 3:19:51 (41 MB/s)
      == Last Cycle's Post Read Time : 5:54:07 (23 MB/s)[/pre]
      If you go with a current server grade HDD as cache drive you can easily max out the NIC.
  19. 1-4: no experience with virtualization here.
      5. SSD: not really necessary as cache, since the NIC is the bottleneck - a fast HDD will be fine. An SSD may be good for virtualization and/or plugins?
      6. The mobo connectors look like usual SATA.
      7. Nice case, but again no experience with it here.
      Cooling: You won't be happy with the WD Black and the backplanes if you want the rig to run at a low noise level. Look at the small holes in the backplanes - they limit the air flow. Unfortunately this is common to all backplanes... small holes in the PCB. If possible use the backplanes with "green" drives, or better, avoid the backplanes. I'm not even sure if unRAID supports hot swap. If you plan to use the server for storage, use "green" drives: WD Red series or Seagate NAS drives.
      Power distribution: I suppose your PSU has modular cables. I took one of the power cords and carefully removed the connectors. Somebody posted a source in the US where they can be ordered - I had to improvise. Then I placed them in the gaps on another cable that I planned to use, because I found the distance between 2 connectors was too far. Depending on the cable length you can easily string together 8 drives with one wire. In addition to that I used these forward breakout cables. With them I need 2 connectors from the PSU to power 8 drives, and they help to tidy up the case. Unfortunately they're FSC spare parts and neither easy to get nor inexpensive. Look for part number T26139-Y4023-V501.
      HBA: Check the controller list in the wiki. You will probably want an 8-port PCIe x8 controller - either an HBA or a SAS RAID controller that can be flashed to IT mode. It's all in the list. Make sure to get forward breakout cables!
      If your case is fully populated one day, you will have to find another place than your man-cave for it. Silent fans are good, but you can't silence 20+ drives. Luckily they don't all spin at once all the time...
  20. In 2009 I bought a QNAP with a net capacity of 4.5 TB. Shortly after that I noticed unRAID, and if I hadn't already bought the NAS I would have built my unRAID back in 2009. Some months ago I experienced a drive failure and lost some data. Although it was no critical loss, I had to react. I had many drives with all kinds of data spread across different enclosures, since the NAS had no capacity left. I figured that each of those drives could fail and then I would probably lose more valuable data, so I had to improve the situation and get at least single drive redundancy. The key points that make unRAID so interesting to me are:
      - single drive fault tolerance like RAID5
      - even with more than one drive failure you don't lose everything on the array
      - data is recoverable from each drive without the need of having the array working
      - no limitation on drive sizes like in a RAID5 (finally put those 250GB and 500GB drives to good use) - I had many different sized drives storing data at home
      - easily assembled server with older, cheap hardware
      - possibility to grow a really big data store at relatively low cost (compared to a commercial solution)
      - working, stable file server software (core)
      - enhance functionality with plugins of all kinds
      - knowledgeable and friendly community
      Cons (after using unRAID for about 4 months now):
      - apart from the basic file server, most other features have to be added through plugins
      - plugins on the other hand may interfere with the GUI, causing instability
      - to bypass this it is recommended to run unRAID in a virtual machine and the plugins in another - adding more complexity and hardware requirements
      Compared to FlexRAID and SnapRAID - I don't have them running, but see here. I'm only using Samba shares. Performance is good. When writing to cache I can max out the GB LAN.
  21. Could somebody here please adjust the syntax in this script?
  22. He should try it. I read somewhere that people have been successful in doing so.