Brucey7

Members
  • Posts: 304

Everything posted by Brucey7

  1. I would try a different file system. I suspect your problem might disappear.
  2. You would be astounded at what goes on in Thailand. There is a company listed on the Thai stock exchange with stores in every major city, and all it sells are pirated VCDs, DVDs and Blu-rays. The printing on the boxes is low resolution, although the discs themselves are usually perfect. You raise an interesting point: if you previously bought a VHS tape, you have a non-exclusive licence to the movie. I wonder what the legal situation would be if you downloaded a high-definition version of it.
  3. Do you mean I shouldn't be downloading lots of movies from thepiratebay etc.? I am shocked! ;-)
  4. Forgetting the red-ball issue, which is another problem entirely, the problem you are describing is a well-known bug in the implementation of the Reiser file system on some platforms, not just unRAID (but it fails "safe", so it's never been fixed). You are probably finding it occurs on a nearly full disk? Once it times out and you restart it, it is OK some of the time? If you have another copy (read or write) running from the same client, that fails also? It will go away in unRAID 6 if you move away from the Reiser file system.
  5. I too had a bad port on a Norco. First the drive appeared but unformatted; attempting to format it caused unRAID to loop and become unresponsive (although when formatting a bunch of disks at the same time, unRAID correctly recognised the problem and skipped over that port, which is probably a bug in unRAID). In diagnosing the fault with new disks (it was actually a brand new system), it trashed the filesystems on the disks. The disks were actually blank, but formatted for unRAID. When I put the disks back in other slots, unRAID started normally, but then hit a write error attempting to write to any disk that had been in that faulty Norco slot. All those disks needed formatting with something else first, or pre-clearing again.
  6. A typical computer PSU is the most economical way of providing power to 12 volt swimming pool lights. Swimming pool lighting manufacturers haven't worked out that a switching power supply is by far the cheapest way of providing 12 volts at high current. They are still selling huge, expensive transformers.
  7. I have found that the WebGUI is quite sensitive to bandwidth between your client and the server. If you're on WiFi and maxing out your bandwidth, e.g. copying large files, that is often enough to hang the WebGUI. A reboot solves the problem.
  8. Partly, it's a real pain. Annually, we bring a server over in the car (that's the painful bit) and then sync it using Beyond Compare (which is a stunning piece of software). Monthly, we use Beyond Compare to copy all new/modified files and folders since the last sync date to a new disk and post that (the same idea is sketched after this list). I dare say we could do it with a VPN, but our internet lines are maxed out 24/7, mine with movies and his with TV series. Other methods we have considered: replacing a disk and rebuilding parity on the donor machine, then posting the pulled disk; or popping the pulled disk into the recipient machine, doing a new config and rebuilding parity.
  9. Am I correct in assuming that choosing one of the new file systems gives a file system per disk? In other words, with 5 ReiserFS disks, 5 XFS disks and 1 parity disk, is it still 5 Reiser file systems and 5 XFS file systems? Or is it 5 Reiser file systems and 1 XFS file system pooling the 5 XFS drives?
  10. Here is something for you to think about. If the main use of your unRAID system is to store movies and you got 2% bitrot in a movie file, would you want (a) the filesystem to tell you and lock you out of the file, or (b) to be blissfully unaware, since with 2% corruption most movies are still playable and you may not even notice? I think it's important that future versions of unRAID keep a separate file system per disk (as with ReiserFS); this is a big attraction of unRAID to me.
  11. I use another unRAID user. We mirror our movie/TV collections and periodically sync our machines. All my personal stuff, photos etc., I keep on multiple systems and in the cloud.
  12. The value you declare is down to you, but the import taxes will be based on it. I pay Shipito to remove invoices, photograph everything, repack it all in one box, secure it with security tape and choose a trackable freight option (typically DHL). I prefer to under-declare and take the risk; there's something very pleasurable about getting one over on the tax authorities after years of being stung by them.
  13. You declare the value yourself. For $1 they will remove invoices from the parcels before shipping, so yes you can declare a lower value. But you can only insure it for the value you declare.
  14. Well, I did say 16TB disks "possibly as soon as 2 years"; granted, that's the lower end of my prediction. I bought my first unRAID system with 20 3TB disks, and that was before v5 was released as stable. I am already filling up a new tower with 6TB WD Reds, and I will be wanting those 16+TB disks as soon as they are available. I would buy them today if I could.
  15. Has anyone tried using snapRAID as a backup solution for their unRAID? The ability to have (say) triple-parity backup taken (say) once per month would interest me. Perhaps a Windows client with 3 big parity disks in it could take weekly or monthly snapRAID snapshot backups, although there is no reason why it couldn't be updated every night (a rough configuration is sketched after this list). Reading the forums, I see a lot of people view snapRAID as a competitor product. It doesn't have to be that way; it's probably an excellent way of providing periodic snapshot backups of your array.
  16. I would disagree; any commercial organisation using ReiserFS will already be migrating away or have its plans in place. It's right around the corner, possibly as soon as 2 years. Those of us who remember Y2K were already in full swing 2 years before. A commercial release by a supplier needs to be available 12-18 months before a deadline, and that makes it this year, guys.
  17. Greenleaf gave me a tip-off about Shipito. I opened an account with www.shipito.com; it's a bit of a pain to open the account fully, and a bank transfer of funds to seed the account seems to help a lot with verification. Greenleaf delivered 2 20-bay servers there and NewEgg delivered 4 WD Red 6TB drives. Shipito unpacked the drives and repacked them in one box, reducing the shipping costs massively; they also have globally negotiated rates with DHL and FedEx which are better than those available to the average Joe. I was able to ship 2 servers for less than the cost of 1 shipped conventionally. One of their services is to open the boxes and remove the invoices, which can help with the customs declaration and the ultimate import tax payable ;-) This is also a great way to return drives that fail and only have a USA warranty: you can post them back to the manufacturer internationally and give your address as Shipito, who will email you with photos when a parcel arrives for you. If anyone in Thailand needs anything, let me know.
  18. A couple of weeks ago, I received my second and third 20-bay towers from Greenleaf. Great service. This time I didn't have to fly one of them and his girlfriend over to Thailand to deliver them :-) I did get quotes elsewhere on this forum, but after being accused of being a wind-up merchant because my IP address hops between Thailand and Sweden (how many of us don't prefer to BT anonymously?), it made the decision to re-engage with Greenleaf an easy one. Thank you, guys.
  19. So I shut down my system, replaced the 4TB parity drive with a 6TB WD Red, and when I turned it on, one of my two-week-old 4TB data drives was red-balled. I suspect drives can begin to fail without it being reported until a reboot. After some reading, I figured out the Parity-Swap process and began copying my 4TB parity drive to the new 6TB drive. I made the mistake of periodically refreshing the WebGUI to check on progress and, sure enough, it hung after a while. Not to worry, I thought; I would just let it complete, power off the system and power it on again, ready for the second part of the process: rebuilding the array using the old parity drive as a data disk. Unfortunately, owing to a slight bug, even though the copy to the new parity drive had finished, on reboot the system did not recognise it as a parity drive, and I had to repeat the whole process (this time resisting the temptation to check on progress). All finished successfully on the second attempt.
  20. Do you have AFP enabled, or have you ever enabled it, even once just to test or by accident? If any folder on the cache drive has a folder with the same name on an array drive, the mover script can't move it off the cache onto the array. If you move the folder from all array drives back to the cache drive, then the mover will move it to the array like it's supposed to. However, be careful, as unRAID has a bug which can permanently delete files if you try to move from the array to the cache, under some conditions I'm not entirely sure I understand. I suspect the pauses I experience are probably causing the problem: e.g. the mover attempts to move a folder, creates the blank folder, then times out whilst trying to copy the data files, and next time it fails to move the folder because it already exists (a small script for spotting these duplicates is sketched after this list). I probably did enable AFP at some time or other; I can't remember what I had for lunch, let alone the last 3 years. My current server only has 2 user shares, but they span over 60TB with thousands of files and folders. I have a second server shipping this week from Greenleaf; then my servers will each contain only 1 user share.
  21. This is not a solution, but as a workaround it's worth setting up a cache disk and modifying your shares to use it. Since the cache disk is completely outside of the array, it shouldn't have these issues when you copy to the share (the copy is redirected to the cache disk). The mover transfers the data to the array at 3am or 4am by default, which means the problem should become somewhat invisible to you. It's obviously preferable for the issue not to occur at all, but this could at least help mitigate the impact. I used to have similar issues, but since adding the cache drive I don't (however, I've also replaced CPU/MB/RAM twice since I last saw the issue, so I don't know that the cache drive was 100% of the solution).

      For my part, I used to use a cache drive but found it too unreliable; I ended up with files and folders stuck permanently on the cache drive.
  22. I don't have any pauses streaming from my server to a Windows client, but when copying a new folder to a 10-disk user share, I get a pause of anything up to a minute whilst it creates the folder, then another pause before it starts writing the file; sometimes the pause is so long there is a timeout and I have to start over again. There are often pauses in the middle of a long file copy for the same period before it resumes (usually). I have just learned to live with it. Incidentally, it makes no difference if I map a share to the drive and do the same copy: I still get delays and sometimes a "server cannot be found" error. I ran reiserfsck on all disks; all fine. I suspect it's something to do with writing to the flash, and perhaps everything stops for that.
  23. My systems are purely for storing home media, but with over 15,000 movies, mostly large HD 1080p, growing at 30-50 per day, and over 35,000 TV episodes growing at the same rate, that's typically 3TB every month. Scalability is important to me. My apps run on a client (Windows is designed for running multiple apps, so that's where I run mine); my data is on the servers. I don't put apps on my servers, and vice versa, although I do hold the catalogues for Ember and XBMC on the home theatre clients, simply because with vast amounts of data it's too slow to bring them down a cable on demand. My clients are SSD-based because of that. I am also from the UK, but I live in Thailand.
  24. I retired in 2007. Virtualization had been the focus of management consultancies for many years before then, simply to reduce costs; it had nothing to do with making more use of processors. Most corporates had many servers in their datacenters, and by consolidating them with virtualization they could save money. That was the business driver (less hardware, smaller footprint, lower electricity bills). I would rather have one large server than 2 small ones any day.

      unRAID (IMHO) is not ideal for most business users. We could talk about the technical reasons, but at the end of the day it's just not secure enough. Suppose Tom gets run over by a bus (or assassinated by unhappy users whilst MIA): where's the support? Is the source code open source? Is it lodged in escrow somewhere? Where's the fallback plan, and where's the fallback plan for the fallback plan?

      I can't see any reason why anyone would reasonably want to go virtual at home if unRAID were sufficiently scalable. Virtualization is complex, brings a whole new set of problems and is for propeller heads. I would rather go out for a meal than study another manual. If you can run that extra add-on on a separate box, then don't waste even a minute trying to get it onto your server: KISS. All these fancy extras are just noise.

      All I want is a scalable, cost-effective, reliable file server with a low cost of ownership. I don't want to be waiting for large-disk support (and I wouldn't be surprised to see disks bigger than ReiserFS will support as soon as next year). I want a button to turn it on, a button to turn it off, and to grow it at 3TB every month. The vocal minority are the ones who don't simply leave it in a cupboard to do "what it says on the box" but write on here about this or that new feature; I strongly suspect most users just want it to work out of the box.

      I have another concern: by increasing the complexity of unRAID, a small development company might be tempted to save time and not document the code properly. There are usually lots of warning signs if that were the case, and guess what: they're all here. Servers are an enabler and a necessary evil; they aren't an application.
  25. IMHO there is far too much "noise" in the direction unRAID is going. It's a fileserver system for storing data, and that is all it should do, to the best of its ability, in the simplest, safest way at the minimum cost. I have zero interest in virtualization, Docker, unMENU, Sickbeard or indeed any add-ons whatsoever, and I view all development in this direction as bad news (increasing risk). I keep my add-ons where they should be: on a completely separate system.

      We are getting perilously close to the volume limit for a disk in ReiserFS; disks will probably exceed it within 5 years. 1 parity disk is not enough. 64-bit support is useful. Support for more than 30 disks is useful. Everything else is noise and limits the attraction of unRAID to anyone doing a serious study into what to buy to store their growing collection of home data on. And let's be honest, that is the current and future market for unRAID; it's not the hardcore band of hobbyists and enthusiasts who want it to make them a coffee or tea first thing in the morning whilst singing Yankee Doodle Dandy.

      I currently have multiple RAID6 servers, an unRAID server filled to capacity with 22 drives and another system on order from Greenleaf. I am a retired global IT director of a Fortune 500 company, and I keep my servers where they should be: out of sight. If one of my staff had tried to run applications on a mission-critical SAN, he would have been fired on the spot. I rely on my fileservers to serve data on demand, with minimum TCO and maximum reliability; as the saying goes, KISS. I would far prefer to run on one big, safe, economical, reliable, expandable server, and for what it's worth, that's the only development I would like to see from Limetech.
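
On the sync method in post 8: Beyond Compare handles the "copy everything modified since the last sync date" step with a date filter. Purely as an illustration of the idea (the paths and cutoff date below are hypothetical, not taken from the post), the same logic looks roughly like this in Python:

```python
# Sketch of the "copy all new/modified files since the last sync date"
# approach from post 8. Beyond Compare does this with a date filter;
# this only re-expresses the idea. SRC, DST and LAST_SYNC are assumptions.
import os
import shutil
from datetime import datetime

LAST_SYNC = datetime(2014, 6, 1).timestamp()  # date of the last full sync
SRC = "/mnt/user/Movies"                      # hypothetical source share
DST = "/mnt/transfer"                         # hypothetical disk to post

for root, _, files in os.walk(SRC):
    for name in files:
        src_path = os.path.join(root, name)
        if os.path.getmtime(src_path) > LAST_SYNC:
            rel = os.path.relpath(src_path, SRC)
            dst_path = os.path.join(DST, rel)
            os.makedirs(os.path.dirname(dst_path), exist_ok=True)
            shutil.copy2(src_path, dst_path)  # copy2 preserves timestamps
```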
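
On post 15: for anyone curious, a triple-parity snapRAID setup for periodic snapshot backups might be configured roughly like this. The mount points and file names are assumptions for illustration only; after editing the config, running `snapraid sync` takes the snapshot and `snapraid fix` restores from it.

```
# Hypothetical snapraid.conf for periodic snapshot backups with triple parity.
# All paths are assumptions, not taken from the post.
parity   /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity
3-parity /mnt/parity3/snapraid.3-parity

# Content files list what is protected; keep copies on more than one disk.
content /mnt/parity1/snapraid.content
content /mnt/data1/snapraid.content

# Data disks to protect.
data d1 /mnt/data1/
data d2 /mnt/data2/
data d3 /mnt/data3/
```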
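
And on the stuck mover in post 20: the blocking condition is a top-level folder that exists on both the cache drive and an array disk. A minimal sketch for spotting these duplicates, assuming the standard unRAID mount points (/mnt/cache and /mnt/diskN):

```python
# Sketch: list top-level folders present on both the cache drive and any
# array disk; duplicates like these are what block the mover script.
# /mnt/cache and /mnt/diskN are the usual unRAID mount points.
import glob
import os

cache_root = "/mnt/cache"
cache_dirs = [d for d in os.listdir(cache_root)
              if os.path.isdir(os.path.join(cache_root, d))]

for disk in sorted(glob.glob("/mnt/disk[0-9]*")):
    for name in cache_dirs:
        if os.path.isdir(os.path.join(disk, name)):
            print(f"'{name}' exists on both {cache_root} and {disk}")
```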