
outsider

Members
  • Posts: 148

  • Joined

  • Last visited

Everything posted by outsider

  1. Wow! Thanks for that link! I'm currently speccing a new server and was planning on using an E5-1603 to start, but at that price, I can't let it pass. I'd buy 2 if I could find a cheap consumer desktop board for under $200 (and upgrade my desktop rig as well).
  2. Thanks for the input. The strategy has always been to buy as much used equipment as possible off eBay. Pulled Xeon CPUs seem to be significantly cheaper than their consumer brethren, likely due to a smaller market for offloading those CPUs. I've seen current hardware (LGA2011) going on eBay for considerably less than new, so I'm keeping an eye out for Supermicro mobos and E5 CPUs. I guess the better plan is to buy a low-end CPU now (on a single-socket mobo), upgrade the CPU when I need more horsepower, and then later, if even more horsepower is needed, move up to a dual socket... by which point there will likely be some better, more energy-efficient socket/CPU that I'd want to be on anyway. Makes sense.
  3. I'm in the market for a new build for my unRAID box, and I'm curious what everyone's take is on future-proofing your system. Do I look at an LGA1366 socket server mobo with a Xeon 5500/5600 CPU (near the high end of the performance for those chips)? Or do I spend double and get an LGA2011 mobo with a low-end E5 CPU (with similar performance to the 5500/5600 chips) but have the option to upgrade just the CPU to a faster one in a couple of years? Or spend a little more still on a dual-socket mobo to allow for even further expansion in the future? Is it worth buying more expensive components (like a dual-socket mobo) where one socket stays empty for a while in the hope of adding something there later, or is it better to just buy a new system "later" that fills the needs at that point in the future? What do you think?
  4. I was wondering if this specific use case of the cache drive is possible. Currently most people use cache drives to improve write speed to their array, and the mover script removes that data as per the scheduled settings. Using an SSD is great, but for the most part it leaves the SSD almost always empty. All the reading from the array takes place off platter HDDs, which are much slower than an SSD. You only read data off the SSD (at SSD speeds) up to the point when the data is moved by the mover script.
     Is it possible to somehow set up the cache drive (and the mover script) to keep a maximum amount of data on the SSD so that both reads and writes are accelerated, while the data is still copied to the array for protection? Somehow set up the cache drive to maintain a certain amount of free space for new writes (say 20-30% of SSD capacity?), and otherwise keep the newest stuff that is written to it on the SSD. The mover script could copy the data to the array for redundancy like it currently does (but just copy it, not move it). If I stick a 256 or 512GB SSD in as a cache, I'd like to make use of as much of it as possible, not just the 10-30GB that I write to it daily.
     What I'm thinking of (in my limited Linux knowledge) is an rsync script that runs hourly and syncs the data from the SSD to the array, and maybe another script that cleans up the SSD to always maintain a certain preset percentage of free disk space (by clearing out the oldest modified files and leaving only the newest files on the SSD). A rough sketch of what I mean is below. Does that make sense? Is this possible? (Has this already been done?) Any ideas how to implement this?
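     Something along these lines is what I'm imagining -- completely untested, and the paths (/mnt/cache for the SSD, /mnt/user0 for the array-only view of a share) plus the 25% free-space figure are just placeholders for my setup:

        #!/bin/bash
        # Untested sketch: keep the SSD cache populated for fast reads, but make sure
        # everything on it also exists on the parity-protected array.
        CACHE=/mnt/cache/Media      # placeholder: the share as it lives on the SSD
        ARRAY=/mnt/user0/Media      # placeholder: the same share on the array disks only
        MIN_FREE_PCT=25             # keep at least this much of the SSD free

        # 1) Copy (not move) new and changed files to the array for redundancy.
        rsync -a "$CACHE/" "$ARRAY/"

        # 2) If the SSD is filling up, delete the oldest files from the SSD only.
        #    They were just copied to the array, so nothing is lost; they simply
        #    stop being cached.
        while true; do
            used_pct=$(df -P /mnt/cache | awk 'NR==2 {gsub(/%/,"",$5); print $5}')
            [ "$used_pct" -le $((100 - MIN_FREE_PCT)) ] && break
            oldest=$(find "$CACHE" -type f -printf '%T@ %p\n' | sort -n | head -1 | cut -d' ' -f2-)
            [ -z "$oldest" ] && break
            rm -f "$oldest"
        done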
  5. Did you make any headway in getting Windows VMs to run more smoothly under unRAID?
  6. I've noticed some weird behavior with the "CrashPlan-desktop" docker image. If "CrashPlan-desktop" is running when there is a sudden loss of power to the unRAID system, I can't RDP back into it when the system boots up again. I have to uninstall the "CrashPlan-desktop" image and re-install it to get RDP access working again. Is there something else I can be doing to the "CrashPlan-desktop" image to FIX things rather than just reload/reinstall the image file?
  7. I'm not able to connect to the WebUI for Pydio. I get a blank screen in Firefox, and a "Server Error 500" if I bring up the WebUI in Chrome. Is there any more configuration that needs to be done to the docker package (other than assigning /config and /data)? Looking at the docker log file, this comes up every time I start the container. I've let it sit for hours in the hope that the "refreshing packages" step might just take a long time, but still no web interface. What can I do?
  8. Linking 2 gigabit connections does not increase the rate of sending one file. What it gives you is the ability to move two files at that speed at the same time. Try your transfer with 2 files and see if they are both coming in at the same rate.
  9. Further to what archedraft said, the file infected with the virus needs to be executed in order for the virus to be activated. Just having a file with a virus in it on your computer (regardless of OS) doesn't mean the virus is killing your system. It will sit there until you execute the file. As unRAID is a storage device, it doesn't go around trying to run every file on the hard drives in the array. It just stores the data.
  10. There are many ways to skin a cat, but I would suggest starting by creating a 3-disk array using the non-Windows disks. Let the system format them (1 parity and 2 data), then mount one of the Windows disks on the system and copy its data to the user share. Once the first Windows disk's data is copied off, add it to the array (and format it), and repeat the process with the second and third Windows disks (copy the data off, then add the disk to the array). A rough example of the mount-and-copy step is below.
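      For the copy step, something like this is what I mean -- the device name, mount point, and share name are just examples, and it assumes your box can mount NTFS (stock unRAID may need the Unassigned Devices plugin or similar for that):

         # Mount the old Windows disk read-only (sdX1 is a placeholder -- check with fdisk -l)
         mkdir -p /mnt/windisk
         mount -o ro -t ntfs /dev/sdX1 /mnt/windisk

         # Copy everything into a user share on the array, keeping timestamps
         rsync -avh --progress /mnt/windisk/ /mnt/user/Media/

         # Once the copy is verified, unmount before re-assigning the disk to the array
         umount /mnt/windisk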
  11. What's the expected behavior of a cache drive failing while the array is online? In the event of a cache drive failure, will unRAID simply skip the cache drive for the next file and continue to write data to the array without much of a hiccup? Or is this a catastrophic system halt that requires manual intervention to fix and bring the system back online? Has anyone experienced this?
  12. There's also Pydio, which is similar to ownCloud.
  13. I too noticed the 6.2 beta and went searching for it, thinking it was just me not keeping up to date. There are a few videos that LinusTechTips have put together where they're running the 6.2 beta. One thing of interest I noticed was two parity drives. Anyone else catch that?
  14. I learn something new every day! I remember back in the unRAID 4.x days being told to take a screenshot of the unRAID page (drive order) so that in the event of a new config, the drives could be arranged in the same order. When did that requirement go away?
  15. How difficult is it to update this container (or create a new one) to run the beta version of Madsonic (version 6.0)? I'd like to play with it, but have no experience with docker container management. Has anyone else done this already who can share their container?
  16. I've been searching for any info on btrfs and snapshotting but have not found much on this forum, specifically on how unRAID handles the snapshots. To my understanding, snapshots are available in hidden folders on the file system. I am considering adding an additional drive to my unRAID array, formatting it with btrfs, and enabling snapshotting on a volume, along the lines of the sketch below. Has anyone tried this? Anything I should look out for? Thanks!
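      What I have in mind is roughly this -- untested on unRAID, and the disk number, subvolume, and snapshot locations are just examples on my part:

         # Create a subvolume on the btrfs-formatted data drive to hold the share
         btrfs subvolume create /mnt/disk3/data

         # Take a read-only snapshot of it, named with today's date
         mkdir -p /mnt/disk3/snapshots
         btrfs subvolume snapshot -r /mnt/disk3/data /mnt/disk3/snapshots/data-$(date +%Y-%m-%d)

         # List the existing subvolumes/snapshots on that disk
         btrfs subvolume list /mnt/disk3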
  17. You can swap the drives all you want, but when you're done you'll have to rebuild the parity drive.
  18. I've wondered this for a while: with a parity disk in use in an array, does unRAID protect the entire data drive (block level) or only the data that is located in the "User Share" folders? I'm wondering, if I create other folders (outside of the user shares) on one of the disks, whether that data is actually protected (and recoverable) in case that specific drive dies. Thanks
  19. I found the "my.service.xml" file in the conf folder of the CrashPlan container, but I can't find the "ui.info" file. Where is the "ui.info" file located?
  20. Thanks for the reply Rob. You make a good point about restricting a certain file extension not being an effective way to fix the problem. Read-only is good for some things (like multimedia), but I can't make the network folder I work in read-only; it needs read-write permissions, unfortunately. It should be straightforward enough to pipe the output of the find command into rm and remove only the infected files, then recover those files from backups. Something like the commands below is what I'm planning.
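      (I'll run the plain find first to review the list before deleting anything -- /mnt/user is just where my shares live, adjust to taste:)

         # Review what would be removed
         find /mnt/user -type f -name '*.vvv'

         # Then actually delete the encrypted copies (null-delimited to survive spaces in names)
         find /mnt/user -type f -name '*.vvv' -print0 | xargs -0 rm -v

         # ...and restore the originals from the offline backups afterwards.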
  21. Is there any way to have unRAID restrict the creation of certain files based on filename? The reason I ask is because I just got hit with the TeslaCrypt virus and a bunch of my files on the unRAID server got encrypted into .vvv files. (This is what TeslaCrypt does; it encrypts your files using RSA-2048 encryption and then they ask for money to decrypt your files.) Luckily I have offline backups of my files, but it's still a pain to have to go and find all the corrupted files, get rid of them, and replace them with good copies from the archives. I wasn't really planning on spending my weekend finding and replacing files... argh. If there were an easy way to tell unRAID to not allow the creation of .vvv files, that would certainly help if this happens again (one idea I want to look into is sketched below). Any thoughts?
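      For what it's worth, Samba has a "veto files" share option that blocks matching filenames from being created or seen over SMB, which might cover this case. Something like the lines below is what I mean -- the share name is just an example, and exactly where the extra Samba config goes on unRAID (e.g. an smb-extra.conf include) is an assumption on my part:

         [media]
             # Refuse to create or show any *.vvv files on this share
             veto files = /*.vvv/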
  22. Has the "Stale NFS Handle" problem been fixed? I haven't seen any more posts about it since 5rc7 or so. Just wondering if I should upgrade to rc10 and start using NFS again or if I should wait. (currently using samba only even on my linux machines on version 5rc8a)
  23. What's the status of the "Stale NFS" issue that was present in most rc versions? What's everyone's experience with NFS in rc6?
  24. Where does one acquire a copy of rc6? I thought the latest release was rc5.
  25. I blew away my current settings, put on a fresh copy of 5.0-rc1, and started from scratch. Assigned the drives and waited for the initial parity check to finish. Speed results were identical to those in my last post. I then successfully downgraded to 5.0-beta14a. (Again, I blew away all the settings and started from scratch to eliminate any bad settings I may have had.) Now when I run dd if=/dev/zero of=output.dat bs=1024 count=102400 I get around 30-40MB/s on a disk, and 8-12MB/s when doing it in a user share. I'm happy.