Everything posted by stewartwb

  1. This single M.2 drive is currently set up as the cache drive, with the intent to store virtual drives for VMs and any Docker file storage needs. We formatted it ZFS, assuming newer is better, but I would have used btrfs previously for cache drives. We are still using XFS for the Array drives.
2. I'm helping my son set up unRAID on a new server (Core i7-13700K, 32GB RAM, Solidigm P44 Pro 2TB SSD cache, 3x4TB protected array). He plans to run a bunch of VMs, including his primary Windows workstation, passing through an Nvidia RTX 3070 video card. I've used unRAID for more than 10 years, but ZFS support is a new feature. We don't plan to set up a ZFS pool with multiple drives, just a single PCIe 4.0 M.2 drive. Which file system is best for this use case? I know ZFS supports advanced features, but I'm mainly interested in good performance for his virtual machines. Can anyone offer first-hand knowledge of performance benefits / penalties for ZFS over btrfs for this scenario? Thanks! -- stewartwb
  3. Gary - I've not scoured the forums looking for your system config, but I would have to guess that your performance issue is related to your Atom D525 and supporting chipset interacting negatively with a newer kernel or driver version introduced in unRAID 6.xx. That said, one of my servers is still running an old AMD Athlon 64 quad-core CPU, and its performance improved with 6.xx releases. Two quick questions: Are your data drives formatted XFS? Are you using any sort of disk controller card to attach your drives? -- stewartwb
  4. I appreciate the info on compatibility - thanks! I found the direct GIT URL by searching this thread, and all is good. One server upgraded to 6.6, the other is on its way.
  5. Quick question - I stayed on the sidelines for a while - still running version 6.3.5. I'm ready to upgrade, and I would like to run Fix Common Problems / Upgrade Assistant. However, I get an error installing the current plugin, since it doesn't support such an old version of unRAID. I've also tried to install the Community Applications plugin, but I get the same error. Is there a way to install the last version of FCP that still works on unRAID 6.3.5? If not, I'll just wing it, checking everything manually. Thanks!
6. @SSD - I agree, your saved credentials in Credential Manager look good - I see no sign you'll have the issue I had when my Windows 10 VM upgraded automatically. I'm still curious to see whether @meoge is experiencing the same issue I had, and whether removing old saved credentials could improve his server's behavior.
  7. @SSD - I was able to connect by name and/or IP to the unRAID web interface for both servers. Ping also worked fine for both by IP address. I just couldn't browse files using Windows Explorer (SMB). I tried adding a HOSTS entry to the server I couldn't reach, but that didn't help. Apparently after my Windows 10 VM was upgraded to 1709, the credentials it had saved to connect to Server A by IP and by Server Name would not work correctly, and it threw an unhelpful (Microsoft-standard?) error message each time I tried. The credentials are anonymous, not a custom name / pwd created in the unRAID GUI. Using Control Panel\User Accounts\Credential Manager, I deleted the saved credentials, and everything started working properly again.
  8. @meoge - you seem to have the same issue I was seeing, unable to connect via name or IP address. Have you tried deleting any credentials directed at your unRAID machine? That's what fixed it for me. I figured it out after I'd typed up my post, so I went ahead with the initial post and solution for others who may need it.
9. I'm at a loss, and could really use help troubleshooting this issue. I have two unRAID servers on my home network, both running version 6.3.5, both configured with static IP addresses. I'm using a basic WORKGROUP setup for Windows networking, not a Domain or Active Directory. (I'll refer to my older server as Server A and my newer one as Server B.)
Server A = AMD Athlon, 8GB RAM, 16TB, no Docker or VMs configured
Server B = Core i5 4950, 24GB RAM, 20TB, no Docker, KVM is hosting one VM
One Windows 10 Pro x64 client is running as a pass-through VM on Server B. That VM just auto-updated from Windows 10 v1703 to v1709 (Fall Creators Update). I've been able to access both servers via server name or IP address from any Windows machine or VM on my network for some time. From that VM only, I'm seeing weird behavior trying to browse files in Windows Explorer:
I cannot ping either server by machine name.
I cannot browse files on either server by machine name.
I can ping both Server A and Server B by IP address.
I can browse files on Server B by IP address.
I cannot browse files on Server A by IP address - I get Error Code 0x80004005, Unspecified Error.
From other machines on my network, both Windows 10 v1703 and Windows 7, I can browse files by name or IP address just fine. If I change the static IP assignment on Server A (say from x.68 to x.168), I can browse files by IP address from the VM. If I change that static IP back to its original value, I can't browse files by IP address. Here's what I've tried:
I've tried resetting the network settings (netsh int ip reset c:\resetlog.txt).
I've tried updating the VM's network driver, using the latest Red Hat KVM drivers (virtio-win-0.1.141.iso).
I've tried switching which unRAID server is set up as the WINS master browser.
I've entirely disabled the Windows Firewall and my AVAST antivirus software.
I've tried clearing the DNS and WINS caches using the Windows commands ipconfig /flushdns and nbtstat -R.
______________________________
After searching for hours, I just found the solution: Control Panel\User Accounts\Credential Manager. I deleted all saved credentials for that server, every entry, listed by server name and by IP address. After removing those saved credentials, I can browse the server again. I'm going ahead and posting this for documentation, in case anyone else runs into a similar issue. <edit - corrected references to Windows 10 v1703, and corrected typo in title>
  10. Updated, rebooted - no issues. web GUI seems a lot more responsive. Great work - thanks!!
  11. Reading today about a severe buffer overflow issue in glibc that affects DNS in Linux distros... https://threatpost.com/magnitude-of-glibc-vulnerability-coming-to-light/116296/ The issue has been in the glibc code since 2008, and most major distros have been patched fairly recently. Can anyone confirm whether the latest unRAID releases include the glibc patch for this issue? If so, any idea which version of unRAID is the earliest that includes this patch? Thanks!
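For what it's worth, here's a quick way to check which glibc version a server is running from a terminal session - a rough sketch only: the /lib64 path is an assumption and may differ on a given unRAID build, and because distros backport fixes, the version number alone doesn't prove the patch is applied.
#!/bin/bash
# Print the glibc version bundled with the running release.
# The loader path below is assumed; adjust if your build keeps libc elsewhere.
/lib64/libc.so.6 | head -n 1
# A more portable fallback: ldd reports the same glibc version.
ldd --version | head -n 1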
12. I have set up unRAID servers for a couple of friends. I have installed a Windows VM on each server to run various scripts to move photos and videos from DropBox to the server. I also have installed TeamViewer so I can connect remotely and assist with issues when they arise. Here's a use case that appears to be unsupported: connect to the Windows VM via TeamViewer (or your preferred remote support tool), open the unRAID web interface, use the Plugins page to update unRAID to the latest version, then reboot unRAID so the update takes effect. I can't do this because I can't start a clean reboot until I stop the array, but when I stop the array unRAID also shuts down the VMs. I wish there were a way to schedule a clean unRAID reboot once at hh:mm time of day, or mmm minutes from now. Is anyone else looking to do this? Have you found a workaround, perhaps using an unRAID BASH script or plugin? Thanks!
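One rough idea for the "mmm minutes from now" case - an untested sketch, and it assumes the powerdown script from the community powerdown package is installed at the path shown and performs a clean array stop before rebooting (verify that on your own server before relying on it):
#!/bin/bash
# Hypothetical delayed-reboot helper: wait N minutes, then reboot cleanly.
# Run it detached so it survives closing the terminal: nohup ./delayed_reboot.sh 30 &
DELAY_MIN=${1:-30}          # minutes to wait before rebooting (default 30)
sleep $((DELAY_MIN * 60))
# Assumption: powerdown -r (community powerdown package) stops the array cleanly
# before issuing the reboot; a plain /sbin/reboot may not.
/usr/local/sbin/powerdown -r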
13. It sounds like unRAID isn't picking up the settings in your config files on your flash drive. Assuming you're using Windows, try opening the network folder \\tower\flash\config. Your server name should be in the file ident.cfg, line #2, with an entry of NAME="Media". Your IP address should be in the file network.cfg. Yours should look something like this:
# Generated settings:
USE_DHCP="yes"
IPADDR="192.168.20.150"
NETMASK="255.255.255.0"
GATEWAY="192.168.20.1"
DHCP_KEEPRESOLV="no"
DNS_SERVER1="192.168.20.1"
DNS_SERVER2=""
DNS_SERVER3=""
BONDING="no"
BONDING_MODE="0"
I wonder if the config folder got damaged somehow on your key? I hope this helps you troubleshoot. -- stewartwb
  14. Given the complexity of unRAID storage configurations, there are differing opinions on the correct way to compute free space. See this thread http://lime-technology.com/forum/index.php?topic=28382.msg252487#msg252487 I think free space for a user share should be shown as the total available space remaining on all included drives. I don't think that cache free space should be included in that total. However, I can continue writing to a user share that is full, and the new data will sit in the cache drive, unable to be moved to the array. If a program queries free space before writing, it might abort due to lack of free space, even though there's plenty of space on the cache drive to complete the write operation. So, it's possible that the amount of Free Space should be reported differently depending on how that value will be utilized.
15. Much appreciated - guided tips like yours are an excellent way to learn. Thanks!! -- stewartwb ps - only 10 more posts until you reach 8192, or 2^13 - quite a milestone! Thanks for all of your contributions, Weebo!
16. This is great... I'm able to interpret and modify the script to meet my use case, although I know it's not elegant. I created a script for each disk, which I can run when I'm ready to build the hashes to validate the data migration after each file copy operation. Here's what I came up with to grab hashes immediately for disk3, though I should make the volume a command line parameter instead.
#!/bin/bash
find /mnt/disk3 -type f -exec md5sum -b {} \; > /hash/MD5_$(date +"%Y-%m-%d")_disk3.md5
find /mnt/disk3 -type f -exec ls -lc {} \; > /hash/TS_$(date +"%Y-%m-%d")_disk3.txt
I could even simplify further by dispensing with the date stamps.
#!/bin/bash
find /mnt/disk3 -type f -exec md5sum -b {} \; > /hash/MD5_disk3.md5
find /mnt/disk3 -type f -exec ls -lc {} \; > /hash/TS_disk3.txt
Again, thanks for your help! -- stewartwb
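For completeness, here's a rough sketch of the parameterized version I have in mind - untested, and the hash_disk.sh name and /hash output folder are just carried over from the examples above:
#!/bin/bash
# Hypothetical usage: ./hash_disk.sh disk3
DISK=${1:?usage: $0 diskN}      # which array disk to hash, e.g. disk3
STAMP=$(date +"%Y-%m-%d")       # date stamp for the output file names
# Hash every file on the disk and record its timestamps, as in the scripts above.
find /mnt/$DISK -type f -exec md5sum -b {} \; > /hash/MD5_${STAMP}_${DISK}.md5
find /mnt/$DISK -type f -exec ls -lc {} \; > /hash/TS_${STAMP}_${DISK}.txt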
17. Thank you, Alex - that's a great bit of documentation! I like your use case - monthly hashes to guard against silent corruption and bit rot. I seem to have survived the data corruption bug earlier this year, but I welcome a strategy to guard against undetectable corruptions in the future. Here's my current use case - I'm migrating my 2TB array volumes from ReiserFS to XFS. I'm using the checksum program (by corz) from Windows to validate each step before I wipe and convert my next data drive. My process takes about 7 hours per drive to compute the hashes because it has to ship the data over the network. I expect building hashes directly on the unRAID server would be faster. I noticed one difference between your method and my Windows-based method. The checksum program I've been using appears to add the '-b' option to consider each file as binary rather than text, which was shown in the output as an asterisk before the path/filename. Do you think there would be any benefit to adding that option to your script? I suppose the md5sum utility included in unRAID may automatically handle file types without needing this option. Thanks again for sharing your knowledge and your effort on this front with the unRAID community! -- stewartwb
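For anyone curious what -b actually changes: on Linux the hash itself comes out the same either way; the flag only switches the output line to the "binary" notation with an asterisk before the filename, which is the format the corz checksum files use. A quick illustration (the filename and <hash> are placeholders):
# Text mode (default): two spaces before the filename
md5sum movie.mkv      # -> <hash>  movie.mkv
# Binary mode (-b): an asterisk marks the entry as binary
md5sum -b movie.mkv   # -> <hash> *movie.mkv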
  18. Alex - thanks for posting your script. I've been searching for a method of generating MD5 hashes per disk, and your script looks like it might be the solution. I'm trying to run this script on my unRAID 6b10a server, but I don't know how to check the results, and I'm not able to interpret the script visually. Is it supposed to create a folder called "hash" on each disk, and drop a single MD5 file there? Also, am I correctly interpreting that each time the script generates hashes for a particular disk it will rename the file with a timestamp, so you can check for unexpected changes in MD5 hash values over time? Thanks in advance for sharing. -- stewartwb
  19. That feature was added in 6.0 beta 7. Use it to create a BTRFS cache pool with multiple drives for fault tolerance. Older versions only allowed a single cache drive, but now you can have multiple drives in a cache pool.
  20. Hi, Lars - here's my experience so far. I had a few disks involved... I had just updated from 5.05 to 6.0b8 and moved 350GB of data from one drive to three others, cleaning it off so I could switch my first drive to XFS. I don't have MD5SUM hashes of those files, but I've checked a bunch of music files with the FooBar2000 validation plugin, used 7-Zip to test a bunch of archive files, browsed large folders of photos, and manually checked the start and end of a bunch of videos. So far, I am not seeing any corruptions. Although we would need hashes made prior to unRAID 6.0b7 (which I don't have) to rule out scrambled files, based on what I've read, I suspect the issue will hit hardest those folks who were running applications that read and write to a bunch of small files. LT reported their source code and compiler work files got truncated or otherwise damaged, and other reports suggest damaged fan art and other meta data files from scrapers that download fan art for XBMC or the like. It sounds like large files would most likely be damaged at the start or end of the file, so that's what I've been testing, and I haven't seen any corruptions. I suspect some reports of corruptions in larger files may have happened long ago, but they are just being discovered due to increased scrutiny. I'm keeping my fingers crossed, and checking those files that I can't easily recreate. I also reverted my server back to 5.05 until 6.0 Stable is released, and I set up a test server for the beta version, as I should have all along. -- stewartwb
  21. It sounds very much like a former contributor (username GrumpyButFun) who was asked to leave the forum. IMHO, he often has good technical insight, but his tone is frequently belligerent and counterproductive.
22. I know unRAID now offers single-drive BTRFS volumes as an array drive format option, alongside the old standby RFS and the other new option, XFS. However, Tom mentioned some issues with BTRFS that appear to be due to its copy-on-write filesystem features. I don't see anyone mentioning these issues when debating which filesystem option is best to use for new array drives. Are the issues Tom mentioned irrelevant when BTRFS is used on an array drive? What I mean is, were the issues he raised only relevant when using BTRFS on the cache drive with multiple partitions and copying data around on that one drive? (And thus the reason he's implemented mounting BTRFS loopback volume files rather than requiring BTRFS on the cache drive to support Docker.) -- stewartwb
24. FAQ request: now that we have three options for the file system of our array drives, what does LimeTech recommend we do - stick with ReiserFS, convert to XFS, or convert to btrfs? Also, I assume our arrays can include a mix, with different filesystems on different drives in the array. Is this correct? If we plan to convert to a new filesystem, do you recommend we try to convert the entire array in a short timeframe, or just use the preferred filesystem as we add drives? Thanks!
25. The forum post referenced wasn't from me, but I have done this. Four screws and the circuit boards fall right off. Connections to the platters are just little copper fingers. Easy to replace the boards. It can be a pain to find the same circuit board with the same 'Rev' number, though. If you've got six years' worth of media, you probably have some old data drives... matching boards can otherwise be hard to source, but eBay might work. I agree with DaleWilliams - and if all of your dead drives are the same model / revision, you should be able to swap the same PCB from one drive to the next and recover data from all of them. That would be a neat trick! If you go this route, please keep us posted. Here's hoping that only the externally-accessible PCB was fried. -- stewartwb