Leaderboard

Popular Content

Showing content with the highest reputation on 03/31/17 in all areas

  1. Ok, finished the guide. Was a bit rushed, so sorry if it's a bit lower quality than normal.
    3 points
  2. Upgrade Instructions

     Clicking 'Check for Updates' on the Plugins page is the preferred way to upgrade. If the new version does not appear, you can manually update by selecting the Install Plugin tab, pasting this URL, and clicking Install: https://raw.githubusercontent.com/limetech/unRAIDServer/master/unRAIDServer.plg

     Alternately, the release may be downloaded. In this case it is usually only necessary to copy all the bz* files from the compressed folder to the root of your USB flash boot device (a sketch of this follows the item). Please also read the unRAID OS version 6 Upgrade Notes for answers to common issues that may arise following an upgrade.

     Changes

     This is a bug fix and security update release.

     Base distro:
     - reiserfsprogs: version 3.6.24 (downgrade to address reiserfsck regression)
     - samba: version 4.5.7 (CVE-2017-2619)

     Linux kernel: version 4.9.19
     - added CONFIG_BLK_DEV_PCIESSD_MTIP32XX: Block Device Driver for Micron PCIe SSDs (user request)

     Management:
     - emhttp: get rid of SO_LINGER on connection socket
     - emhttp: override array autostart if safe boot mode
     - emhttp: silence "Transport endpoint is not connected" messages
     - emhttp: btrfs cache pool set to raid1 only on new pool creation
     - syslinux: include "unRAID OS GUI Safe Mode (no plugins)" boot option
     - update hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt}
     - update_cron: generate system cron table only from installed plugins
     - webGui: ignore mover log entries in color coding
     - webGui: fixed wrong reference to Display Settings in Main page
     - webGui: fixed missing creation of eth10 settings page
     - webGui: add links to dashboard
     - webGui: SweetAlert bug fixes
     - webGui: add tooltipster to dynamix and add a tooltip in DiskIO toggle at /Main
     - webGui: fixed DNS server assignment when changing VLANs
     - webGui: fixed DNS server assignment to follow IP address assignment
     - webGui: fixed incorrect display of BTRFS check for non-btrfs disks; removed unused buttons
     - webGui: fixed missing csrf token and code optimizations in SMART report generation
     - webGui: clean up unused parameters when saving configuration files
     - webGui: disk read/write IO in background daemon
     - webGui: remove the old temp .plg file on remove
     - webGui: remove href bookmarks on anchor elements
     - webGui: provide control to initiate btrfs balance on btrfs-formatted array devices and single cache device
     - webGui: remove preset btrfs balance options; btrfs-raid1 is default only for initial creation of multi-device pool
     - webGui: DeviceInfo shows all check/balance/scrub operations, greyed out depending on array started state
     - webGui: add lscpu output to diagnostics
     - webGui: toggling ACS override will now apply to all boot options in syslinux.cfg
    2 points
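     For reference, a minimal sketch of the manual bz* copy described above, run from the server's console. Paths are assumptions: /boot is the USB flash boot device on a running unRAID server, and ~/unraid-new is a placeholder for wherever you extracted the downloaded release:

        # Keep a copy of the current files so you can roll back
        mkdir -p /boot/previous
        cp /boot/bz* /boot/previous/
        # Overwrite with the new release files, then reboot to load them
        cp ~/unraid-new/bz* /boot/
        reboot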
  3. Rather than try to write a procedure that suits all circumstances and necessarily becomes very complex, go for something simple that is very hard to mess up. The vast majority of users use the share system with includes and excludes, not disk shares. Give users something that works and can't be messed up:

     1. Enable disk shares.
     2. Move disk1 to disk2 over the network (see the rsync sketch after this item).
     3. Format disk1.
     4. Continue.
     5. Adjust includes and excludes.

     How many users have messed up the rsync? How many users don't know what screen is? And now you're getting them to start reassigning disk positions simply to avoid changing an include or exclude. Sorry, I don't buy the argument. The point of that is to help users who aren't utilizing user shares, who are very much in the minority now. People are asking for instructions on how to use unbalance to accomplish the conversion because the entire process has been overthought and is needlessly complex. We're here to help people accomplish the goal, and that doesn't mean completely confusing the hell out of them in the process. I'm very late to the debate, but it seems to me that everyone has lost sight of what the goal is and is instead concentrating on accomplishing it in the fastest way possible, which is also the way most prone to mistakes. Once 6.4 is released, odds are the tool that Tom has alluded to for helping with this is going to be far more akin to what I'm talking about than the nightmare it currently is. I'm sorry, it just floored me seeing rob making a comment about reassigning disks. Just throwing more fuel on the fire, IMHO.
    2 points
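     A minimal sketch of the "move disk1 to disk2" step above, run locally on the server; the rsync flags here are common choices, not a prescribed procedure, and nothing is deleted, so you can verify before formatting:

        # Copy everything from disk1 to disk2, preserving permissions and attributes
        rsync -avPX /mnt/disk1/ /mnt/disk2/
        # Verify the copy, then format disk1 and adjust your includes/excludes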
  4. Seagate 8TB Expansion STEB8000100. I just paid $199 from B&H and it's pre-clearing as I type (and as I go through my day, then sleep, then enjoy my Saturday, and then... sorry, I digress), but this is obviously even better.
    1 point
  5. OK, I'm home on the computer so let me try to explain this one more time. First, btrfs works differently than most other filesystems: before any writes are done it allocates chunks, mostly for data and metadata, usually 1GB and 256MB in size respectively. btrfs fi show displays allocated vs. used space, and this was your cache pool: device size 232.89GiB, only 126.22GiB in use, but 232.88GiB allocated.

     Now let's look at btrfs fi df, where we can see how much space is allocated and used for each type of chunk. Again, data and metadata are the ones that interest us; the others are negligible. You have a lot of free space in the chunks allocated for data, so that was not the problem. But if you look at metadata, the existing chunks are almost full, so btrfs needed to create a new metadata chunk. Because the devices were completely allocated, that was not possible, hence the out-of-space error.

     The command you ran reclaimed all previously allocated data chunks that were only 5% or less used, so the needed metadata chunk could be created (see the sketch after this item). Btrfs is being constantly improved and these situations should happen much less often in the future, but for now, to avoid this, run the same balance command any time the allocated space comes close to the total device size, say above 95% capacity or so.
    1 point
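     The commands discussed above, for reference; /mnt/cache is the usual unRAID cache mount point, and the usage threshold of 5 matches the filter mentioned in the post:

        # Allocated vs. used space per device, then per chunk type
        btrfs fi show /mnt/cache
        btrfs fi df /mnt/cache
        # Reclaim data chunks that are 5% used or less, freeing unallocated space
        btrfs balance start -dusage=5 /mnt/cache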
  6. 1 point
  7. I think this is the cheapest storage we've ever seen. I too have bought several of these external case versions and they have all precleared perfectly. This type of rapid price reduction tends to precede models being phased out. We'll see. ST8000AS0002
    1 point
  8. No, re-read the last several steps! Formatting does not destroy parity, as it is done while the disk is assigned to the array and the array is started, so parity is updated to reflect the formatting. After all, formatting a disk means writing an empty filesystem to it. It is just like any other set of writes; parity does not care. It only cares about the raw bits (a toy illustration follows this item). And no, you absolutely cannot put another drive in instead here. You have to use exactly the same set of drives to maintain parity.
    1 point
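     A toy illustration (not the actual unRAID implementation) of why parity only cares about raw bits: parity is the XOR of the data drives, so a format is just another set of writes that parity absorbs:

        # Two "disks" holding one byte each; parity is their XOR
        d1=$((0xA5)); d2=$((0x3C))
        parity=$((d1 ^ d2))        # parity covers the current bits
        # "Formatting" disk1 just writes new bits; parity is updated the same way
        d1=$((0x00))
        parity=$((d1 ^ d2))        # still valid for the new contents
        printf 'parity=0x%02X\n' "$parity"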
  9. Hey, so I hope you don't find this too self-promotional, but I did write my article with people who build DIY NAS/VM boxes in mind. Basically, I didn't find confirmations that ECC was enabled on Ryzen to be sufficient, so I actually tested to see if the feature was actually functional. I kind of spoiled the ending in my title here, but if you're interested this is my article: http://www.hardwarecanucks.com/foru...ws/75030-ecc-memory-amds-ryzen-deep-dive.html
    1 point
  10. Write the name in the path, e.g. /mnt/cache/libvirt.img, and click Apply.
    1 point
  11. Check the filesystem on disk1 (a command-line sketch follows this item): https://lime-technology.com/wiki/index.php/Check_Disk_Filesystems
    1 point
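     A sketch of the command-line check that wiki page covers, assuming disk1 is XFS and the array was started in maintenance mode; the -n flag makes it a read-only check that reports problems without changing anything:

        # /dev/md1 corresponds to disk1 on unRAID; run from the console or SSH
        xfs_repair -n /dev/md1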
  12. Your libvirt logs in both instances show:

     2017-03-31 10:13:44.234+0000: 3169: error : x86FeatureInData:780 : internal error: unknown CPU feature __kvm_hv_spinlocks
     2017-03-31 10:13:44.234+0000: 3169: error : x86FeatureInData:780 : internal error: unknown CPU feature __kvm_hv_vendor_id

     This was brought up by someone with similar hardware here. The more I read online, the more I think the unstable clock may be causing the problem, and it may just be inconsistent about when it causes problems. The workaround I proposed in that thread, switching from host passthrough to emulated, might help, or not at all (a sketch follows this item). I also saw someone online saying that changing the topology definition in the VM helps with the instability. Instead of presenting the VM a virtual hyper-threaded processor, changing the XML to the following presents just an 8-core processor:

     <topology sockets='1' cores='8' threads='1'/>
    1 point
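     A sketch of the host-passthrough-to-emulated workaround mentioned above; the VM name win10 and the SandyBridge model are placeholders, pick whatever matches your setup:

        # Opens the VM's XML definition in an editor
        virsh edit win10
        # Replace the <cpu> stanza with an emulated model plus the flat topology, e.g.:
        #   <cpu mode='custom' match='exact'>
        #     <model fallback='allow'>SandyBridge</model>
        #     <topology sockets='1' cores='8' threads='1'/>
        #   </cpu>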
  13. Interesting read, thanks for going to the trouble!
    1 point
  14. maybe get rid of ReiserFS and switch to XFS...
    1 point
  15. This probably has something to do with it:

     Mar 31 05:06:20 SRV kernel: TSC synchronization [CPU#0 -> CPU#1]:
     Mar 31 05:06:20 SRV kernel: Measured 315100525287432 cycles TSC warp between CPUs, turning off TSC clock.
     Mar 31 05:06:20 SRV kernel: tsc: Marking TSC unstable due to check_tsc_sync_source failed
     Mar 31 05:06:41 SRV kernel: kvm: SMP vm created on host with unstable TSC; guest TSC will not be reliable

     Assuming this is the problem, I do not know the fix (a quick check is sketched after this item).
    1 point
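     For anyone hitting this, a quick way to confirm the kernel fell back from TSC and see what clocksource it is using instead; these are standard Linux sysfs paths, nothing unRAID-specific:

        # Which clocksource is active now (after TSC was marked unstable)?
        cat /sys/devices/system/clocksource/clocksource0/current_clocksource
        # Replay the kernel's TSC messages
        dmesg | grep -i tsc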
  16. There are plenty of opportunities for users to configure things in such a way that specific drive numbers are important, such as includes/excludes, docker volume mappings, etc. So if you move things around to empty a drive for reformatting, then files are no longer on the expected disks, and those configurations will no longer work as expected. Of course, it is relatively easy to just adjust all the places where you may have specified a particular disk, and maybe that would be easier than swapping disks. But you can't have a general-purpose wiki process that depends on different configuration details for different people. And people seem to need a general-purpose process because they really don't understand how anything works, and that has caused some mistakes and anxiety. It has gotten to be a pretty long thread. I've said more than once that if you understand parity, understand formatting, and understand user shares, then you can figure out how to approach the conversion for your specific situation.
    1 point
  17. 1 point
  18. I'm talking about the GPU's BIOS in this case and a physical switch of the cards to put the 970 in the first slot. I basically just followed: http://lime-technology.com/wiki/index.php/UnRAID_6/VM_Management#Assigning_Graphics_Devices_to_Virtual_Machines_.28GPU_Pass_Through.29 In short, to get an Nvidia GPU that is also the boot GPU to pass through, you need to manually provide the BIOS to the host with an XML edit. You can try to use other people's dumps, but it's really not hard to grab a copy of the BIOS off your actual card (see the sketch after this item), and that did the trick without any issues. Conventional wisdom is that an AMD card doesn't need this, and I'd really RATHER be using the lower-power-consumption card for boot, but I just couldn't get that card to initialize at all if it was used to boot, with my own BIOS or a dumped one. Might try again at some point, but for now I want to see what I can do in terms of providing some uptime data, now that my desktop environment is doing what I really NEED day-to-day and I've confirmed that my gaming performance is essentially identical to what it was with the 970 and a Haswell i7 running Windows on the metal. PS: sorry for the ninja edits
    1 point
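     A sketch of grabbing the BIOS off the actual card, as described above; the PCI address 0000:01:00.0 is a placeholder for your GPU (find yours with lspci), and this works best while the card is not driving a display:

        # Find the GPU's PCI address first
        lspci | grep -i vga
        # Expose, read, and re-hide the card's ROM via sysfs
        echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
        cat /sys/bus/pci/devices/0000:01:00.0/rom > /boot/vbios.rom
        echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom
        # Then point the VM's XML at it:  <rom file='/boot/vbios.rom'/>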
  19. Isolate your VM cores away from unRAID; read the CPU pinning thread at the top of this forum (a quick sketch follows this item).
    1 point
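     On unRAID 6.3 the usual way to do that is an isolcpus kernel parameter in syslinux.cfg on the flash drive; the core list 2-7 below is only an example, keep core 0 (and its hyperthread sibling) for unRAID itself:

        # /boot/syslinux/syslinux.cfg -- example boot entry (core numbers are placeholders)
        label unRAID OS
          menu default
          kernel /bzimage
          append isolcpus=2-7 initrd=/bzroot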
  20. I would LOVE to see an action button instead of the checkbox. It would run a short (10MB?) non-correcting parity check with the current layout to see whether parity is indeed partially valid. If that came back clean, it would result in the message, "Parity 1 appears to be valid, would you like to trust it?" If it fails, the message "Parity 1 is invalid with the current configuration, would you like to start a parity build that will overwrite disk model serial XXX now?" appears, repeating those messages for Parity 2 if applicable. Not trusting a valid check would result in a full parity generation, and declining a parity build with confirmed invalid parity would result in an array start with the typical message that parity hasn't been checked yet (the flow is sketched after this item). There may be more combinations and permutations that are needed, but since computers are logical and we can get the information we need to help the user make decisions, I think it is in our best interest to get that information and attempt to steer the user to the proper answer.
    1 point
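     The proposed flow, sketched as pseudocode purely to pin down the branches; none of these functions exist in emhttp, all names are made up:

        # Hypothetical decision flow for the proposed action button
        if short_noncorrecting_check parity1; then
            ask_user "Parity 1 appears to be valid, would you like to trust it?" \
                || full_parity_build parity1
        else
            ask_user "Parity 1 is invalid with the current configuration, rebuild now?" \
                && full_parity_build parity1 \
                || start_array_with_parity_unchecked_warning
        fi
        # Repeat for parity2 if assigned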
  21. I highly doubt you are going to experience any 'bogged down' performance over 10Gb. The problem with the way you describe connecting the unRAID server is that the server will use either connection. If you want to bond the 10Gb connections on the unRAID server, you need to set up two dedicated bonded ports on a switch first and then use those, but they will have to be on one switch; I don't believe you can spread them between two switches. Those aren't technically 10Gb switches, they are gigabit switches with 10Gb uplink ports. So are you connecting the unRAID server to the uplink ports then? I would just connect the unRAID server to two bonded gigabit ports (see the configuration sketch after this item).
    1 point
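     For reference, bonding on the unRAID side is configured under Settings -> Network Settings, which writes /boot/config/network.cfg; an 802.3ad (LACP) bond like the one described needs matching bonded ports configured on the switch. An illustrative excerpt (exact keys can vary by release):

        # /boot/config/network.cfg (illustrative excerpt, not a complete file)
        BONDING="yes"
        BONDNICS="eth0 eth1"
        BONDING_MODE="4"    # 802.3ad/LACP; requires a matching bond on the switch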
  22. Unfortunately you rebooted since the errors, so we can't see what happened. SMART is OK for all disks, so check cables, rebuild the disabled disk, and sync parity (you can do both at the same time); if there are any more issues, grab the diagnostics before rebooting. PS: you should connect the SSD to one of the onboard ports, because LSI2008-based controllers don't support trim (a quick check is sketched after this item).
    1 point
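     A quick way to confirm whether trim actually works on the SSD's current port; both are standard Linux tools, and fstrim will simply error out if the controller blocks discard:

        # Does the device advertise discard support? (non-zero DISC-GRAN/DISC-MAX = yes)
        lsblk -D
        # Try a manual trim on the mounted SSD; /mnt/cache is the usual unRAID mount
        fstrim -v /mnt/cache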
  23. This is nothing anyone can help with, but I wanted to let you know anyhow: for some reason (maybe I am just getting old) it's hard for me to get used to the new forum. I cannot put a finger on it, but it seems to me it's just harder to read/find/do stuff. It looks nice, nothing wrong there. As I said, my issue, but I wanted to let you know.
    1 point
  24. All the whiteness of this forum sickatates my eyes....
    1 point
  25. Try deleting the package manually on your flash in /config/plugins/NerdPack/packages/6.x (see the sketch after this item).
    1 point
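     A sketch of that manual deletion from the console; on a running server the flash drive is mounted at /boot, 6.x stands for your unRAID version branch as in the post, and the package filename is a placeholder for whichever one is stuck:

        # List what's there, then remove the offending package file
        ls /boot/config/plugins/NerdPack/packages/6.x/
        rm /boot/config/plugins/NerdPack/packages/6.x/stuck-package.txz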
  26. We are using Slackware as a base for a lives-in-RAM OS, and we can patch just about anything except emhttpd and the kernel. All @limetech has to do is push out a package, e.g. samba-5.0.0-x86_64-6.3.3_limetech.txz, and have it installed from /boot/extra (we'll need web UI support for this). You could turn the array off, install the package, then start the array. BAM! Fully patched and vulnerability fixed, while limetech continues getting the patch rolled into the next release. Since it was installed in /boot/extra, the patch takes over on every restart, and limetech could insert /boot/extra cleanup code in the next release so that once the new version started it would nuke or disable the old patches (a sketch follows this item). I don't see why running on a ramdisk precludes patching; when plugins can do it, the core system should be able to.
    1 point
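     A sketch of that flow, using the sample package name from the post; installpkg is the stock Slackware tool, and /boot/extra is the folder unRAID installs extra packages from at boot:

        # Stage the hotfix so it reinstalls on every restart
        mkdir -p /boot/extra
        cp samba-5.0.0-x86_64-6.3.3_limetech.txz /boot/extra/
        # Apply it immediately (with the array stopped, as described above)
        installpkg /boot/extra/samba-5.0.0-x86_64-6.3.3_limetech.txz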
  27. Yes, I bought an IBM x3650 M1 for extremely cheap and got these HBAs to make a NAS using unRAID, but no disks were showing up. If I created an array on the controller, unRAID could see the whole array, but only as one disk. So now I am trying to flash one of these controllers I got (see the sketch after this item); very close to building a unit.
    1 point
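     If these are LSI SAS2008-based cards (typical for this class of HBA), the usual route is crossflashing to IT-mode firmware with LSI's sas2flash utility from a DOS or EFI boot disk; the firmware and boot ROM filenames below come from LSI's download package and are placeholders for the versions you actually grab:

        # Confirm the utility sees the controller
        sas2flash -listall
        # Flash IT-mode firmware plus the boot ROM (-o enables advanced mode)
        sas2flash -o -f 2118it.bin -b mptsas2.rom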
  28. The GT 610 is a rebranded 5xx-series card, so it will require SeaBIOS, as I pointed out in another topic.
    1 point