Leaderboard

Popular Content

Showing content with the highest reputation on 03/30/17 in all areas

  1. While I agree with your sentiment for patience, especially for those who have yet to purchase a Ryzen-based system, I don't think any of this is premature, especially considering Lime-Tech is already on record indicating that unRAID on Ryzen works (the emphasis on "YES! IT WORKS!" is Jon's, not mine). Beyond that, here are some additional considerations:

First, unRAID 6.3.2 is not "old tech". It is current-gen tech, released just a short while ago. It is also the latest and greatest; there are no new betas posted to try or even discuss. Speaking of old tech, Windows 7 is definitely old tech, yet it still performs well on Ryzen (in some cases better than Windows 10). AMD's (and Intel's) primary goal when creating new versions of the venerable x86/x64 CPU design is backwards compatibility. Creating a new product that does not work on existing infrastructure is a multi-billion dollar mistake (case in point: Itanium). AMD has done a stupendous job of maintaining backwards compatibility, though obviously older software cannot enjoy the latest architecture enhancements without updates. Lime-Tech has done a fantastic job of late keeping up on kernel and module updates - my hat goes off to them, salute - and they only keep improving in this regard.

I'm not sure what compatibility updates you are anticipating. The Linux kernel became Ryzen architecture aware in 4.9.10 (the changes from 4.10 were back-ported). That's pretty much it; it's a done deal in 4.9.10. Sure, we may see a new CPU scaling governor in a future kernel, just like AMD has promised one for Windows 10 in the May time-frame, and boost support seems sorely lacking, but those are merely optimization tweaks, not core architecture support. And since unRAID 6.3.2 is on Linux 4.9.10, in theory it is fully compatible with Ryzen.

Yes, the forthcoming Linux 4.11 kernel is promising some nice KVM enhancements, much of which isn't even targeted at x86 but rather ARM and PowerPC. While x86 KVM performance is expected to improve and gain some new capabilities, none of this is required to use KVM today on Ryzen. 4.11 will also bring some notable new driver inclusions, but this matters only so far as it enables the latest motherboard features, like Realtek's new ALC S1220A audio codec, which is of small consequence right now and doesn't even affect half the Ryzen motherboards out there. For those on the fence thinking a newer Linux kernel is going to magically make a difference, that's simply not going to happen.

Which brings us back to the current state of affairs: if unRAID is built on a Linux kernel that supports Ryzen, and Lime-Tech has indicated it works, then why are the majority of us, yes more than half, experiencing random crashes in unRAID? The problem is not the Linux kernel itself. I've tested my hardware on Linux kernels 4.8.x, 4.9.10, and 4.10.2 in other Linux distros (like openSUSE Tumbleweed and Ubuntu) and the hardware performed flawlessly. My hardware has also proven itself in Windows 10, again performing flawlessly. But for whatever reason, in unRAID my server experiences constant crashes. And I'm not alone; this is a systemic issue affecting most if not all Ryzen owners attempting to run unRAID. And because we are not getting error messages pointing us in the right direction, we are all playing a guessing game as to what the problem really is.
Lime-Tech has already participated here, in this thread, stating that Ryzen works, with the only caveat being that it may not be the right choice for a 2 Gamers 1 PC type build due to non-ideal IOMMU groupings. What Lime-Tech has not done is acknowledge that there is a problem, nor have they shared any details on issues that they themselves have reportedly experienced, regardless of the cause. Lime-Tech is essentially allowing their "IT WORKS!" proclamation to stand, misleading potential buyers. I do expect Lime-Tech to discuss problems with the "current" (not "old") tech, especially since the future (new Linux kernels and module updates) offers nearly zero hope of fixing these problems, because the problems appear to affect only the unRAID Linux port. Lime-Tech has done "something" in their build that turns out to be incompatible with Ryzen. What we are experiencing should be considered a newly discovered "bug" that only affects a specific CPU, not a need to wait for optimizations.

So what we are doing here is trying to substantiate that yes, there is a problem with unRAID on Ryzen (and not just defective hardware, as I initially believed), and going a step further, we are trying to identify the root cause of the problem. As a side benefit, we may even discover a simple workaround that allows us to use Ryzen for unRAID without crashes, giving Lime-Tech more time to investigate, determine the solution, and provide a fix (if they so choose).

I also think it is inappropriate for us to just sit back and wait for Lime-Tech to make everything better. Lime-Tech is a very, and I mean very, small company. For years it was just Tom, and while they are much bigger now, Lime-Tech is still just a handful of employees. It is truly incredible what Tom and his small company have produced. And to be honest, I'm surprised that they even have a Ryzen system in-house. They have zero obligation to support Ryzen (though obviously it is in their best interest to do so), and they could easily leave things as-is for years. Or forever. There is no reason to expect that the natural progression of new unRAID versions, with new Linux kernels and all else, will automatically resolve this bug.

On a side note, I am a technology consultant by trade, self-employed for the past 20 years. I primarily do work for Fortune 500 and Fortune 100 companies. While PC hardware and Linux are certainly not my specialty, if I were performing this troubleshooting for a client, then for my services rendered over the past couple of weeks I would be submitting a bill for over $15k, and the tab is still growing. And those companies would gladly pay it. Lime-Tech is the recipient of these services for free, and not just mine; we've got half a dozen or so Ryzen adopters here all providing Quality Assurance testing services. There is substantial value to this effort; it is not wasted effort. I do not think it too much to expect the developer to pay attention and participate when they are the beneficiary of such a significant contribution. -Paul
    2 points
  2. I'm sorry guys, but this all seems way too premature! You're trying to get older tech to work with the newest tech, without any compatibility updates specific to the new tech. I would not expect JonP or anyone else to participate here until they had first added what they could: a Ryzen-friendly kernel, kernel hardware support, a Ryzen-tweaked KVM and its related modules, and various system tweaks to optimize the Ryzen experience. After that, then they can join you and participate. It's like having an old version with bugs, and an update with fixes. Why would a developer want to discuss problems with the old? They are always going to want you to update first and test; then you can talk. There's so much great work in this thread, especially from Paul, but it's based on the old stuff, not on what you will be using, so it seems to me that much of the effort is wasted. Patience!!!
    2 points
  3. Application Name: Ombi
Application Site: https://ombi.io
Docker Hub: https://hub.docker.com/r/linuxserver/ombi/
Github: https://github.com/linuxserver/docker-ombi
Please post any questions/issues you have relating to this docker in this thread. If you are not using Unraid (and you should be!) then please do not post here; instead, head to linuxserver.io to see how to get support.
    1 point
  4. Application Name: Jackett
Application Site: https://github.com/Jackett/Jackett
Docker Hub: https://hub.docker.com/r/linuxserver/jackett/
Github: https://github.com/linuxserver/docker-jackett
Please post any questions/issues you have relating to this docker in this thread. If you are not using Unraid (and you should be!) then please do not post here; instead, head to linuxserver.io to see how to get support.
    1 point
  5. The new Retain feature, part of New Config, is a great thing that makes it much easier to rearrange drive assignments. But it's still a fairly heavy task that causes risk and confusion for certain new or non-technical users. It used to be much easier in earlier versions: in 6.0 and 6.1 you could stop the array, then swap drive assignments or move them to different drive numbers/slots, without having to do a New Config or being warned that parity would be erased; you could just start the array up with valid parity, so long as exactly the same set of drives was assigned. It would really make life easier, and less confusing for some users, if we could return to that behaviour when it is safe to do so.

An implementation suggestion to accomplish the above (a rough sketch follows this post): at start, or when super.dat is loaded, or at the stop of the array, collect all of the drive serial numbers (and model numbers too if desired), separated by line feeds, and save them as an unsorted blob (for P2). Sort the blob (an ordinary alphabetic sort) and save that as the sorted blob (for P1). At any point thereafter, if there has been a New Config or there have been any drive assignment changes at all, before posting the warning about parity being erased, collect new blobs and compare them with the saved ones. If the sorted blobs differ, then parity is not valid and the "will be erased" warning should be displayed. If only the unsorted blobs differ, then parity2 is no longer valid. But if they are the same, then parity is still valid, and the messages and behavior can be adapted accordingly.

Sorting the blob puts the serials in a consistent order that is independent of drive numbering and movement. So long as it's exactly the same set of drives, the sorted blob will match, no matter how they have been moved around. The really nice effect is that users won't unnecessarily see the "parity will be erased" warning, or have to click the "parity is valid" checkbox. The array will just start like normal, even if a New Config had been performed (so long as the blobs matched). You *almost* don't need the "Parity is already valid" checkbox any more. The one complication to work around is that if they add a drive and unRAID clears it, or a Preclear signature is recognized on it, then the blobs won't match but parity is valid anyway. I *think* it's just a matter of where you place the test; or collect the blob without the new cleared disk for comparison. Edit: I need to add that ANY drive assignment change invalidates Parity2.
    1 point
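A minimal shell sketch of the blob comparison described in the post above - purely illustrative, assuming the assigned drives can be enumerated by device node and that the saved blobs live somewhere under /boot/config (not how unRAID actually stores its assignments):

    #!/bin/bash
    # Hypothetical sketch: compare current drive-serial blobs with the saved ones.
    CONFIG=/boot/config                  # illustrative save location
    NEW_UNSORTED=$(mktemp)
    NEW_SORTED=$(mktemp)

    # Collect the serial of each assigned drive, in slot order
    # (the device list here is just an example).
    for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
        udevadm info --query=property --name="$dev" \
            | awk -F= '/^ID_SERIAL_SHORT=/{print $2}' >> "$NEW_UNSORTED"
    done
    sort "$NEW_UNSORTED" > "$NEW_SORTED"

    if ! cmp -s "$NEW_SORTED" "$CONFIG/blob.sorted"; then
        echo "Different set of drives: parity no longer valid (show the erase warning)"
    elif ! cmp -s "$NEW_UNSORTED" "$CONFIG/blob.unsorted"; then
        echo "Same drives, different slots: parity still valid, parity2 invalid"
    else
        echo "Same drives, same slots: parity and parity2 both still valid"
    fi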
  6. Follow this guide here. Or this video guide here.
    1 point
  7. That's nothing. Install lsio's OpenVPN-AS app, copy the generated .ovpn file onto whatever devices you want, and you're done.
    1 point
  8. Your setup is fine: it improves security when you connect to those apps from outside your LAN, but it does nothing for how those apps connect to the internet themselves. I don't use https/SSL on any of these apps; I implement all that at the Let's Encrypt reverse proxy level. Don't even think of putting your Unraid webUI on the reverse proxy; if you want to connect to that from outside your LAN, then set up a VPN.
    1 point
  9. To Facebook, you are their product that their customers are paying for. Same for other "free" things like broadcast TV. To Comcast, you are a paying customer. Now Comcast wants you to also be their product for other paying customers.
    1 point
  10. Well got 10.12.4 running now. Just finishing my guide on how. Will be up soon.
    1 point
  11. You can't replace a trial key, but you can request a new one for the other device.
    1 point
  12. After using New Config, the warning "All data on the parity drive will be erased when array is started" appears. Nothing wrong with this normally, but if the user clicks the checkbox for "Parity is already valid", the warning remains. This has caused enormous confusion for one or more new or non-technical users, who cannot see past this warning and believe their parity drive will still be erased. Most of us know what is meant when we click the box to say it's valid, but all that new users see and understand is that parity is valid AND the parity drive will be erased! If the checkbox is checked, the warning should either be removed or replaced with something like "All current parity data will be retained, not erased".
    1 point
  13. I would LOVE to see an action button instead of the checkbox. It would run a short (10MB?) non-correcting parity check with the current layout to see whether parity is indeed partially valid. If that came back clean, it would result in a message, "Parity 1 appears to be valid, would you like to trust it?" If it fails, a message "Parity 1 is invalid with the current configuration, would you like to start a parity build that will overwrite disk model serial XXX now?" appears, repeating those messages for Parity 2 if applicable. Not trusting a valid check would result in a full parity generation, and denying a parity build with confirmed invalid parity would result in an array start with the typical message of parity hasn't been checked yet. There may be more combinations and permutations that are needed, but since computers are logical and we can get the information we need to help the user make decisions, I think it is in our best interest to get that information and attempt to steer the user to the proper answer.
    1 point
  14. Major functions (including exit) are F-keys that are labeled at the bottom. Rsync is available (a sample invocation follows below). If you are comfortable at the command line you may want to install the NerdPack plugin for additional functionality (via Community Applications).
    1 point
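Since the post mentions rsync, a basic invocation from the unRAID console might look like this (the source and destination paths are only examples):

    # -a preserves permissions/ownership/timestamps, -v is verbose,
    # --progress shows per-file progress
    rsync -av --progress /mnt/disk1/Movies/ /mnt/disk2/Movies/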
  15. Rather than wait for a reply on the above, let's move on. We can see a lot of inode stats using:

    cat /proc/slabinfo | grep inode

Using my cache share with a pressure of 0:

    mqueue_inode_cache 72 72 896 18 4 : tunables 0 0 0 : slabdata 4 4 0
    v9fs_inode_cache 0 0 648 25 4 : tunables 0 0 0 : slabdata 0 0 0
    xfs_inode 1198999 1199898 960 17 4 : tunables 0 0 0 : slabdata 70817 70817 0
    udf_inode_cache 0 0 728 22 4 : tunables 0 0 0 : slabdata 0 0 0
    fuse_inode 138619 138690 768 21 4 : tunables 0 0 0 : slabdata 6698 6698 0
    cifs_inode_cache 0 0 736 22 4 : tunables 0 0 0 : slabdata 0 0 0
    nfs_inode_cache 0 0 936 17 4 : tunables 0 0 0 : slabdata 0 0 0
    isofs_inode_cache 0 0 624 26 4 : tunables 0 0 0 : slabdata 0 0 0
    fat_inode_cache 644 644 712 23 4 : tunables 0 0 0 : slabdata 28 28 0
    ext4_inode_cache 0 0 1024 16 4 : tunables 0 0 0 : slabdata 0 0 0
    reiser_inode_cache 111210 111210 736 22 4 : tunables 0 0 0 : slabdata 5055 5055 0
    rpc_inode_cache 0 0 640 25 4 : tunables 0 0 0 : slabdata 0 0 0
    inotify_inode_mark 960 1242 88 46 1 : tunables 0 0 0 : slabdata 27 27 0
    sock_inode_cache 1911 1975 640 25 4 : tunables 0 0 0 : slabdata 79 79 0
    proc_inode_cache 16498 17506 632 25 4 : tunables 0 0 0 : slabdata 701 701 0
    shmem_inode_cache 16927 16968 680 24 4 : tunables 0 0 0 : slabdata 707 707 0
    inode_cache 58436 58772 576 28 4 : tunables 0 0 0 : slabdata 2099 2099 0

Then we run tree and see:

    6085 directories, 310076 files

Then we ls -R:

    real 0m4.753s
    user 0m1.146s
    sys 0m0.795s

And finally we look at slabinfo again:

    btrfs_inode 35483 37680 1072 30 8 : tunables 0 0 0 : slabdata 1256 1256 0
    mqueue_inode_cache 72 72 896 18 4 : tunables 0 0 0 : slabdata 4 4 0
    v9fs_inode_cache 0 0 648 25 4 : tunables 0 0 0 : slabdata 0 0 0
    xfs_inode 1198994 1199898 960 17 4 : tunables 0 0 0 : slabdata 70817 70817 0
    udf_inode_cache 0 0 728 22 4 : tunables 0 0 0 : slabdata 0 0 0
    fuse_inode 138619 138690 768 21 4 : tunables 0 0 0 : slabdata 6698 6698 0
    cifs_inode_cache 0 0 736 22 4 : tunables 0 0 0 : slabdata 0 0 0
    nfs_inode_cache 0 0 936 17 4 : tunables 0 0 0 : slabdata 0 0 0
    isofs_inode_cache 0 0 624 26 4 : tunables 0 0 0 : slabdata 0 0 0
    fat_inode_cache 644 644 712 23 4 : tunables 0 0 0 : slabdata 28 28 0
    ext4_inode_cache 0 0 1024 16 4 : tunables 0 0 0 : slabdata 0 0 0
    reiser_inode_cache 111210 111210 736 22 4 : tunables 0 0 0 : slabdata 5055 5055 0
    rpc_inode_cache 0 0 640 25 4 : tunables 0 0 0 : slabdata 0 0 0
    inotify_inode_mark 960 1242 88 46 1 : tunables 0 0 0 : slabdata 27 27 0
    sock_inode_cache 1911 1975 640 25 4 : tunables 0 0 0 : slabdata 79 79 0
    proc_inode_cache 16510 17506 632 25 4 : tunables 0 0 0 : slabdata 701 701 0
    shmem_inode_cache 16927 16968 680 24 4 : tunables 0 0 0 : slabdata 707 707 0
    inode_cache 58436 58772 576 28 4 : tunables 0 0 0 : slabdata 2099 2099 0

Whilst I am not sure that these numbers are the only ones that matter, what I can see is that the ones that would appear to be important - inode_cache, xfs_inode, etc. - all appear to be static using pressure 0.

Now if we copy a multi-GB file and look again:

    btrfs_inode 35483 37680 1072 30 8 : tunables 0 0 0 : slabdata 1256 1256 0
    mqueue_inode_cache 72 72 896 18 4 : tunables 0 0 0 : slabdata 4 4 0
    v9fs_inode_cache 0 0 648 25 4 : tunables 0 0 0 : slabdata 0 0 0
    xfs_inode 1199018 1199898 960 17 4 : tunables 0 0 0 : slabdata 70817 70817 0
    udf_inode_cache 0 0 728 22 4 : tunables 0 0 0 : slabdata 0 0 0
    fuse_inode 138619 138690 768 21 4 : tunables 0 0 0 : slabdata 6698 6698 0
    cifs_inode_cache 0 0 736 22 4 : tunables 0 0 0 : slabdata 0 0 0
    nfs_inode_cache 0 0 936 17 4 : tunables 0 0 0 : slabdata 0 0 0
    isofs_inode_cache 0 0 624 26 4 : tunables 0 0 0 : slabdata 0 0 0
    fat_inode_cache 644 644 712 23 4 : tunables 0 0 0 : slabdata 28 28 0
    ext4_inode_cache 0 0 1024 16 4 : tunables 0 0 0 : slabdata 0 0 0
    reiser_inode_cache 111210 111210 736 22 4 : tunables 0 0 0 : slabdata 5055 5055 0
    rpc_inode_cache 0 0 640 25 4 : tunables 0 0 0 : slabdata 0 0 0
    inotify_inode_mark 960 1242 88 46 1 : tunables 0 0 0 : slabdata 27 27 0
    sock_inode_cache 1911 1975 640 25 4 : tunables 0 0 0 : slabdata 79 79 0
    proc_inode_cache 16484 17506 632 25 4 : tunables 0 0 0 : slabdata 701 701 0
    shmem_inode_cache 16927 16968 680 24 4 : tunables 0 0 0 : slabdata 707 707 0
    inode_cache 58436 58772 576 28 4 : tunables 0 0 0 : slabdata 2099 2099 0

The numbers are all but identical. This allows us to make a reasonable assumption that our second rule of thumb is correct (as per the docs): with a cache pressure of 0, assuming you have enough RAM, inodes will not be flushed by other file actions as would usually happen. (A small filter for watching just the relevant caches follows below.)
    1 point
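A small convenience filter along the same lines as the commands above, printing only the active/total object counts for the caches that matter in this comparison (just a wrapper around the same /proc/slabinfo read):

    # Columns 2 and 3 of /proc/slabinfo are active_objs and num_objs
    grep -E '^(xfs_inode|reiser_inode_cache|fuse_inode|inode_cache) ' /proc/slabinfo \
        | awk '{printf "%-22s active=%s total=%s\n", $1, $2, $3}'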
  16. Your /tmpfs is full; whether or not that's directly related to your lagging is another matter. Your syslog is also being spammed with messages which are USB related. You're going to have to reboot, then open up a tail of the syslog (log button on the UI, or see the command below). Once those messages start occurring, start pulling out USB devices until you nail down which device is causing the issue, then possibly relocate it to another USB controller.
    1 point
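For reference, the command-line equivalent of the UI's log button is simply tailing the syslog (standard unRAID location shown):

    tail -f /var/log/syslog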
  17. At this point then, you've got to post in binhex's support thread about it, since it's now directly related to the distro that he's choosing to utilize, and he would be the best one to offer up suggestions / support
    1 point
  18. I run IPv6 on mine with fixed addresses. Currently I use a kernel recompiled to include IPv6, then added --ipv6 --fixed-cidr-v6=2a02:xxxx:xxxx:xxxx::/64 to the DOCKER_OPTS in /boot/config/docker.cfg (a sample line is sketched below). I'm just an end-user, so I'm not sure how much integration they'll do to start off with when adding this feature, but this may help when it comes. If you've not got IPv6 on the network yet, then you can probably get by with just adding --ipv6, and it will assign link-local addresses to containers.
    1 point
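A sketch of what the poster's change might look like in /boot/config/docker.cfg, keeping the redacted prefix as-is (the exact contents of that file vary by unRAID version, so treat this as illustrative only):

    # /boot/config/docker.cfg - append the IPv6 flags to any options already present
    DOCKER_OPTS="--ipv6 --fixed-cidr-v6=2a02:xxxx:xxxx:xxxx::/64"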
  19. Thanks for that link. It is both useful in itself and probably the bug that inspired me to re-look at cache pressure again. Based on the ~4kB-per-inode approximation in that thread, working backwards allows us to make our first rule of thumb: with a cache pressure of 0 (so that inodes are not expelled due to normal use), you need 1GB of dedicated RAM for every 250,000 files and folders. (The tunable involved is sketched below.)

Now, before I move on, I find this interesting. If I run df -i to show inodes per disk, only XFS disks give info on the md device, e.g.:

reiserfs:

    Filesystem      Inodes   IUsed   IFree IUse% Mounted on
    /dev/md9             0       0       0     - /mnt/disk9

xfs:

    Filesystem      Inodes   IUsed   IFree IUse% Mounted on
    /dev/md10          56M     67K     56M    1% /mnt/disk10

And the totals of user0/user do not tally with the sum of the drives:

    Filesystem      Inodes   IUsed   IFree IUse% Mounted on
    shfs              301M    972K    300M    1% /mnt/user0
    shfs              534M    1.4M    532M    1% /mnt/user

I would like to know why this is.
    1 point
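For context, the "cache pressure" being discussed appears to be the kernel's vm.vfs_cache_pressure tunable; a quick way to inspect and set it on the running system (a value of 0 tells the kernel never to reclaim dentry/inode caches, so only use it with plenty of RAM; persisting it across reboots is left out here):

    # Show the current value
    sysctl vm.vfs_cache_pressure
    # Set it to 0 for the running kernel (not persistent across reboots)
    sysctl -w vm.vfs_cache_pressure=0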
  20. I tried both: mapping /data to /mnt/disk1/Nextcloud and changing the included/excluded disks as well, but my parity and disk1 still do not spin down when the container is running. I'm using Nextcloud together with Nginx-letsencrypt; I don't know if that's the reason, maybe. Are you using it this way as well? As my guide describes, I'm using it with Apache. I was mistaken, apparently I do have it mapped as /mnt/user/nextcloud. Do you limit your share to one disk via the share settings then? I will keep /mnt/user/Nextcloud as well then and try to only include one disk. Is there someone else using it together with Nginx who has the disk mapped to /data spinning all the time? So I did some testing and it's only the Nextcloud container that keeps my disks spinning (I removed Nginx temporarily). I also tried including only one disk for the /data share. Is there anything I can do? This issue really drives me crazy because I always let my server go to sleep when the array is inactive. I finally solved it: I modified the logging settings within Nextcloud from everything to only errors (an example of the equivalent setting is sketched below). Maybe someone else has this problem as well.
    1 point
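For anyone chasing the same spin-down issue, the equivalent of that logging change can be made with Nextcloud's occ tool; a hedged example, where the container name, user, and occ path are illustrative and differ between images:

    # loglevel 3 = errors only (0 = debug, i.e. "everything")
    docker exec -u www-data nextcloud php occ config:system:set loglevel --value=3 --type=integer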
  21. This looks useful. Shouldn't be hard to make into a container... I'll have a go! I have no way to test it though.
    1 point