kode54

Everything posted by kode54

  1. As listed in the release notes of 6.2.0, and linked from the release notes of every release since: http://lime-technology.com/forum/index.php?topic=51874.0
  2. I think they're accessing their shares from a Mac, judging by the use of AFP and the AppleDouble references in the syslog. On a Mac, things should be configured to use SMB, except for Time Machine backups, since Samba doesn't support those just yet. Also, your smb-extra.conf should be configured through the SMB settings page to contain the following:
     [global]
     ea support = yes
     vfs objects = catia fruit streams_xattr
     fruit:resource = file
     fruit:metadata = netatalk
     fruit:locking = none
     fruit:encoding = native
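     If it helps, the edit can be sanity-checked from a terminal. A minimal sketch: the options are exactly the ones from the post, but the path /tmp/smb-extra.conf is an illustrative scratch copy; on unRAID the real file should be edited through the SMB settings page so the change survives a reboot.

```shell
# Write the [global] options from the post to a scratch copy of smb-extra.conf.
# /tmp/smb-extra.conf is an illustrative path, not the real config location.
cat > /tmp/smb-extra.conf <<'EOF'
[global]
ea support = yes
vfs objects = catia fruit streams_xattr
fruit:resource = file
fruit:metadata = netatalk
fruit:locking = none
fruit:encoding = native
EOF

# Quick sanity check that all of the vfs_fruit options made it in.
grep 'fruit:' /tmp/smb-extra.conf
```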
  3. Have you tried going all-in: completely demolishing your configuration and rebuilding the array from scratch? Make sure your disks are added in the correct order, of course. You'll also have to erase a few shares. "docker" is no longer (was it ever?) a Docker configuration share; that's "appdata", and it is created by the system automatically. The Docker /var/lib/docker tree lives inside a disk image called docker.img, which lives in the "system" share by default. The /etc/libvirt configuration folder also lives in a disk image, called libvirt.img, which likewise lives in "system". "appdata" is cache "Prefer", and so is "system".
     Something is wrong with diagnostics as well: it makes no mention of your disk1 being read-only until the AFP daemon starts trying to write things to it. Are we sure this was a file system error and not a permissions error on the mount point? Please post diagnostics again, as well as the output of ls -la /mnt/ and ls -la /mnt/user/, censoring any share names you feel are sensitive. This would not be the first time someone has posted to this forum after such an upgrade and found that a number of their diskN and possibly user shares lack write permission and/or have the wrong ownership. I'm not even sure how this happens.
     Also, I was looking through the Fix Common Problems log in your diagnostics:
     1) You have non-cache files or folders in your "docker" share.
     2) You have not installed Community Applications.
     3) You have an i386 / 32-bit version of inotify-tools installed in your packages folder. Correct this by installing NerdPack and enabling its inotify-tools package, which is x86_64.
     4) Since you do not have Community Applications installed, it does not recognize your ntfs-3g plugin as belonging to a known package. Installing CA should allow it to track this.
     5) Your snap plugin is also not known, due to missing Community Applications.
     6) You will likely need to delete your docker.img and recreate it anyway, but Community Applications would have helped with the rebuild by preserving your Docker container settings.
     7) Since you appear to be using ntfs-3g, yet the log does not mention it being in use, which partitions are you trying to mount as NTFS?
  4. Setting it up in Sierra is not terribly difficult. For now, the only hurdle is the libvirt-python version used.
     1) brew tap jeffreywildman/virt-manager
     2) brew edit virt-manager, then search for "2.1.0", which should locate the libvirt-python link. Replace the version number with "2.4.0", and replace the sha256 sum with "aa087cca41f50296306baa13366948339b875fd722fc4b92a484484cd881120c".
     3) brew install virt-manager virt-viewer
     4) Due to outstanding issue #62 with jeffreywildman/virt-manager, remember to supply the --no-fork switch when invoking virt-manager from the Terminal.
     E1: Something I just remembered: KVM+Qemu does support saving snapshots, but if you have any hardware passthrough, it doesn't support saving snapshots of a running VM.
  5. According to this page, it is possible to work around the default xterm-256color TERM of macOS and similar by adding |xterm-256color| to the entry on line 144 in the default /etc/termcap. This fixes using elvis, the default vi clone, with the stock TERM setting. The suggested /etc/DIR_COLORS setting is not necessary, as there is already a TERM xterm-256color line within that file.
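     For illustration, the edit looks something like the following. The surrounding entry is hypothetical (I haven't copied unRAID's actual /etc/termcap line); only the added |xterm-256color| alias comes from the tip. Termcap entries list their names and aliases separated by | at the start of the entry, so adding an alias makes the stock TERM value match it.

```
# Before (hypothetical termcap entry):
xterm|vs100|xterm terminal emulator:\
# After, with the alias added so TERM=xterm-256color resolves to this entry:
xterm|xterm-256color|vs100|xterm terminal emulator:\
```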
  6. Maybe leave it as an alternative and make the two mutually exclusive? Also, since one requires removing the other, it may make sense for switching off a package to also removepkg it from the currently running instance, since it would otherwise require a reboot or a manual removepkg to get rid of it.
  7. 16GB, and what's with all the non-power-of-two denominations?
  8. random-seed, super.dat, and domain.cfg are normal. secrets.tdb is part of Samba, probably safe to keep that. drift likely has to do with your machine's clock drift. So, it looks like those are probably all safe, but even then, it may be wise to remove the secrets.tdb and redo your network share user passwords.
  9. Fine. Delete those files. Reboot. SET A PASSWORD IMMEDIATELY. MAKE NO DELAY. Also, is your unRAID machine visible from the Internet? That is a very dumb thing to do.
  10. I think the general answer is still going to involve testing it against the existing configuration. It will probably also require rewriting portions of the network scripts to support IPv6, including all of the Docker scripts for network forwarding, unless that's already supported upstream in Docker itself.
  11. Yeah, you won't be able to find out what the password is, but you can nuke the passwd and shadow files from /config/ on the flash drive and reboot the machine. Of course, then you'll be in the same exact place you were to begin with: an unprotected unRAID installation. That will last until whoever took over your machine decides to do it again. On second thought, let's do this carefully.
     1) Power the machine off by power button, and keep it off.
     2) Remove the flash drive.
     3) Plug the flash drive into a spare machine.
     4) Download or install Python 3 on whichever machine you have handy, possibly the same one.
     5) Run the following with Python 3: python3 -c 'import crypt; print(crypt.crypt("<your password>", crypt.mksalt(crypt.METHOD_SHA512)))'
     6) Copy the resulting string and paste it into the password field for the "root" user in the /config/shadow file on your flash drive. That's the field between the first and second colons of the line starting with "root".
     7) Delete any other unexpected user accounts from both /config/passwd and /config/shadow, even if they won't be able to get in to administer the machine, just to play it safe. Cleanly eject the flash drive.
     It should now be "safe" to boot your unRAID installation again, as you've just configured a salted crypt-hashed password for the root account without a moment of the machine being live without a password. No, scratch that: look at the "go" script in the /config/ folder as well, and check for anything suspicious being started up there. Also wipe out /config/plugins/ in case anything weird was dropped there; you can reinstall those later. I'd go over all your virtual machines and Dockers with a fine-tooth comb, looking for any inconsistencies.
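     If Python 3 isn't handy, the same SHA-512 crypt hash can be produced with OpenSSL and spliced into the shadow entry with awk. A sketch under the assumption that `openssl passwd -6` is available (OpenSSL 1.1.1 or newer); the file names shadow.copy and shadow.new are scratch copies for illustration, not the real /config/shadow.

```shell
# Generate a salted SHA-512 crypt hash (same format as the Python one-liner).
hash=$(openssl passwd -6 'your-new-password')

# Work on a scratch copy; a real root line from /config/shadow would go here.
# The password field (between the first and second colons) starts out empty.
printf 'root::17000:0:99999:7:::\n' > shadow.copy

# Replace the password field for the root entry only, leaving other lines alone.
awk -F: -v OFS=: -v h="$hash" '$1 == "root" { $2 = h } 1' shadow.copy > shadow.new
cat shadow.new
```

     The resulting root line should start with root:$6$, the standard prefix of SHA-512 crypt hashes.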
  12. Can we get netcat-openbsd in this kit any time in the future? Both it and its dependency, libbsd, have SlackBuilds scripts available, and I've verified that the latest built from Slack 14.2 runs on unRAID 6.3-rc3. This version of netcat is needed for things like connecting to Unix sockets, which is required to support virt-manager and other libvirt tools connecting via the qemu+ssh protocol, which opens an ssh session and pipes traffic to the specified Unix socket using netcat.
     https://slackbuilds.org/repository/14.2/network/netcat-openbsd/
     https://slackbuilds.org/repository/14.2/libraries/libbsd/
     You will need to remove the default nc package before it will let you build netcat-openbsd.
  13. It also appears that you are configuring libvirtd to be an unauthenticated listener on all interfaces. The less usual approach I am used to seeing is to install netcat-openbsd, which requires a different Slack package than the one provided in NerdPack, and configure the virt-manager connection from a command line:
     virt-manager -c qemu+ssh://root@tower/system?socket=/var/run/libvirt/libvirt-sock
     Once this has been run, assuming the server has a version of the netcat command supporting the -U switch for connecting to Unix sockets, it should be able to pass through to libvirtd. Once this connection has been added to virt-manager, it should persist across restarts. Furthermore, there is also a Homebrew tap on GitHub for virt-manager and virt-viewer, and it only requires a two-line modification to virt-manager.rb (brew edit virt-manager) to import a newer version of libvirt-python, bypassing a compile-time error which occurs on Sierra systems. Also, due to another bug with Sierra, this virt-manager will require the --no-fork switch to prevent a startup crash, but it can still be launched into the background by appending an ampersand (&) to the command line.
     E1: Okay, here's what you need for a more secure connection path that doesn't require a libvirtd.conf edit. You'll need libbsd and netcat-openbsd, and a Slackware 14.2 install or the live DVD to build them.
     1) Download the SlackBuilds and source packages for the above from the root account.
     2) Unpack libbsd.tar.gz.
     3) cd libbsd
     4) ln -s ../libbsd-0.8.3.tar.gz ./
     5) ./libbsd.SlackBuild
     Now you'll have a libbsd package in /tmp/.
     1) installpkg /tmp/libbsd*.tar.gz
     2) cd back to root, or .. from the above steps.
     3) Unpack netcat-openbsd.tar.gz.
     4) cd netcat-openbsd
     5) ln -s ../netcat-openbsd_* ./
     6) removepkg nc
     7) ./netcat-openbsd.SlackBuild
     Now you'll also have a netcat-openbsd package in /tmp/.
     Transfer both of these packages to a share on unRAID:
     1) scp /tmp/*.tar.gz root@<tower ip>:/mnt/user/<share>/
     Now you may install them from unRAID SSH or Telnet:
     1) cd <wherever you copied them>
     2) If you have NerdPack's GNU netcat installed, disable it in the NerdPack UI (which doesn't actually remove it), then run removepkg nc.
     3) installpkg libbsd*.tar.gz
     4) installpkg netcat-openbsd*.tar.gz
  14. What about QED? That supports snapshots as well, and I've at least "heard" there may be better performance from QED versus QCow2, but it needs testing.
  15. http://www.linux-kvm.org/page/Projects/auto-ballooning This would appear to indicate that it is really only designed for over-committing the memory on the host with VMs. It won't automatically relinquish VM memory unless there is pressure from the host side, or from another VM.
  16. Avast 2015 employs a hardware accelerated hypervisor by default, because virtualizing the entire operating system is easier than writing an AV scanning engine that is compatible with everything the conventional way. This has caused problems for people trying to run "unknown" virtualization software on their Windows machines, as well as people attempting to run Windows inside virtualization products that support nested virtualization.
  17. Configuring it to balloon up to the allocated size may require special work. By default, when creating virtual machines, the XML templates will pin the memory to the configured size at all times.
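     For reference, this is roughly what the relevant libvirt domain XML looks like. A hedged sketch, not unRAID's exact template: the element names (memory, currentMemory, memballoon) are standard libvirt, but the sizes here are made up. When both values are equal, the size is effectively pinned; setting currentMemory lower leaves room for the balloon to grow up to the memory ceiling.

```xml
<domain type='kvm'>
  <!-- Maximum allocation: the ceiling the balloon can inflate to. -->
  <memory unit='KiB'>8388608</memory>
  <!-- Current allocation at boot; the generated templates pin the size by
       setting this equal to <memory>. -->
  <currentMemory unit='KiB'>2097152</currentMemory>
  <devices>
    <!-- The balloon device itself; without it there is no ballooning at all. -->
    <memballoon model='virtio'/>
  </devices>
</domain>
```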
  18. There is no "tools" package for Linux guests under KVM+Qemu. However, Ubuntu versions prior to the switch to systemd will require you to install acpid for the guest to monitor ACPI events, such as the shutdown signal from the host.
  19. You will need to synchronize your /etc/passwd (user names and accounts) and /etc/shadow (actual hashed passwords) files. Well, I take that back. You'll just want to synchronize the entries for your users, you won't want to copy the whole files over, as they're altered by various plugins to add or remove users. Actually, hold that thought, someone else may know better. I don't really know if the actual share users are listed in any of the other configuration files. I do know that Samba is set to security = USER, which means that it checks passwords against the Unix passwd/shadow files. Bad me, I haven't actually configured my server to have any other users than root.
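     One way to move just the per-user entries, rather than the whole files, is a simple grep on the leading field. A sketch with fabricated sample files standing in for the source machine's passwd/shadow, and "alice" as an example user name:

```shell
# Fabricated stand-ins for the source machine's files (entries are made up).
printf '%s\n' 'root:x:0:0::/root:/bin/bash' \
              'alice:x:1000:100::/home/alice:/bin/bash' > passwd.src
printf '%s\n' 'root:$6$xyz$hash:17000:0:99999:7:::' \
              'alice:$6$abc$hash:17000:0:99999:7:::' > shadow.src

# Pull only one user's lines; these are what you'd append to the target
# machine's files, leaving the plugin-managed entries alone.
user=alice
grep "^${user}:" passwd.src > passwd.entry
grep "^${user}:" shadow.src > shadow.entry
cat passwd.entry shadow.entry
```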
  20. This is still a problem with the .plg file that is pointed to by 6.2.0 release candidates being a 404 link. (Forbidden is also S3's generic response for 404 errors, unless a custom 404 error page is configured and the bucket has generic web hosting turned on.)
  21. At the very least, it works with those Tesla GPUs because they are multiple independent GPUs on a single card. The only way to do this with a single dedicated GPU is to virtualize the GPU the way a desktop VM application does: presenting a false GPU with virtual acceleration capability to the VM and forwarding everything to the host. This also requires the hypervisor to have full GPU acceleration drivers. It would not require a full desktop environment, only full-screen presentation on attached monitors, but it would still put a heavy burden on the hypervisor.
  22. You'd have to be nuts to use a shingled drive in an array like this. Figure 20GB of the drive is dedicated write cache. Every time sustained writes fill that up, the drive locks in a busy cycle until it flushes the cache in a slow stream of read-modify-write cycles to the shingled storage area. Or you could wait for someone to invent a desktop file system and operating system that can handle this at the host level. Probably won't happen any time in the next decade for consumer or prosumer accessible software. Just because you can do it, doesn't mean that you necessarily should.
  23. This article has been updated as of September of 2016, and still links to that patch. I find no signs of the relevant changes in the specified file in the Git repository. I suppose it could be a test release, since the alternative is finding out that it still somehow causes XP to bluescreen, even though it technically shouldn't trigger unless the VM has an Apple SMC attached. It's really only relevant for installing and using 10.6 through 10.7.4, though. Skipping straight ahead to at least 10.8, or starting with a pre-installed image of 10.7.5, bypasses the need for this patch. It's really only needed for installing legacy software, not anything modern.
  24. My bad, it creates a channel, but to access it, you have to connect to a secondary port. Let me see... http://www.htpcbeginner.com/install-plex-web-tools-2-0/ You just need to place the bundle folder in the correct place, and check the Channels page from the Plex Home, and make sure you save the port(s).
  25. This patch appears to be mandatory for running the versions of Mac OS X listed in this topic on an SMP system. It claims it only affects PIIX-based "hardware", but I find the same error occurring on Q35-2.4, Q35-2.5, and Q35-2.7. And I cannot even make my Seabios boot loader boot into either Snow Leopard or Lion on a host-passthrough "uniprocessor" system, nor can I figure out how to simulate a uniprocessor system that will satisfy the installer. Getting this working with Seabios and a simple <kernel> loader is but one possible way to have a working system, which may then have Clover installed on it and be converted to an auto-booting OVMF machine. My purpose in using a system this old is compatibility targeting old machines with some Homebrew and Xcode trickery, and maybe eventually toying around with Rosetta.