rinseaid

Members · 39 posts
Everything posted by rinseaid

  1. I meant between rc2 and rc3. Based on gridrunner's report on rc3 and the similarity in our hardware I'm not confident it will help, but I'll test and report results.
  2. I have a Gigabyte X399 Designare running the F10 BIOS with a Threadripper 2950X. Windows 10 performance on 6.6.0-rc2 was very poor and I rolled back to 6.5.3. I haven't tried rc3, but I don't see anything in the changelog that makes me think this would be resolved. What I was seeing was extremely slow startups, including a significant delay before the TianoCore logo appeared, and then the spinning dots while Windows booted would sort of stutter. Boot-up took around 5 minutes total compared to 30-ish seconds on 6.5.3. Performance inside the VM was similar - lots of stutters and delays. I couldn't see anything of interest in Task Manager.
  3. Thanks for this, Vexorg. I was searching for a different issue (SAS spindown problems) and saw your post. I had been building a new server and seeing poor performance, but hadn't had time to research it. What I found was interesting - all my HP-branded SAS drives had write cache disabled, while all my Seagate drives had it enabled. These drives were split across two servers, so my assumption is that this was set on the drives themselves at some point and has nothing to do with unRAID. I enabled write caching on the HP drives and performance is back to where I'd expect, so you saved me a ton of time. Thanks again.
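     In case it helps anyone else, this is roughly how the cache setting can be checked and changed - a sketch only, assuming sdparm is available on your system and that /dev/sdX is one of the affected SAS drives:

       sdparm --get=WCE /dev/sdX          # WCE 1 = write cache enabled, 0 = disabled
       sdparm --set=WCE --save /dev/sdX   # enable write cache on the drive

     The --save flag writes the change to the drive's saved mode page so it persists across power cycles.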
  4. Just updating the built-in bzmodules file is such a simple solution - I was clearly overthinking it. Thanks again!
  5. I've been eager to try the unRAID 6.4 prerelease series, but until now hadn't had time to investigate getting ZFS working with it. I finally got a test machine and spent more time than I'm willing to admit getting it going. Using the script steini84 posted earlier in the thread, I figured out which packages from the Slackware 14.2/Current 64-bit branch needed to be installed. A few notes: the cxxlib library was removed from Slackware and needed to be replaced with a few other packages, and glibc was too old in Slackware 14.2 so it had to be pulled from the Current branch.

     With the new bzfirmware/bzmodules squashfs implementation in unRAID 6.4, I 'borrowed' portions of CHBMB's script (posted here: https://unraid.net/topic/61576-640-rc13-error-compiling-custom-kernel/) to prepare the system prior to compiling and then to build the new squashfs files. I've uploaded updated build.sh, bzfirmware/bzmodules and spl/zfs package files here: https://github.com/rinseaid/unraid-zfs

     I also modified steini84's plugin file to install the new spl/zfs packages (based on the latest 0.7.5), but bzfirmware and bzmodules need to be installed to /boot manually (rough sketch at the end of this post). I guess the plugin script could back up the originals and copy the files in, but I feel like it's a little messy to just overwrite whatever is there, and I wonder if there's a better way to do it that I don't know of.

     What I've put together, while working for me, is super hacky and I don't recommend anyone use it. I'm just not well versed enough to do this the 'right' way, or I would. steini84 - hope this saves you at least a little time for when 6.4 stable is released!
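     For what it's worth, the manual /boot step I have in mind looks something like this - just a sketch, with the source path being wherever your rebuilt images ended up:

       # back up the stock images on the flash drive first
       cp /boot/bzfirmware /boot/bzfirmware.stock
       cp /boot/bzmodules /boot/bzmodules.stock
       # then copy in the rebuilt images and reboot
       cp /path/to/rebuilt/bzfirmware /path/to/rebuilt/bzmodules /boot/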
  6. In my case, removing spaces from my VM names resolved the 500 error.
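     If you'd rather rename an existing VM than recreate it, doing it from the command line should work too - a sketch, using a hypothetical VM name and assuming the VM is shut down first (domrename only works on a stopped domain):

       virsh shutdown "Windows 10"
       virsh domrename "Windows 10" Windows10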
  7. The snapshots are not incremental, and unfortunately I'm not sure I have the time to implement this. As far as time stamps - a folder is created within your chosen destination with the name of the VM, and within that folder are time stamped folders that contain the VM backup files. There is no way to tag the filenames. I'll admit this is pretty quick and dirty and I'm hoping some form of backup is built into unRAID in the future.
  8. I know this is an old thread, but for anyone else searching I wanted to add that I see this too and haven't found a solution. I'm using a SuperMicro X10DRi motherboard with 2x Engineering Sample Xeon E5-2643 v3 CPUs. I've verified that the PSUs are delivering adequate power to the CPUs. I've tried disabling EIST and various C-states in the BIOS, but this didn't help. I also find the performance governor better, but eventually the CPU scales down to the lowest frequency. Core and package temperatures are well within limits (less than 70°C).

     I tried disabling the pstate driver via syslinux.cfg and then switching to its performance governor, and while all cores appear to be running at their full clock speeds when checking either /proc/cpuinfo or cpufreq-info, it's clear the cores are still throttling. This only seems to happen while gaming - as soon as I alt-tab or exit the game, the core speeds shoot right back up. Similarly, running a CPU stress test in the background while gaming stops the CPUs from scaling down, but obviously kills performance. I ran both prime95 and AIDA64 (with its GPU stress test) on their own and they ran for over an hour without exhibiting any CPU scaling.

     Switching back to a SuperMicro X10SRL-F and a Xeon E5-1650 v3 (non-ES) makes the issue go away. I previously had an ES E5-2693 v3 and experienced the same issues, so I'm wondering if there's something specific to either ES chips or E5-26xx chips.
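     For reference, this is roughly how I disabled the pstate driver - a sketch only, and your stock syslinux.cfg lines may differ slightly:

       label unRAID OS
         menu default
         kernel /bzimage
         append initrd=/bzroot intel_pstate=disable

     and then, after rebooting, switching the fallback acpi-cpufreq governor to performance:

       for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance > "$g"; done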
  9. I've been looking for a KVM backup script leveraging libvirt's snapshot and blockcommit features. The advantage of this method is that there is no downtime when taking backups, and they should also be application consistent. I've adapted the script found here (https://gist.github.com/cabal95/e36c06e716d3328b512b) for use with unRAID, to add additional functionality, and to fix some bugs.

     The script will back up either the VMs you specify or all of them using the '--all' parameter. VMs with spaces in their names are supported; if specifying VMs as a parameter, use double quotes around the VM name. VMs have to be running in order to take a snapshot, so the script will power up any VMs that are turned off and will then attempt a clean shutdown once the snapshot has been committed and the disks copied. There are global variables you can modify in the script to specify how long to wait for a VM to start before aborting the backup, and how long to wait for a VM to cleanly shut down before powering it off. CD-ROM and floppy drive images (I mean... if you have any...?) are automatically excluded from the backup. The script uses unRAID's built-in notification system, which you can configure to send emails with a detailed summary.

     Usage: vm-snapshot.sh <backup folder> <list of domains, comma separated, or --all to backup all domains> [max-backups (default is 7 if not specified)]

     The script can be found on GitHub: https://github.com/rinseaid/unraid-vm-snapshot
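     A couple of example invocations - the script location, backup share, and VM name here are just placeholders:

       # back up every defined VM, keeping the default 7 backups per VM
       /boot/scripts/vm-snapshot.sh /mnt/user/backups/vms --all

       # back up a single VM with a space in its name, keeping only the 5 most recent backups
       /boot/scripts/vm-snapshot.sh /mnt/user/backups/vms "Windows 10" 5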
  10. I also wanted to get the ZFS Event Daemon (zed) working on my unRAID setup. Most of the files needed are already built into steini84's plugin (thanks!), but zed.rc needs to be copied into the file system at each boot. I created a folder /boot/config/zfs-zed/ and placed zed.rc in there - you can get the default from /usr/etc/zfs/zed.d/zed.rc. Add the following lines to your go file:

        #Start ZFS Event Daemon
        cp /boot/config/zfs-zed/zed.rc /usr/etc/zfs/zed.d/
        /usr/sbin/zed &

      To use unRAID's built-in notifications, and to avoid having to set up a mail server or relay, set your zed.rc with the following options:

        ZED_EMAIL_PROG="/usr/local/emhttp/webGui/scripts/notify"
        ZED_EMAIL_OPTS="-i warning -s '@SUBJECT@' -d '@SUBJECT@' -m \"\`cat $pathname\`\""

      $pathname contains the verbose output from ZED, which will be sent in the body of an email alert from unRAID. I have this set to an alert level of 'warning' as I have unRAID configured to always email me for warnings. You'll also want to adjust your email address, verbosity level, and set up a debug log if desired. Either place the files and manually start zed, or reboot the system for this to take effect.

      Pro tip: if you want to test the notifications, zed will alert on a scrub finish event. If you're like me and only have large pools that take hours/days to scrub, you can set up a quick test pool like this:

        truncate -s 64M /root/test.img
        zpool create test /root/test.img
        zpool scrub test

      When you've finished testing, just destroy the pool.
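      If you want to check the notification path itself without waiting for a scrub event, calling the notify script by hand with the same flags used in zed.rc above should raise an unRAID alert (and an email, if you have warnings emailed):

        /usr/local/emhttp/webGui/scripts/notify -i warning -s "zed test" -d "zed test" -m "testing the zed notification path"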
  11. For anyone trying to set up zfs-auto-snapshot to use Previous Versions on Windows clients: I placed the zfs-auto-snapshot.sh file from https://github.com/zfsonlinux/zfs-auto-snapshot/blob/master/src/zfs-auto-snapshot.sh in /boot/scripts/zfs-auto-snapshot.sh and made it executable with chmod +x zfs-auto-snapshot.sh

      I found that no matter which way I set the 'localtime' setting in smb.conf, the snapshots were not adjusting to local time and were shown in UTC. To fix this, I removed the --utc parameter on line 537 of zfs-auto-snapshot.sh so that it reads:

        DATE=$(date +%F-%H%M)

      I then created cron entries by creating /boot/config/plugins/custom_cron/zfs-auto-snapshot.cron with the following contents:

        # zfs-auto-snapshot.sh quarter hourly
        */15 * * * * /boot/scripts/zfs-auto-snapshot.sh -q -g --label=04 --keep=4 //
        # zfs-auto-snapshot.sh hourly
        @hourly ID=zfs-auto-snapshot-hourly /boot/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=00 --keep=24 //
        # zfs-auto-snapshot.sh daily
        @daily ID=zfs-auto-snapshot-daily /boot/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=01 --keep=31 //
        # zfs-auto-snapshot.sh weekly
        @weekly ID=zfs-auto-snapshot-weekly /boot/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=02 --keep=8 //
        # zfs-auto-snapshot.sh monthly
        @monthly ID=zfs-auto-snapshot-monthly /boot/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=03 --keep=12 //

      Edit: I switched the cron entries to use specific times of day, days of the week, etc., primarily due to the effect of reboots on unRAID's cron handling - I would get inconsistently spaced snapshots with the above configuration:

        # zfs-auto-snapshot.sh quarter hourly
        */15 * * * * /boot/config/scripts/zfs-auto-snapshot.sh -q -g --label=04 --keep=4 //
        # zfs-auto-snapshot.sh hourly
        0 * * * * ID=zfs-auto-snapshot-hourly /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=00 --keep=24 //
        # zfs-auto-snapshot.sh daily
        0 0 * * * ID=zfs-auto-snapshot-daily /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=01 --keep=31 //
        # zfs-auto-snapshot.sh weekly
        0 0 * * 0 ID=zfs-auto-snapshot-weekly /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=02 --keep=8 //
        # zfs-auto-snapshot.sh monthly
        0 0 1 * * ID=zfs-auto-snapshot-monthly /boot/config/scripts/zfs-auto-snapshot.sh --quiet --syslog --label=03 --keep=12 //

      Run 'update_cron' to immediately enable the custom cron entries. The labels differ from the zfs-auto-snapshot defaults for better compatibility with Samba.

      For the Samba shares, I placed the below in /boot/config/smb-extra.conf:

        [data]
        path = /mnt/zfs/data
        browseable = yes
        guest ok = yes
        writeable = yes
        read only = no
        create mask = 0775
        directory mask = 0775
        vfs objects = shadow_copy2
        shadow: snapdir = .zfs/snapshot
        shadow: sort = desc
        shadow: format = zfs-auto-snap_%S-%Y-%m-%d-%H%M
        shadow: localtime = yes

      Run 'samba reload' to refresh your Samba config. After the first scheduled snapshot is taken, you should be able to see the snapshots in the Previous Versions dialog on a connected Windows client. You'll need to modify this with your desired snapshot intervals, retention, and path names. This configuration is working well for me - I hope it helps anyone else out there trying to get ZFS snapshots working with shadow copy for Windows clients.
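      Once the first scheduled snapshot has run, a quick way to confirm that snapshots are being created (and named in the format Samba expects) is:

        zfs list -t snapshot -o name,creation -s creation | grep zfs-auto-snap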
  12. Sorry for resurrecting an old thread, but for anyone who may come to this thread from Google, I have used dmacias' great work and adapted it for use with Proxmox. I have posted the info to the Proxmox forums here: https://forum.proxmox.com/threads/wake-on-lan-script.32449/#post-160160
  13. Edit: as dmacias notes below, this was posted in the wrong thread - the link I provided for Proxmox is a WoL proxy, which is not what the plugin in this thread is for. dmacias, thank you very much for this! I hope you don't mind, but I have modified the code for use with Proxmox and created a quick guide for its usage here: https://forum.proxmox.com/threads/wake-on-lan-script.32449/#post-160160