nraygun

Members
  • Content Count: 15
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About nraygun
  • Rank: Member
  1. I do have a DVD drive and would like to leave it there. Plus I hear that the onboard SATA port isn't as fast as a newer controller would be (3 Gbps vs. 6 Gbps?). Then again, I could attach an external DVD drive for the few times I rip something, and it would give me another bay for more bits! Is this what everyone does on their R710 - use the DVD bay with a tray for their SSD cache drive?
  2. I currently have my SSD cache drive on a port on my flashed H200 controller. I'm seeing errors, and I hear it's because that controller, even when flashed, doesn't support SSDs. Can you recommend a good SATA controller (say, 2 ports) for an R710 server? I saw some for $20 on Amazon, but I'm not sure they're "server grade".
  3. Right. I would use one or the other. Are you saying that once you've made a choice to go the GPU route you can't go back to VNC?
  4. I'm having a hell of a time trying to get a Linux VM to work with a passed-through GPU and VNC. I know the passthrough works because I have it working in a Windows VM, and I shut that VM down before fiddling with the Linux VM. I can VNC to the Linux VM with graphics set to VNC, but when I change the graphics to the Nvidia card, I can no longer access it. And when I change it back to VNC, I get a message saying the graphics have not initialized (yet), or "internal error: qemu unexpectedly closed the monitor: 2019-08-08T13:59:37.609434Z qemu-system-x86_64: -device pcie-pci-bridge,id=pci.8,bus=pci.2,addr=0x0: Bus 'pci.2' not found". I'd expect to be able to access the VM when I put the graphics back to VNC. I'm using MX Linux and X2Go. Any ideas?
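     In case it helps with diagnosing, here's roughly how I've been comparing the two configurations - dumping the libvirt XML before and after the graphics switch and looking for leftover pcie-pci-bridge entries ("mxlinux" is just a placeholder for the VM name, and this is only what I'm trying, not a known fix):

       # dump the definition while the GPU is assigned
       virsh dumpxml mxlinux > mxlinux-gpu.xml
       # switch graphics back to VNC in the GUI, then dump it again
       virsh dumpxml mxlinux > mxlinux-vnc.xml
       # diff the two and check whether a stale pci bridge/controller is still referenced
       diff mxlinux-gpu.xml mxlinux-vnc.xml
       # if so, remove it by hand
       virsh edit mxlinux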
  5. If I'm on 6.7.1 with a cache drive and am experiencing no issues, is it advisable to go to 6.7.2? Or should I leave well enough alone given this investigation will probably yield yet another update?
  6. It's wonky, but it gives me what I need. I created a FreeDOS VM and called it scripts. Inside my borg backup script, I start off with a "virsh start scripts" and end the script with a "virsh destroy scripts". The scripts VM shows as running while the borg script is in progress and as stopped when it's not. Close enough.
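     The wrapper itself is nothing fancy - roughly this, with the repo path and source directory as placeholders for my real ones:

       #!/bin/bash
       # mark the backup as "running" on the dashboard by starting the dummy VM
       virsh start scripts
       # the actual backup (repo and source paths are placeholders)
       borg create /mnt/disks/backup1/backup1::{now} /mnt/user/documents
       # mark it as finished again
       virsh destroy scripts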
  7. Does anyone know if there is a way to post the status of a script to unRaid's dashboard? I'd like to see if my backup script is still running right on the dashboard instead of going into the CA User Scripts area.
  8. I popped in a 2TB drive (WD Green) to use as a backup drive and started the preclear on it last night. When I checked it this morning, it was in a "stalled" state. The log shows this:
     Jun 20 06:51:02 preclear_disk_WD-XXX: Zeroing: progress - 90% zeroed
     Jun 20 07:06:08 preclear_disk_WD-XXX: smartctl exec_time: 1s
     Jun 20 07:20:02 preclear_disk_WD-XXX: Zeroing: progress - 95% zeroed
     Jun 20 07:53:26 preclear_disk_WD-XXX: smartctl exec_time: 10s
     Jun 20 07:53:37 preclear_disk_WD-XXX: smartctl exec_time: 21s
     Jun 20 07:53:47 preclear_disk_WD-XXX: smartctl exec_time: 31s
     Jun 20 07:53:47 preclear_disk_WD-XXX: dd[30448]: Pausing (smartctl exec time: 31s)
     Jun 20 07:53:57 preclear_disk_WD-XXX: smartctl exec_time: 41s
     Jun 20 07:54:08 preclear_disk_WD-XXX: smartctl exec_time: 52s
     Jun 20 07:54:18 preclear_disk_WD-XXX: smartctl exec_time: 62s
     Jun 20 07:54:18 preclear_disk_WD-XXX: killing smartctl with pid 30816 - probably stalled...
     Any ideas what went wrong here? I just hit continue and it looks like it's on the post-read task.
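     While the post-read runs, I'm also spot-checking the drive's SMART data by hand to make sure nothing is quietly getting worse (sdX is a placeholder for whatever device this disk shows up as):

       # full SMART report for the drive being precleared
       smartctl -a /dev/sdX
       # just the attributes I'm watching
       smartctl -A /dev/sdX | grep -i -E 'reallocated|pending|uncorrect'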
  9. How about handling of VMs in general? Should I just shut them down after I'm done with them?
  10. I'm new to unRaid and liking it so far! I created a couple of VMs for some specific tasks, and I'll only run them when I need them. Previously I was just running VirtualBox on my Linux box and firing them up when I needed them; then I would close them, which would save the machine state and shut down the VM. Is there an equivalent to this in unRaid? Right now, I'm just doing a graceful shutdown in the VMs to stop them, then restarting them when I need them. I'd rather pick up where I left off instead of waiting for the VM to boot up again. I tried putting the VMs in the Paused state, but then when I restart my unRaid server, they come back as Stopped. Is there a way to shut down unRaid VMs and save their state so they can simply be restored from the saved state later? And are snapshots supported? (I'll keep Googling around in the meantime.)
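      So far the closest thing I've found is libvirt's managed save, which I can at least trigger from the command line. I'm not sure yet how (or whether) the unRaid GUI exposes it, so treat this as an assumption on my part ("mxlinux" is just a placeholder VM name):

        # save the VM's state to disk and stop it (like "close and save state" in VirtualBox)
        virsh managedsave mxlinux
        # the next start restores from the saved state instead of cold booting
        virsh start mxlinux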
  11. Got it! Good ol' IRC... Just add "-o allow_other" to the borg mount command like this: sudo borg mount -f -o allow_other /mnt/disks/backup1/backup1 /mnt/user/borgmount
  12. I also tried setting up a rootshare as described in one of Space Invader's videos. The borgmount directory doesn't show up. I tried my own user account, root, and admin as the accounts. No go.
  13. I'm a noob still experimenting with unRaid 6.7.0 and borg 1.1.9. How can I mount a borg repository such that it's available via a share? I can mount the repository to a directory in unRaid in the terminal, and the contents are displayed correctly in another terminal.
      I tried to create a share, then mounted the borg repository to that share, expecting the contents to appear. I connect to this share as admin, but it only shows the contents of the root of the unRaid server, and oddly, the mount point directory doesn't appear anywhere in that directory structure. I created a share called borgmount, then issued this command:
      borg mount -f /mnt/disks/backup1/backup1 /mnt/user/borgmount
      The borg repository is on an unassigned disk. If I connect as admin to the disk where the share's directory resides, it's empty. I noticed that before issuing the command, the share has an owner:group of nobody:users, but while the borg mount command is running it's root:root. There's probably some sort of permissions thing going on, but I can't figure it out. Any ideas?
  14. So then how is this solved? A few sites say this may be due to packaging, while other sites say something else. I'm not familiar enough with Python to know what they're talking about. Is the version of borg contained in this package a Python script versus a compiled one, or something?
  15. Can you please update borg backup to the latest version 1.1.9? Keep up the great work!