positronicP


Posts posted by positronicP

  1. 40 minutes ago, bugsysiegals said:

     

    Do you know if the script will convert a root folder into a dataset if it's not already or will it only convert sub-folders into datasets if the root folder is already a dataset?

     Appdata and domains are usually root folders, and the script converts them by default.

  2. 1 hour ago, bugsysiegals said:

    That said, I just found that if I use the plugin to create a dataset, I can then move all sub-folders/files into that dataset, delete the original folder, and rename the dataset.

     

    That's essentially what the Si1 userscript in the video does, albeit automated and with some additional safety checks.

  3. On 10/11/2023 at 7:25 AM, manofoz said:

    Write seems to be read on the storage plot and read is always zero. It shows this:

     

    [screenshot: System Stats storage plot]

     

    But my array is doing this:

    [screenshot: actual array activity]

     

    My Installed Version is 2023.02.14.

     

    Same problem. Write is showing read speeds, and Read is always zero.

     

    This is for the System Stats plugin.

  4. After some testing, I was able to rename a zpool with the following commands in the terminal:

     

    Remove the pool to be renamed:

    `zpool export pool_to_rename`

    Re-add the pool under the new name:

    `zpool import pool_to_rename new_pool_name`

     

    The only caveat is that ZFS Master wasn't able to see pools with a period in the name. `zpool list` would still show them, but ZFS Master would not.
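
    For reference, here's the same rename end to end with a check at the finish - a minimal sketch where the pool names are placeholders:

    ```bash
    # export the pool first; it must not be in use during the rename
    zpool export pool_to_rename

    # re-import it under the new name
    zpool import pool_to_rename new_pool_name

    # confirm the pool now shows up under its new name
    zpool list
    ```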

  5. On 9/29/2023 at 9:05 AM, JorgeB said:

    Stop the array, click on the pool, click on the pool name to change it, restart array.

     

    This will rename an unRaid pool, but I don't see a GUI way to rename a zpool yet, including via the ZFS Master plugin.

     

    To answer the original question, THIS is how to do it via the terminal command line. I haven't tried it, but all of the zfs commands I've used (thus far) have worked as expected.

  6. Assuming you've set up Krusader in a Docker container, you'll have to map a container path to it (similar to how you mapped a path for shares and unassigned disks).

     

    The steps are the same, but set the Host Path to /mnt if you want to see all your unRaid drives, or /mnt/nameOfPoolDevice if you only want to map the pool device.
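
    As a rough equivalent of what that template mapping does, here's a hedged docker run sketch - the image name is a placeholder, so keep whatever your actual Krusader template uses:

    ```bash
    # map the whole /mnt tree into the container so Krusader can browse
    # every pool, share, and unassigned disk on the host
    docker run -d --name krusader \
      -v /mnt:/mnt \
      your/krusader-image
    ```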

     

  7. On 12/3/2022 at 3:36 PM, SggCnn93 said:

    But with both I get this error:

    "Error response from daemon: Pool overlaps with other one on this address space"

     

    I believe you have to set a DHCP pool range for each network interface when using multiple. You have two pipes coming in, each with its own DHCP service within Docker, and both are drawing from the same default pool. Since neither DHCP service knows what the other is doing, you have to manually specify the pool space for each. From the photos: 'DHCP pool: not set'.

     

    I'm not 100% sure on Docker network architecture, but I'm trying to do the same thing you are, and that's my understanding.
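
    If it helps, this is roughly what non-overlapping pools look like when creating the networks by hand; the interface names and subnets are placeholders for whatever your setup actually uses:

    ```bash
    # each parent interface gets its own subnet and a distinct IP range,
    # so the two networks can't collide in the same address space
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 --ip-range=192.168.1.128/25 \
      -o parent=eth0 br0_custom

    docker network create -d macvlan \
      --subnet=192.168.2.0/24 --ip-range=192.168.2.128/25 \
      -o parent=eth1 br1_custom
    ```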

  8. On 2/22/2023 at 6:07 PM, tjb_altf4 said:

    As long as you:

    • don't use them for parity,
    • don't use them a lot,
    • don't mind much slower writes when the disk is getting full,
    • and don't use them to rebuild failed disks,

    ...then you should be fine.

     

    Oh, and they need TRIM to maintain performance and prevent write amplification, which the Unraid array doesn't support.

     

     

    Someone tell Dropbox. 90% of their deployment is SMR.

  9. Went through an unplanned USB replacement this week. Not sure where my backups were going, so I had to use an old manual .zip backup from years prior. Recovery was surprisingly painless.

     

    Backup .zip Creation:

    • Main TAB ⇨ Flash [under Boot Device] ⇨ Flash Backup

    Restore

    • Burn a new USB with the unRaid USB Creator; select Local Zip as the version
    • I used a brand-new, straight-from-the-package USB and didn't do any extra formatting/renaming

    License Transfer

    • Click on Unregistered in the header; the process is self-explanatory

     

    I updated the plugins/OS and don't appear to have lost any of the configuration. Docker containers/VMs are all there and current, including those I created AFTER the backup. I know they're not stored on the USB, but it was still a pleasant surprise.

  10. Story:

    Decided to futz with my unRaid hardware for the first time in probably 3 years. Something gets caught, the USB gets bent, the drive is no longer recognized. I have no idea where I saved my most recent USB backup, but it wouldn't be up to date anyway. However, I do find one of my first backups in Google Drive. I haven't changed hardware since the original build, so I decided to give it a shot. I burn it to a USB with the unRaid Tool [which is AMAZINGLY simple, shout out to the devs], and boot. I go to register the new key, but it won't take. OS too old was my thought, better upgrade. I couldn't access the upgrade server, which is when I started to worry.

     

    Problem:

    Exact same symptoms as you: network failure errors. I realized it wasn't an AWS issue after clicking the Apps tab and getting an error about not being connected to the internet, which was odd because I was connecting to unRaid via a browser.

     

    Solution:

    Turn off manual IPv4 address assignment. An earlier Si1 video, which I followed to the letter, changed address management from automatic to manual. Even though I had rebooted my router, I thought there might be an unresolved address conflict in the router's DHCP table.

     

    Settings Tab ⇨ IPv4 address assignment ⇨ Automatic

     

    Didn't even have to reboot. Everything came back to life, even though my router didn't assign unRaid a new IP. Why it worked I may never know, but I hope your solution is just as simple.

  11. 1 hour ago, xkraz said:

    I did get this working. I use 10G nics in unraid. I am using e1000e as the nic driver but the speeds are very slow and I cannot seem to connect to the internet from the Xpenology VM.

    If you can download packages, you have internet.

    You will get a connection error if you try to log in to Synology or use certain features like QuickConnect. You need to generate a working serial number to enable them.

  12. Came across this thread, so I thought I'd give it a whirl. I used the instructions as posted by @bmac6996 and it worked right out of the box. To clarify, here's my experience:

    • Download bootloader v1.03b for DS3615xs, extract synoboot.img
    • Add VM -> CentOS
      • BIOS: SeaBIOS
      • Machine: Q35 (3.1 is the latest as of this post)
    • Primary vDisk: point to synoboot.img extracted earlier
      • Primary vDisk Bus: USB
    • 2nd (and additional) vDisk Location: I created a 10G vDisk, Type: qcow2. Someone earlier in the thread passed their disks through instead
      • 2nd vDisk Bus: SATA
    • UN-select the 'Start VM after creation' box at the bottom, then Create
    • Edit the VM in XML View
      • Find <model type='virtio'/> and change 'virtio' to 'e1000e' (see the sketch after this list)
    • Save and start the VM. If you view VNC it will show the bootloader screen
    • It took about a minute, but DiskStation appeared in find.synology.com
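
    If you'd rather make that NIC change from the terminal instead of the GUI XML view, here's a hedged sketch with virsh; the VM name DiskStation is a placeholder for whatever you named yours:

    ```bash
    # check the current NIC model in the VM definition
    virsh dumpxml DiskStation | grep "model type"

    # open the definition in an editor and change
    # <model type='virtio'/>  ->  <model type='e1000e'/>
    virsh edit DiskStation
    ```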

    From here on out I followed the Synology onscreen installation instructions. It downloaded everything it needed from Synology's servers. When I went looking for the bootloader, there was mention of PAF files, etc. I didn't need any of that since it all downloaded automatically.

     

    I skipped the 'link Synology account / QuickConnect' part. Not sure if you need to generate a unique serial, but since I'm just playing around, I didn't bother.

     

    Again, read through the post by @bmac6996. He explained the reasons behind the choices.

  13. Odd - I am experiencing the same issue. I updated from the last stable 6.6 release to 6.7.1.

     

    After creating new VM's for Ubuntu and ElementaryOS, neither will output to the GPU. Both boot fine, and I can use VNC to access if VNC is the only graphics option. If I add both VNC and GPU passthrough, VNC will display 'Guest has not initialized the display (yet)'. Tried Ubuntu with both SeaBIOS and OVMF.

     

    However, my macOS High Sierra and Fedora VMs created before updating still use the GPU as normal. I'm using an Nvidia 1050Ti without an additional ROM BIOS.

  14. There seem to be a number of working Mac VMs in this thread. Is everyone getting sound out of their GPU? Any specific kext requirements?

     

    I passed through a Sapphire Nitro RX580 using SpaceInvader's instructions. Everything worked EXCEPT for sound - no sound devices would show up in the VM. I did indeed pass through both the sound and video parts of the card.

     

    Coincidentally I have the same issue with a 1050Ti in High Sierra. Everything works except for sound.

  15. 3 hours ago, RSQtech said:

    Got to understand... Im the Forrest Gump of the linux world

    I followed what @dee31797 posted - here are the itemized steps:

    1. Turn on netcat-openbsd in Nerd Pack, apply, and restart the computer
    2. Open the virt-manager Docker GUI - you'll still have the error
    3. In the Docker container, go to File > Add Connection
      • Hypervisor: QEMU
      • Select connect to remote host over SSH
      • Hostname: the internal network IP of your server
      • Hit connect

    It will then ask you to accept the certificate, then for your unRaid password. Once cleared, you're in.
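
    Under the hood this is just a standard libvirt remote connection, which is also why netcat is needed on the server. A hedged equivalent from any machine with virsh installed - the IP is a placeholder:

    ```bash
    # connect to Unraid's libvirt daemon over SSH and list the VMs;
    # the qemu+ssh transport relies on netcat being present on the remote host
    virsh -c qemu+ssh://root@192.168.1.100/system list --all
    ```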

  16. Much appreciated - I copied the backup XML to a new VM template and it worked, no problem. I'll admit I was holding my breath at boot.

     

    I previously tried something similar with a Win10 VM and encountered all sorts of issues. I did a before-and-after comparison on the XML and noticed the UUID had changed. Everything seemed to fall apart from there.

     

    Removing the topology is the common suggestion for allowing 'odd' core configurations in the MacOS VM threads. Is there a better way? Simply using the CPU Pinning GUI or template manager will add cores, but then the VM won't boot.
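
    For anyone searching later, here's a hedged sketch of that topology-removal workaround from the terminal; the VM name MacOS is a placeholder, and this assumes the stock libvirt tools on Unraid:

    ```bash
    # dump the current definition, drop the <topology .../> line inside <cpu>
    # so libvirt stops requiring vcpus to equal sockets*cores*threads,
    # then redefine the VM from the edited XML
    virsh dumpxml MacOS > /tmp/macos.xml
    sed -i '/<topology /d' /tmp/macos.xml
    virsh define /tmp/macos.xml
    ```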