
alturismo

Members
  • Content Count

    781
  • Joined

  • Last visited

Community Reputation

58 Good

1 Follower

About alturismo

  • Rank
    Advanced Member
  • Birthday 06/07/1973

Converted

  • Gender
    Male
  • Location
    Germany


  1. You may also take a look here to check whether IOMMU is enabled and, as mentioned above, what your IOMMU groups look like before applying the ACS override; the GPU most likely should be fine without it.
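     A minimal sketch for checking this from the Unraid console (standard sysfs layout; the device addresses will of course differ per system):
       # confirm the kernel actually enabled the IOMMU
       dmesg | grep -i -e DMAR -e IOMMU
       # list every IOMMU group and the PCI devices it contains
       for d in /sys/kernel/iommu_groups/*/devices/*; do
         n=${d#*/iommu_groups/}; n=${n%%/*}
         printf 'IOMMU group %s: ' "$n"
         lspci -nns "${d##*/}"
       done
     If the GPU already sits in its own group here, the ACS override should not be needed.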
  2. Since v18 I changed to this manual update in the Nextcloud shell: sudo -u abc php /config/www/nextcloud/updater/updater.phar and since then no more issues at all. At the beginning the web UI update always worked, then from some version on it got stuck on backing up the old version. The manual update procedure from the link in post 1 also works, but this is simple and working.
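     For reference, a sketch of the same call issued from the Unraid host instead of from a shell inside the container (the container name nextcloud is an assumption, adjust it to whatever yours is called):
       docker exec -it -u abc nextcloud php /config/www/nextcloud/updater/updater.phar
     Passing -u abc to docker exec runs the updater as the abc user directly, so sudo inside the container is not needed.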
  3. @jude and if your mount point for /config is /mnt/user/appdata/binhex-delugevpn, then I'm out of ideas ...
  4. @jude maybe remove the space from your ovpn file
  5. @pharpe you are probably in the wrong place, as this is the Unraid forum for this Docker. Maybe a tip though: I see your local mount point for /config is /apps/docker/deluge/config ...
  6. Thanks, I thought so. After going through UAD and removing the partition, it worked out as expected, so thanks for the hint about the release notes; I actually forgot there was something, as I had only read about it here and in a comment in the bug report thread, my fault. Maybe it would be nice if there were a notice or an option in the Unraid GUI itself to do this, i.e. wipe the partition ... thanks again.
  7. Yes I am, but unfortunately it didn't ... it stayed on the same start at sector 64 as posted above after mounting the disk in UAD again. After destroying and formatting it with UAD it is now like you described, I hope:
     Disk /dev/nvme0n1: 476.96 GiB, 512110190592 bytes, 1000215216 sectors
     Disk model: Samsung SSD 950 PRO 512GB
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: dos
     Disk identifier: 0x00000000
     Device         Boot Start        End    Sectors  Size Id Type
     /dev/nvme0n1p1       2048 1000215215 1000213168  477G 83 Linux
     But I had also understood that newly formatted disks in Unraid would be formatted this way, and nope ... not here, actually. I still have my main cache drive to do, as it is also still on the sector 64 start point; I can test again in Unraid if it helps ... maybe there is a procedure for how to force a format in Unraid?
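     As a hedged sketch of one way to force this from the console, outside the GUI (an assumption on my part, not a documented Unraid procedure; both commands destroy the partition table, so back up first and only run them with the array stopped and the device unassigned):
       # clear all partition-table and filesystem signatures so the disk shows up as unformatted
       wipefs -a /dev/nvme0n1
       # or equivalently, zero the first MiB where the MBR/GPT lives
       dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
     After that, assigning the disk again should make Unraid offer a fresh format with its current partition layout.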
  8. Ok, I tried now to format the disk anew to get the new 1 MiB alignment, but even after the format it is still at Start 64. What did I do: stop the array, change the FS to BTRFS to force a format, start the array, format to BTRFS, stop the array, change the FS back to XFS, start the array, format. The disk is blank now as expected, but still at the same alignment as before. Maybe a hint at what I have done wrong? Beta .25; do I have to format with a different approach, through UAD somehow? Result after formatting:
     Disk /dev/nvme0n1: 476.96 GiB, 512110190592 bytes, 1000215216 sectors
     Disk model: Samsung SSD 950 PRO 512GB
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: dos
     Disk identifier: 0x7aff88bb
     Device         Boot Start        End    Sectors  Size Id Type
     /dev/nvme0n1p1         64 1000215215 1000215152  477G 83 Linux
     Ok, after using UAD to format the disk it is now at Start 2048, so Unraid still formats differently; it seems I misunderstood.
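     To double-check the result without reading the raw fdisk table, parted can verify the alignment directly (a small sketch; partition number 1 is assumed):
       parted /dev/nvme0n1 align-check optimal 1
       # prints "1 aligned" when the partition start sits on parted's optimal boundary (1 MiB by default)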
  9. Maybe as a tip, as I'm in the same boat: I was testing around and in the end I'm using AnyDesk for this use case, pretty decent performance, and copy & paste ... works. Also Chrome Remote Desktop works behind Checkpoint. Sadly RDP or native VNC don't ... not without IT support to open access.
  10. Out of fun I have now deleted all my Dockers and also checked that I have no orphaned Docker images left ... this is my Docker tab in advanced mode (show orphan images): it is completely empty, as expected. Then I waited a while so the Docker size could update ... checking container size in the Docker tab: as expected. Now checking Unraid: it still says 30 % filled ... taking a look at the Docker settings, whatever these used 6.5 GB are ... they don't come from the Dockers or their paths ... are there always "leftovers" kept when doing something new ...? I don't know, but like this it is hard to tell whether something may actually be wrong or not ... scrub, prune, etc. ... won't free up the space, so I'll redo my Docker image now to get a "real" result in my terms. But this is what I wanted to point at: there is sometimes something wrong inside the Docker image which ends in wrong numbers or costs time searching for Docker setup errors.
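     A sketch of what can be checked from the console to see where that space sits (standard Docker and btrfs commands; /var/lib/docker is where Unraid loop-mounts docker.img, adjust if yours differs):
       docker system df                      # space claimed by images, containers, volumes and build cache
       docker system prune -a                # remove stopped containers and all images not used by any container
       btrfs filesystem df /var/lib/docker   # allocation inside the btrfs docker image itself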
  11. Thanks a lot. Ok, I'll leave it running for a little while now because I just checked: my 1st cache drive is the same, so I'd need to back up both, reformat, and copy back.
  12. Disk /dev/nvme0n1: 476.96 GiB, 512110190592 bytes, 1000215216 sectors
      Disk model: Samsung SSD 950 PRO 512GB
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0x7aff88bb
      Device         Boot Start        End    Sectors  Size Id Type
      /dev/nvme0n1p1         64 1000215215 1000215152  477G 83 Linux
      So, this is the 2nd cache pool device I currently use; like you described, it is not the 1 MiB alignment. So your suggestion would be: back up, format, copy back?
  13. Maybe a question: is there a way to check whether the disks are already formatted this way? I switched an NVMe drive (VMs) to a 2nd cache pool drive just now: simply created the new pool and assigned the disk. All files are still there (no formatting prompt), and all VMs work fine after adjusting the path to the vdisk. So either I misunderstood (that older formatted disks are not compatible with beta .25 and up) or my disks were already properly formatted?
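     A minimal check per disk, using the same fdisk output as in the posts above: look at the Start column of the data partition.
       fdisk -l /dev/nvme0n1
       # Start 2048 = partition begins on the 1 MiB boundary (the new layout)
       # Start 64   = the older 4K-aligned layout
     The device path /dev/nvme0n1 is just the example from above; use the path of the disk you want to check.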
  14. Hi, it seems I have an issue now when trying to move my VMs from UAD to a 2nd cache pool: my libvirt service fails to start after shutting down the VM system and trying to restart it. I made a reboot now, with the same result. Any chance to "repair" it, or is the only solution to wipe the img file, redo the VMs with the existing vdisks and hope I match the settings? ### EDIT: sorry, my fault, I just saw that I had set a wrong path for the default VMs ... logs and diags attached
  15. Thanks, true, wrong phrasing; the question should have been SATA or PCIe mode for the M.2, but as @dlandon mentioned, it doesn't matter.