jonfive

Members · 66 posts

jonfive's Achievements

Rookie (2/14) · 0 Reputation · 1 Community Answer

  1. Just installed, love it. Is the 4GB container size normal? I've got everything pointing to /mnt/user folders.
  2. We were looking into it on Discord #containers. Turns out that when it's using a docker directory, it reflects everything on the cache. I'm still going to nuke everything and start clean, given I won't lose any data.
  3. I was thinking that if I went into the console for any of the containers, I'd be able to see what it used *locally*, but it's showing me the entire cache. Could I somehow get in as the docker user and start flipping through these directories? (See the sketch after this list.)
  4. Alright. At one point or another, a container was exploding in size. I want to wipe out the entire docker directory and start over, one container at a time, to figure out which one had the problematic config. My issue: I have a few database instances renamed for their own usage. How can I reinstall those? Is there a better way of going about this than nuking it and starting over?
  5. UPDATE: It DOES work, though I need to test its long-term viability, since there are complaints from people on the forum about I/O errors when using scsiblock (see below).

     For this test, I created another USB with a trial license on bare metal, made a few folders, and stuck some ISOs on it. Installed Proxmox, attached the USB device, set first boot to that USB device, and ran the qm set line below. Started the VM; the disk showed up correctly, pre-placed in the right disk# slot without showing "missing". Opened the 6.11 file manager, disk 1, and the files were there.

     qm set vm# -scsi# /dev/disk/by-id/YourDiskInformation,scsiblock=1

     Without scsiblock it just shows up as a QEMU hard disk, so it's necessary for the time being. (A filled-in example is sketched after this list.) The particular part of the Proxmox docs that mentions the potential scsiblock issue:

     If anyone has any further information or knowledge on how high/low memory fragmentation occurs in Unraid, that'd be handy.
  6. Timestamped (if the timestamp doesn't work, jump to 9:08). This virtualized TrueNAS video is the closest representation of Unraid drive usage I could find for passing through individual drives without the entire controller. I'm not really looking to pass through the whole controller, as I want the remaining drives to be accessible to Proxmox (my unit uses a SAS expander, so I'm sorta stuck).

     TL;DW: He basically did stubbing like we would to pass a GPU through, but with sda/sdb/sdc etc. going into Proxmox's VirtIO SCSI by ID. So: lsblk to list the drives with model/serial, then attach them to the Unraid VM's SCSI controller. In my case, these are the drives I'd potentially pass to the Unraid VM's VirtIO SCSI controller (a sketch of attaching them is at the end of this list):

     ata-SanDisk_SDSSDH3_1T02_211135801590
     ata-WDC_WD80EFBX-68AZZN0_VGKJ451G
     ata-WDC_WD80EFBX-68AZZN0_VGKJ5H3G
     ata-WDC_WD80EFBX-68AZZN0_VGKJD86G
     ata-WDC_WD80EFBX-68AZZN0_VYGAZANM
     ata-WDC_WD80EFBX-68AZZN0_VYGAZYMM

     For those with some experience virtualizing Unraid, is this a viable solution?

     *added later* I have a new server coming tomorrow. I think I'll grab an Unraid trial license and my cold spare, put some data on it, swap to Proxmox, and try passing that drive directly to the Unraid VM to see if it can 'remember' it. If it does work, I'll drop an update.
  7. It goes through almost all the cores with this:

     CPU: 3 PID: 25051 Comm: find Tainted: P D W O 5.15.46-Unraid #1

     Can't really make sense of it all. If you can decipher it, diagnostics are attached. Thanks, folks. Going to reboot before it fills up and crashes. tower-diagnostics-20220918-1735.zip
  8. Hey all, I've been looking up and down the forum for solutions to the permissions issues with containers. Does anyone know if this is on the list of future fixes, or is it a change in the way dockers will need permissions assigned going forward? I briefly updated from 6.9.2 to 6.10.1, was also hit with permissions issues, and rolled back. Has there been a consensus on the proper way to give them the 'correct' permissions? I've seen many posts suggesting chmods and changing owners.
  9. Having similar issues, but no 404s. For me, it seems that when there are multiple tabs open and one stays out of focus too long, the webUI stops loading. Like you said above, restarting the browser brings it back.
  10. Gah, sorry it took so long to get back to you; things just got crazier. Shucked Seagates are the bane of my existence. Disregard this whole thing: I'm in the middle of copying the remaining files off disk by disk to new WD NAS drives and replacing the entire array on 6.10. Only ended up losing a few TB, all of which are 100% replaceable. Thank you SO much for all your efforts here. You guys are really the stars of the show.
  11. Just as I rebooted. I think I just did too many things at once, all willy-nilly.
  12. I got ahead of myself, wasn't grasping what I should be doing, and went ahead with New Config. It wanted to erase the parity disk, which didn't seem right, so I rebooted. I had a flash backup with the original config and loaded that. The disk I want to replace is 'unmountable' at the moment, which isn't really an issue because it's going to be replaced and the recent parity check was good. I have the new disk ready to go, listed in Unassigned Devices. At the moment it's going through a parity check. Can I cancel that check? And just so I don't mess up again:

      1. Stop the array
      2. De-assign the disk I want to remove
      3. Assign the replacement disk to the same slot
      4. Start the array

      Correct, or am I messing up? Thanks a ton, folks!
  13. @SmartPhoneLover Hey, fix for ffmpeg and other installables: add Debian (or whatever user you chose) to the sudo group and restart. su to get to root (password is Docker! by default), run usermod -a -G sudo Debian (or whatever username), restart the container, and you can install stuff now.

      The steps, from logging in and opening a terminal:

      su                          (root password: Docker!)
      usermod -a -G sudo Debian   (or whatever username)

      PS: I'm not sure if you set the Debian user password at the start, but I changed mine right away after switching to root:

      passwd Debian               (then enter a new password)

      (A host-side version is sketched after this list.)
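Re item 3, a rough sketch of what poking around from the container's point of view could look like. The container name binhex-sonarr is just a placeholder, and this assumes the container image ships sh and du; note that any /mnt/user mappings visible inside are still the array/cache, so they will dominate the totals.

    # From the Unraid terminal: rough image/container/volume usage from the host side
    docker system df

    # Find the exact container name, then open a shell inside it
    docker ps --format '{{.Names}}'
    docker exec -it binhex-sonarr sh        # "binhex-sonarr" is a placeholder

    # Inside the container: size up what *it* sees
    du -sh /config
    du -sh /* 2>/dev/null | sort -h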
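Re items 5 and 6, a rough sketch of attaching those by-id disks to the Unraid VM with scsiblock=1, following the qm set line from item 5. The VM ID 100 and the scsi slot numbers are placeholders; double-check each /dev/disk/by-id name against the right physical drive before running anything.

    # Confirm which by-id names map to which drives (skip the -partN entries)
    ls -l /dev/disk/by-id/ | grep -v part

    # Attach each drive to the Unraid VM; VMID and starting slot are placeholders
    VMID=100
    slot=1
    for id in \
        ata-SanDisk_SDSSDH3_1T02_211135801590 \
        ata-WDC_WD80EFBX-68AZZN0_VGKJ451G \
        ata-WDC_WD80EFBX-68AZZN0_VGKJ5H3G \
        ata-WDC_WD80EFBX-68AZZN0_VGKJD86G \
        ata-WDC_WD80EFBX-68AZZN0_VYGAZANM \
        ata-WDC_WD80EFBX-68AZZN0_VYGAZYMM
    do
        qm set "$VMID" -scsi"$slot" /dev/disk/by-id/"$id",scsiblock=1
        slot=$((slot + 1))
    done

As in item 5, scsiblock=1 is what makes the drive show up with its real identity instead of as a generic QEMU hard disk, with the caveat about possible I/O errors mentioned there.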
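Re item 13, the same steps can also be run from the Unraid host in one go with docker exec instead of opening the container's console. The container name debian-apt is only a placeholder, and this assumes the image still uses the default Debian user from the post.

    # Run as root inside the container; "debian-apt" is a placeholder name
    docker exec -u root -it debian-apt usermod -a -G sudo Debian

    # Optionally set a password for the Debian user while you're at it
    docker exec -u root -it debian-apt passwd Debian

    # Restart the container so the new group membership takes effect
    docker restart debian-apt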