piyper

Everything posted by piyper

  1. I am working on a utility that connects to multiple databases. Could you please add some database driver extensions? pdo_odbc.so and odbc.so. If there are specific ones for MS SQL Server or Oracle, I wouldn't mind those, but I think ODBC would cover them. Thank you!
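     As a quick way to see what is already in the container, something like this should work (the container name "php8" is only a placeholder, use whatever yours is called):

        # List the PHP modules loaded inside the container and look for ODBC support
        docker exec php8 php -m | grep -i odbc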
  2. Assuming your network type on the Docker is bridged, you probably want to stay away from using 80 or 443 to avoid any conflicts with Unraid itself. You could use 443 and 80 if you like, but you should then set up the Docker container to have a different IP address than your Unraid server. To do this, you have to create a custom Docker network, choose that as your network type, and then assign a new IP address. Make sure it is not within the range of your DHCP server if you are using an IP within the same subnet. You would use the "docker network create SOMENAME" command to create a custom network; then you can specify a different IP in your container when you select that network type. Steps:
     1) Check that in your Docker settings (Settings -> Docker -> Advanced View) you have "Preserve user defined networks" set to Yes.
     2) Create the custom Docker network.
     3) Select that custom network in your container settings.
     4) Assign an IP that will not conflict with your DHCP range.
     5) Set the container ports to 80 and 443.
     Your web app should now respond on the browser default ports at that new IP address. A rough command-line version is sketched below.
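     As a rough command-line sketch (the network name, parent interface, subnet, and IP below are placeholders; a macvlan network on br0 is one common way to do this on Unraid, but adjust to your LAN):

        # Create a custom network so the container can get its own LAN address
        docker network create -d macvlan \
          --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
          -o parent=br0 mynet

        # Run the web container on that network with a fixed IP outside the DHCP range;
        # whatever ports the app listens on (80/443 here) are then reachable on that IP
        docker run -d --name webapp --network mynet --ip 192.168.1.240 your-web-image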
  3. Just a suggestion on the VM selection: I pulled this from my script if you are interested; it just makes the selection easier. menu.txt
  4. Thank you for this! I started writing my own but then came across this. I made a full copy, except at the end I got errors:
     error: Failed to start domain 'Win10Base-3'
     error: operation failed: unable to find any master var store for loader: /usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd
     To fix it, all I did was "edit" the VM, change anything, and update; then it runs fine. I tried it again to compare the before and after of the XML, and here are the differences:
     non-working version: <nvram>/var/lib/libvirt/qemu/nvram/Win10Base-3-3_VARS.fd</nvram>
     working version: <nvram>/etc/libvirt/qemu/nvram/eaf488d4-a3b9-4fdd-bd9c-fedd3ac4dae6_VARS-pure-efi.fd</nvram>
     I have not rebooted Unraid to make sure everything got saved to the right places in the config; I will let you know if there are any issues after a reboot. Great script; except for that one glitch, it works awesome. Thank you
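     If anyone would rather apply the same fix from the command line instead of re-saving the VM in the GUI, something along these lines should do it (the domain name is from my setup; the exact VARS path will differ on yours):

        # Open the domain definition and point <nvram> at the pure-efi VARS file,
        # the same change the GUI edit/update made for me
        virsh edit Win10Base-3

        # Then try starting it again
        virsh start Win10Base-3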
  5. Hi, thank you for the container; it's nice to see one that has PHP 8. I am writing something in PHP and wanted to fork some child processes for certain HTTP requests using pcntl_fork; however, it does not look like it is enabled in your container. Would you happen to have a version where pcntl is enabled, or could you enable it and include the module? Thank you, Piyper
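     For what it's worth, this is the quick check I used to conclude pcntl isn't there (the container name is a placeholder for whatever yours is called):

        # Ask PHP inside the container whether pcntl_fork is available
        docker exec my-php8-container php -r 'var_dump(function_exists("pcntl_fork"));'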
  6. Thank you all for the replies. I went through a few videos by Spaceinvader One, very helpful; I should have checked his channel first anyway, not sure why I didn't earlier. I have been using Unraid for a long time, but have not had to do something like this for a while (I think I was using "unMenu") and I don't think this type of conversion was available then; things have changed a bit. Interestingly enough, I can't remember the last time I had to go into "maintenance mode", so I didn't remember anything you could do in there. I guess it goes to show you how stable unRaid is. Thank you all for your help. Piyper
  7. I guess I was thinking that you could not format a drive while it was part of the array; you had to take it out of the array, and once you did that, the parity would have to be rebuilt. I didn't know you could reformat a disk while it is still part of the array. I figured that even an empty drive would "compare" differently with two different formats, therefore as soon as you format the disk and put it back into the array, the parity (on both parity drives) would not match the data on the drives anymore. I guess I am making an assumption about how parity is stored, and that if just one bit is different (say from a different format), that part of the parity would not be correct anymore and Unraid would just force a parity rebuild. I apologize if I am missing something, and I do thank you for your patience and help
  8. I have another question then. If a parity rebuild would rebuild the drive with the original format, then I assume that once I empty a drive, reformat it, and bring the array back up with the new format, both parity drives will be invalid and a parity check/rebuild of the parity drives will be forced. This would mean I would have no parity protection for a while. Does this sound correct? Piyper
  9. I didn't know that; I figured the "data" was paritied (parityd, hmm, parity'd) and not the entire drive layout. Thank you for the suggestion; unfortunately, my two parity drives are more than twice the size of the biggest data drive (planning for future upgrades), so that would force me to upgrade at least one of the data drives now, and Santa didn't bring me any new drives. I have another Unraid server I use for backing up my important stuff from my main server; there is enough space on that to move data around, it will just be slow over the network. Thank you again for the help
  10. Hey there, fellow unRaiders. I have two parity drives and a bunch of data drives; however, three data drives are still ReiserFS and I wanted to make sure they are ready for 2025. I am a little tight on space ATM and don't really have the space to move the data off them. I think what I have in mind should work, but thought I would ask here. I was thinking I should be able to just stop the array, reformat one of the data drives, and then start the array again; the parity should rebuild that one drive. Once done, rinse and repeat for the other two. Any issues with that strategy? Thank you
  11. Hi there, really nice addition. I have a feature request if you have some time. I looked to see if anyone has asked for this already; sorry if this is a repeat. I have some containers that I do not autostart, mostly development environments for different projects. I have grouped these environments into different folders and start/stop them manually when I need them. When I manually start them, it would be nice if they started in a certain order and we could put a delay between them. For instance, I would like my database container to fire up first and wait about 15/20 seconds before the application server starts. I have more complicated scenarios, but you get the picture. It would be really cool if I could add a pre/post script for each container where I could interrogate the environment to see if it is running or not before the next one was started, but that would just be a cool addition. Right now I use my own scripts to start things in the right order (roughly like the sketch below), but it would be nice if it was built into your start option. Thank you again for the great utility; keep up the great work
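     Roughly what my manual start script does today, in case it helps explain the request (the container names are just examples from my setup):

        #!/bin/bash
        # Start the database first and give it a head start
        docker start dev-postgres
        sleep 20

        # The "pre/post script" idea: only continue if the database is actually running
        if [ "$(docker inspect -f '{{.State.Running}}' dev-postgres)" = "true" ]; then
          docker start dev-appserver
        fi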
  12. I am in favour of this as well. I would not even worry about what is on the rest of the parity drive; leave it in whatever state it is in, you don't even really have to clear it. Simply stop at the size of the largest data drive; then, when/if a new bigger drive gets added to the array and increases the top level, simply force a parity check on the remaining portion, or do a full parity rebuild if the other is not possible (can't see why it would not be possible). In my case, my largest data drive is 6TB (soon to be 8TB), but my two parity drives are 14TB (as I am sure others do, I buy when I get a good deal), so the parity check takes much longer than it would if it stopped at the max data drive capacity. I hope most people do a parity check on a schedule (let's say monthly), and in my case the check takes over twice as long as it would if it stopped at the largest data drive. I would rather do an incremental rebuild of the added top capacity every once in a while when adding a new drive than have my checks needlessly (IMO) run for days every month. A switch might be nice to support those people that add drives frequently; however, if we can get incremental parity rebuild working, even that would not be that big of a deal (IMO). Thank you
  13. I tried that and it did not work; however, I did get it to work another way. I tried reverting back to a prior version, then I removed it and all evidence of the container to force a new install/config, and that still did not work. I then removed it completely again, upgraded my Unraid from 6.9.0-beta35 to 6.11.2, then reinstalled the container, and like magic it worked with no issues with the exact same configuration I had before (not a restore, I manually configured the new instance the same as the old). All seems to be working again, and the only thing I did to make it work was to upgrade the Unraid OS. After looking at the contents of the many tars I made, I did notice that the ownership of some of the files was "root" and not "nobody", and "nobody" might not have had permission to some files. I know I messed around with PUID/PGID a bit on some of those and that might have messed up the permissions; however, even very old backups of the image show that "nobody" may not have had the correct permissions. Maybe that is why some people suggest running it as root to fix the issue (which did not work for me). Not sure if that means anything, but I thought I would mention it. Piyper
  14. Hi guys, my Deluge stopped working as well after the Docker container was updated, and now it is stuck on [info] Waiting for Deluge process to start listening on port 58846... I have tried a few things I found online, like having it run as root and switching the network to host rather than bridge. Here is my log (binhex ASCII banner omitted):
     https://hub.docker.com/u/binhex/
     2022-11-06 21:43:36.647325 [info] Host is running unRAID
     2022-11-06 21:43:36.693266 [info] System information Linux dlvm1 5.8.18-Unraid #1 SMP Mon Nov 2 07:15:12 PST 2020 x86_64 GNU/Linux
     2022-11-06 21:43:36.731062 [info] OS_ARCH defined as 'x86-64'
     2022-11-06 21:43:36.763524 [info] PUID defined as '0'
     2022-11-06 21:43:37.153398 [info] PGID defined as '0'
     2022-11-06 21:43:37.578329 [info] UMASK defined as '000'
     2022-11-06 21:43:37.627376 [info] Permissions already set for '/config'
     2022-11-06 21:43:37.684705 [info] Deleting files in /tmp (non recursive)...
     2022-11-06 21:43:37.740598 [info] DELUGE_DAEMON_LOG_LEVEL defined as 'info'
     2022-11-06 21:43:37.789816 [info] DELUGE_WEB_LOG_LEVEL defined as 'info'
     2022-11-06 21:43:37.843893 [info] Starting Supervisor...
     2022-11-06 21:43:38,601 INFO Included extra file "/etc/supervisor/conf.d/deluge.conf" during parsing
     2022-11-06 21:43:38,601 INFO Set uid to user 0 succeeded
     2022-11-06 21:43:38,606 INFO supervisord started with pid 6
     2022-11-06 21:43:39,609 INFO spawned: 'deluge-script' with pid 74
     2022-11-06 21:43:39,610 INFO reaped unknown pid 7 (exit status 0)
     2022-11-06 21:43:39,618 DEBG 'deluge-script' stdout output: [info] Deluge config file already exists, skipping copy [info] Attempting to start Deluge... [info] Removing deluge pid file (if it exists)...
     2022-11-06 21:43:39,619 INFO success: deluge-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
     2022-11-06 21:43:40,777 DEBG 'deluge-script' stdout output: [info] Deluge process started [info] Waiting for Deluge process to start listening on port 58846...
     Any suggestions would be appreciated. Piyper
  15. I know it has been a while since your post, but I would be interested in these plans. Let me know.
  16. I just edited the script rc.docker. I didn't add to the $DOCKER_OPTS variable, but I did change the line that starts docker to hardcode the -H parameters, like this:
     nohup $UNSHARE --propagation slave -- $DOCKER -p $DOCKER_PIDFILE $DOCKER_OPTS -H unix:///var/run/docker.sock -H tcp://IP_ADDRESS:PORT_NUMBER>>$DOCKER_LOG 2>&1 &
     IP_ADDRESS being the IP of the Unraid server. I changed this on all of my Unraid servers and they all listen on PORT_NUMBER. This works great, and my code can now access and manage all the Docker hosts and the containers within. I was thinking of changing the script to read "DOCKER_OPTS" from the cfg file, but never got around to it. It would be extremely helpful if the "DOCKER_OPTS" setting was supported by default.
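     For anyone trying this, once the daemon is also listening on TCP you can point a remote client straight at it (IP_ADDRESS/PORT_NUMBER as above; note the TCP socket is unauthenticated, so only expose it on a trusted LAN):

        # Talk to the remote Unraid Docker daemon over TCP instead of the local socket
        docker -H tcp://IP_ADDRESS:PORT_NUMBER ps

        # Or set it for the whole shell session
        export DOCKER_HOST=tcp://IP_ADDRESS:PORT_NUMBER
        docker ps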
  17. Sorry gerard6110, I didn't mean to hijack the thread. Thank you itimpi for the post; I was off in the wrong direction and you straightened me out :)
  18. I would like to run Unraid in a VM, but my server doesn't support IOMMU. I read through all this, and from what I gather we still can't do this; is that correct? So the only way to run Unraid in a VM is to use IOMMU?
  19. I like the looks of those; can you provide a link to the enclosures you used?
  20. +1 for multiple arrays. Here is another use case. My issue is not disk space or number of drives. As I am sure many of you have the same setup: you keep all your file shares (documents, media, ...) on the array and use either standalone SSDs or, in my case, RAID1 SSDs for VMs, cache, and Dockers to make them run faster. As for the cache, because it is temporary storage and already supports RAID redundancy, there really isn't any point in "unraiding" that. For VMs and Dockers, however, you want the best performance you can get, and running them on the cache drive (SSD) gives you that, but that setup isn't very flexible.
     I realize that you could run a VM with Unraid for your file shares and keep the main Unraid for only VMs and Dockers using an SSD "Unraid" RAID setup, but then you are dealing with 2x or 3x admin consoles and the extra overhead of the nested VM. This would work, but having everything in one management console would be ideal.
     My ideal setup would be: media and such on the HDDs using "Unraid" RAID #1; cache disks using SSDs and RAID1 (just in case of a failure), but could go RAID0; Docker & VMs stored on SSDs using "Unraid" RAID #2; or even Docker on "Unraid" RAID #2 (using SSDs) and VMs on "Unraid" RAID #3 (using SSDs). This setup would allow you to configure disk arrays of different sizes for different purposes and be protected against disk failures. An added bonus would be if you could configure shares that span across the multiple arrays, but that is for a different thread; there is no point in asking for that if it doesn't support multiple arrays in the first place.
     Now there is another approach: maybe instead of supporting multiple arrays, you go with multiple Unraid servers for different purposes, and the management console itself is changed to enable managing multiple Unraid servers as if they were one server with multiple arrays. That would solve the administrivia nightmare of having multiple Unraid servers (VM'd or not). This would also be good for backup servers (which I have). Administrating multiple Unraid servers from one console would solve a bunch of other issues, but this post is getting too long as it is.
  21. Hi guys, thank you in advance for your thoughts! I have an Ubuntu KVM VM running on Unraid; it in turn is a VirtualBox host, which then has several VMs running on it. This all works fine for the most part; however, I do have a question regarding virtualization of the CPU. BTW, I tried getting VirtualBox running natively on Unraid with little success; I think I read all the posts for this here, plus posts from other sources, and still never got it to work, which is why I have this setup in the first place. If anyone has VirtualBox running on Unraid, please let me know.
     I first started out with "Host Passthrough", which (I would think) exposed the CPU to the Ubuntu guest, which in turn exposed that to the VirtualBox guests. For this test, I had 8 of my 8 cores allocated to the Ubuntu VM and allocated a single core to the VirtualBox guest. I noticed that when I started a process within a VirtualBox Windows guest, it pinned one of the processors on the Unraid box. To me, this behavior makes sense because it is just using the "passed-through" cores as if they were natively owned by the VM, and it is simply using only one core because I only allocated one core to the VirtualBox guest. What would be nice is if, when I allocate 1 core to the VirtualBox guest, it would use 1/8th of the processing power spread across the 8 host cores I have allocated.
     I thought that, in order for this to work that way, I would have to not pass through the CPU and instead virtualize the CPU layer. I changed my Ubuntu VM settings to do this; I allocated 8 cores to it and it is now set to "qemu64":
     <cpu mode='custom' match='exact' check='none'> <model fallback='forbid'>qemu64</model> </cpu>
     For fun I also added <vcpu placement='static'>32</vcpu> to give me 32 virtual cores. So my thinking was that if this is truly virtual, and I allocate just 4 of the 32 cores (1/8th again) to a VirtualBox guest, then when it runs something it would use 4/32 (or 1/8) of the 8 "real" cores and the workload would be spread across the 8 real cores. This does not seem to be what is happening: when I run a process on the VirtualBox guest and watch the usage on the Unraid server, it is still pinning 1 core and all the others seem to be pretty much idle. This could just be how the qemu64 CPU virtualization layer is coded, or maybe I am misunderstanding what CPU virtualization is completely. Any input would be helpful.
  22. Firstly, cool plugin, well done. My apologies if this has already been asked. I did look at the link on the tab and noticed it was using a JS function, so my example below would not work, but you get the idea. It would be really cool if you could set the link target (i.e. <a href="..." target="plex">) so that when you choose the "open in new tab" option, it would reuse the tab/window you already have open from the last time you clicked the link, instead of opening a new window every time you click the link. Thank you in advance for your consideration.
  23. Thank you, that was what I was thinking. I have seen a few people suggest doing a preclear on a new drive; that way you find any errors before attempting to use it. I assume there is nothing wrong with doing a preclear first.