almulder's Posts
  1. Odd that it shows as USB, yet I have no USB monitor (only HDMI) and only two USB devices plugged in (well, a third passed through to a VM). Guess I will dig deeper.
  2. On 6.9.2 I have a device called lcd_platform that is showing up in my unassigned drives (no clue what it is), and the disk log shows "sd 3:0:0:0: [sdd] Attached SCSI removable disk". Is there a way to force-remove this so it stops showing up in my system? Maybe a user script at array start, or something. Hate seeing it.
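For what it's worth, the kernel exposes a per-device sysfs node that detaches a SCSI device until the next rescan, which a user script at array start could write to. A minimal sketch, assuming the device name sdd from the disk log above (the helper names are mine, and the script needs root):

```python
from pathlib import Path

def scsi_delete_path(dev: str) -> Path:
    """Sysfs node that, when written, tells the kernel to detach the device."""
    return Path("/sys/block") / dev / "device" / "delete"

def detach(dev: str) -> bool:
    """Detach the device if its delete node exists; return True on success."""
    node = scsi_delete_path(dev)
    if node.exists():
        node.write_text("1")  # kernel drops the device until the next rescan/reboot
        return True
    return False

if __name__ == "__main__":
    # "sdd" is the phantom lcd_platform device reported in the disk log.
    detach("sdd")
```

Since the device reappears after a reboot, scheduling this at array start (e.g. via the User Scripts plugin) would hide it persistently.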
  3. Since my last update to the plugin (not sure what version I was on before), I have an unassigned device showing up called LCD_PLATFORM_0000000000001. Not sure where it came from or what it is. Can you add a way to hide unassigned drives so they no longer show in the list? Maybe a "Hidden" toggle at the top by the other toggles to show or hide devices, plus a checkbox on each drive line when Hidden is not active; when Hidden is active, it would hide the checkboxes and any drives that are checked.
  4. Wish we had someone willing to rewrite this code so we could use a hierarchy for prerolls: Default, Monthly, Weekly, Daily, Holidays (each with customs we can add). Default would be used, but Monthly would override it if the month was not empty; Monthly would be used, but Weekly would override it if the week was not empty; Weekly would be used, but Daily would override it if the day was not empty; Daily would be used, but Holidays would override it if the holiday was not empty. For Holidays I would like options for how many days before and how many days after. I currently have mine set up with defaults: if a month has files it uses the month, else the default. I then have holidays, birthdays, etc. added, and if any of them have files, they override the month or default for a set window of days before and after. I am doing this via another Docker container, but I have to manually enter all the info in a config file; it would be nice to have a GUI like this one. (Wish I had the skills to create stuff.)
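The override hierarchy described above is straightforward to express in code. A rough sketch, assuming per-tier preroll paths and a list of holiday dates (the function name, tier keys, and default window sizes are illustrative, not how any existing tool works):

```python
from datetime import date, timedelta

def resolve_preroll(today, tiers, holidays, days_before=3, days_after=1):
    """Pick the preroll path that applies on `today`.

    tiers: dict with optional 'default', 'monthly', 'weekly', 'daily' paths;
           an empty or missing tier falls through to the next one.
    holidays: list of (date, path); a holiday wins inside its window.
    """
    # Holidays override everything within their before/after window.
    for day, path in holidays:
        if day - timedelta(days=days_before) <= today <= day + timedelta(days=days_after):
            return path
    # Otherwise the most specific non-empty tier wins.
    for tier in ("daily", "weekly", "monthly", "default"):
        if tiers.get(tier):
            return tiers[tier]
    return None
```

For example, on 2024-12-24 with a holiday entry for 2024-12-25, the holiday preroll wins even if a monthly preroll is set, because the 24th falls inside the 3-days-before window.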
  5. FYI, you can have multiple prerolls run, but not two random ones, and not one fixed one plus one random one. (You can have as many sequential prerolls run as you like, just not random ones; the same set plays every time.) If multiple paths separated by commas are entered, the videos will be played sequentially. If multiple paths separated by semicolons are used, a single preroll video will be chosen randomly from the list. Example: /Prerolls/preroll1.mkv,/Prerolls/preroll2.mkv will play preroll1 and then preroll2; /Prerolls/preroll1.mkv;/Prerolls/preroll2.mkv will randomly select preroll1 or preroll2 and play just one of them.
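The comma/semicolon rule above can be expressed as a tiny parser. This is just an illustration of the documented behaviour, not Plex's actual code, and it assumes the two separators are never mixed:

```python
import random

def interpret_prerolls(setting: str) -> list:
    """Return the preroll path(s) that would play for a Plex preroll setting.

    Commas: every listed path plays, in order.
    Semicolons: exactly one path is chosen at random from the list.
    """
    if ";" in setting:
        pool = [p.strip() for p in setting.split(";") if p.strip()]
        return [random.choice(pool)]  # one random pick per playback
    return [p.strip() for p in setting.split(",") if p.strip()]  # sequential
```

So `interpret_prerolls("/Prerolls/preroll1.mkv,/Prerolls/preroll2.mkv")` returns both paths in order, while the semicolon form returns a single randomly chosen path.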
  6. So I have installed this and it seems to work, kinda. For the schedule, it seems like it only uses the currently selected schedule? I put in weekly and monthly entries, and whichever schedule is selected when I press Submit is what appears in Plex. Can we not use all of them? Default for months where there is nothing, then monthly; weekly to override monthly, daily to override weekly, etc. What am I missing?
  7. So I got this installed and running, but there is zero change to my RAM; the amount used never increased, which seems odd, and it was not grabbing recent videos; it seems to have loaded very old videos, added over a year ago. I have 256 GB of RAM with only 12% in use, and after 20 minutes of the script running it still shows 12%. How can I fix this?
  8. So I am hoping someone here is a pro with the NetApp DS2246 or DS4246 disk shelves. I have both a DS2246 and a DS4246, and both have had this issue from day one. After turning a unit on, connecting it to Unraid, and populating it with drives, the amber and green lights for the case and for the populated drives all come on. Amber is supposed to signal an error, but everything seems to function correctly for both cases with both 2.5" and 3.5" drives. Is that normal, or is there something that can be done so the amber light goes away and only comes on when there is an actual error/issue? The DS2246 has both IOM6 modules plugged in and both PSUs installed; the DS4246 has both IOM6 modules plugged in and all 4 PSUs installed. Each shelf has a different ID (DS2246 #1, DS4246 #2), and they both use the first IOM6 connected directly to my LSI controller (8e), so each disk shelf is plugged into the LSI controller via an SFF-8436 to SFF-8088 cable. I have a spare DS4246, also with both IOM6 modules and 4 PSUs, and a spare cable; I swapped it in for the other unit and still have the same issue with the amber lights. Hoping for good news.
  9. I have USB drives that are not appearing. They are not deleted (I checked that first). I have a few NVMe drives connected via USB 3.2, as they are faster than my SSDs and I have no room for them internally. Can you make it so USB drives appear in the list, please?
  10. Started up the Docker container and went to the GUI, and it keeps throwing an error on one of my cache drives and never gets past it; it just locks up there. sdaa and sdab are my cache drives (RAID 1); both are new, less than 1 month old. No errors in Unraid:

      10:46:57 Found drive Samsung SSD 870 EVO Rev: SVT01B6Q Serial: *************** (sdaa), 1 partition
      10:46:57 Found drive Samsung SSD 870 EVO Rev: SVT01B6Q Serial: *************** (sdab), 1 partition

      Lucee Error (application)
      Message: timeout [90000 ms] expired while executing [/sbin/parted -m /dev/sdac unit B print free]
      Stacktrace: The Error Occurred in /var/www/ScanControllers.cfm: line 1714

      1712: <CFIF DriveID NEQ "">
      1713: <!--- Fetch partition information --->
      1714: <cfexecute name="/sbin/parted" arguments="-m /dev/#DriveID# unit B print free" variable="PartInfo" timeout="90" />
      1715: <CFFILE action="write" file="#PersistDir#/parted_#DriveID#.txt" output="#PartInfo#" addnewline="NO" mode="666">
      1716: <CFSET TotalPartitions=0>

      called from /var/www/ScanControllers.cfm: line 1643

      1641: </CFIF>
      1642: </CFLOOP>
      1643: </CFLOOP>
      1644:
      1645: <!--- Admin drive creation --->

      Java Stacktrace: lucee.runtime.exp.ApplicationException: timeout [90000 ms] expired while executing [/sbin/parted -m /dev/sdac unit B print free]
      at lucee.runtime.tag.Execute._execute(
      at lucee.runtime.tag.Execute.doEndTag(
      at scancontrollers_cfm$cf.call_000163(/ScanControllers.cfm:1714)
      at scancontrollers_cfm$
      at lucee.runtime.PageContextImpl._doInclude(
      at lucee.runtime.PageContextImpl._doInclude(
      at lucee.runtime.listener.ClassicAppListener._onRequest(
      at lucee.runtime.listener.MixedAppListener.onRequest(
      at lucee.runtime.PageContextImpl.execute(
      at lucee.runtime.PageContextImpl._execute(
      at lucee.runtime.PageContextImpl.executeCFML(
      at lucee.runtime.engine.Request.exe(
      at lucee.runtime.engine.CFMLEngineImpl._service(
      at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(
      at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(
      at lucee.loader.servlet.CFMLServlet.service(
      at javax.servlet.http.HttpServlet.service(
      at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(
      at org.apache.catalina.core.ApplicationFilterChain.doFilter(
      at org.apache.tomcat.websocket.server.WsFilter.doFilter(
      at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(
      at org.apache.catalina.core.ApplicationFilterChain.doFilter(
      at org.apache.catalina.core.StandardWrapperValve.invoke(
      at org.apache.catalina.core.StandardContextValve.invoke(
      at org.apache.catalina.authenticator.AuthenticatorBase.invoke(
      at org.apache.catalina.core.StandardHostValve.invoke(
      at org.apache.catalina.valves.ErrorReportValve.invoke(
      at org.apache.catalina.valves.AbstractAccessLogValve.invoke(
      at org.apache.catalina.valves.RemoteIpValve.invoke(
      at org.apache.catalina.core.StandardEngineValve.invoke(
      at org.apache.catalina.connector.CoyoteAdapter.service(
      at org.apache.coyote.http11.AbstractHttp11Processor.process(
      at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(
      at$
      at java.util.concurrent.ThreadPoolExecutor.runWorker(
      at java.util.concurrent.ThreadPoolExecutor$
      at org.apache.tomcat.util.threads.TaskThread$

      Timestamp: 12/29/21 10:48:27 AM MST
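For context, the failing step in that trace is simply parted being executed once per device with a 90-second timeout, and a device that never answers (here /dev/sdac) aborts the whole scan. A sketch of how a scanner could tolerate an unresponsive device instead of dying (the function name and skip-on-timeout behaviour are my suggestion, not the tool's actual code):

```python
import subprocess

def partition_info(dev: str, timeout_s: int = 90):
    """Run parted on /dev/<dev> and return its output.

    Returns None if the device hangs past the timeout (or parted is not
    installed), so the caller can skip that device and keep scanning.
    """
    try:
        result = subprocess.run(
            ["/sbin/parted", "-m", f"/dev/{dev}", "unit", "B", "print", "free"],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return None  # unresponsive or missing device: skip it
```

Until the tool handles this, physically checking or disconnecting whatever /dev/sdac is may be the only way past the hang.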
  11. Feature Request: @Squid Now that we have multiple cache pools, I have some Docker appdata stored on other cache drives; I have three in total now. Can we get support for multiple cache pools where Docker appdata is stored on other cache drives? Example: I have all my normal appdata stored on my cache drive, but I also have Plex_AppData stored on a separate cache drive, and Nextcloud stored on yet another cache drive as NextCloud_AppData.

      Also, when backing up Docker appdata, could only the container currently being backed up (or its linked containers, e.g. Nextcloud linked to its database container) be stopped, then started again once its backup finishes, before moving on to the next one? That way only the container being backed up is down, and the others keep running until it is their turn. Then also make it so we can restore one container (or linked set) instead of all or nothing. And if we are stopping and starting containers like this, each individual (or linked) container would be backed up into its own archive, named name_date_time. Some of us have so many containers that one backup can be close to 1 TB, and if we need only one app restored it takes forever, and appdata is only growing. This would keep containers down only briefly and let us restore individual backups.

      I would also like the ability to back up sets of containers on separate schedules. Example: mark containers for daily backups, weekly backups, or monthly backups, plus an option to back up the marked ones now. I don't need all my containers backed up on the same schedule: a few need daily backups, most can be weekly, and a few only monthly. My backups can take 2-3 hours at times, and that's a long time for everything to be down. I know it's asking a lot, but I feel it would be a huge benefit to the community.
  12. Getting this error now. I uninstalled, deleted the appdata folder, and did a fresh install, and I am still getting the error.
  13. Love the UPS option, but is there a way to have it monitor more than one? I have several that I would like it to monitor via SNMP and USB. I would like the ability to select which one(s) Unraid uses for the shutdown feature. Say I have 5 units, but 3 are for Unraid (1 for the server and 1 for each of my 2 disk shelves): I would like to view all 5 units, but tell Unraid to use just those 3 for shutdown, so that if any of the 3 drops to only x minutes left on battery, the server shuts down. It would also be great if we could enter info like the device name, battery model, how many batteries, the date the batteries were installed, and a replace-every-x-years interval. That way, when that date comes around, we get a critical alert that it's time to replace the batteries (email would be even better), including the device name, battery model, and how many are needed; once replaced, we update the battery install date. It would be great if we could even add UPS devices that are not connected via USB or SNMP, so we could keep records of desk UPSes and their battery install dates and such. This would help a ton in managing our battery backups.
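The battery-reminder part of this request is simple date arithmetic. A sketch of the rule (the function names and the Feb-29 fallback choice are mine):

```python
from datetime import date

def battery_due(installed: date, replace_every_years: int) -> date:
    """Date on which the batteries are due for replacement."""
    try:
        return installed.replace(year=installed.year + replace_every_years)
    except ValueError:
        # Installed on Feb 29 and the target year is not a leap year:
        # fall back to Feb 28.
        return installed.replace(year=installed.year + replace_every_years, day=28)

def needs_replacement(installed: date, years: int, today: date) -> bool:
    """True once the replacement date has arrived, i.e. time to alert."""
    return today >= battery_due(installed, years)
```

A monitoring plugin could evaluate `needs_replacement` for each recorded UPS daily and raise the critical alert (or email) when it turns true.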
  14. I can click them into the foreground, yes, but only one at a time, not next to each other. And it's not just that: there are times I need to move a window so I can see the path and such. It would be so much easier to be able to move them. What was the reason for making them not movable?