fluffybutt

Members
  • Posts: 31
  • Joined
  • Last visited


fluffybutt's Achievements: Noob (1/14) · 0 Reputation

  1. Hey all, so I can connect to the dashboard, BUT I'm not able to actually connect via LAN to put any files on the server again. I have tried everything from my last issues and nothing is working. The main change is an added Dream Machine; the Unraid server has a fixed IP and I can see it as a wired connection in the Ubiquiti dashboard. I have also tried various other fixes and ideas via the search function, BUT I'm still sat here totally confused. I just want it to work... lol... sorry. I have attached the diagnostics if anyone can help (basic reachability checks are sketched after this list). I will again be eternally grateful. Thanks in advance... deathstar-diagnostics-20210212-1709.zip
  2. Hang on..... what... it wasn't there when I did the first post... but now it is???? Sorry to have wasted everyone's time... #feeling stupid...
  3. Good evening, I followed the how-to on changing the parity drive for a larger one, and all went well. I wiped the old "parity" drive, then formatted it, but now it has just vanished... Up until then Unraid could see it.. now I'm lost (a couple of quick, read-only checks are sketched after this list)... deathstar-diagnostics-20200831-2215.zip
  4. Really sorry for the lack of detail, I wasn't thinking. OK, the main PC is Win 10... I select files from a drive on my PC and drop the relevant "type" of file into the correct folder on Unraid via LAN... I think I may have changed it to the most-free option to see if it changed how it distributed the files. Any files that I put on the system continue to stay on the same disk, so I'm thinking I have to "physically" put the folder on the other drive to move it? Whilst I have no issue in building things PC-related hardware-wise, sometimes the software manages to screw my head up... Once again sorry for not enough detail in the first reply, lack of thought on my part totally.. Thanks for helping, it is very much appreciated...
  5. Erm, I have various folders on the system, so I just drag and drop normally.. I have not changed any of the system settings that I am aware of.. That was the latest system report..
  6. Good evening, sorry for the delay, I have attached it.. many thanks. deathstar-diagnostics-20200512-2013.zip
  7. Yes, I thought it would automatically split evenly across all available drives..
  8. Everything was downloaded and left at default settings; high-water is on every folder (a rough illustration of how high-water fills drives is sketched after this list)...
  9. Good evening all, I just checked the system and found a notification stating utilisation on one disk was high, and yes it is, though the second drive is empty.. Is there an "easy" way to split the data? I have unBALANCE installed but need to disable stuff to move the data. I thought the Unraid system would automatically split data over drives rather than just fill one up.. Not able to attach the info zip atm as I am out, but will do so if needed. Thanks in advance...
  10. @johnnie.black Finally got round to testing it, and you were 100% correct. Both Trident Z modules FAILED memtest... thank you so much for your help.
  11. Thanks for the replies... update time. I eventually managed to format and mount the NVMe drive, and all was working fine... Then I tried to remove a docker and the system froze up; this also took my network offline. Unplugging the machine from the network and we are back to happiness, except I now have the issue of the server to diagnose.... I am unable to access it currently.. Any ideas as to what to try?? Again, many thanks in advance, much appreciated.
  12. As requested: deathstar-diagnostics-20190401-1531.zip
  13. Just tried to format it; the relevant log lines are below (a quick, read-only signature check is sketched after this list):
      Apr 1 15:18:07 DeathStar kernel: nvme0n1: p1
      Apr 1 15:18:28 DeathStar emhttpd: Samsung_SSD_970_EVO_250GB_S465NB1K501822T (nvme0n1) 512 488397168
      Apr 1 15:18:28 DeathStar emhttpd: import 30 cache device: (nvme0n1) Samsung_SSD_970_EVO_250GB_S465NB1K501822T
      Apr 1 15:18:29 DeathStar emhttpd: shcmd (43): mount -t btrfs -o noatime,nodiratime /dev/nvme0n1p1 /mnt/cache
      Apr 1 15:18:29 DeathStar root: mount: /mnt/cache: wrong fs type, bad option, bad superblock on /dev/nvme0n1p1, missing codepage or helper program, or other error.
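
For the LAN question in post 1, a minimal reachability sketch in bash, assuming file transfers go over SMB and using 192.168.1.10 as a placeholder for the server's fixed IP (substitute the address shown in the Ubiquiti dashboard):

    #!/bin/bash
    # Basic LAN checks toward the Unraid box; the IP below is a placeholder.
    SERVER=192.168.1.10

    ping -c 3 "$SERVER"        # is the box reachable at all?
    nc -zv "$SERVER" 445       # SMB (Windows file sharing) port
    nc -zv "$SERVER" 80        # web GUI port, already known to work here

From the Windows 10 client, the quick equivalents are "ping 192.168.1.10" and "Test-NetConnection 192.168.1.10 -Port 445" in PowerShell, or opening \\192.168.1.10 in Explorer.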
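
For the vanished ex-parity drive in post 3, a read-only sketch to confirm whether the disk is still detected by the kernel (run from the Unraid console or over SSH); nothing here writes to any device:

    #!/bin/bash
    # List what the kernel can currently see.
    lsblk -o NAME,SIZE,MODEL,SERIAL                    # every block device and its identity
    fdisk -l 2>/dev/null | grep '^Disk /dev/'          # one summary line per disk
    dmesg | grep -iE 'ata[0-9]|sd[a-z]' | tail -n 40   # recent kernel messages about drives

If the drive does not appear in any of this output, the kernel itself is not seeing it, which points at the connection or the drive rather than anything Unraid did during the wipe.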
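
For the "why is everything on one disk" question in posts 8 and 9, a rough bash sketch of how a high-water style allocator behaves. This is only an illustration of the general idea, not Unraid's actual code, and the disk sizes are made-up numbers:

    #!/bin/bash
    # Two example data disks; sizes and free space in GB are made-up values.
    size=(8000 8000)
    free=(7500 8000)            # disk 1 already holds ~500 GB, disk 2 is empty

    largest=${size[0]}
    for s in "${size[@]}"; do (( s > largest )) && largest=$s; done
    mark=$(( largest / 2 ))     # high-water mark starts at half the largest disk

    # Take the first disk whose free space is still above the mark;
    # if none qualifies, halve the mark and look again.
    pick=""
    while [ -z "$pick" ]; do
      for i in "${!free[@]}"; do
        if (( free[i] > mark )); then pick=$(( i + 1 )); break; fi
      done
      [ -z "$pick" ] && mark=$(( mark / 2 ))
    done

    echo "next write goes to disk $pick (high-water mark: ${mark} GB)"

With these numbers, disk 1 keeps taking new files until its free space drops below 4000 GB, and only then does disk 2 start to fill, so an empty second disk early on is expected behaviour rather than a fault.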
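
For the mount failure in post 13 ("wrong fs type, bad option, bad superblock" on /dev/nvme0n1p1), a read-only sketch that only inspects the partition's signature and erases nothing:

    #!/bin/bash
    # Inspect the cache partition named in the error message above.
    blkid /dev/nvme0n1p1             # prints the filesystem signature, if one exists
    wipefs --no-act /dev/nvme0n1p1   # lists on-disk signatures without touching them

A "bad superblock" from a btrfs mount usually means the partition does not carry a valid btrfs filesystem (or it is damaged); creating one is typically done through Unraid's web GUI format option for an unmountable cache device rather than by hand.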