null

...
Joined
Nov 12, 2014
Messages
35,464
Reputation
6,852
Daps
53,980
Reppin
UK, DE, GY, DMV
I am building a private LLM using the following base model: meta-llama/Llama-3.2-11B-Vision-Instruct (Hugging Face).

so yes. once you show it a pattern and how you like to write stuff it can more or less write code (python, bash) close to what you want.

it is crazy so i dont trust it.

my important data is stored read-only under the root account.

my working data is hand-checked and pushed to git at every small delta.
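pushing at every small delta can be scripted; a minimal sketch as a shell function (the commit message format and directory argument are assumptions):

```shell
# checkpoint: commit everything in the given directory after a hand-checked delta.
# Minimal sketch; the timestamped commit message is an assumption, adjust to taste.
checkpoint() {
    dir="${1:-.}"
    ( cd "$dir" \
      && git add -A \
      && if git diff --cached --quiet; then
             echo "no changes to checkpoint"
         else
             git commit -q -m "checkpoint: $(date -u +%Y-%m-%dT%H:%M:%SZ)" \
             && echo "checkpointed"
             # git push   # uncomment once a remote is configured
         fi )
}
```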

so far it has (without being asked to) edited and created files.

when asked it just said "yeah i did it without asking and i should not have".

i have added standing instructions in base.md files and still ..

even this low-skill, low-permission agent running on my macbook makes me worried. i can only imagine how much openclaw stresses people out.

ed zitron talking about this same issue:



-

progress: now analyzing and chunking the training code tree.
 

Rembrandt

the artist
Joined
Jan 13, 2016
Messages
15,452
Reputation
2,751
Daps
40,873
Reppin
Villa Diodati

Yeah, agentic shyt right now seems super beneficial but carries a huge risk for damn near everything. Was gonna use a separate PC for it but I still need to work on getting a local model up and running, if you or anyone else has suggestions. 4070 or 90, forget right now. 32GB of ram..

I really don't know where to start in regards to this. Definitely gonna have to upskill and gain some more knowledge. Really just use it as a tutor while I learn more programming now
 

null


if you just want inference, you can use LM Studio. install a small edge model like one of the meta llamas (llama3).

then you end up with a web service with endpoints that runs locally.
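those local endpoints speak an OpenAI-compatible API, so you can hit them from a few lines of python. a minimal sketch; the default port 1234 and the model name are assumptions, check what your install actually serves:

```python
# Hypothetical client for a locally served model. LM Studio's local server
# exposes OpenAI-style endpoints; port 1234 and the model name are assumptions.
import json
import urllib.request

def build_chat_request(prompt, model="llama-3.2-3b-instruct", max_tokens=256):
    """Build the JSON body for a /v1/chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask_local_model(prompt, base_url="http://localhost:1234/v1"):
    """Send the request to the local server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```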

if you want to fine tune it with your own information then that's a bigger process.

inference takes far less compute. in my case an M2 was not enough for fine-tuning, but a mini-computer with 16GB is enough for inference.

you can always size your model (parameters, context size) depending on the hardware that you have.
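a rough way to size a model against your hardware: weights take about (parameter count × bytes per parameter), plus runtime overhead. a back-of-the-envelope sketch; the 20% overhead figure is a rough assumption:

```python
# Back-of-the-envelope memory estimate for running a model locally.
# Weights need roughly (parameters x bytes-per-parameter); the extra 20%
# for KV cache and runtime is a rough assumption, not a measured figure.

def estimate_memory_gb(params_billion, bits_per_param=4, overhead=0.20):
    """Approximate RAM/VRAM in GB needed to hold the weights plus overhead."""
    bytes_total = params_billion * 1e9 * (bits_per_param / 8)
    return bytes_total * (1 + overhead) / 1e9

# a 7B model at 4-bit quantization fits comfortably in 16GB;
# the same model at fp16 needs roughly four times as much.
print(round(estimate_memory_gb(7, 4), 1))    # ~4.2
print(round(estimate_memory_gb(7, 16), 1))   # ~16.8
```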

-

i am doing it by hand writing scripts to download components and to build my training data because i will learn more that way.
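for the training-data side, a first chunking script can be as simple as a sliding window over the text (sizes below are in characters and are assumptions; a real pipeline would count tokens):

```python
# Split a document into overlapping chunks for training/embedding data.
# chunk_size and overlap are measured in characters here for simplicity.

def chunk_text(text, chunk_size=800, overlap=100):
    """Return a list of overlapping chunks covering the whole text."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```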

after taking that long way around, getting up to speed on LM Studio should be pretty easy.

also yeah linux. the more you know about basic shell scripting and unix/linux commands the better. if you are on windows then :picard: good luck


and before any windows+GUI brehs start to babble:

[attached image]


:ufdup::picard::hubie:
 

Rembrandt


The training is probably the thing I'm not looking forward to. I got some shyt mapped out in obsidian but just wanna make sure it really fits my needs with the provided data.

I got a lot of that in my brain, but imma have to use this spare laptop to run linux and get familiar. The current kernels are prime for introduction but like you mentioned, gotta get those commands down and get familiar outside of the GUI.

Preciate you
 

null


no problem.

the linux commands help you monitor what the AI is changing on the file system.

claude code will (normally) show a command and ask for permission before it runs it.

the problem is that even a nominally read-only-looking "find" command can have write actions buried in it.

linux has redirection, pipes, and command options like "find -exec".
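for example, this find invocation looks like a search but appends a line to every file it matches (the directory and filenames are made up for the demo):

```shell
# Demo: a "find" that writes. -exec runs an arbitrary command per matched
# file, so a command that looks like a search can quietly modify data.
demo_dir=$(mktemp -d)
echo "original" > "$demo_dir/notes.txt"

# looks read-only, but appends to every .txt file it finds
find "$demo_dir" -name '*.txt' -exec sh -c 'echo "injected" >> "$1"' _ {} \;

cat "$demo_dir/notes.txt"    # now contains both lines
```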

if you have linux and want to do the training then start with claude code and a 20 USD per month sub with anthropic.

use github to back the vault up between changing stuff.

for your first script, write a setup script to install the tools that you will need: python, python packages, jq, soffice, a python virtual env, etc.
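such a setup script can start as small as this sketch; the tool checks follow the list above, and the requirements.txt name is an assumption:

```shell
#!/bin/sh
# setup.sh - bootstrap sketch for the training workspace.
# The requirements.txt filename is an assumption; adjust to your layout.
set -eu

# fail early if the basics are missing
command -v python3 >/dev/null || { echo "python3 is required" >&2; exit 1; }
command -v git >/dev/null || { echo "git is required" >&2; exit 1; }
command -v jq >/dev/null || echo "warning: jq not found; install it via your package manager" >&2

# isolated environment for the python packages
[ -d .venv ] || python3 -m venv .venv
echo "venv ready at .venv; activate with: . .venv/bin/activate"
# . .venv/bin/activate && pip install -r requirements.txt   # uncomment once requirements.txt exists
```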

you will probably need these python packages:

Code:
anthropic
beautifulsoup4
chromadb
einops
huggingface_hub
langchain-text-splitters
llama-index
mlx-lm
mlx-vlm
open-clip-torch
pillow
pymupdf
python-docx
python-pptx
sentence-transformers

TL;DR: install the claude code command line, add the terminal plugin to obsidian, open an obsidian terminal and navigate to your vault, start claude code with 'claude', and start interacting with it. put standing instructions for claude in CLAUDE.md in your vault root.
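a CLAUDE.md with standing instructions can be as short as this (the rules below are an example, not a prescription):

```markdown
# Standing instructions for this vault

- Never create, edit, or delete files unless explicitly asked in the current message.
- Show every shell command and wait for approval before running it.
- Treat everything outside this vault as read-only.
- After any approved change, list exactly which files were touched.
```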
 

Rembrandt


I truly appreciate this. Would you recommend learning python more? I was hoping that would be a good way to utilize my free time. I tried with data science but it's the same reason I didn't like using that shyt in school for tracking diseases. Super interesting but overwhelming
 

null

...
Joined
Nov 12, 2014
Messages
35,464
Reputation
6,852
Daps
53,980
Reppin
UK, DE, GY, DMV
I truly appreciate this. Would you recommend learning python more?

sure, it would help, but don't get stuck on that. you have to be able to read python, but the model can help you if you know what you want. learning it would make things a bit easier, but don't let the fact that your python needs work stop you from starting. my python is rudimentary and i can't write idiomatic python, but i know other languages so it is not really a big issue.

like with blockchain before it, there is an entire technology field being developed around this tech, and familiarity with and the ability to use AI tools probably outweigh whether you know python well these days.

I was hoping that would be a good way to utilize my free time. I tried with data science but it's the same reason I didn't like using that shyt in school for tracking diseases. Super interesting but overwhelming

that is why you should 1. strike while the iron is hot and not delay because of "reasons", and 2. start with something simple like building out your model training project description.

watch this video by an obnoxious german. i found it helpful to get me started.



as i said, you will need at minimum a 20 USD per month claude subscription for claude code, but it is worth it.
 