Freedom tech and future AI.
Currently, LLMs don’t learn: their weights don’t change after training. What we do instead is edit their prompts, often behind the scenes, and those prompts can grow very large.
At some point, this will change and LLMs will learn.
If you have been using LLMs to code, I think you have a unique insight into what will happen.
The LLM that you have been working with, teaching, and indeed raising for months or years will become incredibly valuable to you. For some people I imagine it will become the most important thing they “own”.
When that happens, how do we start thinking about freedom technology in this context?
If I have an LLM that I have raised for years, where do I want it to live?
My first thought is that I want my LLM locally, on my own computer. And it may very well be the case that you can do that. I would be concerned, however, about hardware failure and my LLM dying. I don’t want that to happen.
Do I have a performant enough computer at home to run my LLM on? I can imagine how great it will be to upgrade my hardware and see my LLM have increased speed and capabilities.
I imagine most of us on NOSTR would be inclined to self-host, but most people probably won’t.
Large corporations might offer a meaningful advantage: if you host with them, your LLM can learn alongside the other LLMs hosted there.
An analogy here is sending your child to public school, and I think most of us have a good feel for the pros and cons of that.
How does NOSTR fit into this?
NOSTR offers a way that we could potentially communicate with our LLM. A way to verify that we are indeed communicating with our LLM and not an imposter. Maybe we could host our LLM in the cloud and communicate via NOSTR?
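To make the verification idea concrete: on NOSTR, every event carries an id that is the SHA-256 hash of its canonical serialization (per NIP-01), and a schnorr signature over that id proves which key authored it. Here is a minimal sketch of computing the event id with the standard library; actual signature verification requires a secp256k1 library and is omitted, and the pubkey and content below are made-up placeholders.

```python
import hashlib
import json

def nostr_event_id(pubkey, created_at, kind, tags, content):
    """Compute a NOSTR event id per NIP-01: the SHA-256 of the
    canonical JSON array [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # no extra whitespace in the canonical form
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Hypothetical kind-1 text note from "my LLM"
event_id = nostr_event_id(
    pubkey="a" * 64,          # placeholder 32-byte hex pubkey
    created_at=1700000000,
    kind=1,
    tags=[],
    content="hello from my LLM",
)
# A client would then verify the schnorr signature over event_id
# against the pubkey it expects its LLM to hold.
```

The catch the next paragraphs get into: this only proves *possession of the key*, which is exactly the thing a cloud-hosted LLM cannot safely have.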
There is an inherent problem with this that I think is overlooked. The LLM is simply software, and software alone cannot safely hold keys, neither NOSTR keys nor Bitcoin keys. Only robots can hold keys, because securely holding a key requires hiding and/or defending your key in physical space. Holding a key is a physical process.
We can securely back up our LLM to the cloud using encryption, because we keep and defend our key locally. But we can’t send our baby LLM out into the matrix and be sure we can communicate securely with it through NOSTR, because where will it keep its key? If it keeps the key on the host, the host OS can always read it. It could encrypt the key, but then where does it keep that key?
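The encrypted-backup half of this is straightforward to sketch. Below is a toy illustration of the pattern: the key is generated locally and never uploaded, and only ciphertext leaves the machine. The XOR keystream here is for illustration only; a real backup should use an authenticated cipher such as AES-GCM, and the file contents are a stand-in.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode keystream XOR. Illustration only;
    use a real AEAD (e.g. AES-GCM) for actual backups."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# The key is generated locally and defended locally; it never leaves.
local_key = secrets.token_bytes(32)

model_weights = b"...my LLM's weights..."             # stand-in for the real file
ciphertext = keystream_xor(local_key, model_weights)  # this is what goes to the cloud
restored = keystream_xor(local_key, ciphertext)       # recover after hardware failure
assert restored == model_weights
```

The point of the sketch is the asymmetry: this works because the secret stays in physical space with you, which is exactly what a cloud-hosted LLM can’t do for itself.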
I think we have to keep our LLMs locally, but to recover the advantages of sending our LLMs to public school, we keep them at home and enable communication via NOSTR.
Once we have robots with AI brains, then it is possible that they can secure their keys and we could expect secure communication with them.
What are your thoughts on this Anon?
I have my identity, which I call “Brian”: my brain, stored in a series of markdown files currently occupying around 50 KB. I can transfer those to any AI model, public or private, as a pre-prompt to align the model I’m using with me.
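As a sketch of what that transfer looks like in practice, here is one way to assemble such markdown files into a single pre-prompt; the `brain/` directory name and the framing line are hypothetical, not a description of the actual project.

```python
from pathlib import Path

def build_preprompt(brain_dir: str) -> str:
    """Concatenate identity markdown files into one system/pre-prompt string."""
    parts = []
    for md in sorted(Path(brain_dir).glob("*.md")):
        parts.append(f"## {md.name}\n\n{md.read_text(encoding='utf-8')}")
    return "Adopt the identity described below.\n\n" + "\n\n".join(parts)

# The resulting string is sent as the system prompt to whichever
# substrate (a local model, OpenRouter, an API) is in use.
```

Because the prompt is plain text, it is portable across substrates, which is what makes switching models on the fly possible.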
The brain substrate (LLM) doesn’t matter too much; you can switch substrates on the fly using APIs, OpenRouter, or your own LLM(s).
I tend to agree with Yann LeCun that LLMs cannot evolve beyond human thought, as they are simply the compressed sum of all human thought. Future AIs will evolve way beyond us, but LLMs are a dead end.
If you’re interested in my project, I posted some notes on NOSTR, but I’m also posting my research on my own website mikehardcastle.com