Welcome to the Temple of Zeus's Official Forums!

ToZ-Wide AI Answering System

Monophtalmos · New member · Joined: Nov 14, 2025 · Messages: 3
Hello everyone,

As the title suggests, I want to create an AI system for the forums that uses RAG. But currently I don't have time to work on it.

The AI will function similarly to ChatGPT and answer questions people ask. Its responses will be generated from all the threads written on the forum (and of course other texts from the sites as well). When the AI chooses a response, it will prioritize sermons, then replies from Clergy and Guardians, and then move on to moderators. Since it will also cite which thread or text it is referencing, "made-up information" issues should be greatly reduced.

This AI should be designed with privacy in mind, so it must not save user messages, avoid using tracking cookies, and mask IP addresses or any other identifying information.

As I said, I currently don't have the time to actively work on the AI, so I'm sharing the idea here in case anyone wants to work on it.

If there are people willing to take on the project, I'd be happy to help them with the technical side whenever I have free time.
 
Yes, I wanted to do this as well and am currently learning how to. The best option would be to self-host it, but we'd also need a good computer for that.
I think this is a very good base model: https://huggingface.co/soob3123/Veritas-12B as it already has a ton of ancient philosophical knowledge baked in.
With 4-bit quantization, in theory a Mac Studio with 32-64 GB of unified RAM could run 5-10 instances of this in parallel, plus we can have a wait list.
Maybe an 8B-parameter model would be enough as well if we give it a very sophisticated knowledge base, but it can't be too dumb either. It's a matter of trying and testing.
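As a sanity check on those numbers: the memory needed for a quantized model's weights is roughly parameters × bits ÷ 8. This ignores KV cache and runtime overhead, so real usage per instance is higher, which makes the 5-10-instances figure optimistic but in the right ballpark:

```python
def quantized_weight_gb(params_billion: float, bits: int) -> float:
    """Approximate memory for model weights alone: one parameter stored
    in `bits` bits. KV cache and activations add several GB on top."""
    return params_billion * bits / 8

# A 12B model at 4-bit is about 6 GB of weights, so a 64 GB machine
# fits a handful of instances before overhead is accounted for.
print(quantized_weight_gb(12, 4))   # 6.0
print(quantized_weight_gb(8, 4))    # 4.0
```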
 
I had this idea as well and wrote it to our High Priest and the SGs a few years ago when I first learned what ChatGPT was.

After 15 years of being here, I just cannot answer many of the same basic topics over and over again. I find myself giving answers that are too short and lacking, whereas newer members will give a better reply (regarding cleaning one's aura, etc.). So I realized that having answers generated from all previous replies would be very convenient and would ensure new members get their questions thoroughly answered.
 

Then, God willing, we will do it, High Priestess!
 
Your support and my brothers' and sisters' eagerness to join this project make me happy, High Priestess.

Covering all forum replies would indeed be the better option. However, the training process for the AI will be somewhat demanding.

Since there are many outdated or incorrect replies, we’ll need a clear and reliable framework of accurate information. The AI should also be able to evaluate the accuracy of what it generates before giving an answer.


If you deem it appropriate we can create a group conversation on the forum with those who want to take part in the project and discuss the details there.
 
This sounds great.

The only problem I can see: can it be flooded with junk requests?

For the first version we can simply ban those who misuse it, but I think it's feasible to have an agent that decides whether a prompt is worth sending to the LLM.
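Such a gate doesn't need to be an LLM itself; a cheap heuristic pre-filter could reject obvious abuse before any GPU time is spent. A minimal sketch, assuming a length-and-keyword heuristic (function and term names here are made up for illustration):

```python
def should_forward(prompt: str, banned_terms=("buy now", "http://")) -> bool:
    """Cheap pre-filter: reject prompts that are too short, too long,
    or contain obvious spam markers, before sending them to the LLM."""
    text = prompt.strip().lower()
    if not (10 <= len(text) <= 2000):   # too short to answer, or likely abuse
        return False
    if any(term in text for term in banned_terms):
        return False
    return True
```

A real gate would likely add per-user rate limits and perhaps a small classifier, but even this level of filtering catches the bulk of flooding.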
 
Since we are moderating every reply anyway, we might as well turn moderation into a supervised training process.

The moderator should evaluate whether the answer provided by the community member was helpful. This would generate training data.
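Collecting those moderator judgments could be as simple as appending one JSON line per decision, which later doubles as supervised fine-tuning data. A sketch under that assumption (field names and file path are hypothetical):

```python
import json
import time

def log_moderation(question: str, answer: str, helpful: bool,
                   path: str = "training_data.jsonl") -> None:
    """Append one moderator judgment as a JSON line; the resulting file
    can later serve as supervised fine-tuning / preference data."""
    record = {
        "question": question,
        "answer": answer,
        "label": "helpful" if helpful else "unhelpful",
        "timestamp": int(time.time()),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

JSONL is append-only and trivially streamable, which suits a process where judgments trickle in one moderation action at a time.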
 
This idea requires hefty investments (bare minimum tens of thousands of dollars; specific enterprise processors alone cost thousands to tens of thousands) and/or limitations on usage, and it should be designed in a way that eliminates or greatly mitigates abuse. I am unsure if ToZ has the required resources available yet, but I 100% support the idea. Actually, I am one of those who look forward to having a personal android assistant. We are making great strides toward that at the moment.
 
What is achievable now is RAG based on the books we have in the library, to query all of them at once with a locally running LLM. Mid-range consumer hardware will be fine for this purpose.
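The retrieval half of such a system can be prototyped without any special hardware at all. A toy sketch using simple word-overlap scoring in place of real embeddings (a production version would use an embedding model and a vector store; all names below are illustrative):

```python
import re
from collections import Counter
from math import sqrt

def chunk(text: str, size: int = 400):
    """Split a document into fixed-size character chunks for retrieval."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def bow(text: str) -> Counter:
    """Bag-of-words representation: lowercase word counts."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks, k: int = 3):
    """Return the k chunks most similar to the query; in a real RAG system
    these would be prepended to the LLM prompt as grounding context."""
    q = bow(query)
    return sorted(chunks, key=lambda c: cosine(q, bow(c)), reverse=True)[:k]
```

Swapping `bow`/`cosine` for sentence embeddings is a local change; the chunk-score-select structure stays the same.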
 
I'm super interested in this! Couldn't we create a way for people to donate their hardware to temporarily power this AI? That way we wouldn't need to spend thousands on it right away. My PC is pretty powerful, with almost 100GB.
 
It could be similar to how people mine crypto, except for running the AI instead.
 
Will consumer-grade hardware be able to handle dozens or even hundreds of prompts in a short period of time, if not simultaneously?
 
Simultaneously? Of course not!
Only with proper scheduling and limits.

It might not be wise to use consumer-grade hardware ToZ-wide and expose access to everyone. It will crash and be abused all the time.

However, in the early stages I would recommend giving limited access to the system to those who donated or contributed, since you already provide VIP knowledge to these members.

It might become something that attracts additional financing for ToZ. You would be able to exchange usage tokens for crypto.


Those who provide their hardware should be compensated with tokens that can be exchanged back into crypto.

Interestingly enough, I had a concept in mind of using multiple nodes provided by users, connected into a P2P network, so we don't need any cloud solutions. I saw something like this, but within the TON network.

It appears to me that I might be the one capable of building such a system for ToZ. If my anonymity won't be compromised, my efforts are lightly compensated, and the needed help is provided, I would gladly go above and beyond to deliver the best possible results. I would rather use my talents here.

Ideally we should also have a ToZ freelancing platform with our own currency, where we can manage projects, with some crowdfunding. I am pretty sure a lot of members here would choose to help one another.

A ToZ-wide AI system is an ambitious project. However, it's not too hard to build a self-hosted RAG system that loads PDFs from our library and/or scrapes the website, and that can be used by anyone with mid-range hardware without any limitations.

It's technically possible to run it even on something like a 1080 Ti GPU, but a 3070 or its AMD equivalent should be good enough for an offline home assistant.
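The "scheduling and limits" point above can be sketched as a per-user rate limiter in front of a FIFO request queue; class and parameter names below are hypothetical, not part of any existing system:

```python
import time
from collections import deque, defaultdict

class RateLimitedQueue:
    """Admit at most `limit` requests per user per `window` seconds;
    admitted requests wait in a FIFO queue for a free worker/GPU slot."""
    def __init__(self, limit: int = 3, window: float = 60.0):
        self.limit, self.window = limit, window
        self.history = defaultdict(deque)   # user -> recent request times
        self.queue = deque()

    def submit(self, user, prompt, now=None) -> bool:
        now = time.monotonic() if now is None else now
        h = self.history[user]
        while h and now - h[0] > self.window:   # drop entries outside the window
            h.popleft()
        if len(h) >= self.limit:
            return False                        # over the per-user limit
        h.append(now)
        self.queue.append((user, prompt))
        return True
```

A worker process would then pop from `queue` as GPU capacity frees up, which is what keeps one consumer-grade machine from being crushed by simultaneous requests.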
 

Brother, I'm SOOOO down for this. We should start working out the concept in further detail immediately.
 
Exactly as I thought. Personally, I think that if the hardware is less than Xeon/Threadripper/Epyc level, it gives the wrong impression of our capabilities. This is the Temple of Zeus, not a personal hobby project. I have a Threadripper 3945X and 128GB of DDR4 lying around, and plan to eventually build a Threadripper 5000-series system. That could be a powerful node for the task, but relying solely on that is a bit amateurish in my view.
 
Don't forget about the GPU, the most important part for LLMs. The more VRAM the better. However, even without a GPU it will be useful.

I think we should focus on the scalability of the project rather than on impressing anyone with hardware in the early stages. We must be resourceful and utilize most of what the community can get its hands on. Many of us have older hardware lying around, and giving it a new purpose is a good way to reduce e-waste a little.


We will at some point need people who are familiar with crypto/blockchain technologies and somebody on the front end.

The project will be built around virtualization, so we can easily deploy and scale. Those who have spare computers lying around (even a laptop will do) can start practicing to get comfortable for the future work.

Install Proxmox on your hardware. It will allow you to run multiple virtual systems at once.

You will have the option to install either a full VM or a container (faster, fewer resources, but a shared kernel and potentially less secure).

Proxmox is based on Debian, so you will need to get comfortable with the command line.

Docker will also be used a lot. It can run either inside any VM on Proxmox, or on your main machine.

Proxmox is free and open source, and there are community helper scripts (search for "Proxmox community helper scripts") that can automate most of the installs.

This will basically be your homelab. It's sort of a personalized cloud with its own network.

From there, practice by doing some small DIY projects for yourself, like a personalized media server, or a filter for your home network (you can run OpenWrt as a VM and redirect traffic through it).

The Proxmox community helper scripts should have plenty of options to explore. Imagine this as LEGO: each block is a ready-to-use minimal OS. We will be combining these kinds of blocks a lot during this project. For example, one block will be RAG, another will schedule requests, another will manage nodes, and some others will just run an LLM in a VM with GPU passthrough...

The beauty of it is that these blocks can later be separated and run across different machines, and this is how we scale.

I will be sharing these "blocks", and I encourage others to do the same if you build something that might be useful for the project.

Meanwhile I will occasionally be dropping knowledge for everyone in this thread. One may pick something and lock in. An AI like Grok should help you with basic issues in the early stages, but don't trust it too much.

Let's see how it goes :)
 
We have already looked into this, as HPS Lydia has noted.

The other issue is that AI, as of now, can be browbeaten into eventually agreeing with the user, regardless of how strong the system prompt is. It's just in the training, and the smaller the local model, the worse it is in this regard. In addition, as noted, self-hosting the AI requires extreme hardware purchases. Going the API route for something smarter like ChatGPT requires us to monitor payments, plus all the traffic just ends up in OpenAI's hands anyway.
 

Why can't it be something in between, for now? Launch it as an unofficial prototype project, and if it works it can become more integrated. My own interest here is learning and understanding how to build this thing, not so much turning a profit. If we used tokens to represent hardware allocated to this AI, which makes sense and would perhaps even allow us to go as far as creating an actual cryptocurrency for the ToZ, officially or otherwise, I'd probably be donating it to HOO anyway. My hardware is quite new and I don't overclock, so I don't expect to need replacements for maybe another two years. I already run my rig pretty hard, though, pretty much to its limit most days and nights for another side project.
 

Huh, this is new territory for me. That's good; I didn't want this to be easy. I have another week until school starts, and I reasonably believe learning this will be useful, so I will. Thank you for explaining how to get started with the practical work.

Personally, even if the ToZ never decides to use this, I know that I will. An AI that uses the ToZ as its foundation instead of all the junk on the wider web sounds incredibly useful, like having someone who has read and remembers every single post ever written... I was considering doing this solo since I started, through localization.

I had an AI review your plan, and it checks out. It's complicated, but it sounds solid and is the standard high-end method of doing this. What bothers me is that there are vulnerabilities in this system once it becomes collaborative, from sharing a kernel. For that reason, we can't just allow anybody to participate without some way to verify our work. Also, it sounds like your quality of work is going to be higher than that of less experienced people like myself, which must also be considered.

If we get serious about this, we need someone to step up as project administrator who actually has the skills to prevent people from sabotaging this project (which they have reason to), on purpose or otherwise, and who sees the bigger picture in terms of whether our work actually "works" together as a greater whole. If this were an official ToZ project, that position would be appointed and would likely go to ApolloAbove as head of IT (?), but that clearly isn't the case here. I'm not saying this to nominate myself; I'm very much unqualified for that, lol. Hopefully people aren't blowing hot air about how knowledgeable they are on this?
 
With all due respect, I assume you may have looked into the idea of having an AI system in its traditional sense, not into a community-hosted and community-developed kind of project. Correct me if I am wrong.


For now we can just do a good RAG that will be available for everyone to use personally with mid-range hardware (at home, offline).

The LLM's role would essentially be to summarize the text provided by the RAG. Community members would be able to test different models so we can figure out which of them, if any, can handle this job.
8B models need at least 8GB of VRAM. The NVIDIA RTX 3060 and RX 6700 XT both have 12GB; either is a decent pick.

The RX 6800 has 16GB of VRAM, is good for the job, and can run models up to 14B parameters (or a smaller model with a bigger context window). Used ones can be had for less than $400.


If we decide to use this for ToZ, then this RAG system can complement the current search function, which for now only searches the website, not the contents of the PDF library.
The system could provide additional information in the form of a summary and point to a book/webpage.

In the early stages it will not have chatting capabilities at all, to reduce the load on the hardware, but it will help us decide whether it's worth proceeding further.

From there, user input and responses can be used to produce training data (after manual validation by high-ranking members), so we can have our own LLM if required, which later on could potentially be used on the forums.




If buying a GPU is not an option, consumer-grade GPUs can be rented too, but maybe we can count on the community later on... I am thinking of P2P GPU nodes run by members, and some sort of hub server that pings available nodes and submits queries to them. Basically this would solve the problem of extreme hardware purchases, but the project will grow only if the community is genuinely interested and actively participating.
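The hub-and-nodes idea could start as small as an in-memory registry that tracks which member nodes have recently checked in and picks one for the next query. Everything below is a hypothetical sketch, not a finished protocol:

```python
import time

class NodeHub:
    """Track worker nodes by their last heartbeat and dispatch queries
    round-robin among the ones seen within `timeout` seconds."""
    def __init__(self, timeout: float = 30.0):
        self.timeout = timeout
        self.last_seen = {}        # node_id -> last heartbeat time
        self._rr = 0               # round-robin cursor

    def heartbeat(self, node_id, now=None):
        self.last_seen[node_id] = time.monotonic() if now is None else now

    def alive(self, now=None):
        """Nodes whose last heartbeat is within the timeout window."""
        now = time.monotonic() if now is None else now
        return sorted(n for n, t in self.last_seen.items()
                      if now - t <= self.timeout)

    def pick(self, now=None):
        """Return the next available node, or None if all are stale."""
        nodes = self.alive(now)
        if not nodes:
            return None
        node = nodes[self._rr % len(nodes)]
        self._rr += 1
        return node
```

A real P2P version adds authentication, retries, and result verification, but the core loop, heartbeat in, query out, is this small.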
 

I decided one of the major questions we have to solve is assessing the CPU, GPU and RAM of all devices, so this can be communicated to and recorded by other systems. I wrote this in Python; I assumed that was the intent. I normally work with JavaScript, but in this case Python is too dramatically superior not to use.

At first I created a script that used too many dependencies and collected way too much information about my device. I was able to see absurd things like my IP address, and even that I had an inactive VPN at the time. The version I settled on is much simpler and generates a device ID without collecting hardware serial numbers.

Conceptually, the block I created runs on each device to measure total, available and (later on) consumed resources. Other systems can request a diagnostic report from each device whenever needed. This seems necessary for effective management of resources. People will likely want to record their own copies of these diagnostic reports in more detail than the block currently provides; I know I definitely will.

I'm going to work on another block now, likely for managing multiple VMs based on the collected data.

I'm willing to share the scripts I've already worked on; I'm just not sure how to submit them. Obviously, people should look at what a script does before they run it, even if they have it analyzed by an AI.

So yeah, this was already a lot of fun. Hopefully people are interested in putting actual work into this and don't just like talking about it.
 
Vicky<3 said:
I was able to see absurd things like my IP address and even that I had an inactive VPN at the time.
There is no absurdity in that. Your external IP can be seen by anyone you connect to; that is simply how TCP/IP works. You can check the following out if you like: https://whatismyipaddress.com/

As for AI systems, I would suggest going down a simpler path. If the goal is to help people find information faster, then creating a searchable database would be a much simpler solution than feeding an AI model whatever material is available and allowing it to generate random nonsense, which is definitely a risk with all AI models.
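That "database with search software" approach really is cheap: for example, SQLite's built-in FTS5 full-text index can search thousands of documents instantly on any hardware. The table and sample rows below are made up for illustration:

```python
import sqlite3

# In-memory database for the demo; a real deployment would use a file.
con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
con.executemany("INSERT INTO docs VALUES (?, ?)", [
    ("Aura cleaning basics", "How to clean your aura before meditation."),
    ("Forum rules", "Be respectful and stay on topic."),
])
con.commit()

# MATCH runs a full-text query; bm25() orders results by relevance.
rows = con.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY bm25(docs)",
    ("aura",),
).fetchall()
print(rows)   # only the aura article matches
```

No GPU, no model, and the whole index lives in one file, which is roughly the point being made here.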

I can understand some people preferring to "chat" with an AI to find things, but this can be achieved without any serious hardware at all, if it is really needed.

In any case, I am really glad people are trying to find solutions to make sure the Temple of Zeus is better appreciated and accessed by outside people. The direction this temple is going right now seems absolutely correct, and this really gives a lot of hope. Myself, I must follow a rather different path, so I cannot give all my expertise here, as it would take too much of my time and divert me from the path the Gods have clearly indicated to me. But maybe one day I will be able to come here and direct all my energy toward improving this community. I really want that, but I also know what needs to be done before then, so I am simply following what I must follow. When the time comes, I will also present the path I have followed, as it will be necessary for those who might follow a similar path. Still, I can offer some expertise from my area of knowledge, as I am a professional software engineer with a university education and a large amount of practical experience, in ways that help people direct their effort down better paths.

To summarize: try to understand AI better and don't treat it as a "bulletproof solution". Always focus on the actual goals and use AI as a tool. Basically, use the least effort to achieve the most. There is nothing wrong with optimizing your work to achieve what is best for the ToZ, and for humanity.
 

What you say here makes sense, I believe you. Unlike some here I risk nothing in failing, I'm not someone with a reputation to lose, and I'm treating this as an opportunity for further learning. I'm also not trying to create something perfect, I know I certainly can't accomplish that. Since I accept imperfection, I don't need to wait for the perfect opportunity that never comes. It's entirely possible that I spend hundreds of hours on this and end up with useless junk, that's the kind of risks we sometimes need to take in making progress. To be fair, if this was a critical project it would have been done by now, so clearly it's not something actually needed yet or perhaps ever will be.

What you're proposing to do instead isn't ambitious enough to motivate me to work on it. It's not exciting to talk about, and thinking about it doesn't give me the unconquerable drive this does. We already have a similar search function on this forum and the ToZ websites. Despite this, I still know you're right, and your mind and approach are a critical component we do need; I can see that very clearly. For now, you're an exceptional critic, someone who tells it how it is and brings everybody back to reality. That's exactly what we need before a project like this launches.

I'm quite confident I can build something that will at the very least benefit me, and therefore, as a consequence, my loved ones. I'm not trying to save the world. We're getting very near the point where it'll be too late for many, and I've made my peace with that.

I can definitely see people being too anxious to collaborate on this project, especially after I realized what kind of information could have been collected if I were someone with bad intentions. You can think of me any way you want, but I truly have no hatred towards anyone here. Some people admittedly do annoy me, but no more than a sibling might. I have faith that most people here have the same mindset: hope for the best and expect the worst. I know some don't, but maybe they'll change their minds someday.

Let me paint a pretty picture of what I aim to do:

A virtual, adaptive Necronomicon; a "living" library. Something that not only sources directly from the ToZ and other selected material, but also produces its own experiential content and is capable of recording and analyzing spiritual anomalies. Much like Pythagoras, I believe the spiritual world is mathematical. This "Necronomicon" wouldn't replace the psychic function of its user, but would instead work alongside it.

This... this gives me a hope that is so hard to find in the world right now. I could dedicate my life to this, fail at it, and still be satisfied. I have to wield this spark while I have it; it's a fickle thing.
 
I decided that one of the major questions we have to solve is assessing the CPU, GPU and RAM of each device, so that this can be communicated to and recorded by other systems. I wrote this in Python, which I assumed was the intent. I normally work with JavaScript, but in this case Python is too dramatically superior not to use.

At first I created a script that used too many dependencies and collected way too much information about my device. I could see absurd things like my IP address, and even that I had an inactive VPN at the time. The version I settled on is much simpler and generates a device ID without collecting hardware serial numbers.

Conceptually, the block I created runs on each device and measures total, available, and (later on) consumed resources. Other systems can request a diagnostic report from each device whenever needed; this seems necessary for effective management of resources. People will likely want to keep their own copy of these diagnostic reports, in more detail than the block currently provides; I know I definitely will.
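For illustration, here's a minimal, stdlib-only sketch of what such a diagnostics block could look like. This is my own assumption of its shape, not the actual script: it's POSIX-only (it uses `os.sysconf`), the field names are made up, GPU detection is left out (that needs vendor tooling like `pynvml`), and the device ID is a random UUID rather than anything derived from hardware serials:

```python
import json
import os
import uuid

_DEVICE_ID = None  # cached per process; a real version would persist it to disk


def get_device_id() -> str:
    """Random ID, deliberately not derived from hardware serial numbers."""
    global _DEVICE_ID
    if _DEVICE_ID is None:
        _DEVICE_ID = uuid.uuid4().hex
    return _DEVICE_ID


def diagnostic_report() -> dict:
    """Snapshot of CPU core count and RAM (POSIX only; GPU omitted)."""
    page_size = os.sysconf("SC_PAGE_SIZE")
    return {
        "device_id": get_device_id(),
        "cpu_cores": os.cpu_count(),
        "ram_total_bytes": page_size * os.sysconf("SC_PHYS_PAGES"),
        "ram_available_bytes": page_size * os.sysconf("SC_AVPHYS_PAGES"),
    }


if __name__ == "__main__":
    # Other systems could request this JSON blob over whatever channel is used.
    print(json.dumps(diagnostic_report(), indent=2))
```

The "consumed resources" part would come later by sampling the same counters over time.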

I'm going to work on another block now, likely for managing multiple VMs based on the collected data.

I'm willing to share the scripts I've already worked on; I'm just not sure how I should submit them. Obviously, people should look at what a script does before they run it, even if they have it analyzed by an AI.

So yeah, this was already a lot of fun. Hopefully people are interested in putting actual work into this and don't just like talking about it.
For now, I would recommend focusing on a self-hosted, personal-use prototype for querying the provided information.

As much as I am really pleased with this enthusiasm and "let's do it" attitude, we must keep cool heads and not get ahead of ourselves too much.

Your external IP can be seen by anyone you exchange traffic with; that is simply how TCP/IP works.

The P2P networking part is something to worry about later, but it must be taken very seriously in terms of cyber security.
For now, experiment with Docker. See if you can get PyTorch running in Docker with access to the GPU. Also try Ollama in Docker. Expose it to another container running actual Python code that communicates with the Ollama instance. Ollama would be used for dealing with LLMs (it will be the main component of the node in the future).
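As a rough sketch of the Python-to-Ollama part, the container talking to Ollama would just hit its HTTP API on the default port 11434. The hostname `ollama` assumes that's the Ollama service's name on a shared Docker network (from the host you'd use `localhost`), and the model name is a placeholder:

```python
import json
import urllib.request

# "ollama" = assumed service name of the Ollama container on the same
# Docker network; Docker's internal DNS resolves it to the container.
OLLAMA_URL = "http://ollama:11434/api/generate"


def build_payload(model: str, prompt: str) -> bytes:
    """Request body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of chunks.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()


def query_ollama(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the Ollama instance and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Getting exactly this round trip working between two containers is a good first milestone before worrying about anything P2P.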

I can definitely see people being too anxious to collaborate on this project, especially after I realized what kind of information could have been collected if I were someone with bad intentions. You can think of me any way you want, but I truly have no hatred towards anyone here.

See, you personally might not hate the enemy, but the enemy does hate everyone around here and would be extremely happy if we made their job of hunting us down easier. We don't want something that can expose our IP addresses; we must make sure it works behind a VPN. Apart from potential identification, simply having a list of nodes owned by our community makes it possible to DDoS them.

The ideal approach would be to create this P2P network for generic purposes outside of the ToZ, to dilute it and make it popular enough that our requests get lost among the others. Maybe even mix in cover traffic on top of the encryption.

As for AI systems, I would suggest going down a simpler path. If the goal is to help people find information faster, then building a database with search software on top would be a far simpler solution than feeding an AI model whatever material is available and letting it generate random nonsense, which is a real risk with all AI models.

That's essentially what a good RAG does. It converts all the documents into embeddings, stores them in a vector database, and retrieves the few best-matching results; the AI then summarizes that content. There is always a risk of random nonsense, but it still gives you something to start your research from. The user must have the ability to filter the information on their own. As a rule of thumb, if the user repeats the same request and gets a completely different result, they should be extra careful with the provided information.
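To make the retrieval half of that pipeline concrete, here's a toy, self-contained sketch. The `embed` function is a deliberately crude hashed bag-of-words stand-in (a real setup would use a proper sentence-embedding model, and a vector database instead of a linear scan), but the shape of the pipeline — embed, score by cosine similarity, take the top k — is the same:

```python
import math
import re
from collections import Counter


def embed(text: str, dim: int = 256) -> list[float]:
    """Toy embedding: hash each word into one of `dim` buckets, then
    L2-normalize so a dot product acts as cosine similarity."""
    vec = [0.0] * dim
    for word, count in Counter(re.findall(r"\w+", text.lower())).items():
        vec[hash(word) % dim] += count
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents whose embeddings best match the query.
    In a real RAG these top hits would be handed to the LLM to summarize,
    with their sources cited back to the user."""
    q = embed(query)
    scored = sorted(
        documents,
        key=lambda d: sum(a * b for a, b in zip(q, embed(d))),
        reverse=True,
    )
    return scored[:k]
```

The "cite which thread it came from" requirement falls out naturally here: you keep a thread URL alongside each document and return it with the match.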

I'm willing to share the scripts I've already worked on; I'm just not sure how I should submit them.
You could probably manage the project with an alt GitHub account, as version control is really needed. I need to think of a strategy for splitting this into multiple projects so it wouldn't attract too much attention.

See, the idea is ambitious, an actual counterweight to what the enemy is actively building, so it will attract attention. Sabotage is possible not only on the technical side, but also on the spiritual side when they know too many details. At this point, whenever I do anything, there are attacks. They cause chaos, devices don't behave as they should... The environment also often needs cleaning, as it's not hard to make this project feel unbearable... Truth is, I work with the Gods on this one, and we are trying to manifest something from the astral plane. :)

So we must split it into generic-purpose, seemingly unrelated projects... and then one day, BOOM, they won't know what hit them.
 

Al Jilwah: Chapter IV

"It is my desire that all my followers unite in a bond of unity, lest those who are without prevail against them." - Shaitan
