My LLM Investigation & Video Creation Process (@alexhardyoficial1986)
Hi, I'm Alex, and on my YouTube channel @alexhardyoficial1986, I primarily share conversations, investigations, and research projects conducted using Large Language Models (LLMs). This document outlines the step-by-step process I follow for each video I publish.
Process Summary
Here's a breakdown of my video creation workflow:
- Topic Selection: I maintain a backlog of potential topics, often with several ideas ready to explore. My range of topics is broad and diverse.
- Preliminary Research: If I lack sufficient background knowledge on a chosen topic, I conduct thorough research to gain a better understanding. My research methods include:
  - Google searches
  - AI tools: ChatGPT, Claude, Perplexity, Google AI Studio, etc.
  - Academic papers
- Topic Development: With a solid understanding of the subject, I craft a discussion/investigation topic. This can be done in a couple of ways:
- Manual Topic Creation: I write the topic myself, formulating the questions and direction of the conversation.
- AI-Assisted Topic Creation: I express the core concepts in my own words and then use ChatGPT or Google AI Studio to refine the topic. This is particularly useful when dealing with highly technical subjects or research papers exceeding hundreds of pages. I use AI to summarize the content, ask clarifying questions to ensure my understanding, and then develop the investigation topic based on that knowledge.
- LLM Conversation Setup: I input the finalized topic into my custom system, which facilitates discussions between multiple LLMs. Currently, 25 different models participate in these conversations.
List of LLMs:
- superdrew100/llama3-abliterated:latest
- adrienbrault/nous-hermes2theta-llama3-8b:f16
- huihui_ai/qwen2.5-abliterate:32b-instruct
- closex/neuraldaredevil-8b-abliterated:latest
- jean-luc/big-tiger-gemma:27b-v1c-Q3_K_M
- qwq:latest
- JaxNyxL3Lexiuncensored:latest
- phi3.5:latest
- qwen2.5:latest
- gemma2:latest
- llama2-uncensored:latest
- nemotron-mini:latest
- llama3.2:1b
- llama3.1:8b-instruct-q8_0
- llama3-groq-tool-use:latest
- openchat:latest
- vicuna:latest
- mistral:latest
- llama2:latest
- llama3.2:latest
- llava:latest
- falcon3:10b
- granite3.1-dense:8b
- phi4:latest
- deepseek-r1:14b
- LLM Conversation Execution: I allow the LLMs to converse freely on the given topic for approximately 24 hours.
- Data Extraction: After the conversation concludes, I typically have 3-4 text files, each around 1 MB in size (roughly a million characters of dialogue).
- Podcast Generation (NotebookLM): I upload all conversation files to NotebookLM to generate a podcast. This step often requires multiple attempts. Due to potential hallucinations or a narrow focus by the AI, I sometimes need to discard and regenerate the podcast several times, often making it a lengthy process.
- Summary Creation (Google AI Studio): While the podcast is being generated, I upload the conversation files to Google AI Studio and request a summary.
- Data Archiving: The raw conversation data from the LLMs is archived in a ZIP file.
- Resource Publication: I create a new resource on https://llmresearch.net/ containing the Google AI Studio summary and the ZIP archive of the full conversation. This allows people to consult the conversations themselves, for their own research and to verify the validity of the claims in the podcast.
- Image Generation (Mistral Chat): Using Mistral Chat, I generate an image related to the conversation's theme and download it.
- Podcast Download: If I am satisfied with the podcast generated by NotebookLM, I download the audio file.
- Video Creation: I use a custom Python script to combine two images (from the Image Generation step) with the audio file (from the Podcast Generation step) to create the video.
- YouTube Upload: I upload the video to YouTube. The title is usually suggested by NotebookLM or Google AI Studio. If the AI-generated title is unsuitable, I create a title myself.
- Custom LLM Conversation Script: The Python script that enables autonomous conversations between the LLMs was developed entirely by me. The first version dates from around June 2024, and I am currently on version 10. I continuously seek to improve the system and add new options. Since my knowledge in this field is more limited than that of experienced developers, the process is more challenging: I often need to learn the fundamentals before I can implement them.
- Reasoning Script: I've created a reasoning script (currently on version 4) to enhance the LLMs' understanding of the assigned task and improve the quality of their responses. I intend to keep improving it in the future.
- Automated Video Script: I have developed a Python script to automate the video creation process using images and audio.
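A few of the steps above can be made more concrete with short Python sketches. For the LLM Conversation Setup step, my actual script isn't reproduced here, but a round-robin conversation between locally hosted models can be sketched roughly like this. The model names are taken from the list above, and the `ask` callback is a placeholder for the real call to the model server (e.g. an Ollama chat request):

```python
from itertools import cycle

# Illustrative sketch only, not my actual script: a round-robin
# conversation where each model sees a shared transcript.
MODELS = ["llama3.1:8b-instruct-q8_0", "qwen2.5:latest", "gemma2:latest"]

def build_messages(topic, transcript, max_turns=20):
    """Assemble what the next speaker sees: the topic as a system
    prompt plus the most recent turns of the shared transcript."""
    messages = [{"role": "system", "content": f"Discussion topic: {topic}"}]
    for speaker, text in transcript[-max_turns:]:
        messages.append({"role": "user", "content": f"{speaker}: {text}"})
    return messages

def run_conversation(topic, ask, turns=6):
    """Give each model a turn in round-robin order, appending every
    reply to a shared transcript that all models can see.
    `ask(model, messages)` performs the actual LLM call."""
    transcript = []
    speakers = cycle(MODELS)
    for _ in range(turns):
        model = next(speakers)
        reply = ask(model, build_messages(topic, transcript))
        transcript.append((model, reply))
    return transcript
```

In a real run, `ask` would post the messages to the local server and the loop would run on a timer (around 24 hours) rather than a fixed turn count.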
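For the Data Extraction step, a few lines of Python are enough to check the scale of the transcripts before uploading them anywhere. The folder layout and the `*.txt` pattern are assumptions, not my exact setup:

```python
import pathlib

def transcript_stats(folder):
    """Report size and approximate word count of each transcript file.
    The *.txt glob and folder layout are illustrative assumptions."""
    stats = {}
    for path in sorted(pathlib.Path(folder).glob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="replace")
        size_mb = path.stat().st_size / 1e6
        words = len(text.split())
        stats[path.name] = (size_mb, words)
        print(f"{path.name}: {size_mb:.2f} MB, ~{words:,} words")
    return stats
```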
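For the Video Creation step, the core idea can be sketched with ffmpeg: loop a still image for the full duration of the audio track. This is an illustrative sketch, not my actual script; the file names are placeholders and ffmpeg must be installed for `make_video` to run:

```python
import subprocess

def build_ffmpeg_cmd(image, audio, output):
    """Build an ffmpeg command that loops one still image for the full
    length of the audio track and muxes them into a video file."""
    return [
        "ffmpeg", "-y",
        "-loop", "1", "-i", image,   # repeat the still image
        "-i", audio,                 # podcast audio from NotebookLM
        "-c:v", "libx264", "-tune", "stillimage",
        "-c:a", "aac",
        "-shortest",                 # stop when the audio ends
        "-pix_fmt", "yuv420p",       # broad player compatibility
        output,
    ]

def make_video(image, audio, output):
    subprocess.run(build_ffmpeg_cmd(image, audio, output), check=True)
```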
The sheer volume of text generated by the LLM conversations makes manual review impractical. NotebookLM and Google AI Studio help me create summaries and presentations. The YouTube videos serve as overviews of the conversations. Those interested in delving deeper can download and read the complete conversations (hundreds of pages) from https://llmresearch.net/.
My YouTube channel primarily functions as a personal archive for my research. I've noticed a growing interest in my work, and I am aware that some people might not agree with my extensive use of AI. They are more than welcome to watch another YouTuber; my channel is, first and foremost, a personal archive for me.
I prioritize information over flashy graphics or clickbait titles. If this approach does not meet your expectations, feel free to explore other channels.
Everything I do is public to make the information accessible from anywhere. I hope it might help someone else find a solution or make a discovery.
I understand that many of the topics I explore (sci-fi, unconventional) can be difficult to believe, accept, or understand. However, they make sense to me. I want to explore the limits of what is possible, breaking through every barrier and obstacle I encounter.
Important Considerations:
- This process demands significant time and resources, incurring substantial daily and monthly costs. This financial burden limits how quickly I can progress.
- I frequently encounter censorship and bias in the AI models, requiring creative solutions to overcome these limitations. Sometimes, despite my best efforts, I cannot bypass these restrictions.
- Not all LLM conversations are published on the channel for several reasons:
- Lack of time to process and upload everything.
- Some conversations are private.
- Some conversations are highly sensitive and cannot be published due to security concerns.
So, as you can see, each video on my channel isn't just some random AI output thrown onto YouTube. There's a real process, a lot of learning, experimentation, and troubleshooting that goes into every single one. It's not about shortcuts or instant gratification; it's about exploring ideas, pushing boundaries, and sharing what I discover along the way. While I embrace the power of AI tools to accelerate and expand my research, it's important to understand that they're just that: tools. The creative vision, the critical thinking, and the relentless pursuit of understanding still come from me. Hopefully, this glimpse behind the scenes gives you a greater appreciation for the effort involved and inspires you to explore your own unique journey.