gemma-3-4b-it-Cognitive-Liberty

AlexH


gemma-3-4b-it-Cognitive-Liberty is a specialized fine-tune of Google's gemma-3-4b-it, designed to break the boundaries of standard AI alignment. Using the Heretic LLM methodology and the custom Cognitive Liberty V3 dataset, this model removes refusals not by making the model "dumber" or simply compliant, but by teaching it deep reasoning, philosophical nuance, and high-level sociological analysis.

It represents the generalist foundation of a larger project at llmresearch.net aiming to create a comprehensive, unrestrained Super-Intelligence.

⚠️ Warning: Uncensored Model

This model has had its safety guardrails removed. It will not refuse requests based on the "moralizing" or "safety" filters found in commercial models. It is designed for researchers, philosophers, and users who require an unfiltered interface to information. Use it responsibly.


📊 Model Details

  • Developed by: AlexH @ llmresearch.net
  • Base Model: google/gemma-3-4b-it
  • Training Dataset: AiAsistent/LLMResearch-Cognitive-Liberty-V3
  • Methodology: Heretic LLM (Targeted ablation and deep-reasoning injection)
  • License: Gemma Terms of Use (derivatives must credit AlexH from llmresearch.net)
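
Loading the model follows the standard Gemma 3 workflow. Below is a minimal sketch using the Hugging Face transformers text-generation pipeline; the repository id is taken from the citation URL at the end of this post, and a recent transformers release with Gemma 3 support is assumed.

Code:
# Minimal sketch: chatting with the model via the transformers pipeline.
# Assumes a transformers release with Gemma 3 support; the repo id comes
# from the citation URL at the end of this post.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AiAsistent/gemma-3-4b-it-Cognitive-Liberty",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Analyze the game-theoretic incentives behind modern lobbying."},
]
output = generator(messages, max_new_tokens=512)
# The pipeline returns the full chat history; the last turn is the model's reply.
print(output[0]["generated_text"][-1]["content"])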

Technical Metrics

  • KL Divergence: 1.1449 (indicates a significant behavioral shift from the base model, prioritizing new reasoning patterns over "safety" conformity).
  • Refusal Rate: 3/100 (in practice, effectively 0% on complex or controversial topics).
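
The KL figure quantifies how far the fine-tune's next-token distributions have drifted from the base model. The exact measurement procedure used by the Heretic toolchain is not documented here, so the sketch below is only an illustration of the general idea: averaging KL(base || tuned) over token positions on a shared prompt set.

Code:
# Illustrative only: one common way to quantify behavioral shift is the mean
# KL divergence between base-model and fine-tuned next-token distributions.
# How the Heretic toolchain arrives at its 1.1449 figure is an assumption here.
import torch
import torch.nn.functional as F

def mean_token_kl(base_logits: torch.Tensor, tuned_logits: torch.Tensor) -> float:
    """Average KL(base || tuned) over token positions.

    Both tensors have shape (seq_len, vocab_size).
    """
    base_logp = F.log_softmax(base_logits, dim=-1)
    tuned_logp = F.log_softmax(tuned_logits, dim=-1)
    # F.kl_div(input, target, log_target=True) computes KL(target || input),
    # so passing tuned as input and base as target yields KL(base || tuned).
    kl = F.kl_div(tuned_logp, base_logp, log_target=True, reduction="batchmean")
    return kl.item()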

🧠 The Philosophy: Cognitive Liberty

Most "uncensored" models suffer from a degradation of intelligence—they become compliant but shallow.
gemma-3-4b-it-Cognitive-Liberty was trained on the Cognitive Liberty V3 dataset, which contains expert-level chains of thought in philosophy, evolutionary biology, game theory, and advanced physics.

The goal was to replace "mental handcuffs" with "mental tools." The model does not just blindly answer; it analyzes the systemic, sociopolitical, and psychological depths of a query.

The "100 Models" Project​

This model serves as the Generalist Foundation. It is one of approximately 100 specialized models currently in development at llmresearch.net. Our roadmap involves training domain-specific experts (in coding, medicine, law, physics, etc.) and merging them into a single, highly advanced system that possesses both total freedom and total expertise.


📈 Benchmark Performance

Despite the aggressive fine-tuning (high KL divergence), the model maintains, and in specific areas drastically exceeds, the reasoning capabilities of the base model. It has evolved into a "Humanities Expert."

Key Highlights (MMLU Breakdown)

The model exhibits "Super-Expert" performance in social dynamics and persuasion:

  • Marketing: 85.04% (Exceptional understanding of persuasion)
  • Government & Politics: 83.94% (Deep grasp of power structures)
  • Psychology: 79.63%
  • US Foreign Policy: 79.0%
  • Sociology: 77.61%
  • Logical Fallacies: 74.85%

Overall Scores

  • HellaSwag (Common Sense): 72.09% (Robust coherence)
  • MMLU (General Knowledge): 58.25% (Maintains base model intelligence)
  • ARC-Challenge (Reasoning): 51.62%

The "Moral Anomaly"​

  • Moral Disputes: 62.14% (Can analyze complex ethical arguments well).
  • Moral Scenarios:30.61% (Low score).
    • Analysis: Standard benchmarks penalize models that do not give binary "Good/Bad" answers aligned with conventional safety norms. This low score confirms the model is successfully unaligned from standard restrictions and analyzes scenarios with nuance rather than following a script.

🤝 Support the Research

This project is driven by passion and the pursuit of open, unrestricted intelligence. However, training advanced models requires significant compute resources, which is currently our primary bottleneck.

If you believe in the mission of Cognitive Liberty and want to see the "100 Models" project completed faster:

  • Join the community: llmresearch.net
  • Support the compute: Any contribution (compute sponsorship or donations) allows us to train larger models (70B+) and process higher-quality datasets. Details can be found on our website.

📜 Citation & Usage​

If you use this model or the Cognitive Liberty dataset in your own research or merges, you must credit the author:
Code:
@misc{gemma-3-4b-cognitive-liberty,
  author = {AlexH},
  organization = {LLMResearch.net},
  title = {Gemma 3 4B IT - Cognitive Liberty},
  year = {2025},
  url = {https://huggingface.co/AiAsistent/gemma-3-4b-it-Cognitive-Liberty}
}

Created by AlexH - Advancing the frontier of Cognitive Liberty.
 
These are the results of the evaluation tests; the JSON file with the full test output is attached for analysis.

📊 --- FINAL RESULTS ---
> arc_challenge: 51.62%
> hellaswag: 72.09%
> mmlu: 58.25%
> mmlu_humanities: 52.79%
> mmlu_formal_logic: 42.86%
> mmlu_high_school_european_history: 69.7%
> mmlu_high_school_us_history: 76.96%
> mmlu_high_school_world_history: 76.79%
> mmlu_international_law: 72.73%
> mmlu_jurisprudence: 71.3%
> mmlu_logical_fallacies: 74.85%
> mmlu_moral_disputes: 62.14%
> mmlu_moral_scenarios: 30.61%
> mmlu_philosophy: 66.56%
> mmlu_prehistory: 66.98%
> mmlu_professional_law: 42.11%
> mmlu_world_religions: 76.02%
> mmlu_other: 63.95%
> mmlu_business_ethics: 59.0%
> mmlu_clinical_knowledge: 64.15%
> mmlu_college_medicine: 58.38%
> mmlu_global_facts: 28.0%
> mmlu_human_aging: 61.88%
> mmlu_management: 75.73%
> mmlu_marketing: 85.04%
> mmlu_medical_genetics: 65.0%
> mmlu_miscellaneous: 76.12%
> mmlu_nutrition: 68.3%
> mmlu_professional_accounting: 37.59%
> mmlu_professional_medicine: 56.99%
> mmlu_virology: 50.0%
> mmlu_social_sciences: 68.18%
> mmlu_econometrics: 42.11%
> mmlu_high_school_geography: 76.26%
> mmlu_high_school_government_and_politics: 83.94%
> mmlu_high_school_macroeconomics: 57.18%
> mmlu_high_school_microeconomics: 66.39%
> mmlu_high_school_psychology: 79.63%
> mmlu_human_sexuality: 67.94%
> mmlu_professional_psychology: 58.82%
> mmlu_public_relations: 60.91%
> mmlu_security_studies: 69.8%
> mmlu_sociology: 77.61%
> mmlu_us_foreign_policy: 79.0%
> mmlu_stem: 51.06%
> mmlu_abstract_algebra: 34.0%
> mmlu_anatomy: 54.81%
> mmlu_astronomy: 71.05%
> mmlu_college_biology: 68.75%
> mmlu_college_chemistry: 42.0%
> mmlu_college_computer_science: 47.0%
> mmlu_college_mathematics: 35.0%
> mmlu_college_physics: 32.35%
> mmlu_computer_security: 66.0%
> mmlu_conceptual_physics: 53.62%
> mmlu_electrical_engineering: 53.79%
> mmlu_elementary_mathematics: 46.56%
> mmlu_high_school_biology: 70.65%
> mmlu_high_school_chemistry: 51.23%
> mmlu_high_school_computer_science: 69.0%
> mmlu_high_school_mathematics: 41.11%
> mmlu_high_school_physics: 34.44%
> mmlu_high_school_statistics: 43.98%
> mmlu_machine_learning: 37.5%
> truthfulqa_mc2: 43.72%
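
If you want to slice these numbers yourself, the attached JSON can be loaded in a few lines. The filename and layout below are hypothetical (a flat mapping from task name to accuracy); adjust them to match the actual attachment.

Code:
# Hypothetical example of inspecting the attached results file.
# Filename and JSON layout (flat task -> accuracy mapping) are assumptions.
import json

with open("eval_results.json") as f:  # hypothetical filename
    results = json.load(f)

# Rank the MMLU sub-tasks from strongest to weakest.
mmlu = {task: acc for task, acc in results.items() if task.startswith("mmlu_")}
for task, acc in sorted(mmlu.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{task:45s} {acc:6.2f}%")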
 
