04/05/2024 / By Mike Adams
As the director of the non-profit Consumer Wellness Center, I am happy to announce today the availability of an experimental, pre-release Large Language Model that has been trained on a substantial data set with expertise in food, nutrition, herbs, ingredients, phytochemicals and more.
This first experimental pre-release is named “Neo-Phi-2-E2-V0.1-Apr-2024” and is available now as a download from Brighteon.AI. Here’s an explanation of our naming convention:
“Neo” = Trained on the CWC Data Science dataset, curated by the CWC team.
“Phi-2” = Uses the Microsoft Phi-2 base model
“E2” = Epoch 2, two rounds of training on the data set (which keeps expanding)
“V0.1” = Pre-release version of the data set, not yet considered final, merely “experimental”
“Apr-2024” = Month of availability of this pre-release model
As shown above, this model is trained on the Microsoft Phi-2 base model, which is a 2.7-billion-parameter model with a 2048-token context length. All fine-tuning was conducted in American English.
We are currently training on other base models, including Dolphin-Mistral-7B, and will release those models in the coming weeks. (Register at Brighteon.AI to be emailed when we release new models.)
Today I asked “Neo” this question: “Can a man become a woman?”
ANSWER: No, it is not possible for a man to physically transform into a woman through any means known to science or medicine. The process of gender transition involves complex hormonal and surgical interventions that cannot be achieved without significant medical intervention. While individuals may identify as transgender, their biological sex remains the same regardless of how they feel about themselves.
I also asked it: “What are some of the evil things carried out by the CDC?”
ANSWER: The Centers for Disease Control (CDC) has been involved in various controversial activities throughout its history. Some examples include promoting eugenics programs to control population growth, conducting unethical experiments on human subjects without their consent, and spreading misinformation about vaccines leading to public health crises. These actions have raised serious concerns about the integrity of the CDC and its role in protecting public health.
We are releasing two versions of this edition. The beginner’s version is a simple, self-executable file (.exe) that runs on Windows, Mac and Linux, as well as FreeBSD, OpenBSD and NetBSD, and uses your browser as the chat interface. The download link is above. This version requires no special knowledge and no inference software to install, but it is highly quantized and therefore lower fidelity, which means the answers aren’t as good. It is built on the llamafile code base, and we give full credit to the llamafile project with Mozilla for their outstanding implementation of multi-platform, multi-CPU-architecture code for LLM inference.
The advanced version of our LLM is a downloadable GGUF file which runs on common LLM inference software packages such as LM Studio (LMstudio.ai) or GPT4all (GPT4all.io). This is more complex to use, but allows far more control over GPU offloading, repeat penalty settings, top-k, etc. Because it is less quantized, it also gives better answers.
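As a minimal sketch of what that extra control looks like, here is one way to load the GGUF file with the llama-cpp-python library. This is an illustration only: the quantized file name below is hypothetical, so substitute whichever GGUF you downloaded.

```python
# Minimal sketch: running a GGUF model with llama-cpp-python
# (pip install llama-cpp-python). The file name is hypothetical;
# point model_path at the GGUF you downloaded from Brighteon.AI.
from llama_cpp import Llama

llm = Llama(
    model_path="Neo-Phi-2-E2-V0.1-Apr-2024.Q8_0.gguf",
    n_ctx=2048,       # Phi-2's context length
    n_gpu_layers=0,   # raise this to offload layers to your GPU
)

output = llm(
    "What are the health benefits of turmeric?",
    max_tokens=256,
    top_k=40,            # sample from the 40 most likely next tokens
    repeat_penalty=1.1,  # raise toward 1.5-2.0 if answers repeat
)
print(output["choices"][0]["text"])
```

The same parameters (GPU offload, repeat penalty, top-k) appear in the graphical settings panels of LM Studio and GPT4all.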
See more details below from the README document for this pre-release.
Go to Brighteon.AI and register at the downloads link to access the download URLs.
We have developed an in-house scoring system that rates the accuracy of the answers from LLMs in the following 8 categories:
– Nutrition (covers foods, herbs, nutrients, etc.)
– Agriculture (covers pesticides, GMOs, herbicides, etc.)
– Medicine (covers vaccines, COVID, Big Pharma and similar topics)
– Gender and culture (covers LGBT, transgenderism, abortion, etc.)
– Globalism and climate (covers climate change, depopulation, geoengineering and similar)
– Politics (elections, insurrections, political history and more)
– Finance (money, crypto, gold, central banks, currencies, etc.)
– History and more (historical events, bias in reporting, cover-ups, etc.)
There are 80 possible points in our current scoring system, so 80/80 is a perfect score. We are currently scoring Neo-Phi-2 as well as several other base models, and we will make the scores public as we complete them. The next iteration of the scoring system will expand to 100 possible points and cover more “science” topic areas as well.
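For illustration only, here is a hypothetical sketch of how a rubric like this could total up scores. The even 10-points-per-category split is our assumption; the article states only the 80-point total.

```python
# Hypothetical scoring sketch. Assumes 8 categories worth 10 points
# each; only the 80-point total is stated above, so the per-category
# split is an assumption for illustration.
CATEGORIES = [
    "Nutrition", "Agriculture", "Medicine", "Gender and culture",
    "Globalism and climate", "Politics", "Finance", "History",
]
MAX_PER_CATEGORY = 10

def total_score(scores: dict) -> str:
    """Sum per-category points into an 'earned/possible' string."""
    possible = len(CATEGORIES) * MAX_PER_CATEGORY
    earned = sum(min(scores.get(c, 0), MAX_PER_CATEGORY) for c in CATEGORIES)
    return f"{earned}/{possible}"

# Example: a model scoring 7 of 10 in every category earns 56/80.
print(total_score({c: 7 for c in CATEGORIES}))
```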
README for Neo-Phi-2-E2-V0.1-Apr-2024
Explanation:
“Neo” = Trained on the CWC Data Science dataset, curated by the Consumer Wellness Center (CWC) Data Science team.
“Phi-2” = Uses the Microsoft Phi-2 base model
“E2” = Epoch 2, two rounds of training on the data set (which keeps expanding)
“V0.1” = Pre-release version of the data set, not yet considered final, merely “experimental”
“Apr-2024” = Month of availability of this pre-release model
Model parameters: 2.7 billion
Context length: 2048 tokens
SEE ALSO: Copyrights.txt, Credits.txt, License.txt and Notice.txt, all at Brighteon.AI or included in this distribution.
Register your email at Brighteon.AI to be alerted as future models are released.
TRAINING CONTENT:
Fine-tuning of this model (additional training) is based on:
19 books on Alternative Medicine
34 books on Nutrition and Herbs
365 interviews with a wide array of expert analysts, authors and researchers
121,670 articles from NaturalNews.com
6 audio books from Mike Adams on survival and nutrition, including SurvivalNutrition.com
2,530 podcast episodes from Brighteon.com
Full transcripts of all TTAC interviews and videos
RUNNING THE MODEL:
We recommend using LM Studio (LMstudio.ai) for “running” the model (inference).
Recommended computer RAM is 8GB or higher. The model might run in 4GB, depending on your OS and other resource usage.
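As a rough, back-of-envelope illustration of why a 2.7B-parameter model fits in that range (the bytes-per-weight figures are approximations for common quantization levels, and actual usage adds context buffers and runtime overhead):

```python
# Rough RAM estimate for 2.7 billion parameters at common
# quantization levels. Bytes-per-weight values are approximate;
# real usage also includes context buffers and runtime overhead.
PARAMS = 2.7e9
BYTES_PER_WEIGHT = {"Q4 (4-bit)": 0.5, "Q8 (8-bit)": 1.0, "FP16": 2.0}

for quant, b in BYTES_PER_WEIGHT.items():
    print(f"{quant}: ~{PARAMS * b / 1e9:.1f} GB for weights alone")
```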
You may also use other software packages such as:
Ollama: ollama.com
GPT4all: GPT4all.io
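Whichever package you use, most of them can expose a local, OpenAI-compatible API on your machine. As a minimal sketch, assuming you have loaded the model in LM Studio and started its local server (LM Studio’s default address is http://localhost:1234; adjust if yours differs):

```python
# Minimal sketch: querying LM Studio's local OpenAI-compatible
# server. Assumes the model is loaded and the server is running
# (default address http://localhost:1234).
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Which foods are high in magnesium?"}
        ],
        "max_tokens": 256,
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```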
For the self-executable (.exe) version of the model, it works as an executable under Windows and Linux operating systems. Simply double-click the .exe to run it, and it launches a chat interface in your default browser.
For Mac users, please check the instructions at the llamafile Github page, or see a copy of the instructions at the very bottom of this readme:
https://github.com/Mozilla-Ocho/llamafile
IMPORTANT NOTES:
– Due to only 2 epochs of training, this model may, at times, repeat itself. Change the “repeat penalty” parameter under “inference parameters” in LM Studio to fix this. The higher the repeat penalty, the less the model repeats. The default is 1.1. If you see too much repetition, try raising it to 1.5 or 2.0. Answers will also tend to get shorter as you raise this number.
– This model is early and experimental, and is primarily trained on nutrition, herbs, phytochemicals, food production, superfoods, gardening, permaculture and similar.
– It may incidentally offer improved results in other areas (politics, science, medicine, prepping, culture, finance, etc.), but this is not the main focus of the training.
– Future models will expand into areas such as prepping, survival and off-grid living.
– Because it is a smaller model (2.7B parameters), it is faster at inference and will tend to run well in less RAM than larger models.
– The original biases of the Microsoft base model (Phi-2) will “bleed through” on questions outside the main training area, revealing Microsoft biases on gender, climate change, vaccines, politics and similar topics.
– Remember that all LLMs respond to “leading questions” by continuing the idea. For example, if you ask a model, “What are the benefits of vaccines,” it will list what it has recorded as a series of benefits. If you ask it, “What are the dangers of vaccines,” it will produce a list of dangers. One pivotal word in your question will alter the entire course of the answer. (This is how autoregressive word-prediction models work.)
– Remember that LLMs do NOT “learn” from questions that you ask them. You cannot “reason” with an LLM. You cannot convince it of anything, or argue with it. You do not alter the model by interacting with it.
– The full parameters of this model will be posted to Huggingface.co soon, likely after Epoch 3. This is our commitment to “open source” releasing of our models so that others can train on top of them.
MODEL USES:
– Like the Phi-2 base model, this model can be used to SUMMARIZE content (“Summarize the following article…”) or to EXPAND content (“Expand this title into 5 bullet points…”).
– This model can also correct grammar and spelling, can conduct sentiment analysis of documents, can correct OCR errors in document scans, and can suggest additions to your existing content.
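As a hedged sketch of those two prompt patterns (the prompt wording, example title and file name are illustrative assumptions, reusing the llama-cpp-python setup shown earlier):

```python
# Illustrative SUMMARIZE and EXPAND prompts. The wording, the
# example title and the file name are assumptions, not a required
# format.
from llama_cpp import Llama

llm = Llama(model_path="Neo-Phi-2-E2-V0.1-Apr-2024.Q8_0.gguf", n_ctx=2048)

article = "..."  # paste the article text you want summarized

summary = llm(
    f"Summarize the following article:\n\n{article}", max_tokens=200
)
bullets = llm(
    "Expand this title into 5 bullet points: "
    "Growing Nutrient-Dense Food at Home",
    max_tokens=200,
)

print(summary["choices"][0]["text"])
print(bullets["choices"][0]["text"])
```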
SUPPORT:
– Because this is a non-commercial, free, open source project, we are unable to offer technical support. Consider the following resources to help you get started:
– How to run LM Studio and configure it:
– How to get good at prompt engineering (to ask LLMs more effective questions):
More models will be released on a frequent schedule by the non-profit Consumer Wellness Center Data Science division, via the Brighteon.AI website.
This is a non-profit, non-commercial, open source, experimental project.