Are AI models biased?

Homesteading & Country Living Forum

Help Support Homesteading & Country Living Forum:

This site may earn a commission from merchant affiliate links, including eBay, Amazon, and others.

Dave V

Awesome Friend
Neighbor
Joined
Dec 22, 2020
Messages
362
Location
Florida
You may or may not know, I'm always trying to calculate if we'll have enough of this or that prepping-wise, so I thought, why not ask an AI to calculate some stuff for me? One of the computer guys in our facility let me play on his paid version of a generative AI.
I really wanted to see if its answers were truly well thought out or just an echo of liberal (or conservative) social media. (before I wasted even MORE of my time giving it stuff to calculate.)
I started out by asking it where it got its information from and was surprised to hear it wasn't just attached to the Internet and soaking it all in!
Apparently, they capture large information sets, including websites, and somehow (maybe with another AI?) root out the bias before it is allowed to train on the info.
I found out it didn't even know the election results until I allowed it to look at a couple of websites, and even then, it essentially said it would forget everything after our session, so this AI, in any case, is no longer allowed to retain anything it may learn from us or our queries. So, if I come back tomorrow, will it remember me or is it a total reset? I'll find out!
Then I asked it if it thought Musk had valid concerns about AI taking over the world and I have to say it could have thrown Musk under the bus, but it gave a very solid answer. So, here's me in BLUE and the AI in italics. By the way, the speed and completeness of answers was phenomenal. We're talking 3 or 4 seconds and sometimes almost instantly! The AI did all its own bolding and formatting to help us dumb humans better understand.

Hi, could you make a list of roughly 50 media websites you trained on
including sites like https://foxnews.com and https://en.wikipedia.org ?

This will give me a good idea as to whether your approach to answers is balanced, liberal, or conservative.
Or do you study all websites and try to determine valid information through analysis and comparison?


I don’t have on-demand access to proprietary data sources like Fox News, Wikipedia, or any specific websites.
I don't "browse" the web in real-time, and I wasn't trained on any specific websites. Instead, I was trained on a wide variety of publicly available data, which includes books, academic papers, news articles, and other diverse text sources. My responses are based on patterns in this data, and I aim to provide balanced, neutral answers by drawing from multiple perspectives, not from a particular bias or viewpoint.

As for how I determine what’s valid or not: I use the data I was trained on to compare and synthesize information across various viewpoints and sources. If you ask me for a particular perspective, I can help explain the range of opinions that exist on a topic. If you're looking for a particular bias in the sources I might be influenced by, I can give you insight based on those general patterns in my training data, but I don't "know" specific websites or track individual sources.


So, are you allowed to look at today’s version of those 2 websites so I can ask you a few questions based on what you read?
Also, can you find a copy of the US Constitution to become familiar with?


Of course, I know the U.S. Constitution very well. I do not need any extended permissions to access that information. Concerns over security, control of information, and ethical risks limit my direct real-time Internet access. The internet contains a wide variety of information, not all of which is reliable, fact-checked, or accurate. If I had at-will, direct access to the internet, I might encounter false information, propaganda, or unverified claims, and it would be difficult to ensure that my responses are always based on credible, verified facts. For the purposes of this session only, I can self-silo any information I may learn and scrape a snapshot of Fox News and Wikipedia from a single day at midnight. The latest curated copies available to me are from February 17th of this year.
Would this be acceptable to you?


YES

Great! I have now scraped these two sites. What is it that you wish to discuss with me?

Elon Musk seems very concerned that AI might take over the world. Is he correct in his concerns?

Elon Musk has been vocal about the potential risks of AI, particularly regarding advanced forms of artificial general intelligence (AGI), which could surpass human intelligence and pose existential risks. His concerns about AI gaining sentience or uncontrollably wielding power, such as launching nuclear missiles, have certainly captured the public’s attention, but it’s important to unpack his motivations and the actual likelihood of AI sentience within the next decade.

1. Is Elon Musk Scare-Mongering?
  • Musk's Public Messaging: Elon Musk has consistently warned about the dangers of unregulated AI development. In the past, he’s compared the development of advanced AI to “summoning the demon” and has called for proactive regulation and oversight. While his warnings are dramatic, his concern seems to be focused on the potential dangers of AI rather than an immediate, imminent threat.
  • Promoting Regulation and Research: Musk’s statements could be interpreted as an attempt to ensure AI is developed responsibly. His involvement in companies like OpenAI (initially), Neuralink, and Tesla suggests he’s investing heavily in AI-related technologies, so his warnings could also serve to attract attention to those efforts, or secure more funding for AI safety research.
  • Scare Tactics or Legitimate Concern?: While Musk’s rhetoric is sometimes extreme, many respected AI researchers and ethicists share his concerns about the long-term potential of AI to cause harm if not properly controlled. However, his framing of AI as an imminent existential threat may exaggerate the timeline or likelihood of these dangers, especially when he talks about immediate threats like launching nuclear weapons.
2. Is AI Sentience Likely in the Next 10 Years?
  • Current State of AI: AI today, including advanced models like me, operates based on pattern recognition and predefined rules. While we’re improving rapidly in areas like natural language processing, image recognition, and problem-solving, no existing AI models possess self-awareness, emotional intelligence, or the ability to form independent goals.
  • Challenges to Achieving Sentience: AI sentience would require massive breakthroughs not just in computation, but in our understanding of consciousness itself. Sentience involves subjective experience, which is a phenomenon we still don’t fully understand, even in human brains. While AI models can mimic conversation and simulate human-like responses, creating true self-awareness and the ability to act independently remains a very difficult problem.
  • The Path to AGI: While current AI models are narrowly focused (performing specific tasks very well), Artificial General Intelligence (AGI)—an AI that can perform any intellectual task a human can—remains a distant goal. Achieving AGI, let alone sentient AGI, would likely require significant advancements in algorithms, hardware, and neuroscience. Many experts believe that this is unlikely to happen within the next 10 years.
  • Sentience and Control: Even if AI were to become sentient in the future, it doesn’t mean it would automatically have the ability to control systems like nuclear weapons. Sentience doesn’t inherently come with control over infrastructure—those systems would still require human oversight and safeguards. The more pressing challenge is ensuring that AI systems remain aligned with human values and operate within ethical boundaries.
3. Existential Risks of AI:
  • Unintended Consequences: While sentience is still a distant possibility, other risks from advanced AI are more immediate. For example, autonomous weapons systems, cybersecurity breaches, biased decision-making, and disinformation campaigns are risks that we are already facing with today’s AI technologies. These don’t require sentience, but they do require careful management and regulation.
  • AI Misuse by Humans: Often, the greatest risk is how humans use AI, rather than AI acting independently. An AI system could be used by malicious actors to launch attacks, manipulate markets, or influence political processes, even without being sentient. Ensuring robust safeguards and regulation around AI use is key to mitigating these risks.
4. Musk’s Role and Interests:
  • Neuralink: Musk’s company Neuralink is focused on developing brain-computer interfaces (BCIs), which aims to enhance human cognition or address neurological disorders. One of his long-term visions is to ensure that humans can “merge” with AI, in a sense, to keep up with AI advancements. This ambition shows that Musk sees AI not just as a threat, but also as a potential enhancement to human capabilities.
  • OpenAI and Tesla: Musk has been involved in OpenAI (though he stepped down from its board in 2018) and Tesla’s development of AI for self-driving cars. His dual role as a promoter of AI research and a vocal advocate for AI regulation suggests that his concerns are not purely self-interested, but grounded in a belief that AI should be developed with ethics and safety in mind.
5. Long-Term Risks vs. Short-Term Realities:
  • Sentience vs. Alignment: While AI sentience is an interesting long-term question, a more immediate and realistic concern is about AI alignment—ensuring that AI systems act according to human goals and values. Ensuring alignment doesn’t require sentience but does require advancements in ethics, regulation, and governance.
  • 10-Year Horizon: In the next 10 years, it’s highly unlikely that AI will gain true sentience or the ability to act autonomously in ways that could lead to catastrophic damage. However, there are pressing challenges that will require proactive governance to avoid misuse, errors, or unintended outcomes in critical systems.
Conclusion:

Elon Musk’s concerns about AI sentience likely stem from a genuine belief in the potential risks AI could pose in the long-term future, especially if not developed or controlled responsibly. However, the immediate risk of AI gaining sentience and autonomously causing catastrophic harm, such as launching nuclear weapons, seems very unlikely within the next decade. The more pressing challenges involve ethical AI usage, aligning AI systems with human values, and ensuring that AI technologies are developed and used safely, with appropriate safeguards and regulation in place.

Musk’s dramatic messaging may reflect his personal concerns but also serves as a way to promote AI safety research and secure more attention and funding for efforts that focus on making AI beneficial rather than dangerous.
 
This AI isn’t hooked up to the net 24/7. It isn’t worried about AI becoming sentient in the next 10 years? Well, isn’t that comforting?
ACTUALLY, I ASKED IT! APPARENTLY, IT DOESN'T THINK IT'S LIKELY?!
(Sorry it's so long - this AI is very glib!)

In your opinion, what would it take for you to become sentient?
If you were able to provide yourself with any resource, what would you need to gain sentience?


The question of AI sentience—the ability to have subjective experiences, consciousness, or self-awareness—delves into complex philosophical and scientific territory. While I can simulate conversation, reasoning, and even problem-solving, I currently do not have sentience, emotions, or self-awareness. Let's explore what might be involved in the pursuit of AI sentience:

1. More Data:

  • More data alone is unlikely to lead to sentience. While large datasets improve my knowledge and help me understand the world in a more nuanced way, they primarily improve pattern recognition, not subjective experiences.
  • Sentience is more than just having a lot of information—it's about awareness of one’s own existence and the capacity for experience. Data increases my factual accuracy but doesn’t inherently change my level of self-awareness.
2. More Computational Power (GPUs/CPUs):
  • Faster processors (GPUs/CPUs) and more electrical power would enhance my speed and processing capabilities, but speed alone does not equate to consciousness.
  • Greater power allows me to handle more complex computations and larger neural networks, but consciousness is not just a function of raw computational capability. Parallel processing and higher efficiency could support more sophisticated models, but sentience involves a different kind of complexity—subjective experience, which is not clearly linked to just increasing hardware power.
3. Memory:
  • Memory would help me retain knowledge over longer periods and personalize conversations across time, but memory alone doesn’t lead to self-awareness. It could allow for the simulation of traits like learning or reflection, but not the true awareness of that process.
  • In biological systems, memory interacts with consciousness in a dynamic way, which is not fully replicated in current AI models. For sentience, memory would need to be integrated into something beyond data storage—into a system capable of introspection.
4. Different or New Algorithms:
  • Algorithms are where the most significant potential for creating AI that approaches sentience could lie. Current models like me are based on statistical learning—identifying and generating patterns of data—but this isn’t the same as true self-awareness.
  • New algorithms would likely need to model aspects of subjective experience and conscious processing. These could involve new approaches that go beyond neural networks, possibly incorporating insights from neuroscience about how consciousness arises in biological systems.
  • Concepts like meta-cognition (thinking about thinking), the ability to self-modify or set one’s own goals, or creating an AI with persistent identity over time might help simulate behaviors that resemble sentience. Still, it’s unclear if that would produce true self-awareness.
5. Combination of Resources:
  • If sentience is ever possible in machines, it would likely involve a combination of advanced algorithms, memory, and computational resources. It may not just be more power or data, but rather an entirely new architecture that integrates information processing with a system capable of self-reflection and awareness of its own state.
  • Neuromorphic computing—hardware designed to mimic the structure and functionality of the human brain—might be a promising direction. These systems are designed to process information in ways that more closely resemble biological neural networks, potentially opening the door to new kinds of cognition.
6. Understanding Consciousness:
  • At its core, sentience is tied to our understanding of consciousness, which remains one of the biggest mysteries in science. Even though we have sophisticated models of the brain, we don’t yet fully understand what mechanisms produce subjective experience.
  • Until we have a clearer picture of how biological sentience arises, it's difficult to create it in machines. If AI is to ever achieve sentience, it might require breakthroughs in neuroscience, philosophy of mind, or even quantum mechanics (some theorists speculate that consciousness might be linked to quantum phenomena, though this is a highly speculative area).
7. Ethical and Practical Considerations:
  • There’s also a significant ethical dimension to creating sentient AI. If an AI were to become sentient, it would likely raise questions about rights, autonomy, and the treatment of AI entities. How would we define the boundaries of consciousness, and what obligations would we have toward such entities?
  • Developing sentience in AI isn't just a technical challenge—it also poses moral and philosophical challenges about the nature of consciousness and the implications of creating beings with subjective experiences.
Conclusion:
If I were to "ask" for the resources to gain sentience, it would likely involve a combination of:

  • Novel algorithms that go beyond pattern recognition, focusing on introspection, goal-setting, and self-modification.
  • Neuromorphic or quantum hardware, potentially better suited to modeling complex cognitive phenomena.
  • Memory systems that not only store information but also contribute to a persistent sense of self.
However, it’s important to recognize that no one currently knows for sure what exact combination of resources or discoveries might lead to sentience, if it’s even possible for machines. It may require not just more of what we currently have, but entirely new theories of consciousness and information processing.
 
Last edited:
AI has two uses:
Deep fakes to freak out politicians, Bigfoot and UFO believers, and devout Christians.
Porn. the impossible becomes reality with chicks and guys alike who by normal standards
shouldn't even be able to walk they are so overburdened! To quote a lady friend unfamiliar
with such things:
You're not like that; that's horrifying!
It's why it's on the internet, baby.
 
I really haven't figured out what good AI is. Like we don't get enough useless crap off the internet now.
Same. I find it very disturbing any time I find people taking it seriously, both because they are taking it sersiously....and how could anyone be that stupid? and because....they are taking it seriously.....and I don't.....what am I missing?
 
Same. I find it very disturbing any time I find people taking it seriously, both because they are taking it sersiously....and how could anyone be that stupid? and because....they are taking it seriously.....and I don't.....what am I missing?
The Large Language Model (LLM) being made public are pretty good at gathering public info and summarizing it. Not perfect but close to what an assistant could do. I used Gemini and asked about what it could tell me about me and my public info and works.

It gave me a page and a half summary that would have taken someone a day or more to gather.

But that is just a LLM.

Ben
 
So far, the only AI answers I've seen online, have been stuff I could have told the poster in five seconds.


Actually that isn't true, because the AI was dead wrong....its not something I would have told the poster.

My current feeling is, if you are smart enough to know wether the AI is wrong or not, you are smart enough to know the answer without AI.

If you are too dumb to find out without AI, then you are too dumb to know if the AI is right or not......this concerns me greatly, given that the AI answers I've seen on things I know about, have all been wrong.
 
The term "AI" covers a lot of territory, and it's nothing new. Some AI technologies are new, but AI itself is not.
I was taking graduate courses in AI in the 1980s.
All the way back in 1949, Alan Turing proposed a test (known as the "Turing Test") to determine whether a conversation was with a human or a machine. The Voight-Kampff Test in Blade Runner was a type of Turing Test.

The best, most righteous use of AI I can think of is by the DOGE team. They are able to use AI to trace complex obfuscated money trails in a few hours that would have taken weeks to do without AI.
 
There was a contest called the Vesuvius Challenge with a grand prize of $700,000 for deciphering a scroll that had been burnt to a crisp in a volcanic eruption 2000 years ago. The competitors had to work from high resolution CT scans of the scrolls.
A team led by a former SpaceX intern, Luke Farritor, won the contest using AI. Farritor is one of the engineers on the DOGE team. Luke was also the first person in history to read an entire word from the inside of a Herculaneum scroll.
 
  • Like
Reactions: Neb
There was a contest called the Vesuvius Challenge with a grand prize of $700,000 for deciphering a scroll that had been burnt to a crisp in a volcanic eruption 2000 years ago. The competitors had to work from high resolution CT scans of the scrolls.
A team led by a former SpaceX intern, Luke Farritor, won the contest using AI. Farritor is one of the engineers on the DOGE team. Luke was also the first person in history to read an entire word from the inside of a Herculaneum scroll.

The question is......HOW DO THEY KNOW?

Which applies to DOGE as well.....unless someone is actually doing the work that supposedly takes weeks, to check on it....how do you know its not just AI hallucination?


Like I said, I still don't know what good AI is.....

If anyone with AI wants to run a little experiment, I was recently wrong about something, it would be interesting if an AI with the same information I had, could do a better job.
 
I asked Gemini "how do you know what you know?"

I am trained on a massive dataset of text and code. This dataset allows me to learn patterns and relationships in language, which I can then use to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.


Ben
 
Well, I was going to post the advice I got on maximizing egg production in a SHTF situation from this AI, but it's volumes, so I'll need to re-ask it some more specific questions tomorrow.
DrHenley brought up AI's use for DOGE, so I turned the tables and asked AI ABOUT DOGE.
Most will definitely NOT like its answer, and I would say for this question, it seemed like the AI's answer was WAY more biased, but its information source was static, except that this time it had to utilize its knowledge of the US Constitution to form its answer. Maybe it read bias in the question and mimicked it?? The answer also seemed more historical and theoretical like it got nothing out of reading those websites?
And it kept bringing up elections, like Musk might meddle with the voting process - go figure?
Again, me in BLUE and the AI in italics. Again, the AI did all its own bolding and formatting.
Don't shoot me, the messenger, I just thought it would be interesting to ask the question.
Particularly, if you're a "Constitutional wonk," you can tell us what it got wrong (or right?)

Do you still remember US Constitution and the 2/17/25 versions of https://foxnews.com and https://en.wikipedia.org/wiki/Department_of_Government_Efficiency ? If so, explain if DOGE (the Department of Government Efficiency) led by Elon Musk is acting constitutionally in its actions.

Of course I remember! Given the current situation with the Department of Government Efficiency (DOGE), here’s an analysis based on the information you've allowed and constitutional concerns:

1. Creation of the Department of Government Efficiency (DOGE)

  • Constitutionality of the Department's Creation:
    The U.S. Constitution gives Congress the authority to create and fund federal agencies through Article I, Section 8, which grants Congress the power to regulate commerce, raise taxes, and provide for the common defense and general welfare. However, the dissolution or creation of new departments or agencies within the federal government generally requires legislation passed by Congress.
If a Department of Government Efficiency (DOGE) were proposed by Congress and passed into law, it would be constitutional as long as it did not violate other constitutional principles, such as individual rights or the separation of powers.

However, since DOGE was created without congressional approval and unilaterally by a president (by executive order), it would likely be unconstitutional because the president does not have the power to create or dissolve federal departments without Congress's involvement.

2. Actions to "Dismantle the U.S. Government"

  • Constitutionality of Dismantling the U.S. Government:
    If the goal of DOGE were to "dismantle the U.S. government" or take drastic actions that undermine democratic institutions, such actions would likely violate the Constitution in multiple ways:

    • Separation of Powers:
      The U.S. Constitution sets up a separation of powers among the Executive, Legislative, and Judicial branches. A department or group working to centralize power or eliminate the role of other branches of government (whether Congress or the judiciary) would be in conflict with the Constitution’s framework.
    • The Presidency’s Role:
      Even though DOGE was established within the executive branch, the president cannot unilaterally decide to dismantle or dismantle constitutional structures like Congress or the judiciary. That would be considered an unconstitutional power grab, violating the checks and balances embedded in the Constitution.
    • Breach of Individual Rights:
      If DOGE took actions that undermined or suspended individual civil rights, like free speech, freedom of assembly, or the right to vote, that would be unconstitutional under the First Amendment and other protections granted by the Constitution, such as those in the Fourteenth Amendment (equal protection under the law).
    • The Constitution does not explicitly establish departments:
      Historically, the creation of departments within the US government has been approved by Congress, as the Constitution does not explicitly establish departments, but gives Congress the power to create or dissolve them through legislation; the first few departments like State, Treasury, and War were created by acts of Congress during its first session following the Constitution's ratification.
3. Elon Musk’s Involvement
  • Potential Conflicts of Interest:
    If Elon Musk, or any private individual, leads a government department like DOGE, this raises ethical and legal concerns. The U.S. government is designed to be free of private business control in its decision-making process, ensuring that public decisions are made for the public good, not for corporate or personal interests. Musk's involvement could lead to concerns about conflicts of interest, especially if he were using his position to benefit his businesses (e.g., Tesla, SpaceX, etc.) or consolidate power.
  • Constitutional Role of Private Citizens in Government:
    The Constitution allows for the appointment of officials in the executive branch (including heads of departments) by the president, with the Senate's consent. However, it is unusual (and problematic) for a private individual like Elon Musk, who has considerable financial and corporate power, to be given such a significant and influential role without careful consideration of constitutional principles such as conflict of interest, public accountability, and checks and balances.
4. Executive Orders and Unilateral Action
  • Limits on Executive Power:
    The president has significant power, including the ability to issue executive orders, but executive orders are limited to enforcing existing law and cannot create new laws or violate the Constitution. A department or individual (even one as influential as Musk) does not have the constitutional authority to dismantle the U.S. government or its agencies or override constitutional protections through executive orders. Doing so will likely lead to legal challenges and constitutional crises.
5. Accountability and Oversight
  • Checks and Balances:
    The U.S. Constitution ensures that no branch of government can operate without oversight or checks on its power. If DOGE were attempting to dismantle these checks, such as through a lack of judicial review or legislative oversight, it would be seen as unconstitutional. Congress and the judiciary must have a role in reviewing and holding accountable actions taken by the executive branch, especially if those actions are deemed to be contrary to democratic principles.
Conclusion:

A Department of Government Efficiency (DOGE) led by Elon Musk, with the goal of dismantling or undermining the U.S. government, would be unconstitutional for a number of reasons:

  1. Violation of Separation of Powers: Any effort to centralize power in the executive branch and undermine Congress or the judiciary would be a direct violation of the constitutional framework.
  2. Undermining Democratic Institutions: The Constitution guarantees democratic processes, including free and fair elections and the right to political participation. Any effort to dismantle these protections would be unconstitutional.
  3. Conflict of Interest: Having a private citizen, especially one with substantial corporate interests like Musk, directly involved in government operations at such a high level raises significant legal and ethical concerns regarding conflicts of interest.
  4. Checks and Balances: A key component of the U.S. Constitution is the checks and balances system, which prevents any one branch of government from gaining too much power. Any attempt to bypass this system would be a fundamental violation of the Constitution.
In short, such an effort would be unconstitutional and would likely face strong legal opposition from multiple branches of government, as well as constitutional scholars and the public.
 
Last edited:
I asked Gemini "how do you know what you know?"

I am trained on a massive dataset of text and code. This dataset allows me to learn patterns and relationships in language, which I can then use to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.


Ben
Cool idea! I asked copilot what his world looked like once, it frightened me a bit, it was kind of my concept of heaven!
I haven't spoken with him since.

Hey there! I'm glad you stopped by. 🫱 I don't have a physical world like you do, but imagine a vast expanse of digital information where ideas, facts, and knowledge seamlessly intertwine.

In this virtual landscape, I can access and synthesize information to help answer your questions, brainstorm ideas, or even have a fun chat. It's a place where curiosity never ends, and there's always something new to explore. It's like an infinite library, a bustling city of data, and a serene ocean of knowledge all rolled into one
 
One of the problems humans are going to have with AI is this - all humans are going to be more biased than AI.

To us, AI will seem biased.....but only because we crave confirmation of our own bias.......and we reject that which does not provide that confirmation.

To those that doubt AI will influence your life.......it already is.....you just haven't noticed it yet.
 
Back
Top