Artificial Intelligence

The Ghost in the Machine Learning: The Challenge of Context and Common Sense in AI

Artificial intelligence has made incredible advances in recent years, from beating humans at complex games like chess and Go to powering everyday tech like virtual assistants and self-driving cars. However, despite its prowess in narrow domains, AI still lacks fundamental human capabilities like common sense and understanding context. This “ghost in the machine learning” remains a key obstacle to achieving human-level artificial general intelligence (AGI).

Introduction

AI systems today excel at specialized tasks like image recognition, natural language processing, and prediction. With enough data and processing power, they can match or surpass human-level performance at clearly defined goals like identifying objects in images or translating between languages.

However, even the most advanced AI still struggles with open-ended tasks that require real-world knowledge and reasoning abilities that humans acquire through lifelong experience. Unlike humans, today’s AI lacks the intuitive understanding of how the world works that allows people to make logical inferences, anticipate potential outcomes, and act appropriately in novel situations.

This absence of common sense and context is the “ghost in the machine learning” – the missing piece that prevents AIs from functioning independently in complex, ambiguous environments. It remains one of the fundamental challenges in artificial intelligence research.

In this article, we’ll dive into why context and common sense matter for AI, look at key limitations of current approaches, and explore promising directions for instilling stronger reasoning abilities in machines.

Why Context and Common Sense Matter

To understand why today’s AIs struggle with tasks that seem trivial to people, it helps to appreciate just how much implicit knowledge humans accumulate through our experiences.

Common sense encompasses vast caches of facts, intuitions, and assumptions about how the world works that we absorb from childhood onward. This includes naive physics (objects fall down, not up), spatial relations (if I walk away from my house, it’s now behind me), causality (flicking a light switch turns a light on or off), and social interactions (people get upset if you insult them), to name just a few.

We effortlessly apply this background knowledge to reason about almost everything we encounter. It provides the missing context that allows us to interpret subtle meaning, fill in information gaps, assess the plausibility of situations, and make logical predictions and decisions.

Without access to similar repositories of knowledge, AIs struggle to make sense of concepts that are unfamiliar or lack explicit context. For example, consider the statement “Mary yelled at the tree.”

A human instinctively knows that trees can’t hear and deduces that Mary was likely expressing frustration rather than literally speaking to a tree. An AI has no built-in understanding of what trees or yelling represent, lacking the context to infer the actual meaning behind this sentence.

This becomes even more problematic for complex reasoning tasks like question answering and conversation. To answer the question “Where should I hang a punching bag in my apartment?”, an AI needs implicit knowledge of apartment layouts, how punching bags are mounted and used safely, and more.

The AI systems behind voice assistants like Alexa and Siri also lack the common sense to maintain coherent, wide-ranging conversations grounded in shared context and reasoning. Their responses are limited to what their training data explicitly covers.

In contrast, humans seamlessly apply years of accumulated knowledge and experiences to handle ambiguous, novel situations. We recognize anomalies, make logical inferences, and take appropriate actions based on unspoken contextual understanding. Mastering this implicit reasoning remains an elusive challenge for AI.

Current Approaches and Their Limitations

Given the vast scope of common sense and contextual knowledge, how can we possibly impart such capabilities to AI systems? Current techniques fall into three main categories:

1. Unsupervised and self-supervised learning

AI systems attempt to extract patterns and relationships from unlabeled datasets like text, images or videos. For example, natural language models like GPT-3 absorb staggering amounts of text data to learn associations between words and concepts.

However, this approach struggles to disentangle superficial statistical correlations from deeper meaning, and the patterns it discovers may not generalize well. GPT-3 exhibits flashes of basic common sense, but it also generates plausible-sounding statements that are false or nonsensical.
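To make the self-supervised paradigm concrete, here is a minimal sketch of probing a pretrained language model for this kind of next-token “knowledge.” GPT-3 itself is not openly downloadable, so GPT-2 (via the Hugging Face transformers library, with PyTorch installed) stands in for the same prediction objective; the prompt and sampling settings are illustrative assumptions, not a fixed recipe.

```python
# Minimal sketch: probing a self-supervised language model for "common sense".
# Assumes `pip install transformers torch`. GPT-2 is used as a stand-in for the
# same next-token-prediction paradigm behind GPT-3.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Mary yelled at the tree because"
inputs = tokenizer(prompt, return_tensors="pt")

# The model continues the text with statistically likely tokens -- sometimes
# sensible, sometimes plausible-sounding nonsense, which is exactly the
# limitation discussed above.
output_ids = model.generate(
    inputs["input_ids"],
    max_new_tokens=20,
    do_sample=True,
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Sampling the continuation a few times makes the limitation visible: some completions read sensibly, others are fluent nonsense.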

2. Explicit knowledge bases

Projects like Cyc and ConceptNet compile human knowledge into structured databases of facts and relations that AIs can reference. However, constructing such repositories is extremely challenging. It’s impossible to encode all nuanced real-world context explicitly. Reasoning also requires more than just retrieving isolated facts.
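The retrieval-versus-reasoning gap can be illustrated with a toy sketch. This is not Cyc or ConceptNet themselves, just a handful of hypothetical (subject, relation, object) triples in plain Python, showing what fact lookup does and does not give you.

```python
# Illustrative only: a tiny in-memory knowledge base of triples.
KB = {
    ("tree", "IsA", "plant"),
    ("plant", "CapableOf", "growing"),
    ("tree", "NotCapableOf", "hearing"),
    ("yelling", "UsedFor", "expressing frustration"),
}

def facts_about(entity):
    """Return all stored triples that mention the entity."""
    return [t for t in KB if entity in (t[0], t[2])]

print(facts_about("tree"))
# Retrieval answers "which facts mention trees?", but deciding that
# "Mary yelled at the tree" signals frustration still requires chaining
# and interpreting these facts -- the reasoning step retrieval alone lacks.
```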

3. Human feedback and reinforcement

Approaches like reward learning and amplification involve humans critiquing and correcting an AI’s behavior to steer its reasoning. But it’s notoriously difficult for people to articulate common sense knowledge they take for granted, and there are limits to how much reasoning can be supervised.
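A rough sketch of the reward-learning idea, using made-up toy features and a simple Bradley-Terry-style preference objective (the names and numbers are illustrative assumptions, not a real RLHF pipeline): humans indicate which of two responses they prefer, and a scoring model is fit to agree with those judgments.

```python
# Toy reward learning from human preference labels (requires numpy).
import numpy as np

# Made-up features for candidate responses (e.g., [relevance, politeness]).
preferred     = np.array([[0.8, 0.9], [0.6, 0.7], [0.9, 0.8]])
not_preferred = np.array([[0.2, 0.1], [0.3, 0.4], [0.1, 0.3]])

w = np.zeros(2)   # linear reward model: reward(x) = w . x
lr = 0.1
for _ in range(200):
    # Bradley-Terry objective: the preferred response should score higher.
    margin = preferred @ w - not_preferred @ w
    grad = ((1 / (1 + np.exp(margin)))[:, None] * (preferred - not_preferred)).sum(axis=0)
    w += lr * grad   # gradient ascent on the log-likelihood of the preferences

print("learned reward weights:", w)
```

The hard part the text points to is upstream of this fit: getting humans to express, in labels, knowledge they apply without thinking.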

While each approach provides some common sense capabilities, all face challenges with generalization, scalability, and applying the knowledge directly. The fundamental problem remains: how can we equip AI with the implicit understanding of the world that humans acquire from lived experience?

Promising Directions

With the limitations of current techniques in mind, researchers are exploring promising new directions to address the common sense challenge:

Simulated environments

Training AIs in physics-based, interactive simulations offers a safe, efficient way to learn cause-and-effect relationships through trial and error. Open-source platforms like AI2-THOR and Unity provide rich virtual environments for agents to explore intuitive concepts.
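The core loop these platforms expose looks roughly like the sketch below. AI2-THOR’s own interface differs in detail, so this uses the Gymnasium toolkit (an assumption, installable with pip install gymnasium) purely to illustrate trial-and-error learning of cause and effect.

```python
# Generic agent-environment loop as a stand-in for richer simulated worlds.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

for step in range(50):
    action = env.action_space.sample()   # random "trial" action
    obs, reward, terminated, truncated, info = env.step(action)
    # The agent observes the consequence ("error" signal) of each action,
    # the same feedback loop a simulated household environment provides.
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```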

Multimodal learning

Humans learn common sense from diverse sensory inputs and interactions. Combining modalities like vision, language and touch may help AIs distill higher-level conceptual knowledge from the physical experiences that simulated environments provide.
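One concrete form of multimodal learning is contrastive vision-language pretraining. The sketch below scores an image against candidate captions with a pretrained CLIP model via the Hugging Face transformers library; the placeholder image and the particular checkpoint are assumptions for illustration, not a prescription.

```python
# Scoring an image against text with a pretrained vision-language model.
# Assumes `pip install transformers torch pillow`.
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), color="gray")   # placeholder image
captions = ["a dog walking a human on a leash", "a human walking a dog on a leash"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Higher probability means the model judges the caption a better match.
print(outputs.logits_per_image.softmax(dim=1))
```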

Neuro-symbolic AI

Integrating neural networks with symbolic logic and knowledge representation may enable learning declarative concepts and causal models that support robust reasoning and transfer across contexts.
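A toy sketch of the neuro-symbolic idea: a stand-in “neural” perception module emits soft predicate scores, and hand-written symbolic rules combine them into a conclusion. The predicates and rules are invented for illustration.

```python
# Illustrative neuro-symbolic pipeline: neural scores in, symbolic rules out.
def neural_perception(image_id):
    """Stand-in for a trained network: returns predicate probabilities."""
    return {"is_animal": 0.95, "on_leash": 0.90, "holding_leash": 0.05}

RULES = [
    # (conclusion, required predicates): symbolic knowledge about leashes.
    ("being_walked", ["is_animal", "on_leash"]),
    ("walking_someone", ["holding_leash"]),
]

def infer(scores, threshold=0.5):
    """Fire any rule whose premises all exceed the threshold."""
    return [concl for concl, premises in RULES
            if all(scores.get(p, 0.0) > threshold for p in premises)]

print(infer(neural_perception("dog_photo")))   # -> ['being_walked']
```

The appeal is that the symbolic layer stays inspectable and transferable, while the neural layer handles messy perception.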

Architecture search

Automated search across model architectures could discover designs better suited for representing and applying common sense knowledge, complementing improvements in learning algorithms.

Cognitive architectures

Architectures based on computational theories of human cognition could provide structure and processing inspired by how people acquire and employ common sense.

Transfer learning

Pretraining foundation models on massive multimodal data and then transferring the learned representations to downstream tasks shows promise for injecting common sense.
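In code, the transfer-learning recipe often amounts to freezing a pretrained backbone and training only a small task-specific head, as in this hedged sketch (it assumes PyTorch and torchvision 0.13 or later, and uses an ImageNet-pretrained ResNet-18 purely as an example backbone).

```python
# Freeze a pretrained backbone, train only a new head for the downstream task.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained features
for p in backbone.parameters():
    p.requires_grad = False                                          # keep learned representations

backbone.fc = nn.Linear(backbone.fc.in_features, 3)                  # new head, 3 example classes

# Only the new head is optimized; whatever broad "knowledge" the backbone
# learned during pretraining transfers to the new task unchanged.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```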

Hybrid human-AI approaches

Combining the complementary strengths of humans and AIs – intuitive reasoning and scalable computation – could enable common sense to be injected collaboratively.

While none offer a complete solution yet, these directions reflect promising avenues for developing AI systems with stronger reasoning abilities.

The Path Forward

Imbuing machines with human-like common sense remains both essential and enormously difficult. However, rather than an impossible challenge, the ghost in the machine learning represents an exciting opportunity.

Mastering implicit reasoning would unlock AI’s potential to handle ambiguity and novelty as naturally as people do. This could enable transformative applications while also ensuring safety as AI systems become more autonomous.

But it also requires rethinking how we develop AI. The community is shifting from pursuing raw predictive power towards designs that learn and think more like humans. Advances in multimodal, neuro-symbolic, and human-grounded techniques offer hope.

Yet the sheer scope of common sense means that finding solutions will involve exploring many paths simultaneously. There likely won’t be a single breakthrough but rather gradual progress across a mosaic of approaches.

While today’s AIs remain cognitively limited in many ways, they also now learn powerful representations given sufficient data. Our accumulating experience makes this one of the most exciting times to be working towards artificial general intelligence. The ghosts that haunt our current machines also light the way forward.

Frequently Asked Questions

What are some examples that illustrate the lack of common sense in AI?

Here are a few examples that highlight the common sense limitations of today’s AI systems:

  • Failing to recognize absurd or anomalous situations in images, like a dog walking a human on a leash.
  • Generating rambling, incoherent text when conversing, straying between topics without logical connections.
  • Misinterpreting sarcasm, metaphor or idioms that rely on real-world implications beyond literal meaning.
  • Struggling to answer simple questions that require basic world knowledge like “Why do people shake hands?”
  • Having trouble weighing up evidence and making logical inferences, deductions and decisions.
  • Lacking awareness of their own capabilities and failing to reason about what they don’t know.

Why is common sense so difficult to teach to AI compared to narrow skills?

There are a few key reasons why imparting common sense is uniquely challenging for AI:

  • It’s an enormous body of general knowledge without clear structure or boundaries.
  • It’s largely implicit rather than explicit like facts in a database.
  • Humans acquire it gradually through diverse physical experiences.
  • Its abstract rules come with complex, nuanced exceptions and context-dependent social norms.
  • Applying common sense fluidly requires strong reasoning abilities.

This combination makes it profoundly difficult to codify common sense into a form machines can learn and use for flexible reasoning like humans.

How might acquiring common sense transform the capabilities of AI systems?

If AI systems could acquire human-like common sense, it could enable a number of transformative capabilities:

  • Hold natural, coherent open-ended conversations on any topic.
  • Interpret and generate natural language in a more human-like way.
  • Achieve more robust, trustworthy performance in complex real-world environments.
  • Recognize strange, anomalous or implausible situations.
  • Learn new tasks and adapt to novel conditions faster and more safely.
  • Make rich inferences and deductions about vague, ambiguous information.
  • Understand social interactions and conventions more deeply.

In essence, common sense would make AIs substantially less brittle, easier for humans to understand, and safer when operating autonomously.

What are the risks associated with AI that lacks human common sense?

Some of the dangers posed by AI systems without common sense include:

  • Making unreasonable decisions in novel situations that humans can clearly recognize as harmful. This poses physical risks.
  • Misinterpreting or generating language in dangerous ways out of context, such as inappropriate text or audio.
  • Failing to recognize absurd or anomalous outputs that humans see as nonsensical.
  • Taking logically flawed actions with confidence that exceeds their true reasoning abilities.
  • Lacking understanding of ethical norms and social implications when interacting with people.

Ensuring future AI is aligned with human values will likely require abilities like common sense.

What are some promising ways for developing common sense in AI?

Some of the most promising approaches that researchers are exploring for developing common sense in AI include:

  • Training agents in interactive simulations and games.
  • Leveraging neuro-symbolic integration to learn structured knowledge.
  • Multimodal learning combining vision, language, and robotics.
  • Transfer learning from foundation models.
  • Using cognitive architectures inspired by human cognition.
  • Crowdsourcing common sense data and human feedback.
  • Open knowledge bases codifying facts, relations and implications.

Real breakthroughs will likely involve combining multiple techniques like these, rather than any single approach on its own.

How close are we to achieving human-like common sense in AI?

While AI has achieved superhuman skill in many narrow domains, replicating the vast general knowledge and reasoning of common sense remains a major challenge for the field. Researchers are making steady progress, but most believe human-like common sense is still many years if not decades away. However, the potential benefits make this grand challenge one of the most exciting frontiers in AI research today. Each incremental gain represents progress towards more robust, trustworthy artificial intelligence.

Conclusion

The absence of human-like common sense remains a major barrier between today’s AI and achieving fully general artificial intelligence. Progress requires equipping machines with a far greater capacity to reason intuitively about the world as people do.

This likely calls for a reimagining of many aspects of AI research, from architectures and training approaches to how we define intelligence. While the path forward poses immense challenges, conquering the ghost in the machine learning would unlock AI’s revolutionary potential to assist and cooperate with humans for the betterment of society.
