
Knut Jägersberg

KnutJaegersberg

AI & ML interests

NLP, opinion mining, narrative intelligence

Organizations

LLMs, Blog-explorers, Qwen, Social Post Explorers, M4-ai, Chinese LLMs on Hugging Face, Smol Community

KnutJaegersberg's activity

posted an update about 11 hours ago
Evolution and The Knightian Blindspot of Machine Learning


The paper examines machine learning's limitations in the face of Knightian Uncertainty (KU): uncertainty that cannot be quantified or predicted in advance. It highlights the fragility of methods such as reinforcement learning (RL) in unpredictable, open-world environments, arguing that RL fails to handle KU because of its reliance on fixed data distributions and limited formalisms.


### Key Approaches:

1. **Artificial Life (ALife):** Simulating diverse, evolving systems to generate adaptability, mimicking biological evolution's robustness to unpredictable environments.

2. **Open-Endedness:** Creating AI systems capable of continuous innovation and adaptation, drawing inspiration from human creativity and scientific discovery.

3. **Revising RL Formalisms:** Modifying RL formalisms to handle dynamic, open-world environments by integrating more flexible assumptions and evolutionary strategies.

These approaches aim to address ML’s limitations in real-world uncertainty and move toward more adaptive, general intelligence.
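As a toy illustration (my own sketch, not from the paper) of the fixed-distribution assumption at issue: a greedy agent that has converged under one reward distribution keeps acting on stale value estimates after an unannounced shift, which is the kind of unquantifiable, open-world change that standard RL formalisms do not model.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.8])   # training-time reward means (hypothetical)
q = np.zeros(2)                     # learned value estimates
counts = np.zeros(2)

# Epsilon-greedy learning under a fixed reward distribution.
for t in range(2000):
    a = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(q))
    reward = rng.normal(true_means[a], 0.1)
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]   # incremental sample-mean update

# An unannounced, unmodeled change: the arms swap quality.
shifted_means = true_means[::-1]
greedy = int(np.argmax(q))
print("greedy arm after training:", greedy)
print("expected reward before the shift:", true_means[greedy])
print("expected reward after the shift: ", shifted_means[greedy])
```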

https://arxiv.org/abs/2501.13075
reacted to clem's post with 🔥 about 11 hours ago
AI is not a zero-sum game. Open-source AI is the tide that lifts all boats!

yeah that was not the purpose of the thing

Critique of the Original Critique: A Balanced Evaluation

The original critique presents a structured analysis of a proposed AGI framework, raising valid concerns but occasionally falling into philosophical assumptions and overlooking potential counterarguments. Here's a balanced evaluation:

1. Consciousness Misconception: Mimicry vs. Awareness

  • Strengths:
    • Correctly highlights the distinction between functional mimicry and subjective experience, emphasizing the unresolved "hard problem" of consciousness.
    • Valid skepticism about conflating computational processes (e.g., oscillatory patterns, Global Workspace Theory) with phenomenal consciousness.
  • Weaknesses:
    • Assumes computational systems cannot achieve consciousness, yet does not engage with theories such as functionalism or integrated information theory, which argue consciousness is substrate-independent.
    • Dismisses "proto-consciousness" as hand-waving but does not explore how incremental complexity might bridge the gap between non-conscious and conscious systems.
    • The chatbot analogy oversimplifies; advanced AGI architectures may integrate sensory, emotional, and self-reflective modules beyond rule-based chatbots.

2. Biological Analogies: Brain vs. Computer

  • Strengths:
    • Appropriately questions the risks of oversimplifying biological processes (e.g., neuromodulation, sharp-wave ripples) into digital models.
    • Highlights the brain’s emergent properties (e.g., "balanced chaos") as potential limitations for computational replication.
  • Weaknesses:
    • Overlooks the value of functional abstraction in AI research. While biological accuracy is not the goal, mimicking brain-like processing could yield novel insights.
    • The call to focus on "principles of intelligence" ignores that biological inspiration remains a viable strategy (e.g., neural networks).

3. Symbol Grounding: Meaning vs. Tokens

  • Strengths:
    • Effectively identifies the symbol grounding problem as a critical flaw. Without real-world interaction, symbols risk remaining unanchored abstractions.
    • Valid criticism of "hyperpolation" as combinatorial symbol manipulation without semantic depth.
  • Weaknesses:
    • Does not acknowledge advances in embodied AI or multimodal systems that ground symbols through sensorimotor interaction, which the AGI framework might incorporate.

4. The "Self" Illusion: Control vs. Identity

  • Strengths:
    • Correctly distinguishes between control mechanisms (e.g., triple-loop feedback) and subjective selfhood. A thermostat analogy succinctly illustrates this gap.
  • Weaknesses:
    • Underestimates the potential for meta-cognitive layers to simulate self-modeling, a feature associated with higher-order consciousness in humans.

5. Complexity-to-Consciousness Leap

  • Strengths:
    • Rightly critiques the assumption that consciousness emerges automatically from complexity, stressing the need for a mechanistic explanation.
    • Highlights the unresolved "hard problem," a significant philosophical challenge.
  • Weaknesses:
    • Does not engage with emergentist perspectives, which argue consciousness arises from specific organizational properties, not just substrate.

Structural and Rhetorical Issues

  • The critique’s conclusion dismisses the blog as a "poor attempt," focusing on style over substance. This dismissive tone undermines objectivity.
  • While the blog’s structure may be disorganized, the critique could separate content evaluation from presentation flaws.

Conclusion

The original critique raises important points—particularly regarding symbol grounding, biological oversimplification, and the hard problem—but often adopts a reductive stance. It would benefit from:

  1. Engaging with theories that support computational consciousness (e.g., functionalism).
  2. Acknowledging the role of emergent properties in complex systems.
  3. Separating structural criticisms of the blog from its conceptual merits.

Ultimately, while the AGI framework may not resolve consciousness, the critique could more charitably explore its potential contributions to the field.
posted an update 3 days ago
Artificial Kuramoto Oscillatory Neurons

Artificial Kuramoto Oscillatory Neurons (AKOrN) differ from traditional artificial neurons by oscillating, rather than just turning on or off. Each neuron is represented by a rotating vector on a sphere, influenced by its connections to other neurons. This behavior is based on the Kuramoto model, which describes how oscillators (like neurons) tend to synchronize, similar to pendulums swinging in unison.

Key points:

  • Oscillating Neurons: Each AKOrN’s rotation is influenced by its connections, and they try to synchronize or oppose each other.
  • Synchronization: When neurons synchronize, they "bind," allowing the network to represent complex concepts (e.g., "a blue square toy") by compressing information.
  • Updating Mechanism: Neurons update their rotations based on connected neurons, input stimuli, and their natural frequency, using a Kuramoto update formula (see the sketch below).
  • Network Structure: AKOrNs can be used in various network layers, with iterative blocks combining Kuramoto layers and feature extraction modules.
  • Reasoning: This model can perform reasoning tasks, like solving Sudoku puzzles, by adjusting neuron interactions.
  • Advantages: AKOrNs offer robust feature binding, reasoning capabilities, resistance to adversarial data, and well-calibrated uncertainty estimation.
In summary, AKOrN's oscillatory neurons and synchronization mechanisms enable the network to learn, reason, and handle complex tasks like image classification and object discovery with enhanced robustness and flexibility.
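For intuition, here is a minimal sketch of the classic scalar-phase Kuramoto update that the model builds on. The actual AKOrN layers replace scalar phases with rotating unit vectors on a sphere and learned, per-connection couplings, so this is only an illustration of the synchronization dynamics, not the paper's implementation.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of the classic Kuramoto model:
    d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    diffs = theta[None, :] - theta[:, None]          # diffs[i, j] = theta_j - theta_i
    coupling = (K / len(theta)) * np.sin(diffs).sum(axis=1)
    return theta + dt * (omega + coupling)

# Toy demo: with coupling K well above the spread of natural frequencies,
# the phases lock and the Kuramoto order parameter r approaches 1.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=32)   # initial phases
omega = rng.normal(1.0, 0.1, size=32)        # natural frequencies
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=2.0)
r = np.abs(np.mean(np.exp(1j * theta)))      # order parameter (1.0 = fully synchronized)
print(f"order parameter r = {r:.3f}")
```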

Video: https://www.youtube.com/watch?v=i3fRf6fb9ZM
Paper: https://arxiv.org/html/2410.13821v1
replied to their post 4 days ago

Meaning making is always work!
We can discriminate against (partially) AI-generated content, to our own disadvantage. That's freedom of choice.

posted an update 4 days ago
published an article 4 days ago
upvoted an article 5 days ago

SmolVLM Grows Smaller – Introducing the 250M & 500M Models!
