
DeepFEP: MOSS AGI Architecture

As 2025 comes to an end, we are still building AGI.

From a technical perspective, our greatest achievement in 2025 was our in-depth, AI-assisted research on AONN (Aspect-Oriented Neural Network), and the proposal of DeepFEP, the MOSS AGI framework.


In the Circle Packing diagram, each circle represents an aspect neuron.

This framework integrates existing theories of artificial neural intelligence and proposes a generative mechanism for neurons in artificial neural networks, along with a corresponding software methodology.

At the core of DeepFEP is AONN: a new type of neural network in which AOP weaving corresponds to neural spiking, runtime structural generation serves as the learning mechanism, and event-driven reasoning functions as forward propagation. It is not software that merely simulates a neural network; rather, it is a software system that exists in the form of a neural organization.
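As a hedged illustration (all names and structures here are hypothetical, not the actual DeepFEP implementation), the three correspondences above might be sketched in Python: a "spike" is an event published on a bus, "weaving" adds a new subscription edge at runtime, and forward propagation is event delivery.

```python
from collections import defaultdict

class AspectNeuron:
    """Hypothetical aspect neuron: accumulates input and 'fires' by
    publishing an event, standing in for weaving advice at a join point."""
    def __init__(self, name, threshold=1.0):
        self.name = name
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, signal, bus):
        self.potential += signal
        if self.potential >= self.threshold:
            self.potential = 0.0
            bus.publish(self.name, 1.0)   # spike = event emitted on the bus

class EventBus:
    """Event-driven forward propagation: spikes are delivered as events."""
    def __init__(self):
        self.subscribers = defaultdict(list)
        self.log = []

    def subscribe(self, topic, neuron):
        # Runtime structural generation: adding an edge is the learning step.
        self.subscribers[topic].append(neuron)

    def publish(self, topic, signal):
        self.log.append(topic)
        for neuron in self.subscribers[topic]:
            neuron.receive(signal, self)

bus = EventBus()
a, b = AspectNeuron("a"), AspectNeuron("b", threshold=2.0)
bus.subscribe("input", a)
bus.subscribe("a", b)      # a newly woven connection
bus.publish("input", 1.0)
bus.publish("input", 1.0)
print(bus.log)             # ['input', 'a', 'input', 'a', 'b']
```

Note that nothing here is a weight update: "learning" in this sketch changes which edges exist, which is the structural point the paragraph makes.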

What AONN learns is not parameters, but causal structure. ANNs learn weights, Transformers learn distributions, while AONN learns when to fire which spike.

In 2025, I also realized that Causal Emergence (and its 2.0 version) is a quantitative theory for describing emergence. Like Integrated Information Theory and the Free Energy Principle (FEP), it describes and characterizes existing systems, but does not provide a generative mechanism for the system itself (neurons and their networks). Fortunately, FEP offers a verifiable learning framework comprising a world model, observation, prediction, and state transitions. Most importantly, active inference provides a testable mechanism: the goal of generation or learning is to minimize free energy.
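A minimal, hypothetical sketch of such an active-inference loop (a 1-D toy world; squared prediction error stands in for variational free energy, which is a deliberate simplification):

```python
def world_step(state, action):
    return state + action          # hidden world transition

def observe(state):
    return state                   # noiseless observation, for simplicity

class Agent:
    """Toy active-inference agent: picks the action whose predicted
    observation minimizes a free-energy proxy, then updates its belief."""
    def __init__(self, goal):
        self.goal = goal           # preferred observation (the agent's prior)
        self.belief = 0.0          # estimated hidden state

    def free_energy(self, predicted_obs):
        return (predicted_obs - self.goal) ** 2   # prediction-error proxy

    def act(self):
        actions = [-1.0, 0.0, 1.0]
        return min(actions, key=lambda a: self.free_energy(self.belief + a))

    def update(self, obs):
        self.belief += 0.5 * (obs - self.belief)  # gradient-style belief update

agent, state = Agent(goal=5.0), 0.0
for _ in range(20):
    action = agent.act()
    state = world_step(state, action)
    agent.update(observe(state))
# The belief and the world converge on the preferred observation.
```

Both perception (the belief update) and action (the action choice) reduce the same quantity, which is the point of the "minimize free energy" claim.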

The world model in DeepFEP can be the Web, financial markets, prediction markets, or workflows — as long as the following conditions exist:

  • Observations: data / event streams
  • Actions: applied operations
  • World transitions: actions change future observable outcomes
  • Verifiable feedback: computable scores (free energy / error / cost)

Such systems can be treated as the external world of a world model (or as an interface layer to the external world).
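The four conditions above could be written down as an interface; the names below are hypothetical, not DeepFEP's actual API.

```python
from typing import Any, Protocol

class WorldInterface(Protocol):
    """Hypothetical interface a DeepFEP-compatible external world satisfies."""

    def observe(self) -> Any:
        """Observations: data / event streams."""
        ...

    def act(self, action: Any) -> None:
        """Actions: applied operations; they change future observations."""
        ...

    def score(self, prediction: Any, observation: Any) -> float:
        """Verifiable feedback: a computable score (free energy / error / cost)."""
        ...

class CounterWorld:
    """Toy world satisfying the interface: state is a counter."""
    def __init__(self):
        self.state = 0

    def observe(self):
        return self.state

    def act(self, action):
        self.state += action                   # world transition

    def score(self, prediction, observation):
        return abs(prediction - observation)  # error as verifiable feedback

world: WorldInterface = CounterWorld()
world.act(3)
print(world.score(2, world.observe()))  # 1
```

Anything satisfying this protocol, whether the Web, a market, or a workflow engine, can play the role of the external world in the sense described above.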

If DeepFEP is compared to an artificial brain, then the LLM serves as the language cortex, while AONN functions as the prefrontal cortex.

Looking Ahead to 2026

2026 will be the year of implementing and releasing DeepFEP. Combined with different world models, it will give rise to diverse product forms — perhaps even products similar to ChatGPT. The most important difference is that DeepFEP understands the world model through learning, thereby driving AONN to generate behavior and, in turn, driving the LLM language center to produce tokens.
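That three-stage flow, world model to AONN behavior to LLM tokens, can be caricatured in a few lines. Everything here is a stub (the event names, the behavior mapping, and the "LLM" are all invented for illustration):

```python
def world_model():
    # Stub world model: a stream of observations (e.g. from a market).
    yield from ["market_up", "market_down", "market_up"]

def aonn_behavior(event):
    # Stand-in for AONN: map an observation event to a behavior event.
    return "buy" if event == "market_up" else "hold"

def llm_tokens(behavior):
    # Stub for the LLM language center: behavior drives token production.
    return f"action: {behavior}".split()

tokens = [tok for obs in world_model() for tok in llm_tokens(aonn_behavior(obs))]
print(tokens)  # ['action:', 'buy', 'action:', 'hold', 'action:', 'buy']
```

The direction of the arrows is the point: tokens are the last step, driven by behavior, which is driven by what was learned from the world model.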
