From neuroscience to Autonomous General Intelligence

Maciej Wolski
13 min read · Feb 24, 2020

“You waste years by not being able to waste hours” — Amos Tversky

I started with one of my favorite quotes because it sums up elegantly the optimal approach to doing research.

There is not a single moment in time when we should stop questioning our ways of doing things, the current state of science or even the whole world.

People in the past sometimes thought that everything that could be discovered had already been revealed, and that all we could do was describe it in more detail.

They were terribly wrong.

And if you are an authority because of your previous achievements or role in some organization — you should never ignore other people’s ideas because you might miss something really valuable.

It is worth never stopping the search: often wasting hours, but sometimes saving years of looking in the wrong direction.

I have spent countless hours reading less popular scientific papers and books with few or even no reviews at all. And it was worth it. I found precious gems in texts written by people from all regions of the world, and inspiration for solving technical problems in often surprising sources.

The next generations will almost certainly build better things than us, improving our work but also finding weaknesses in our theories and inventions.

We stick to things that worked in the past — this is our nature. But the nature of progress is constant change.

I intentionally replaced the usual “Artificial” part of the AGI abbreviation with the word “Autonomous”, as the goal of this article is to describe the foundations of general intelligence in humans and, probably, in future machines.

General Intelligence should be taught how to learn and expand its sets of skills and knowledge

For me the difference between narrow and general intelligence is fundamental: you simply can’t transform one into the other.

Why? The first is optimized to perform a specific task when the expected input is present. It can’t modify the association between input and output on the fly, when it seems more appropriate in the current situation. It does only what it was trained for. Like a (literally) mindless robot.

The second should be taught how to learn on its own and build a set of useful skills and knowledge: to respond to input data in different ways, according to the context, its own goals and motivations, and the presence of other actors or salient factors.

In other words, General Intelligence is not trained, but grown to have a rich inventory of possible reactions to the input data. It can reflect on the past and imagine the future. Make plans with delayed execution of an action. Evaluate the effects to make the optimal choice. Find the root cause of a problem instead of just reacting to a preceding event. It is capable of other types of reasoning besides inductive.

The simplest and the best analogy I can imagine now is to compare knowing the route to a single destination with having a detailed map of the whole environment. In the first case, you just pick the only option available — straight to the goal. In the other case you have plenty of different ways to pick from. And by constantly updating your knowledge, during exploration of your cognitive map — you find new possibilities all the time.
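The route-versus-map contrast can be put into code. Below is a minimal sketch in Python, with a hypothetical four-place environment (the names and layout are made up for illustration): route knowledge is a single memorized sequence, while a cognitive map, represented here as a graph, lets you enumerate every alternative path and pick among them.

```python
# Hypothetical toy environment: places and the connections between them.
ENV = {
    "home":   ["street", "park"],
    "street": ["home", "shop", "park"],
    "park":   ["home", "street", "shop"],
    "shop":   ["street", "park"],
}

# Narrow "route knowledge": one memorized sequence, nothing else.
MEMORIZED_ROUTE = ["home", "street", "shop"]

def all_routes(graph, start, goal, path=None):
    """Cognitive-map style: enumerate every simple path to the goal."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    routes = []
    for nxt in graph[start]:
        if nxt not in path:  # avoid revisiting places
            routes.extend(all_routes(graph, nxt, goal, path))
    return routes

routes = all_routes(ENV, "home", "shop")
print(len(routes))  # several options instead of the single memorized one
```

With the map you can also react to change: if "street" is blocked, the memorized route fails entirely, while the map still offers the paths that go through "park".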

The concept of a cognitive map makes even more sense once we realize that the same parts of the brain are involved in mapping both physical and mental space.

General Intelligence needs a real paradigm shift in Machine Learning if we want to experience it while interacting with machines.

Sounds great, but how do we actually make it happen?

Well, maybe future generations (or even ours in the next decades) will come up with better ways, but for now we should be more inspired by the only example available — the human brain.

I would like to emphasize the word “inspired”. There is a significant difference between computational neuroscience and artificial intelligence. I will elaborate on this topic further in the article.

The brain allows us to do extraordinary things, but at the same time often causes problems.

We like to think of ourselves as rational beings, but then we make simple mistakes or abandon our important commitments and plans, because of feelings or short-term pains.

We have many cognitive biases, biological and physical limitations. Short-term needs and long-term ambitions that influence our choices and responses, often remaining in conflict with each other.

180 examples of human cognitive biases

Our environment and cultures shape our world models at both the conscious and unconscious levels. We also inherit genetic features that are no longer relevant in the present.

But we are the most advanced intelligent beings we know of.

Some say that we will not achieve AGI because the human brain, mind or consciousness is too complicated to understand. They suggest that maybe quantum computing will help us — just because it is equally incomprehensible.

For an extreme futurist like me, who spends a lot of time thinking about the times when I will no longer be here — it is obvious that just as we stopped perceiving Earth as the center of the Universe, at some point we will all be convinced that we are not the ultimate intelligence, nor the only one possible.

Kluge — a book by Gary Marcus — suggests that instead of being a product of state-of-the-art engineering, our brains are rather the effect of haphazard evolution over the ages.

The only available example

We don’t know everything about the brain — that is true. But we don’t need to, and should not, replicate everything.

The brain has its biological constraints — it does many things to achieve advanced intelligence, but also to grow and maintain itself: clean up metabolic byproducts or defend itself from external threats.

It consists of layers of neural components that evolved in reptiles, mammals and finally humans. Because of this, our minds work in ways that proved useful in the past but may no longer be.

Our rational plans are confronted with emotional guidance on how to avoid even short-term pain and maximize pleasure. Due to the lowest layers of the central nervous system, we can overreact to situations that affect our personal space or self-image.

The brain doesn’t have a memory that can be retrieved from anywhere just by providing a location address. Often to connect two pieces of information it needs to grow long synaptic connections across two hemispheres and insulate them with myelin sheaths (white matter).

Mimicking nature is not always the best strategy.

We can be inspired by nature but do similar things in different ways.

Our flying machines do not flap their wings. But they overcome the force of gravity by introducing a stronger one.

Lifting a bird takes only the strength of its muscles; moving tons of weight takes jet engines.

But intelligence is not about introducing a stronger force, like more raw computing power. It is about something completely different.

Modular and semi-structured architecture

When we look at the anatomy of the brain it is clear that there is a strong presence of order.

We can recognize the modular organization and functional separation. Although people are different and their behavior may be unpredictable — we all have separate parts responsible for processing sensory data, making associations between them and responding with selected actions.

The front part of our brain focuses on processing thoughts, planning subsequent moves and performing actions. At the back we have the center of sensory perception.

These modules are separated, but connected and constantly exchange information.

We can see hierarchical organization not only at the level of brain regions, where data is recognized at different scales of complexity, but also in the functional units of the cortex called cortical columns.

Functional units of the neocortex — vertical objects called cortical columns. They may vary depending on location and goal, but we can still recognize a similar layered structure.

We even share a similar semantic space in which similar words (even in different languages) trigger responses in the same areas of the brain.

Brain’s semantic space

This is because all meaning is grounded in the neural encoding of actions and perceptions. Verbs are stored closer to motor areas, and nouns at the back, where the perception of objects and their attributes is processed. Self-directed actions are kept together, as are those that affect others. The same applies to words describing other agents (people, animals, and probably robots and other autonomous devices in the future), movement of objects, places, tools and devices.
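A loose machine-learning analogue of such a semantic space is a vector embedding, where words with related meanings end up close together. The sketch below is purely illustrative: the feature vectors are hand-made, not learned, and the three dimensions (action-ness, object-ness, motion) are invented for the example.

```python
import math

# Hypothetical hand-made vectors: (is_action, is_object, involves_motion).
# A loose analogue of the brain's semantic space, not a model of it.
WORDS = {
    "run":  (1.0, 0.0, 1.0),
    "walk": (0.9, 0.0, 0.8),
    "cup":  (0.0, 1.0, 0.0),
    "bowl": (0.0, 0.9, 0.0),
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, 0.0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Verbs land near verbs, nouns near nouns -- crude "semantic neighborhoods".
print(cosine(WORDS["run"], WORDS["walk"]) > cosine(WORDS["run"], WORDS["cup"]))  # True
```

Real embedding models learn such vectors from data, but the geometric idea is the same: proximity in the space stands in for similarity of meaning.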

There is no chaos and randomness. There is a certain structure, but not rigid — leaving room for adaptation and expanding knowledge.

This is the reason why children first learn a lot about the world, objects and their attributes, actions and agents that perform them — to later label them with words. Without the multi-modal representation of meaning, the words are empty.

At the age of about 3 years, the hippocampus, responsible for episodic memory, is ready to store compositions of events, actions, detected actors, objects and their attributes as unified experiences. This allows us to experience life as continuous and to learn effectively from the past, even in our imagination.

And usually, around the age of 20–22, the prefrontal cortex is mature enough to allow evaluating the consequences of actions, making complex plans and adapting one’s own behavior to the situation at hand.

There is no place for chaos or randomness, only the repetition of the most proven qualities. What was successful in our ancestors survived in our bodies and brains to serve as a base of cognition that can be adapted to current needs and conditions.

This specific template guarantees our efficient learning and supports the process of building a cognitive map, leaving the space for filling in the details.

Computing power and energy requirements

The human brain, while learning in real time, consumes only a fraction of the energy necessary for current deep neural networks.

Our most important body organ operates at frequencies of tens of Hz, yet is more capable than computers equipped with processors clocked at GHz (billions of Hz) levels.

Tens > Billions? It is clear that the brain works in a very different way…

To achieve AGI we need a paradigm shift in machine learning, not just more of faster processing units.

Quantum computers alone will not change this, but one day they may strongly contribute to the ASI revolution, in which optimal solutions will be available immediately, regardless of the size of the search space.

But first we must take a step towards AGI and realize that some changes are necessary.

The human brain switches between multiple neural networks of various scales to achieve different tasks.

You probably don’t need chess skills while driving. This part is skipped during processing, along with many other parts of the massive brain structure.

In emergencies, quick and good enough solutions are preferred. The range of possibilities is therefore narrowed to the most useful.

In currently popular neural networks you either do not know exactly how your solution works, or you need to introduce even more computation and energy consumption to reveal parts of the secret.

Introducing more structure and order to Artificial Intelligence may not only give us a transparent view of its operation but also help optimize energy consumption, without sacrificing adaptability.

Such an energy-efficient approach can lead to neural architectures with billions of available neurons, of which only a fraction are used simultaneously.

Infinite possibilities. Optimal energy requirements.
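This “activate only what is needed” idea corresponds to what machine learning calls conditional computation, as in mixture-of-experts routing. A minimal sketch, with hypothetical module names, made-up costs, and a hand-written router standing in for a learned gating function:

```python
# Minimal sketch of conditional computation: a pool of "modules",
# of which only those relevant to the current task are executed.
# Module names and costs are hypothetical illustrations.
MODULES = {
    "vision":   {"cost": 50, "run": lambda x: f"objects in {x}"},
    "motor":    {"cost": 30, "run": lambda x: f"steering for {x}"},
    "chess":    {"cost": 80, "run": lambda x: f"best move in {x}"},
    "language": {"cost": 60, "run": lambda x: f"parse of {x}"},
}

# A simple router: each task activates only the modules it needs.
# In a real mixture-of-experts system this gating would be learned.
ROUTING = {
    "driving": ["vision", "motor"],
    "chess":   ["vision", "chess"],
}

def process(task, stimulus):
    active = ROUTING[task]
    outputs = [MODULES[name]["run"](stimulus) for name in active]
    spent = sum(MODULES[name]["cost"] for name in active)
    total = sum(m["cost"] for m in MODULES.values())
    return outputs, spent, total

_, spent, total = process("driving", "road scene")
print(spent, "of", total)  # 80 of 220 -- the chess module is simply skipped
```

The pool of modules can grow arbitrarily large while the cost per task stays bounded by the few modules the router selects, which is the point of the “billions of neurons, a fraction active” architecture described above.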

Other brain components

It is often emphasized that the neocortex provides us with our unique abilities and position in the world. It is indeed important. But many other factors influence its operation.

We have subcortical components that filter what is transferred for further processing in the neocortex, index our memories from the partial components stored there, modulate our decisions with neurochemicals, drive our attention, feed our consciousness with salient features, help us locate ourselves in the environment, and monitor and regulate the body’s state.

General Intelligence needs much more than our relatively simple Artificial Neural Networks — mathematically sophisticated (and energy-inefficient), but not very capable compared to their biological counterpart.

If that is not enough: all the components above deal only with neurons.

We have another part of the brain, obscure to most people, that literally controls the neurons!

Long ignored and seen merely as filler, astrocytes, members of the glial cell group, may have a much more important role. Glia (“glue” in Greek) form a parallel communication network that is excited chemically, not electrically.

In 1985 Marian Diamond, a neuroscientist who studied the brain of Albert Einstein, discovered with disappointment that there was nothing special about his neurons. But Einstein had more astrocytes in the left hemisphere than the average person.

Then scientists transplanted human astrocytes into newborn mice and discovered that the mice grew up more intelligent.

Everyone knows that a synaptic connection is made between two neurons, right?

No, it is two neurons and one astrocyte that supports them and their synapse. Astrocytes are the key to synaptic plasticity and neurotransmitter activity. They also provide nutrients, co-creating the blood-brain barrier.

Tripartite synapse

But maybe they are capable of doing much more?

This is material for a different story, or even better — the implementation in the new generation of machine learning algorithms. Do not ignore astrocytes when you think about Artificial Intelligence. They have the potential to revolutionize it.

What gives us the possibility of doing and thinking whatever we like at any given moment?

Parts of our body and brain are massively connected and communicating with each other by electrochemical means. The modular architecture, functional separation and hemisphere dualism extend our mental capacity, reducing the energy requirements at the same time. What is more important at the moment gets more resources.

We have our mind — a kind of software equivalent running on the brain’s hardware. It generalizes the stimuli from the environment for us, turning colorless, odorless and tasteless atoms into something meaningful. It creates our perceived reality and, because of that, has little trouble with object separation.

It also provides a mental simulator that can teleport us into the past or an imagined future, evaluate actions that were done or are planned, and experience the effects once again or simulate them in a test environment to pick a satisfying option or learn from mistakes.

The simulator is so good that it can generate dreams full of places, people and objects that never existed in the real world.

The unprocessed emotions of trauma, or simply of failure, remain with us until we resolve them: find an explanation or a better solution to avoid the pain or increase our chances of success. Then the new knowledge is saved and the emotional burden is reduced. It is simply a built-in learning mechanism spread over time.

We have a planning and scheduling module and the ability to set goals in line with personal values. And biochemistry regulating our nervous system, transmitting and modulating information flow by expanding or limiting the set of processed information.

Our brain is a self-organizing system based on the most proven strategies of the past, which can be customized through interaction with the world and under the influence of the culture in which we grew up.

Do you still think that even the most sophisticated neural network optimized for one task can be transformed into AGI?

We need much more.

In the brain, different networks constantly communicate with each other, modify themselves in real-time with synaptic plasticity and are influenced by many subcortical components and glial cells.

We have almost perfected Narrow AI, but to experience interaction with AGI we will need something completely different.

It will not be easy to do. But we already have the pieces of information necessary to do it, distributed across scientific books and papers that few Machine Learning researchers look at.

On the other hand, neuroscientists, more skilled in understanding the brain, may lack the capacity to translate this information into technology.

Is it possible that we already have the solutions hidden in plain sight?

Even if we build a technology capable of achieving human-like intelligence — do you remember how many years it takes children to reach maturity?

Technology may be closer than we think — if we look in the right direction. But we will have to wait longer for humanoid robots or virtual agents who will be like us. After that, the more difficult part will be done, and we will write articles about the ASI revolution, which will significantly benefit from the raw computing power of next-generation processing units or quantum computers.

What do experts have to say?

Geoff Hinton, the “godfather” of deep learning

“My view is to throw it all away and start again”

“Max Planck said, ‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.”

“I suspect that means getting rid of back-propagation. I don’t think it’s how the brain works. We clearly don’t need all the labeled data.”

Demis Hassabis, CEO of DeepMind

“Deep learning is an amazing technology and hugely useful in itself, but in my opinion, it’s definitely not enough to solve AI, [not] by a long shot”

Maciej Wolski, founder of AGICortex — realistic AGI architecture and PoC technology, a company creating a dedicated AI chip and visual tools for autonomous, explainable AI. Before that, I spent a few years working on AI R&D projects.


Maciej Wolski

Futurist. Technologist. Independent AGI researcher. Neuroscience enthusiast. Extreme learning machine.