References

AGI Paper Reference

1. Models

1.1. Why Neurons have Thousands of Synapses, A Theory of Sequence Memory in Neocortex

1.1.1. Fig 1

1.1.2. Fig 3

1.2. A Neurally-Inspired Hierarchical Prediction Network for Spatiotemporal Sequence Learning and Prediction

1.2.1. Fig 1

1.2.2. Fig 3

1.3. Learning higher-order sequential structure with cloned HMMs

1.3.1. Fig 1. f

1.3.2. Fig 1. g

1.4. A cortical sparse distributed coding model linking mini- and macrocolumn-scale functionality

1.4.1. Summary

1.4.1.1. I found this paper while searching for a sparse coding implementation.

1.4.1.2. The work reports 91% accuracy on MNIST and 67% on the Weizmann event dataset.

1.4.1.3. While these results are weaker than the state of the art, the following aspects were interesting.

1.4.1.4. First, the Sparsey model uses sparse distributed representations (SDRs) rather than dense representations.

1.4.1.5. Second, the Sparsey model does not use any optimization, including gradient backpropagation.

1.4.1.6. It can do one-shot learning, meaning that only one example is required for learning.

1.4.1.7. These two attributes are what I consider meaningful.

1.4.1.8. The only comparable model is HTM by Numenta.

1.4.1.9. However, HTM has no performance evaluation except on synthetic toy datasets.

1.4.1.10. The main algorithm transforms binary inputs into sparse distributed representations.

1.4.1.11. The key insight is that the algorithm uses a familiarity (or novelty) measure to control the randomness of the resulting code.

1.4.1.12. If the input pattern is very similar to a previously seen pattern, it returns an almost identical code, and vice versa.

1.4.1.13. The author calls this property SISC (similar inputs map to similar codes).
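
A minimal sketch of this familiarity-gated code assignment, assuming a code of one winning unit per competitive group; the function and parameter names below are my own illustration, not Sparsey's actual implementation:

    import numpy as np

    rng = np.random.default_rng(0)

    def sisc_encode(x, stored_inputs, stored_codes, n_groups=8, units_per_group=16):
        # x and stored_inputs are binary (0/1) integer NumPy arrays.
        # Familiarity G in [0, 1]: best normalized bit overlap between the
        # input x and any previously stored input.
        if stored_inputs:
            overlaps = [(x & s).sum() / max((x | s).sum(), 1) for s in stored_inputs]
            best = int(np.argmax(overlaps))
            G, base = overlaps[best], stored_codes[best]
        else:
            G, base = 0.0, None
        # One winner per group: a familiar input (G near 1) mostly reuses the
        # stored winners, so similar inputs map to similar codes; a novel
        # input (G near 0) gets a mostly random code.
        code = np.empty(n_groups, dtype=int)
        for g in range(n_groups):
            if base is not None and rng.random() < G:
                code[g] = base[g]
            else:
                code[g] = rng.integers(units_per_group)
        return code, G

With bit overlap as the similarity measure, two nearly identical inputs get G close to 1 and therefore nearly identical codes, which is the SISC property described above.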

1.4.2. Good

1.4.2.1. There were many neuroscience references that will be useful for my future research.

1.4.2.1.1. Cells in a minicolumn possess similar receptive field characteristics, or tuning.

1.4.3. Bad

1.4.3.1. One big question for me: if we define SISC as binary bit overlap, the incoming input already possesses the SISC property. True, the algorithm creates a sparser version of the binary input, but that seems to be all (see the quick check after this list).

1.4.3.2. Is this simplification the core process? In my opinion, prediction is the main task a brain performs. After several levels of hierarchy, the MNIST digits might be separated into constant representations, but that says nothing about how motion should be generated.
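
A quick check of this point, using the same bit-overlap measure as the sketch above (the vectors are made up for illustration):

    import numpy as np

    x = np.array([1, 1, 0, 1, 0, 0, 1, 0])
    y = np.array([1, 1, 0, 0, 0, 0, 1, 0])  # x with one bit flipped
    # 3 shared on-bits / 4 on-bits in the union = 0.75: similar raw inputs
    # already overlap, so the identity mapping is trivially SISC-preserving.
    print((x & y).sum() / (x | y).sum())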

1.4.4. Implication

1.5. Online learning and generalization of parts-based image representations by non-negative sparse autoencoders

1.5.1. Fig 3

1.5.2. Fig 9

1.6. Attention Is All You Need

1.7. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

2. Environment

2.1. Developmental Psychology

2.1.1. Is speech learning 'gated' by the social brain?

2.1.1.1. Patricia Kuhl

2.1.1.2. Fig 1

2.1.1.3. Fig 2

2.2. Developmental Robotics

2.2.1. Autonomous Mental Development by Robots and Animals

2.2.1.1. Juyang Weng

2.2.1.2. Spoon-fed, human-edited sensory data

2.2.1.3. figures

2.2.1.3.1. 1

2.2.1.3.2. 2

2.2.2. Developmental Robotics: From Babies to Robots

2.2.2.1. Angelo Cangelosi and Matthew Schlesinger

2.2.2.2. Maturation

2.2.2.2.1. symptoms

2.2.2.2.2. critical period

2.2.2.2.3. motor development

2.2.2.2.4. Intrinsic motivation

2.2.2.3. figures

2.2.2.3.1. 1

2.2.2.4. Milestones and tests

2.2.2.4.1. Scan pattern

2.2.2.4.2. Visual Expectation Paradigm

2.2.2.4.3. Predictive tracking task

2.2.2.4.4. self-perception

2.2.2.4.5. Visual Stimuli Test

2.2.2.4.6. Face recognition test by Maue

2.2.2.4.7. Visual cliff test

2.2.2.4.8. Two-location apparatus test

2.2.2.4.9. Unity Perception test

2.2.2.4.10. Affordance test

2.3. AGI

2.3.1. Mapping the Landscape of Human-level Artificial General Intelligence

2.3.1.1. Fig 1. Characteristics for AGI Environments, Tasks, and Agents

2.3.1.2. Characterizing Human Cognitive Development

2.3.1.2.1. Gardner's theory of multiple intelligences

2.3.1.2.2. Piaget's theory

2.3.1.2.3. Vygotsky's theory

2.3.1.3. Figure 6. Scenario Milestones on the AGI Landscape

2.3.1.3.1. General Video-Game Learning

2.3.1.3.2. Preschool Learning

2.3.1.3.3. Reading Comprehension

2.3.1.3.4. Story or Scene Comprehension

2.3.1.3.5. School Learning

2.3.1.3.6. The Wozniak Test

2.3.1.4. Review summary

2.3.2. Computing Machinery and Intelligence

2.3.2.1. Imitation game: Turing test

2.3.2.2. Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?

2.3.2.3. Computational evolution

2.3.3. AGI Preschool: A Framework for Evaluating Early-Stage Human-like AGIs

2.4. Robotics

2.4.1. Env from Cynthia Matuszek

2.4.1.1. Fig 1

2.4.1.1.1. Using a game engine, a robot simulator, and a robot OS, humans can immerse themselves in the simulation using virtual reality techniques.

2.4.1.2. Virtual Reality and photogrammetry for Improved Reproducibility of Human-Robot Interaction Studies

2.4.1.3. Learning from Human-Robot Interactions in Modeled Scenes

2.4.1.3.1. Fig

2.4.1.4. Summary

2.4.1.4.1. What is the problem to solve?

2.4.1.4.2. Why is this important?

2.4.1.4.3. What were the previous approaches?

2.4.1.4.4. How is this work different?

2.4.1.4.5. What is good?

2.4.1.4.6. What can be improved?

2.4.1.4.7. Relevance to my research?

2.5. 3D animation

2.5.1. An Immersive System for Browsing and Visualizing Surveillance Video

2.5.1.1. System Figure

2.5.1.2. Calibration interface

2.5.1.3. Summary

2.5.1.3.1. This paper is about creating a 3D immersive browsing system from the multi-camera recordings of the Human Speechome Project.

2.5.1.3.2. The Human Speechome Project is a project by MIT professor Deb Roy, who installed multiple cameras and microphones around his house and recorded 9,000 hours of video of his son from birth to age three.

2.5.1.3.3. He wanted to use this data to study how a human develops intelligence and to apply those lessons to the training of his robot Tracy.

2.5.1.3.4. He found how visual experience and first words are correlated, and he also used the same technique to analyze video of a large mall for analytics.

2.5.1.3.5. Good points

2.5.1.3.6. Bad points

2.5.1.3.7. Implication

2.6. Definition

2.6.1. Universal Intelligence: A Definition of Machine Intelligence

2.6.1.1. authors

2.6.1.1.1. Shane Legg

2.6.1.1.2. Marcus Hutter

2.6.1.2. Intelligence measures an agent's ability to achieve goals in a wide range of environments.

2.6.1.2.1. Expected reward of an intelligence in an environment

2.6.1.2.2. Kolmogorov complexity of a binary string x

2.6.1.2.3. Intelligence
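
The paper's formal definitions, reconstructed here in LaTeX from memory (notation may differ slightly from the published version):

    % Expected total reward of agent \pi in environment \mu,
    % with rewards scaled so that the total is bounded by 1
    V^{\pi}_{\mu} := \mathbb{E}\left( \sum_{i=1}^{\infty} r_i \right) \le 1

    % Kolmogorov complexity of a binary string x: the length \ell(p) of the
    % shortest program p that outputs x on a universal Turing machine U
    K(x) := \min_{p} \{\, \ell(p) : U(p) = x \,\}

    % Universal intelligence of agent \pi: expected reward over the set E of
    % all computable reward environments, weighted by simplicity 2^{-K(\mu)}
    \Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}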