
1. Philosophy
1.1. Ethics
1.1.1. Robot ethics
1.1.1.1. Robot rights
1.1.1.2. Threat to privacy
1.1.1.3. Transparency and open source
1.1.2. Weaponization of artificial intelligence
1.1.3. Machine ethics
1.1.4. Unintended consequences
1.2. Existential Risk
1.2.1. Timeframe
1.2.2. Basic argument
1.2.3. Risk scenarios
1.2.3.1. Poorly specified goals: "Be careful what you wish for"
1.2.3.2. Difficulties of modifying goal specification after launch
1.2.3.3. Instrumental goal convergence: Would a superintelligence just ignore us?
1.2.3.4. Anthropomorphism
1.2.3.5. Other sources of risk
1.2.3.6. Orthogonality: Does intelligence inevitably result in moral wisdom?
1.2.3.7. "Optimization power" vs. normatively thick model of intelligence
1.2.4. Reactions
1.2.4.1. Endorsement
1.2.4.2. Skepticism
1.2.4.3. Indifference
1.2.5. Consensus against regulation
1.3. Turing test
1.3.1. History
1.3.1.1. Philosophical background
1.3.1.2. Alan Turing
1.3.1.3. ELIZA and PARRY
1.3.1.4. The Chinese room
1.3.1.5. 2014 University of Reading competition
1.3.1.6. Loebner Prize
1.3.2. Versions
1.3.2.1. Imitation Game
1.3.2.2. Standard interpretation
1.3.2.3. Imitation Game vs. Standard Turing test
1.3.2.4. Should the interrogator know about the computer?
1.3.3. Strengths
1.3.3.1. Tractability and simplicity
1.3.3.2. Breadth of subject matter
1.3.3.3. Emphasis on emotional and aesthetic intelligence
1.3.4. Weaknesses
1.3.4.1. Human intelligence vs intelligence in general
1.3.4.2. Consciousness vs. the simulation of consciousness
1.3.4.3. Human misidentification
1.3.4.4. Silence
1.3.4.5. Impracticality and irrelevance: the Turing test and AI research
1.3.5. Variations
1.3.5.1. Reverse Turing test and CAPTCHA
1.3.5.2. Subject matter expert Turing test
1.3.5.3. Total Turing test
1.3.5.4. Minimum Intelligent Signal Test
1.3.5.5. Hutter Prize
1.3.5.6. Other tests based on compression or Kolmogorov Complexity
1.3.5.7. Ebert test
1.3.6. Predictions
1.3.7. Conferences
1.3.7.1. Turing Colloquium
1.3.7.2. 2005 Colloquium on Conversational Systems
1.3.7.3. 2008 AISB Symposium
1.3.7.4. The Alan Turing Year, and Turing100 in 2012
1.4. Chinese room
1.4.1. Philosophy
1.4.1.1. Strong AI
1.4.1.2. Strong AI as computationalism or functionalism
1.4.1.3. Strong AI vs. biological naturalism
1.4.1.4. Consciousness
1.4.1.5. Applied Ethics
1.4.2. Complete argument
1.4.3. Computer science
1.4.3.1. Strong AI vs. AI research
1.4.3.2. Turing test
1.4.3.3. Symbol processing
1.4.3.4. Chinese room and Turing completeness
1.4.4. Replies
1.4.4.1. Systems and virtual mind replies: finding the mind
1.4.4.2. Robot and semantics replies: finding the meaning
1.4.4.3. Brain simulation and connectionist replies: redesigning the room
1.4.4.4. Speed and complexity: appeals to intuition
1.4.4.5. Other minds and zombies: meaninglessness
1.5. Friendly AI
1.5.1. Risks of unfriendly AI
1.5.2. Coherent extrapolated volition
1.5.3. Other approaches
1.5.4. Public policy
1.5.5. Criticism
2. Technology
2.1. Applications
2.1.1. AI for Good
2.1.2. Finance
2.1.2.1. Algorithmic Trading
2.1.2.2. Market Analysis and Data Mining
2.1.2.3. Personal Finance
2.1.2.4. Portfolio Management
2.1.2.5. Underwriting
2.1.3. Aviation
2.1.4. Computer science
2.1.5. Education
2.1.6. Heavy industry
2.1.7. Hospitals and medicine
2.1.8. Human Resources & Recruiting
2.1.9. Marketing
2.1.10. Music
2.1.11. News, publishing, and writing
2.1.12. Online and telephone customer service
2.1.13. Telecommunications maintenance
2.1.14. Toys and games
2.1.15. Transportation
2.2. Projects
2.2.1. Specialized projects
2.2.1.1. Brain-inspired
2.2.1.2. Cognitive architecture
2.2.1.3. Games
2.2.1.4. Knowledge and reasoning
2.2.1.5. Motion and manipulation
2.2.1.6. Music
2.2.1.7. Natural language processing
2.2.2. Multipurpose projects
2.2.2.1. Software libraries
2.2.2.2. GUI frameworks
2.2.2.3. Cloud services
2.2.2.4. Machine Learning as a Service
2.2.3. Partnership on AI
2.2.3.1. Amazon
2.2.3.2. Google
2.2.3.3. Facebook
2.2.3.4. IBM
2.2.3.5. Microsoft
2.2.3.6. Tesla
2.3. Programming languages
2.3.1. Wolfram Language
2.3.2. C++
2.3.3. MATLAB
2.3.4. Perl
2.3.5. Python
2.3.6. Haskell
2.3.7. POP-11
2.3.8. Planner
2.3.9. STRIPS
2.3.10. Prolog
2.3.11. Smalltalk
2.3.12. Lisp
2.3.13. AIML
2.3.14. IPL
2.4. Other
2.4.1. Homeland security
2.4.2. Speech and text recognition
2.4.3. Data mining
2.4.4. E-mail spam filtering
2.4.5. Gesture recognition
2.4.6. Individual voice recognition
2.4.7. Facial expression recognition
2.4.8. Object recognition
2.4.9. Robot navigation
2.4.10. Obstacle avoidance
3. Major Goals
3.1. Knowledge representation and reasoning
3.1.1. Semantic nets
3.1.2. Systems architecture
3.1.3. Frames and rules
3.1.4. Ontologies
3.1.5. Automated reasoning engines
3.1.5.1. Inference engines
3.1.5.2. Theorem provers
3.1.6. Domain-independent planning
3.2. Planning
3.2.1. Planning Domain Modelling Languages
3.2.2. Algorithms for planning
3.2.2.1. Classical planning
3.2.2.2. Reduction to other problems
3.2.2.3. Temporal planning
3.2.2.4. Probabilistic planning
3.2.2.5. Preference-based planning
3.2.3. Deployment of planning systems
3.3. Machine Learning
3.3.1. Decision tree learning
3.3.2. Association rule learning
3.3.3. Artificial neural networks
3.3.4. Deep learning
3.3.5. Inductive logic programming
3.3.6. Support vector machines
3.3.7. Clustering
3.3.8. Bayesian networks
3.3.9. Reinforcement learning
3.3.10. Representation learning
3.3.11. Similarity and metric learning
3.3.12. Sparse dictionary learning
3.3.13. Genetic algorithms
3.3.14. Rule-based machine learning
3.4. Natural language processing
3.4.1. Syntax
3.4.2. Semantics
3.4.3. Discourse
3.4.4. Speech
3.5. Computer vision
3.5.1. Recognition
3.5.2. Motion analysis
3.5.3. Scene reconstruction
3.5.4. Image restoration
3.5.5. System methods
3.5.6. Image-understanding systems
3.6. Robotics
3.6.1. Components
3.6.1.1. Power source
3.6.1.2. Actuation
3.6.1.3. Sensing
3.6.1.4. Manipulation
3.6.1.5. Locomotion
3.6.1.6. Environmental interaction and navigation
3.6.1.7. Human-robot interaction
3.6.2. Control
3.6.2.1. Autonomy levels
3.6.3. Research
3.6.3.1. Dynamics and kinematics
3.6.3.2. Bionics and biomimetics
3.6.4. Education and training
3.7. Artificial general intelligence
3.7.1. Strong AI
3.7.2. Full AI
4. Approaches
4.1. Symbolic
4.1.1. Expert systems
4.1.1.1. Production rules
4.1.2. Fuzzy logic
4.1.2.1. Artificial neural network
4.2. Deep learning
4.2.1. Automatic speech recognition
4.2.2. Image recognition
4.2.3. Visual art processing
4.2.4. Natural language processing
4.2.5. Drug discovery and toxicology
4.2.6. Customer relationship management
4.2.7. Recommendation systems
4.2.8. Bioinformatics
4.2.9. Mobile advertising
4.3. Recurrent neural networks
4.3.1. Fully recurrent
4.3.2. Recursive
4.3.3. Hopfield
4.3.4. Elman networks and Jordan networks
4.3.5. Echo state
4.3.6. Neural history compressor
4.3.7. Long short-term memory
4.3.8. Gated recurrent unit
4.3.9. Bi-directional
4.3.10. Continuous-time
4.3.11. Hierarchical
4.3.12. Recurrent multilayer perceptron
4.3.13. Multiple timescales model
4.3.14. Neural Turing machines
4.3.15. Differentiable neural computer
4.3.16. Neural network pushdown automata
4.4. Bayesian networks
4.4.1. Inference and learning
4.4.1.1. Inferring unobserved variables
4.4.1.2. Parameter learning
4.4.1.3. Structure learning
4.4.2. Statistical introduction
4.4.2.1. Introductory examples
4.4.2.2. Restrictions on priors
4.4.3. Definitions and concepts
4.4.3.1. Factorization definition
4.4.3.2. Local Markov property
4.4.3.3. Developing Bayesian networks
4.4.3.4. Markov blanket
4.4.3.5. Hierarchical models
4.4.3.6. Causal networks
4.4.4. Inference complexity and approximation algorithms
4.4.5. Applications
4.4.5.1. Software
4.4.5.1.1. WinBUGS
4.4.5.1.2. OpenBUGS
4.4.5.1.3. JAGS
4.4.5.1.4. Stan
4.5. Evolutionary algorithms
4.5.1. Genetic algorithm
4.5.2. Genetic programming
4.5.3. Evolutionary programming
4.5.4. Gene expression programming
4.5.5. Evolution strategy
4.5.6. Differential evolution
4.5.7. Neuroevolution
4.5.8. Learning classifier system