How do we represent what we know?


1. syntax: how you write it down

2. interpretation: what it means in this context

3. semantics: what symbols mean

4. Using formal models of logic: the Mind

5. Knowledge Representation Languages

6. Using the type of numerical maths we use to describe the physical world: e.g., the Brain

7. the current state of the world - "facts" (inputs)

8. how the world works (model) - "rules" applying to things - metaknowledge/ontologies applying to groups of things or relationships

9. how these can be manipulated (what form of logic)

10. Expressive (can say what we need to)

11. Effective (can infer what we need to)

12. Explicit (can justify inferences)

13. In order to do Inference: Create new facts (output)

14. Depends on the type of logic allowed

15. Propositional logic: BIVALENT (everything is T/F) - connectives: Implies, IsEquivalentTo, AND, OR, NOT - Sound, Complete, Decidable, but Not Expressive
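A minimal sketch of why propositional logic is decidable: with finitely many symbols, entailment can be checked by enumerating every truth assignment. The function and formula names below are illustrative, not from the source:

```python
from itertools import product

def entails(kb, query, symbols):
    """Check KB |= query by enumerating every truth assignment.
    Propositional logic is decidable, so this brute-force check
    always terminates (2**n models for n symbols)."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False  # a model of KB in which the query is false
    return True

# KB: (P implies Q) AND P ; query: Q  -- i.e. modus ponens
kb = lambda m: (not m["P"] or m["Q"]) and m["P"]
query = lambda m: m["Q"]
```

Soundness and completeness of this check come directly from inspecting every model, but the cost is exponential in the number of symbols.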

16. First order logic: adds variables and existential/universal quantifiers - Sound, Expressive, but Not Decidable

17. Knowledge Engineer:

18. - work out what the client wants/needs - work out how to represent it (choose the best KRL) - create the rule/fact base and (try to) verify it is complete, accurate, and will terminate - discuss with the client

19. Fuzzy Logic: MULTIVALENT (things can belong to more than one class)

20. 2. Calculate inferences for each fuzzy rule that applies

21. Probabilistic reasoning e.g. Bayesian networks

22. To build models: 1. Use graphs/dependency arcs to indicate conditional probabilities - creates the structure of the model 2. Measure frequencies of events - provides the parameters of the model

23. To use models: combine probabilities to make predictions: - independent events => multiply - dependent events => apply Bayes' Rule
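The two cases above can be illustrated numerically; the disease-test probabilities below are made-up numbers for the example:

```python
def bayes(prior, likelihood, evidence):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# Independent events => multiply: two fair coins both landing heads
p_both_heads = 0.5 * 0.5  # 0.25

# Dependent events => Bayes' rule (hypothetical disease/test numbers):
p_disease = 0.01             # prior P(disease)
p_pos_given_disease = 0.9    # test sensitivity
p_pos_given_healthy = 0.05   # false-positive rate
# Total probability of a positive test (the evidence term)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
p_disease_given_pos = bayes(p_disease, p_pos_given_disease, p_pos)
```

With these numbers the posterior is only about 0.15 despite the 90% sensitivity, because the prior is low: a standard illustration of why dependent events need Bayes' rule rather than simple multiplication.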

24. Artificial Neural Networks

25. Logical calculus: - inputs plus a bias input (fixed at 1) - weights on links in {-1, 0, +1} - output 1 if the weighted sum of inputs > 0

26. Link to KRL: - a single node can do AND/OR/NOT - a suitable combination of units can compute any logical function - but the units have to be hand-designed
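A sketch of such hand-designed units, assuming the threshold model above (bias input of 1, link weights in {-1, 0, +1}, output 1 when the weighted sum exceeds 0); note that XOR needs a combination of units:

```python
def unit(inputs, weights, bias_weight):
    """Threshold logic unit: output 1 if the weighted sum,
    including a bias input fixed at 1, is greater than 0."""
    total = bias_weight + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Hand-designed weights (as the notes say, these are not learned):
AND = lambda a, b: unit([a, b], [1, 1], -1)  # fires only when both inputs are 1
OR  = lambda a, b: unit([a, b], [1, 1], 0)   # fires when either input is 1
NOT = lambda a:    unit([a], [-1], 1)        # inverts its single input

# Any logical function by suitable combination of units, e.g. XOR:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))
```

Each weight here is in {-1, 0, +1}, matching the restricted weights in the notes.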

27. Perceptron

28. simple update rule: change in weight on link = error * signal * learning rate

29. - will learn any linearly separable problem - can't learn non-separable problems, e.g. XOR
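The update rule can be sketched on AND, a linearly separable problem; the learning rate and epoch count below are arbitrary illustrative choices:

```python
def train_perceptron(data, lr=0.1, epochs=20):
    """Perceptron rule: change in weight = error * input signal * learning rate."""
    n = len(data[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            out = 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err  # the bias acts as a weight on a constant input of 1
    return w, b

def predict(w, b, x):
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# AND is linearly separable, so the perceptron converges on it.
AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_DATA)
```

Running the same loop on XOR data would cycle forever without converging, since no single line separates the two XOR classes.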

30. Multi-Layer perceptron

31. 1. Signals feed forward to make predictions using the sigmoid function 2. During training, errors are propagated backwards: - compare the output to the desired output and apply the perceptron update rule - send errors backwards in proportion to the signal through the links
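A minimal sketch of the two phases, assuming a single hidden layer, sigmoid units, and squared-error updates; the class name, layer sizes, and learning rate are illustrative (biases are omitted to keep the sketch short):

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyMLP:
    """One hidden layer. Errors are sent backwards in proportion to the
    signal (weight) on each connecting link, as in the notes."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]

    def forward(self, x):
        # Phase 1: signals feed forward through sigmoid units
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in self.w1]
        self.out = sigmoid(sum(w * hi for w, hi in zip(self.w2, self.h)))
        return self.out

    def backward(self, x, target, lr=0.5):
        # Phase 2: compare output to desired output...
        d_out = (target - self.out) * self.out * (1 - self.out)
        # ...and send the error backwards in proportion to each link's weight
        d_h = [d_out * w * h * (1 - h) for w, h in zip(self.w2, self.h)]
        self.w2 = [w + lr * d_out * h for w, h in zip(self.w2, self.h)]
        self.w1 = [[w + lr * dh * xi for w, xi in zip(ws, x)]
                   for ws, dh in zip(self.w1, d_h)]
```

One forward/backward pass on a single example reduces the error on that example; repeated over a training set, this is backpropagation.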

32. Machine Learning Engineer

33. Unsupervised e.g. clustering

34. Supervised learning: model building

35. - training (used to guide the search process) - validation (used to avoid overfitting) - test (used to estimate accuracy)

36. ML algorithms are: 1. A way of representing decision boundaries, e.g. rules, sets of examples 2. A search algorithm for moving/improving/learning them, e.g. hill climbing, EAs...
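This representation-plus-search split can be sketched with the simplest possible pair: a one-dimensional decision boundary (a single threshold) improved by hill climbing. The data set and step size are hypothetical:

```python
def accuracy(threshold, data):
    """The representation: a 1-D decision boundary, i.e. one threshold.
    Predict class 1 when x > threshold."""
    return sum((x > threshold) == label for x, label in data) / len(data)

def hill_climb(data, start=0.0, step=0.1, iters=50):
    """The search algorithm: steepest-ascent hill climbing on accuracy."""
    t = start
    for _ in range(iters):
        best_n = max([t - step, t + step], key=lambda n: accuracy(n, data))
        if accuracy(best_n, data) > accuracy(t, data):
            t = best_n          # move the boundary to the better neighbour
        else:
            break               # local optimum: no neighbour improves
    return t

# Hypothetical 1-D data: class 1 exactly when x > 0.5
DATA = [(0.05, 0), (0.15, 0), (0.25, 0), (0.35, 0), (0.45, 0),
        (0.55, 1), (0.65, 1), (0.75, 1), (0.85, 1), (0.95, 1)]
best = hill_climb(DATA)
```

Swapping in a richer representation (rule sets, neural network weights) or a different search method (evolutionary algorithms, gradient descent) gives other ML algorithms with the same two-part structure.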

37. do we know what the outputs should be?

38. In order to learn from experience and generalise to new inputs

39. Expert systems

40. Semantic Web, e.g., RDF, XML (self-describing data), AIML - separate facts from details about how languages work

41. 1. Work out membership of the different fuzzy sets (via membership functions)

42. 3. Combine outputs to get a single result
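The three fuzzy inference steps (work out memberships, fire each applicable rule, combine outputs into a single result) can be sketched as a toy fan-speed controller; the membership function shapes, temperature breakpoints, and rules are all hypothetical:

```python
def mu_cold(t):
    """Membership in 'cold': fully cold at <= 10 C, not cold at >= 25 C."""
    return max(0.0, min(1.0, (25 - t) / 15))

def mu_hot(t):
    """Membership in 'hot': not hot at <= 15 C, fully hot at >= 30 C."""
    return max(0.0, min(1.0, (t - 15) / 15))

def fan_speed(t):
    # Step 1: work out membership of the input in each fuzzy set
    cold, hot = mu_cold(t), mu_hot(t)
    # Step 2: each fuzzy rule fires to the degree its antecedent holds
    #   IF hot THEN speed = 1.0 ; IF cold THEN speed = 0.0  (hypothetical rules)
    firing = [(hot, 1.0), (cold, 0.0)]
    # Step 3: combine the rule outputs into a single crisp result
    total = sum(strength for strength, _ in firing)
    return sum(s * out for s, out in firing) / total if total else 0.5
```

At 20 C the input belongs to both "cold" and "hot" to a nonzero degree, illustrating the multivalent point above: membership in more than one class at once.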