
## Revision as of 18:30, 3 November 2020

# Introduction to Artificial Intelligence 1-AIN-304

## Contents

The course objective is to give students a basic insight into artificial intelligence, which can be further extended in the master's programmes. The course covers the basics of symbolic and nature-inspired methods of artificial intelligence. The theory is combined with practical exercises.

For exercises, project assignments, and lecture slides, please see LIST.

## Course schedule

Type | Day | Time | Room | Lecturer |
---|---|---|---|---|
Lecture | Tuesday | 14:50 | M-IX | Mária Markošová, Ľubica Beňušková |
Exercises | Tuesday | 18:10 | I-H3 | Štefan Pócoš |

## Lecture syllabus

Date | Topic | References |
---|---|---|
22.09. | What is artificial intelligence; properties and types of agents. Uninformed search: state space, uninformed search algorithms, DFS, BFS. | R&N, ch. 2-3.4 |
29.09. | Informed search, the A* algorithm, heuristics and their properties. | R&N, ch. 3.5-3.6 |
06.10. | Local search, looking for an optimum: hill climbing, genetic algorithms, simulated annealing, etc. | R&N, ch. 4.1 |
13.10. | Constraint satisfaction problems: definition, heuristics, solution methods. | R&N, ch. 6 |
20.10. | Basics of game theory: the MiniMax algorithm, Alpha-Beta pruning, ExpectiMiniMax. | R&N, ch. 5 |
27.10. | Logical agents: inference in a logical knowledge base. | R&N, ch. 7 |
03.11. | Logical agents: inference in a logical knowledge base (continued). | R&N, ch. 7 |
10.11. | Supervised learning: linear and non-linear regression, the binary perceptron. | R&N, ch. 18.1-18.2, 18.6-18.6.3 |
24.11. | The multi-layer perceptron and the idea of error backpropagation. | R&N, ch. 18.6.4-18.7.5 |
01.12. | Applications of the multi-layer perceptron: sonar, NetTalk, ALVINN, LeNet. | |
08.12. | Unsupervised learning: K-means clustering, KNN, self-organizing maps, principal component analysis. | R&N, ch. 18.8-18.8.2 |
15.12. | Quo vadis AI? Problems and visions of future AI methods. | R&N, ch. 26 |

## References

- Russell S., Norvig P., *Artificial Intelligence: A Modern Approach* (3rd ed.), Prentice Hall, 2010 (a.k.a. *AIMA*). Ask the lecturers for the password. Also available in the faculty library.
- Návrat P., Bieliková M., Beňušková Ľ. et al., *Umelá inteligencia* (3rd edition), Vydavateľstvo STU, 2015. (ANN chapter)

## Course grading

The course grading consists of four parts:

- Exercises (30%)
- Project (20%)
- Final exam (50%)

Throughout the semester, you can earn up to 30% for the exercises and 20% for the project, and you must earn at least half of the points in each of these parts. If you do not meet these minimal semester conditions, you cannot sit the final exam. The final exam is worth the remaining 50% of the total mark.

**Overall grading:** A (100-91), B (90-81), C (80-71), D (70-61), E (60-51), Fx (50-0).
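For illustration, the grading rules above can be expressed as a short function. This is an unofficial sketch, not part of the course materials; the function name and the pass/fail message are invented for the example, and the thresholds follow the percentages stated above.

```python
def overall_grade(exercises: float, project: float, exam: float) -> str:
    """Map component scores to a course grade (illustrative sketch).

    Components per the grading rules above: exercises out of 30,
    project out of 20, final exam out of 50.
    """
    # Minimal semester conditions: at least half the points
    # from both the exercises and the project.
    if exercises < 15 or project < 10:
        return "Fx (minimal semester conditions not met)"
    total = exercises + project + exam
    # Overall grading bands: A (100-91) ... E (60-51), Fx (50-0).
    for threshold, grade in [(91, "A"), (81, "B"), (71, "C"),
                             (61, "D"), (51, "E")]:
        if total >= threshold:
            return grade
    return "Fx"

print(overall_grade(25, 18, 45))  # 88 points in total, i.e. a B
```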