# Introduction to Artificial Intelligence 1-AIN-304

The course objectives are to provide the students with basic insight into artificial intelligence, that can further be extended in the master programmes. The course covers the basics of symbolic and nature-inspired methods of artificial intelligence. The theory is combined with practical exercises.

For exercises, project assignments, and lecture slides, please see LIST.

## News

26.9. - Change of lecture room

Beginning from next week, the Wednesday lectures of Introduction to AI will take place in room i-9, not i-23.

28.9. - Exercise 1 tournament

After gathering all your submissions for exercise 1, I decided to organize a tournament. All players with working code qualified. Every student played against every other student; each pairing consisted of 50 matches (30 × TicTacToe, 20 × Gomoku). Students were ranked by [avg score over the 30 TicTacToe games] + [avg score over the 20 Gomoku games].

The winners are:

1. Filip Kerák - 87% win rate
2. Tatiana Gyurcsovicsová - 79% win rate
3. Dáša Keszeghová - 67% win rate
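The ranking rule above can be sketched in a few lines of Python. This is an illustrative sketch only: the player names, score encoding (1 = win, 0.5 = draw, 0 = loss) and the `rank_players` helper are my own assumptions, not the actual tournament code or data.

```python
def rank_players(results):
    """Rank players by [avg TicTacToe score] + [avg Gomoku score].

    results: {player: (tictactoe_scores, gomoku_scores)}, where each
    entry is a list of per-match scores (hypothetical encoding:
    1 = win, 0.5 = draw, 0 = loss).
    """
    def total(scores):
        ttt, gom = scores
        # average over the 30 TicTacToe games + average over the 20 Gomoku games
        return sum(ttt) / len(ttt) + sum(gom) / len(gom)

    return sorted(results, key=lambda p: total(results[p]), reverse=True)

# Invented example scores, just to show the ranking:
results = {
    "Alice": ([1] * 20 + [0] * 10, [1] * 15 + [0] * 5),   # 0.667 + 0.75
    "Bob":   ([1] * 10 + [0] * 20, [1] * 5 + [0] * 15),   # 0.333 + 0.25
}
print(rank_players(results))  # Alice ranks above Bob
```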

## Course schedule

| Type | Day | Time | Room | Lecturer |
|---|---|---|---|---|
| Lecture | Wednesday | 14:50 | I-9 | Mária Markošová, Ľubica Beňušková |
| Exercises | Thursday | 14:50 | I-H3 | Juraj Holas |

## Lecture syllabus

| Date | Topic | References |
|---|---|---|
| 25.09. | What is artificial intelligence, properties and types of agents. Uninformed search: state space, uninformed search algorithms, DFS, BFS. | R&N, ch. 2-3.4 |
| 02.10. | Informed search, A* algorithm, heuristics and their properties. | R&N, ch. 3.5-3.6 |
| 09.10. | Local search, looking for an optimum, hill climbing, genetic algorithm, simulated annealing, etc. | R&N, ch. 4.1 |
| 16.10. | Constraint satisfaction problem: definition, heuristics, methods of solving. | R&N, ch. 6 |
| 23.10. | Basics of game theory, MiniMax algorithm, Alpha-Beta pruning, ExpectiMiniMax. | R&N, ch. 5 |
| 06.11. | Logical agents: inference in a logical knowledge base. | R&N, ch. 7 |
| 13.11. | Supervised learning: linear and non-linear regression, binary perceptron. | R&N, ch. 18.1-18.2, 18.6-18.6.3 |
| 20.11. | Multi-layer perceptron, idea of error backpropagation. | R&N, ch. 18.6.4-18.7.5 |
| 27.11. | Applications of multi-layer perceptron: sonar, NetTalk, ALVINN, LeNet. | |
| 04.12. | Unsupervised learning: K-means clustering, KNN, self-organizing map, principal component analysis. | R&N, ch. 18.8-18.8.2 |
| 11.12. | Weight optimization of MLP using genetic algorithms, evolutionary robotics. | |
| 18.12. | Quo vadis AI? Problems and visions of future AI methods. | R&N, ch. 26 |

## Grading

The course grading consists of four parts:

• Exercises (25%)
• Short tests (10%)
• Project (15%)
• Final exam (50%)

Throughout the semester, you can earn 25% for exercises, 10% for tests, and 15% for the project. You have to earn at least half of the points in each of these categories; if you do not meet these minimal conditions during the semester, you cannot pass the exam. The final exam is worth the remaining 50% of the total mark.

Overall grading: A (100-91), B (90-81), C (80-71), D (70-61), E (60-51), Fx (50-0).
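The grading rules can be summarized in a short sketch. The component weights, the half-minimum rule, and the grade boundaries come from the text above; the function name and the choice to report a failed semester as "Fx" are my own assumptions.

```python
def final_grade(exercises, tests, project, exam):
    """Compute the overall grade.

    Arguments are points earned out of 25 (exercises), 10 (tests),
    15 (project) and 50 (final exam).
    """
    maxima = {"exercises": 25, "tests": 10, "project": 15}
    earned = {"exercises": exercises, "tests": tests, "project": project}
    # At least half of each semester component is required to pass
    # (assumption: a failed semester is reported as "Fx" here).
    if any(earned[k] < maxima[k] / 2 for k in maxima):
        return "Fx"
    total = exercises + tests + project + exam
    for grade, lower_bound in [("A", 91), ("B", 81), ("C", 71), ("D", 61), ("E", 51)]:
        if total >= lower_bound:
            return grade
    return "Fx"

print(final_grade(20, 8, 12, 45))  # 85 points total -> "B"
```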