
## Revision as of 13:21, 14 September 2020

# Introduction to Artificial Intelligence 1-AIN-304

## Contents

The course objectives are to provide the students with basic insight into artificial intelligence, that can further be extended in the master programmes. The course covers the basics of symbolic and nature-inspired methods of artificial intelligence. The theory is combined with practical exercises.

For exercises, project assignments, and lecture slides, please see LIST.

## Course schedule

| Type | Day | Time | Room | Lecturer |
|---|---|---|---|---|
| Lecture | Tuesday | 14:50 | M-IX | Mária Markošová, Ľubica Beňušková |
| Exercises | Thursday | 18:10 | I-H3 | Štefan Pócoš |

## Lecture syllabus

| Date | Topic | References |
|---|---|---|
| 22.09. | What is artificial intelligence, properties and types of agents. Uninformed search: state space, uninformed search algorithms, DFS, BFS. | R&N, ch. 2-3.4 |
| 29.09. | Informed search, A* algorithm, heuristics and their properties. | R&N, ch. 3.5-3.6 |
| 06.10. | Local search, looking for an optimum: hill climbing, genetic algorithms, simulated annealing, etc. | R&N, ch. 4.1 |
| 13.10. | Constraint satisfaction problem: definition, heuristics, methods of solving. | R&N, ch. 6 |
| 20.10. | Basics of game theory, MiniMax algorithm, Alpha-Beta pruning, ExpectiMiniMax. | R&N, ch. 5 |
| 27.10. | Logical agents: inference in a logical knowledge base. | R&N, ch. 7 |
| 03.11. | Supervised learning: linear and non-linear regression, binary perceptron. | R&N, ch. 18.1-18.2, 18.6-18.6.3 |
| 10.11. | Multi-layer perceptron, idea of error backpropagation. | R&N, ch. 18.6.4-18.7.5 |
| 24.11. | Applications of the multi-layer perceptron: sonar, NetTalk, ALVINN, LeNet. | |
| 01.12. | Unsupervised learning: K-means clustering, KNN, self-organizing map, principal component analysis. | R&N, ch. 18.8-18.8.2 |
| 08.12. | Weight optimization of MLP using genetic algorithms, evolutionary robotics. | |
| 15.12. | Quo vadis AI? Problems and visions of future AI methods. | R&N, ch. 26 |

## References

- Russell S., Norvig P., *Artificial Intelligence: A Modern Approach* (3rd ed.), Prentice Hall, 2010 (a.k.a. *AIMA*). Ask the lecturers for the password. Also available in the faculty library.
- Návrat P., Bieliková M., Beňušková Ľ. et al., *Umelá inteligencia* (3rd ed.), Vydavateľstvo STU, 2015. (ANN chapter)

## Course grading

The course grading consists of three parts:

- Exercises (30%)
- Project (20%)
- Final exam (50%)

Throughout the semester, you can earn 30% for the exercises and 20% for the project. You must earn at least half of the points from each of these two parts; if you do not meet these minimal conditions during the semester, you cannot pass the exam. The final exam is worth the remaining 50% of the total mark.

**Overall grading:** A (100-91), B (90-81), C (80-71), D (70-61), E (60-51), Fx (50-0).
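The grading rules above can be sketched as a small function. This is a hypothetical helper for illustration only, not part of the course materials; the function name and signature are assumptions.

```python
def final_grade(exercises: float, project: float, exam: float) -> str:
    """Map component scores to the overall grade.

    Expected maxima (in percentage points of the total mark):
    exercises <= 30, project <= 20, exam <= 50.
    """
    # Minimal conditions: at least half of the points from the
    # exercises (15/30) and from the project (10/20).
    if exercises < 15 or project < 10:
        return "Fx"  # semester conditions not met, cannot pass

    total = exercises + project + exam
    # Overall grading bands: A (100-91) ... E (60-51), Fx (50-0).
    for grade, lower in [("A", 91), ("B", 81), ("C", 71), ("D", 61), ("E", 51)]:
        if total >= lower:
            return grade
    return "Fx"
```

For example, 25% from exercises, 15% from the project, and 45% from the exam gives a total of 85, i.e. a B.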