Introduction to Computational Intelligence 2-IKVa-115/18

The course objectives are to familiarize students with the basic principles of various computational methods of data processing commonly referred to as computational intelligence (CI). CI mainly comprises bottom-up approaches to solving (hard) problems based on various heuristics (soft computing), rather than the exact, logic-based approaches of traditional artificial intelligence (hard computing). Examples of CI are nature-inspired methods (artificial neural networks, evolutionary algorithms, fuzzy systems), as well as probabilistic methods and reinforcement learning. After completing the course, students will be able to conceptually understand the important terms and algorithms of CI and to choose appropriate method(s) for a given task. The theoretical lectures are combined with a seminar where the important concepts are discussed and practical examples are shown.

Course schedule

Type Day Time Room Lecturer
Lecture Monday 10:40 - 12:10 I-9 / in room Igor Farkaš
Seminar Thursday 13:10 - 14:40 I-23 / in room Kristína Malinovská


Week Date Topic References
1. Mon 26.09. What is computational intelligence, basic concepts, relation to artificial intelligence. slides Seminar on Thursday. Craenen & Eiben (2003); Wikipedia; R&N (2010), ch.1
2. Mon 03.10. Taxonomy of artificial agents, nature of environments. slides Seminar on Thursday. R&N (2010), ch.2
3. Mon 10.10. Inductive learning via observations, decision trees. Model selection. slides Seminar on Thursday. R&N (2010), ch.18.1-3,18.6; Marsland (2015), ch.12
4. Mon 17.10. Supervised learning in feedforward neural networks (perceptrons), pattern classification, regression. slides Seminar on Thursday. R&N (2010), ch.18.2; Marsland (2015), ch.3-4, Engelbrecht (2007), ch.2-3
5. Mon 24.10. Unsupervised (self-organizing) neural networks: feature extraction, data visualization. slides Seminar on Thursday. Marsland (2015), ch.14, Engelbrecht (2007), ch.4
6. Mon 31.10. No lecture (holiday). Thursday - Lecture: Probability theory, Bayes formula, naive Bayes classifier. slides R&N (2010), ch.13, 20.1-2
7. Mon 07.11. Seminar to lecture 6. Thursday - Lecture 7: Probabilistic learning: MAP, ML.
8. Mon 14.11. Seminar to lecture 7 + Q&As before the mid-term. Tuesday: mid-term test.
9. Mon 21.11. Reinforcement learning I: basic principles and learning methods (TD-learning). Prediction problem. slides R&N (2010), ch.21.1-2.
10. Mon 28.11. Reinforcement learning II (Q, SARSA), actor-critic, control problem, RL for continuous domains. R&N (2010), ch.21.3-5.
11. Mon 05.12. Evolutionary computation: basic concepts, genetic algorithms. slides Engelbrecht (2007), ch.8
12. Mon 12.12. Fuzzy systems, fuzzy logic and reasoning. slides Engelbrecht (2007), ch.20-21; Zadeh (2007)

Note: Dates refer to lectures; seminars take place three days later (on Thursday) each week.


Course grading

  • Active participation during the lectures/exercises (25%): 15% for lectures, 10% for exercises. Minimum 1/3 of these points required.
  • Homework (10%): weekly homework is given and discussed at the exercises, usually solved by hand or in Excel sheets (no programming necessary).
  • Written mid-term test (30%), covering topics of the first half of the semester.
  • Final written-oral exam (30%): We will discuss 3 randomly chosen questions (selected by a computer) that correspond to the weekly topics of the semester. Minimum of 1/3 of all points required.
  • Small final project (10%): implementation of a small neural network (using an existing Python library) and a short written report. Note: even without the project, a student can still earn the maximum number of points through very active participation. Deadline: TBA
  • Overall grading: A (>90%), B (>80%), C (>70%), D (>60%), E (>50%), Fx (otherwise).
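As a rough illustration, the overall grading scale above can be expressed as a simple threshold mapping. This is only a sketch: the strict-inequality boundary handling (e.g. exactly 90% giving a B) follows the ">" signs in the list literally and is an assumption about how borderline scores are treated.

```python
def final_grade(percent: float) -> str:
    """Map an overall course percentage to the letter grade,
    following the thresholds A (>90%) ... E (>50%), Fx otherwise."""
    thresholds = [("A", 90), ("B", 80), ("C", 70), ("D", 60), ("E", 50)]
    for grade, cutoff in thresholds:
        if percent > cutoff:
            return grade
    return "Fx"

# Example: 91% -> "A", 75% -> "C", 45% -> "Fx"
```

Note that the listed components (25 + 10 + 30 + 30 + 10) sum to 105%, which is why the maximum can be reached even without the final project.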