Introduction to Computational Intelligence 2-IKVa-115/18

The course aims to familiarize students with the basic principles of various computational methods of data processing that are commonly called computational intelligence (CI). These are mainly bottom-up approaches to solving (hard) problems based on various heuristics (soft computing), rather than the exact, logic-based approaches of traditional artificial intelligence (hard computing). Examples of CI are nature-inspired methods (artificial neural networks, evolutionary algorithms, fuzzy systems), as well as probabilistic methods and reinforcement learning. After completing the course, students will be able to understand the important concepts and algorithms of CI and to choose appropriate method(s) for a given task. The lectures are combined with a seminar where the important concepts are discussed and practical examples are shown.

Course schedule

Type Day Time Room Lecturer
Lecture Monday 9:00 - 10:30 I-9 / in room Igor Farkaš
Seminar Thursday 9:00 - 10:30 I-9 / in room Kristína Malinovská

Syllabus

Week Date Topic References
1. 18.09. What is computational intelligence, basic concepts, relation to artificial intelligence. slides Craenen & Eiben (2003); Wikipedia; R&N (2010), ch.1
2. 25.09. Properties of environments, taxonomy of artificial agents. slides R&N (2010), chap.2
3. 02.10. Inductive learning via observations, decision trees. Model selection. slides R&N (2010), ch.18.1-3,18.6; Marsland (2015), ch.12
4. 09.10. Supervised learning in feedforward neural networks (perceptrons), pattern classification, regression. slides R&N (2010), ch.18.2; Marsland (2015), ch.3-4, Engelbrecht (2007), ch.2-3
5. 16.10. Unsupervised (self-organizing) neural networks: feature extraction, data visualization. slides Marsland (2015), ch.14, Engelbrecht (2007), ch.4
6. 23.10. Probability theory. Bayes formula. Naive Bayes classifier. slides R&N (2010), ch.13,20.1-2
7. 30.10. Probabilistic learning: Maximum A Posteriori learning, Maximum Likelihood learning. R&N (2010), ch.13,20.1-2
8. 06.11. Q&A session before the mid-term. Thu: mid-term test
9. 13.11. Reinforcement learning I: basic principles and learning methods (TD-learning). Prediction problem. slides R&N (2010), ch.21.1-2.
10. 20.11. Reinforcement learning II (Q, SARSA), actor-critic, control problem, RL for continuous domains. R&N (2010), ch.21.3-5.
11. 27.11. Fuzzy systems, fuzzy logic and reasoning. slides Engelbrecht (2007), ch.20-21; Zadeh (2007)
12. 04.12. Evolutionary computation: basic concepts. slides Engelbrecht (2007), ch.8

Note: Dates refer to lectures; seminars take place three days later (on Thursday) each week.

References

Course grading

  • Active participation during the lectures/exercises (25%): 15% for lectures, 10% for exercises. At least one third of these points is required.
  • Homework (10%): weekly homework assignments given and discussed at the exercises, usually solved by hand or in Excel sheets (no programming necessary).
  • Written mid-term test (30%), covering topics of the first half of the semester.
  • Final written-oral exam (30%): we will discuss three questions, chosen randomly (by a computer), which roughly correspond to the weekly topics of the semester. At least one third of all points is required.
  • Small final project (10%): implementation of a small neural network (using an existing Python library) and a short report; a minimal sketch of the expected scale is shown below. Note: even without the project, a student who has participated very actively can still reach the maximum number of points. Deadline: TBA
  • Overall grading: A (>90%), B (>80%), C (>70%), D (>60%), E (>50%), Fx (otherwise).
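For illustration, the sketch below shows the approximate scale expected for the final project. It assumes the scikit-learn library and its bundled digits dataset; these are assumptions made only for this example, and any existing Python neural-network library and any suitable data may be used instead.

    # Minimal sketch of a small neural network project
    # (assumption: scikit-learn as the chosen library, digits as the dataset).
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Load a small built-in classification dataset (8x8 images of handwritten digits).
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # One hidden layer with 32 units, trained with the library's default settings.
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    net.fit(X_train, y_train)

    # Test accuracy -- the kind of result a short report would summarize.
    print("Test accuracy:", net.score(X_test, y_test))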