
Introduction to Computational Intelligence 2-IKVa-115/18

The course aims to familiarize students with the basic principles of various computational methods of data processing that are commonly referred to as computational intelligence (CI). These are mainly bottom-up, heuristic approaches to solving (hard) problems (soft computing), as opposed to the exact, logic-based approaches of traditional artificial intelligence (hard computing). Examples of CI include nature-inspired methods (artificial neural networks, evolutionary algorithms, fuzzy systems), as well as probabilistic methods and reinforcement learning. After the course, students will be able to understand the important concepts and algorithms of CI and to choose appropriate method(s) for a given task. The theoretical lectures are combined with a seminar where the important concepts are discussed and practical examples are shown.


Course schedule

Type    | Day      | Time          | Room          | Lecturer
Lecture | Monday   | 9:50 - 11:30  | I-9 (in room) | Igor Farkaš
Seminar | Thursday | 13:10 - 14:40 | I-9 (in room) | Kristína Malinovská

Syllabus

#   | Date       | Topic | References
1.  | Mon 26.09. | What is computational intelligence, basic concepts, relation to artificial intelligence. [slides: http://dai.fmph.uniba.sk/courses/ICI/References/ci-def.4x.pdf] | Craenen & Eiben (2003); Wikipedia (https://en.wikipedia.org/wiki/Computational_intelligence); R&N (2010), chap. 1
2.  | Mon 03.10. | Taxonomy of artificial agents, nature of environments. [slides: http://dai.fmph.uniba.sk/courses/ICI/References/ci-agents.4x.pdf] | R&N (2010), chap. 2
3.  | Mon 10.10. | Inductive learning via observations, decision trees. Model selection. [slides: http://dai.fmph.uniba.sk/courses/ICI/References/ci-learning.4x.pdf] | R&N (2010), ch. 18.1-3, 18.6; Marsland (2015), ch. 12
4.  | Mon 17.10. | Supervised learning in feedforward neural networks (perceptrons), pattern classification, regression. | R&N (2010), ch. 18.2; Marsland (2015), ch. 3-4; Engelbrecht (2007), ch. 2-3
5.  | Mon 24.10. | Unsupervised (self-organizing) neural networks: feature extraction, data visualization. | Marsland (2015), ch. 14; Engelbrecht (2007), ch. 4
x   | Mon 31.10. | No lecture (public holiday); a class (Q&A) is held on Thursday instead. |
6.  | Mon 07.11. | Probability theory, Bayes formula, naive Bayes classifier. | R&N (2010), ch. 13, 20.1-2
7.  | Mon 14.11. | Probabilistic learning: maximum a posteriori (MAP), maximum likelihood (ML). | Thursday: mid-term test
8.  | Mon 21.11. | Reinforcement learning I: basic principles and learning methods (TD-learning). Prediction problem. | R&N (2010), ch. 21.1-2
9.  | Mon 28.11. | Reinforcement learning II (Q-learning, SARSA), actor-critic, control problem, RL for continuous domains. | R&N (2010), ch. 21.3-5
10. | Mon 05.12. | Evolutionary computation: basic concepts, genetic algorithms. | Engelbrecht (2007), ch. 8
11. | Mon 12.12. | Fuzzy systems, fuzzy logic and reasoning. | Engelbrecht (2007), ch. 20-21; Zadeh (2007)

Note: Dates refer to lectures; seminars take place three days later (on Thursday) each week.

References

Course grading

  • Active participation during the lectures/exercises (25%): 15% for lectures, 10% for exercises. A minimum of 1/3 of the points is required.
  • Homework (10%): weekly assignments given and discussed at the exercises, usually solved by hand or in Excel sheets (no programming necessary).
  • Written mid-term test (30%), covering topics from the first half of the semester.
  • Final written-oral exam (30%): we will discuss three questions, chosen at random by a computer, that correspond to the weekly topics of the semester. A minimum of 1/3 of all points is required.
  • Small final project (10%): implementation of a small neural network (using an existing Python library) and a short written report; a minimal sketch of what this might look like follows this list. Note: even without the project, a student who has participated very actively can still earn the maximum number of points. Deadline: TBA
  • Overall grading: A (>90%), B (>80%), C (>70%), D (>60%), E (>50%), Fx (otherwise).
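
For orientation, here is a minimal sketch of the kind of small neural network the project asks for, built with an existing Python library (scikit-learn here; the library, dataset, and hyperparameters are illustrative assumptions, not course requirements):

    # Minimal sketch: a small feedforward neural network (MLP) trained on the
    # Iris dataset with scikit-learn. All concrete choices below (dataset,
    # one hidden layer of 10 units, split ratio) are illustrative assumptions.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Load a small, well-known classification dataset.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    # A "small" network: one hidden layer with 10 units.
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=42)
    clf.fit(X_train, y_train)
    print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")

Any comparable library (e.g., PyTorch or Keras) should serve equally well; the point is presumably a small, well-understood model plus a short report on its behavior.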