Introduction to Computational Intelligence 2-IKVa-115/18

The course aims to familiarize students with the basic principles of various computational methods of data processing that are commonly referred to as computational intelligence (CI). CI mainly comprises bottom-up approaches to solving (hard) problems based on various heuristics (soft computing), rather than the exact, logic-based approaches of traditional artificial intelligence (hard computing). Examples of CI include nature-inspired methods (artificial neural networks, evolutionary algorithms, fuzzy systems), as well as probabilistic methods and reinforcement learning. After completing the course, students will conceptually understand the important terms and algorithms of CI and will be able to choose appropriate method(s) for a given task. The theoretical lectures are combined with a seminar in which the important concepts are discussed and practical examples are shown.

== Course schedule ==

{|
|-
!Type
!Day
!Time
!Room
!Lecturer
|-
|Lecture
|Monday
|9:00 - 10:30
|I-9 / in room
|Igor Farkaš
|-
|Seminar
|Thursday
|9:00 - 10:30
|I-9 / in room
|[[Kristina Malinovska|Kristína Malinovská]]
|}

== Syllabus ==

{|
|-
!Week
!Date
!Topic
!References
|-
|1.
|18.09.
|What is computational intelligence, basic concepts, relation to artificial intelligence. [http://dai.fmph.uniba.sk/courses/ICI/References/ci-def.pdf slides]
|Craenen & Eiben (2003); [https://en.wikipedia.org/wiki/Computational_intelligence Wikipedia]; R&N (2010), chap. 1
|-
|2.
|25.09.
|Properties of environments, taxonomy of artificial agents. [http://dai.fmph.uniba.sk/courses/ICI/References/ci-agents.pdf slides]
|R&N (2010), chap. 2
|-
|3.
|02.10.
|Inductive learning via observations, decision trees. Model selection. [http://dai.fmph.uniba.sk/courses/ICI/References/ci-learning.pdf slides]
|R&N (2010), ch. 18.1-3, 18.6; Marsland (2015), ch. 12
|-
|4.
|09.10.
|Supervised learning in feedforward neural networks (perceptrons), pattern classification, regression. [http://dai.fmph.uniba.sk/courses/ICI/References/ci-fwdnn.pdf slides]
|R&N (2010), ch. 18.2; Marsland (2015), ch. 3-4; Engelbrecht (2007), ch. 2-3
|-
|5.
|16.10.
|Unsupervised (self-organizing) neural networks: feature extraction, data visualization. [http://dai.fmph.uniba.sk/courses/ICI/References/ci-unsup.pdf slides]
|Marsland (2015), ch. 14; Engelbrecht (2007), ch. 4
|-
|6.
|23.10.
|Probability theory. Bayes formula. Naive Bayes classifier. [http://dai.fmph.uniba.sk/courses/ICI/References/ci-prob.pdf slides]
|R&N (2010), ch. 13, 20.1-2
|-
|7.
|30.10.
|Probabilistic learning: Maximum A Posteriori learning, Maximum Likelihood.
|R&N (2010), ch. 13, 20.1-2
|-
|8.
|06.11.
|Q&As before mid-term. Thu: mid-term test.
|
|-
|9.
|13.11.
|Reinforcement learning I: basic principles and learning methods (TD-learning). Prediction problem. [http://dai.fmph.uniba.sk/courses/ICI/References/ci-rl.pdf slides]
|R&N (2010), ch. 21.1-2
|-
|10.
|20.11.
|Reinforcement learning II (Q, SARSA), actor-critic, control problem, RL for continuous domains.
|R&N (2010), ch. 21.3-5
|-
|11.
|27.11.
|Fuzzy systems, fuzzy logic and reasoning. [http://dai.fmph.uniba.sk/courses/ICI/References/ci-fuzzy.pdf slides]
|Engelbrecht (2007), ch. 20-21; Zadeh (2007)
|-
|12.
|04.12.
|Evolutionary computation: basic concepts. [http://dai.fmph.uniba.sk/courses/ICI/References/ci-evol.pdf slides]
|Engelbrecht (2007), ch. 8
|}

Note: Dates refer to lectures; seminars take place three days later each week.

== References ==

== Course grading ==

* Active participation during the lectures/exercises (25%): 15% for lectures, 10% for exercises. A minimum of 1/3 of the points is required.
* Homework (10%): weekly homework assigned and discussed at the exercises, usually solved by hand or in Excel sheets (no programming necessary).
* Written mid-term test (30%), covering the topics of the first half of the semester.
* Final written-oral exam (30%): we will discuss three questions, randomly chosen by a computer, that correspond to the weekly topics of the semester. A minimum of 1/3 of all points is required.
* Small final project (10%): implementation of a small neural network (using an existing Python library) and a short written report; a minimal sketch of what such an implementation might look like is shown below. Note: even without the project, a student who has participated very actively can still obtain the maximum number of points. Deadline: TBA
* Overall grading: A (>90%), B (>80%), C (>70%), D (>60%), E (>50%), Fx (otherwise).
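
For orientation only, here is a minimal sketch of the kind of program the final project item above refers to, assuming scikit-learn as the existing Python library and the Iris toy dataset; the actual task, library, and data are specified by the lecturers.

<syntaxhighlight lang="python">
# Illustrative sketch only: trains a small feedforward neural network
# (multi-layer perceptron) on a toy dataset using scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Load a small benchmark dataset and split it into training and test parts.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Standardize the inputs; neural networks train more reliably on scaled data.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# A small network: one hidden layer with 10 units, trained by backpropagation.
model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=42)
model.fit(X_train, y_train)

# Report test accuracy; the short report would summarize results like this one.
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
</syntaxhighlight>

Any comparable library (e.g. Keras or PyTorch) could be used instead; the point of the project is to train a small network end to end and briefly document the results in the report.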