Neural Networks 2-AIN-132

Course information sheet

__TOC__

The aim of the course is to provide key insights into the basic concepts and algorithms of learning artificial neural networks and their use in solving various problems. The syllabus is organized to provide an overview of important milestones, combining older models with newer ones. Theoretical lectures are combined with practical modeling in Python exercises; a small illustrative sketch follows the syllabus table below.
 
== News ==

Partial changes were made to the lectures and exercises last year: some older parts were shortened and newer topics added. The syllabus has been updated, as has the evaluation of course activities.

== Schedule ==

{| class="alternative table-responsive"
!Type
!Day
!Time
!Location
!Teacher
|-
|Lecture
|Wednesday
|9:50 - 11:20
|M-IV
|[[Igor Farkas|Igor Farkaš]]
|-
|[https://moodle.uniba.sk/course/view.php?id=2883 Exercise]
|Thursday
|18:10 - 19:40
|H3
|[[Iveta Beckova|Iveta Bečková]], [[Stefan Pocos|Štefan Pócoš]]
|}
  
== Syllabus ==

{| class="alternative table-responsive"
!No.
!Date
!Topic
!References
|-
|01
|21.2.
|Conditions for passing the course. Introduction, inspiration from neurobiology, brief history of NN, basic concepts. NN with logical neurons. <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.intro.L01.4x.pdf slides-L01]-->
|[U1/1][U3/1][U5/1]
|-
|02
|28.2.
|Binary and continuous perceptron: supervised learning, error functions, binary classification and regression, linear separability. Relation to the Bayesian classifier. <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.perceptron.L02.4x.pdf slides-L02]-->
|[U1/1-3]
|-
|03
|06.3.
|Supervised single-layer NN: linear autoassociation, generalized inverse model. Classification into n classes. Error functions, relation to information theory. <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.single-layer-models.L03.4x.pdf slides-L03]-->
|[U4/3][U5/4]
|-
|04
|13.3.
|Multilayer perceptron: error back-propagation algorithm. Training, validation, testing. Model selection. Bias-variance tradeoff. <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.mlp.L04.4x.pdf slides-L04]-->
|[U1/4][U4/4]
|-
|05
|20.3.
|Modifications of gradient methods, second-order optimization, regularization. Optimization problems. <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.optimization.L05.4x.pdf slides-L05]-->
|[U1/15][U4/11]
|-
|06
|27.3.
|Unsupervised learning, feature extraction, neural PCA model. Data visualization: self-organizing map (SOM) model. <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.unsup.L06.4x.pdf slides-L06]-->
|[U1/8-9][U5/7]
|-
|07
|03.4.
|Sequential data modeling: feedforward NN, relation to n-grams, partially and fully recurrent models, SRN model, BPTT and RTRL algorithms. <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.seq-models.L07.4x.pdf slides-L07]-->
|[U4/8][U5/6]
|-
|08
|10.4.
|Hopfield model: deterministic dynamics, attractors, autoassociative memory, sketch of the stochastic model, modern versions. <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.hopfield-aam.L11.4x.pdf slides-L11]-->
|[U1/13][U5/9]
|-
|09
|17.4.
|Expansion of hidden representation: NN with radial basis functions (RBF), echo state network (ESN). <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.rbf-esn.L08.4x.pdf slides-L08]-->
|[U1/5][U2]
|-
|10
|24.4.
|Deep learning. Convolutional neural networks: introduction. <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.deep-convol.L09.4x.pdf slides-L09]-->
|[U3/6,9][U4/6]
|-
|11
|01.5.
|More recent models: autoencoders, gated recurrent models, transformers. <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.autoenc-gated.L10.4x.pdf slides-L10]-->
|[U3/14][U4/9.1-2]
|-
|12
|08.5.
|Stochastic recurrent models: basics of probability theory and statistical mechanics, Boltzmann machine, RBM model, Deep Belief Network. <!--[http://dai.fmph.uniba.sk/courses/NN/Lectures/nn.stochastic.L12.4x.pdf slides-L12]-->
|[U1/11][U3/16]
|}
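To give a flavour of the practical Python modeling done in the exercises, here is a minimal sketch of the continuous (logistic) perceptron from lecture 02, trained by gradient descent on toy data. It assumes NumPy; the data, names and hyperparameters are illustrative, not course material.

<syntaxhighlight lang="python">
# Minimal continuous-perceptron sketch (illustrative, not course material).
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: two Gaussian blobs in 2D.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
d = np.hstack([np.zeros(50), np.ones(50)])   # desired outputs

w = rng.normal(0, 0.1, 2)   # weights
b = 0.0                     # bias
eta = 0.1                   # learning rate

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

for epoch in range(100):
    y = sigmoid(X @ w + b)                # forward pass
    grad_w = X.T @ (y - d) / len(d)       # cross-entropy gradient w.r.t. weights
    grad_b = np.mean(y - d)               # ... and w.r.t. the bias
    w -= eta * grad_w
    b -= eta * grad_b

print("training accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == d))
</syntaxhighlight>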
  
 
== References ==

* Farkaš I. (2016). [http://dai.fmph.uniba.sk/courses/NN/neural-networks.2016.pdf Neural networks]. Knižničné a edičné centrum FMFI UK v Bratislave. Lecture slides (not updated).
* Goodfellow I., Bengio Y., Courville A. (2016). [http://www.deeplearningbook.org Deep Learning]. MIT Press. [U3]
* Haykin S. (2009). Neural Networks and Learning Machines (3rd ed.). Upper Saddle River, Pearson Education (available for study in the FMFI library, also downloadable from the web). [U1]
* Jaeger H. (2007). [http://www.scholarpedia.org/article/Echo_state_network Echo-state network]. Scholarpedia, 2(9):2330. [U2]
* Kvasnička V., Beňušková Ľ., Pospíchal J., Farkaš I., Tiňo P. and Kráľ A. (1997). [http://dai.fmph.uniba.sk/courses/NN/UvodDoTeorieNS.pdf.zip Úvod do teórie neurónových sietí]. Iris: Bratislava. [U5]
* Zhang A. et al. (2020). [https://d2l.ai/ Dive into Deep Learning]. An interactive deep learning book with code, math, and discussions, based on the NumPy interface. [U4]
  
 
== Conditions and grading ==

* Submission of <b>at least two functioning projects</b> (out of three) during the semester (max. 3x10 = 30 points), obtaining at least 15 points in total. The deadlines will be announced on this page. The projects also offer bonuses (max. 4 points).
* Students may be asked to <b>present their code</b> during the exercise as part of the submission.
* Each exercise (starting from the third one) will include <b>a 5-minute test</b> worth 3 points (10x3 = 30 points in total).
* <b>Active participation during the exercises</b> can earn the student up to 10 points. It is mandatory to obtain at least 20 points from the exercises in total (out of max. 40).
* Passing <b>the final written-oral exam</b> (3 questions, pseudorandom choice, 3x10 = 30 points in total). Only students with at least two graded, functioning projects can register for the exam. The exam is compulsory, and at least 12 points are required to pass it.
* <b>Lectures are voluntary</b>, but active participation will be rewarded with up to 5 bonus points.
* <b>Overall grading:</b> A (100-91), B (90-81), C (80-71), D (70-61), E (60-51), Fx (50-0). A worked example is sketched below.
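The components above add up to 100 points: 30 (projects) + 30 (tests) + 10 (participation) + 30 (exam), plus bonuses. A minimal sketch of the computation, with hypothetical scores:

<syntaxhighlight lang="python">
# Worked example of the grading scheme above; all scores are made up.
projects, tests, participation, exam = 24, 21, 8, 22   # out of 30, 30, 10, 30

# Passing requirements stated above.
assert projects >= 15                  # at least 15 project points
assert tests + participation >= 20     # at least 20 exercise points
assert exam >= 12                      # at least 12 exam points

total = projects + tests + participation + exam   # 75 here

# Boundaries: A (100-91), B (90-81), C (80-71), D (70-61), E (60-51), Fx (50-0).
for grade, lowest in [("A", 91), ("B", 81), ("C", 71), ("D", 61), ("E", 51), ("Fx", 0)]:
    if total >= lowest:
        break
print(total, grade)   # -> 75 C
</syntaxhighlight>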
  
== Projects during the semester ==

* The project, together with the source code, is to be submitted by the given deadline. Late submissions are penalized by 2 points per day; it is not possible to submit a project more than a week after the deadline (see the sketch below).
* The projects are graded mainly on content, but form matters too (readability, layout). The content should be comprehensible, i.e. graphical outputs combined with accompanying text.
* The model is to be implemented in Python, and the project must be submitted as a PDF (no title page is required; the title and your name are enough).
* In case of plagiarism, the student automatically receives zero points for the project and will not be admitted to the exam.
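The late-submission rule can be read as follows (a hypothetical helper, assuming a 7-day cut-off for "more than a week"):

<syntaxhighlight lang="python">
# Hypothetical reading of the rule above: 2 points off per day late,
# no submission accepted more than 7 days after the deadline.
def project_score(raw_points: int, days_late: int) -> int | None:
    if days_late > 7:
        return None   # submission no longer possible
    return max(0, raw_points - 2 * days_late)

print(project_score(10, 0))   # 10   (on time)
print(project_score(10, 3))   # 4    (three days late)
print(project_score(10, 8))   # None (too late)
</syntaxhighlight>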
