{"id":332,"date":"2025-09-20T20:41:15","date_gmt":"2025-09-20T18:41:15","guid":{"rendered":"https:\/\/mlatilikzsolt.hu\/?p=332"},"modified":"2025-09-20T21:00:18","modified_gmt":"2025-09-20T19:00:18","slug":"intro-to-neural-networks_part4","status":"publish","type":"post","link":"https:\/\/mlatilikzsolt.hu\/en\/2025\/09\/20\/intro-to-neural-networks_part4\/","title":{"rendered":"Introduction to the World of Neural Networks Part 4"},"content":{"rendered":"<h2 class=\"wp-block-heading\" id=\"0-mi%C3%A9rt-haszn%C3%A1ljunk-numpy-t\">Why Use NumPy?<\/h2>\n\n\n\n<p>In the previous articles, we built an artificial neuron and a simple layer using pure Python code. The logic was not complicated: weighted sum, add bias, and optionally apply an activation function.<br>But as networks grow larger \u2014 with multiple layers and hundreds or thousands of neurons \u2014 pure Python solutions become:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>slow,<\/li>\n\n\n\n<li>hard to manage,<\/li>\n\n\n\n<li>and prone to errors.<\/li>\n<\/ul>\n\n\n\n<p>This is why we use the NumPy library, which is:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>very fast (its core is written in C),<\/li>\n\n\n\n<li>reliable (thoroughly tested),<\/li>\n\n\n\n<li>and makes vector and matrix operations easy.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"1-vektorok-t%C3%B6mb%C3%B6k-m%C3%A1trixok-%C3%A9s-tenzorok\">Vectors, Arrays, Matrices and Tensors<\/h2>\n\n\n\n<p>Before we look at how NumPy is used through specific examples, it is important to clarify a few concepts.<\/p>\n\n\n\n<p>Let's start with the simplest Python data store, the list. A Python list contains comma-separated values enclosed in square brackets. 
In the previous sections, we used lists to store data in our pure Python solutions.<\/p>\n\n\n\n<p>Example of a list:<\/p>\n\n\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro cbp-has-line-numbers\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;--cbp-line-number-color:#000000;--cbp-line-number-width:calc(1 * 0.6 * .875rem);line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)\"><span role=\"button\" tabindex=\"0\" style=\"color:#000000;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><pre class=\"code-block-pro-copy-button-pre\" aria-hidden=\"true\"><textarea class=\"code-block-pro-copy-button-textarea\" tabindex=\"-1\" aria-hidden=\"true\" readonly>list = &#91;1, 5, 6, 2&#93;<\/textarea><\/pre><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" fill=\"none\" viewbox=\"0 0 24 24\" stroke=\"currentColor\" stroke-width=\"2\"><path class=\"with-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path class=\"without-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki light-plus\" style=\"background-color: #FFFFFF\" tabindex=\"0\"><code><span class=\"line\"><span style=\"color: #267F99\">list<\/span><span style=\"color: #000000\"> = &#91;<\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">5<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">6<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: 
#098658\">2<\/span><span style=\"color: #000000\">&#93;<\/span><\/span><\/code><\/pre><\/div>\n\n\n\n<p>List of lists:<\/p>\n\n\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro cbp-has-line-numbers\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;--cbp-line-number-color:#000000;--cbp-line-number-width:calc(1 * 0.6 * .875rem);line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)\"><span role=\"button\" tabindex=\"0\" style=\"color:#000000;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><pre class=\"code-block-pro-copy-button-pre\" aria-hidden=\"true\"><textarea class=\"code-block-pro-copy-button-textarea\" tabindex=\"-1\" aria-hidden=\"true\" readonly>list_of_lists = [&#91;1, 5, 6, 2&#93;,\n\t\t\t\t         &#91;3, 2, 1, 3&#93;]<\/textarea><\/pre><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" fill=\"none\" viewbox=\"0 0 24 24\" stroke=\"currentColor\" stroke-width=\"2\"><path class=\"with-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path class=\"without-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki light-plus\" style=\"background-color: #FFFFFF\" tabindex=\"0\"><code><span class=\"line\"><span style=\"color: #000000\">list_of_lists = [&#91;<\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">5<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">6<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: 
#098658\">2<\/span><span style=\"color: #000000\">&#93;,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">\t\t\t\t         &#91;<\/span><span style=\"color: #098658\">3<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">3<\/span><span style=\"color: #000000\">&#93;]<\/span><\/span><\/code><\/pre><\/div>\n\n\n\n<p>List of lists of lists:<\/p>\n\n\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro cbp-has-line-numbers\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;--cbp-line-number-color:#000000;--cbp-line-number-width:calc(1 * 0.6 * .875rem);line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)\"><span role=\"button\" tabindex=\"0\" style=\"color:#000000;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><pre class=\"code-block-pro-copy-button-pre\" aria-hidden=\"true\"><textarea class=\"code-block-pro-copy-button-textarea\" tabindex=\"-1\" aria-hidden=\"true\" readonly>list_of_lists_of_lists = [[&#91;1, 5, 6, 2&#93;,\n                           &#91;3, 2, 1, 3&#93;],\n                          [&#91;5, 2, 1, 2&#93;,\n                           &#91;6, 4, 8, 4&#93;],\n                          [&#91;2, 8, 5, 3&#93;,\n                           &#91;1, 1, 9, 4&#93;]]<\/textarea><\/pre><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" fill=\"none\" viewbox=\"0 0 24 24\" stroke=\"currentColor\" stroke-width=\"2\"><path class=\"with-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path 
class=\"without-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki light-plus\" style=\"background-color: #FFFFFF\" tabindex=\"0\"><code><span class=\"line\"><span style=\"color: #000000\">list_of_lists_of_lists = [[&#91;<\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">5<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">6<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">&#93;,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">                           &#91;<\/span><span style=\"color: #098658\">3<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">3<\/span><span style=\"color: #000000\">&#93;],<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">                          [&#91;<\/span><span style=\"color: #098658\">5<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">&#93;,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">                           &#91;<\/span><span style=\"color: #098658\">6<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">4<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">8<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">4<\/span><span 
style=\"color: #000000\">&#93;],<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">                          [&#91;<\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">8<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">5<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">3<\/span><span style=\"color: #000000\">&#93;,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">                           &#91;<\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">9<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">4<\/span><span style=\"color: #000000\">&#93;]]<\/span><\/span><\/code><\/pre><\/div>\n\n\n\n<p>All of the above examples can also be called arrays. However, not all lists can be arrays.<\/p>\n\n\n\n<p>For example:<\/p>\n\n\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro cbp-has-line-numbers\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;--cbp-line-number-color:#000000;--cbp-line-number-width:calc(1 * 0.6 * .875rem);line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)\"><span role=\"button\" tabindex=\"0\" style=\"color:#000000;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><pre class=\"code-block-pro-copy-button-pre\" aria-hidden=\"true\"><textarea class=\"code-block-pro-copy-button-textarea\" tabindex=\"-1\" aria-hidden=\"true\" readonly> [&#91;1, 2, 3&#93;,\n\u00a0 \u00a0&#91;4, 5&#93;,\n\u00a0 \u00a0&#91;6, 7, 8, 9&#93;]<\/textarea><\/pre><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" fill=\"none\" viewbox=\"0 0 24 
24\" stroke=\"currentColor\" stroke-width=\"2\"><path class=\"with-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path class=\"without-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki light-plus\" style=\"background-color: #FFFFFF\" tabindex=\"0\"><code><span class=\"line\"><span style=\"color: #000000\"> [&#91;<\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">3<\/span><span style=\"color: #000000\">&#93;,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">\u00a0 \u00a0&#91;<\/span><span style=\"color: #098658\">4<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">5<\/span><span style=\"color: #000000\">&#93;,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">\u00a0 \u00a0&#91;<\/span><span style=\"color: #098658\">6<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">7<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">8<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">9<\/span><span style=\"color: #000000\">&#93;]<\/span><\/span><\/code><\/pre><\/div>\n\n\n\n<p>This list cannot be an array because it is not \"homologous\". A \"list of lists\" is homologous if each row contains exactly the same amount of data and this is true for all dimensions. The example above is not homologous because the first list has 3 elements, the second has 2, and the third has 4. 
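<\/p>\n\n\n\n<p>A quick way to test whether a nested list is homologous is to hand it to NumPy itself. The sketch below (variable names are illustrative) shows that the homologous example becomes a regular 2-dimensional array, while recent NumPy versions refuse to build a regular array from the ragged one:<\/p>\n\n\n\n

```python
import numpy as np

# Homologous: every row has 4 elements, so NumPy builds a 2-D array
matrix = np.array([[1, 5, 6, 2],
                   [3, 2, 1, 3]])
print(matrix.shape)  # (2, 4) -> 2 rows, 4 columns
print(matrix.ndim)   # 2 -> a matrix is a 2-dimensional array

# Not homologous: rows of length 3, 2 and 4 cannot form a regular array,
# so recent NumPy versions raise a ValueError here
try:
    np.array([[1, 2, 3], [4, 5], [6, 7, 8, 9]])
except ValueError:
    print("not homologous")
```

\n\n\n\n<p>The <code>shape<\/code> attribute tells us how many elements each dimension holds, which is a convenient way to verify that data is homologous. 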
<\/p>\n\n\n\n<p>The definition of a matrix is simple: it is a two-dimensional array. It has rows and columns. So a matrix can be an array. Can every array be a matrix? No. An array can be much more than rows and columns: it can have 3, 5, or even 20 dimensions.<\/p>\n\n\n\n<p>Finally, what is a tensor? The exact definition of tensors and arrays has been debated by experts across hundreds of pages, largely because the participants approach the topic from completely different fields. If we approach the concept of a tensor from the perspective of deep learning and neural networks, then perhaps the most accurate description is: \"A tensor object is an object that can be represented as an array.\"<\/p>\n\n\n\n<p>In summary: a linear, or 1-dimensional, array is the simplest array, and in Python a list corresponds to it. Arrays can also hold multidimensional data, the most well-known example being the matrix, a 2-dimensional array.<\/p>\n\n\n\n<p>One more concept that is important to clarify is the vector. Simply put, a vector as used in mathematics is the same as a Python list, i.e. a 1-dimensional array.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" style=\"line-height:1.7\">Two Key Operations: Dot Product and Vector Addition<\/h3>\n\n\n\n<p>When performing the dot product operation, we multiply two vectors: we take the elements with the same index, multiply them pairwise, and add up these products. Mathematically, it looks like this:<\/p>\n\n\n\n<div class=\"wp-block-katex-display-block katex-eq\" data-katex-display=\"true\"><pre>\\vec{a}\\cdot\\vec{b} = \\sum_{i=1}^n a_ib_i = a_1\\cdot b_1+a_2\\cdot b_2+\\dots+a_n\\cdot b_n<\/pre><\/div>\n\n\n\n<p>It is important that both vectors have the same size. 
If we wanted to describe the same thing in Python code, it would look like this:<\/p>\n\n\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro cbp-has-line-numbers\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;--cbp-line-number-color:#000000;--cbp-line-number-width:calc(2 * 0.6 * .875rem);line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)\"><span role=\"button\" tabindex=\"0\" style=\"color:#000000;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><pre class=\"code-block-pro-copy-button-pre\" aria-hidden=\"true\"><textarea class=\"code-block-pro-copy-button-textarea\" tabindex=\"-1\" aria-hidden=\"true\" readonly># First vector\na = &#91;1, 2, 3&#93;\n\n# Second vector\nb = &#91;2, 3, 4&#93;\n\n# Dot product calculation\ndot_product = a&#91;0&#93;*b&#91;0&#93; + a&#91;1&#93;*b&#91;1&#93; + a&#91;2&#93;*b&#91;2&#93;\n\nprint(dot_product)\n\n>>> 20<\/textarea><\/pre><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" fill=\"none\" viewbox=\"0 0 24 24\" stroke=\"currentColor\" stroke-width=\"2\"><path class=\"with-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path class=\"without-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki light-plus\" style=\"background-color: #FFFFFF\" tabindex=\"0\"><code><span class=\"line\"><span style=\"color: #008000\"># First vector<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">a = &#91;<\/span><span style=\"color: #098658\">1<\/span><span 
style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">3<\/span><span style=\"color: #000000\">&#93;<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #008000\"># Second vector<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">b = &#91;<\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">3<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">4<\/span><span style=\"color: #000000\">&#93;<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #008000\"># Dot product calculation<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">dot_product = a&#91;<\/span><span style=\"color: #098658\">0<\/span><span style=\"color: #000000\">&#93;*b&#91;<\/span><span style=\"color: #098658\">0<\/span><span style=\"color: #000000\">&#93; + a&#91;<\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">&#93;*b&#91;<\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">&#93; + a&#91;<\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">&#93;*b&#91;<\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">&#93;<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #795E26\">print<\/span><span style=\"color: #000000\">(dot_product)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #000000\">&gt;&gt;&gt; <\/span><span style=\"color: #098658\">20<\/span><\/span><\/code><\/pre><\/div>\n\n\n\n<p>You can see that we have performed the same operation as when calculating the output value of a neuron, only here we have not added the bias. 
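<\/p>\n\n\n\n<p>As a preview of what NumPy gives us, the same multiply-and-sum is a single call:<\/p>\n\n\n\n

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([2, 3, 4])

# np.dot multiplies the elements with the same index and sums the products
print(np.dot(a, b))  # 20
```

\n\n\n\n<p>NumPy returns the same value, 20, without indexing each element by hand. 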
Since the Python language does not contain any instructions or functions for calculating the dot product by default, we use the NumPy library.<\/p>\n\n\n\n<p>When adding vectors, we add the elements of each vector with the same index. Mathematically, it looks like this:<\/p>\n\n\n\n<div class=\"wp-block-katex-display-block katex-eq\" data-katex-display=\"true\"><pre>\\vec{a}+\\vec{b} = [a_1+b_1, a_2+b_2,\\dots,a_n+b_n]<\/pre><\/div>\n\n\n\n<p>Here again, it is important that the vectors have the same size. The result will be a vector of the same size. NumPy handles this operation easily.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Using NumPy<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">A neuron<\/h3>\n\n\n\n<p>Let\u2019s now implement a neuron using NumPy.<\/p>\n\n\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro cbp-has-line-numbers\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;--cbp-line-number-color:#000000;--cbp-line-number-width:calc(2 * 0.6 * .875rem);line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)\"><span role=\"button\" tabindex=\"0\" style=\"color:#000000;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><pre class=\"code-block-pro-copy-button-pre\" aria-hidden=\"true\"><textarea class=\"code-block-pro-copy-button-textarea\" tabindex=\"-1\" aria-hidden=\"true\" readonly>import numpy as np\n\n# Inputs and weights\ninputs = np.array(&#91;0.5, 0.8, 0.3, 0.1&#93;)\n\nweights = np.array(&#91;0.2, 0.7, -0.5, 0.9&#93;)\n\nbias = 0.5\n\n# Neuron output (dot product + bias)\noutput = np.dot(inputs, weights) + bias\n\nprint(\"Neuron output:\", output)<\/textarea><\/pre><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" fill=\"none\" viewbox=\"0 0 24 24\" stroke=\"currentColor\" stroke-width=\"2\"><path class=\"with-check\" 
stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path class=\"without-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki light-plus\" style=\"background-color: #FFFFFF\" tabindex=\"0\"><code><span class=\"line\"><span style=\"color: #AF00DB\">import<\/span><span style=\"color: #000000\"> numpy <\/span><span style=\"color: #AF00DB\">as<\/span><span style=\"color: #000000\"> np<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #008000\"># Inputs and weights<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">inputs = np.array(&#91;<\/span><span style=\"color: #098658\">0.5<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.8<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.3<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.1<\/span><span style=\"color: #000000\">&#93;)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #000000\">weights = np.array(&#91;<\/span><span style=\"color: #098658\">0.2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.7<\/span><span style=\"color: #000000\">, -<\/span><span style=\"color: #098658\">0.5<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.9<\/span><span style=\"color: #000000\">&#93;)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #000000\">bias = <\/span><span style=\"color: #098658\">0.5<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #008000\"># Neuron output 
(dot product + bias)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">output = np.dot(inputs, weights) + bias<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #795E26\">print<\/span><span style=\"color: #000000\">(<\/span><span style=\"color: #A31515\">\"Neuron output:\"<\/span><span style=\"color: #000000\">, output)<\/span><\/span><\/code><\/pre><\/div>\n\n\n\n<p>Here, <code>np.dot(inputs, weights)<\/code> computes the dot product, and then we simply add the bias.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">A layer<\/h3>\n\n\n\n<p>Now let\u2019s build a layer of 3 neurons, each receiving 4 inputs.<\/p>\n\n\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro cbp-has-line-numbers\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;--cbp-line-number-color:#000000;--cbp-line-number-width:calc(2 * 0.6 * .875rem);line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)\"><span role=\"button\" tabindex=\"0\" style=\"color:#000000;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><pre class=\"code-block-pro-copy-button-pre\" aria-hidden=\"true\"><textarea class=\"code-block-pro-copy-button-textarea\" tabindex=\"-1\" aria-hidden=\"true\" readonly>import numpy as np\n\n# Example inputs (4 elements)\ninputs = np.array(&#91;1.0, 2.0, 3.0, 2.5&#93;)\n\n# Weights for 3 neurons (matrix: 3 rows, 4 columns)\nweights = np.array([\n                &#91;0.2, 0.8, -0.5, 1.0&#93;,      # Neuron 1\n                &#91;0.5, -0.91, 0.26, -0.5&#93;,    # Neuron 2\n                &#91;-0.26, -0.27, 0.17, 0.87&#93;   # Neuron 3\n])\n\n# Bias values (3 elements)\nbiases = np.array(&#91;2.0, 3.0, 0.5&#93;)\n\n# Layer output (matrix-vector product + vector addition)\noutput = np.dot(weights, inputs) + biases\n\nprint(\"Layer output:\", output)\n\n>>> Layer output: &#91;4.8   1.21  2.385&#93;<\/textarea><\/pre><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" fill=\"none\" viewbox=\"0 0 24 24\" stroke=\"currentColor\" stroke-width=\"2\"><path class=\"with-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path class=\"without-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki light-plus\" style=\"background-color: #FFFFFF\" tabindex=\"0\"><code><span class=\"line\"><span style=\"color: #AF00DB\">import<\/span><span style=\"color: #000000\"> numpy <\/span><span style=\"color: #AF00DB\">as<\/span><span style=\"color: #000000\"> np<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #008000\"># Example inputs (4 elements)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">inputs = np.array(&#91;<\/span><span style=\"color: #098658\">1.0<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2.0<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">3.0<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2.5<\/span><span style=\"color: #000000\">&#93;)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #008000\"># Weights for 3 neurons (matrix: 3 rows, 4 columns)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">weights = np.array([<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">\u00a0 \u00a0             &#91;<\/span><span style=\"color: 
#098658\">0.2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.8<\/span><span style=\"color: #000000\">, -<\/span><span style=\"color: #098658\">0.5<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">1.0<\/span><span style=\"color: #000000\">&#93;, \u00a0 \u00a0   <\/span><span style=\"color: #008000\"># Neuron 1<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">\u00a0 \u00a0             &#91;<\/span><span style=\"color: #098658\">0.5<\/span><span style=\"color: #000000\">, -<\/span><span style=\"color: #098658\">0.91<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.26<\/span><span style=\"color: #000000\">, -<\/span><span style=\"color: #098658\">0.5<\/span><span style=\"color: #000000\">&#93;,    <\/span><span style=\"color: #008000\"># Neuron 2<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">\u00a0 \u00a0             &#91;-<\/span><span style=\"color: #098658\">0.26<\/span><span style=\"color: #000000\">, -<\/span><span style=\"color: #098658\">0.27<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.17<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.87<\/span><span style=\"color: #000000\">&#93;   <\/span><span style=\"color: #008000\"># Neuron 3<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">])<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #008000\"># Bias values (3 elements)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">biases = np.array(&#91;<\/span><span style=\"color: #098658\">2.0<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">3.0<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.5<\/span><span style=\"color: #000000\">&#93;)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: 
#008000\"># Layer output (matrix multiplication + vector addition)<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">output = np.dot(weights, inputs) + biases<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #795E26\">print<\/span><span style=\"color: #000000\">(<\/span><span style=\"color: #A31515\">\"Layer output:\"<\/span><span style=\"color: #000000\">, output)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #000000\">&gt;&gt;&gt; Layer output: [<\/span><span style=\"color: #098658\">4.8<\/span><span style=\"color: #000000\"> \u00a0 <\/span><span style=\"color: #098658\">1.21<\/span><span style=\"color: #000000\"> \u00a0<\/span><span style=\"color: #098658\">2.385<\/span><span style=\"color: #000000\">&#93;<\/span><\/span><\/code><\/pre><\/div>\n\n\n\n<p>Here, <code>np.dot(weights, inputs)<\/code> computes the matrix-vector product, which is exactly the weighted sum for each neuron. Adding the bias vector completes the computation.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Next Article<\/h2>\n\n\n\n<p>In the next article, we will explore activation functions and see how they provide the \"nonlinear power\" that makes neural networks much more capable. Without them, our network could only model simple linear relationships.<\/p>","protected":false},"excerpt":{"rendered":"<p>In the previous articles, we built an artificial neuron and a simple layer using pure Python code. The logic was not complicated: weighted sum, add bias, and optionally apply an activation function.<br \/>\nBut as networks grow larger \u2014 with multiple layers and hundreds or thousands of neurons \u2014 pure Python solutions become:<br \/>\n&#8211; slow,<br \/>\n&#8211; hard to manage,<br \/>\n&#8211; and prone to errors.
\u00a0<\/p>","protected":false},"author":1,"featured_media":337,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":3,"footnotes":""},"categories":[9,8],"tags":[11,10,13],"class_list":["post-332","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial_intelligence","category-neural-networks","tag-artificial-intelligence","tag-neural-networks","tag-python"],"featured_image_src":"https:\/\/mlatilikzsolt.hu\/wp-content\/uploads\/2025\/09\/numpylogo-1024x461-1-e1758394449293.webp","author_info":{"display_name":"MlatilikZsolt","author_link":"https:\/\/mlatilikzsolt.hu\/en\/author\/mlatilikzsolt\/"},"_links":{"self":[{"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/posts\/332","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/comments?post=332"}],"version-history":[{"count":5,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/posts\/332\/revisions"}],"predecessor-version":[{"id":339,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/posts\/332\/revisions\/339"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/media\/337"}],"wp:attachment":[{"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/media?parent=332"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/categories?post=332"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/tags?post=332"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}