{"id":106,"date":"2025-09-05T23:51:39","date_gmt":"2025-09-05T21:51:39","guid":{"rendered":"https:\/\/mlatilikzsolt.hu\/?p=106"},"modified":"2025-09-06T18:46:44","modified_gmt":"2025-09-06T16:46:44","slug":"intro-to-neural-networks_part3","status":"publish","type":"post","link":"https:\/\/mlatilikzsolt.hu\/en\/2025\/09\/05\/intro-to-neural-networks_part3\/","title":{"rendered":"Introduction to the World of Neural Networks Part 3"},"content":{"rendered":"<p>In the <a href=\"https:\/\/mlatilikzsolt.hu\/en\/2025\/08\/28\/intro-to-neural-networks_part2\/\" data-type=\"post\" data-id=\"77\">previous<\/a> section, we saw how a single artificial neuron works. But a neuron on its own isn't very useful. Its true usefulness comes when you connect multiple neurons together to form a layer. In this section, we'll look at that in a little more detail.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What Is a Layer?<\/h2>\n\n\n\n<p>In simple terms, a layer is a group of neurons that all work with the same input data, but each neuron processes that data with its own weights and bias. This data can come directly from the input or from a previous layer. Thanks to the different weights and biases, each neuron can recognize different patterns in the same data.<\/p>\n\n\n\n<p>For example, if we analyze an image with a neural network, some neurons can recognize vertical lines, others horizontal lines, and still others diagonal lines. By combining these appropriately, it becomes possible to recognize more complex shapes. This is how Facebook's feature that recognizes faces in photos works, for example. 
<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Let's Look at an Example<\/h2>\n\n\n\n<p>For the sake of illustration, let's build a simple layer with:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>4 inputs: x<sub>1<\/sub>, x<sub>2<\/sub>, x<sub>3<\/sub>, x<sub>4<\/sub><\/li>\n\n\n\n<li>3 neurons<\/li>\n<\/ul>\n\n\n\n<p>Each neuron uses four weights (one for each input) and a bias, from which it calculates its own output value:<\/p>\n\n\n\n<div class=\"wp-block-katex-display-block katex-eq\" data-katex-display=\"true\"><pre>z_j = w_{j1} \\cdot x_1 + w_{j2} \\cdot x_2 + w_{j3} \\cdot x_3 + w_{j4} \\cdot x_4 + b_j<\/pre><\/div>\n\n\n\n<p>In this formula, the index j refers to the individual neurons (1, 2, 3). Once the calculations are done, the output of the layer is a three-element vector: [z<sub>1<\/sub>, z<sub>2<\/sub>, z<sub>3<\/sub>]. This can serve either as the input to the next layer or as the final result that is not processed further.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"540\" height=\"621\" src=\"https:\/\/mlatilikzsolt.hu\/wp-content\/uploads\/2025\/09\/Layer.png\" alt=\"\" class=\"wp-image-112\" srcset=\"https:\/\/mlatilikzsolt.hu\/wp-content\/uploads\/2025\/09\/Layer.png 540w, https:\/\/mlatilikzsolt.hu\/wp-content\/uploads\/2025\/09\/Layer-261x300.png 261w, https:\/\/mlatilikzsolt.hu\/wp-content\/uploads\/2025\/09\/Layer-10x12.png 10w\" sizes=\"auto, (max-width: 540px) 100vw, 540px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Python Example: Calculating the Output of a Layer<\/h2>\n\n\n\n<p>Let's see how we can program the above example in Python. 
<\/p>\n\n\n\n<p>Important: in this example we do not use an activation function; we only calculate the \u201craw\u201d output values.<\/p>\n\n\n\n<div class=\"wp-block-kevinbatdorf-code-block-pro cbp-has-line-numbers\" data-code-block-pro-font-family=\"Code-Pro-JetBrains-Mono\" style=\"font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;--cbp-line-number-color:#000000;--cbp-line-number-width:calc(2 * 0.6 * .875rem);line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)\"><span role=\"button\" tabindex=\"0\" style=\"color:#000000;display:none\" aria-label=\"Copy\" class=\"code-block-pro-copy-button\"><pre class=\"code-block-pro-copy-button-pre\" aria-hidden=\"true\"><textarea class=\"code-block-pro-copy-button-textarea\" tabindex=\"-1\" aria-hidden=\"true\" readonly># A layer with 3 neurons and 4 inputs\n\ninputs = &#91;1, 2, 3, 2.5&#93;\nweights = [&#91;0.2, 0.8, -0.5, 1.0&#93;,\n           &#91;0.5, -0.91, 0.26, -0.5&#93;,\n           &#91;-0.26, -0.27, 0.17, 0.87&#93;]\nbiases = &#91;2, 3, 0.5&#93;\n\n# Output of the layer\nlayer_outputs = []\n\n# Calculate the output of each neuron\nfor neuron_weight, neuron_bias in zip(weights, biases):\n    # Calculate the weighted sum\n    neuron_output = 0\n    for n_input, weight in zip(inputs, neuron_weight):\n        neuron_output += n_input * weight\n    # Add the bias\n    neuron_output += neuron_bias\n    # Append the output of the neuron to the layer outputs\n    layer_outputs.append(neuron_output)\n\nprint(\"Output of the layer:\", layer_outputs)\n\n>>>\nOutput of the layer: &#91;4.8, 1.21, 2.385&#93;<\/textarea><\/pre><svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"width:24px;height:24px\" fill=\"none\" viewbox=\"0 0 24 24\" stroke=\"currentColor\" stroke-width=\"2\"><path class=\"with-check\" stroke-linecap=\"round\" 
stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4\"><\/path><path class=\"without-check\" stroke-linecap=\"round\" stroke-linejoin=\"round\" d=\"M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2\"><\/path><\/svg><\/span><pre class=\"shiki light-plus\" style=\"background-color: #FFFFFF\" tabindex=\"0\"><code><span class=\"line\"><span style=\"color: #008000\"># A layer with 3 neurons and 4 inputs<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #000000\">inputs = &#91;<\/span><span style=\"color: #098658\">1<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">3<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2.5<\/span><span style=\"color: #000000\">&#93;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">weights = [&#91;<\/span><span style=\"color: #098658\">0.2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.8<\/span><span style=\"color: #000000\">, -<\/span><span style=\"color: #098658\">0.5<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">1.0<\/span><span style=\"color: #000000\">&#93;,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">           &#91;<\/span><span style=\"color: #098658\">0.5<\/span><span style=\"color: #000000\">, -<\/span><span style=\"color: #098658\">0.91<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.26<\/span><span style=\"color: #000000\">, -<\/span><span style=\"color: #098658\">0.5<\/span><span style=\"color: #000000\">&#93;,<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">           &#91;-<\/span><span 
style=\"color: #098658\">0.26<\/span><span style=\"color: #000000\">, -<\/span><span style=\"color: #098658\">0.27<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.17<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.87<\/span><span style=\"color: #000000\">&#93;]<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">biases = &#91;<\/span><span style=\"color: #098658\">2<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">3<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">0.5<\/span><span style=\"color: #000000\">&#93;<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #008000\"># Output of the layer<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">layer_outputs = []<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #008000\"># Calculate the output of each neuron<\/span><\/span>\n<span class=\"line\"><span style=\"color: #AF00DB\">for<\/span><span style=\"color: #000000\"> neuron_weight, neuron_bias <\/span><span style=\"color: #AF00DB\">in<\/span><span style=\"color: #000000\"> <\/span><span style=\"color: #795E26\">zip<\/span><span style=\"color: #000000\">(weights, biases):<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">    <\/span><span style=\"color: #008000\"># Calculate the weighted sum<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">    neuron_output = <\/span><span style=\"color: #098658\">0<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">    <\/span><span style=\"color: #AF00DB\">for<\/span><span style=\"color: #000000\"> n_input, weight <\/span><span style=\"color: #AF00DB\">in<\/span><span style=\"color: #000000\"> <\/span><span style=\"color: #795E26\">zip<\/span><span style=\"color: #000000\">(inputs, neuron_weight):<\/span><\/span>\n<span 
class=\"line\"><span style=\"color: #000000\">        neuron_output += n_input * weight<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">    <\/span><span style=\"color: #008000\"># Add the bias<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">    neuron_output += neuron_bias<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">    <\/span><span style=\"color: #008000\"># Append the output of the neuron to the layer outputs<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">    layer_outputs.append(neuron_output)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #795E26\">print<\/span><span style=\"color: #000000\">(<\/span><span style=\"color: #A31515\">\"Output of the layer:\"<\/span><span style=\"color: #000000\">, layer_outputs)<\/span><\/span>\n<span class=\"line\"><\/span>\n<span class=\"line\"><span style=\"color: #000000\">&gt;&gt;&gt;<\/span><\/span>\n<span class=\"line\"><span style=\"color: #000000\">Output of the layer: &#91;<\/span><span style=\"color: #098658\">4.8<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">1.21<\/span><span style=\"color: #000000\">, <\/span><span style=\"color: #098658\">2.385<\/span><span style=\"color: #000000\">&#93;<\/span><\/span><\/code><\/pre><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Why Is This Useful?<\/h2>\n\n\n\n<p>A layer of multiple neurons can recognize multiple patterns in data at once. This is the first step towards building deeper networks, where we can stack multiple layers on top of each other to solve increasingly complex problems.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Next Article<\/h2>\n\n\n\n<p>In the next article, we will look at why it is worth using the NumPy library instead of pure Python solutions. 
It can calculate a single layer or even an entire network much faster and more elegantly, especially when the network is larger and consists of multiple layers.<\/p>","protected":false},"excerpt":{"rendered":"<p>In the previous section, we saw how a single artificial neuron works. But a neuron on its own isn't very useful. Its true usefulness comes when you connect multiple neurons together to form a layer. In this section, we'll look at that in a little more detail.<\/p>","protected":false},"author":1,"featured_media":160,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":2,"footnotes":""},"categories":[9,8],"tags":[11,10,13],"class_list":["post-106","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial_intelligence","category-neural-networks","tag-artificial-intelligence","tag-neural-networks","tag-python"],"featured_image_src":"https:\/\/mlatilikzsolt.hu\/wp-content\/uploads\/2025\/08\/neural-network-3637503_640.png","author_info":{"display_name":"MlatilikZsolt","author_link":"https:\/\/mlatilikzsolt.hu\/en\/author\/mlatilikzsolt\/"},"_links":{"self":[{"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/posts\/106","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/comments?post=106"}],"version-history":[{"count":8,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/posts\/106\/revisions"}],"predecessor-version":[{"id":118,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/p
osts\/106\/revisions\/118"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/media\/160"}],"wp:attachment":[{"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/media?parent=106"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/categories?post=106"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mlatilikzsolt.hu\/en\/wp-json\/wp\/v2\/tags?post=106"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}