Author: MlatilikZsolt

  • Bevezetés a neurális hálózatok világába 5. rész

    Introduction to the World of Neural Networks Part 5

    In the previous parts, we built a layer that computes the raw output of neurons – the weighted sum plus bias. However, that output is linear: if you double the input, the output doubles. A linear network can only learn straight-line relationships.

    But the real world is nonlinear. Image recognition, speech understanding, and natural language processing all involve complex patterns that linear models cannot capture. This is where activation functions come in. These nonlinear transformations give neural networks the ability to sense, adapt, and truly learn.

    The Role of Activation Functions

    Without activation:

    output = sum(inputs * weights) + bias

    With activation:

    output = f(sum(inputs * weights) + bias)

    The function f() is the activation function, and it’s what gives the network its learning power.

    Common Activation Functions

    Step Function

    The simplest of all activation functions is the Step. It works like a switch: if the neuron’s input is above a threshold, the output is 1; otherwise, it’s 0.

    f(x) = \begin{cases} 1 & \text{if } x \geq 0 \\ 0 & \text{if } x < 0 \end{cases}


    It’s a good way to illustrate how neurons turn on or off, but it’s not suitable for training, since it’s not continuous and doesn’t support gradient-based learning. Step is mostly used for demonstration purposes, as we did earlier.
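    Since NumPy has already appeared in this series, a minimal sketch of the step function (vectorized with np.where, purely for illustration) might look like this:

```python
import numpy as np

def step(x):
    # Outputs 1 where the input is >= 0, otherwise 0
    return np.where(x >= 0, 1, 0)

print(step(np.array([-2.0, 0.0, 3.5])))  # [0 1 1]
```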

    Sigmoid Function

    The Sigmoid is smoother. It squashes every input into the range 0–1 following a soft S-shaped curve:

    f(x) = \frac{1}{1 + e^{-x}}


    This makes it ideal when we want an output that represents probability, such as in binary classification tasks. However, the Sigmoid’s gradient becomes extremely small for very large or small input values, slowing down learning — a problem known as the vanishing gradient.
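    As a quick sketch, the formula above translates directly into NumPy (nothing here beyond the formula itself):

```python
import numpy as np

def sigmoid(x):
    # Squashes any input into the open interval (0, 1)
    return 1 / (1 + np.exp(-x))

# Near 0 for large negative inputs, exactly 0.5 at zero, near 1 for large positives
print(sigmoid(np.array([-5.0, 0.0, 5.0])))
```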

    Tanh Function

    The tanh function, short for hyperbolic tangent, is similar to Sigmoid but scales outputs between -1 and 1:

    f(x) = \tanh(x)


    Because its output is centered around zero, it often trains faster and more stably. Still, it suffers from the same vanishing gradient issue in extreme regions. Despite that, it remains popular in smaller or older networks for its intuitive and balanced behavior.
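    NumPy ships tanh as a built-in universal function, so a sketch needs no custom code:

```python
import numpy as np

x = np.array([-2.0, 0.0, 2.0])
# Symmetric around zero, saturating toward -1 and 1
print(np.tanh(x))
```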

    ReLU (Rectified Linear Unit)

    The ReLU is perhaps the most widely used activation function in modern networks. It’s defined as:

    f(x) = max(0, x)


    Negative inputs become 0, positive inputs pass through unchanged. Its simplicity is its strength — it’s fast, efficient, and avoids the Sigmoid’s gradient issues. However, some neurons can “die” if they get stuck with negative inputs forever, never activating again. Even so, ReLU remains the default choice for most deep learning models.
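    A minimal ReLU sketch using np.maximum:

```python
import numpy as np

def relu(x):
    # Zero for negative inputs, identity for positive ones
    return np.maximum(0, x)

print(relu(np.array([-1.5, 0.0, 2.0])))  # [0. 0. 2.]
```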

    Leaky ReLU

    The Leaky ReLU improves on ReLU by allowing a small, nonzero output for negative inputs:

    f(x) = \begin{cases} x & \text{if } x \geq 0 \\ 0.01 \cdot x & \text{if } x < 0 \end{cases}


    This tiny “leak” keeps neurons alive even when their inputs are mostly negative, leading to more stable training. It’s often used when too many neurons become inactive under standard ReLU.
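    A sketch of Leaky ReLU, with the leak slope exposed as a parameter (the 0.01 default matches the formula above):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Pass positives through unchanged; scale negatives by a small slope
    return np.where(x >= 0, x, alpha * x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))
```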

    Summary

    Activation functions give neural networks their nonlinear power. Without them, a model could only describe linear relationships — essentially a flat plane or line. With them, neural networks can learn complex, nonlinear decision boundaries and perform genuinely intelligent behavior.

    In the next article, we’ll see how to implement these activation functions in Python and NumPy, and observe how they transform a layer’s output in practice.

  • Bevezetés a neurális hálózatok világába 4. rész

    Introduction to the World of Neural Networks Part 4

    Why Use NumPy?

    In the previous articles, we built an artificial neuron and a simple layer using pure Python code. The logic was not complicated: weighted sum, add bias, and optionally apply an activation function.    
    But as networks grow larger — with multiple layers and hundreds or thousands of neurons — pure Python solutions become:  

    • slow,  
    • hard to manage,  
    • and prone to errors.    

    This is why we use the NumPy library, which is:    

    • very fast (written in C language),  
    • reliable (thoroughly tested),    
    • and makes vector and matrix operations easy.    

    Vectors, Arrays, Matrices and Tensors

    Before we look at how NumPy is used through specific examples, it is important to clarify a few concepts.

    Let's start with the simplest Python data store, the list. A Python list contains comma-separated numbers enclosed in square brackets. In the previous sections, we used lists to store data in our pure Python solutions.

    Example of a list:

    my_list = [1, 5, 6, 2]

    List of lists:

    list_of_lists = [[1, 5, 6, 2],
                     [3, 2, 1, 3]]

    List of lists of lists:

    list_of_lists_of_lists = [[[1, 5, 6, 2],
                               [3, 2, 1, 3]],
                              [[5, 2, 1, 2],
                               [6, 4, 8, 4]],
                              [[2, 8, 5, 3],
                               [1, 1, 9, 4]]]

    All of the above examples can also be called arrays. However, not all lists can be arrays.

    For example:

    [[1, 2, 3],
     [4, 5],
     [6, 7, 8, 9]]

    This list cannot be an array because it is not "homologous". A "list of lists" is homologous if each row contains exactly the same amount of data and this is true for all dimensions. The example above is not homologous because the first list has 3 elements, the second has 2, and the third has 4.

    The definition of a matrix is simple: it is a two-dimensional array. It has rows and columns. So a matrix can be an array. Can every array be a matrix? No. An array can be much more than rows and columns. It can be 3, 5, or even 20 dimensions.

    Finally, what is a tensor? The exact definition of tensors and arrays has been debated for hundreds of pages by experts. Much of this debate is caused by the participants approaching the topic from completely different areas. If we want to approach the concept of tensor from the perspective of deep learning and neural networks, then perhaps the most accurate description is: "A tensor object is an object that can be represented as an array."

    In summary: A linear or 1-dimensional array is the simplest array, and in Python, a list corresponds to this. Arrays can also contain multidimensional data, the most well-known example of which is a matrix, which is a 2-dimensional array.

    One more concept that is important to clarify is the vector. Simply put, a vector used in mathematics is the same as a Python list, or a 1-dimensional array.
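    These concepts map directly onto NumPy's ndarray type, whose ndim and shape attributes report the number of dimensions and the size along each one. A small illustration (using the NumPy library we introduce below):

```python
import numpy as np

vector = np.array([1, 5, 6, 2])       # 1-dimensional array (a vector)
matrix = np.array([[1, 5, 6, 2],
                   [3, 2, 1, 3]])     # 2-dimensional array (a matrix)

print(vector.ndim, vector.shape)  # 1 (4,)
print(matrix.ndim, matrix.shape)  # 2 (2, 4)
```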

    Two Key Operations: Dot Product and Vector Addition

    When performing the dot product operation, we multiply two vectors. We do this by taking the elements of the vectors one by one and multiplying the elements with the same index, then adding these products. Mathematically, this looks like this:

    \vec{a}\cdot\vec{b} = \sum_{i=1}^n a_ib_i = a_1\cdot b_1+a_2\cdot b_2+...+a_n\cdot b_n

    It is important that both vectors have the same size. If we wanted to describe the same thing in Python code, it would look like this:

    # First vector
    a = [1, 2, 3]
    
    # Second vector
    b = [2, 3, 4]
    
    # Dot product calculation
    dot_product = a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    
    print(dot_product)
    
    >>> 20

    You can see that we have performed the same operation as when calculating the output value of a neuron, only here we have not added the bias. Since the Python language does not contain any instructions or functions for calculating the dot product by default, we use the NumPy library.

    When adding vectors, we add the elements of each vector with the same index. Mathematically, this looks like this:

    \vec{a}+\vec{b} = [a_1+b_1, a_2+b_2,...,a_n+b_n]

    Here again, it is important that the vectors have the same size. The result will be a vector of the same size. NumPy handles this operation easily.
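    To make both operations concrete, here is a short NumPy sketch using the same vectors as the pure-Python dot product example above:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([2, 3, 4])

# Element-wise addition: [1+2, 2+3, 3+4]
print(a + b)         # [3 5 7]

# Dot product: 1*2 + 2*3 + 3*4
print(np.dot(a, b))  # 20
```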

    Using NumPy

    A neuron

    Let’s now implement a neuron using NumPy.  

    import numpy as np
    
    # Inputs and weights
    inputs = np.array([0.5, 0.8, 0.3, 0.1])
    
    weights = np.array([0.2, 0.7, -0.5, 0.9])
    
    bias = 0.5
    
    # Neuron output (dot product + bias)
    output = np.dot(inputs, weights) + bias
    
    print("Neuron output:", output)

    Here, np.dot(inputs, weights) computes the dot product, and then we simply add the bias.  

    A layer

    Now let’s build a layer of 3 neurons, each receiving 4 inputs.    

    import numpy as np
    
    # Example inputs (4 elements)
    inputs = np.array([1.0, 2.0, 3.0, 2.5])
    
    # Weights for 3 neurons (matrix: 3 rows, 4 columns)
    weights = np.array([
                    [0.2, 0.8, -0.5, 1.0],       # Neuron 1
                    [0.5, -0.91, 0.26, -0.5],    # Neuron 2
                    [-0.26, -0.27, 0.17, 0.87]   # Neuron 3
    ])
    
    # Bias values (3 elements)
    biases = np.array([2.0, 3.0, 0.5])
    
    # Layer output (matrix multiplication + vector addition)
    output = np.dot(weights, inputs) + biases
    
    print("Layer output:", output)
    
    >>> Layer output: [4.8   1.21  2.385]

    Here, np.dot(weights, inputs) computes the matrix-vector product, which is exactly the weighted sum for each neuron. Adding the bias vector completes the computation.  

    Next Article

    In the next article, we will explore activation functions, and see how they provide the "nonlinear power" that makes neural networks much more capable. Without them, our network would only be able to model simple linear relationships.  

  • Gondolatok a „vibe-coding”-ról

    Thoughts on "vibe-coding"

    The most popular term of the recent period is "vibe-coding". It promises users that they no longer need to know how to program, because all they need to do is tell the AI agent in natural language, as if "talking" to it, what they want, and the agent will produce a ready-made, functional program. A short Google search turns up hundreds of success stories about people who, without programming knowledge, created a functional program in a few hours with one of the coding agents and are now selling it on the market. However, developers who have been working in the market for years or decades are watching this new trend with suspicion, and many fear for their livelihoods as coding agents spread.
    I've always thought that it's only worth giving an opinion about something if you actually know it, even if not down to the smallest details. It doesn't hurt to know what you're talking about.

    The first meeting

    I've been using Visual Studio for development for a few years now, and like many people, I've been using GitHub to store my code. When GitHub launched Copilot in 2021, it was advertised as a programming partner who would understand the code I was writing and help me improve it. Sometime in 2022, I decided to give it a try. While the $10/month fee wasn't a huge deal, the real kicker was the 30-day free trial. I figured I really had nothing to lose.
    Well, those 30 days flew by pretty quickly, and I have to admit that Copilot completely impressed me. At first, I only used it to explain code written by others that I didn't fully understand. Then I pulled out old programs of mine that I was stuck on or that produced mysterious errors. And lo and behold, after analyzing the code, it was able to offer suggestions and new perspectives (with specific code snippets) that let me move on. But the biggest "bang" for me was when I experienced how well it could take over boring or unloved tasks. It writes complete tests in less time than it takes me to even think through which cases should be tested. Moreover, it also writes tests for cases that I wouldn't have thought of. It generates the backbone of a project in a few minutes, and all I have to do is put the "meat" on it. Plus, the code auto-completion works well for me. I just start typing and it suggests a complete continuation. If I like it, I press a key and it's inserted. If I don't, I just keep typing and after a while I get a modified suggestion.
    During the month of use, I somehow felt like I was enjoying programming again. It was as if I had two helpers by my side. A typing "slave" who writes the long, boring and eternally repetitive parts for me, and a mentor who can always push me further when I get stuck on something. At the end of the trial period, I decided not to cancel the subscription, because that's what it's worth to me. Of course, its operation was not flawless, it repeatedly suggested code that didn't work on its own because, for example, it was missing a helper function that it "forgot" to write. But after 1-2 refinements, there was always some result.
    Of course, this version of Copilot wasn't a coding agent in the modern sense. It just made suggestions, but didn't automatically make any changes to the code. I wouldn't have let it, because I like to understand what I'm going to put into my program.

    The agent enters the scene

    The Copilot agent mode was released sometime in early 2025. Claude Code was also released around that time, but there were already other players on the market, such as Cursor or Replit. The promise was that these agents would be able to create a complete program entirely on their own, based only on a text prompt, during which they would not only write the code, but also run it, correct errors and repeat the whole process until we end up with a flawlessly working program. I was quite skeptical, because although I had been dealing with certain areas of AI for some time, studying how large language models work, I somehow didn't believe in it all. Of course, video sharing sites quickly filled up with material proving that this thing worked, but something always felt missing to me. All such videos only showed the creation of relatively simple, hobby-like programs with 1-2 functions. But what about a slightly more serious application? For some reason, no one wanted to take on the task of presenting that.
    As time went by, they were constantly improving the Copilot agent mode. There were constant updates about what new features it had, what language models were available, etc. And I was getting more and more excited to try it out. I thought it must have outgrown its childhood problems by now, and the initial bugs had been fixed. Then in mid-August, I took a deep breath and gave it a try.

    The test

    My idea was to test the Copilot Agent mode by creating a not-too-complicated fitness app. Since there are several models to choose from, I decided on the Claude Sonnet 4, as it is considered one of the best models for coding, according to reviews.
    I already knew that agents are powered by various large language models that require as much detailed context as possible in order to provide an accurate answer. That's why I started with a product requirements document (PRD). I also used Claude to compile it. I described the idea in a prompt consisting of 5-6 sentences and had it generate a PRD. I had the result in 1-2 minutes, but it still needed to be refined and specified. After 3-4 iterations, a material was finally created that could be used to start testing. Based on the document, the planned application is capable of the following functions:

    • user management (login/logout, managing own data)
    • recording physical data (weight, abdomen, thigh, chest, etc. measurements) daily
    • management of pre-recorded exercises by the admin user and of exercises recorded by the user
    • compiling workouts from exercises, recording workout data (duration, calories burned, etc.)
    • a Blazor web frontend should be created first, but later it should be possible to manage a mobile application as well
    • due to the previous point, creating an API interface with JWT authentication
    • PostgreSQL database
    • .NET Core environment, Entity Framework for database management

    I created a new folder for the project, launched VS Code, attached the PRD to Copilot, and asked it to create the application based on it. Then I sat back and waited. After some thought, the machine started working. In the chat window, I could see how more and more files were created, organized into separate projects according to function. After the data models and API were completed, Copilot stopped and asked me to enter the database server access data. When I entered these, the work continued. I saw how Copilot created the database, compiled and ran the completed program, recognized and corrected any errors that arose. If it reached a part that was more critical for the runtime environment (for example, file system modification), it stopped and asked for permission to continue. Finally, in 15-20 minutes, the API interface was ready with a working database and data models. Hats off so far!
    Then it asked the question: "Should I continue with creating the web frontend interface?" Of course, that's why we're here!
    The Blazor frontend was ready in another 5-10 minutes. It seemed that the project could be compiled and run, but did it actually work? Copilot indicated that it was ready, so I should try the program. And here began a 3-hour ordeal.
    Since my goal was to test the agent's standalone operation, I decided not to fix it in the code, but to just communicate the problems I encountered through the prompt.
    So the program started and the interface appeared in the browser. The design was pretty minimal, but the focus was on functionality, not looks. I started a registration and entered the data, but nothing changed after submitting. Checking the database, no new user had been created. I described the problem to the agent; it thought about it, then told me it had found and fixed the error. Another try, the same result. Problem description, thinking, fixing. Another try, still not working. Since this was a bit suspicious, I checked the messages on the backend API console. A request was indeed going to the API, but it was immediately rejected with a 404 error. So the agent had managed to produce frontend code that called a non-existent endpoint of the very API it had produced! A little confused, I wrote the problem to Copilot. Thinking, then it approvingly confirmed that I was right. It fixed the error, and now I could register. According to the interface I was logged in, although my username did not appear in the menu. OK, let's move on for now. I opened the body data page and tried to record data. It failed: after submitting, the loading icon just kept spinning, but nothing happened. I restarted the application and, having learned from the previous attempts, watched the messages in the console windows. Based on these, the token from the previous login was still present, so the application let me log in. On another attempt to record data, a 401 (Unauthorized) error appeared on the console. Interesting. Problem description to Copilot, thinking, throwing out ideas, fixing, trying again. It didn't work. Finally, after 3 hours of struggle and 15-20 fixes, I gave up because I was exhausted.

    But why doesn't it work?

    The problem didn't let me rest, so 3 days later I opened the program again with the aim of finding out why it wasn't working. The backend is relatively straightforward, a standard .NET Core web API. What was a little strange was that it didn't use the Microsoft Identity framework to manage users, but otherwise everything seemed to be in order. The frontend components are also quite straightforward: HTML elements spiced up with some C# code. What struck me was that almost all the components are embedded in a component called AuthComponent. Its task is to check the user's logged-in status and display either the given component or the login interface accordingly. For this, it uses a component called AuthService, which tries to read the JWT token from the browser's local storage, set the status based on it, and read the username from the token. In addition, it uses a separate TokenService component for this.
    I started the application as a test and watched the frontend project console. I was shocked to see that the AuthService method that checks the user's token was called at least 10 times before the interface was displayed. OK, I would have to look into this carefully. I logged in to the interface, but the username still didn't appear in the menu, and I couldn't record any data because the API kept returning a 401 error. I couldn't log out either. On a sudden idea, I started Postman and tried to access the API from there. The login worked, and I could also send data to the database with the token I received back, since no 401 error came from there. Interesting.
    When I looked at the frontend code closely, my eyes widened. The AuthComponent component was supposed to check the user's logged-in status using AuthService and display the embedded components accordingly. However, AuthService was injected separately into the embedded components and each of them checked the logged-in status separately. Why? Moreover, since these components tried to check the user's status at the same time when they started, AuthService tried to prevent the components from competing with each other to set the status with all sorts of tricky locking solutions. However, it would have been much simpler if AuthComponent had made the user's status available through a parameter. With some work, I "untangled" these anomalies, simplified the code in a few places and restarted the application. I was happy to see that AuthService is only called once. However, the API still returns connection attempts with a 401 error.
    It took me about 2 hours to figure it out. I delved into how JWT tokens work, how .NET Core handles tokens, what data should be included in the token. I tried these, but to no avail. In the end, I copied the HTTP headers from the Postman requests one by one into the frontend code, but that didn't work either. I was about to give up when something caught my eye in the API project code. The CORS policy settings were suspicious, because the URL used when running the frontend project didn't seem to be there. I checked it and sure enough! The frontend wanted to connect from a URL that wasn't included in the CORS settings. When I fixed this, things started working. I don't understand how Copilot didn't notice this.
    The only thing left unsolved was why the logout wasn't working. All I managed to find out was that the event handler in the NavMenu component wasn't called when the button was clicked. While I was looking for the reason for this, I noticed that there were a few lines of code within the component that would be better moved to the event handler that runs after the component is initialized. After I did this, I noticed that these lines of code weren't running either. So the component's initialization was stuck somewhere. After further investigation and reading the documentation, it turned out that the component's RenderMode parameter wasn't set correctly. After I fixed this, everything suddenly fell into place. The user's name appeared in the menu and the logout worked.
    Then I found a few more interesting bugs. For example, I couldn't record my own gymnastics exercises in the database. As it turned out, the format of the data sent from the frontend didn't match the data model used in the backend. Or, for example, certain frontend interfaces were only half-finished. But I didn't bother with these anymore because I felt that this was enough of an experiment.

    Conclusion

    Coding agents can do a pretty good job when it comes to simple, well-defined tasks. But they are still a long way from being able to put together even a moderately complex program on their own. A detailed, well-defined context is important for solving tasks. For more complex programs consisting of several parts, however, it is worth breaking the task into smaller units, having them prepared one by one, in several iterations, and then adding the completed modules to the context and moving on to the next part.
    I also think it is important to always include human verification in the process. This is because the large language models behind the agents were trained on publicly available codebases, which often contain non-optimized, test-only, or security-vulnerable code. Thus, the code generated by the agents will likely contain security bugs or suboptimal code, which can be a source of additional problems in a program released for production.
    So, in my opinion, coding agents won't take away developers' jobs for a while, but they will fundamentally transform them. Junior developers will have a harder time entering the profession, as agents can take over the simpler, more automatable coding tasks they've been doing. More experienced developers will spend more time reviewing and fixing the code generated by agents.

  • Bevezetés a neurális hálózatok világába 3. rész

    Introduction to the World of Neural Networks Part 3

    In the previous section, we saw how a single artificial neuron works. But a neuron on its own isn't very useful. Its true usefulness comes when you connect multiple neurons together to form a layer. In this section, we'll look at that in a little more detail.

    What Is a Layer?

    In simple terms, a layer is a group of neurons that work with the same input data, but each neuron processes that data with different weights and biases. This data can come directly from the input or from a previous layer. Thanks to the different weights and biases, each neuron can recognize different patterns in the same data.

    For example, if we analyze an image with a neural network, some neurons can recognize vertical lines, others horizontal lines, and still others oblique lines. By combining these appropriately, it becomes possible to recognize more complex shapes. This is how Facebook's feature that recognizes faces in photos works, for example.

    Let's look at an example.

    For the sake of illustration, let's build a simple layer with:

    • 4 inputs: x1, x2, x3, x4
    • 3 neurons

    Each neuron uses four weights (one for each input) and a bias, from which it calculates its own output value.

    z_j= w_{j1} \cdot x_1 + w_{j2} \cdot x_2 + w_{j3} \cdot x_3 + w_{j4} \cdot x_4 + b_j

    In this formula, j refers to each neuron (1, 2, 3). After the calculations are done, the output of the layer will be a three-element vector: [z1, z2, z3]. This can be either the input to a next layer or a final result that is not processed further.

    Python example: calculating the output of a layer

    Let's see how we can program the above example in Python.

    Important: in this example we do not use an activation function, we only calculate the “raw” output data.

    # A layer with 3 neurons and 4 inputs
    
    inputs = [1, 2, 3, 2.5]
    weights = [[0.2, 0.8, -0.5, 1.0],
               [0.5, -0.91, 0.26, -0.5],
               [-0.26, -0.27, 0.17, 0.87]]
    biases = [2, 3, 0.5]
    
    # Output of the layer
    layer_outputs = []
    
    # Calculate the output of each neuron
    for neuron_weight, neuron_bias in zip(weights, biases):
        # Calculate the weighted sum
        neuron_output = 0
        for n_input, weight in zip(inputs, neuron_weight):
            neuron_output += n_input * weight
        # Add the bias
        neuron_output += neuron_bias
        # Append the output of the neuron to the layer outputs
        layer_outputs.append(neuron_output)
    
    print("Output of the layer:", layer_outputs)
    
    >>> Output of the layer: [4.8, 1.21, 2.385]

    Why Is This Useful?

    A layer of multiple neurons can recognize multiple patterns in data at once. This is the first step towards building deeper networks, where we can stack multiple layers on top of each other to solve increasingly complex problems.

    Next Article

    In the next article, we will look at why it is worth using the NumPy library instead of pure Python solutions. It can calculate a single layer or even an entire network much faster and more elegantly, especially when the network is larger and consists of multiple layers.

  • Bevezetés a neurális hálózatok világába 2. rész

    Introduction to the World of Neural Networks Part 2

    In the previous article, we introduced the basic idea of neural networks and saw that an artificial neuron is a simplified version of the brain’s nerve cells. Now let’s take a closer look at how a biological neuron works and how we can model it in a computer.

    How Does a Neuron Work?

    The Biological Neuron

    Without going into too much scientific detail, a biological neuron is made up of four main parts:

    • Dendrites: the neuron receives information from other neurons through these.
    • Cell body (Soma): this processes the signals received by the dendrites.
    • Axon: the neuron sends (or not) the processed signal through this.
    • Axon terminals: the branches of the axon through which other neurons perceive the output signal.

    So, the cell body receives signals from other neurons through the dendrites, processes them, and if the signal resulting from the processing reaches a certain level, the neuron "fires", i.e. sends a signal through the axon to the other neurons connected to it.

    Artificial Neuron

    The artificial neuron attempts to mimic this operation mathematically. Its most important parts are:

    • Inputs: these simulate the dendrites, through which the neuron receives data.
    • Weights: each input has a weight that shows how much the data arriving at the given input influences the output value.
    • Bias: an offset is added to the weighted sum of the input signals, which can also influence the output result.
    • Activation function: this makes the decision as to what value should be output by the neuron ("fire" or not) based on the previously summarized data.

    In Mathematical Form

    Let the inputs be in order x1, x2…xn, their corresponding weights w1, w2…wn, and the bias b. The operation performed by the neuron is as follows:

    z=w_1 \cdot x_1 + w_2 \cdot x_2 + \ldots + w_n \cdot x_n + b

    The activation function receives the value of z calculated in this way. In this case, let's take a simple step function that examines the input value and if it is zero or greater, it outputs 1, and if it is less than zero, it outputs 0.

    y=\begin{cases} 1 & \text{if } z \geq 0 \\ 0 & \text{if } z < 0 \end{cases}

    There are many types of activation functions (e.g. Sigmoid, ReLU, tanh), which we will discuss in a separate section later.

    Python Example

    Let's see how to program a neuron in Python:

    # Inputs and weights
    inputs = [0.5, 0.8]     # two inputs
    weights = [0.4, 0.7]    # their associated weights
    bias = -0.5             # bias
    
    # Sum the weighted inputs and the bias
    z = inputs[0]*weights[0] + inputs[1]*weights[1] + bias # 0.26
    
    # Calculating the output with the step function
    output = 1 if z >= 0 else 0
    
    print("Output of the neuron:", output) # 1

    Next Article

    In the next article, we will connect multiple neurons together and see how a simple layer is built. This will bring us closer to a complete neural network.

  • Bevezetés a neurális hálózatok világába 1. rész

    Introduction to the World of Neural Networks Part 1

    In recent years, artificial intelligence has become almost ubiquitous: it recognizes our faces on smartphone cameras, translates text from foreign languages, and can even draw pictures or write stories. One of the engines behind these impressive developments is the neural network.

    But what exactly is a neural network? And what does it have to do with our brain?

    A Simple Analogy

    Imagine a tiny decision-maker: a "neuron" that looks at a single input (e.g., temperature) and, based on a simple rule, decides whether to answer “yes” or “no.” If we connect many of these small decision-makers, they can solve more complex problems together. This basic idea is at the heart of artificial neural networks.

    What is an Artificial Neuron?

    An artificial neuron is a simplified mathematical model of the brain’s nerve cells (neurons).  

    A biological neuron receives inputs from other cells, processes the information, and decides whether to pass on the signal.

    Similarly, an artificial neuron:

    • receives multiple input values,  
    • multiplies them by weights, which determine how important each input is,  
    • adds a bias value, which acts as a baseline adjustment,  
    • and then transforms the result using an activation function, before sending it forward to the next layer.  

    In short: weights determine the importance of inputs, while bias fine-tunes the decision.

    Why are Neural Networks Important?

    • Image recognition: when Facebook automatically tags people in photos.  
    • Translation: when Google Translate translates entire sentences, not just words.  
    • Chatbots: when a virtual customer service assistant appears on a website.

    They can do all this because neural networks are extremely good at recognizing patterns in data.

    What Will This Series Cover?

    In this series, I will try to show you step by step:  

    1. What an artificial neuron is, and how to describe it in Python.  
    2. How a simple network is built from multiple neurons.  
    3. What a loss function is, and why it matters.  
    4. How a network learns (backpropagation).  
    5. How we can use them for real-world problems.  

    I strive to write in plain language in my articles, avoiding very dry, technical text and complicated mathematical formulas. I will only cover as much of the mathematics behind how things work as is absolutely necessary for understanding.

    In this series, I will program a simple neural network from scratch with all its features. I will use the Python language for this.

    You might ask, what's the point of programming a neural network from scratch when there are ready-to-use frameworks where you can build any network with a few lines of code? Well, I think it's because it's exciting to look a little "behind the scenes" and understand how things work. And because it's fun!

    Next Article

    In the next article, we’ll take a closer look at how an artificial neuron works, and write our first Python code which demonstrates this in practice.
