Artificial intelligence algorithms | heise developer


In the field of artificial intelligence (AI) there are numerous algorithms for all kinds of problems. Which basic algorithms should one be able to place in this context?

Last week the topic was various basic concepts of artificial intelligence. They form the basis for dealing with some fundamental algorithms. Of course, there are countless algorithms for artificial intelligence, but some are more essential than others and therefore belong to the elementary tools of the trade.

It starts with the k-means algorithm, which automatically divides data into clusters; only the number of clusters is specified, not their type. Since the algorithm works completely autonomously, it is an unsupervised algorithm.

Ultimately, k-means consists of three steps. In the first, k clusters are created and the data are randomly assigned to them. In the second step, the mean value of each cluster, the so-called “centroid”, is calculated. In the third step, the data are reassigned so that each data point ends up in the cluster whose centroid it most closely resembles. These steps are repeated until no more changes occur.

This seems remarkably unspectacular and has shockingly little to do with “intelligence”, but it is actually one of the algorithms often taught as an introduction to AI. In principle, k-means is nothing more than a clever combination of guessing and arithmetic, combined with a bit of statistics. As sobering as that sounds, this is the core of pretty much every AI algorithm; they differ only in the complexity of the calculations.
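
A minimal sketch of these three steps in Python with NumPy may make this concrete; the toy data, the choice of k and the helper names are made up for illustration, and the guard against empty clusters is a practical detail the description above glosses over:

```python
import numpy as np

def k_means(data, k, iterations=100, seed=0):
    """Minimal k-means: guess, average, reassign, repeated until nothing changes."""
    rng = np.random.default_rng(seed)
    # Step 1: create k clusters by assigning every point to a random cluster.
    assignment = rng.integers(0, k, size=len(data))
    for _ in range(iterations):
        # Step 2: compute the mean value (centroid) of each cluster;
        # an empty cluster gets a random data point as its centroid.
        centroids = np.array([
            data[assignment == c].mean(axis=0) if np.any(assignment == c)
            else data[rng.integers(len(data))]
            for c in range(k)
        ])
        # Step 3: reassign each point to the cluster with the closest centroid.
        distances = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        new_assignment = distances.argmin(axis=1)
        if np.array_equal(new_assignment, assignment):  # nothing changed any more
            break
        assignment = new_assignment
    return centroids, assignment

# Toy data: two obvious groups of 2D points.
points = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1],
                   [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])
centroids, labels = k_means(points, k=2)
print(centroids)
print(labels)
```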

Clusters with K-Means

Genetic Algorithms


The situation is not much different with “genetic algorithms”, which are often used to solve (or at least approximate) optimization problems. Here, too, an initial solution is guessed at random and then improved iteratively, using various measures borrowed from genetics.


These include, for example, mutation, in which elements of a solution are randomly changed, and recombination, in which a new candidate is formed by merging different solution candidates. Since it is often difficult to evaluate a solution in absolute terms for optimization problems, one settles for relative comparisons: as long as the solution improves through mutation, recombination and so on, one is apparently moving in the right direction.

Candidates that worsen the result are usually discarded; this is where selection takes place. In this way an approximate solution to the traveling salesperson problem (TSP), for example, can be found with significantly less computational effort than solving the problem exactly.
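
The following sketch shows the idea for a toy TSP instance; the city coordinates, population size and number of generations are made up, and the mutation and recombination operators shown are just one common choice:

```python
import math
import random

random.seed(1)
# Toy instance: a dozen cities with made-up coordinates.
cities = [(random.random(), random.random()) for _ in range(12)]

def tour_length(tour):
    """Total length of the round trip; shorter is better."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def mutate(tour):
    """Mutation: randomly swap two cities in the tour."""
    a, b = random.sample(range(len(tour)), 2)
    child = tour[:]
    child[a], child[b] = child[b], child[a]
    return child

def recombine(parent1, parent2):
    """Recombination: take the start of one parent, fill up in the order of the other."""
    cut = random.randrange(1, len(parent1))
    head = parent1[:cut]
    return head + [city for city in parent2 if city not in head]

# Initial solutions are guessed at random and then improved iteratively.
population = [random.sample(range(len(cities)), len(cities)) for _ in range(50)]
for generation in range(200):
    offspring = [mutate(random.choice(population)) for _ in range(50)]
    offspring += [recombine(*random.sample(population, 2)) for _ in range(50)]
    # Selection: keep only the relatively best candidates, discard the rest.
    population = sorted(population + offspring, key=tour_length)[:50]

print("approximate best tour:", population[0])
print("length:", round(tour_length(population[0]), 3))
```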

Genetic Algorithms

Neural Networks

Neural networks, finally, are a somewhat different story. They are, so to speak, the prime example of successful AI algorithms, as many of the developments of recent years are based on advances in neural networks. However, this progress is due less to basic research than to faster hardware.


The basic concepts of neural networks are still the same today as they were 50 years ago. Only today, with powerful GPUs, extremely fast chips specialized in vector and matrix calculations are available, enabling things that one could only dream of a few decades ago.

In principle, a neural network consists of neurons, where a neuron in this case is a function that calculates a weighted sum over its inputs. The input values are usually between 0 and 1, but the weighted sum can be greater than 1. The result is therefore often normalized, for example with the sigmoid function. The weights can be used to emphasize or attenuate the individual summands.
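
As a small illustration, a single neuron might look like this in Python; the concrete numbers are made up, and the bias term is a customary addition that the description above does not mention explicitly:

```python
import numpy as np

def sigmoid(x):
    """Squashes the weighted sum back into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron(inputs, weights, bias=0.0):
    """A neuron: a weighted sum over its inputs, normalized with the sigmoid function."""
    return sigmoid(np.dot(weights, inputs) + bias)

# Input values between 0 and 1; the weights emphasize or attenuate individual summands.
x = np.array([0.2, 0.9, 0.5])
w = np.array([1.5, -0.8, 0.3])
print(neuron(x, w))
```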

If you arrange such neurons next to each other in one layer, place several such layers one behind the other and connect the layers, you get a neural network. The training is supervised, which means that errors can be calculated. A procedure called backpropagation can then be used to determine from those errors how the weights must be adjusted in order to reduce the error; in this way a neural network “learns”.

Ultimately, a neural network is nothing more than a very complex function. Usually three layers are used: an input, a hidden and an output layer. While the number of neurons in the first and third layers is fixed by the form of the input and the expected output, the size of the hidden layer is variable.
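
Putting the pieces together, a minimal sketch of such a three-layer network trained with backpropagation could look as follows; XOR as the task, the number of hidden neurons, the learning rate and the number of iterations are all illustrative choices, not prescriptions from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR as a tiny supervised task: for every input the expected output is known,
# so the error of the network can be calculated.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Three layers: 2 input neurons, 8 hidden neurons, 1 output neuron (sizes are illustrative).
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

learning_rate = 1.0
for step in range(10000):
    # Forward pass: weighted sums per layer, each normalized with the sigmoid function.
    H = sigmoid(X @ W1 + b1)      # hidden layer
    Y = sigmoid(H @ W2 + b2)      # output layer
    error = Y - T

    # Backpropagation: push the error backwards to determine how to adjust the weights.
    delta_out = error * Y * (1 - Y)
    delta_hidden = (delta_out @ W2.T) * H * (1 - H)
    W2 -= learning_rate * H.T @ delta_out
    b2 -= learning_rate * delta_out.sum(axis=0)
    W1 -= learning_rate * X.T @ delta_hidden
    b1 -= learning_rate * delta_hidden.sum(axis=0)

print(np.round(Y, 2))  # typically close to the expected [[0], [1], [1], [0]]
```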

Neural Networks

Generative, Convolutional & Co.

In the neural networks described so far, the data is only passed on from layer to layer, “forwards” so to speak, which is why one speaks of feed-forward networks. It does not have to be that way: feedback loops can be built in by having neurons influence themselves directly or indirectly. In this case one speaks of recurrent neural networks (RNN).
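
A single recurrent step can be sketched like this; the dimensions are made up, and tanh is used here as one common choice of activation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up dimensions: 3 input features per time step, 5 units of hidden state.
W_in = rng.normal(scale=0.5, size=(5, 3))   # weights for the current input
W_rec = rng.normal(scale=0.5, size=(5, 5))  # recurrent weights: the state feeds back into itself
b = np.zeros(5)

def rnn_step(x_t, h_prev):
    """One recurrent step: the new state depends on the input AND on the previous state."""
    return np.tanh(W_in @ x_t + W_rec @ h_prev + b)

# Process a short sequence; the hidden state carries information from step to step.
h = np.zeros(5)
sequence = rng.random((4, 3))  # 4 time steps of made-up input data
for x_t in sequence:
    h = rnn_step(x_t, h)
print(h)
```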

In addition, there are other types of neural networks, for example convolutional neural networks (CNN), which consider not only individual data values but also their environment. This is particularly useful in image and speech recognition, which is why CNNs are used especially often there.
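
The core operation of a CNN, the convolution, can be sketched without any framework; the toy image and the edge-detecting kernel are made up for illustration:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slides a small kernel across the image: every output value is a weighted sum
    over a pixel AND its neighbourhood, not over a single value in isolation."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 6x6 "image" with a vertical edge and a kernel that reacts to such edges.
image = np.array([[0, 0, 0, 1, 1, 1]] * 6, dtype=float)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
print(convolve2d(image, kernel))  # non-zero values appear exactly around the edge
```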

Neural networks can also be combined, for example by having one network generate data that another network checks for validity. Creativity can be imitated in this way: one network generates images, for instance, and a second decides whether the images meet certain criteria, from which the first network learns. One speaks here of generative adversarial networks (GAN).

Generative, Convolutional & Co.

Deep Learning

If a neural network contains more than one hidden layer, one speaks of deep learning. First of all, the computing effort increases enormously, but the hope is to get better results. That can indeed be true, but the question of how many neurons to put into the hidden layers now becomes harder, especially since the number of hidden layers itself also has to be chosen. A lot of trial and error is involved at this point.
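
As a rough sketch of this trial and error, one might loop over a few candidate architectures with an off-the-shelf library such as scikit-learn and compare the results; the dataset and the candidate layer sizes are arbitrary examples:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small standard dataset, scaled to the range 0..1.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X / 16.0, y, random_state=0)

# Trial and error: try a few candidate numbers of hidden layers and neurons
# and compare the results; none of these sizes is "the" correct answer.
for hidden in [(32,), (64, 32), (128, 64, 32)]:
    model = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    print(hidden, round(model.score(X_test, y_test), 3))
```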


This approach can be combined with the aforementioned types of neural networks; one then speaks, for example, of deep convolutional neural networks (DCNN), which admittedly sounds very lofty but is actually nothing other than the model described above, just with more computational effort, which in turn is only possible thanks to the powerful hardware available today.

Deep-Learning

Conclusion


The question that arises in all of this is: where will this take us? Real progress on a large scale is still in short supply today; we are still very far from a strong AI that would be on a par with humans. The first disillusionment occurred in the 1980s and 1990s, and it went so far that there was talk of the so-called AI winter at the time.

The modern AI developments of the past ten years are impressive on the one hand, but on the other hand they are not what they claim to be, because they are primarily due to increased computing power. It remains to be seen how long this will continue to work and when the second AI winter will set in. That it will come is certain; the only question is when.





[ source link ]
https://www.heise.de/developer/artikel/Algorithmen-fuer-kuenstliche-Intelligenz-5057766.html

