On May 7, 2025, I attended the SC483 short course "Machine Learning in Optical Networks", taught by Professor Massimo Tornatore of Politecnico di Milano, Italy. Although I didn't fully understand everything, I was able to grasp some of the content and would like to share it with readers.
Professor Tornatore's course is divided into three parts: the first covers basic concepts, the second focuses on applications of machine learning to optical network problems, and the third is a simulation exercise on using machine learning to evaluate the transmission quality of optical networks (I couldn't keep up with this part at all).

Let's first look at the final example: a domain-specific large model applied to optical networks. We want AI to help us design the lightpath between two nodes. First, we should recognize that relying on an ordinary large language model (LLM) to solve problems like optical network design invites hallucination: because LLMs work by statistical prediction, their output is not controllable. A sufficiently large parameter count and in-context learning can improve an LLM's output, but that alone is not enough. At last year's ECOC, a post-deadline paper (PDP) titled "AI Agents in Optical Network Digital Twins" built AI agents for optical networks on top of domain/task-specific LLMs and optical network digital twins, and on that basis carried out tasks such as quality-of-transmission (QoT) estimation, failure management, traffic prediction, resource allocation, and even sensing applications.
Why can machine learning achieve all this? Let's go back to the basic concepts. What is machine learning (ML)? A. Samuel defined it in 1959 as "the field of study that gives computers the ability to learn without being explicitly programmed". More specifically, in the field of optical networks it refers to a family of statistical and mathematical tools for making decisions based on monitored data. Machine learning is a very popular concept today. Together with natural language processing, expert systems, machine vision, and so on, it falls under the umbrella of artificial intelligence (AI); it includes deep learning and also overlaps with the previously popular fields of data mining and big data. Machine learning algorithms can be classified as follows:
Supervised learning: the data is labeled; applications include traffic prediction and speech and image recognition. Typical algorithms include linear regression, logistic regression, and neural networks.
Unsupervised learning: the data is unlabeled; applications include segmenting and characterizing mobile-phone users and targeted advertising. Typical algorithms include K-means and Gaussian mixture models.
Semi-supervised learning: a mixture of the two above.
Reinforcement learning: no labels; the system learns from reward signals, in a spirit similar to control theory/dynamic programming.
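To make the supervised case concrete, here is a minimal sketch: fitting a least-squares line to labeled (hour, traffic) pairs and using it to forecast the next value, which is the simplest form of the traffic prediction mentioned above. The traffic numbers are invented for illustration, not real measurements.

```python
# Supervised learning sketch: ordinary least squares on 1-D data,
# y = a*x + b, then a one-step-ahead forecast.

def fit_line(xs, ys):
    """Closed-form least squares for a line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical hourly traffic samples (Gbit/s) on one link.
hours = [0, 1, 2, 3, 4]
traffic = [10.0, 12.1, 13.9, 16.2, 18.0]

a, b = fit_line(hours, traffic)
forecast = a * 5 + b  # predicted traffic at hour 5
print(round(forecast, 2))
```

Real traffic prediction would of course use richer features and models, but the principle is the same: labeled history in, a predictive function out.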
Here we need to briefly explain another popular term, the "neural network". This is a bionic concept that imitates biological neural networks, performing regression with logistic units, or neurons. When the number of layers increases, we get what is called deep learning. We should also note that in many cases a neural network is not necessary: there are simpler algorithms such as K-nearest neighbours (KNN) and random forests.
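As an example of how simple such an algorithm can be, here is a K-nearest-neighbours classifier in a few lines of pure Python. The 2-D feature vectors and labels are invented toy data, just to show the mechanics: classify a query point by majority vote among its k closest training points.

```python
# Minimal KNN classifier (Euclidean distance, majority vote).
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest neighbours."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Toy dataset: two well-separated clusters.
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B"), ((1.1, 0.9), "B")]

print(knn_predict(train, (0.15, 0.1)))  # near cluster A
```

There is no training phase at all: KNN simply memorizes the data, which is why it is often a good baseline before reaching for a neural network.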

The editor with Professor Tornatore (center) and Professor Memedhe Ibrahimi (left)
Before taking this course, I was actually quite confused about whether AI really has intelligence. Here are some personal takeaways. First, AI can indeed exhibit intelligence: although it predicts with statistical algorithms, as long as the underlying dataset is large enough, it can indeed give the desired results. Second, AI cannot make genuinely novel predictions, because it is only looking for the closest next data point among a vast amount of known data; so it can only replace routine, transactional work. Third, implementing AI requires deep human participation, whether in the training process, in judging the results, or in questioning the AI.
Back to the applications in optical networks: how can machine learning help improve optical network design and the quality of transmission (QoT)? A good optical network design builds a network with as low a margin as possible. This requires both ML-based QoT estimation and routing and spectrum assignment (RSA) (and, going further, the joint assignment of routing, wavelength, modulation format, and coding rate, RWMCA). The details involved are beyond my current knowledge, so I won't introduce them here. Beyond QoT, the applications of machine learning in optical networks also include failure management, optical amplifier control, automatic modulation format recognition, nonlinearity mitigation, sensing, traffic prediction, virtual topology design, traffic flow classification, and so on.
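ML-based QoT estimation is often framed as binary classification: given lightpath features, predict whether the path will meet a quality threshold. The sketch below uses logistic regression trained with plain gradient descent on two hypothetical, normalised features (path length and span count); the data and the decision boundary are invented for illustration and bear no relation to any real system.

```python
# QoT estimation as binary classification: 1 = QoT acceptable, 0 = not.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(samples, lr=0.1, epochs=2000):
    """samples: list of ((x1, x2), y) with y in {0, 1}.
    Stochastic gradient descent on the logistic loss."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            p = sigmoid(w1 * x1 + w2 * x2 + b)
            err = p - y
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

# Hypothetical training set: (normalised length, normalised span count).
# Short paths with few spans tend to have acceptable QoT (label 1).
data = [((0.1, 0.1), 1), ((0.2, 0.3), 1), ((0.3, 0.2), 1),
        ((0.8, 0.9), 0), ((0.9, 0.7), 0), ((0.7, 0.8), 0)]

w1, w2, b = train_logreg(data)
short_path = sigmoid(w1 * 0.15 + w2 * 0.2 + b)   # expected near 1
long_path = sigmoid(w1 * 0.85 + w2 * 0.8 + b)    # expected near 0
print(short_path > 0.5, long_path < 0.5)
```

In practice such classifiers let operators shave design margins: instead of a worst-case analytical model, the network accepts a lightpath when the learned model predicts its QoT is adequate.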
With the support of AI/ML, future optical networks will be highly elastic, and optical modules will be elastic too, supporting multiple modulation formats and rates. Perhaps one day, looking back on this learning experience, I will find it truly valuable.