In the next section, we examine the network's functional components in light of the many challenges organizations face when implementing artificial intelligence in telecommunications.
1. Fragmentation between standardization and open-source bodies
The business significance of AI/ML is evident in the interest that both standardization and open-source bodies have shown in investigating how AI/ML can be applied to specific areas of the network, as well as in their efforts to secure governance over specific parts of the architecture.
ITU-T, ETSI ENI, ETSI ZSM, 3GPP, ONAP, and ORAN all have documented efforts in this area. Although much of the work is complementary, it is also fragmented.
Fragmentation scatters focus and causes hesitation, which in turn slows future adoption (for both network vendors and CSPs). This hesitation stems from the risk of conflicting guidance and the waste of duplicated effort.
Fragmentation is visible both in the use cases addressed and in the specifications of the AI functions defined by the SDOs and open-source communities. The overlap is most evident in network analytics: in 3GPP SA2 around NWDAF, in SA5 around MDAF, and in ORAN around the RIC.
This not only duplicates work inefficiently but also puts telecom service providers at risk of taking the wrong approach to adoption.
A further example of fragmentation is the effort by several standardization bodies to define the building blocks of AI/ML-enabled functions, such as inference functions, training functions, model storage, and data storage. The industry benefits from aligning on the underlying architecture and concepts; however, over-engineering can stifle innovation, and a proliferation of competing specifications slows adoption.
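To make the decomposition concrete, here is a minimal sketch of how such building blocks might fit together. The class names (`TrainingFunction`, `InferenceFunction`, `ModelStorage`) and the toy "mean" model are illustrative assumptions, not taken from any SDO specification.

```python
# Illustrative sketch only: names and the toy model are hypothetical,
# not drawn from 3GPP, ETSI, ONAP, or ORAN specifications.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Model:
    """A trained artifact produced by a training function."""
    name: str
    mean: float  # toy 'parameters': just the mean of the training data

@dataclass
class ModelStorage:
    """Storage function: holds trained models for later retrieval."""
    models: Dict[str, Model] = field(default_factory=dict)

    def put(self, model: Model) -> None:
        self.models[model.name] = model

    def get(self, name: str) -> Model:
        return self.models[name]

class TrainingFunction:
    """Training function: consumes data, produces a model."""
    def train(self, name: str, data: List[float]) -> Model:
        return Model(name=name, mean=sum(data) / len(data))

class InferenceFunction:
    """Inference function: uses a stored model to produce predictions."""
    def __init__(self, storage: ModelStorage):
        self.storage = storage

    def predict(self, model_name: str) -> float:
        return self.storage.get(model_name).mean

# Wiring the blocks together:
storage = ModelStorage()
storage.put(TrainingFunction().train("kpi_forecast", [1.0, 2.0, 3.0]))
print(InferenceFunction(storage).predict("kpi_forecast"))  # → 2.0
```

The point of aligning on such a decomposition is that training, inference, and storage can then be specified, deployed, and scaled independently; the risk the text describes is that each body defines these seams differently.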
Another area affected by fragmentation is data collection and management, that is, the capability that allows AI/ML applications to request, collect, and access data. This is being developed in parallel in 3GPP SA2, 3GPP SA5, ONAP, and ORAN, and the resulting interfaces are likely to overlap.
2. Getting AI into telecommunications is the biggest challenge
Real-world use of AI/ML does not wait for open-source or standardization discussions to conclude. Both telecom service providers and vendors are already integrating AI/ML capabilities into their live networks and organizations.
Nevertheless, AI/ML adoption is still in its infancy, so it is worth considering the hurdles that could slow its progress. Some examples follow:
- AI/ML-based functionality places new demands on life-cycle management (LCM) that traditional LCM practices do not cover.
- Limited access to data, expertise, and guidelines makes AI/ML models difficult to develop and train.
- Fragmentation across the various standardization and open-source initiatives continues to dilute the industry's focus, creating ripple effects.
- Building trust in automation takes time, since some AI/ML conclusions are difficult to explain. Human supervision and control must be applied so that safeguards can be introduced progressively.
- Vendors generally supply their own (differing) suites of tools and interfaces, which creates lock-in, dampens CSPs' willingness to open up, and slows deployment and maintenance.
- Few use cases are mature enough to justify short-term investment.
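The trust-building hurdle above can be illustrated with a simple human-in-the-loop dispatcher: an AI/ML recommendation is executed automatically only once its reported confidence clears a threshold, and is otherwise queued for human approval. The names, actions, and threshold below are hypothetical, chosen only to show the pattern of progressive safeguards.

```python
# Hypothetical sketch of progressive safeguards for AI/ML-driven automation.
# Action names and the 0.95 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float  # 0.0 .. 1.0, as reported by the model

def dispatch(rec: Recommendation, auto_threshold: float = 0.95) -> str:
    """Decide how a model recommendation should be handled."""
    if rec.confidence >= auto_threshold:
        return "execute"           # trusted enough for closed-loop automation
    return "await_human_approval"  # supervised mode until trust is established

print(dispatch(Recommendation("scale_out_upf", 0.99)))    # → execute
print(dispatch(Recommendation("reroute_traffic", 0.70)))  # → await_human_approval
```

As operational experience accumulates, the threshold can be lowered per action type, gradually widening the scope of closed-loop automation.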
3. Managing the AI/ML life cycle in telecommunications
AI/ML-based technology introduces training components and must handle model concept drift, federated learning, and data security, all of which affect how the technology is managed.
AI/ML adds new requirements to every phase of telecommunications software life-cycle management (LCM): development, validation, delivery, operation, and finally retirement.
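The phases named above can be sketched as a simple state machine. The phase names follow the text; the transition rules (for example, failed validation looping back to development) are an assumption made for illustration.

```python
# Minimal sketch of the LCM phases as a state machine.
# Phase names follow the text; transition rules are illustrative assumptions.
from enum import Enum

class Phase(Enum):
    DEVELOPMENT = "development"
    VALIDATION = "validation"
    DELIVERY = "delivery"
    OPERATION = "operation"
    RETIREMENT = "retirement"

# Allowed transitions; failed validation loops back to development.
TRANSITIONS = {
    Phase.DEVELOPMENT: {Phase.VALIDATION},
    Phase.VALIDATION: {Phase.DELIVERY, Phase.DEVELOPMENT},
    Phase.DELIVERY: {Phase.OPERATION},
    Phase.OPERATION: {Phase.RETIREMENT},
    Phase.RETIREMENT: set(),
}

def advance(current: Phase, target: Phase) -> Phase:
    """Move to the target phase, rejecting illegal jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target

state = Phase.DEVELOPMENT
for nxt in (Phase.VALIDATION, Phase.DELIVERY, Phase.OPERATION, Phase.RETIREMENT):
    state = advance(state, nxt)
print(state.value)  # → retirement
```

For AI/ML software, each transition carries extra obligations compared to classic software LCM, such as revalidating a model after retraining before it re-enters operation.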
LCM processes assign specific roles to vendors and CSPs. When it comes to duties and deliverables between partners, the CSPs determine who is responsible for what and who delivers what to whom.
For more than 20 years, communications infrastructure companies have relied on a well-established, well-recognized, and proven LCM process for software delivery.
As an industry, it is our responsibility to evolve this LCM process for AI/ML-based software, maximize its potential, avoid fragmentation, and steer clear of the concerns described above.
4. In the telecom industry, data access is a challenge
The availability of relevant data is essential for developing and training any AI/ML model. This requires identifying the data sources in the infrastructure and their processing capabilities.
Since data volumes can be enormous, unnecessary data transfers must also be kept to a minimum. Applying filtering and preprocessing at the data sources can drastically reduce the amount of data transferred over the network.
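A minimal sketch of source-side reduction, assuming a simple drop-and-aggregate pipeline: samples below a threshold are filtered out at the source, and the survivors are averaged per window before anything crosses the network. The sample values, threshold, and window size are made up for illustration.

```python
# Illustrative sketch: filter and aggregate measurements at the data source
# before transmission. Threshold and window size are assumptions.
def source_side_pipeline(samples, threshold=0.0, window=5):
    """Drop samples at/below threshold, then average each window of survivors."""
    kept = [s for s in samples if s > threshold]
    return [
        sum(kept[i:i + window]) / len(kept[i:i + window])
        for i in range(0, len(kept), window)
    ]

raw = [0.0, 1.0, 2.0, 0.0, 3.0, 4.0, 5.0, 0.0, 6.0, 7.0, 8.0, 9.0, 10.0]
summary = source_side_pipeline(raw, threshold=0.0, window=5)
print(len(raw), "->", len(summary))  # 13 raw samples reduced to 2 values
print(summary)                       # → [3.0, 8.0]
```

Even this toy pipeline cuts 13 raw samples down to 2 transmitted values; at network scale, the same idea applied to counters and traces is what keeps data collection tractable.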
In the case of AI/ML, the vendor typically performs the initial training, and the data pertinent to this process must be made available. To improve prediction quality in the target network, it may then be necessary to retrain the AI/ML model with local data.
CSPs and vendors must be able to agree on the pricing, ownership, and protection of data, and these agreements should be part of the data ecosystem. In addition, the technical solutions must be adaptable enough to handle differences between countries, and the reliability and security standards must be compatible with regulatory policies.