Volume №4(32) / 2023
Articles in this issue
The paper shows that the construction of fast transformations (similar to the FFT) is based on self-similar structures that can equally well be used to build fast neural networks (FNN). It is shown that the class of fast transformations is determined by system invariants of the morphological level and can be described as the morphogenesis of terminal projections of neural modules. Linguistic models are proposed to describe the morphology, structure and topology of regular self-similar neural networks. The models are easily generalized to multidimensional variants of neural networks of this class. Owing to their structure, FNNs have special learning algorithms that differ fundamentally from classic error backpropagation in that they contain no backward error-propagation mechanism. The learning algorithms are based on the methods of multiplicative factorization of images and fast transformations proposed in the work. The developed algorithms terminate in a finite number of steps with guaranteed convergence. Consistent development of the concept of self-similarity leads to methods for creating fast neural networks with deep learning. Self-similar neural networks have the unique ability to learn from new data without losing previously acquired knowledge. It is shown that FNNs can be used to create high-speed random-access image memory and complex combinational logic devices. The paper presents the results of the author's research on the following issues: biological prerequisites for the self-similarity of neural networks; self-similar multilayer structures, morphogenesis, and stratification of model representations; algorithms for fast transformations, fast neural networks, and tuning methods; training FNNs on reference functions; plasticity of FNNs; pyramidal neural networks for deep learning; multi-channel correlators; and implementation of memory and combinational logic on pyramidal structures. The research results will be presented in three parts of the article.
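To illustrate the idea of an FFT-like self-similar structure, here is a minimal sketch (not the author's implementation): a network on N = 2^n nodes with log2(N) sparse layers, where each layer connects nodes in butterfly pairs exactly as the fast Fourier transform does; all weights are hypothetical placeholders.

```python
# Sketch of a butterfly-connected "fast" network: log2(N) sparse layers
# instead of one dense N x N layer, mirroring the FFT's self-similar topology.
import numpy as np

N = 8                      # number of inputs, a power of two
layers = int(np.log2(N))   # log2(N) sparse layers

rng = np.random.default_rng(0)
# One 2x2 weight block per butterfly pair per layer (hypothetical weights).
weights = [rng.normal(size=(N // 2, 2, 2)) for _ in range(layers)]

def forward(x):
    """Propagate x through the butterfly-connected layers."""
    x = x.astype(float).copy()
    for s in range(layers):
        stride = 2 ** s
        y = np.empty_like(x)
        pair = 0
        for i in range(N):
            if i % (2 * stride) < stride:    # first element of a butterfly
                j = i + stride               # its partner, a stride away
                w = weights[s][pair]
                y[i] = w[0, 0] * x[i] + w[0, 1] * x[j]
                y[j] = w[1, 0] * x[i] + w[1, 1] * x[j]
                pair += 1
        x = y
    return x

print(forward(np.arange(N)))
```

With appropriately chosen 2x2 blocks this structure reproduces the FFT itself; with trainable blocks it becomes a sparse network with O(N log N) parameters instead of O(N^2).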
According to the authors, the results of the theory of linear inequalities should be used more widely in machine learning problems. To eliminate redundant inequalities when constructing ensembles based on linear separators, one should use theorems on dependent inequalities and their consequences. To generalize the results of various studies, search models for maximally compatible subsystems should be used, and the concept of maximally compatible subsystems should be extended to include unanimity committees. Classification problems can be solved effectively by the method of convex hulls, and the extreme points of convex hulls can be determined using results from the theory of alternative systems. The article provides the necessary background from the theory of linear inequalities, mathematical models based on it, and program listings for the computer implementation of these models.
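A minimal sketch of the convex-hull classification idea (not the article's listing, and using SciPy rather than whatever implementation the authors provide): each class is represented by the extreme points of its convex hull, and a query point is assigned to a class whose hull contains it.

```python
# Two-class classification by convex hulls: hull vertices are the extreme
# points of each training set; membership is tested via a Delaunay lookup.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(1)
class_a = rng.normal(loc=[0, 0], size=(30, 2))
class_b = rng.normal(loc=[4, 4], size=(30, 2))

# Extreme points (hull vertices) are enough to represent each class.
extreme_a = class_a[ConvexHull(class_a).vertices]
extreme_b = class_b[ConvexHull(class_b).vertices]

def in_hull(points, x):
    """True if x lies inside the convex hull of `points`."""
    return Delaunay(points).find_simplex(x) >= 0

x = np.array([0.5, 0.2])
print("in hull A:", in_hull(extreme_a, x), "| in hull B:", in_hull(extreme_b, x))
```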
The advantages of mivar expert systems of logical artificial intelligence (AI) lie in the drastic reduction of the computational complexity of automatically constructing algorithms and performing inference, from N! to linear N, and in the expansion of basic "If, Then" productions of formal logic to the execution of computational procedures in a single information and control space. This makes it possible to create decision support systems, simulate real business processes, plan and re-plan the actions of robotic complexes and cyber-physical systems in real time, and automatically construct algorithms for solving problems from a mivar knowledge base. "Big Knowledge" is the combination and synthesis of heterogeneous knowledge bases, which provides a qualitative transition and opens great opportunities for creating AI systems. The inference engine "Razumator" is developed by programmers, while Big Knowledge is created by analysts known as knowledge engineers, cognitologists, and so on. Big Knowledge can be expanded gradually, both quantitatively and qualitatively, for example to create an Active Mivar Encyclopedia. The creation of new knowledge bases for various subject areas has broadened the application areas of mivar technologies and made it possible to solve problems new to expert systems: planning robot routes; optimizing resource allocation; action planning and project budgeting; dynamic calculation and comparison of multidimensional vectors; selection of teams and combinations of characters; information security, and many others. In essence, these are all problems for which a person reasons using If-Then rules or Entry-Action-Exit procedures. In addition, for building complex AI systems, mivar technologies are successfully combined with neural network methods, for example for image and speech recognition. Logical artificial intelligence has been created on the basis of mivar technologies, and it must now be trained by creating Big Knowledge, which will raise labor productivity, enable autonomous intelligent robots, and much more.
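A minimal sketch of mivar-style forward inference (this is an illustration of the general idea, not the Razumator engine): computational "If <inputs known>, Then <compute output>" rules over parameters, where each rule fires at most once. With rules indexed by the parameters they consume, the solution path is built in time linear in the number of rules.

```python
# Toy mivar-style inference: rules map known parameters to new ones.
# Rules and formulas below are hypothetical.
rules = [  # (required inputs, produced output, computation)
    ({"a", "b"}, "c", lambda v: v["a"] + v["b"]),
    ({"c"}, "d", lambda v: 2 * v["c"]),
    ({"b", "d"}, "e", lambda v: v["b"] * v["d"]),
]

def infer(known, goal):
    """Fire each rule at most once until the goal parameter is computed."""
    known = dict(known)
    pending = list(rules)
    progress = True
    while goal not in known and progress:
        progress = False
        for rule in pending[:]:
            inputs, output, fn = rule
            if inputs <= known.keys():     # all inputs are already known
                known[output] = fn(known)  # fire the rule once
                pending.remove(rule)
                progress = True
    return known.get(goal)

print(infer({"a": 1, "b": 2}, "e"))  # c = 3, d = 6, e = 12
```

This naive scan is quadratic in the worst case; a production engine keeps a parameter-to-rule index so each rule is touched a constant number of times, which is what yields the linear complexity claimed above.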
An efficient algorithm for the quasi-optimal solution of the traveling salesman problem by a team of agents using an evolutionary matching method is proposed and investigated. The method is based on genetic algorithms: the chromosomes of individuals consist of triangles of the Delaunay triangulation obtained from the Voronoi diagram. The results of a program developed on the basis of the proposed algorithm are compared with existing methods.
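The following sketch shows only the geometric preprocessing the abstract describes, under stated assumptions: the Delaunay triangulation restricts the search to locally short edges, and a quasi-optimal tour is built greedily over the triangulation graph. The genetic operators over triangle chromosomes are the paper's contribution and are not reproduced here.

```python
# Build a candidate-edge graph from the Delaunay triangulation, then walk it
# greedily, falling back to the nearest unvisited city when no Delaunay
# neighbour is free.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
cities = rng.random((50, 2))
tri = Delaunay(cities)

# Adjacency from triangles: each triangle contributes its three edges.
adj = {i: set() for i in range(len(cities))}
for a, b, c in tri.simplices:
    adj[a] |= {b, c}; adj[b] |= {a, c}; adj[c] |= {a, b}

def tour_length(order):
    return sum(np.linalg.norm(cities[order[i]] - cities[order[i - 1]])
               for i in range(len(order)))

visited, tour = {0}, [0]
while len(tour) < len(cities):
    cur = tour[-1]
    cand = [j for j in adj[cur] if j not in visited] or \
           [j for j in range(len(cities)) if j not in visited]
    nxt = min(cand, key=lambda j: np.linalg.norm(cities[cur] - cities[j]))
    visited.add(nxt); tour.append(nxt)

print("tour length:", round(tour_length(tour), 3))
```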
Cognitive-behavioral processes routinely deviate from classical models, which assume the deterministic rationality of actors. The paper revisits the logical foundations behind these limitations. A major such premise is shown to be the use of classical probability theory and the underlying logic of sets (including fuzzy ones), sometimes mistakenly considered the only option. In line with the natural dualisms of discreteness-continuity and particle-wave, another possibility is the logic of waves and the corresponding probability calculus. The efficiency of this logic in cognitive-behavioral modeling is demonstrated on the "prisoner's dilemma" and "two-stage gamble" experiments. A wave-like probabilistic model complements the logic of sets with an additional factor that quantifies violations of classical rationality in these experiments. The phases of the interfering cognitive waves in this model account for the subjective-semantic regularities of natural thinking that classical approaches ignore. These new regularities enable probabilistic forecasting of "irrational" decisions in novel contexts. Unique features of wave logic, common to optical, holographic, and quantum information processing, open new prospects for cognitive-behavioral modeling and data analysis.
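A worked numeric sketch of the wave-probabilistic correction (the formula is the standard quantum-probability interference term; the input probabilities below are illustrative, not the paper's data): the law of total probability gains an additional term whose phase quantifies the deviation from classical set-based rationality.

```python
# p(B) = p(A)p(B|A) + p(~A)p(B|~A)
#        + 2*sqrt(p(A)p(B|A)p(~A)p(B|~A))*cos(phi)
# At phi = 90 deg the interference vanishes and the classical law is recovered.
import math

p_a, p_b_given_a, p_b_given_not_a = 0.5, 0.69, 0.59  # hypothetical inputs
classical = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
interference = 2 * math.sqrt(p_a * p_b_given_a * (1 - p_a) * p_b_given_not_a)

for phi_deg in (60, 90, 120):
    p_b = classical + interference * math.cos(math.radians(phi_deg))
    print(f"phi = {phi_deg:3d} deg -> p(B) = {p_b:.3f}")
```

Fitting the phase to observed choice frequencies is what lets the model forecast "irrational" decisions in new contexts.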
The paper presents the results of analyzing and predicting the educational outcomes of first-year university students in a particular discipline using machine learning. The relevance of the topic stems from the need for universities to compete successfully in an educational services market characterized by a low number of applicants and growing requirements for the quality of vocational education on the part of both applicants and the state. An important component of effective decision-making in managing the quality of the educational process is learning analytics, which makes it possible to predict students' academic performance and to identify factors that significantly affect the achievement of high educational results. The study demonstrated the possibility of predicting first-year students' exam results in a particular discipline from the data of midterm assessments conducted by deans' offices during the semester, in order to identify groups of students at increased risk of academic failure. The prediction accuracy of the constructed models (neural network, decision tree, and logistic regression) proved quite acceptable at both the first and the second midterm assessment. The results are of practical importance for university administrations and for teachers: the predictive models can be used to anticipate expulsion due to academic failure, and they can be embedded in educational information systems to assist teachers in decision-making during the delivery of the discipline.
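A minimal sketch of the modeling setup, assuming a table of midterm scores and a pass/fail exam label (the features and data here are synthetic stand-ins for the real student records). The paper compares a neural network, a decision tree, and logistic regression; the latter is shown.

```python
# Predict exam pass/fail from two midterm scores with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X = rng.uniform(0, 100, size=(300, 2))   # scores of two midterm assessments
y = ((0.5 * X[:, 0] + 0.5 * X[:, 1]      # synthetic "exam passed" label
      + rng.normal(0, 10, 300)) > 50).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
```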
In this work, mathematical models have been developed and the sensitivity of plate vibrations has been tested with allowance for rotation and heating, with the numerical analysis then extended to radial wheels of turbomachines. First, the influence of linear and quadratic laws of temperature variation on the static and dynamic characteristics of a plate was studied using the BLADIS+ and ANSYS programs. The calculated natural frequencies agree between the two programs and with an analytical solution. The calculations show that the natural frequencies of the plate decrease under a quadratic law of temperature variation but increase with rotation. To understand the effect of rotation and heating on sensitivity, the vibration modes and their sensitivity to changes in the studied frequencies were simulated in SOLIDWORKS, ANSYS WORKBENCH, and MATLAB. Since the plate tests showed good convergence of the natural frequency calculations, it was decided to extend the developed mathematical models and software packages to real radial wheels of turbomachines. The cover disc exhibits the greatest deformation when rotation is taken into account; with rotation and non-uniform heating, the zone of reduced vibration frequencies shrinks at the upper edge of the blades but grows in the middle of their leading edge. The resulting sensitivity analyses for the plate and the real radial wheel, accounting for rotation and heating, make it possible to reduce the volume of expensive experimental studies and to shorten the design time of new machines with respect to efficiency, reliability, manufacturability, and resource saving of highly loaded machine units.
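For reference, a minimal sketch of the kind of analytical baseline the numerical frequencies are checked against: the classical natural frequencies of a simply supported thin rectangular plate, w_mn = pi^2 ((m/a)^2 + (n/b)^2) sqrt(D/(rho*h)) with flexural rigidity D = E h^3 / (12(1-nu^2)). The material values are illustrative; heating that reduces the effective modulus E lowers the frequencies, consistent with the trend reported above.

```python
# Natural frequencies (Hz) of a simply supported rectangular plate.
import math

E, nu, rho = 200e9, 0.3, 7850.0   # steel: Pa, -, kg/m^3 (illustrative)
a, b, h = 0.3, 0.2, 0.005         # plate dimensions, m

def frequency_hz(m, n, E_eff):
    D = E_eff * h**3 / (12 * (1 - nu**2))       # flexural rigidity
    w = math.pi**2 * ((m / a)**2 + (n / b)**2) * math.sqrt(D / (rho * h))
    return w / (2 * math.pi)

for E_eff, label in ((E, "cold"), (0.9 * E, "heated, E reduced 10%")):
    print(label, [round(frequency_hz(m, n, E_eff), 1)
                  for m, n in ((1, 1), (1, 2), (2, 1))])
```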
The article considers the problem of identifying an internal heat source and assessing its influence on the temperature of a monitored object. The heat source identification problem arises in thermal monitoring of power transmission, in heat treatment, and in non-destructive testing of buildings, structures and materials. The mathematical model of heat transfer inside the object is a heat conduction equation with an unknown source function, initial conditions, and boundary conditions formed from noisy temperature measurements taken near the object's surface. The article proposes an approach to identifying the internal heat source based on reducing the inverse problem to an integral equation, a numerical method for solving it, and an algorithm for calculating non-stationary internal heat fields that accounts for the influence of the source. The stability of the identification method with respect to errors in the initial data is ensured by the choice of regularization parameters. In contrast to existing approaches, the proposed methods establish an explicit dependence of the desired internal heat source function on boundary measurements in situations where the temperature fields near the object's surface change with time. The article presents error estimates for the numerical solutions obtained by comparison with test values. The experimental results indicate that the proposed methods reduce the negative impact of noise on the accuracy of data processing, determine the internal thermal state of an object from indirect measurements with sufficient accuracy, and can serve as a basis for assessing the influence of an internal heat source on the formation of internal non-stationary temperature fields.
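A minimal sketch of the regularized inversion step common to such approaches (a generic smoothing kernel stands in for the heat-conduction Green's function of the article): the first-kind integral equation K q = u is discretized and solved by Tikhonov regularization, which provides the stability against measurement noise mentioned above.

```python
# Discretize K q = u and recover the source q from noisy data u.
import numpy as np

n = 100
s = np.linspace(0, 1, n)
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.02) / n  # smoothing kernel

q_true = np.sin(2 * np.pi * s)            # "true" source function (test value)
rng = np.random.default_rng(4)
u = K @ q_true + rng.normal(0, 1e-3, n)   # noisy boundary measurements

alpha = 1e-5                              # regularization parameter
q_hat = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ u)
err = np.linalg.norm(q_hat - q_true) / np.linalg.norm(q_true)
print("relative error:", round(err, 3))
```

Without the alpha term the system is severely ill-conditioned and the noise is amplified uncontrollably; the choice of alpha trades bias against noise sensitivity.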
This article presents the results of computational studies of the dynamics of the VVER-SKD reactor plant, performed using the RELAP5/MOD3.3 program. A characteristic feature of this reactor is a quite noticeable change in coolant density across the core, which, in combination with neutron-physical feedback, creates risks of thermal-hydraulic and coupled neutron-thermal-hydraulic instability, especially during transients. It is therefore necessary to be able to predict the onset of instabilities and to assess the stability limits of the system. For these purposes, a computational model of the reactor plant was developed. The choice of the RELAP5 code is motivated by the extensive experience of its use in justifying the safety of existing water-cooled reactors. To enable RELAP5 calculations of the VVER-SKD, the water property tables were extended and refined in the region of supercritical pressures and temperatures. With the developed model of the VVER-SKD reactor plant, calculations of a conditional reactor start-up to the nominal operating mode were performed. The influence of deviations in feed water temperature and flow rate on the dynamics of the reactor plant was also considered. Based on the calculation results, conclusions are drawn about the stability of the reactor plant in the nominal mode.
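An illustrative toy model of the feedback mechanism behind the instability discussed above (this is not RELAP5, and all constants are hypothetical placeholders): one-delayed-group point kinetics with a density-type reactivity feedback, integrated explicitly; a negative feedback coefficient drives the power to a bounded new equilibrium after a reactivity step.

```python
# Point kinetics with reactivity feedback: stable if the feedback is negative.
beta, lam, Lam = 0.0065, 0.08, 1e-4  # delayed fraction, decay const, gen. time
alpha_fb = -0.002                    # feedback coefficient (stabilizing if < 0)
dt, steps = 1e-3, 20000

n = 1.0                              # normalized power
c = beta / (lam * Lam)               # precursor concentration (equilibrium)
T = 0.0                              # normalized "temperature"/density state
rho_ext = 0.1 * beta                 # small external reactivity step
for _ in range(steps):
    rho = rho_ext + alpha_fb * T
    dn = ((rho - beta) / Lam) * n + lam * c
    dc = (beta / Lam) * n - lam * c
    dT = n - 1.0 - 0.5 * T           # heat-up minus removal (normalized)
    n, c, T = n + dt * dn, c + dt * dc, T + dt * dT

print("power settles near:", round(n, 3))  # bounded -> stable operating point
```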
This article is the second in a series devoted to studying the resilience of isolated local-level energy complexes, or autonomous microgrids, using the previously developed digital twin technology for a complex technical system. Resilience is the ability of a microgrid to adapt to large disturbances and to restore its original state after their impact. The study of the resilience of energy complexes is traditionally based on multivariate computational experiments. However, a digital twin linked to a real microgrid or test bench makes it possible to combine computational and field experiments in resilience studies. Two-way communication between the digital twin and the equipment of a microgrid or test bench is provided by a specialized subject-oriented environment, whose architecture is presented in this article. The proposed architecture includes a monitoring system that, in addition to collecting data on the state of computing facilities and communication equipment, is adapted to collect data from the instrumentation of power equipment and microgrid automation. The article also develops a methodology for assessing the resilience of an autonomous microgrid using its digital twin. The inputs of the methodology are the values of the digital twin parameters, information from the monitoring system, microgrid configurations, performance indicators, and summary indicators; the output is a set of resilience curves. The developed methodology will be used further in solving various classes of problems in the subject area of resilience research, for example in analyzing the vulnerability of microgrids.
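A minimal sketch of the output side of such a methodology (the specific summary metric is an assumption; the article may define its indicators differently): a resilience curve is the performance indicator P(t) through a disturbance, and one common scalar summary is the ratio of the area under P(t) to the area under the undisturbed baseline.

```python
# Build a synthetic resilience curve: drop on disturbance, linear restoration.
import numpy as np

t = np.linspace(0, 100, 1001)                 # time, s
P = np.ones_like(t)                           # normalized performance
P[(t >= 20) & (t < 30)] = 0.4                 # large disturbance: drop
recov = (t >= 30) & (t < 60)
P[recov] = 0.4 + 0.6 * (t[recov] - 30) / 30   # restoration to original state

resilience = np.trapz(P, t) / np.trapz(np.ones_like(t), t)
print("resilience index:", round(resilience, 3))   # 1.0 = no loss
```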
A comprehensive analysis of the subject area was carried out. A modular block diagram of the medical decision support system was constructed, along with a functional model that accurately represents the tasks the system performs. An approach to data preprocessing is proposed, covering outlier analysis and the processing and recovery of missing data, as part of the algorithmic base the medical decision support system uses when choosing patient treatment trajectories.
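A minimal sketch of the two preprocessing stages named above, outlier analysis and missing-data recovery (the columns and values are hypothetical, and the specific techniques, IQR fences and k-nearest-neighbor imputation, are common choices rather than the article's confirmed ones):

```python
# Flag outliers as missing, then recover all missing values by kNN imputation.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

df = pd.DataFrame({"age": [34, 45, 29, 120, 51, np.nan],
                   "glucose": [5.1, np.nan, 4.8, 5.6, 40.0, 5.3]})

# 1. Outlier analysis: values outside the 1.5*IQR fences become missing.
for col in df.columns:
    q1, q3 = df[col].quantile([0.25, 0.75])
    fence = 1.5 * (q3 - q1)
    df.loc[(df[col] < q1 - fence) | (df[col] > q3 + fence), col] = np.nan

# 2. Missing-data recovery: impute from the k most similar patients.
recovered = pd.DataFrame(KNNImputer(n_neighbors=2).fit_transform(df),
                         columns=df.columns)
print(recovered)
```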
The paper considers obtaining morphometric characteristics of the relief using modern methods of Earth remote sensing and geoinformation technologies to support the agricultural monitoring system of the FRC KSC SB RAS. The initial data were the new global digital elevation model (DEM) FABDEM and high-precision aerial photographs obtained from an unmanned aerial vehicle (UAV) together with specialized geodetic equipment. On this basis, a number of thematic maps displaying the main morphometric characteristics of the relief were developed in the geoinformation systems QGIS, ArcGIS and Sputnik Agro. The maps meet the standard requirements for cartographic materials (correct cartographic projection, format, standard legend, etc.). The morphometric characteristics of the relief derived from the FABDEM dataset and from UAV imagery were compared; their high degree of similarity demonstrates that FABDEM is a suitable source of terrain information when UAV surveying is not possible. As a result, a large amount of information was obtained on the structure of the surface relief of the Mikhailovskoye agricultural experimental production facility (EPF). To store, distribute and analyze this information, the thematic maps, saved in the QGIS project format, were imported into the agricultural monitoring system of the FRC KSC SB RAS. The joint use of the terrain maps and the data already in the system (satellite data, vegetation indices, soil and climatic characteristics) will make it possible to take the state of agricultural land into account when developing strategies for its management and use.
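A minimal sketch of deriving two basic morphometric characteristics, slope and aspect, from a DEM grid (a synthetic surface stands in for FABDEM tiles, which would normally be read with a raster library such as rasterio; the GIS packages named above compute the same quantities):

```python
# Slope and aspect from elevation derivatives on a regular grid.
import numpy as np

cell = 30.0                                   # grid resolution, m
x, y = np.meshgrid(np.arange(100), np.arange(100))
dem = 50 * np.sin(x / 20.0) + 0.5 * y         # synthetic elevations, m

dz_dy, dz_dx = np.gradient(dem, cell)         # d(elev)/dy, d(elev)/dx
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
aspect_deg = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360  # 0 = north

print("mean slope, deg:", round(slope_deg.mean(), 2))
```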
Systematic mapping, categorization and comparative analysis of organizational knowledge allow executives to make informed managerial decisions. The fragmentation and heterogeneity of knowledge sources, as well as the lack of a unified way to assess them, are the key problems in building a system of intelligent decision support based on knowledge mapping. This paper presents the development and testing of a knowledge mapping methodology in response to this problem. A distinctive feature of the approach is its methodological reliance on ontological engineering. An ontology was developed and submitted to the working group for refinement; it covered the following areas: functional knowledge, in-depth profile knowledge, command of specialized software, knowledge of contractors, and knowledge gained from project work. Data for the digital map were collected through an employee survey and supplemented with information provided by the subdivision.
The article discusses the source data, their structure and the processing methods used in the GeoGIPSAR information-forecasting system, which was developed and continues to be developed at the Melentiev Energy Systems Institute (MESI) of SB RAS. The system successfully uses data from modern global climate models (NOAA, HPCC, etc.) to predict inflow to large reservoirs and lakes and to assess their level regimes. The article describes the structure and types of global climate model indicators used in practice when forecasting runoff in river basins. The main steps of data collection, processing and analysis are considered, and the operation of the system is demonstrated using the developed map visualizer. It is noted that research on forecasting useful inflow is needed for effective management of the level regime of Lake Baikal. Based on the results of the work carried out, it is planned to extend the system with a web interface for analyzing and generating forecast indicators.
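A minimal sketch of the forecasting step under stated assumptions (the article does not fix a particular model here; a linear regression of inflow on climate-model indicators is shown, with synthetic stand-ins for NOAA-type precipitation and temperature fields):

```python
# Regress monthly inflow on climate indicators, then forecast the last year.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
precip = rng.gamma(2.0, 20.0, 120)        # monthly precipitation index
temp = rng.normal(0.0, 1.5, 120)          # temperature anomaly
inflow = 3.0 * precip - 40.0 * temp + rng.normal(0, 25, 120)  # synthetic

X = np.column_stack([precip, temp])
model = LinearRegression().fit(X[:-12], inflow[:-12])   # train on history
forecast = model.predict(X[-12:])                       # forecast last year
print("forecast mean inflow:", round(forecast.mean(), 1))
```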
The protection of a corporate network is an important aspect of the successful functioning of an organization. This paper studies the cybersecurity of the internal network perimeter using the example of the Krasnoyarsk Scientific Center of the Siberian Branch of the Russian Academy of Sciences. Various tools exist for preventing cyber threats and analyzing visited Internet resources, but their performance and applicability strongly depend on the amount of input data. The article reviews existing methods for identifying network threats by analyzing proxy server logs and investigates dividing Internet users into thematic groups for anomaly detection. A method for clustering Internet resources is proposed, aimed at reducing the volume of input data by excluding groups of safe Internet resources or selecting only suspicious ones. The method consists of the following steps: data preprocessing, user session selection, data analysis, and interpretation of the results. The source data are proxy server log entries. At the first stage, the data useful for analysis are selected, and the continuous data stream is divided into small portions (sessions) using kernel density estimation. At the second stage, soft clustering of the visited Internet resources is performed using topic modeling; its result is a set of as yet unlabeled groups of Internet resources. At the third stage, an expert interprets the results by analyzing the most popular Internet resources in each group. The method has many settings at each stage, which makes it possible to adapt it to any format and specifics of the input data. Its scope of application is not limited: it can be used both as an additional preprocessing step to reduce the amount of input data and to detect anomalous data.
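A minimal sketch of the two core steps described above, with synthetic log data (the choices of a Gaussian KDE over timestamps and LDA as the topic model follow the abstract's descriptions, but bandwidths, token scheme, and group count are assumptions):

```python
# Step 1: split a request stream into sessions via kernel density estimation.
# Step 2: soft-cluster visited resources with a topic model (LDA).
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(6)
# Request timestamps form two bursts -> two sessions.
ts = np.sort(np.concatenate([rng.normal(100, 5, 40), rng.normal(300, 5, 40)]))

kde = KernelDensity(bandwidth=10).fit(ts[:, None])
grid = np.linspace(ts.min(), ts.max(), 500)
dens = np.exp(kde.score_samples(grid[:, None]))
# Session boundaries lie at local minima of the estimated density.
cuts = grid[1:-1][(dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:])]
print("session boundaries near t =", np.round(cuts[:3], 1))

# "Documents" are per-session lists of visited hosts; topics are the
# unlabeled resource groups handed to the expert at the third stage.
sessions = ["newsportal mailserver newsportal",
            "vendorupdates vendorcdn vendorupdates",
            "newsportal mailserver", "vendorcdn vendorupdates"]
X = CountVectorizer().fit_transform(sessions)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print("topic mixture of session 0:", np.round(lda.transform(X)[0], 2))
```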