By the end of the 1960s, the growing complexity of programs and the further development of software created a need to increase programmer productivity, which led to the emergence of structured programming. The founder of this methodology is Edsger Dijkstra, who described its basic principles.

With the development of structured programming, the next advance was procedures and functions. If some task is performed several times, it can be declared as a function or procedure and simply called wherever needed during program execution. The overall program code thereby becomes smaller. This paved the way for modular programs.

The next advance was combining the heterogeneous data used in a program into structures: composite data types built from other data types.

Structured programming assumes clearly designated control structures and program blocks, the absence of unconditional jump instructions (GOTO), self-contained subroutines, and support for recursion and local variables. The essence of this approach is the ability to split a program into its component elements, increasing the readability of the program code.

Functional (applicative) languages (Lisp) and logic languages (Prolog) were also created during this period.

Although the introduction of structured programming gave positive results, even it failed once programs reached a certain length. To write more complex and longer programs, a new approach to programming was needed.

Object-oriented programming (OOP)

When data structures are used in a program, corresponding functions for working with them are developed as well. This led to the idea of combining them and using them together, and thus classes were born.

A class is a data structure that contains not only variables, but also functions that work with these variables.



Programs could now be divided into classes. Instead of testing an entire program of, say, 10,000 lines of code, the program could be split into 100 classes and each class tested separately. This made writing a software product much easier.

As a result, the principles of object-oriented programming were developed in the late 1970s and early 1980s.

The first object-oriented programming language was Simula-67, in which classes first appeared. OOP concepts were developed further in the Smalltalk language, which also laid the foundations for windowing systems. More recent examples of object-oriented languages are Object Pascal, C++, Java, C#, etc.

OOP allows you to optimally organize programs by breaking a problem into its component parts and working with each separately. A program in an object-oriented language, solving a certain problem, essentially describes a part of the world related to this problem.

Ticket 2.

Vector programming/prototyping language MatLab

MATLAB (Matrix Laboratory) is a package of application programs for solving technical computing problems, and the programming language of the same name used in that package.

The MATLAB language was created in the late 1970s; it was intended to work with numerical method libraries written in Fortran without knowing the language itself. The language quickly gained popularity among people involved in applied mathematics.

MATLAB is used by over 1,000,000 engineers and scientists and runs on most modern operating systems, including Linux, Mac OS, Solaris and Microsoft Windows.

The MATLAB language is a high-level interpreted programming language that includes matrix-based data structures, a wide range of functions, an integrated development environment, object-oriented capabilities, and interfaces to programs written in other programming languages.

Programs written in MATLAB are of two types - functions and scripts. Functions have input and output arguments, as well as their own workspace for storing intermediate calculation results and variables. Scripts use a common workspace. Both scripts and functions are not compiled into machine code and are saved as text files. It is also possible to save so-called pre-parsed programs - functions and scripts processed into a form convenient for machine execution. In general, such programs run faster than regular ones, especially if the function contains graphing commands.

Typical uses of MATLAB are:

· mathematical calculations (solving differential equations, computing matrix eigenvalues, etc.)

· creation of algorithms

· modeling

· data analysis, research and visualization

· scientific and engineering graphics

· application development, including GUI creation

The MATLAB system consists of five main parts.

· The MATLAB language. A high-level matrix and array language with control flow, functions, data structures, input/output, and object-oriented programming features.

· The MATLAB environment. A set of tools with which the MATLAB user or programmer works. It includes tools for managing variables in the MATLAB workspace, for data input and output, and for creating, monitoring, and debugging M-files and MATLAB applications.

· Controlled graphics (Handle Graphics). The MATLAB graphics system, which includes high-level commands for two- and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level commands that allow the appearance of graphics to be fully customized and graphical user interfaces (GUIs) to be built for MATLAB applications.

· Library of mathematical functions. This is an extensive collection of computational algorithms from elementary functions such as sum, sine, cosine, complex arithmetic, to more complex ones such as matrix inversion, finding eigenvalues, Bessel functions, and fast Fourier transform.

· Software interface. This is a library that allows you to write programs in C and Fortran that interact with MATLAB. It includes facilities for calling programs from MATLAB (dynamic linking), calling MATLAB as a computational tool, and for reading and writing MAT files.

The Matlab shell consists of a command line, a text editor with a built-in debugger, and windows with a list of files, a list of visible variables, and a history of entered commands.

Matlab has a large number of packages (toolboxes) - both its own and those distributed by independent developers, often under open source conditions. Matlab includes Simulink, a visual editor for modeling dynamic systems.

Ticket 3.

Graphical programming language "G" LabView

LabVIEW (Laboratory Virtual Instrumentation Engineering Workbench) is a development environment and platform for running programs written in National Instruments' graphical programming language G. LabVIEW is used in data acquisition and processing systems, as well as for controlling technical objects and technological processes.

National Instruments was founded in 1976 by Jeff Kodosky, James Truchard and Bill Nowlin.

The first version of LabVIEW was released in 1986 for the Apple Macintosh; currently there are versions for UNIX, Linux, Mac OS, etc., and the most developed and popular versions are for Microsoft Windows.

LabVIEW's capabilities are similar to those of general-purpose programming systems such as Delphi. However, there are a number of important differences between them:

1. The LabVIEW system is based on the principles of graphical programming;

2. The LabVIEW system is based on the principles of object-oriented programming;

3. The LabVIEW system is problem-oriented.

Each LabVIEW program is a separate virtual instrument (VI), that is, a software analogue of some real or imagined device, consisting of two interconnected parts:

1. The first part is the “front panel”, which defines the appearance of the virtual instrument and contains the means of entering information (buttons, switches) - controls - as well as the means of displaying information - indicators.

2. The second part, the “block diagram”, describes the algorithm of operation of the virtual instrument.

Each VI, in turn, can use other VIs as components, just as any program written in a high-level language can use its own subroutines. Such lower-level VIs are usually called subVIs.

Important elements of the block diagram are functional nodes: built-in subVIs that are part of LabVIEW and perform predefined operations on data. The components of the block diagram also include terminals (the “back contacts” of front-panel objects) and control structures (analogues of such elements of textual programming languages as the conditional operator IF and the loop statements FOR and WHILE, etc.).

Data is transferred from terminals to functional nodes, and between functional nodes, using links (wires).

An ordinary user, as a rule, deals with ready-made VIs developed in advance by other specialists. Only the front panel of the VI is accessible to him, while its block diagram is hidden. The user takes readings, monitors the progress of some process, or even controls it using the front panel controls - knobs, toggle switches, buttons, etc.

Thus, the LabVIEW software environment makes it possible to develop hardware and software systems for testing, measurement, data entry, analysis and control of external equipment.

Ticket 4.

Basics of standardization using C++ as an example. Hungarian notation.

C++ is a compiled, statically typed general-purpose programming language.

In 1985, the first edition of The C++ Programming Language was published, providing the first description of the language, which was extremely important due to the lack of an official standard.

There are currently many implementations of the C++ language. Ideally, a program written in one implementation of a language should be executed in the same way on any other implementation of the same language. To ensure this condition, there are standards that describe the basic C++ constructs and the rules for their construction.

General requirements:

1) There should be no more than one C++ statement on a line.

2) If a function call or operation takes more than one line, the line break should come immediately after a comma.

3) If an expression takes several lines, break to a new line after a binary operation.

Code documentation:

1) The documentation contained in source code comments should be sufficient for a complete understanding of the code and its further maintenance by both system developers and programmers.

2) Every source code file of the program must have an introductory part containing, in the form of a comment, information about the author of the program, the file name and its contents.

3) The source code must include a copyright statement; if the program is developed over several years, every year must be indicated.

Names:

1) A name must match the property or process that it represents. Names should be meaningful, understandable, and based on English words.

2) Constant names should be written in capital letters only, regardless of how the constant is defined. Constants can be introduced using the following mechanisms: const, enum, #define.

3) Avoid variable and function names composed entirely of capital letters.

Line length: lines should not exceed 70 characters. Although larger monitors can display longer lines, printing devices are more limited in their capabilities.

Strategic and tactical comments:

1) Tactical comments occupy one line and describe the operation on the next line.

2) Strategic comments describe the general purpose of a function or code fragment and are inserted into the program text as a block of several comment lines.

3) Too many tactical comments make program code unreadable, so strategic commenting is recommended.

Class name: Must correspond to the object that this class describes.

Using tabs:

1) To organize indentations, use tabs instead of spaces

2) There are no restrictions on the indentation size, but if the indentation size is more than 4 or 5 spaces, then the code may not fit the width of the page.

3) Comments must be at the level of the operator to which they correspond

4) Comments to the right of statements should be aligned with spaces rather than tabs

Source files

1) Each source file must contain the implementation of only one class or group of functions that are similar in purpose

2) Each file must have a header

3) Each .cpp file must include the corresponding header files, which must contain: a) declarations of the types and functions used by the functions or class methods implemented in this .cpp file; b) declarations of the types, variables and class methods implemented in this .cpp file.

Class names:

1) A class name must start with the letter "C", which stands for "class".

2) If the class name consists of several words, each word must begin with a capital letter (e.g.: class CConnectionPointForMyOcx).

3) Names combining more than 3 words are not recommended; long identifiers make the program difficult to read.

4) The class name must begin with a capital letter; if it consists of several words, each must begin with a capital letter, which serves as a word separator.

Hungarian notation is a naming convention for variables, constants and other identifiers in program code.

Identifier names are preceded by predefined prefixes of one or more characters. As a rule, neither the prefixes nor their spelling are required by the programming language, and each programmer (or team of programmers) may use their own.

Examples:
The prefix s (short for string) denotes a string.

The prefix a (short for array) denotes an array.

The prefix T (short for type) denotes the type.

Advantages:

1) Where the built-in type system is not expressive enough, Hungarian notation allows a variable's subtype to be encoded in its name.

2) It is convenient when naming objects whose type is obvious - for example, the “OK” button can be called btnOk.

3) Hungarian notation is convenient when writing large programs in editors that (by modern standards) lack automated navigation through the text.

Drawbacks:

1) Some programmers believe that prefixes make variable names less clear and thus reduce the readability of the code.

2) Without knowing the convention, it can be difficult to work out the correct prefixes for a variable name.

3) When a variable's type changes, its name must change too (not all code editors can do this automatically).

Ticket 5.

Life cycles of software tools and their standardization

The life cycle of a software system is understood as the entire period of its development and operation (use), from the moment the system is conceived until all use of it ceases. The life cycle covers a rather complex process of creating and using a software system. This process can be organized differently for different classes of software and depending on the characteristics of the development team.

Currently, there are 5 main approaches to organizing the process of creating and using software:

  • Waterfall approach. With this approach, software development consists of a chain of stages. At each stage, documents are created that are used in the next stage. The source document sets out the requirements for the software. At the end of this chain, programs are created that are included in the software.
  • Exploratory programming. This approach involves the rapid implementation of working versions of software programs that perform only a first approximation of the required functions.
  • Prototyping. This approach models the initial phase of exploratory programming up to the creation of working versions of programs intended for conducting experiments in order to establish software requirements. In the future, the development of a software system should follow the established requirements within the framework of some other approach (for example, waterfall).
  • Formal transformations. This approach involves developing formal software specifications and turning them into programs through correctness-preserving transformations. Computer-aided software engineering (CASE technology) is based on this approach.
  • Assembly programming. This approach assumes that the software system is constructed primarily from components that already exist. There must be a repository (library) of such components, each of which can be reused in different systems.

Below, the waterfall approach is considered, with some modifications: firstly, because this approach covers most software engineering processes, and secondly, because most large software systems are created within it.

Within the waterfall approach, the following stages of the software life cycle are distinguished:

1) development of the software system,

2) production of software products,

3) operation of the software system.

Let's look at each stage in more detail.

The stages of software system development are: external description, design, coding, and certification.

External description stage: involves the processes leading to the creation of a document called the external description of the software. It begins with the analysis and definition of requirements for the software on the part of the users (customer), and also includes the processes of specifying these requirements.

Software design stage: covers the following processes: development of software architecture, development of software program structures and their detailed specification.

Coding stage (programming in the narrow sense): includes the processes of creating program texts in programming languages, and their debugging and testing.

Stage of certification of the software: the quality of the software is assessed. If this assessment turns out to be acceptable for the practical use of the software, then the development of the software is considered complete. This is usually formalized in the form of some document recording the decision of the commission conducting the certification of the software.

A software product (SP) is an instance or copy of the developed software. Production is the process of generating and/or reproducing (copying) the programs and program documents of the software system for the purpose of supplying them to users for their intended use. Production is a set of activities that ensure the required quantity of software products is produced on time.

The operation stage covers the processes of storing, deploying and maintaining the software system, as well as transporting it and using it for its intended purpose. It consists of two parallel phases: the application phase and the maintenance phase.

Application (operation) of a software system is the use of a software system to solve practical problems on a computer by executing its programs.

Maintenance of a software system is the process of collecting information about the quality of the software in operation, eliminating errors found in it, improving and modifying it, as well as notifying users about changes made to it.

Ticket 6

Imperative and declarative programming

Before discussing these concepts directly, let us define the notion of a programming paradigm.

A programming paradigm is a set of ideas and concepts that determine the style of writing programs.

The main programming paradigms are:
1) imperative programming
2) declarative programming
3) functional programming
4) object-oriented programming.

There are also other programming models, but we will only consider the first two.

Programming:

1) Imperative

Programs are a sequence of actions with conditional and unconditional jumps; i.e., you need to explain to the computer how to solve the problem.

2) Declarative

A program is a set of statements describing a fragment of the subject area or the current situation; in other words, a specification of the problem's solution is given. The programmer describes what needs to be solved - what result is wanted - rather than how to obtain it.

Structured programming is a set of general principles and rules for the design, development and execution of programs, intended to facilitate their creation and testing, increase programmer productivity and improve the readability of the resulting program. The structure of the program and the algorithm for solving the problem should be easy to understand, simple to prove correct, and convenient to modify. In essence, the structural approach is the rejection of a disorderly style in algorithm design and programming (in particular, the rejection of the goto operator) and the identification of a limited number of standard techniques for constructing easily readable algorithms and programs with a clearly defined structure, which is especially important when developing large software systems.

Experience in applying structured programming methods when developing, for example, a number of complex operating systems shows that the correctness of the logical structure of the system can in this case be easily proved, and the system itself allows fairly complete testing. Reducing the difficulty of debugging and testing programs increases programmer productivity, since from a third to a half of a program's development time is spent on testing. A programmer's productivity is usually measured by the number of debugged statements he can write per day. Rough estimates show that structured programming methods can increase this number 5-6 times. Structured programming also presupposes a certain organization of the programming process itself and a certain program design technology, which likewise has a positive effect on programmer productivity.

Basics of structured programming. The theoretical foundation of structured programming is the structure theorem, from which it follows that the algorithm (program) for solving any practically computable problem can be represented using three elementary basic control structures: the sequence structure, the branching structure and the loop structure, shown in Fig. 6.5-6.7 respectively, where P is a condition and S is an operator.

The sequence structure represents the natural flow of the algorithm - any sequence of operators executed one after another (see Fig. 6.5). In a programming language this corresponds to a sequence of input, output and assignment statements.

The branching structure represents the decision-making factor: it includes checking a certain logical condition P and, depending on the result of this check, executing operator S1 or operator S2. In programming languages (for example, Pascal) it is implemented by the operator if P then S1 else S2 (see Fig. 6.6).

The loop structure (loop with precondition) represents the repetition factor of computations: it ensures that the execution of operator S is repeated while the logical condition P remains true. In programming languages (for example, Pascal) it is implemented by the operator while P do S (see Fig. 6.7).

Fig. 6.5. Sequence structure

Fig. 6.6. Branching structure

The basic set of control structures is functionally complete, i.e. any algorithm, no matter how complex, can be created with it. However, to create more compact and visual algorithms and programs, additional control structures are used: the reduced branching structure; the case (multi-alternative choice) structure; the loop with a parameter; the loop with a postcondition. The implementation of the basic control structures differs between programming languages; in Pascal, for example, all the structures listed are implemented.

Any program can be built through the composition of basic structures: either by connecting them sequentially - forming sequential structures, or by nesting them into each other - forming nested structures.

Each of the structures can be considered as one functional block with one input and one output. Blocks S, S1 and S2, which are part of the basic control structures, can themselves be any of them, so nested structures are possible. However, whatever the degree and depth of “nesting”, it is important that any construction ultimately has one input and one output. Therefore, any complex structure can be viewed as a “black box” with one input and one output, and any structure can thus be converted into a functional block. Then any algorithm composed of standard structures can be successively transformed into a single functional block, and this sequence of transformations can be used as a means of understanding the algorithm and proving its correctness. The reverse sequence of transformations can be used when designing an algorithm, gradually expanding a single functional block into a complex structure of basic elements.

To structure and understand large programs, additional structural tools supporting the modular principle of software development are also used: subroutines and modules. The subroutine mechanism (procedures and functions) makes it possible to separate individual (often repeated) sections of code into independent program units with their own input and output data, for subsequent repeated calls from various points in the program and from other subroutines. A module is a separately compiled library of descriptions of types, data, procedures and functions, which allows descriptions of data and subroutines to be grouped by function and purpose, in accordance with one of the basic principles of structured programming - breaking large tasks into subtasks.

Program development methodology. There are two common program development techniques (strategies) related to structured programming: top-down programming; bottom-up programming.

Top-down programming, or top-down program design, is a development technique in which work begins with defining the goals of solving the problem, followed by successive refinement, ending with a detailed program. First, a few of the most global problems are identified, whose solution can be presented in a general structure as functionally independent blocks. The logical structure of each such block can be developed and modified independently of the other blocks. At this first stage of the project, the most important and significant connections are identified, along with the functional purpose of each block and its input and output data. At subsequent design stages, the logical structure of individual functional blocks of the general scheme is refined (detailed), which can also proceed in several stages of refinement, down to the simplest instructions. At each stage, the project is checked and corrected repeatedly.

This approach is quite rational, it allows you to significantly speed up the development process of complex software projects and largely avoid erroneous decisions. In addition, it becomes possible not to implement some subprograms (modules) immediately, but to temporarily postpone their development until other parts are completed. For example, if there is a need to calculate a complex mathematical function, then a separate subroutine for such calculation is allocated and implemented temporarily by one operator, which simply assigns the desired value. Once the entire application is written and debugged, you can begin implementing this complex feature.

Bottom-up programming, or bottom-up program design, is a development technique that begins with developing subroutines (procedures, functions) before the general scheme has been completed. This technique is less preferable than top-down design, as it often leads to undesirable results, code rewrites and increased development time. Its use may be appropriate when a new project reuses well-known existing solutions.

General principles for developing software projects. The use of structured programming technology in the development of serious software projects is based on the following principles:

  • programming must be done from top to bottom;
  • the entire project should be divided into modules/subroutines with one input and one output;
  • any subroutine must allow only three basic structures: sequential execution of statements, branching and looping;
  • the unconditional control transfer operator goto is not allowed;
  • documentation should be created simultaneously with programming, partly in the form of program comments.

The use of the principles and methods of structured programming increases the reliability of programs (thanks to good structuring during design, the program is easy to test and debug) and their efficiency (structure makes it easy to find and correct errors, and individual subroutines can be reworked or modified independently of others), reduces the time and cost of software development, and improves the readability of programs.

Introduction

When creating medium-sized applications (several thousand lines of source code), we use structured programming, whose idea is that the structure of the program should reflect the structure of the problem being solved, so that the solution algorithm is clearly visible from the source text. To do this, you need means of building a program not only from the three simple statements, but also from constructs that more accurately reflect the specific structure of the algorithm. For this purpose, the concept of the subroutine was introduced into programming: a set of operators that perform the desired action, independent of other parts of the source code. The program is broken down into many small subroutines (each up to about 50 statements - a practical threshold for quickly grasping a subroutine's purpose), each of which performs one of the actions specified in the original task. By combining these subroutines, the final algorithm can be formed not from simple operators but from complete blocks of code with a definite semantic meaning, and such blocks can be referred to by name. In effect, subroutines are new operators or operations of the language, defined by the programmer.

The ability to use subroutines classifies a programming language as a procedural language.

Story

The methodology of structured programming appeared as a consequence of the increasing complexity of problems solved on computers and the corresponding complication of software. In the 1970s, the volume and complexity of programs reached such a level that “intuitive” program development, which had been the norm in earlier times, no longer met the needs of practice. Programs were becoming too complex to be maintained properly, so a systematization of the development process and of program structure was required. The strongest criticism from proponents of the structural approach was directed at the GOTO operator (the unconditional jump operator), which existed in almost all programming languages. The use of arbitrary jumps in program text produces confusing, poorly structured programs, from whose text it is almost impossible to understand the order of execution and the interdependence of fragments.

Following the principles of structured programming made program texts, even quite large ones, normally readable. The understanding of programs has become significantly easier; it has become possible to develop programs in a normal industrial mode, when the program can be understood without much difficulty not only by its author, but also by other programmers. This made it possible to develop software systems that were quite large for that time by development teams, and to maintain these systems for many years, even in conditions of inevitable rotation of personnel.

The structured software development methodology was recognized as “the most powerful formalization of the 70s.” After that, the word “structured” became fashionable in the industry and began to be used everywhere, whether appropriate or not. Works appeared on “structured design”, “structured testing” and so on; roughly the same thing happened in the 90s, and is happening now, with the terms “object” and “object-oriented”.

Background and purpose of structured programming.

Traditional programming technology was formed at the dawn of computer technology, when users had limited computer resources at their disposal, and the program developer was at the same time its main user. Under these conditions, the main attention was paid to obtaining effective programs in the sense of optimal use of computer resources.

Nowadays, when the scope of computer applications has expanded enormously, the development and operation of programs is carried out, as a rule, by different people. Therefore, along with efficiency, other important characteristics of programs such as understandability, good documentation, reliability, flexibility, ease of maintenance, etc. come to the fore.

The problem of developing programs with such qualities is explained by the labor-intensive programming process and the associated rapid increase in software costs.

To create a “good” program, it becomes necessary to adhere to certain principles or a certain programming discipline. Significant progress in the field of programming is achieved using so-called structured programming.

The appearance of this new technology, or, as it is also called, discipline of programming based on the structural approach, is associated with the name of the famous Dutch scientist E. Dijkstra (1965). In his works, he suggested that the GOTO statement could be eliminated from programming languages and that a programmer's skill is inversely proportional to the number of GOTO statements in his programs. This programming discipline simplifies and structures the program.

However, the idea of ​​structured programming as programming without the use of a GOTO statement is erroneous.

For example, Hoare defines structured programming as "the systematic use of abstraction to control masses of detail and a method of documentation that aids program design."

Structured programming can be interpreted as "the design, writing and testing of a program according to a predefined discipline."

The structural approach to programming is precisely aimed at reducing the labor intensity of the entire process of creating software, from technical specifications for development to completion of operation. It means the need for a unified discipline at all stages of program development. The concept of a structured approach to programming usually includes top-down methods of program development (the “top-down” principle), structured programming itself, and the so-called end-to-end structural control.

The main purpose of structured programming is to reduce the difficulty of testing and proving the correctness of a program. This is especially important when developing large software systems. The experience of using structured programming methods in the development of a number of complex operating systems shows that the correctness of the logical structure of the system can be proven, and the program itself allows for fairly complete testing. As a result, the finished program contains only trivial coding errors that are easily corrected.

Structured programming improves the clarity and readability of programs.

Programs that are written using traditional methods, especially those that are overloaded with GOTO statements, have a chaotic structure.

Structured programs have a sequential organization, so it is possible to read such a program from top to bottom without interruption.

Finally, structured programming is designed to improve the efficiency of programs.

So, structured programming represents some principles of writing programs according to a strict discipline and aims to facilitate the testing process, increase the productivity of programmers, improve the clarity and readability of the program, and increase its efficiency.

Basic criteria for assessing the quality of a computer program.

It is known that the same algorithm can be implemented on a computer in different ways, i.e. several different programs can be compiled to solve the same problem.

Thus, it is necessary to have some program evaluation criteria by which one can judge how much better one program is than another. Analysis and evaluation of the program are primarily qualitative in nature.

1. The program works and solves the problem. It is clear that this characteristic of the program is the most important.

In this regard, each program must be designed in such a way that the correctness of the results obtained can be verified. Such a check is carried out during program debugging, on certain sets of input data for which the answer is known in advance. But debugging can only prove the presence of errors in the program, but cannot prove the correctness of the program for all possible calculations implemented with its help. In this regard, it is necessary to develop methods for analytical verification of the program.
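This kind of checking on input sets with known answers can be sketched in Python (a language chosen purely for illustration; the average function is a hypothetical example, not from the lecture):

```python
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

# Debugging on input sets for which the answer is known in advance.
# Passing these checks shows no errors *on these inputs*, but, as the
# text notes, it cannot prove correctness for all possible inputs.
assert average([2, 4, 6]) == 4
assert average([5]) == 5
print("all known-answer checks passed")
```

The checks are precisely the "sets of input data for which the answer is known in advance" mentioned above; proving correctness for all inputs would require analytical methods.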

Analytical proof of the correctness of a program requires that the program be easy to analyze. This means that the program must be designed in such a way that it can be understood how it produces a given answer.

2. Minimum time spent on testing and debugging the program. Testing and debugging a program is a necessary stage in the process of solving a problem on a computer. It takes up between a third and a half of the total program development time, so it is very important to reduce the time spent on testing and debugging.

Testing and debugging of a program is made easier if the program is simply analyzed and provided with the necessary comments to make it easier to understand. Good comments can speed up the debugging process.

Understanding and debugging a program is easier if it has a simple and clear structure, in particular if the use of control transfer statements (GOTO) is limited. Overloading a program with these operators leads to a chaotic structure and makes debugging difficult.

Another important principle is the use of mnemonic notation for variables. Programming languages provide ample opportunities here. For a better understanding of the program, it is necessary to use mnemonics that reflect the physical (mathematical, economic, etc.) meaning of the variable (for example, SPEED for a speed).

3. Reducing maintenance costs. A developed and debugged program is intended for repeated use, and its operation, as a rule, is carried out not by developers, but by other programmers included in the so-called maintenance group.

Programmers who maintain the program often have to continue debugging the program and modernize it due to changes in technical specifications, the introduction of new software tools, or the identification of new errors and shortcomings in the program.

To reduce maintenance costs, it is necessary for every developer to consider the complexity of maintenance. The program should be developed, debugged, and formatted with the expectation that it will be used and maintained by other programmers.

4. Flexibility of the program. The developed program is usually in operation for a long time. During this time, the requirements for the problem being solved, the technical specifications, and the program requirements may change. It becomes necessary to make certain changes to the program, which in some cases can be difficult to do because the developer did not provide for such a possibility. A "good" program should allow modification.

5. Reduced development costs. Programming is a team effort. The composition of the group of programmers working on solving this problem may change for some reason. Therefore, the design and development of a program must be carried out in such a way that it is possible, if necessary, to transfer its completion to another programmer. Failure to comply with this requirement often leads to delays in the commissioning of programs.

6. Simplicity and efficiency. The program should be simply organized.

This can be manifested in the structure of the program, and in the use of simple and most natural programming language tools, and in the preference for simple data structures, etc.

The effectiveness of the program is considered one of its main characteristics.

Therefore, often, to the detriment of other qualities of the program, developers resort to complex tricks to reduce the amount of memory used or reduce the execution time of the program. In many cases, the effort spent on this is not worth it. A smart approach to improving program effectiveness is to identify the bottlenecks and try to improve them.

REVIEW LECTURE NOTES

For specialty students
T1002 “Information technology software”

(A.M.Kadan, Ph.D., Associate Professor)

Question No. 34.
Characteristics of the main software development methodologies

1. Programming methodology and technology.

2. Imperative programming.

2.1. Modular programming.

2.2. Structured programming.

3. Object-oriented programming method.

Programming methodology and technology

Let us give the basic definitions.

Program - a completed product suitable for launch by its author on the system on which it was developed.

Software - a program that anyone can run, build, test, correct, and develop. Such a program should be written in a generalized style, thoroughly tested, and supported by detailed documentation. (Given the currently fashionable concept of copyright, a clarification is needed here: any person having the right to work with the source code of the programs.)

Software package - a set of interacting programs, consistent in functions and formats, with precisely defined interfaces, together constituting a complete tool for solving large problems.

The life cycle of software is the entire period of its development and operation, starting from the moment the idea is conceived and ending with the termination of its use.

A programming methodology is a set of methods applicable throughout the software life cycle and united by a common philosophical approach.

Of the four widely known programming methodologies today - imperative, object-oriented, logical, and functional - let us look at the two you were taught: the imperative and object-oriented programming methodologies.

Programming technology studies technological processes and the order in which they pass through stages (using knowledge, methods, and tools).

It is convenient to characterize technologies in two dimensions - vertical (representing processes) and horizontal (representing stages).

Process - a set of interrelated actions that transform some input data into output data. Processes consist of a set of actions, and each action consists of a set of tasks. The vertical dimension reflects the static aspects of processes and operates with concepts such as work processes, actions, tasks, performance results and performers.

Stage - a part of the activities to create software, limited by a certain time frame and ending with the release of a specific product, determined by the requirements specified for this stage. Stages consist of steps that are usually iterative in nature. Sometimes stages are combined into larger time frames called phases. So, the horizontal dimension represents time, reflects the dynamic aspects of processes, and operates with concepts such as phases, stages, steps, iterations, and control points.

Technological approach is determined by the specific combination of stages and processes, focused on different classes of software and the characteristics of the development team.

If we look again at the figure, in this case a single process is performed at each stage. Of course, when developing and creating large programs, such a scheme is not correct enough (and is simply unrealistic). However, it can be used as a basis for many other technological approaches to life-cycle management.

Imperative programming

Imperative programming is historically the first programming methodology, used by every programmer who programs in any of the "mass" programming languages - Basic, Pascal, C.

It is focused on the classical von Neumann model, which for a long time remained the only hardware architecture. The imperative programming methodology is characterized by the principle of sequential change of the computer's state in a step-by-step manner. At the same time, the management of these changes is fully defined and fully controlled.

Methods and concepts

· The state-change method consists of sequentially changing states. The method is supported by the concept of the algorithm.

· The flow-control method consists of step-by-step control of execution. The method is supported by the concept of the execution thread.

Computational model. If by computer we mean a modern computer, then its state is the values of all memory cells, the state of the processor (including the current instruction pointer), and all associated devices. The only data structure is a sequence of cells (address-value pairs) with linearly ordered addresses.

As a mathematical model, imperative programming uses the Turing-Post machine, an abstract computing device proposed at the dawn of the computer age to describe algorithms.

Syntax and semantics. Languages that support this computational model are, in effect, a means of describing the transition function between states of the computer. Their main syntactic concept is the statement. The first group is simple statements, no part of which is itself a statement (for example, the assignment statement, the unconditional jump, a procedure call, etc.). The second group is structural statements, which combine other statements into a new, larger statement (for example, the compound statement, selection statements, loop statements, etc.).

The traditional structuring tool is the subroutine (procedure or function). Subroutines have parameters and local definitions and can be called recursively. Functions return values as the result of their work.
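As a sketch in Python (a language the lecture does not use; the factorial function is an illustrative assumption), a subroutine with a parameter, a local definition, a recursive call, and a returned value:

```python
def factorial(n):
    """Recursively compute n! - the subroutine calls itself."""
    if n <= 1:                   # base case ends the recursion
        return 1
    smaller = factorial(n - 1)   # local variable holding a recursive result
    return n * smaller           # value returned as the result of the work

print(factorial(5))  # -> 120
```

The local variable smaller exists separately in each recursive activation, which is exactly what makes recursive calls safe.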

If a given methodology requires solving a certain problem in order to use its results when solving the next problem, then a typical approach would be like this. First, the algorithm that solves the first problem is executed. The results of its work are stored in a special memory location that is known to the next algorithm and are used by it.

Imperative programming languages. Imperative programming languages manipulate data in a step-by-step fashion, applying sequential instructions to a variety of data. It is believed that the first algorithmic programming language was Plankalkül (from plan calculus), developed in 1945-1946 by Konrad Zuse.

The most famous and widespread imperative programming languages, most of which were created in the late 50s - mid-70s of the 20th century, are presented in the figure. Pay attention to the empty space in the figure, corresponding to the 80s and 90s of the last century. This is a period of enthusiasm for new paradigms, and imperative languages ​​practically did not appear at this time.

Class of problems. Imperative programming is most suitable for solving problems in which the sequential execution of commands is natural. An example here is the control of modern hardware. Since almost all modern computers are imperative, this methodology allows the generation of fairly efficient executable code. As the complexity of the problem increases, however, imperative programs become less and less readable.

Programming and debugging really large programs (for example, compilers) written exclusively based on imperative programming methodology can take many years.

Recommendations for literature. The features of imperative programming are described in a huge number of books. They are most systematically presented in the work “Universal Programming Languages. Semantic Approach” [Kalinin, Matskevich 1991].

Modular programming

Modular programming is a way of programming in which the entire program is broken down into a group of components called modules, each of which has a controlled size, a specific purpose, and a detailed interface with the external environment. The only alternative to modularity is a monolithic program, which is really inconvenient. Thus, the most interesting question in studying modularity is determining the criterion for dividing a program into modules.

Modular programming concepts. Three main concepts lie at the core of modular programming:

Parnas's principle of information hiding. Every component conceals a single design decision, i.e. the module serves to hide information. The approach to program development is as follows: first, a list of design decisions that are particularly difficult to make, or that are most likely to change, is drawn up. Then separate modules are defined, each of which implements one of these decisions.

Cowan's axiom of modularity. A module is an independent program unit that serves to perform some specific function of the program and to communicate with the rest of the program. The program unit must satisfy the following conditions:

block organization, i.e. the ability to call the program unit from blocks of any degree of nesting;

syntactic separability, i.e. the module is delimited in the text by syntactic elements;

semantic independence, i.e. independence from the place where the program unit is called;

commonality of data, i.e. the module has its own data, preserved across every call;

completeness of definition, i.e. the self-sufficiency of the program unit.

Tseytin's assembly programming. Modules are the software bricks from which a program is built. There are three main premises for modular programming:

the desire to single out an independent unit of program knowledge. Ideally, every idea (algorithm) should be formalized as a module;

the need for the organizational division of large developments;

the possibility of parallel execution of modules (in the context of parallel programming).

Module definitions and examples. Let us give several additional definitions of a module.

· A module is a set of commands that can be accessed by name.

· A module is a collection of program statements that has boundary elements and an identifier (possibly an aggregate one).

The functional specification of the module should include:

· a syntactic specification of its inputs, which should make it possible to construct a syntactically correct call to it in the programming language used;

· a description of the semantics of the functions performed by the module for each of its inputs.

Types of modules. There are three main types of modules:

1) "Small" (functional) modules that implement, as a rule, one specific function. The main and simplest module in almost all programming languages is the procedure or function.

2) "Medium" (informational) modules that, as a rule, implement a set of operations or functions on the same data structure (information object), which is considered unknown outside this module. Examples of "medium" modules in programming languages:

a) tasks in the Ada programming language;

b) clusters in the CLU programming language;

c) classes in the C++ and Java programming languages.

3) "Large" (logical) modules that combine a set of "medium" or "small" modules. Examples of "large" modules in programming languages:

a) the module in the Modula-2 programming language;

b) packages in the Ada and Java programming languages.

A set of module characteristics was proposed by Myers [Myers 1980]. It consists of the following general design characteristics:

1) module size;

The module must contain 7 (+/-2) constructs (for example, statements for a function, or functions for a package). This number is based on psychologists' ideas about the average capacity of human working memory. Symbolic images in the human brain are combined into "chunks" - sets of facts and connections between them that are memorized and retrieved as a whole. At any moment, a person can process no more than 7 chunks.

The module (function) should not exceed 60 lines. As a result, it can be placed on one page of a printout or easily viewed on a monitor screen.

2) strength (connectivity) of the module;

There is a global-data hypothesis that states that global data is harmful and dangerous. The idea of global data has discredited itself in the same way as the idea of the unconditional jump operator goto. Data locality makes modules easy to read and understand, and easy to remove from the program.

Cohesion (strength) of a module is a measure of the interdependence of its parts. The higher the cohesion of the module, the better: the more connections, relative to the rest of the program, it hides within itself. The types of cohesion are given below.

Functional cohesion. A module with functional cohesion implements one specific function and cannot be divided into two modules with the same types of cohesion.

Sequential cohesion. A module with such cohesion can be divided into successive parts that perform independent functions but jointly implement a single function. For example, the same module may be used first for evaluating data and then for processing it.

Informational (communicative) cohesion. A module with informational cohesion is a module that performs several operations or functions on the same data structure (information object), which is considered unknown outside this module. Informational cohesion is used to implement abstract data types.

Note that the means for specifying informationally strong modules were absent in early programming languages (for example, in FORTRAN and even in the original version of Pascal). Only later, in the Ada programming language, did the package appear - a means of specifying an informationally strong module.
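As an illustration only (in Python, with a hypothetical Stack type assumed for the example), an informationally strong module: several operations over one data structure that is hidden from the outside:

```python
class Stack:
    """Abstract data type: the list inside is unknown outside the module."""

    def __init__(self):
        self._items = []          # hidden representation

    def push(self, value):        # operation 1 on the shared structure
        self._items.append(value)

    def pop(self):                # operation 2 on the same structure
        return self._items.pop()

    def is_empty(self):           # operation 3
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # -> 2
```

All three operations share one representation, which is exactly the informational cohesion described above; callers never touch the list directly.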

3) coupling of the module with other modules;

Coupling is a measure of the relative independence of a module from other modules. Independent modules can be modified without reworking other modules. The weaker the coupling of the module, the better. Let us consider the different types of coupling.

Independent modules are the ideal case. The modules know nothing about each other. The interaction of such modules can be organized by someone who knows their interfaces, by redirecting the output of one module to the input of another. Such coupling is difficult to achieve, and it is not necessary, since data (parametric) coupling is quite good.

Data (parametric) coupling is coupling in which data is passed to a module as the values of its parameters, or as the result of its call to another module to compute some function. This type of coupling is implemented in programming languages when calling functions (procedures). Two varieties of this coupling are distinguished by the nature of the data.

· Coupling by simple data elements.

· Coupling by data structure. In this case, both modules must know about the internal structure of the data.
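A hedged sketch of the two varieties in Python (both function names are illustrative assumptions):

```python
# Coupling by simple data elements: only a scalar value crosses
# the module boundary.
def circle_area(radius):
    return 3.14159 * radius ** 2

# Coupling by data structure: both caller and callee must know the
# internal layout of the record (here, a dict with fixed keys).
def describe(point):
    return f"({point['x']}, {point['y']})"

print(circle_area(2.0))
print(describe({"x": 1, "y": 2}))  # -> (1, 2)
```

Renaming the dict keys would force changes in both modules at once, which is why structure coupling is considered stronger (worse) than coupling by simple elements.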

4) routineness (idempotence, independence from previous calls) of the module.

Routineness is the independence of a module from previous calls to it (from its history). We will call a module routine if the result of its work depends only on the parameters passed to it (and not on the number of previous calls).

The module should be routine in most cases, but there are also cases when the module needs to save its history. In choosing the degree of routineness of a module, three recommendations can be used.

· In most cases, we make the module routine, i.e., independent of previous calls.

· History-dependent modules should be used only in those cases where this is necessary for data coupling.

· The specification of a history-dependent module must state this dependency clearly, so that users can predict the behavior of such a module.
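The distinction can be sketched in Python (both modules here are hypothetical illustrations):

```python
# A routine module: the result depends only on the parameters passed.
def add(a, b):
    return a + b

# A history-dependent module: the result also depends on previous
# calls, which its specification must state explicitly.
class Counter:
    def __init__(self):
        self.calls = 0

    def next(self):
        self.calls += 1
        return self.calls

assert add(2, 3) == add(2, 3) == 5  # same inputs, same answer, always
c = Counter()
print(c.next(), c.next())  # -> 1 2  (depends on the call history)
```

The routine add can be tested with isolated calls; Counter can only be tested by fixing the sequence of calls, which is precisely why history dependence should be a documented exception.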

Structured programming.

Structured programming (SP) emerged as a way of solving the problem of reducing the COMPLEXITY of software development.

At the beginning of the programming era, the work of a programmer was not regulated in any way. The problems being solved were not distinguished by their scope and scale; mainly machine-oriented languages ​​and languages ​​close to them, such as Assembly, were used; the developed programs rarely reached significant sizes; no strict restrictions were placed on the time of their development.

As programming developed, increasingly complex tasks appeared, with limited deadlines, involving groups of programmers. As a result, developers were faced with the fact that methods suitable for developing small tasks cannot be used in developing big projects, due to the complexity of the latter.

Thus, the goal of structured programming is to increase the reliability of programs, to ensure their maintenance and modification, and to ease and accelerate development.

The structural imperative programming methodology is an approach that involves specifying a good topology for imperative programs, including avoiding the use of global data and the unconditional jump operator, developing modules with strong cohesion, and ensuring their independence from other modules.

The approach is based on two main principles:

  • Sequential decomposition of an algorithm for solving a problem from top to bottom.
  • Using structural coding.

Let us recall that this methodology is the most important development of the imperative methodology.

Origin, history and evolution. Edsger Dijkstra is considered the creator of the structural approach. He is also credited with the attempt (unfortunately, not at all applicable to mass programming) to combine structured programming with methods of proving the correctness of the programs created. Such famous scientists as H. Mills, D. E. Knuth, and C. A. R. Hoare participated in its development.

Methods and concepts underlying structured programming. There are three of them:

The method of top-down algorithmic decomposition consists of a step-by-step detailing of the problem statement, starting with the most general problem. The method provides good structure. The method is supported by the concept of the algorithm.

The method of modular organization of program parts consists of splitting programs into special components called modules. The method is supported by the concept of the module.

The method of structural coding consists of using three main control structures when coding. Labels and unconditional jump operators create connections that are difficult to trace, and we want to do without them. The method is supported by the concept of control.
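A minimal Python sketch of structural coding, using only the three basic control structures (the variable names are illustrative):

```python
# Sequence: statements executed one after another.
numbers = [3, 1, 4]
total = 0

# Repetition: a structured loop instead of labels and jumps.
for n in numbers:
    # Selection: a conditional branch with a single entry and exit.
    if n % 2 == 1:
        total += n

print(total)  # -> 4  (the sum of the odd numbers 3 and 1)
```

Every block here has one entry and one exit, so the execution order can be read directly from the nesting of the text, which is the whole point of the method.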

Structured programming languages. The main difference from the classical methodology of imperative programming is the refusal (more precisely, one or another degree of refusal) to use the unconditional jump operator.

[Pratt T., 1979] "An important property of syntax for a programmer is the ability to reflect in the structure of the program the structure of the underlying algorithm. When using a method known as structured programming, the program is constructed hierarchically, from top to bottom (from the main program to the subroutines of the lowest level), using at each level only a limited set of control structures: simple sequences of instructions, loops, and certain types of conditional branches. When this method is carried out consistently, the structure of the resulting algorithms is easy to understand, debug, and modify. Ideally, we should be able to translate the program diagram constructed in this way directly into the corresponding program instructions that reflect the structure of the algorithm."

The structure theorem (Böhm-Jacopini): any proper program (that is, a program with one input and one output, without infinite loops and unreachable branches) can be written using the following logical structures: sequence, selection, and repetition (loop).

Corollary 1: Any program can be reduced to form without a goto statement.

Corollary 2: Any algorithm can be implemented in a language based on three control constructs: sequence, selection, and repetition.

Corollary 3: The complexity of structured programs is limited, even if their size is unlimited.

Structured programming is not an end in itself. Its main purpose is to obtain a good ("correct") program. However, even in the best program, the goto jump statement is sometimes needed: for example, to exit from many nested loops.

In almost all languages that support the imperative methodology, programs can be developed using this methodology. A number of languages have introduced special substitutes for the goto operator to make it easier to manage loops (for example, break and continue in C).
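For illustration, a Python sketch (Python likewise has break and continue but no goto; the search function is a hypothetical example) showing a structured exit from nested loops via return:

```python
def find_pair(matrix, target):
    """Return the (row, col) of target, or None if it is absent."""
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            if value == target:
                return i, j   # replaces a goto out of both loops
    return None

print(find_pair([[1, 2], [3, 4]], 3))  # -> (1, 0)
```

Wrapping the nested loops in a subroutine makes return serve exactly the role the text reserves for goto: a single, visible exit from many levels of nesting.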

Class of problems. The class of tasks for this methodology corresponds to the class of tasks for the imperative methodology. Note that it allows more complex programs to be developed, because they are easier to understand and analyze.

Recommendations for literature. One of the most famous works in this area is the article "Notes on Structured Programming" [Dijkstra 1975]. Methods of structured programming are discussed in detail in the book "Structured Programming: Theory and Practice" [Linger, Mills, Witt 1982]. The practice of structured programming can be studied in the book "Algorithms + Data Structures = Programs" [Wirth 1985]. The philosophy of visual structured programming is described in detail in [Parondzhanov 1999].

Object-oriented programming method.

The structured programming method has proven effective in writing programs of “limited complexity.” However, with the increasing complexity of implemented software projects and, accordingly, the volume of code of created programs, the capabilities of the structured programming method turned out to be insufficient.

The main reason for the problems that arose can be considered that the program did not directly reflect the structure of phenomena and concepts of the real world and the connections between them. When trying to analyze and modify the program text, the programmer was forced to operate with artificial categories.

To write increasingly complex programs, a new approach to programming was needed. As a result, the principles of Object-Oriented Programming were developed. OOP takes the best ideas from structured programming and combines them with powerful new concepts that let you organize your programs in new ways.

It must be said that the theoretical foundations of OOP were laid back in the 70s of the last century, but their practical implementation became possible only in the mid-80s, with the advent of the appropriate technical means.

The OOP methodology uses the object decomposition method, according to which the structure of a system (its static component) is described in terms of objects and the connections between them, and the behavior of the system (its dynamic component) in terms of the exchange of messages between objects. Messages can be either reactions to events caused by external factors or messages generated by the objects themselves.

Object-oriented programs are called "event-driven programs" in contrast to traditional programs called "data-driven programs."

Basic OOP methods and concepts

· The method of object-oriented decomposition consists in identifying objects and the connections between them. The method is supported by the concepts of encapsulation, inheritance, and polymorphism.

· The method of abstract data types is the method underlying encapsulation. It is supported by the concept of the abstract data type.

· The method of message passing consists in describing the behavior of the system in terms of the exchange of messages between objects. It is supported by the concept of the message.

Computational model. Pure OOP supports only one operation: sending a message to an object. Messages can have parameters, which are objects. The message itself is also an object.

An object has a set of message handlers (a set of methods). An object has fields: personal variables of that object whose values are references to other objects. One of the object's fields stores a reference to an ancestor object, to which all messages not processed by this object are redirected. The structures that describe the processing and forwarding of messages are usually collected into a separate object called the class of this object. The object itself is called an instance of that class.
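This computational model can be sketched as a toy Python object; this is an assumption-laden illustration, not any real language's runtime:

```python
class ProtoObject:
    """Toy model: an object holds handlers and a link to an ancestor."""

    def __init__(self, ancestor=None, **handlers):
        self.ancestor = ancestor   # field referencing the ancestor object
        self.handlers = handlers   # message name -> method

    def send(self, message, *args):
        if message in self.handlers:
            return self.handlers[message](*args)
        if self.ancestor is not None:       # redirect unhandled messages
            return self.ancestor.send(message, *args)
        raise AttributeError(f"no handler for {message!r}")

parent = ProtoObject(greet=lambda: "hello from ancestor")
child = ProtoObject(ancestor=parent, square=lambda x: x * x)

print(child.send("square", 3))  # -> 9, handled by the object itself
print(child.send("greet"))      # forwarded up to the ancestor object
```

The send method is the single operation of the pure model: look for a local handler, otherwise forward the message along the ancestor reference.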

Syntax and semantics

In the syntax of pure object-oriented languages, everything can be written in the form of sending messages to objects. A class in object-oriented languages describes the structure and functioning of a set of objects with similar characteristics, attributes and behavior. An object belongs to a certain class and has its own internal state. Methods are the functional properties of an object that can be activated.

There are three main properties in object-oriented programming:

Encapsulation. The hiding of information and the combining of data and functions (methods) inside an object.

Inheritance. The construction of a hierarchy of derived objects, in which each successor object has (possibly full) access to the code and data of all its parent (ancestor) objects. Building hierarchies is quite difficult, since it involves classification.

Most of the objects around us belong to the categories discussed in the book [Schleer, Mellor 1993]:

· Real objects are abstractions of objects that exist in the physical world;

· Roles are abstractions of the purpose or function of a person, piece of equipment, or organization;

· Incidents are abstractions of something that happened or occurred;

· Interactions are objects resulting from relationships between other objects.

Polymorphism (inclusion polymorphism). The assignment of a single name to an action, which is then shared up and down the object hierarchy, with each object of the hierarchy performing the action in the way appropriate to it.
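A small sketch of inclusion polymorphism in Python: one action name, `area`, is shared across a hierarchy, and each object performs it in its own way. The class hierarchy here is invented for illustration.

```python
# One message name, "area", answered differently by each class in the hierarchy.

class Shape:
    def area(self):
        raise NotImplementedError   # every concrete shape must define this

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

class Rectangle(Shape):
    def __init__(self, width, height):
        self.width, self.height = width, height
    def area(self):
        return self.width * self.height

# The same message sent to different objects; each handles it its own way.
shapes = [Square(3), Rectangle(2, 5)]
areas = [s.area() for s in shapes]
print(areas)  # [9, 10]
```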

Each object holds a reference to the class to which it belongs. When a message is received, the object turns to its class to process it. The message can be passed up the inheritance hierarchy if the class itself has no method to handle it. If the handler for a message is selected dynamically, the methods used as handlers are usually called virtual.

A natural means of structuring in this methodology is classes. Classes define which instance fields and methods are accessible from outside, how individual messages are processed, and so on. In pure object-oriented languages, only methods are accessible from outside, and access to an object's data is possible only through its methods.

The interaction of tasks in this methodology is carried out through the exchange of messages between the objects that implement those tasks.

An example description, in an abstract Pascal-like object-oriented language, of a "point" class that inherits from a "coordinates" class might look like this:

Type TCoordinates = class(TObject)
x, y: integer;
Constructor Init(_x, _y: integer);
Function GetX : integer;
Function GetY : integer;
Procedure SetX(_x: integer);
Procedure SetY(_y: integer);
Procedure Move(dx, dy: integer);
Destructor Done; virtual;
End;

Constructor TCoordinates.Init(_x, _y: integer);
begin
x := _x; y := _y
end;

Function TCoordinates.GetX : integer;
begin
GetX := x
end;
. . . . . . . . . . . . .

TPoint = class(TCoordinates)
Color: integer;
Constructor Init(_x, _y, _Color: integer);
Procedure SetColor(_Color: integer);
Function GetColor : integer;
End;

Constructor TPoint.Init(_x, _y, _Color: integer);
begin
Inherited Init(_x, _y);
Color := _Color
end;
. . . . . . . . . . . . .

If we later want to use instances of the class TPoint, they will need to be created by calling the constructor method:

Var P1: TPoint;

P1.Init(0, 0, 14); P1.Move(+2, -2);
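Since the example above is written in an abstract, non-runnable language, the same "coordinates"/"point" hierarchy can be sketched as runnable Python; the class and method names mirror the Pascal-like declaration.

```python
# The TCoordinates / TPoint hierarchy from the Pascal-like example,
# rewritten as a runnable Python sketch.

class TCoordinates:
    def __init__(self, x, y):       # plays the role of Constructor Init
        self.x, self.y = x, y
    def get_x(self):
        return self.x
    def get_y(self):
        return self.y
    def move(self, dx, dy):
        self.x += dx
        self.y += dy

class TPoint(TCoordinates):
    def __init__(self, x, y, color):
        super().__init__(x, y)      # the 'Inherited Init(_x, _y)' call
        self.color = color
    def set_color(self, color):
        self.color = color
    def get_color(self):
        return self.color

p1 = TPoint(0, 0, 14)               # Var P1: TPoint; P1.Init(0, 0, 14)
p1.move(+2, -2)                     # P1.Move(+2, -2)
print(p1.get_x(), p1.get_y(), p1.get_color())  # 2 -2 14
```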

To support the concept of OOP, special object-oriented programming languages were created. All OOP languages can be divided into three groups.

Pure languages, which support the object-oriented methodology in its most classic form. Such languages contain a small language core and a substantial library, as well as a set of runtime support tools.

Hybrid languages, which appeared as a result of the introduction of object-oriented constructs into popular imperative programming languages.

Trimmed languages, which appeared as a result of removing from hybrid languages the constructions that are most dangerous and unnecessary from the standpoint of OOP.

Literature

ISO/IEC 12207:1995 Information Technology – Software Life Cycle Processes.

[Wirth 1985] – Wirth N. Algorithms + data structures = programs. – M.: Mir, 1985

[Dijkstra 1975] – Dijkstra E. Notes on structured programming // Dahl U., Dijkstra E., Hoare K. Structured programming. – M.: Mir, 1975

[Kalinin, Matskevich 1991] – Kalinin A.G., Matskevich I.V. Universal programming languages. Semantic approach. – M.: Radio and communication, 1991

[Linger, Mills, Witt 1982] - Linger R., Mills H., Witt B. Theory and practice of structured programming. – M.: Nauka, 1990

[Myers 1980] – Myers G. Software reliability. – M.: Mir, 1980

[Parondzhanov 1999] – Parondzhanov V.D. How to improve your mind. – M.: Radio and communication, 1999

[Pratt 1979] – Pratt T. Programming languages: development and implementation. – M.: Mir, 1979

[Schleer, Mellor 1993] - Schleer S., Mellor S. Object-oriented analysis: modeling the world in states. – Kyiv: Dialectics, 1993.

Structured programming is the design, writing and testing of a program in strict adherence to certain rules.

The main goal of structured programming is to increase the productivity of programmers. Other goals are:

– get rid of poor program structure;

– create programs and documentation for them that could be understood, maintained and modified without the participation of the authors (the cost of maintenance and modification is usually 3-5 times more than the cost of development).

Structured programming (or step-by-step method) includes:

1. The top-down design method. It is also called the "top-down" or "from the general to the specific" method. It involves breaking the task into several simpler parts, or subtasks, separated in such a way that each subtask can be designed independently. At the same time, a plan for solving the entire problem is drawn up, whose points are the selected parts. The plan is written graphically in the form of a functional diagram (a hierarchy, or subordination, diagram), which determines the main and subordinate subtasks and the connections between them, i.e. the interface. Here it is established what initial data (or values) each subtask receives for proper functioning and what results it produces. Each subtask is then detailed further; the number of detailing steps can be arbitrary. Detailing continues until it becomes clear how to program each fragment of the algorithm.

2. Structured programming. The implementation of the idea of structured programming is based on the fact that a correct program of any complexity can be represented as a logical structure that is a composition of three basic (logical, or control) structures defining the rules for data processing: sequence (linear), branching (conditional) and repetition (loop).

3. End-to-end structural control. This consists of regular checks and coordination of the results of the work of the performers, i.e. the programmers of the various structures. Its necessity is driven by the developers' desire to reduce the cost of the programs being developed. A necessary condition for this is the early detection and correction of emerging errors and inconsistencies.

Thus, the method of constructing an algorithm and program called "top-down" or "from the general to the specific" consists in reducing the formulated problem to a sequence of simpler subtasks that are easier to handle individually than the original program as a whole. The successive selection of ever simpler subtasks from the original problem ensures that the algorithm for solving it is represented as a composition of the algorithms for the selected subtasks.
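A top-down decomposition can be sketched concretely. The task and all function names below are invented for this illustration: a "summarize a list of numbers" problem is split into independent subtasks whose interfaces (inputs and results) are fixed first, and a main plan that composes them.

```python
# Top-down decomposition sketch: fix the interface of each subtask first,
# then compose them in a "main" plan. The task itself is hypothetical.

def read_numbers(text):
    """Subtask 1: raw text -> list of numbers."""
    return [float(token) for token in text.split()]

def mean(values):
    """Subtask 2: list of numbers -> arithmetic mean."""
    return sum(values) / len(values)

def spread(values):
    """Subtask 3: list of numbers -> difference between max and min."""
    return max(values) - min(values)

def summarize(text):
    """The main (master) plan: composes the independent subtasks."""
    values = read_numbers(text)
    return {"mean": mean(values), "spread": spread(values)}

print(summarize("1 2 3 4"))  # {'mean': 2.5, 'spread': 3.0}
```

Each subtask has one input and one output, so it can be designed and checked independently, exactly as the functional-diagram approach above prescribes.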



Taken together, the (separated) subtask algorithms form a system, control of which must be taken over by a dispatcher algorithm. It is called the main (or master) algorithm, and all the rest are subordinate. A diagram showing the levels, relationships and interaction of the algorithms, both master and subordinate, is called a functional diagram; it is a diagram of the hierarchy of algorithms.

A subordinate algorithm must have one input and one output. For it, a goal must be set, and one must determine the set of permissible input values (formal value parameters), its possible own (local, internal) objects, and its possible side effects (parameters going beyond the range of permissible values, changes of parameter values, and in particular the production of results and/or data output). Thus, a subordinate algorithm is an element of the functional diagram of the algorithm that implements one independent subtask.

A part of an algorithm organized as a simple action, i.e. having one input and one output, is called a functional block.


One input means that the execution of a given part always starts with the same action. One exit means that after completing this part of the algorithm, the same action always begins to be performed.

A functional block of an algorithm belongs to the simple type of blocks.

Since the algorithm determines the order in which data is processed, it must contain, on the one hand, the processing actions and, on the other, the order in which they are performed, called the control flow. A control flow can have the following properties:

1) each block is executed;

2) each block is executed no more than once.

In the structural organization of the algorithm, three types of control flows can be distinguished.

1. Linear control flow. A control flow in which both of these properties are satisfied is called linear.


Obviously, several blocks connected by a linear flow can be combined into one functional block.

2. Branching flow of control. In this type, property (2) is satisfied, but property (1) is not satisfied.

This type of control flow organizes the execution of one of two functional blocks depending on the logical condition being tested.

3. Cyclic control flow. It organizes the repeated execution of a functional block for as long as the logical condition of its execution remains true.

In this type of control flow, property (1) is satisfied, but property (2) is not.


If the algorithm is a combination of the three considered types of control flows (basic algorithmic structures), then it is called a structured algorithm.
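All three control flows can be seen in one small structured algorithm. The example below is invented for illustration; what matters is that it is built only from the three basic structures, with no other jumps.

```python
# A structured algorithm composed solely of the three basic control flows:
# sequence (linear), branching (conditional), and repetition (loop).

def sum_and_count_negatives(values):
    total = 0          # linear flow: statements executed once, in order
    negatives = 0
    for v in values:                 # cyclic flow: the block repeats
        if v < 0:                    # branching flow: one of two blocks runs
            negatives += 1
        else:
            total += v
    return total, negatives          # one exit

print(sum_and_count_negatives([3, -1, 4, -1, 5]))  # (12, 2)
```

The whole function has one input and one output, so by the definitions above it can itself serve as a functional block inside a larger structured algorithm.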

Structured algorithms have a number of advantages over non-structured ones:

1. clarity and ease of perception of the algorithm;

2. testability (to check any of the main structures, it is enough to ensure the correctness of the functional blocks included in it);

3. modifiability.

Structure theorem: Any algorithm can be reduced to a structured algorithm.

The significance of the structure theorem for programming practice is that on its basis a structured programming method has been developed and is widely used. The basis of the method is the use of the principle of modularity in constructing complex programs. In this case, each software module is organized in the form of a standard functional block (consisting of three basic structures) and performs only one data processing function. Modules have a certain autonomy, which allows them to be debugged (searching for and eliminating errors) independently of the rest of the program and ensures relatively simple modifiability of both an individual module and the program as a whole. The effectiveness of structured programming is especially noticeable when developing complex programs - the modular principle allows you to break the overall task into component and relatively autonomous parts, each of which can be created and debugged independently. Of course, such a partition requires coordination of the input and output parameters of the modules.
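The modular principle described above can be sketched as a software module organized as a standard functional block: one input, one output, one data-processing function, with checks that exercise it in isolation from any larger program. The module and its checks below are invented for illustration.

```python
# A module organized as a standard functional block: one input, one output,
# one data-processing function. It can be debugged independently of the
# rest of the program.

def normalize(values):
    """Scale a non-empty list of numbers so that the maximum becomes 1.0."""
    peak = max(values)
    return [v / peak for v in values]

# Debugging the module in isolation, before plugging it into a larger program.
assert normalize([2.0, 4.0]) == [0.5, 1.0]
assert normalize([1.0]) == [1.0]
print("module checks passed")
```

Because the module's input and output parameters are fixed up front, coordinating it with the rest of the program reduces to agreeing on that interface, which is exactly the coordination the paragraph above calls for.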

Based on the structural approach to algorithm development, the typical stages of this process are:

1. Description of the general design of the algorithm;

2. Formalization of the task;

3. Development of a generalized algorithm scheme;

4. Development of individual blocks of the algorithm;

5. Joining of the blocks;

6. Determining the possibility of using standard blocks;

7. Development of logical control blocks;

8. Optimization of the algorithm scheme;

9. Clarification of parameters;

10. Machine resource assessment.