Using client-server technology

Over time, the limited file server (FS) model for local networks was replaced by the client-server models that appeared one after another (RDA, DBS, and AS).

Having occupied the database niche, client-server technology became the core technology of the global Internet. Later, as Internet ideas were carried over into corporate systems, Intranet technology emerged. Unlike client-server technology, it is focused not on data but on information in its final, ready-to-consume form. Computing systems built on the Intranet include central information servers and distributed components for presenting information to the end user (browsers, or navigator programs). Interaction between server and client on an Intranet is carried out by means of web technologies.

Today, client-server technology is very widespread, but it does not by itself offer universal recipes. It only gives a general idea of how a modern distributed information system should be organized. At the same time, implementations of this technology in specific software products, and even in types of software, differ quite significantly.

Classic two-tier client-server architecture

As a rule, network components are not equal: some have resources (for example, a database management system, processor, printer, or file system), while others have the means to access those resources.

Client-server technology is the architecture of a software package that divides an application program into two logically distinct parts (server and client), which interact according to a request-response scheme, each solving its own specific tasks.

A program (or computer) that controls and/or owns a resource is called a server for that resource.

A program (or computer) that requests and uses a resource is called a client of that resource.

A situation may also arise in which a given software block simultaneously acts as a server in relation to one block and as a client in relation to another.
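A minimal sketch of the request-response interaction, using Python's standard socket module (the host, the port-0 trick, and the "echo" protocol are illustrative, not from the text):

```python
import socket
import threading

# The server owns a trivial "echo" resource: it listens, receives a
# request, and returns a response. Binding to port 0 lets the OS pick
# a free port for the demo.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    request = conn.recv(1024).decode()
    conn.sendall(("echo:" + request).encode())  # the response
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

# The client requests the resource and uses the result.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello")
reply = cli.recv(1024).decode()
cli.close()
print(reply)  # -> echo:hello
```

Here both halves run on one machine for convenience; in a real system the server and client would sit on different computers, but the scheme stays the same.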

The main principle of the Client-Server technology is to divide the application functions into at least three parts:

User interface modules;

This group is also called presentation logic. Through it, users interact with the application. Regardless of the specific characteristics of the presentation logic (a command-line interface, interfaces through an intermediary, complex graphical user interfaces), its task is to provide the most efficient possible exchange of information between the user and the information system.

Data storage modules;

This group is also called business logic. Business logic determines what a particular application is specifically intended for (for example, application functions specific to the given subject area). Dividing an application along these boundaries provides a natural basis for distributing it across two or more computers.

Data processing modules (resource management functions);

This group is also called data access logic, data access algorithms, or simply data access. Data access algorithms are treated as an application-specific interface to a persistent data storage mechanism such as a DBMS or file system. Data processing modules organize an application-specific interface to the DBMS. Through this interface, the application manages database connections and queries (translating application-specific queries into SQL, retrieving results, and translating those results back into application-specific data structures). Each of the listed groups can be implemented independently of the other two. For example, without changing the programs used to store and process data, you can change the user interface so that the same data is displayed in the form of tables, histograms, or graphs. Very simple applications are often able to combine all three groups into a single program, and such a division still corresponds to functional boundaries.
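The three groups can be seen even inside one small program. A sketch in Python (the user names, scores, and pass threshold are invented for illustration):

```python
# Data access: hides how records are stored (an in-memory dict here,
# standing in for a DBMS or file system).
STORE = {"alice": 120, "bob": 80}

def load_score(user):
    return STORE[user]

# Business logic: what the application actually computes.
def grade(user):
    return "pass" if load_score(user) >= 100 else "fail"

# Presentation logic: how the result is shown to the user.
def render(user):
    return f"{user}: {grade(user)}"

print(render("alice"))  # -> alice: pass
print(render("bob"))    # -> bob: fail
```

Because the layers touch only through function calls, any one of them can be replaced, for example swapping `render` for a graphical display, without changing the other two.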

In accordance with the division of functions in each application, the following components are distinguished:

  • data presentation component;
  • application component;
  • resource management component.

In the classic client-server architecture, the three main parts of the application must be distributed across two physical modules. Typically, the application component is located on the server (for example, on the database server), the data presentation component on the client side, and the resource management component is split between the server and client parts. This is the main disadvantage of the classical two-tier architecture.

In a two-tier architecture, when dividing up the data processing algorithms, developers must have full information about the latest changes made to the system and understand those changes. This creates considerable difficulties in developing, maintaining, and installing client-server systems, since much effort must be spent coordinating the work of different groups of specialists. Contradictions often arise in the developers' work, slowing system development and forcing them to rework ready-made, proven elements.

To avoid inconsistency between different elements of the architecture, two modifications of the two-tier client-server architecture were created: “Thick Client” (“Thin Server”) and “Thin Client” (“Thick Server”).

In these architectures, developers tried to perform data processing on one of the two physical parts: either on the client side (“Thick Client”) or on the server (“Thin Client”).

Each approach has significant disadvantages. In the first case, the network is unjustifiably overloaded, because unprocessed, and therefore redundant, data is transmitted across it. In addition, system support and modification become more difficult: correcting an error or replacing a calculation algorithm requires simultaneously replacing all the interface programs, otherwise data inconsistency or errors may occur. If all information processing is performed on the server, the problem of writing and debugging stored procedures arises. A system with information processing on the server is also essentially impossible to port to another platform (OS), which is a serious drawback.

If you are nevertheless building a classic two-tier client-server architecture, you need to keep the following in mind:

The “Thick Server” architecture is equivalent to the “Thin Client” architecture: the client sends a request to the server, the server processes it and transmits the result back to the client. Such architectures have the following disadvantages:

  • implementation becomes more complicated, since languages such as SQL are poorly suited to developing such software and there are no good debugging tools;
  • the performance of programs written in SQL-like languages is significantly lower than that of programs created in other languages, which matters most for complex systems;
  • programs written in DBMS languages, as a rule, do not work very reliably; an error in them can lead to failure of the entire database server;
  • the resulting programs are completely unportable to other platforms and systems.

The “Thick Client” architecture is equivalent to the “Thin Server” architecture: the request is processed on the client side, that is, all the raw data is transferred from the server to the client. Such architectures have the following drawbacks:

  • updating the software becomes more complicated, because it must be replaced simultaneously across the entire system;
  • the distribution of access rights becomes more complicated, because access is restricted by tables rather than by actions;
  • the network is overloaded by the transmission of unprocessed data;
  • data protection is weak, since it is difficult to assign access rights correctly.

To solve these problems, a multi-tier (three or more tiers) client-server architecture is used.

Three-level model

Since the mid-1990s, the three-tier client-server architecture has gained popularity among specialists. It divides the information system by functionality into three parts: presentation logic, business logic, and data access logic. Unlike the two-tier architecture, the three-tier architecture has an additional link, the application server, designed to implement the business logic; the client that sends requests to this middleware is completely unloaded, while the capabilities of the servers are used to the maximum.

In a three-tier architecture, the client is usually not loaded with data processing functions; its main role is to present information coming from the application server. Such an interface can be implemented using standard Web technologies: a browser, CGI, and Java. This reduces the volume of data exchanged between the client and the application server, allowing client computers to connect even over slow lines such as telephone lines. The client part can therefore be so simple that in most cases it is implemented with a general-purpose browser. And if it does have to be changed, the change can be made quickly and painlessly.
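A sketch of such a thin client against a middle tier, using only Python's standard library (the JSON shape and the computed total are invented for illustration; a real middle tier would query the DBMS instead of summing a list):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class AppHandler(BaseHTTPRequestHandler):
    """A toy application server: the business logic lives here."""
    def do_GET(self):
        # Compute on the server; ship only the ready-made result.
        body = json.dumps({"total": sum([10, 20, 30])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

srv = HTTPServer(("127.0.0.1", 0), AppHandler)
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

# The "thin client" (a browser in the text, urllib here) only fetches
# and displays the result; it does no data processing of its own.
reply = json.load(urlopen(f"http://127.0.0.1:{port}/total"))
print(reply)  # -> {'total': 60}
srv.shutdown()
```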

An application server is software that acts as an intermediate layer between the server and the client. There are several categories of such middleware products:

  • Message oriented - prominent representatives are MQSeries and JMS;
  • Object Broker - prominent representatives are CORBA and DCOM;
  • Component based - prominent representatives are .NET and EJB.

Using an application server brings many benefits: for example, the load on client computers is reduced, since the application server distributes the load and provides protection against failures. And because the business logic is kept on the application server, changes in reporting or calculations do not affect the client programs at all.

There are a number of application servers from such well-known companies as Sun Microsystems, Oracle, IBM, and Borland, and each differs in the set of services it provides (leaving performance aside here). These services make it easier to program and deploy enterprise-scale applications. An application server typically provides the following services:

  • WEB Server - the package most often includes the most powerful and popular one, Apache;
  • WEB Container - allows JSP pages and servlets to run; for Apache this service is Tomcat;
  • CORBA Agent - can provide a distributed directory for storing CORBA objects;
  • Messaging Service - a message broker;
  • Transaction Service - as the name suggests, a service for managing transactions;
  • JDBC - drivers for connecting to databases, since it is the application server that has to communicate with the databases and must be able to connect to the one used in your company;
  • Java Mail - can provide SMTP service;
  • JMS (Java Messaging Service) - processing of synchronous and asynchronous messages;
  • RMI (Remote Method Invocation) - calling remote procedures.
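The last service, remote procedure calls, can be sketched with Python's standard xmlrpc modules (the list above refers to Java's RMI, for which this is only an analogy; the `add` service and port-0 trick are illustrative):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    """A function exposed as a remotely callable procedure."""
    return a + b

srv = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
srv.register_function(add, "add")
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

# The client calls the procedure as if it were local; the call travels
# over the network as a request and the return value as a response.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)
print(result)  # -> 5
srv.shutdown()
```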

Multi-tier client-server systems can be moved to Web technology fairly easily: the client part is replaced with a specialized or universal browser, and the application server is supplemented with a Web server and small server-side procedure-call programs. These programs can be developed using either the Common Gateway Interface (CGI) or the more modern Java technology.

In a three-tier system, the fastest lines can be used as the communication channel between the application server and the DBMS at minimal cost, since these servers are usually located in the same room (a server room), and transferring large volumes of information between them does not overload the rest of the network.

From all of the above it follows that the two-tier architecture is markedly inferior to the multi-tier one; for this reason the multi-tier client-server architecture now predominates, and three models are distinguished: RDA, DBS, and AS.

Various models of Client-Server technology

The first major technology underlying local area networks was the file server (FS) model. At the time, it was very common among domestic developers, who used systems such as FoxPro, Clipper, Clarion, Paradox, and so on.

In the FS model, the functions of all three components (the presentation component, the application component, and the resource access component) are combined in a single body of code executed on the server computer (host). There is no client computer as such in this architecture; data entry and display are performed by a computer or terminal in terminal-emulation mode. Applications are usually written in a fourth-generation language (4GL). One of the computers on the network serves as the file server and provides file-processing services to the other computers. It runs under a network OS and plays the role of the component for accessing information resources. The other PCs on the network run an application whose code combines the application component and the presentation component.

The interaction between client and server is as follows: a request is sent to the file server, which transmits the required data block to the DBMS located on the client computer; all processing is performed on that machine.

The exchange protocol is the set of calls that give an application access to the file system on the file server.

The positive aspects of this technology are:

  • ease of application development;
  • ease of administration and software updates;
  • low cost of workstation equipment (terminals, or cheap computers with modest specifications running in terminal-emulation mode, are always cheaper than full-fledged PCs).

The disadvantages of the FS model, however, outweigh its advantages:

Because of the considerable volume of data sent over the network, response time is critical: every character the client enters at the terminal must be transmitted to the server, processed by the application, and returned to be displayed on the terminal screen. In addition, there is the problem of distributing the load among several computers.

  • expensive server hardware, since all users share its resources;
  • lack of a graphical interface.

Solving the problems inherent in the file-server technology led to a more advanced technology called client-server.

For modern DBMSs, the client-server architecture has become the de facto standard. If a planned network technology is to have a client-server architecture, this means that the application programs implemented within it will be distributed in nature: some of the application's functions will be implemented in the client program, the rest in the server program.

Differences in the implementation of applications within the Client-Server technology are determined by four factors:

  • what types of software the logical components consist of;
  • what software mechanisms are used to implement the functions of the logical components;
  • how the logical components are distributed among the computers in the network;
  • what mechanisms are used to connect the components to each other.

Based on this, three approaches are distinguished, each of which is implemented in the corresponding Client-Server technology model:

  • the remote data access model (Remote Data Access, RDA);
  • the database server model (DataBase Server, DBS);
  • the application server model (Application Server, AS).

A significant advantage of the RDA model is its extensive selection of application development tools, which enable rapid development of desktop applications that work with SQL-oriented DBMSs. These tools typically support a graphical user interface in the OS, as well as automatic code generation in which presentation functions and application functions are mixed.

Despite its wide distribution, the RDA model is giving way to the more technologically advanced DBS model.

The database server (DBS) model is a network architecture of client-server technology based on a stored procedure mechanism that implements the application functions. In the DBS model, the notion of an information resource is narrowed to the database, because the stored procedure mechanism is available only in DBMSs, and even then not in all of them.

The advantages of the DBS model over the RDA model are obvious: the possibility of centralized administration of various functions; reduced network traffic, because calls to stored procedures are transmitted over the network instead of SQL queries; the ability to share a procedure between several applications; and savings in computing resources through reuse of a once-created procedure execution plan.

The application server (AS) model is a network architecture of client-server technology in which a process running on the client computer is responsible for the user interface (data input and display), while the application component, the most important element of this model, runs as an application server on a remote computer (or two computers). The application server is implemented as a group of application functions packaged as services. Each service provides certain operations to any programs that are willing and able to use them.

Having reviewed all the models of client-server technology, we can draw the following conclusion: the RDA and DBS models are both based on a two-tier scheme of function separation. In the RDA model, the application functions are assigned to the client; in the DBS model, they are executed by the DBMS kernel. In the RDA model, the application component merges with the presentation component; in the DBS model, it is integrated into the resource access component.

The AS model implements a three-tier function separation scheme, where the application component is highlighted as the main isolated element of the application, which has standardized interfaces with two other components.

The results of the analysis of the “File Server” and “Client - Server” technology models are presented in Table 1.

Despite its name, client-server technology is also a distributed computing system. Here, distributed computing means a client-server architecture involving several servers. In the context of distributed processing, the term "server" simply means the program that responds to requests and performs the actions the client asks for. Since distributed computing is one kind of client-server system, users gain the same benefits, for example, increased overall throughput and the ability to multitask. Integrating discrete network components so that they work as a single whole also improves efficiency and reduces costs.

Because processing can occur anywhere on the network, distributed computing in a client-server architecture scales efficiently. To achieve a balance between server and client, an application component should run on the server only if centralized processing is more efficient. If the program logic that works with centralized data resides on the same machine as the data, the data does not have to be transferred over the network, so the demands on the network environment can be reduced.

As a result, we can draw the following conclusion: for small information systems that do not require a graphical user interface, the FS model can be used. The question of a graphical interface is easily resolved with the RDA model. The DBS model is a very good option for database management systems (DBMS). The AS model is the best option for building large information systems, as well as for use over low-speed communication channels.

3 Client-server technology

Client-server technology replaced the centralized scheme of managing the computing process on mainframes back in the 1980s. Thanks to the high resilience and reliability of the computing system, ease of scaling, the ability of a user to work with several applications simultaneously, high efficiency of information processing, a high-quality user interface, and other capabilities, this very promising and far from exhausted technology has continued to develop.

Over time, the limited file server (FS) model for local networks was replaced by the successively appearing client-server structures (RDA, DBS, and AS).

Having occupied the database niche, client-server technology became the main technology of the global Internet. Later, as Internet ideas were carried over into corporate systems, Intranet technology appeared. Unlike client-server technology, it is focused not on data but on information in its final, ready-to-consume form. Computing systems built on the basis of the Intranet include central information servers and distributed components for presenting information to the end user (navigator programs, or browsers). The interaction between client and server on an Intranet occurs by means of web technologies.

Today, client-server technology is becoming increasingly widespread, but in itself it does not offer universal recipes. It only gives a general idea of how a modern distributed information system should be organized. At the same time, implementations of this technology in specific software products, and even in types of software, differ quite significantly.

3.1 Classic two-tier “Client-Server” architecture

Typically, network components are not equal: some have access to resources (for example, a printer, processor, database management system (DBMS), file system, and so on), others have the ability to access these resources.

Client-server technology is the architecture of a software package in which the application program is divided between two logically distinct components (client and server) that interact according to a request-response scheme, each solving its own tasks (Figure 6).

Figure 6 – Client-server architecture

A computer (or program) that controls and/or owns a resource is called the server of that resource.

A computer (or program) that requests and uses a resource is called the client of that resource.

The client and server can be located either on the same computer (PC) or on different PCs on the network. A situation may also arise when a certain software block simultaneously performs the functions of a server in relation to one block and a client in relation to another.

The basic principle of the Client-Server technology is to divide the application functions into at least three groups:

- user interface modules;

This group is also called presentation logic. Users interact with the application through this group. Regardless of the specific characteristics of presentation logic (command line interface, complex graphical user interfaces, interfaces through a proxy), its purpose is to provide a means for the most efficient exchange of information between the user and the information system.

- data storage modules;

This group is also called business logic. Business logic determines what the application is specifically intended for (for example, application functions specific to a given subject area). Dividing an application across program boundaries provides a natural basis for distributing an application across multiple computers.

- data processing modules (resource management functions);

This group is also called data access logic or data access algorithms. Data access algorithms have historically been viewed as an application-specific interface to a persistent data storage mechanism such as a file system or DBMS. Using data processing modules, an application-specific interface to the DBMS is organized. Using the interface, the application manages database connections and queries (translating application-specific queries into SQL, retrieving results, and translating those results back into application-specific data structures).
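A minimal sketch of such a data access module in Python, using the built-in sqlite3 module as the DBMS (the orders table and both functions are invented for illustration):

```python
import sqlite3

# The rest of the application never writes SQL; it calls these functions,
# which translate application-level requests into SQL and translate the
# SQL results back into plain Python values.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

def save_order(amount):
    cur = conn.execute("INSERT INTO orders (amount) VALUES (?)", (amount,))
    conn.commit()
    return cur.lastrowid

def total_orders():
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders"
    ).fetchone()
    return total

save_order(19.5)
save_order(30.5)
print(total_orders())  # -> 50.0
```

Swapping sqlite3 for another DBMS, or for a file, would change only this module; the presentation and business logic would not notice.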

Each of these groups can be implemented independently of the other two. For example, without changing the programs used to store and process data, you can change the user interface so that the same data is displayed in the form of tables, graphs, or histograms. Very simple applications are often able to combine all three parts into a single program, and such separation corresponds to functional boundaries.

In accordance with the division of functions in any application, the following components are distinguished:

- data presentation component;

- application component;

- resource management component.

In a classic client-server architecture, the three main parts of the application must be distributed across two physical modules. Typically, the application component is located on the server (for example, a database server), the data presentation component is located on the client side, and the resource management component is distributed between the client and server sides. This is the main disadvantage of the classical two-tier architecture.

In a two-tier architecture, when breaking down data processing algorithms, developers must have full information about the latest changes made to the system and understand these changes, which creates great difficulties in the development of client-server systems, their installation and maintenance, since significant effort must be spent coordinating the actions of different groups of specialists. Contradictions often arise in the actions of developers, and this slows down the development of the system and forces them to change ready-made and proven elements.

To avoid inconsistency between various elements of the architecture, two modifications of the two-tier Client-Server architecture were created: “Thick Client” (“Thin Server”) and “Thin Client” (“Thick Server”).

In these architectures, developers tried to perform data processing on one of two physical parts - either on the client side ("Thick Client") or on the server ("Thin Client").

Each approach has its drawbacks. In the first case, the network is unjustifiably overloaded, because unprocessed, and therefore redundant, data is transmitted through it. In addition, system support and changes become more difficult, since replacing a calculation algorithm or correcting an error requires a simultaneous complete replacement of all interface programs, otherwise errors or data inconsistency may occur. If all information processing is performed on the server, then the problem of describing built-in procedures and their debugging arises. A system with information processing on a server is absolutely impossible to transfer to another platform (OS), which is a serious drawback.

If you are still developing a two-level classic “Client-Server” architecture, then you need to remember the following:

- the “Thick Server” architecture is similar to the “Thin Client” architecture (Figure 33);

Figure 33. – Thin Client architecture

The client sends a request to the server; the server processes the request and transmits the result to the client. However, these architectures have the following disadvantages:

Implementation becomes more complicated, since languages such as SQL are not well suited to developing such software and there are no good debugging tools;

The performance of programs written in SQL-like languages is significantly lower than that of programs created in other languages, which is important for complex systems;

Programs written in DBMS languages usually do not work reliably; an error in them can lead to failure of the entire database server;

The resulting programs are completely unportable to other systems and platforms.

- the “Thin Server” architecture is similar to the “Thick Client” architecture (Figure 34).

Request processing occurs on the client side, that is, all raw data from the server is transferred to the client. However, the architectures have the following disadvantages:

Updating software becomes more difficult, since it must be replaced simultaneously across the entire system;

The distribution of access rights becomes more complicated, since access is restricted by tables rather than by actions;

The network is overloaded due to the transmission of unprocessed data through it;

Weak data protection, since it is difficult to assign access rights correctly.

Figure 34. – “Thick Client” architecture

To solve these problems, multi-level (three or more levels) Client-Server architectures are used.

3.2 Three-level model

Since the mid-1990s, the three-tier “Client-Server” architecture has received recognition from experts; it divides the information system by functionality into three separate components: presentation logic, business logic, and data access logic. Unlike the two-tier architecture, the three-tier architecture has an additional link, the application server, which is designed to implement the business logic; the client that sends requests to this middleware is completely unloaded, while the capabilities of the servers are used to the maximum.

In a three-tier architecture, the client is usually not overloaded with data processing functions, but performs its main role as a system for presenting information coming from the application server. Such an interface can be implemented using standard Web technology tools - a browser, CGI and Java. This reduces the amount of data transferred between the client and the application server, allowing client computers to connect even over slow lines such as telephone lines. In addition, the client side can be so simple that in most cases it is implemented using a universal browser. But if you still have to change it, then this procedure can be carried out quickly and painlessly.

An application server is software that forms an intermediate layer between the client and the server (Figure 35).

Figure 35 - Application Server

There are several categories of middleware products:

Message oriented – prominent representatives are MQSeries and JMS;

Object Broker – prominent representatives are CORBA and DCOM;

Component based – prominent representatives are .NET and EJB.

Using an application server provides many benefits: for example, the load on client computers is reduced because the application server distributes the load and provides protection against failures. Since the business logic is stored on the application server, any changes in reporting or calculations do not affect client programs in any way.

There are several application servers from such well-known companies as Sun Microsystems, Borland, IBM, and Oracle, and each of them differs in the set of services provided (we will not take performance into account here). These services make it easy to program and deploy enterprise-scale applications. Typically an application server provides the following services:

WEB Server – most often the most popular and powerful Apache is included in the package;

WEB Container – allows you to run JSP and servlets. For Apache, this service is Tomcat;

We will build our further distributed computing systems using client-server technology. This technology provides a unified approach to exchanging information between devices, whether they are computers on different continents connected via the Internet or Arduino boards lying on the same table and connected by a twisted-pair cable.

In future lessons I plan to talk about creating information networks using:

  • Ethernet local network controllers;
  • WiFi modems;
  • GSM modems;
  • Bluetooth modems.

All these devices communicate using a client-server model. The same principle applies to the transfer of information on the Internet.

I do not pretend to provide complete coverage of this voluminous topic. I want to give the minimum information necessary to understand the following lessons.

Client-server technology.

A client and a server are programs that run on different computers, controllers, and other similar devices. They interact with each other over a computer network using network protocols.

Server programs are service providers. They constantly expect requests from client programs and provide them with their services (transfer data, solve computational problems, manage something, etc.). The server must be constantly on and “listening” to the network. Each server program can typically process requests from several client programs.

The client program is the initiator of the request, which can be made at any time. Unlike a server, the client does not have to be constantly on. It is enough to connect at the time of the request.

So, in general terms, the client-server system looks like this:

  • There are computers, Arduino controllers, tablets, cell phones and other smart devices.
  • All of them are included in the common computer network. Wired or wireless - it doesn't matter. They can even be connected to different networks connected to each other via a global network, such as the Internet.
  • Some devices have server programs installed. These devices are called servers, must be constantly turned on, and their task is to process requests from clients.
  • Client programs run on other devices. Such devices are called clients; they initiate requests to servers. They are turned on only at moments when it is necessary to contact the servers.

For example, if you want to turn on an iron from your cell phone via WiFi, then the iron is the server and the phone is the client. The iron must be constantly plugged in, and you launch the control program on the phone as needed. If you also connect a computer to the same WiFi network, it can control the iron too; that is another client. A WiFi microwave oven added to the system would be another server. The system can be expanded this way endlessly.

Data transmission in packets.

Client-server technology is generally intended for large-scale information networks. Data traveling from one subscriber to another may take a complex path through different physical channels and networks. The delivery path may vary depending on the state of individual network elements: if some component is down at a given moment, the data takes a different route. Delivery times may vary. The data may even be lost and never reach the recipient.

Therefore, simply sending data in a loop, as we sent data to a computer in some previous lessons, is impossible in complex networks. Information is transmitted in limited portions – packets. On the transmitting side the information is divided into packets, and on the receiving side it is reassembled ("glued") from the packets into continuous data. The packet size is usually no more than a few kilobytes.

A packet is analogous to an ordinary postal letter: in addition to the information itself, it must contain the recipient's address and the sender's address.

The packet consists of a header and an information part. The header contains the addresses of the recipient and the sender, as well as service information necessary for “gluing” packets on the receiving side. Network equipment uses the header to determine where to forward the packet.
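As a toy illustration of the idea (the format below is made up for this sketch, not a real IP or TCP header), a packet can be modeled as a fixed-size header carrying the sender address, recipient address, and a sequence number for "gluing", followed by a chunk of payload:

```python
import struct

# Toy packet = fixed header + payload chunk. The header carries the
# addressing and reassembly information, just like a real packet header.
HEADER = struct.Struct("!4s4sH")  # 4-byte src, 4-byte dst, 16-bit seq number

def make_packet(src: bytes, dst: bytes, seq: int, chunk: bytes) -> bytes:
    """Prepend the header to a chunk of data."""
    return HEADER.pack(src, dst, seq) + chunk

def split_header(packet: bytes):
    """Separate a received packet back into header fields and payload."""
    src, dst, seq = HEADER.unpack(packet[:HEADER.size])
    return src, dst, seq, packet[HEADER.size:]

pkt = make_packet(bytes([192, 168, 1, 10]), bytes([192, 168, 1, 20]), 1, b"hello")
src, dst, seq, payload = split_header(pkt)
print(seq, payload)  # 1 b'hello'
```

Real headers carry more fields (checksums, lengths, flags), but the principle is the same: the network equipment reads only the header to decide where to forward the packet.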

Packet addressing.

There is a lot of detailed information on this topic on the Internet. I want to present it as close to practice as possible.

In the next lesson, to transfer data using client-server technology, we will have to specify information for addressing packets, i.e. where the data packets should be delivered. In general, we will have to set the following parameters:

  • Device IP address;
  • subnet mask;
  • Domain name;
  • IP address of the network gateway;
  • MAC address;
  • port.

Let's figure out what it is.

IP addresses.

Client-server technology assumes that all subscribers of all networks in the world are connected to a single global network. In many cases this is actually true: most computers and mobile devices are connected to the Internet. Therefore, an addressing format designed for such a huge number of subscribers is used. Even when client-server technology is used in local networks, the accepted address format is retained, despite its obvious redundancy.

Each device connection point to the network is assigned a unique number - an IP address (Internet Protocol Address). The IP address is assigned not to the device (computer), but to the connection interface. In principle, devices can have several connection points, which means several different IP addresses.

An IP address is a 32-bit number or 4 bytes. For clarity, it is customary to write it as 4 decimal numbers from 0 to 255, separated by dots. For example, my server's IP address is 31.31.196.216.
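To see that the dotted notation is only a human-friendly view of one 32-bit number, here is a small sketch using Python's standard `ipaddress` module (the address is the server address from the text):

```python
import ipaddress

# An IPv4 address is a 32-bit number; the dotted form is for readability.
addr = ipaddress.IPv4Address("31.31.196.216")
print(int(addr))          # the same address as one integer: 522175704
print(addr.packed.hex())  # the raw 4 bytes in hex: 1f1fc4d8
print(ipaddress.IPv4Address(522175704))  # and back again: 31.31.196.216
```

Each of the four decimal numbers is simply one byte of that 32-bit value (31 = 0x1f, 196 = 0xc4, 216 = 0xd8).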

To make it easier for network equipment to build a route for delivering packets in IP address format, logical addressing has been introduced. The IP address is divided into 2 logical fields: network number and node number. The sizes of these fields depend on the value of the first (most significant) octet of the IP address and are divided into 5 groups - classes. This is the so-called classful routing method.

Class  Leading bits  Format (S – network, U – node)  Start address  End address      Networks  Nodes
A      0             S.U.U.U                         0.0.0.0        127.255.255.255  128       16777216
B      10            S.S.U.U                         128.0.0.0      191.255.255.255  16384     65534
C      110           S.S.S.U                         192.0.0.0      223.255.255.255  2097152   254
D      1110          group address                   224.0.0.0      239.255.255.255  –         2^28
E      1111          reserved                        240.0.0.0      255.255.255.255  –         2^27

Class A is intended for use in large networks. Class B is used in medium-sized networks. Class C is intended for networks with a small number of nodes. Class D is used to address groups of hosts, and Class E addresses are reserved.

There are restrictions on the choice of IP addresses. The following seem the most important for our purposes:

  • The address 127.0.0.1 is called the loopback address and is used for testing programs within a single device. Data sent to this address is not transmitted over the network; it is returned to the upper-level program as if it had been received from the network.
  • “Grey” (private) addresses are IP addresses intended only for devices operating on local networks without direct Internet access. Internet routers never forward packets with these addresses:
    • Class A: 10.0.0.0 – 10.255.255.255
    • Class B: 172.16.0.0 – 172.31.255.255
    • Class C: 192.168.0.0 – 192.168.255.255
  • If the network number field contains all 0s, then this means that the node belongs to the same network as the node that sent the packet.

Subnet masks.

With the classful routing method, the number of network and host address bits in the IP address is determined by the class. There are only 5 classes, of which 3 are actually used. As a result, classful routing in most cases does not allow an optimal choice of network size, which leads to wasteful use of the IP address space.

In 1993, classless routing was introduced, and it is now the primary method. It allows you to choose the required number of network nodes flexibly and therefore rationally. This addressing method uses variable-length subnet masks.

The network node is assigned not only an IP address, but also a subnet mask. It is the same size as an IP address, 32 bits. The subnet mask determines which part of the IP address belongs to the network and which part to the host.

Each bit of the subnet mask corresponds to the IP address bit in the same position. A one in a mask bit indicates that the corresponding IP address bit belongs to the network address; a zero indicates that it belongs to the host address.

When transmitting a packet, a node uses the mask to extract the network part from its own IP address and compares it with the network part of the destination address. If they match, the sending and receiving nodes are on the same network, and the packet is delivered locally. Otherwise the packet is sent through the network interface toward another network. Note that the subnet mask is not part of the packet; it only affects the node's routing logic.
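The comparison described above is a bitwise AND of each address with the mask. A minimal sketch of that check (the addresses are arbitrary examples):

```python
import ipaddress

def same_network(ip_a: str, ip_b: str, mask: str) -> bool:
    """True if both addresses fall in the same network under the given mask."""
    a = int(ipaddress.IPv4Address(ip_a))
    b = int(ipaddress.IPv4Address(ip_b))
    m = int(ipaddress.IPv4Address(mask))
    # AND with the mask keeps only the network bits of each address.
    return (a & m) == (b & m)

print(same_network("192.168.1.10", "192.168.1.200", "255.255.255.0"))  # True
print(same_network("192.168.1.10", "192.168.2.10", "255.255.255.0"))   # False
```

This is exactly the decision a node makes before every transmission: deliver locally, or hand the packet to the gateway.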

Essentially, the mask allows one large network to be divided into several subnets. The size of any subnet (its number of IP addresses) must be a power of 2: 4, 8, 16, and so on. This condition follows from the fact that the network and host address bits must be contiguous. You cannot, for example, allocate 5 bits to the network address, then 8 bits to the host address, and then more network bits again.

An example entry for a network with four nodes looks like this:

Network 31.34.196.32, mask 255.255.255.252

A subnet mask always consists of consecutive ones (signs of a network address) and consecutive zeros (signs of a host address). Based on this principle, there is another way to record the same address information.

Network 31.34.196.32/30

/30 is the number of ones in the subnet mask. In this example two zero bits remain, which corresponds to a 2-bit host address, i.e. four nodes.

Network size (nodes)  Long mask        Short mask
4                     255.255.255.252  /30
8                     255.255.255.248  /29
16                    255.255.255.240  /28
32                    255.255.255.224  /27
64                    255.255.255.192  /26
128                   255.255.255.128  /25
256                   255.255.255.0    /24
  • The last number of the first subnet address must be divisible by the size of the network.
  • The first and last addresses of the subnet are service addresses and cannot be used.
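All of the rules above can be checked with Python's standard `ipaddress` module, using the /30 network from the example in the text:

```python
import ipaddress

# The /30 network from the example: 4 addresses in total.
net = ipaddress.IPv4Network("31.34.196.32/30")
print(net.netmask)        # 255.255.255.252 -- the long form of /30
print(net.num_addresses)  # 4
# hosts() excludes the first (network) and last (broadcast) addresses,
# leaving the two usable node addresses.
print([str(h) for h in net.hosts()])  # ['31.34.196.33', '31.34.196.34']
```

Note that `IPv4Network` would reject `31.34.196.33/30`, since the first address of a subnet must be aligned to the subnet size, which is the divisibility rule stated above.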

Domain name.

It is inconvenient for a person to work with IP addresses. They are strings of numbers, while a person is used to reading letters, preferably arranged into meaningful words. To make networks more convenient for people, a different system of identifying network devices is used.

Any IP address can be assigned a letter identifier that is more understandable to humans. The identifier is called a domain name or domain.

A domain name is a sequence of two or more words separated by periods. The last word is a first-level domain, the penultimate word is a second-level domain, etc. I think everyone knows about this.

The mapping between IP addresses and domain names is maintained in a distributed database served by DNS servers. Every owner of a second-level domain must have a DNS server. DNS servers are united into a complex hierarchical structure and can exchange data with each other about the correspondence between IP addresses and domain names.

But all this is not so important. The main thing for us is that any client or server can send a DNS query to a DNS server, i.e. a request to resolve a domain name to an IP address, or vice versa. If the DNS server knows the mapping, it responds; if it does not, it looks up the information on other DNS servers and then informs the client.
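From a program, such a lookup is one call. A minimal sketch using Python's `socket` module ("localhost" resolves locally; for a real domain name the very same call sends a query to a DNS server):

```python
import socket

# Forward lookup: name -> IP address.
print(socket.gethostbyname("localhost"))     # 127.0.0.1

# Reverse lookup: IP address -> name (uses the same machinery).
name, aliases, addresses = socket.gethostbyaddr("127.0.0.1")
print(name)
```

The same `gethostbyname` call is what hides behind a browser's address bar: the name you type is resolved to an IP address before any packet is sent.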

Network gateways.

A network gateway is a hardware router or software for connecting networks with different protocols. In general, its task is to convert protocols of one type of network into protocols of another network. Typically, networks have different physical data transmission media.

An example is a local network of computers connected to the Internet. Within their local network (subnet), computers communicate without the need for any intermediate device. But as soon as a computer needs to communicate with another network, such as accessing the Internet, it uses a router, which acts as a network gateway.

Routers, which everyone has with a wired Internet connection, are one example of a network gateway. A network gateway is the point through which access to the Internet is provided.

In general, using a network gateway looks like this:

  • Let's say we have a system of several Arduino boards connected via an Ethernet local network to a router, which in turn is connected to the Internet.
  • On the local network we use “gray” IP addresses (this is described above), which do not allow access to the Internet. The router has two interfaces: our local network with a “gray” IP address and an interface for connecting to the Internet with a “white” address.
  • In the node configuration we specify the gateway address, i.e. “white” IP address of the router interface connected to the Internet.
  • Now, if a router receives a packet from a device with a “gray” address with a request to receive information from the Internet, it replaces the “gray” address in the packet header with its “white” one and sends it to the global network. Having received a response from the Internet, it replaces the “white” address with the “gray” one remembered during the request and transmits the packet to the local device.

MAC address.

A MAC address is a unique identifier of a local-network device. As a rule, it is written into the device's permanent memory by the equipment manufacturer.

The address consists of 6 bytes. It is customarily written in hexadecimal in one of the following formats: c4-0b-cb-8b-c3-3a or c4:0b:cb:8b:c3:3a. The first three bytes are a unique identifier of the manufacturer. The remaining bytes are called the "interface number", and their value is unique to each specific device.

An IP address is logical and is assigned by an administrator; a MAC address is a physical, permanent address. It is used to address frames, for example, in local Ethernet networks. When a packet is sent to a specific IP address, the computer determines the corresponding MAC address using a special ARP table. If the table has no entry for that IP address, the computer requests the MAC address using a special protocol (ARP). If the MAC address cannot be determined, packets cannot be sent to that device.

Ports.

Using the IP address, network equipment determines the recipient of the data. But a device, such as a server, can have multiple applications running. In order to determine which application the data is intended for, another number is added to the header - the port number.

The port identifies the receiving process of a packet within a single IP address.

16 bits are allocated for the port number, which corresponds to numbers from 0 to 65535. The first 1024 ports are reserved for standard processes such as mail, websites, etc. It is better not to use them in your applications.
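A small sketch of how an application gets a port above the reserved range: binding a socket to port 0 asks the operating system to pick any free port, and `getsockname()` reports which one was chosen.

```python
import socket

# Ports 0-1023 are reserved for standard services; applications should
# use higher numbers. Port 0 means "let the OS choose a free port".
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
addr, port = s.getsockname()
print(addr, port)  # the chosen ephemeral port will be above 1023
s.close()
```

Server programs, by contrast, bind to a fixed, well-known port number so that clients know where to send their requests.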

Static and dynamic IP addresses. DHCP protocol.

IP addresses can be assigned manually, but that is a tedious job for the administrator, and for a user without the necessary knowledge it becomes a difficult task. In addition, not all users are constantly connected to the network, while the static addresses allocated to them cannot be used by other subscribers.

The problem is solved by using dynamic IP addresses. Dynamic addresses are issued to clients for a limited time while they are continuously online. Dynamic address distribution occurs under the control of the DHCP protocol.

DHCP is a network protocol that allows devices to automatically obtain IP addresses and other parameters needed to operate on a network.

At the configuration stage, the client device contacts a DHCP server and receives the required parameters from it. The server can be given a range of addresses to distribute among network devices.

View network device settings using the command line.

There are many ways to find out the IP address or MAC address of your network card. The simplest is to use the operating system's command-line (cmd) commands. I'll show how to do this using Windows 7 as an example.

The cmd.exe file is located in the Windows\System32 folder. This is a command line interpreter. It can be used to obtain system information and configure the system.

Open the Run window: choose Start -> Run from the menu, or press the Win+R key combination.

Type cmd and press OK or Enter. A command interpreter window appears.

Now you can specify any of the numerous commands. For now we are interested in commands for viewing the configuration of network devices.

First of all, there is the ipconfig command, which displays the network adapter settings.

The detailed variant is ipconfig /all.

The getmac command shows only MAC addresses.

The arp -a command shows the table of correspondence between IP and MAC addresses (the ARP table).

You can check connectivity to a network device with the ping command:

  • ping domain name
  • ping IP address

My site server is responding.

Basic network protocols.

I'll briefly talk about the protocols we need in later lessons.

A network protocol is a set of agreements, rules that define the exchange of data on a network. We are not going to implement these protocols at a low level. We intend to use ready-made hardware and software modules that implement network protocols. Therefore, there is no need to go into detail about the formats of headers, data, etc. But why each protocol is needed, how it differs from others, and when it is used, you need to know.

IP protocol.

The Internet Protocol delivers data packets from one network device to another. IP unites local networks into a single global network, ensuring the transfer of packets between any network devices. Of the protocols presented in this lesson, IP sits at the lowest level; all the others use it.

The IP protocol operates without establishing connections. It simply tries to deliver the packet to the specified IP address.

IP treats each data packet as a separate independent unit, not connected to other packets. It is impossible to transmit a significant amount of related data using only the IP protocol. For example, in Ethernet networks the maximum data volume of one IP packet is only 1500 bytes.

The IP protocol has no mechanisms for checking the correctness of the final data. Checksums protect only the integrity of the header. That is, IP does not guarantee that the data in a received packet is correct.

If an error occurs during delivery and a packet is lost, IP makes no attempt to resend it. That is, IP does not guarantee that a packet will be delivered at all.

Briefly, we can say that the IP protocol:

  • delivers small (no more than 1500 bytes) individual data packets between IP addresses;
  • does not guarantee that the delivered data will be correct;

TCP protocol.

Transmission Control Protocol is the primary data transmission protocol for the Internet. It uses the ability of the IP protocol to deliver information from one node to another. But unlike IP, it:

  • Allows you to transfer large amounts of information. The division of data into packets and “gluing” of data on the receiving side is provided by TCP.
  • Data is transmitted with a pre-established connection.
  • Monitors data integrity.
  • In case of data loss, it initiates repeated requests for lost packets and eliminates duplication when receiving copies of the same packet.

Essentially, the TCP protocol takes all the data-delivery problems off the application: if delivery is possible at all, TCP will deliver the data. It is no coincidence that it is the main data-transfer protocol in networks; the term "TCP/IP network" is common.
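A minimal sketch of a TCP exchange, run entirely on localhost: the server listens and waits, the client connects, sends a request, and receives a response. The message and the uppercase "processing" are, of course, just illustrative.

```python
import socket
import threading

def serve(listener: socket.socket) -> None:
    conn, _ = listener.accept()        # wait for one client to connect
    with conn:
        request = conn.recv(1024)      # read the request
        conn.sendall(request.upper())  # "process" it and send the reply

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,)).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, server")
reply = client.recv(1024)
print(reply)                           # b'HELLO, SERVER'
client.close()
server.close()
```

Note the request-response pattern from the beginning of the lesson: the server is started first and waits; the client initiates the exchange. TCP guarantees the bytes arrive intact and in order.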

UDP protocol.

User Datagram Protocol is a simple protocol for connectionless data transfer. Data is sent in one direction without checking whether the receiver is ready or confirming delivery. The data size of a packet can be up to 64 kBytes, but in practice many networks only support data sizes of 1500 bytes.

The main advantages of this protocol are its simplicity and high transfer speed. It is often used in applications where delivery speed is critical, such as video streams: there it is preferable to lose a few packets than to wait for stragglers.

The UDP protocol is characterized by:

  • it is a connectionless protocol;
  • it delivers small individual packets of data between IP addresses;
  • it does not guarantee that the data will be delivered at all;
  • it will not inform the sender whether the data was delivered and will not retransmit the packet;
  • there is no ordering of packets, the order of message delivery is not defined.
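The contrast with TCP is visible in code: a UDP sender just fires a datagram at an address and port, with no connection and no acknowledgment. A sketch on localhost (where the datagram will in fact arrive, though UDP itself gives no such guarantee); the payload is an arbitrary example:

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: any free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Fire and forget: no connection, no delivery confirmation.
sender.sendto(b"temperature=23.5", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1500)   # one datagram, up to 1500 bytes here
print(data)                            # b'temperature=23.5'
sender.close()
receiver.close()
```

Compare this with the TCP sketch above: no `listen`, no `accept`, no established connection — which is exactly why UDP is both fast and unreliable.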

HTTP protocol.

Most likely, I will write more about this protocol in future lessons. For now, briefly: this is the HyperText Transfer Protocol. It is used to retrieve information from websites. In this case a web browser acts as the client, and a network device acts as the web server.
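The whole request-response cycle can be sketched on localhost with Python's standard library: a tiny web server running in a thread and an HTTP GET against it. In real use the client is a browser and the server is a site on the Internet; the page content here is invented for the example.

```python
import http.server
import threading
import urllib.request

class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>Hello from the web server</h1>"
        self.send_response(200)                       # HTTP status: OK
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the console output clean
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=srv.serve_forever, daemon=True).start()

# The client side: one HTTP GET request, as a browser would send.
with urllib.request.urlopen("http://127.0.0.1:%d/" % srv.server_port) as resp:
    status, page = resp.status, resp.read()
print(status, page)
srv.shutdown()
```

HTTP runs on top of TCP, which in turn runs on top of IP — the layering this lesson has been building up.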

In the next lesson we will apply client-server technology in practice using an Ethernet network.

Client-server architecture is used in a large number of network technologies used to access various network services. Let's briefly look at some types of such services (and servers).

Web servers

Initially, they provided access to hypertext documents via HTTP (HyperText Transfer Protocol). Now they support extended capabilities, in particular working with binary files (images, multimedia, etc.).

Application Servers

Designed for the centralized solution of applied problems in a certain subject area. For this, users are given the right to launch server programs for execution. Using application servers reduces the requirements for client configuration and simplifies overall network management.

Database servers

Database servers are used to process user queries written in SQL. The DBMS resides on the server, and client applications connect to it.

File servers

A file server stores information in the form of files and provides users with access to it. As a rule, it also offers a certain level of protection against unauthorized access.

Proxy server

First, it acts as an intermediary, helping users obtain information from the Internet while protecting the network.

Secondly, it stores frequently requested information in a cache on its local disk and delivers it to users quickly, without accessing the Internet again.

Firewalls

Firewalls analyze and filter passing network traffic in order to ensure network security.

Mail servers

Provide services for sending and receiving electronic mail messages.

Remote access servers (RAS)

These systems provide communication with the network via dial-up lines. A remote employee can use the resources of a corporate LAN by connecting to it using a regular modem.

These are just a few types of the entire variety of client-server technologies used in both local and global networks.

To access particular network services, clients are used whose capabilities are characterized by the notion of "thickness". It reflects the hardware configuration and the software available to the client. Let's consider the boundary cases:

Thin client

This term defines a client whose computing resources are only sufficient to run the required network application through a web interface. The user interface of such an application is built from static HTML (JavaScript execution is not assumed); all application logic runs on the server.
For a thin client to work, it is enough to be able to launch a web browser, in whose window all actions take place. For this reason a web browser is often called a "universal client".

"Thick" client

This is a workstation or personal computer running its own disk operating system and equipped with the necessary set of software. Thick clients turn to network servers mainly for additional services (for example, access to a web server or a corporate database).
A "thick" client also refers to a client network application running under the local OS. Such an application combines the data-presentation component (the OS's graphical user interface) and the application component (the computing power of the client computer).

Recently another term has come into increasing use: the "rich" client. The "rich" client is a kind of compromise between the "thick" and "thin" clients. Like the "thin" client, the "rich" client presents a graphical interface described using XML tools, but it also includes some functionality of thick clients (for example, a drag-and-drop interface, tabs, multiple windows, drop-down menus, etc.).

The application logic of the "rich" client is also implemented on the server. Data is exchanged in a standard format based on the same XML (the SOAP and XML-RPC protocols) and is interpreted by the client.

Some basic XML-based rich client protocols are given below:

  • XAML (eXtensible Application Markup Language) - developed by Microsoft, used in applications on the .NET platform;
  • XUL (XML User Interface Language) - a standard developed within the Mozilla project, used, for example, in the Mozilla Thunderbird email client and the Mozilla Firefox browser;
  • Flex - an XML-based multimedia technology developed by Macromedia/Adobe.

Conclusion

So, the main idea of the client-server architecture is to divide a network application into several components, each of which implements a specific set of services. The components of such an application can run on different computers, performing server and/or client functions. This improves the reliability, security, and performance both of network applications and of the network as a whole.

Control questions

1. What is the main idea of client-server interaction?

2. What are the differences between the concepts of “client-server architecture” and “client-server technology”?

3. List the components of client-server interaction.

4. What tasks does the presentation component perform in the client-server architecture?

5. Why are the database access tools separated into a distinct component in the client-server architecture?

6. Why is the business logic identified as a separate component in the client-server architecture?

7. List the models of client-server interaction.

8. Describe the file-server model.

9. Describe the database server model.

10. Describe the application server model.

11. Describe the terminal server model.

12. List the main types of servers.

Client-server technology provides for the presence of two independent interacting processes - a server and a client, communication between which is carried out over the network.

Servers are processes that provide services to clients, while clients are processes that send requests and wait for a response from the server.

The client-server model is used when building a system based on a DBMS, as well as postal systems. There is also a so-called file-server architecture, which differs significantly from the client-server architecture.

In a file-server system, data is stored on a file server (Novell NetWare or Windows NT Server) and processed on workstations by "desktop DBMSs" such as Access, Paradox, FoxPro, etc.

The DBMS runs on each workstation, and data manipulation is performed by several independent, uncoordinated processes. All data must be transferred from the server over the network to the workstation, which slows down processing.

Client-server technology is implemented by at least two cooperating applications, a client and a server, which divide the work between them. The server is responsible for storing and directly manipulating the data; examples include SQL Server, Oracle, Sybase, and others.

The user interface is formed by the client, which is based on special tools or desktop DBMSs. Logical data processing is performed partly on the client and partly on the server. Requests are sent to the server by the client, usually in SQL. Received requests are processed by the server and the result is returned to the client(s).

In this case, the data is processed in the same place where it is stored - on the server, so a large volume of it is not transmitted over the network.

Advantages of client-server architecture

Client-server technology brings the following qualities to an information system:

  • Reliability

Data modification is carried out by the database server using the transaction mechanism, which gives a set of operations the following properties: 1) atomicity – data integrity is preserved however the transaction completes; 2) isolation – transactions of different users do not interfere with each other; 3) durability – the results of a committed transaction survive failures.

  • Scalability, i.e. the ability of the system to be independent of the number of users and volumes of information without replacing the software used.

Client-server technology supports thousands of users and gigabytes of information with the appropriate hardware platform.

  • Security, i.e. reliable protection of information from unauthorized access.
  • Flexibility. Applications that work with data have distinct logical layers: the user interface; the logical processing rules (business logic); and data management.

As already noted, in file-server technology all three layers are combined into one monolithic application running on a workstation. Any change in any layer requires modifying the application; the client and server versions then diverge, and the new version has to be rolled out to every workstation.

In a two-tier application, client-server technology places all user-interface functions on the client and all database-management functions on the server; business rules can be implemented on either the server or the client.

A three-tier application allows for a middle tier that implements business rules, which are the most changeable components.

Several levels allow you to flexibly and cost-effectively adapt your existing application to constantly changing business requirements.