Requirements for information security software and hardware are formulated in the governing documents of the FSTEC of Russia. FSTEC Order No. 21 of February 18, 2013, "On approval of the composition and content of organizational and technical measures to ensure the security of personal data when processed in personal data information systems," sets out measures to ensure the security of personal data processed in information systems: protection against unauthorized or accidental access to the data and against destruction, modification, blocking, copying, provision, or distribution of personal data, as well as against other unlawful actions in relation to personal data.

The security of personal data when processed in the personal data information system is ensured by the operator or the person processing personal data on behalf of the operator.

Measures to ensure the security of personal data are implemented, inter alia, through the use of information security tools in the information system that have passed the conformity assessment procedure in accordance with the established procedure, in cases where the use of such tools is necessary to neutralize current threats to the security of personal data.

An assessment of the effectiveness of measures implemented within the personal data protection system to ensure the security of personal data is carried out by the operator independently or with the involvement on a contractual basis of legal entities and individual entrepreneurs licensed to carry out activities for the technical protection of confidential information. This assessment is carried out at least once every three years.

The measures to ensure the security of personal data, implemented within the framework of the personal data protection system, taking into account current threats to the security of personal data and the information technologies used, include:

  • identification and authentication of access subjects and access objects;
  • access control of access subjects to access objects;
  • limitation of the software environment;
  • protection of computer storage media on which personal data is stored and (or) processed;
  • security event logging;
  • antivirus protection;
  • intrusion detection (prevention);
  • control (analysis) of the security of personal data;
  • ensuring the integrity of the information system and personal data;
  • ensuring the availability of personal data;
  • protecting the virtualization environment;
  • protection of technical means;
  • protection of the information system, its facilities, communication and data transmission systems;
  • identifying incidents (one event or group of events) that may lead to failures or disruption of the functioning of the information system and (or) the emergence of threats to the security of personal data, and responding to them;
  • management of the configuration of the information system and personal data protection system.

Firewall

Firewall is a set of hardware and (or) software measures that filter network packets passing through it. Its main task is to protect computer networks or individual devices from unauthorized access.

For example, a certified version of the Internet gateway - the Internet Control Server (ICS) firewall - is designed to protect confidential information and personal data. It holds FSTEC certificate No. 2623 dated April 19, 2012. The main characteristics of the firewall are:

  • complies with the requirements of the RD "Computer equipment. Firewalls. Protection from unauthorized access to information. Indicators of security from unauthorized access to information" for the 4th security class;
  • ICS can serve personal data information systems (ISPD) of security classes 4, 3, and 2;
  • can be used as part of personal data information systems up to and including the KZ security class;
  • the ability to restore firewall settings in the event of a hardware failure or malfunction;
  • the ability to verify the authenticity of network addresses by filtering with regard to the inbound and outbound network interface;
  • the ability to filter taking into account any significant fields of network packets;
  • independent filtering of each network packet;
  • integrity control of the information and software components;
  • filtering of service protocol packets used to diagnose and regulate the operation of network devices;
  • identification and authentication of the administrator for local access requests;
  • monitoring of HTTP traffic and enforcement of access policies based on URLs and regular expressions through a certified proxy server;
  • logging system for blocked network traffic;
  • the integrity of the software part of the product is ensured by a checksum-based control system (a sketch of the idea follows this list);
  • if necessary, a certified hardware firewall can be purchased.
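As an illustration of the checksum-based integrity control mentioned in the list above, here is a minimal sketch in Python using only the standard library. The file names and the baseline format are assumptions made for the example, not details of the ICS product.

import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    # Read the file in chunks and return its SHA-256 digest.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(baseline: dict) -> list:
    # Compare current digests against a stored baseline of name -> digest.
    return [name for name, digest in baseline.items()
            if file_digest(Path(name)) != digest]

# Usage: record the digests of the protected files once, then re-run
# changed_files() periodically; any non-empty result signals tampering.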

What is a firewall?

A firewall is a collection of hardware and software that connects two or more networks and at the same time serves as a central point of security management. Firewalls can be implemented in software or as hardware-software systems. Since the beginning of the 21st century, increasing attention has been paid to hardware firewalls: specialized, usually rack-mounted computers running a network operating system adapted to the functions they perform.

Typically, a firewall is installed between an organization's corporate network and the Internet as a way to block the rest of the world from accessing the corporate network. It should be said right away that a firewall cannot protect corporate computers from viruses - special anti-virus programs are used for this purpose.

To meet the needs of a wide range of users, there are three types of firewalls: network-level, application-level, and connection-level. Each type takes a different approach to protecting corporate networks, and choosing among them carefully lets you design a firewall that fits your needs.

A network-level firewall is usually a router or a special computer that examines packet addresses and then decides whether to pass a packet into (or out of) the corporate network or to reject it. Packets, along with other information, contain the IP addresses of the sender and recipient. You could, for example, configure your network-level firewall to block all messages from a particular host. Typically, packets are blocked using a file that contains a set of IP addresses of certain hosts: the firewall (or router) blocks packets that use these addresses as source or destination. Having detected a packet containing such an IP address, the router rejects it, preventing it from entering the corporate network. This kind of blocking of specific nodes is sometimes called blacklisting. Typically, router software lets you blacklist an entire host, but not a specific user.
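A hedged sketch of such blacklist filtering in Python follows; the packet fields and the one-address-per-line file format are assumptions for the example, not the interface of any particular router.

BLACKLIST_FILE = "blocked_hosts.txt"  # hypothetical: one IP address per line

def load_blacklist(path: str) -> set:
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def allow_packet(src_ip: str, dst_ip: str, blacklist: set) -> bool:
    # Reject the packet if either endpoint address is blacklisted.
    return src_ip not in blacklist and dst_ip not in blacklist

blacklist = load_blacklist(BLACKLIST_FILE)
print(allow_packet("203.0.113.7", "192.0.2.10", blacklist))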

The packet arriving at the router may contain an email message, a request for a service such as HTTP (access to a web page) or FTP (sending or receiving a file), or even a telnet request (remote login to a corporate system). The network-level router recognizes each type of request and responds accordingly. For example, you can program the router so that Internet users may browse your organization's web pages but may not use FTP to transfer files to or from the corporate server.

A properly installed and configured network layer firewall will be very fast and transparent to users. Of course, for blacklisted users, the router will live up to its name (firewall) in terms of its effectiveness in stopping unwanted visitors.

Typically, routers come with the appropriate software. To program a router, appropriate rules are entered into a specialized file that tell the router how to process each incoming packet.

An application-level firewall typically uses a host computer running software known as a proxy server. A proxy server is a program that manages traffic between two networks. With an application-level firewall, the corporate network and the Internet are never physically connected: traffic from one network never mixes with traffic from the other, because their cables are kept apart. The job of the proxy server is to pass isolated copies of packets from one network to the other. This type of firewall effectively masks the origin of the connection and protects the corporate network from Internet users attempting to extract information from it.

Proxy servers understand network protocols, so you can configure which services the corporate network provides through them.

When installing an application-level proxy, users must use client programs that support proxy mode.

In this way, application-level firewalls allow you to control the type and volume of traffic entering a host. They provide a strong physical barrier between the corporate network and the Internet and are therefore a good option where increased security is required. However, because the program must analyze packets and make access-control decisions, application-level firewalls can reduce network performance. If you plan to use such a firewall, install the proxy server on the fastest computer available.

A connection-level firewall is similar to an application-level firewall - both are proxy servers. However, a connection-level firewall does not require special client applications that support proxy mode.

A connection-level firewall establishes communication between a client and a server without requiring each application to know anything about the service.

The advantage of a connection-level firewall is that it serves a broad class of protocols, while an application-level firewall requires a dedicated proxy for every kind of service. So with a connection-level firewall you can use HTTP, FTP, or telnet without any special measures or changes to applications - existing software simply works. Another useful feature is that you work with only one proxy server, which is easier than registering and monitoring several.

When creating a firewall, you need to determine what traffic to allow through your corporate network. As noted above, you can choose a router that filters selected packets, or you can use a proxy program running on a host computer in the network. The firewall architecture may also include both configurations. In other words, you can maximize the security of your corporate network by combining a router and a proxy server in one firewall.

There are three most popular types of firewall architecture:

  • dual-homed host firewall;
  • screened host firewall;
  • screened subnet firewall.

The screened host firewall and the screened subnet firewall use a combination of a router and a proxy server.

A dual-homed host firewall is a simple but highly secure configuration in which a single host computer serves as the dividing line between the corporate network and the Internet. The host computer uses two separate network cards, one for each network. When using a dual-homed host firewall, you must disable the computer's routing capability so that it does not directly connect the two networks. One disadvantage of this configuration is that access to the internal network can be opened inadvertently.

A dual-homed host firewall works by running an application-level or connection-level proxy program. As already mentioned, a proxy program controls the transfer of packets from one network to another. Being dual-homed (connected to two networks), the firewall host sees packets on both networks, which lets it run the proxy software and control traffic between the networks.

A screened host firewall provides greater security than a dual-homed host firewall. By adding a router and thereby placing the host computer farther from the Internet, you can create a very effective and easy-to-use firewall. The router connects the Internet to the corporate network and at the same time filters the types of packets passing through it. You can configure the router so that external hosts see only one host computer; corporate network users who want to connect to the Internet must do so through this host computer. Thus internal users have direct access to the Internet, while external users' access is limited to the host computer.

A screened subnet firewall further isolates the corporate network from the Internet by placing an intermediate perimeter network between them. The host computer sits on this perimeter network, which users reach through two separate routers: one controls corporate network traffic, the other controls Internet traffic.

A screened subnet firewall provides extremely effective protection against attack. It isolates the host computer on a separate network, which reduces the likelihood of a successful attack on the host and further reduces the chance of damage to the corporate network.

From all of the above, the following conclusions can be drawn.

  • 1. A firewall can be very simple - a single router, or very complex - a system of routers and well-protected host computers.
  • 2. You can install firewalls within the corporate network to strengthen security measures for individual segments.
  • 3. In addition to ensuring security, it is necessary to detect and prevent the penetration of computer viruses. Firewalls can't do this.

When using firewalls, you should not underestimate the protection capabilities provided by system software. For example, the Febos operating system (OS) implements the following functions:

  • identification and authentication of the user based on a password with subsequent provision of access to information resources in accordance with his authority;
  • control and management of access to information resources in accordance with discretionary and mandatory security policies;
  • registration and auditing of system-wide events, critical situations, successful and unsuccessful identification and authentication attempts, completed and rejected operations of access to information resources, and changes in the security attributes of subjects and objects;
  • local and remote administration, managing user permissions in accordance with the security policy;
  • monitoring the integrity of security measures and system components of the protected Febos OS.

Cryptography

Cryptography, once a strategic technology, has now, thanks to the rapid growth of corporate networks and the Internet, spread into many areas of activity and is used by a large number of users.

Cryptographic technology and data encryption protocols are designed for conditions in which the receiving and transmitting parties cannot be sure that the transmitted information will not be intercepted by a third party. Confidentiality is still ensured: even if the information is intercepted, it cannot be used without decryption.

Let's consider the basic concepts of encryption used to protect data during its transmission in corporate networks, in electronic and digital payment systems on the Internet.

Private key encryption. Encryption, whatever the algorithm, means converting an original message into an encrypted one. This involves creating a secret key - a password of sorts - without which the message cannot be decoded.

Such a key must be secret, otherwise the message will be easily read by unwanted persons.

The best-known and most widely used cryptographic algorithms for private-key data encryption in the USA and Europe are DES, IDEA, and RC2-RC5.
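A minimal sketch of private-key encryption in Python. The ciphers named above are legacy designs; as a modern stand-in, this example uses the AES-based Fernet recipe from the third-party cryptography package, which illustrates the same idea: one shared secret key both encrypts and decrypts.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the secret key; whoever holds it can decrypt
cipher = Fernet(key)

token = cipher.encrypt(b"confidential message")
assert cipher.decrypt(token) == b"confidential message"

# Without the key, the intercepted token is useless to a third party.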

Public key encryption. Encrypting a message with a public key involves creating a pair of keys - a public key and a private key. A message can be encrypted with the public key but decrypted only with the private key. By freely distributing the public key, you make it possible for others to encrypt and send you messages that no one can decrypt except you.

To carry out two-way communication, each party creates its own key pair, and the parties then exchange public keys. Transmitted messages are encrypted with the partner's public key and decrypted with one's own private key.
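A sketch of this exchange for one direction, using RSA-OAEP from the third-party cryptography package (the choice of RSA and the parameter sizes are assumptions made for the illustration):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # distributed freely to partners

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"secret", oaep)            # anyone can encrypt
assert private_key.decrypt(ciphertext, oaep) == b"secret"   # only the key owner can decrypt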

Public key distribution algorithm. Another option for working with public keys is the public key distribution algorithm (the Diffie-Hellman algorithm). It allows the parties to generate a single shared secret key for encrypting data without ever transmitting that key over the communication channel.

This algorithm is also based on the use of a key pair (public/private) and works as follows:

  • both parties create their own key pairs;
  • after that they exchange public keys;
  • from the combination of two keys - one's own private key and the other party's public key - the algorithm produces a secret key that is identical for both parties and unique to them;
  • messages are then encrypted and decrypted with this single shared secret key (a sketch follows this list).
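The sketch below performs this key agreement with the third-party cryptography package; the generator and key size are illustrative choices.

from cryptography.hazmat.primitives.asymmetric import dh

parameters = dh.generate_parameters(generator=2, key_size=2048)
alice = parameters.generate_private_key()   # party A's key pair
bob = parameters.generate_private_key()     # party B's key pair

# Each side combines its own private key with the peer's public key.
shared_a = alice.exchange(bob.public_key())
shared_b = bob.exchange(alice.public_key())

# The secret is identical on both sides, yet it never crossed the channel.
assert shared_a == shared_b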

Digital signature technology. The digital signature was introduced into practice on the basis of the Federal Law of April 6, 2011 No. 63-FZ “On Electronic Signatures”. This law regulates relations in the field of using electronic signatures when making civil transactions, providing state and municipal services, performing state and municipal functions, and when performing other legally significant actions.

Digital signature technology allows you to uniquely determine the owner of the transmitted information. This is necessary in electronic and digital payment systems and is used in e-commerce.

To create a digital signature, a hashing algorithm is used - a special mathematical algorithm that computes a small digest (hash) from a file.

After this, the following actions are carried out:

  • the resulting hash is encrypted with the private key; this encrypted value is the digital signature;
  • the original, unencrypted file is sent to the other party together with the digital signature.

Now the receiving party can verify the authenticity of the received message and of the sending party. This is done as follows (a sketch of the flow follows the list):

  • using the public key, the recipient decrypts the digital signature and recovers the hash;
  • using the same hashing algorithm, the recipient computes its own hash from the received file;
  • the recipient compares the two hashes. If they match, both the transmitting party and the received information are authentic.
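A hedged sketch of this sign-and-verify flow in Python, using RSA-PSS with SHA-256 from the third-party cryptography package (the algorithm choice and the message are assumptions made for the example):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

signer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"payment order"   # illustrative payload

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sender: hash the message and encrypt the digest with the private key.
signature = signer.sign(message, pss, hashes.SHA256())

# Recipient: re-hash the received message and check it against the signature.
try:
    signer.public_key().verify(signature, message, pss, hashes.SHA256())
    print("message and sender are authentic")
except InvalidSignature:
    print("message was altered or the sender is not who they claim")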

Blind signature. This important algorithm, a variety of the digital signature, is used in electronic payment systems.

The algorithm allows messages to be exchanged in such a way that the receiving party cannot read the protected content of the message but can still be quite sure who it is dealing with. For example, an e-commerce customer may not want to hand over a credit card number, while the merchant still needs to know exactly who it is dealing with. The intermediary in such transactions is the bank, which verifies the authenticity of both seller and buyer and then transfers money from the client's account to the seller's account.

The corresponding encryption protocols and application programming interfaces are included in the system software of computer networks.

Protection against computer viruses

Computer viruses and worms are small programs designed to spread from one computer to another and interfere with the computer's operation. They are often distributed as file attachments in emails or instant messages. That is why you should never open an email attachment unless you know who sent it and you are expecting it. Viruses may arrive as funny images or greeting cards. Computer viruses also spread through downloads from the Internet: they may hide in illegal software or in other files or programs you download.

The problem arose long ago and immediately became widespread. In 1988, with the appearance of the Morris worm on the Internet, the more or less targeted development of antivirus tools began.

The term "virus" as applied to computers was coined by Fred Cohen of the University of South Carolina. The word "virus" is of Latin origin and means "poison". Computer virus is a program that tries to secretly write itself onto computer disks. Every time the computer boots from an infected disk, a hidden infection occurs.

Viruses are quite complex and unique programs that perform actions that are not authorized by the user.

Most viruses operate by modifying the system files of the user's computer so that the virus starts its activity either at every boot or at the moment some "triggering event" occurs.

Many technical innovations appear in modern computer viruses, but most viruses imitate or modify a few classical schemes.

Viruses can be classified by type of behavior as follows.

Boot viruses penetrate the boot sectors of storage devices (hard drives, floppy disks, portable storage devices). When the operating system is loaded from an infected disk, the virus is activated. Its actions may consist of disrupting the operating system loader, which makes it impossible to work, or changing the file table, which makes certain files inaccessible.

File viruses most often embed themselves in the executable modules of programs (the files used to launch a particular program), which lets them activate at the moment the program starts and affect its functionality. Less commonly, file viruses embed themselves in operating system or application libraries, batch files, Windows registry files, script files, and driver files. Injection is carried out either by changing the code of the attacked file or by creating a modified copy of it. The virus, residing in a file, is activated when the file is accessed, whether by the user or by the OS itself. File viruses are the most common type of computer virus.

File boot viruses combine the capabilities of the two previous groups, which allows them to pose a serious threat to the operation of the computer.

Network viruses are distributed through network services and protocols, such as mail delivery, file access via FTP, and local network file services. This makes them very dangerous: the infection does not stay within one computer or even one local network but spreads through various communication channels.

Document viruses (often called macro viruses) infect the files of modern office suites (Microsoft Office, OpenOffice, ...) through those suites' support for macros. A macro is a predefined set of actions - a small program - embedded in a document and invoked directly from it to modify the document or perform other functions. The macro is precisely what macro viruses target.

The best way to protect your system from viruses is to regularly use antivirus programs designed to scan system memory and files, and scan for virus signatures. A virus signature is some unique characteristic of a virus program that indicates the presence of a virus in a computer system. Typically, antivirus programs include a periodically updated database of virus signatures.

When executed, the antivirus program scans the computer system for signatures similar to those in the database.
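A toy illustration of such signature matching in Python. The "database" below contains a single truncated pattern based on the harmless EICAR test string; real products use far richer signature formats plus heuristics.

SIGNATURES = {
    "EICAR-Test-File": b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR",  # truncated test pattern
}

def scan(path: str) -> list:
    # Report the names of all signatures found in the file's bytes.
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]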

Most good antivirus software not only looks for signatures in a database, but uses other methods as well. Such antivirus programs can detect a new virus even when it has not yet been specifically identified.

However, most viruses are neutralized by searching for a match with the database. When the program finds such a match, it will try to clean the detected virus. It is important to constantly update the existing database of virus signatures. Most antivirus software providers distribute update files over the Internet.

There are three main methods for searching for viruses using antivirus programs.

In the first method, the virus scan runs at system startup: the command to launch the antivirus program is included in the AUTOEXEC.BAT file.

This method is undoubtedly effective, but it increases the computer's boot time, and many users may find it too cumbersome. Its advantage is that scanning happens automatically at every boot.

The second method is for the user to manually scan the system using an antivirus program. This method can be just as effective as the first if it is done conscientiously, just like backing up. The downside to this method is that it can take weeks, even months, before a careless user gets around to checking.

The third method is to scan each file as it is downloaded, which removes the need to scan the entire system frequently.

However, it should be borne in mind that sometimes there are viruses that are difficult to identify, either because they are new or because there is a long period of time before they become active (viruses have an incubation period and hide for some time before becoming active and spreading to other drives and systems).

Therefore, you should pay attention to the following.

  • 1. File size changes. File viruses almost always change the size of the infected files, so if you notice that the size of any file, especially COM or EXE, has grown by several kilobytes, you should immediately scan your hard drives with an antivirus program.
  • 2. Unexplained changes in available memory. To spread effectively, a virus must reside in memory, which inevitably reduces the amount of random access memory (RAM) available to run programs. So if you haven't done anything to change the amount of available memory, but you notice it's decreasing, you should also run an antivirus program.
  • 3. Unusual behavior. When a virus, like any new program, is loaded onto a computer system, some change in its behavior occurs. This could be either an unexpected change in the reboot time, a change in the reboot process itself, or unusual messages appearing on the screen. All these symptoms indicate that you should immediately run an antivirus program.

If you find any of the above symptoms on your computer and the antivirus program is unable to detect an infection, the antivirus program itself may be at fault: it may be outdated (lacking new virus signatures) or may itself be infected. Therefore you should run a reliable, up-to-date antivirus program.

Source: avdesk.kiev.ua/virus/83-virus.html

Security software is the most common method of protecting information on computers and in information networks. It is usually applied where other methods and means are difficult to use. User authentication is usually performed by the operating system: the user is identified by name, and a password serves as the means of authentication.

Security software is a complex of special-purpose and general-purpose algorithms and programs supporting the operation of computers and information networks. It aims to control and limit access to information, exclude unauthorized actions with it, manage security devices, and so on. Software protection tools are universal, easy to implement, flexible, adaptable, and configurable.

Software tools are widely used to protect against computer viruses. For prevention and "disinfection", antivirus programs are used, along with diagnostic and preventive tools that keep viruses out of the computer system, treat infected files and disks, and detect and prevent suspicious activity. Antivirus programs are rated on their accuracy in detecting and effectively eliminating viruses, ease of use, cost, and ability to work online.

The most popular programs are those designed to prevent infection and to detect and destroy viruses. Among them are the Russian anti-virus programs DrWeb (Doctor Web) by I. Danilov and AVP (Antiviral Toolkit Pro) by E. Kaspersky. They have a user-friendly interface, tools for scanning programs, checking the system at boot, and so on. Foreign anti-virus programs are also used in Russia.

There are no absolutely reliable programs that guarantee the detection and destruction of any virus. Only multi-level defense can provide the most complete protection against viruses. An important element of protection against computer viruses is prevention. Anti-virus programs are used simultaneously with regular data backup and preventive measures. Together, these measures can significantly reduce the likelihood of contracting the virus.



The main measures to prevent viruses are:

1) use of licensed software;

2) regular use of several constantly updated anti-virus programs to scan not only your own storage media (when transferring third-party files to them) but also any "foreign" floppy disks and disks with any information on them, including reformatted ones;

3) the use of various protective tools when working on a computer in any information environment (for example, on the Internet), and checking files received over the network for viruses;

4) periodic backup of the most valuable data and programs.

The most common sources of infection are computer games acquired "unofficially" and unlicensed programs. A reliable guarantee against viruses is therefore user care in choosing and installing programs, as well as during Internet sessions. The likelihood of infection from sources other than a computer network can be reduced to almost zero by using only licensed, legal products and never allowing unknown programs, especially games, onto your computer. The most effective measure here is access control, which prevents viruses and defective programs from harming data even if a virus does penetrate the computer.

One of the most well-known ways to protect information is its coding (encryption, cryptography). It does not save you from physical influences, but in other cases it serves as a reliable remedy.

A code is characterized by its length (the number of characters used in coding) and its structure (the arrangement of the symbols that denote a classification attribute).

The coding tool is a correspondence table. An example of such a table for converting alphanumeric information into computer codes is the ASCII table.

The first encryption standard appeared in 1977 in the USA. The main criterion for the strength of any cipher or code is the computing power available and the time required to break it. If that time runs to several years, the durability of the algorithm is sufficient for most organizations and individuals. To protect information, cryptographic methods of encryption are increasingly used.

Cryptographic methods of information protection

Cryptographic methods have existed for a long time. Cryptography is considered a powerful means of ensuring confidentiality and monitoring the integrity of information, and there is as yet no alternative to it.

The strength of a cryptoalgorithm depends on the complexity of its conversion methods. The State Technical Commission of the Russian Federation oversees the development, sale, and use of data encryption tools and the certification of data protection means.

If keys of 256 bits or more are used, the level of protection corresponds to tens or hundreds of years of supercomputer operation. For commercial use, 40- and 44-bit keys are considered sufficient.

One of the important problems of information security is the protection of electronic data and electronic documents. To encode them, in order to meet security requirements against unauthorized influence, an electronic digital signature (EDS) is used.

Electronic signature

A digital signature is a sequence of characters that depends both on the message itself and on a secret key known only to the signer of the message.

The first domestic digital signature standard appeared in 1994. The Federal Agency for Information Technologies (FAIT) deals with the use of digital signatures in Russia.

Highly qualified specialists implement all necessary measures to protect people, premises, and data; they staff the relevant departments, serve as deputy heads of organizations, and so on.

There are also technical means of protection.

Technical means of protection

Technical means of protection are used in various situations; they are part of physical means of protection and software and hardware systems, complexes and access devices, video surveillance, alarms and other types of protection.

In the simplest situations, to protect personal computers from unauthorized startup and use of the data on them, it is proposed to install devices that restrict access to them, as well as work with removable hard magnetic and magneto-optical disks, self-booting CDs, flash memory, etc.

To protect objects in order to protect people, buildings, premises, material and technical means and information from unauthorized influences on them, active security systems and measures are widely used. It is generally accepted to use access control systems (ACS) to protect objects. Such systems are usually automated systems and complexes formed on the basis of software and hardware.

In most cases, to protect information and limit unauthorized access to it, to buildings, premises and other objects, it is necessary to simultaneously use software and hardware, systems and devices.

Software tools are objective forms of representing sets of data and commands intended for the operation of computers and computer devices in order to obtain a certain result, together with the materials prepared and recorded on physical media in the course of their development and the audiovisual displays they generate.

Data protection tools that operate as part of software are called software. Among them, the following can be highlighted and considered in more detail:

· data archiving tools;

· antivirus programs;

· cryptographic means;

· means of identification and authentication of users;

· access control tools;

· logging and auditing.

Examples of combinations of the above measures include:

· database protection;

· protection of operating systems;

· protection of information when working in computer networks.

3.1 Information archiving tools

Backup copies of information sometimes have to be made with generally limited data storage resources, as is the case for owners of personal computers. In these cases, software archiving is used. Archiving is the merging of several files, or even directories, into a single file - an archive - while reducing the total volume of the source files by eliminating redundancy, but without loss of information, i.e. with the ability to restore the source files exactly. Most archiving tools are based on compression algorithms proposed in the late 1970s by Abraham Lempel and Jacob Ziv. The best-known and most popular archive formats are:

· ZIP, ARJ for DOS and Windows operating systems;

· TAR for the Unix operating system;

· cross-platform JAR format (Java ARchive);

· RAR (the popularity of this format is growing all the time, as programs have been developed that allow it to be used in the DOS, Windows and Unix operating systems).

The user need only choose a suitable program that works with the chosen format by assessing its characteristics: speed, compression ratio, compatibility with a large number of formats, user-friendliness of the interface, operating system support, and so on. The list of such programs is very long: PKZIP, PKUNZIP, ARJ, RAR, WinZip, WinArj, ZipMagic, WinRar, and many others. Most of these programs need not be purchased specially, since they are offered as shareware or freeware. It is also very important to set a regular schedule for this archiving work, or to perform it after each major data update.
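A minimal archiving sketch using Python's standard zipfile module; the DEFLATE compression it applies is itself a descendant of the Lempel-Ziv algorithms. The file names are illustrative.

import zipfile

files_to_archive = ["report.doc", "data.csv"]   # hypothetical names

with zipfile.ZipFile("backup.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for name in files_to_archive:
        zf.write(name)            # merge and compress, losing no information

with zipfile.ZipFile("backup.zip") as zf:
    zf.extractall("restored")     # exact restoration of the source files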

3.2 Antivirus programs

These are programs designed to protect information from viruses. Inexperienced users usually believe that a computer virus is a specially written small program that can "attach" itself to other programs (i.e., "infect" them) and perform various unwanted actions on the computer. Computer virology specialists have determined that a mandatory (necessary) property of a computer virus is the ability to create duplicates of itself (not necessarily identical to the original) and introduce them into computer networks and/or files, system areas of the computer, and other executable objects, where the duplicates retain the ability to spread further. This condition is necessary but not sufficient, which is why there is still no exact definition of a virus, and one is unlikely to appear in the foreseeable future. Consequently, there is no precisely defined rule by which "good" files can be distinguished from "viruses"; sometimes it is quite difficult to determine even for a specific file whether it is a virus or not.

Computer viruses pose a particular problem. They are a separate class of programs aimed at disrupting the system and damaging data. There are many varieties: some reside permanently in the computer's memory, while others act destructively in one-off "strikes".

There is also a whole class of programs that look quite respectable on the outside but actually damage the system. Such programs are called "Trojan horses". One of the main properties of computer viruses is the ability to "reproduce", i.e., to spread themselves within a computer and across a computer network.

Since various office applications became able to run programs written specially for them (for example, Visual Basic applications can be written for Microsoft Office), a new type of malware appeared: macro viruses. Viruses of this type are distributed along with ordinary document files and are contained within them as ordinary macro routines.

Taking into account the powerful development of communication tools and the sharply increased volumes of data exchange, the problem of virus protection becomes very urgent. Practically, with every document received, for example, by e-mail, a macro virus can be received, and every running program can (theoretically) infect the computer and make the system inoperable.

Therefore, among security systems, the most important area is the fight against viruses. There are a number of tools specifically designed to solve this problem. Some of them run in scanning mode and scan the contents of the computer's hard drives and RAM for viruses. Some must be constantly running and located in the computer's memory. At the same time, they try to monitor all ongoing tasks.

In the Kazakh software market, the AVP package developed by the Kaspersky Anti-Virus Systems Laboratory has gained the greatest popularity. This is a universal product that has versions for a wide variety of operating systems. There are also the following types: Acronis AntiVirus, AhnLab Internet Security, AOL Virus Protection, ArcaVir, Ashampoo AntiMalware, Avast!, Avira AntiVir, A-square anti-malware, BitDefender, CA Antivirus, Clam Antivirus, Command Anti-Malware, Comodo Antivirus, Dr.Web, eScan Antivirus, F-Secure Anti-Virus, G-DATA Antivirus, Graugon Antivirus, IKARUS virus.utilities, Kaspersky Anti-Virus, McAfee VirusScan, Microsoft Security Essentials, Moon Secure AV, Multicore antivirus, NOD32, Norman Virus Control, Norton AntiVirus, Outpost Antivirus, Panda, etc.

Methods for detecting and removing computer viruses.

Methods to counteract computer viruses can be divided into several groups:

· prevention of virus infection and reduction of the expected damage from such infection;

· use of anti-virus programs, including the neutralization and removal of known viruses;

· detection and removal of unknown viruses.

These are considered below under three headings: preventing computer infection, restoring affected objects, and antivirus programs.

Preventing computer infection.

One of the main methods of combating viruses is, as in medicine, timely prevention. Computer prevention involves following a small number of rules, which can significantly reduce the likelihood of getting a virus and losing any data.

In order to determine the basic rules of computer hygiene, it is necessary to find out the main ways a virus penetrates a computer and computer networks.

The main source of viruses today is the Internet. The largest number of infections occurs through the exchange of letters with Word-format attachments: the user of an editor infected with a macro virus unknowingly sends infected letters to recipients, who in turn send further infected letters, and so on. The conclusion: avoid contact with suspicious sources of information and use only legitimate (licensed) software products.

Restoring affected objects

In most cases of virus infection, restoring infected files and disks comes down to running a suitable antivirus that can disinfect the system. If the virus is unknown to every antivirus, it is usually enough to send the infected file to the antivirus vendors and, after some time (usually several days or weeks), receive an update that cures it. If time presses, the virus must be neutralized by hand. This is why most users need backups of their information.

The main conditions that enable the mass spread of a virus on computers are:

· weak security of the operating system (OS);

· availability of varied and fairly complete documentation on the OS and hardware used by virus authors;

· widespread distribution of this OS and this hardware.

Information security software means special programs included in the software of a computer system (CS) exclusively to perform protective functions.

The main software tools for information security include:

  • * identification and authentication programs for CS users;
  • * programs for restricting user access to CS resources;
  • * information encryption programs;
  • * programs for protecting information resources (system and application software, databases, computer training tools, etc.) from unauthorized modification, use and copying.

It must be understood that identification, in the context of computer system security, means the unambiguous recognition of the unique name of a subject of the computer system. Authentication means confirming that the presented name corresponds to the given subject (confirming the subject's authenticity).

Information security software also includes:

  • * programs for destroying residual information (in blocks of RAM, temporary files, etc.);
  • * audit programs (maintaining logs) of events related to the safety of the CS to ensure the possibility of recovery and proof of the fact of the occurrence of these events;
  • * programs for simulating work with a violator (distracting him to obtain supposedly confidential information);
  • * test control programs for CS security, etc.

The advantages of information security software include:

  • * ease of replication;
  • * flexibility (the ability to customize for various application conditions, taking into account the specifics of threats to the information security of specific CS);
  • * ease of use - some software tools, for example encryption, operate in a “transparent” (invisible to the user) mode, while others do not require any new (compared to other programs) skills from the user;
  • * virtually unlimited possibilities for their development by making changes to take into account new threats to information security.

Fig. 4

Fig. 5

The disadvantages of information security software include:

  • * reducing the effectiveness of the CS due to the consumption of its resources required for the functioning of protection programs;
  • * lower performance (compared to hardware security tools that perform similar functions, such as encryption);
  • * many software protection tools are attached to, rather than built into, the CS software (Fig. 4 and 5), which creates a fundamental possibility for an intruder to bypass them;
  • * the possibility of malicious changes in software protection during the operation of the CS.

Security at the operating system level

The operating system is the most important software component of any computer, therefore the overall security of the information system largely depends on the level of implementation of the security policy in each specific OS.

The Windows 2000 and Millennium family of operating systems are clones initially aimed at home computers. These systems use protected-mode privilege levels but perform no additional checks and do not support security descriptor systems. As a result, any application can access the entire volume of available RAM with both read and write rights. Network security measures are present, but their implementation falls short. Moreover, one version of Windows XP contained a fundamental error that allowed a remote attacker to freeze the computer with just a few packets, which significantly undermined the reputation of the OS; in subsequent versions many steps were taken to improve the network security of this clone.

The Windows Vista and 7 generation of operating systems is a much more reliable development by Microsoft. They are truly multi-user systems that reliably protect the files of different users on the hard drive (although the data is not encrypted, and the files can be read without difficulty by booting another operating system from disk - for example, MS-DOS). These systems actively use the protected-mode capabilities of Intel processors and can reliably protect data and process code from other programs, unless the process itself chooses to grant additional access from outside.

Over the long period of development, many different network attacks and security errors were taken into account. Corrections for them were released in the form of service packs.

Another branch of clones grows from the UNIX operating system. UNIX was developed from the start as a network, multi-user OS and therefore contained information security tools from the beginning. Almost all widespread UNIX clones have gone through a long development process and, as they were modified, incorporated defenses against all the attack methods discovered along the way. They have proven themselves well: LINUX (S.U.S.E.), OpenBSD, FreeBSD, Sun Solaris. Naturally, all of the above applies to the latest versions of these operating systems. The main errors in these systems no longer concern the kernel, which works flawlessly, but system and application utilities; errors in them often cost the system its entire safety margin.

The main components of the security subsystem are:

· Local Security Authority - responsible for countering unauthorized access; checks the user's permission to log in to the system;

· Audit - verification of the correctness of user actions;

· Account Manager - maintains the database of users, their actions, and their interactions with the system;

· Security monitor - checks whether a user has sufficient access rights to an object;

· Audit log - contains information about user logins and records work with files and folders;

· Authentication package - analyzes system files to make sure they have not been replaced (MSV1_0 is the default package).

Windows XP added:

· the ability to assign passwords to backup copies;

· file replacement protection tools;

· a system of access delimitation ... by entering a password and creating user accounts (archiving can be carried out by a user who holds such rights);

· NTFS access control for files and folders.

In XP and 2000 the differentiation of user access rights is fuller and deeper. EFS provides encryption and decryption of information (files and folders) to restrict access to data.

Cryptographic protection methods

Cryptography is the science of ensuring data security. It seeks solutions to four important security problems: confidentiality, authentication, integrity, and participant control. Encryption is the transformation of data into an unreadable form using encryption/decryption keys. Encryption ensures confidentiality by keeping information secret from those for whom it is not intended.

Cryptography deals with the search for and study of mathematical methods for transforming information.

Modern cryptography includes four major sections:

· symmetric cryptosystems;

· public key cryptosystems;

· electronic signature systems;

· key management.

The main areas of use of cryptographic methods are the transfer of confidential information through communication channels (for example, e-mail), establishing the authenticity of transmitted messages, storing information (documents, databases) on media in encrypted form.

Disk encryption

An encrypted disk is a container file that can hold any other files or programs (which can be installed and launched directly from this encrypted file). The disk becomes accessible only after the password for the container file is entered; a new disk then appears on the computer, recognized by the system as a logical drive, and working with it is no different from working with any other disk. After the disk is disconnected, the logical drive disappears - it simply becomes "invisible".

Today, the most common programs for creating encrypted disks are DriveCrypt, BestCrypt and PGPdisk. Each of them is reliably protected from remote hacking.

Common features of the programs:

  • - all changes to the information in the container file occur first in RAM, i.e. the hard drive always remains encrypted. Even if the computer freezes, the secret data remain encrypted;
  • - the programs can lock a hidden logical drive after a specified period of time;
  • - they are all wary of temporary files (swap files). It is possible to encrypt all confidential information that could end up in the swap file. A very effective method of hiding information stored in the swap file is to disable it altogether, while remembering to increase the computer's RAM;
  • - the physics of a hard drive is such that even when other data is written over old data, the previous record is not completely erased; with modern magnetic force microscopy (MFM) it can still be recovered. These programs can securely delete files from the hard drive without leaving any trace of their existence;
  • - all three programs store confidential data on the hard drive in securely encrypted form and provide transparent access to the data from any application program;
  • - they protect encrypted container files from accidental deletion;
  • - they cope well with Trojan applications and viruses.

User identification methods

Before gaining access to a computer, the user must identify himself, and the network security mechanisms then authenticate him, i.e., check whether he is who he claims to be. According to the logical model of the protection mechanism, the protection tools reside on the working computer to which the user connects through a terminal or in some other way, so identification, authentication, and authorization are performed at the start of a session on the local desktop computer.

Later, as various network protocols are set up and before access to network resources is granted, identification, authentication, and authorization may be invoked again on the remote hosts that hold the required resources or network services.

When a user starts working on a computing system using a terminal, the system asks for his name and identification number. In accordance with the user's answers, the computer system identifies him. In a network, it is more natural for objects establishing mutual communication to identify each other.

Passwords are just one way to verify authenticity. There are other ways:

  • 1. Predefined information at the user's disposal: password, personal identification number, agreement on the use of special encoded phrases.
  • 2. Hardware elements at the user’s disposal: keys, magnetic cards, microcircuits, etc.
  • 3. Characteristic personal characteristics of the user: fingerprints, retinal pattern, figure size, voice timbre and other more complex medical and biochemical properties.
  • 4. Characteristic techniques and features of user behavior in real time: typing dynamics and style, reading speed, skill with pointing devices, etc.
  • 5. Habits: using specific computer routines.
  • 6. User skills and knowledge due to education, culture, training, background, upbringing, habits, etc.

If someone wishes to enter a computing system through a terminal or to run a batch job, the system must authenticate the user. The user himself, as a rule, does not verify the authenticity of the computing system. If the authentication procedure is one-sided, it is called one-way object authentication.
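A sketch of the password variant of this check using only Python's standard library; the salt size and iteration count are illustrative. The system stores a salted one-way hash rather than the password itself:

import hashlib, hmac, os

def enroll(password: str):
    # Performed once: the system keeps (salt, digest), never the password.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password: str, salt: bytes, digest: bytes) -> bool:
    # Performed at login: hash the presented password the same way and compare.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)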

Specialized information security software.

Specialized software tools for protecting information from unauthorized access generally have better capabilities and characteristics than built-in network OS tools. In addition to encryption programs, there are many other external information security tools available. Of the most frequently mentioned, the following two systems should be noted that allow limiting information flows.

Firewalls (literally "fire wall"). Special intermediate servers are placed between the local and global networks, inspecting and filtering all network/transport-level traffic that passes through them. This sharply reduces the threat of unauthorized access to corporate networks from outside, but does not eliminate the danger completely. A more secure variant of the method is masquerading, in which all traffic leaving the local network is sent on behalf of the firewall server, making the local network practically invisible.

Proxy servers (proxy: an authorized agent, a trusted person). All network/transport-layer traffic between the local and global networks is completely prohibited - there is simply no routing as such - and calls from the local network to the global network pass through special intermediary servers. With this method, access from the global network to the local one becomes impossible in principle. It is also clear that the method gives insufficient protection against attacks at higher levels - for example, at the application level (viruses, Java and JavaScript code).

Let's take a closer look at how the firewall works. This is a method of protecting a network from security threats posed by other systems and networks by centralizing access to the network and controlling it through hardware and software. A firewall is a protective barrier made up of several components (for example, a router or gateway that runs the firewall software). The firewall is configured in accordance with the organization's internal network access control policy. All incoming and outgoing packets must pass through the firewall, which allows only authorized packets to pass through.

A packet filtering firewall is a router or computer running software configured to reject certain types of incoming and outgoing packets. Packet filtering is carried out based on the information contained in the TCP and IP headers of packets (sender and recipient addresses, their port numbers, etc.).
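A hedged sketch of such header-based filtering follows; the rule format is invented for the example, and only the destination port is examined, whereas real filters also consider addresses, flags, and direction.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str                # "allow" or "deny"
    dst_port: Optional[int]    # None matches any port

RULES = [Rule("allow", 80), Rule("allow", 443), Rule("deny", None)]

def decide(dst_port: int) -> str:
    # First matching rule wins; the final catch-all denies everything else.
    for rule in RULES:
        if rule.dst_port is None or rule.dst_port == dst_port:
            return rule.action
    return "deny"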

An expert-level firewall checks the contents of received packets at three levels of the OSI model: network, session, and application. To do this, it uses special packet-filtering algorithms that compare each packet against a known pattern of authorized packets.

Creating a firewall is an instance of the shielding problem, whose formal statement is as follows. Let there be two sets of information systems. A screen is a means of delimiting the access of clients from one set to servers from the other set. The screen performs its functions by controlling all information flows between the two sets of systems (Fig. 6). Flow control consists of filtering the flows, possibly with some transformations.

At the next level of detail, it is convenient to picture the screen (a semi-permeable membrane) as a series of filters. Each filter, having analyzed the data, may hold it back (refuse to pass it on) or may immediately "throw" it across to the other side. In addition, a filter may transform the data, pass a portion of it to the next filter for further analysis, or process the data on behalf of the recipient and return the result to the sender (Fig. 7).


Fig. 7
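The membrane model can be sketched as a pipeline in which every filter returns one of the actions just listed. The action names, filter functions, and data representation are assumptions made purely for illustration.

```python
# Sketch of a screen as a chain of filters. Each filter may:
#  - "drop" the data (delay / refuse to pass it),
#  - "pass" it through the screen immediately,
#  - "transform" it and hand it to the next filter,
#  - "answer" on behalf of the recipient (result returned to sender).

def normalize(data):
    # transform: canonicalize, then let the next filter continue analysis
    return ("transform", data.strip().lower())

def block_keywords(data):
    return ("drop", None) if "secret" in data else ("transform", data)

def cache_responder(data):
    cached = {"ping": "pong"}  # answers on behalf of the real recipient
    if data in cached:
        return ("answer", cached[data])
    return ("pass", data)

def run_screen(data, filters=(normalize, block_keywords, cache_responder)):
    for f in filters:
        action, data = f(data)
        if action == "drop":
            return ("dropped", None)
        if action in ("pass", "answer"):
            return (action, data)
        # "transform": continue with the next filter
    return ("pass", data)

print(run_screen("  PING "))        # ('answer', 'pong'): served by the screen
print(run_screen("secret report"))  # ('dropped', None)
print(run_screen("hello"))          # ('pass', 'hello'): forwarded onward
```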

In addition to performing access control, screens also log information exchanges.

Usually the screen is not symmetrical; the concepts of “inside” and “outside” are defined for it. The shielding task is then formulated as protecting the internal area from a potentially hostile external one. Thus, firewalls are most often installed to protect the corporate network of an organization that has access to the Internet.

Shielding helps maintain the availability of services in the internal area by reducing or eliminating the load caused by external activity. The vulnerability of internal security services is also reduced, since an attacker must first overcome the screen, where the protective mechanisms are configured especially carefully. In addition, a shielding system, unlike a universal one, can be designed in a simpler and therefore safer way.

Shielding also makes it possible to control information flows directed to the external area, which helps maintain the confidentiality regime in the organization's information system.

Shielding can be partial, protecting certain information services (for example, email shielding).

A limiting interface can also be thought of as a kind of shielding. An invisible target is difficult to attack, especially with a fixed set of weapons. In this sense the Web interface has natural security, especially when hypertext documents are generated dynamically: each user sees only what he is supposed to see. An analogy can be drawn between dynamically generated hypertext documents and views in relational databases, with the significant caveat that in the case of the Web the possibilities are much wider.

The screening role of a Web service is clearly manifested when this service performs intermediary (more precisely, integrating) functions when accessing other resources, for example, database tables. This not only controls the flow of requests, but also hides the real organization of the data.
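As a sketch of this mediating role, the following hypothetical handler generates a per-role “view” of an internal table: the flow of requests is controlled, and the real organization of the data stays hidden. The table, the roles, and the field lists are invented for illustration.

```python
# Sketch: a Web layer acting as a screen over a data store. Each user
# sees only a dynamically generated view; the underlying table layout
# is never exposed to the client.
EMPLOYEES = [  # hypothetical internal table
    {"id": 1, "name": "Ivanov", "dept": "sales", "salary": 90000},
    {"id": 2, "name": "Petrov", "dept": "it",    "salary": 120000},
]

VISIBLE_FIELDS = {          # what each role is supposed to see
    "staff":   ("name", "dept"),
    "manager": ("name", "dept", "salary"),
}

def render_view(role):
    fields = VISIBLE_FIELDS.get(role, ())
    # In a real system this would be rendered as a hypertext document.
    return [{f: row[f] for f in fields} for row in EMPLOYEES]

print(render_view("staff"))    # no salary column is ever emitted
print(render_view("manager"))  # a wider, dynamically generated view
```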

Architectural Security Aspects

It is not possible to combat the threats inherent in the network environment using universal operating systems. A universal OS is a huge program that, besides obvious errors, most likely contains features that can be exploited to gain privileges illegally. Modern programming technology does not make it possible to make such large programs safe. In addition, an administrator dealing with a complex system cannot always take into account all the consequences of the changes made. Finally, in a universal multi-user system, security holes are constantly created by the users themselves (weak and/or rarely changed passwords, poorly set access rights, an unattended terminal, etc.). The only promising path is the development of specialized security services which, due to their simplicity, admit formal or informal verification. A firewall is just such a tool, and it allows further decomposition associated with servicing the various network protocols.

The firewall is located between the protected (internal) network and the external environment (external networks or other segments of the corporate network). In the first case we speak of an external firewall, in the second of an internal one. Depending on the point of view, an external firewall can be considered the first or the last (but not the only) line of defense: the first, if you look at the world through the eyes of an external attacker; the last, if you strive to protect all components of the corporate network and suppress illegal actions by internal users.

The firewall is an ideal place to embed active auditing capabilities. On the one hand, at both the first and the last defensive line, detecting suspicious activity is important in its own way. On the other hand, the firewall is capable of implementing an arbitrarily powerful reaction to suspicious activity, up to and including breaking the connection with the external environment. However, one should be aware that coupling two security services may, in principle, create a gap that facilitates availability attacks.

It is advisable to entrust the firewall with the identification/authentication of external users who need access to corporate resources (supporting the concept of single sign-on to the network).

In accordance with the principle of defense in depth, two-part shielding is typically used to protect external connections (see Fig. 8). Primary filtering (for example, blocking packets of the SNMP management protocol, which is dangerous because of availability attacks, or packets whose IP addresses are on a “black list”) is carried out by the border router (see also the next section). Behind it lies the so-called demilitarized zone (a network with moderate security trust, where the organization's externally visible information services, such as Web and email, are located) and the main firewall protecting the internal part of the corporate network.

Theoretically, a firewall (especially an internal one) should be multi-protocol, but in practice the dominance of the TCP/IP protocol family is so great that supporting other protocols seems like overkill that is detrimental to security (the more complex a service, the more vulnerable it is).


Fig. 8

Generally speaking, both external and internal firewalls can become a bottleneck as the volume of network traffic tends to grow rapidly. One approach to solving this problem involves dividing the firewall into several hardware parts and organizing specialized intermediary servers. The primary firewall can roughly classify incoming traffic by type and delegate filtering to appropriate intermediaries (for example, an intermediary that analyzes HTTP traffic). Outgoing traffic is first processed by an intermediary server, which can also perform functionally useful actions, such as caching pages of external Web servers, which reduces the load on the network in general and the main firewall in particular.
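The caching intermediary for outgoing traffic can be sketched as follows, using only the standard library. A real deployment would be a full HTTP proxy honoring expiry headers; the in-memory dictionary cache here is a deliberate simplification.

```python
# Sketch of an outbound intermediary that caches fetched pages, reducing
# load on the network in general and on the main firewall in particular.
import urllib.request

_cache = {}  # url -> page body (a real proxy would also track freshness)

def fetch(url):
    if url in _cache:
        return _cache[url]          # served locally, no outbound traffic
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    _cache[url] = body
    return body

# A second call for the same URL never leaves the local network:
# page = fetch("http://example.com/")
# page_again = fetch("http://example.com/")  # cache hit
```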

Situations where a corporate network contains only one external channel are the exception rather than the rule. On the contrary, a typical situation is when a corporate network consists of several geographically dispersed segments, each of which is connected to the Internet. In this case, each connection must be protected by its own shield. More precisely, we can consider that the corporate external firewall is composite, and it is necessary to solve the problem of consistent administration (management and auditing) of all components.

The opposite of composite corporate firewalls (or their components) are personal firewalls and personal shielding devices. The former are software products installed on personal computers that protect only those computers. The latter are implemented as separate devices and protect a small local network, such as a home office network.

When deploying firewalls, the principles of architectural security discussed earlier should be followed, above all simplicity and manageability, defense in depth, and the impossibility of transition into an insecure state. In addition, not only external but also internal threats should be taken into account.

Archiving and duplication systems

Organizing a reliable and efficient data archiving system is one of the most important tasks in ensuring the safety of information on the network. In small networks where one or two servers are installed, the most common method is to install an archiving system directly into the free slots of the servers. In large corporate networks, it is most preferable to organize a dedicated specialized archiving server.

Such a server automatically archives information from the hard drives of servers and workstations at a time specified by the administrator of the local computer network, issuing a report on the backup.

Storage of particularly valuable archival information must be organized in a specially secured room. Experts recommend storing duplicate archives of the most valuable data in another building in case of fire or natural disaster. To ensure data recovery after magnetic disk failures, disk array systems have recently been used most often: groups of disks operating as a single device in accordance with the RAID (Redundant Arrays of Inexpensive Disks) standard. Such arrays provide high read/write speed, the ability to fully restore data, and replacement of failed disks in “hot” mode (without disconnecting the remaining disks of the array).

The organization of disk arrays provides for various technical solutions implemented at several levels:

RAID Level 0 simply divides the data stream between two or more drives. The advantage of this solution is that the I/O speed increases in proportion to the number of disks used in the array.

RAID level 1 consists of organizing so-called “mirror” disks. During data recording, the information on the main disk of the system is duplicated on the mirror disk, and if the main disk fails, the “mirror” disk immediately comes into operation.

RAID levels 2 and 3 provide for the creation of parallel disk arrays, when written to which data is distributed across disks at the bit level.

RAID levels 4 and 5 are modifications of level zero in which the data stream is distributed across the array's disks. The difference is that at level 4 a dedicated disk is allocated for storing redundant (parity) information, while at level 5 the redundant information is distributed across all disks of the array; a short parity sketch follows this list.
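The redundancy used by levels 4 and 5 can be shown with a short worked example: parity is the bytewise XOR of the data blocks, so the contents of any single lost disk can be reconstructed by XOR-ing the survivors. The three-disk striping here is purely illustrative.

```python
# Sketch of RAID 4/5-style redundancy: parity = XOR of the data blocks,
# so the block stored on any one failed disk can be rebuilt.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks striped across three disks
parity = xor_blocks(data)            # kept on a dedicated disk (level 4)
                                     # or rotated across all disks (level 5)

# Disk 2 fails: rebuild its block from the parity and the surviving blocks.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print(rebuilt)  # b'BBBB'
```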

Increasing reliability and protecting data on a network on the basis of redundant information is implemented not only at the level of individual network elements, such as disk arrays, but also at the level of network operating systems. For example, Novell implements fault-tolerant versions of the Netware operating system, SFT (System Fault Tolerance):

  • SFT Level I provides for the creation of additional copies of the FAT and Directory Entries tables, immediate verification of each data block newly written to the file server, and reservation of about 2% of the capacity of each hard drive.
  • SFT Level II additionally provides the ability to create “mirror” disks, as well as duplication of disk controllers, power supplies, and interface cables.
  • SFT Level III allows the use of duplicated servers on the local network: one is the master, and the second, containing a copy of all information, comes into operation if the master server fails.

Security analysis

The security analysis service is designed to identify vulnerabilities so that they can be quickly eliminated. This service does not protect against anything by itself, but it helps detect (and eliminate) security gaps before an attacker can exploit them. This refers, first of all, not to architectural gaps (they are difficult to eliminate) but to “operational” gaps that appear as a result of administration errors or inattention to software updates.

Security analysis systems (also called security scanners), like the active audit tools discussed above, are based on the accumulation and use of knowledge. In this case, we mean knowledge about security gaps: how to look for them, how serious they are and how to fix them.

Accordingly, the core of such systems is a database of vulnerabilities, which determines the available range of capabilities and requires almost constant updating.

In principle, gaps of very different natures can be identified: the presence of malware (in particular, viruses), weak user passwords, poorly configured operating systems, insecure network services, missing patches, vulnerabilities in applications, etc. However, the most effective tools are network scanners (obviously due to the dominance of the TCP/IP protocol family), as well as antivirus tools (10). We classify anti-virus protection as a security analysis tool rather than a separate security service.

Scanners can identify vulnerabilities both through passive analysis, that is, by examining configuration files, ports in use, and so on, and by simulating the actions of an attacker. Some detected vulnerabilities can be eliminated automatically (for example, disinfection of infected files); others are reported to the administrator.
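One elementary building block of such a scanner, checking which TCP ports on a host accept connections, can be sketched with the standard library alone. The host and port list are placeholders; a real scanner would go on to match what it finds against its vulnerability database, as described above.

```python
# Minimal sketch of one scanner building block: probing which TCP ports
# on a host accept connections. Standard library only.
import socket

def open_ports(host, ports, timeout=0.5):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                found.append(port)
    return found

# Probe a handful of well-known service ports on the local machine:
print(open_ports("127.0.0.1", [22, 25, 80, 443, 3306]))
```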

The control provided by security analysis systems is reactive and delayed in nature; it does not protect against new attacks. However, it should be remembered that defense must be layered, and security control is quite adequate as one of its lines. It is well known that the vast majority of attacks are routine; they succeed only because known security holes remain unfixed for years.