The computer sound system consists of a sound adapter (sound card) and electroacoustic transducers (a microphone and speakers).

Sound cards perform the following functions:

§ sampling of analog signals at frequencies of 11.025, 22.05 and 44.1 kHz; the first frequency is used with 8-bit samples, the other two with 16-bit samples;

§ 8- or 16-bit quantization, encoding and decoding using linear pulse code modulation (PCM);

§ simultaneous recording and playback of audio information (full-duplex mode);

§ input of signals through a monaural microphone with automatic control of the input signal level;

§ input and output of audio signals via linear input/output;

§ mixing signals from several sources and outputting the summed signal to the output channel. The sources can be:

a) the analog CD-ROM output;

b) the music synthesizer;

c) an external source connected to the line input.

§ control of the level of the total signal and the signal of each channel separately;

§ stereo signal processing;

§ synthesis of sound vibrations using frequency modulation (FM) and wave tables (WT).

The sound card should consume no more than 13% of the computer processor's resources at a sampling frequency of 44.1 kHz and no more than 7% at f_s = 22.05 kHz. Both analog and digital signals are processed in the sound card. In accordance with the AC'97 specification (Audio Codec '97 Component Specification), developed by Intel in 1997, audio signal processing is divided between two devices:

the audio codec (AC) and the digital controller (DC).

The analog LSI should be located close to the audio I/O connectors and as far as possible from noisy digital buses. The digital LSI is located closer to the system bus of the sound card. The two chips are connected by a unified internal AC-link bus. In modern PC models these chips are placed on the computer's motherboard. An extended version of the audio codec LSI additionally performs the functions of a modem.

In simplified form, the block diagram of a PC audio system can be presented as follows (Figure 10.13). The microphone (M) converts acoustic vibrations into electrical ones, and the loudspeaker (Gr) converts electrical vibrations into acoustic ones. The input signal from the microphone is amplified, while the signal from the line input is fed directly to the analog-to-digital converter.

Figure 10.13 - Sound card structure

A discrete signal can be represented as the product of the original signal U(t) and the sampling sequence P(t):

U_d(t) = U(t)·P(t).

The sampling sequence consists of very short pulses. In the theoretical description, this sequence is represented by δ-pulses that follow one another at the sampling frequency f_0 = 1/T_0:

P(t) = Σ_n δ(t - nT_0).

The timing diagram of the sampling and quantization process is shown in Figure 10.14.
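As an illustration of these relationships, the short sketch below (Python with NumPy; the test tone and parameter values are hypothetical, chosen only for demonstration) samples a 1 kHz tone at f_0 = 44.1 kHz and quantizes the samples to 16 bits:

```python
import numpy as np

f_sig = 1000.0                 # test tone frequency, Hz (assumed value)
f_0 = 44100.0                  # sampling frequency f_0 = 1/T_0, Hz
bits = 16                      # quantization depth

n = np.arange(int(0.01 * f_0))            # sample indices for 10 ms of signal
t = n / f_0                               # sample instants t = n*T_0
u = np.sin(2 * np.pi * f_sig * t)         # U(t) taken at the sample instants, i.e. U_d(t)

scale = 2 ** (bits - 1) - 1               # 16-bit PCM full scale (32767)
codes = np.round(u * scale).astype(int)   # quantization + encoding into integer codes
print(codes[:5])                          # first few PCM code words
```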

Synthesis of sound signals. The synthesizer is designed to generate the sounds of musical instruments corresponding to particular notes, as well as to create "non-musical" sounds: wind noise, gunshots, etc.

The same note sounds different when played on different instruments (violin, trumpet, saxophone). Although a given note corresponds to a vibration of a specific frequency, the sounds of different instruments are characterized, in addition to the fundamental tone (a sine wave), by additional harmonics (overtones). It is the overtones that determine the timbre of a musical instrument's voice.

Figure 10.14 - Timing diagram of input signal digitization

A sound signal created by a musical instrument consists of three characteristic fragments, or phases. For example, when a piano key is pressed, the amplitude of the sound first quickly rises to a maximum and then decreases slightly (Figure 10.15). The initial phase of the sound signal is called the attack. The attack duration varies among instruments from a few milliseconds to tens or even hundreds of milliseconds. After the attack, the "support" phase begins, during which the sound signal has a stable amplitude. The auditory sensation of pitch is formed precisely during the support stage.

This is followed by a section of relatively rapid decay of the signal level. The envelope of the vibrations during attack, support and decay is called the amplitude envelope. Different musical instruments have different amplitude envelopes, but these phases are characteristic of almost all instruments except percussion.

To create an electronic analogue of a real sound, i.e. to synthesize sound, the harmonic envelopes that make up the real sound must be recreated. There are several synthesis methods. One of the first and most studied is additive synthesis, in which the sound is formed by adding several initial sound waves. This method was used in the classical organ: with a special valve design, pressing one key made several pipes sound at once, tuned either in unison or one or two octaves apart. When a key was pressed, the short pipes sounded first, giving the high overtones, then the middle register entered, and the bass came last.

In digital additive synthesis, N harmonics are generated with frequencies ranging from f_1 to f_N and amplitudes from A_1(t) to A_N(t). These harmonics are then added together.
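A minimal digital additive-synthesis sketch follows (Python/NumPy; the number of harmonics and their decay rates are invented for illustration, not taken from a real instrument model):

```python
import numpy as np

f_s = 44100                                # sampling rate, Hz
t = np.arange(int(0.5 * f_s)) / f_s        # 0.5 s time grid
f_1 = 440.0                                # fundamental frequency (note A4)

signal = np.zeros_like(t)
for k in range(1, 6):                      # harmonics f_1 ... f_5 = 5*f_1
    A_k = np.exp(-3 * k * t) / k           # A_k(t): each harmonic decays at its own rate
    signal += A_k * np.sin(2 * np.pi * k * f_1 * t)

signal /= np.max(np.abs(signal))           # normalize the sum to avoid clipping
```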

The second method is a type of nonlinear synthesis. To obtain one musical sound, the signal of a single generator is used, and the harmonic coloring is obtained as a result of nonlinear distortion of the original signal. To do this, a sinusoidal signal produced by a code-controlled generator with amplitude A_1 and frequency f_1 (Figure 10.16 a) is passed through a nonlinear element with some characteristic K(x) (Figure 10.16 b). Knowing the signal amplitude A_1 and the form of the characteristic K(x), one can calculate the spectrum of the output signal (Figure 10.16 c).
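The following sketch illustrates the nonlinear (waveshaping) idea; tanh is an assumed example of the characteristic K(x), not the one from Figure 10.16:

```python
import numpy as np

f_s = 44100
t = np.arange(f_s) / f_s                   # 1 s time grid
A_1, f_1 = 2.0, 440.0                      # drive amplitude A_1 controls overtone content
x = A_1 * np.sin(2 * np.pi * f_1 * t)      # sinusoidal generator output

y = np.tanh(x)                             # nonlinear element with characteristic K(x)

spectrum = np.abs(np.fft.rfft(y))          # output spectrum now contains harmonics of f_1
```

Raising A_1 drives the nonlinearity harder and enriches the overtone content, which is how the output spectrum depends on the input amplitude.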

The next widely used method is synthesis based on frequency modulation (FM), widely used in Yamaha electronic musical instruments. Frequency modulation changes the frequency f_0 of the carrier oscillation U(t) = A·sin(2π·f_0·t + φ) according to the law of the modulating oscillation x(t). The frequency-modulated oscillation has the form

U(t) = A·sin(ω_0·t + Δω·∫x(t)dt).

The magnitude of the change in the carrier frequency, Δω_0 = 2π·Δf_0, is called the frequency deviation, and the ratio of the deviation Δf_0 to the modulating frequency f_m is called the frequency modulation index, m_f = Δf_0/f_m. By changing the modulation index, one can change the spectrum of the signal at the modulator output and thereby achieve a synthesized sound close in quality to natural sound.

For a sinusoidal modulating oscillation x(t) = sin(ω_m·t), the frequency-modulated oscillation has the form

U(t) = A·sin(ω_0·t + m_f·sin(ω_m·t)).

The spectrum of modulated signals at various modulation indices is shown in Figure 10.17.
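A compact FM sketch of U(t) = A·sin(ω_0·t + m_f·sin(ω_m·t)) is shown below (Python/NumPy; the carrier and modulator frequencies are arbitrary demonstration values):

```python
import numpy as np

f_s = 44100
t = np.arange(f_s) / f_s                   # 1 s time grid
A, f_0, f_m = 1.0, 440.0, 110.0            # carrier and modulating frequencies (assumed)

for m_f in (0.5, 2.0, 8.0):                # several modulation indices
    u = A * np.sin(2 * np.pi * f_0 * t + m_f * np.sin(2 * np.pi * f_m * t))
    spectrum = np.abs(np.fft.rfft(u))
    # a larger m_f spreads the energy into more sidebands spaced f_m apart around f_0
```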

know:




PC sound system. Composition of the PC sound system. Operating principle and specifications of sound cards. Directions for improving the sound system. The principle of sound information processing. Specifications of sound systems.
Guidelines
PC sound system - a set of software and hardware that performs the following functions:


  • recording audio signals coming from external sources, such as a microphone or tape recorder, by converting input analog audio signals to digital ones and then storing them on a hard drive;

  • playback of recorded audio data using an external speaker system or headphones;

  • playback of audio CDs;

  • mixing signals from several sources when recording or playing back;

  • simultaneous recording and playback of audio signals (Full Duplex mode);

  • processing of audio signals: editing, combining or splitting signal fragments, filtering, changing the signal level;

  • audio signal processing in accordance with surround (three-dimensional - 3D-Sound) sound algorithms;

  • generating the sound of musical instruments, as well as human speech and other sounds using a synthesizer;

  • control of external electronic musical instruments via a special MIDI interface.
The PC sound system is structurally represented by sound cards, either installed in a motherboard slot, or integrated on the motherboard or an expansion card of another PC subsystem. Individual functional modules of the sound system can be implemented in the form of daughter boards installed in the corresponding connectors of the sound card.

Figure 10 - Structure of the PC sound system
The classic sound system, as shown in Figure 10, contains:


  • sound recording and playback module;

  • synthesizer module;

  • interface module;

  • mixer module;

  • acoustic system (speakers).
The first four modules are usually installed on the sound card. There are also sound cards without a synthesizer module or without a digital audio recording/playback module. Each of the modules can be made either as a separate chip or as part of a multifunctional chip. Thus, a sound system chipset can consist of one or several chips.

PC sound system designs are undergoing significant changes; motherboards with a chipset for audio processing installed on them are increasingly common.

However, the purpose and functions of the modules of a modern sound system (regardless of its design) do not change. When considering the functional modules of a sound card, the terms "PC sound system" and "sound card" are used interchangeably.
Questions for self-control:


  1. PC sound system;

  2. Composition of the PC sound system;

  3. Operating principle and technical characteristics of sound cards;

  4. Directions for improving the sound system;

  5. The principle of processing sound information;

  6. Specification of sound systems.

Topic 6.2 Audio information processing interface module
The student must:
have an idea:


  • about the PC sound system

know:


  • composition of the PC audio subsystem;

  • operating principle of the recording and playback module;

  • principle of operation of the synthesizer module;

  • operating principle of the interface module;

  • operating principle of the mixer module;

  • organizing the operation of the acoustic system.

Composition of the PC audio subsystem. Recording and playback module. Synthesizer module. Interface module. Mixer module. Operating principle and technical characteristics of acoustic systems. Software. Sound file formats. Speech recognition tools.
Guidelines
The sound system's recording and playback module carries out analog-to-digital and digital-to-analog conversions in the mode of software-driven transfer of audio data or of transfer via DMA (Direct Memory Access) channels.

Sound, as is known, is a longitudinal wave propagating freely in air or another medium, so the sound signal changes continuously in time and space. Sound recording is the storage of information about sound pressure fluctuations at the time of recording. Currently, analog and digital signals are used to record and transmit sound information; in other words, the audio signal can be in analog or digital form. If a microphone is used when recording, converting a time-continuous sound signal into a time-continuous electrical signal, the sound signal is obtained in analog form. Since the amplitude of a sound wave determines the loudness of the sound and its frequency determines the pitch, to preserve reliable information about the sound the voltage of the electrical signal must be proportional to the sound pressure, and its frequency must correspond to the frequency of the sound pressure oscillations.

In most cases, the sound signal is supplied to the input of the PC sound card in analog form. Because the PC operates only with digital signals, the analog signal must be converted to digital. At the same time, the speaker system at the output of the PC sound card accepts only analog electrical signals, so after processing in the PC the digital signal must be converted back to analog.

A/D conversion is the conversion of an analog signal into a digital one and consists of the following main steps: sampling, quantization and encoding. The scheme of analog-to-digital conversion of an audio signal is shown in Figure 11.

The analog audio signal is first fed to an analog filter, which limits the frequency band of the signal.

Signal sampling consists of taking samples of the analog signal at a specified periodicity, determined by the sampling frequency. The sampling frequency must be at least twice the frequency of the highest harmonic (frequency component) of the original audio signal. Since humans can hear sounds in the frequency range from 20 Hz to 20 kHz, the sampling frequency of the original audio signal must be at least 40 kHz, i.e. samples must be taken at least 40,000 times per second; this is why most modern PC audio systems use maximum sampling rates of 44.1 or 48 kHz.

Amplitude quantization is the measurement of the instantaneous amplitude values of the time-discrete signal, converting it into a signal that is discrete in both time and amplitude. Figure 12 shows the quantization of the analog signal level, with the instantaneous amplitude values encoded as 3-bit numbers.

Figure 11 - Scheme of analog-to-digital conversion of an audio signal
Coding consists of converting the quantized signal into a digital code, and the measurement accuracy during quantization depends on the number of bits in the code word. If the amplitude values are written as binary numbers and the code word length is N bits, the number of possible code word values, and hence of quantization levels, is 2^N. For example, representing a sample amplitude with a 16-bit code word gives a maximum of 2^16 = 65,536 amplitude gradations (quantization levels); an 8-bit representation gives 2^8 = 256.
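The 3-bit quantization and coding described above can be written out directly (a sketch; the sample values are invented):

```python
import numpy as np

bits = 3
levels = 2 ** bits                          # 2^3 = 8 quantization levels
samples = np.array([0.02, 0.41, 0.77, -0.63, -0.10])   # assumed samples in [-1, 1)

codes = np.clip(((samples + 1) / 2 * levels).astype(int), 0, levels - 1)
print([format(int(c), "03b") for c in codes])   # the 3-bit code words
```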

Figure 12 - Time sampling and quantization based on the analog signal level
Analog-to-digital conversion is carried out by a special electronic device, an analog-to-digital converter (ADC), in which discrete signal samples are converted into a sequence of numbers. The resulting digital data stream includes both the useful signal and unwanted high-frequency interference; to filter the latter out, the received digital data is passed through a digital filter.

Digital-to-analog conversion generally occurs in two stages, as shown in Figure 13. At the first stage, signal samples are extracted from the digital data stream at the sampling frequency using a digital-to-analog converter (DAC). At the second stage, a continuous analog signal is formed from the discrete samples by smoothing (interpolation) with a low-pass filter, which suppresses the periodic components of the discrete signal's spectrum.
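A sketch of these two stages follows (Python/NumPy; a moving average stands in for the reconstruction low-pass filter, a deliberate simplification):

```python
import numpy as np

f_s = 8000
t = np.arange(f_s // 10) / f_s              # 100 ms of digital samples
samples = np.sin(2 * np.pi * 200 * t)       # incoming digital data stream (assumed tone)

held = np.repeat(samples, 4)                # stage 1: DAC output held between samples
kernel = np.ones(8) / 8                     # stage 2: crude low-pass smoothing filter
analog_approx = np.convolve(held, kernel, mode="same")   # interpolated "analog" signal
```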

Recording and storing an audio signal in digital form requires a large amount of disk space: for example, a 60-second stereo signal digitized at a sampling rate of 44.1 kHz with 16-bit quantization occupies about 10 MB on the hard drive. To reduce the amount of digital data required to represent an audio signal with a given quality, compression is used, which consists in reducing the number of samples and quantization levels or the number of bits per sample.

Figure 13 - Digital-to-analog conversion circuit
Such methods of encoding audio data with special coding devices make it possible to reduce the volume of the information stream to almost 20% of the original. The choice of encoding method when recording audio information depends on the set of compression programs (codecs, from coding-decoding) supplied with the sound card software or included in the operating system.
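The figures above are easy to verify with a little arithmetic (a sketch; the 20% ratio is the one cited in the text):

```python
rate_hz, bits, channels = 44_100, 16, 2     # audio-CD stereo PCM parameters

bytes_per_sec = rate_hz * bits * channels // 8
print(bytes_per_sec)                        # 176_400 bytes per second
print(60 * bytes_per_sec / 2**20)           # ~10.1 MB per minute, as stated above
print(int(0.20 * bytes_per_sec))            # compressed to ~20%: ~35_280 bytes/s
```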

Performing the functions of analog-to-digital and digital-to-analog signal conversion, the digital audio recording and playback module contains an ADC, a DAC and a control unit, which are usually integrated into a single chip, also called a codec. The main characteristics of this module are: sampling frequency; type and capacity of ADC and DAC; audio data encoding method; ability to work in Full Duplex mode.

The sampling rate determines the maximum frequency of the signal that can be recorded or played back. For recording and playback of human speech, 6 - 8 kHz is sufficient; for music of modest quality, 20 - 25 kHz; to ensure high-quality sound (audio CD), the sampling frequency must be at least 44 kHz. Almost all sound cards support recording and playback of stereo audio at a sampling rate of 44.1 or 48 kHz.

The bit depth of the ADC and DAC determines the bit depth of the digital signal (8, 16 or 18 bits). The vast majority of sound cards are equipped with 16-bit ADCs and DACs; such cards can theoretically be classified as Hi-Fi, which should provide studio-quality sound. Some sound cards are equipped with 20- or even 24-bit ADCs and DACs, which significantly improves the quality of sound recording and playback.

Full Duplex is a data transmission mode over a channel, according to which the sound system can simultaneously receive (record) and transmit (play) audio data. However, not all sound cards fully support this mode, since they do not provide high sound quality during intensive data exchange. Such cards can be used to work with voice data on the Internet, for example, during teleconferences, when high sound quality is not required.

Synthesizer module

The sound system's electromusical digital synthesizer allows you to generate almost any sound, including the sound of real musical instruments. The operating principle of the synthesizer is illustrated in Figure 15.

Synthesis is the process of recreating the structure of a musical tone (note). The sound signal of any musical instrument has several time phases. Figure 15, a shows the phases of the sound signal produced when a piano key is pressed. For each musical instrument the shape of the signal is unique, but three phases can be distinguished in it: attack, support and decay. The set of these phases is called the amplitude envelope, whose shape depends on the type of musical instrument. The attack duration for different instruments varies from a few to several tens or even hundreds of milliseconds. In the phase called support, the amplitude of the signal remains almost unchanged, and the pitch of the musical tone is formed during this phase. The last phase, decay, corresponds to a section of fairly rapid decrease in the signal amplitude.

In modern synthesizers, sound is created as follows. A digital device, using one of the synthesis methods, generates a so-called excitation signal with a given pitch (note), whose spectral characteristics should be as close as possible to those of the simulated musical instrument in the support phase, as shown in Figure 15, b. The excitation signal is then fed to a filter that simulates the amplitude-frequency response of the real musical instrument; the amplitude envelope signal of the same instrument is supplied to the other filter input. The resulting signals are then processed to obtain special sound effects, for example echo (reverberation) or choral performance (chorus). Finally, digital-to-analog conversion is performed and the signal is filtered with a low-pass filter (LPF).


Figure 15 - Operating principle of a modern synthesizer: a - phases of the sound signal; b - synthesizer circuit
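A toy version of the Figure 15, b pipeline (Python/NumPy; the envelope time constants are invented and no effects stage is included):

```python
import numpy as np

f_s = 44100
t = np.arange(f_s) / f_s
excitation = np.sin(2 * np.pi * 440 * t)            # pitch-defining excitation signal

attack = np.clip(t / 0.02, 0, 1)                    # 20 ms attack phase
decay = np.exp(-6 * np.clip(t - 0.4, 0, None))      # decay after a 0.4 s support phase
envelope = attack * decay                           # amplitude envelope of Figure 15, a

voice = excitation * envelope                       # envelope applied to the excitation
```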
Main characteristics of the synthesizer module:


  1. sound synthesis method;

  2. amount of memory;

  3. possibility of hardware signal processing to create sound effects;

  4. polyphony - the maximum number of simultaneously reproduced sound elements.
The sound synthesis method used in a PC sound system determines not only the sound quality, but also the composition of the system. In practice, sound cards are equipped with synthesizers that generate sound using the following methods.

The synthesis method based on frequency modulation (Frequency Modulation Synthesis - FM synthesis) involves the use of at least two signal generators of complex shapes to generate the voice of a musical instrument. The carrier frequency generator generates a fundamental tone signal, frequency-modulated by a signal of additional harmonics and overtones that determine the sound timbre of a particular instrument. The envelope generator controls the amplitude of the resulting signal. The FM generator provides acceptable sound quality, is inexpensive, but does not implement sound effects. Therefore, sound cards using this method are not recommended according to the PC99 standard.

Sound synthesis based on a wave table (Wave Table Synthesis - WT synthesis) is performed by using pre-digitized sound samples of real musical instruments and other sounds stored in a special ROM, made in the form of a memory chip or integrated into the WT generator memory chip. The WT synthesizer provides high quality sound generation. This synthesis method is implemented in modern sound cards.
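A minimal wavetable-playback sketch (the single-cycle sine table is a stand-in for a real instrument sample stored in ROM):

```python
import numpy as np

f_s = 44100
table = np.sin(2 * np.pi * np.arange(2048) / 2048)   # stored single-cycle waveform

def wt_note(f_note, seconds=0.5):
    """Read the table with a phase step proportional to the desired pitch."""
    phase = np.arange(int(seconds * f_s)) * f_note * len(table) / f_s
    return table[phase.astype(int) % len(table)]

a4 = wt_note(440.0)      # note A4
e5 = wt_note(659.26)     # note E5: same table, larger phase step
```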

The amount of memory on sound cards with a WT synthesizer can be increased by installing additional memory elements (ROM) for storing instrument banks.

Sound effects are generated using a special effects processor, which can either be an independent element (chip) or be integrated into the WT synthesizer. For the vast majority of cards with WT synthesis, the reverb and chorus effects have become standard. Sound synthesis based on physical modeling involves the use of mathematical models of the sound production of real musical instruments for digital generation and subsequent conversion into an audio signal using a DAC. Sound cards using the physical modeling method are not yet widely used, because they require a powerful PC to operate.

The interface module provides data exchange between the sound system and other external and internal devices. In sound cards, the ISA interface was replaced by the PCI interface in 1998.

The PCI interface provides high bandwidth (for example, version 2.1 - over 260 MB/s), which allows audio data streams to be transmitted in parallel. Using the PCI bus improves sound quality, providing a signal-to-noise ratio of over 90 dB. In addition, the PCI bus allows cooperative processing of audio data, in which data processing and transmission tasks are distributed between the sound system and the CPU.

MIDI (Musical Instrument Digital Interface) is regulated by a special standard containing specifications for the hardware interface: the types of channels, cables and ports through which MIDI devices are connected to one another, as well as a description of the order of data exchange, i.e. the protocol for information exchange between MIDI devices. In particular, MIDI commands can be used to control lighting and video equipment during a band's stage performance. Devices with a MIDI interface are connected in series, forming a kind of MIDI network that includes a controller (a control device, which can be a PC or a musical keyboard synthesizer) and slave devices (receivers) that transmit information to the controller at its request. The total length of the MIDI chain is not limited, but the maximum cable length between two MIDI devices should not exceed 15 meters.
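At the byte level, a MIDI channel message is a status byte followed by data bytes; for example, Note On is 0x9n (n = channel) plus a note number and a velocity. A small sketch (the helper functions are illustrative, not part of any library):

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Note On: status 0x90 | channel, then note number and velocity (0-127)."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Note Off: status 0x80 | channel; release velocity 0 here."""
    return bytes([0x80 | channel, note, 0])

print(note_on(0, 60, 100).hex())   # '903c64': Note On, channel 1, middle C
```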

Connecting a PC to a MIDI network is done using a special MIDI adapter, which has three MIDI ports: input, output and pass-through, as well as two connectors for connecting joysticks.

The sound card also includes an interface for connecting CD-ROM drives.

Mixer module

The sound card mixer module performs the following functions (a short sketch follows the list):


  1. switching (connection/disconnection) of sources and receivers of audio signals, as well as regulation of their level;

  2. mixing several audio signals and adjusting the level of the resulting signal.
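A minimal sketch of both functions (Python/NumPy; the gains and master level are hypothetical parameters):

```python
import numpy as np

def mix(channels, gains, master=1.0):
    """Sum several equal-length signals with per-channel levels and a master level."""
    total = sum(g * ch for g, ch in zip(gains, channels))   # per-channel regulation + mixing
    return np.clip(master * total, -1.0, 1.0)               # keep the sum within full scale

# e.g. mix([cd_out, synth_out], gains=[0.8, 0.5], master=0.9)
```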
The main characteristics of the mixer module include:

  1. number of mixed signals on the playback channel;

  2. regulation of the signal level in each mixed channel;

  3. regulation of the level of the total signal;

  4. amplifier output power;

  5. availability of connectors for connecting external and internal
    receivers/sources of audio signals.
Audio signal sources and receivers are connected to the mixer module via external or internal connectors. The external sound system connectors are usually located on the rear panel of the system unit case: Joystick/MIDI - for connecting a joystick or MIDI adapter; Mic In - for connecting a microphone; Line In - line input for connecting any audio signal sources; Line Out - line output for connecting any audio signal receivers; Speaker - for connecting headphones or a passive speaker system.

Software control of the mixer is carried out either using Windows tools or using the mixer program supplied with the sound card software.

Compatibility of the sound system with one of the sound card standards means that the sound system will provide high-quality reproduction of sound signals. Compatibility issues are especially important for DOS applications. Each of them contains a list of sound cards that the DOS application is designed to work with.

The Sound Blaster standard is supported by applications in the form of DOS games, in which the sound is programmed with a focus on sound cards of the Sound Blaster family.

Microsoft's Windows Sound System (WSS) standard includes a sound card and a software package aimed primarily at business applications.

The acoustic system (AS) directly converts the audio electrical signal into acoustic vibrations and is the last link in the sound-reproducing path. A speaker system usually includes several audio speakers, each of which can have one or more drivers. The number of speakers in a speaker system depends on the number of components that make up the sound signal and form separate sound channels. For example, a stereo signal contains two components (the left and right stereo signals), so a stereo speaker system needs at least two speakers. A Dolby Digital audio signal carries information for six audio channels: two front stereo channels, a center (dialogue) channel, two rear channels and a subwoofer channel; to reproduce such a signal, the speaker system must have six speakers.

As a rule, the operating principle and internal structure of sound speakers for household use and of those used in technical means of informatization as part of a PC speaker system are practically the same.

Typically, a PC speaker system consists of two speakers that provide stereo playback. Each speaker usually has one driver, but expensive models use two: one for high and one for low frequencies. At the same time, modern speaker systems can reproduce sound in almost the entire audible frequency range thanks to a special design of the driver or loudspeaker housing.

To reproduce low and ultra-low frequencies with high quality, a third sound unit is used in addition to the two speakers - a subwoofer, installed under the desktop. Such a three-component PC speaker system consists of two so-called satellite speakers reproducing the mid and high frequencies (from approximately 150 Hz to 20 kHz) and a subwoofer reproducing frequencies below 150 Hz.

A distinctive feature of PC speakers is that they may have their own built-in power amplifier. A speaker with a built-in amplifier is called active; passive speakers have no amplifier.

The main advantage of active speakers is the ability to connect to the linear output of a sound card. The active speaker is powered either from batteries (accumulators) or from the electrical network through a special adapter, made in the form of a separate external unit or power module installed in the housing of one of the speakers.

The output power of PC speakers can vary widely depending on the specifications of the amplifier and drivers. If the system is intended for computer game sound, 15 - 20 W of power per speaker is enough for a medium-sized room. If good audibility must be ensured during a lecture or presentation in a large auditorium, one speaker with a power of up to 30 W per channel can be used. As speaker power grows, so do the dimensions and the cost. Modern speaker system models also have a headphone jack; when headphones are connected, sound playback through the speakers automatically stops.

Main characteristics of the speakers: reproduced frequency band, sensitivity, harmonic distortion, power.

The reproduced frequency band (Frequency Response) is the amplitude-frequency dependence of the sound pressure, i.e. the dependence of the sound pressure (sound intensity) on the frequency of the alternating voltage supplied to the speaker coil. The frequency band perceived by the human ear lies in the range from 20 to 20,000 Hz. Speakers, as a rule, have a range limited at the low-frequency end to 40 - 60 Hz. The problem of reproducing low frequencies can be solved by using a subwoofer.

The sensitivity of a speaker (Sensitivity) is characterized by the sound pressure it creates at a distance of 1 m when an electrical signal with a power of 1 W is applied to its input. In accordance with the requirements of the standards, sensitivity is defined as the average sound pressure in a certain frequency band.
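Sensitivity makes a simple loudness estimate possible: at 1 m, a speaker of sensitivity S dB (1 W / 1 m) produces roughly S + 10·log10(P) dB SPL from P watts of electrical power (a sketch with an assumed sensitivity value):

```python
import math

S = 86.0                        # assumed sensitivity, dB (1 W / 1 m)
for P in (1, 15, 30):           # electrical power, W
    print(P, "W ->", round(S + 10 * math.log10(P), 1), "dB SPL at 1 m")
```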

The higher the value of this characteristic, the better the speaker conveys the dynamic range of the music program. The difference between the "quietest" and "loudest" sounds of modern phonograms is 90 - 95 dB or more. Speakers with high sensitivity reproduce both quiet and loud sounds quite well.

Total Harmonic Distortion (THD) evaluates nonlinear distortion associated with the appearance of new spectral components in the output signal. The harmonic distortion factor is standardized in several frequency ranges. For example, for high-quality Hi-Fi speakers this coefficient should not exceed: 1.5% in the frequency range 250 - 1000 Hz; 1.5% in the frequency range 1000 - 2000 Hz and 1.0% in the frequency range 2000 - 6300 Hz. The lower the harmonic distortion value, the better the speaker quality.
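THD can be computed from the amplitudes of the spectral components (a sketch; the amplitude values are invented):

```python
import math

def thd_percent(amplitudes):
    """amplitudes[0] is the fundamental; the rest are harmonics 2, 3, ..."""
    fundamental, harmonics = amplitudes[0], amplitudes[1:]
    return 100 * math.sqrt(sum(a * a for a in harmonics)) / fundamental

print(round(thd_percent([1.0, 0.01, 0.005]), 2))   # ~1.12 %
```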

The electrical power (Power Handling) that the speaker can withstand is one of the main characteristics. However, there is no direct relationship between power and sound reproduction quality. The maximum sound pressure depends rather on sensitivity, and the power of the speaker mainly determines its reliability.

The packaging of PC speakers often indicates the peak power of the speaker system, which does not always reflect the real power, since it can exceed the nominal power by a factor of 10. Because of significant differences in the physical processes involved in speaker tests, the measured electrical power values may differ several times over. To compare the power of different speakers, you need to know exactly what power the manufacturer indicates and by what test methods it was determined.

Some models of Microsoft speakers are connected not to the sound card but to a USB port. In this case, the sound arrives at the speakers in digital form and is decoded by a small chipset installed in the speakers.
Questions for self-control:


  1. Composition of the PC audio subsystem;

  2. Recording and playback module;

  3. Synthesizer module;

  4. Interface module;

  5. Mixer module;

  6. Operating principle and technical characteristics of acoustic systems. Software;

  7. Sound file formats;

  8. Speech recognition tools.

Practical work 8. PC sound system
The student must:
have an idea:


  • about the PC sound system

know:


  • principles of processing audio information;

  • composition of the PC audio subsystem;

  • main characteristics of sound cards

be able to:


  • connect and configure PC audio subsystems;

  • record audio files.

Section 7. Printing devices
Topic 7.1 Printer
The student must:
have an idea:


  • about information printing devices

know:


  • operating principle of dot matrix printer output devices. Main components and operating features, technical characteristics;

  • operating principle of inkjet printer output devices. Main components and operating features, technical characteristics;

  • operating principle of laser printer output devices. Main components and operating features, technical characteristics.

General characteristics of printing devices. Classification of printing devices. Impact printers: principle of operation, mechanical components, operating features, technical characteristics, operating rules. Basic modern models.

Inkjet printers: principle of operation, mechanical components, operating features, technical characteristics, operating rules. Basic modern models.

Laser printers: principle of operation, mechanical components, operating features, technical characteristics, operating rules. Basic modern models.
Guidelines
Printers - devices for outputting data from a computer that convert ASCII character codes into the corresponding graphic symbols and record these symbols on paper.

Printers can be classified according to a number of characteristics:


  1. the method of forming symbols (printing signs and synthesizing signs);

  2. chromaticity (black and white and color);

  3. method of forming lines (serial and parallel);

  4. printing method (character-by-character, line-by-line and page-by-page);

  5. print speed;

  6. resolution.
Printers usually operate in two modes: text and graphics.

When working in text mode, the printer receives character codes from the computer and prints the corresponding characters using the printer's own character generator. Many manufacturers equip their printers with a large number of built-in fonts. These fonts are written into the printer's ROM and can only be read from there.

For printing text information, there are print modes that provide different quality:


  • draft print quality (Draft);

  • print quality close to typographic (NLQ - Near Letter Quality);

  • typographic print quality (LQ - Letter Quality);

  • highest-quality mode (SLQ - Super Letter Quality).
In graphics mode, codes are sent to the printer that determine the sequence and location of the dots in the image.

Based on the method of applying an image to paper, printers are divided into impact, inkjet, photoelectronic and thermal printers.



Ministry of Education of the PMR

State educational institution "Tiraspol College of Informatics and Law"

Graduate work

Topic: Studying the PC sound system using a diode plate

Tiraspol

Introduction

Chapter 1. Theoretical part. Studying the PC sound system using a diode plate

1.1 Analytical review on the topic

1.2 Practical part

1.2.1 Block diagram of the transceiver device for wireless signal transmission

1.2.2 Selecting the element base for building a device for studying the PC sound system

1.2.3 Operating principle of the device for studying the PC sound system

1.2.4 Application of the device

Chapter 2. Labor protection. Safety precautions for the maintenance of computer equipment

2.1 Industrial sanitation and occupational hygiene

2.2 Requirements for the organization and equipment of a technician’s workplace

2.3 Fire safety requirements

Conclusion

List of used literature

Introduction

The traditional way to transfer sound from a PC sound card to a speaker amplifier is through cables. This thesis project examines the wireless transmission of sound over a laser beam at a distance of up to several meters.

This work is relevant, since the sound system significantly expands the capabilities of the PC as a technical means of informatization. The PC sound system is structurally represented by sound cards, either installed in a motherboard slot, or integrated on the motherboard or an expansion card of another PC subsystem.

The purpose of this thesis is to study circuit design solutions for devices for studying the operation of a PC sound system, to develop a block diagram and a schematic diagram, and to build a prototype.

To achieve this goal, the following tasks need to be solved: review the literature on the topic of the thesis, conduct research on the topic (develop the circuits, design the device, analyze its performance characteristics), and provide engineering calculations for the device being developed.

The purpose of labor protection is the scientific analysis of working conditions, technological processes, apparatus and equipment from the point of view of the possibility of the occurrence of hazardous factors, the release of harmful industrial substances. Based on this analysis, hazardous areas of production and possible emergency situations are identified and measures are developed to eliminate them or limit the consequences.

Studying and solving problems related to ensuring healthy and safe conditions in which human work takes place is one of the most important tasks in the development of new technologies and production systems.

Studying and identifying the possible causes of industrial accidents, occupational diseases, explosions and fires, and developing measures and requirements aimed at eliminating these causes, make it possible to create safe and favorable conditions for human work. Comfortable and safe working conditions are among the main factors influencing productivity, safety and human health.

Chapter 1. Theoretical part. Studying the PC sound system using a diode plate

1.1 Analytical review on the topic

The PC sound system in the form of a sound card appeared in 1989, significantly expanding the capabilities of the PC as a technical means of informatization.

PC sound system is a complex of software and hardware that performs the following functions:

recording audio signals coming from external sources, such as a microphone or tape recorder, by converting input analog audio signals to digital ones and then storing them on a hard drive;

playback of recorded audio data using an external speaker system or headphones (headphones);

playback of audio CDs;

mixing (mixing) when recording or playing back signals from several sources;

simultaneous recording and playback of audio signals (Full Duplex mode);

processing of audio signals: editing, combining or separating signal fragments, filtering, changing its level;

audio signal processing in accordance with surround (three-dimensional - 3D-Sound) sound algorithms;

generating the sound of musical instruments, as well as human speech and other sounds using a synthesizer;

control of external electronic musical instruments via a special MIDI interface.

The PC sound system is structurally composed of sound cards, either installed in a motherboard slot, or integrated on the motherboard or an expansion card of another PC subsystem, as well as devices for recording and reproducing audio information (speaker system). Individual functional modules of the sound system can be implemented in the form of daughter boards installed in the corresponding connectors of the sound card.

Classic sound system as shown in fig. 1, contains:

sound recording and playback module;

synthesizer module;

interface module;

mixer module;

sound system.

Rice. 1 - PC sound system structure

The first four modules are usually installed on the sound card. Moreover, there are sound cards without a synthesizer module or a digital audio recording/playback module. Each of the modules can be made either in the form of a separate microcircuit or be part of a multifunctional microcircuit. Thus, a sound system Chipset can contain either several or one chip.

PC sound system designs are undergoing significant changes; There are motherboards with a Chipset installed on them for audio processing.

However, the purpose and functions of the modules of a modern sound system (regardless of its design) do not change. When considering the functional modules of a sound card, it is customary to use the terms “PC sound system” or “sound card”.

RECORDING AND PLAYBACK MODULE

The audio system recording and playback module carries out analog-to-digital and digital-to-analog conversions in the mode of software transmission of audio data or transmission via DMA channels (Direct Memory Access - direct memory access channel).

Sound, as is known, is a longitudinal wave that propagates freely in air or another medium, so the sound signal continuously changes in time and space.

Sound recording is the storage of information about sound pressure fluctuations at the time of recording. Currently, analog and digital signals are used to record and transmit sound information. In other words, the audio signal can be in analog or digital form.

If, when recording sound, a microphone is used, which converts a time-continuous sound signal into a time-continuous electrical signal, a sound signal is obtained in analog form. Since the amplitude of a sound wave determines the loudness of the sound, and its frequency determines the pitch of the sound tone, in order to maintain reliable information about the sound, the voltage of the electrical signal must be proportional to the sound pressure, and its frequency must correspond to the frequency of sound pressure oscillations.

In most cases, the sound signal is supplied to the input of the PC sound card in analog form. Due to the fact that the PC operates only with digital signals, the analog signal must be converted to digital. At the same time, the speaker system installed at the output of the PC sound card perceives only analog electrical signals, therefore, after processing the signal using a PC, it is necessary inverse conversion digital signal to analog.

A/D conversion is the conversion of an analog signal to a digital signal and consists of the following main steps: sampling, quantization and encoding. The analog-to-digital conversion circuit of an audio signal is shown in Fig. 2.

Rice. 2 - Circuit for analog-to-digital audio signal conversion

The pre-analog audio signal is fed to an analog filter, which limits the frequency band of the signal.

Signal sampling consists of sampling samples of an analog signal with a given periodicity and is determined by the sampling frequency. Moreover, the sampling frequency must be no less than twice the frequency of the highest harmonic (frequency component) of the original audio signal. Since humans are able to hear sounds in the frequency range from 20 Hz to 20 kHz, the maximum sampling frequency of the original audio signal must be at least 40 kHz, i.e., samples must be taken 40,000 times per second. Because of this, most modern PC audio systems have a maximum audio sampling rate of 44.1 or 48 kHz.

Rice. 3 - Time sampling and quantization based on analog signal level

Amplitude quantization is the measurement of instantaneous amplitude values ​​of a discrete time signal and converting it into discrete time and amplitude. In Fig. Figure 3 shows the process of quantization by analog signal level, with instantaneous amplitude values ​​encoded as 3-bit numbers.

Coding consists of converting a quantized signal into a digital code. In this case, the measurement accuracy during quantization depends on the number of bits of the code word. If the amplitude values ​​are written using binary numbers and the codeword length is set to N bits, the number of possible codeword values ​​will be 2N. There can be the same number of levels of quantization of the sample amplitude. For example, if the sample amplitude value is represented by a 16-bit code word, the maximum number of amplitude gradations (quantization levels) will be 216 = 65,536. For an 8-bit representation, we get 28 = 256 amplitude gradations.

Analog-to-digital conversion is carried out by a special electronic device - an analog-to-digital converter (ADC), in which discrete signal samples are converted into a sequence of numbers. The resulting digital data stream, i.e. signal includes both useful and unwanted high-frequency interference, to filter which the received digital data is passed through a digital filter.

Digital-to-analog conversion generally occurs in two stages, as shown in Fig. 4. At the first stage, signal samples are extracted from the digital data stream using a digital-to-analog converter (DAC), following the sampling frequency. At the second stage, a continuous analog signal is formed from discrete samples by smoothing (interpolation) using a low-frequency filter, which suppresses the periodic components of the discrete signal spectrum.

Rice. 4 - Digital-to-analog conversion circuit

Recording and storing an audio signal in digital form requires a large amount of disk space. For example, a 60-second stereo audio signal digitized at a sampling rate of 44.1 kHz with 16-bit quantization requires about 10 MB of storage space on the hard drive.

To reduce the amount of digital data required to represent an audio signal with a given quality, compression is used, which consists in reducing the number of samples and quantization levels or the number of bits per sample.

Such methods of encoding audio data using special encoding devices make it possible to reduce the volume of information flow to almost 20% of the original one. The choice of encoding method when recording audio information depends on the set of compression codec programs (encoding-decoding) supplied with the sound card software or included in the operating system.

Performing the functions of analog-to-digital and digital-to-analog signal conversion, the digital audio recording and playback module contains an ADC, a DAC and a control unit, which are usually integrated into a single chip, also called a codec. The main characteristics of this module are: sampling frequency; type and capacity of ADC and DAC; audio data encoding method; ability to work in Full Duplex mode.

The sampling rate determines the maximum frequency of the signal that is recorded or played back. For recording and playback of human speech, 6 - 8 kHz is sufficient; music with low quality - 20 - 25 kHz; To ensure high quality sound (audio CD), the sampling frequency must be at least 44 kHz. Almost all sound cards support recording and playback of stereo audio at a sampling rate of 44.1 or 48 kHz.

The bit depth of the ADC and DAC determines the bit depth of the digital signal (8, 16 or 18 bits). The vast majority of sound cards are equipped with 16-bit ADCs and DACs. Such sound cards can theoretically be classified as Hi-Fi, which should provide studio-quality sound. Some sound cards are equipped with 20- and even 24-bit ADCs and DACs, which significantly improves the quality of sound recording/playback.

Full Duplex is a data transmission mode over a channel, according to which the sound system can simultaneously receive (record) and transmit (play) audio data. However, not all sound cards fully support this mode, since they do not provide high sound quality during intensive data exchange. Such cards can be used to work with voice data on the Internet, for example, during teleconferences, when high sound quality is not required.

SYNTHESIS MODULE

An electromusical digital sound system synthesizer allows you to generate almost any sound, including the sound of real musical instruments. The operating principle of the synthesizer is illustrated in Fig. 5.

Rice. 5 - The principle of operation of a modern synthesizer: a - phases of the sound signal; b - synthesizer circuit

Synthesis is the process of recreating the structure of a musical tone (note). The sound signal of any musical instrument has several time phases. In Fig. Figure 5a shows the phases of the sound signal that occurs when you press a piano key. For each musical instrument, the type of signal will be unique, but three phases can be distinguished in it: attack, support and attenuation. The set of these phases is called the amplitude envelope, the shape of which depends on the type of musical instrument. The attack duration for different musical instruments varies from a few to several tens or even hundreds of milliseconds. In the phase called support, the amplitude of the signal remains almost unchanged, and the pitch of the musical tone is formed during support. The last phase, attenuation, corresponds to a section of a fairly rapid decrease in the signal amplitude.

In modern synthesizers, sound is created as follows. A digital device using one of the synthesis methods generates a so-called excitation signal with a given pitch (note), which should have spectral characteristics as close as possible to the characteristics of the simulated musical instrument in the support phase, as shown in Fig. 5 B. Next, the excitation signal is fed to a filter that simulates the amplitude-frequency response of a real musical instrument. The amplitude envelope signal of the same instrument is supplied to the other filter input. Next, the set of signals is processed to obtain special sound effects, for example, echo (reverberation), choral performance (chorus). Next, digital-to-analog conversion and filtering of the signal are performed using a low-pass filter (LPF). Main characteristics of the synthesizer module:

sound synthesis method;

Memory;

possibility of hardware signal processing to create sound effects;

polyphony - the maximum number of simultaneously reproduced sound elements.

The sound synthesis method used in a PC sound system determines not only the sound quality, but also the composition of the system. In practice, sound cards are equipped with synthesizers that generate sound using the following methods.

The synthesis method based on frequency modulation (Frequency Modulation Synthesis - FM synthesis) involves the use of at least two signal generators of complex shapes to generate the voice of a musical instrument. The carrier frequency generator generates a fundamental tone signal, frequency-modulated by a signal of additional harmonics and overtones that determine the sound timbre of a particular instrument. The envelope generator controls the amplitude of the resulting signal. The FM generator provides acceptable sound quality, is inexpensive, but does not implement sound effects. Therefore, sound cards using this method are not recommended according to the PC99 standard.

Sound synthesis based on a wave table (Wave Table Synthesis - WT synthesis) is performed by using pre-digitized sound samples of real musical instruments and other sounds stored in a special ROM, made in the form of a memory chip or integrated into the WT generator memory chip. The WT synthesizer provides high quality sound generation. This synthesis method is implemented in modern sound cards.

The amount of memory on sound cards with a WT synthesizer can be increased by installing additional memory elements (ROM) for storing banks with instruments.

Sound effects are generated using a special effects processor, which can be either an independent element (microcircuit) or integrated into the WT synthesizer. For the vast majority of cards with WT synthesis, reverb and chorus effects have become standard.

Sound synthesis based on physical modeling involves the use of mathematical models of sound production of real musical instruments for digital generation and for further conversion into an audio signal using a DAC. Sound cards using the physical modeling method are not yet widely used because they require a powerful PC to operate.

INTERFACE MODULE

The interface module provides data exchange between the sound system and other external and internal devices.

The ISA interface was replaced in sound cards by the PCI interface in 1998.

The PCI interface provides high bandwidth (for example, version 2.1 - more than 260 Mbit/s), which allows you to transmit audio data streams in parallel. Using the PCI bus allows you to improve sound quality, providing a signal-to-noise ratio of over 90 dB. In addition, the PCI bus allows for cooperative processing of audio data, when data processing and transmission tasks are distributed between the sound system and the CPU.

MIDI (Musical Instrument Digital Interface - digital interface of musical instruments) is regulated by a special standard containing specifications for the hardware interface: types of channels, cables, ports through which MIDI devices are connected to one another, as well as a description of the order of data exchange - the information exchange protocol between MIDI devices. In particular, using MIDI commands, you can control lighting equipment and video equipment during the performance of a musical group on stage. Devices with a MIDI interface are connected in series, forming a kind of MIDI network, which includes a controller - a control device, which can be used as a PC or a musical keyboard synthesizer, as well as slave devices (receivers) that transmit information to the controller via its request. The total length of the MIDI chain is not limited, but the maximum cable length between two MIDI devices should not exceed 15 meters.

Connecting a PC to a MIDI network is done using a special MIDI adapter, which has three MIDI ports: input, output and pass-through, as well as two connectors for connecting joysticks.

The sound card includes an interface for connecting CD-ROM drives.

MIXER MODULE

The sound card mixer module does:

switching (connection/disconnection) of sources and receivers of audio signals, as well as regulation of their level;

mixing (mixing) several audio signals and adjusting the level of the resulting signal.

The main characteristics of the mixer module include:

number of mixed signals on the playback channel;

regulation of the signal level in each mixed channel;

regulation of the level of the total signal;

amplifier output power;

availability of connectors for connecting external and internal
receivers/sources of audio signals.

Audio signal sources and receivers are connected to the mixer module via external or internal connectors. External sound system connectors are usually located on the rear panel of the system unit case: Joystick/MIDI - for connecting a joystick or MIDI adapter; Mic In - to connect a microphone; Line In - linear input for connecting any sources of audio signals; Line Out - linear output for connecting any audio signal receivers; Speaker - for connecting headphones (headphones) or a passive speaker system.

Software control of the mixer is carried out either using Windows tools or using the mixer program supplied with the sound card software.

Compatibility of the sound system with one of the sound card standards means that the sound system will provide high-quality reproduction of sound signals. Compatibility issues are especially important for DOS applications. Each of them contains a list of sound cards that the DOS application is designed to work with.

The Sound Blaster standard is supported by applications in the form of DOS games, in which the sound is programmed with a focus on sound cards of the Sound Blaster family.

Microsoft's Windows Sound System (WSS) standard includes a sound card and software package aimed primarily at business applications.

ACOUSTIC SYSTEM

The acoustic system (AS) directly converts the audio electrical signal into acoustic vibrations and is the last link in the sound-reproducing path.

A speaker system usually includes several audio speakers, each of which can have one or more speakers. The number of speakers in a speaker system depends on the number of components that make up the sound signal and form separate sound channels.

For example, a stereo signal contains two components - left and right stereo signals, which requires at least two speakers in a stereo speaker system. A Dolby Digital audio signal contains information for six audio channels: two front stereo channels, a center channel (dialogue channel), two rear channels, and a subwoofer channel. Therefore, to reproduce a Dolby Digital signal, the speaker system must have six sound speakers.

As a rule, the operating principle and internal structure of sound speakers for household use and those used in technical means of informatization as part of a PC speaker system are practically the same.

Basically, a PC speaker consists of two audio speakers that provide stereo playback. Typically, each speaker in a PC speaker has one speaker, but expensive models use two: for high and low frequencies. At the same time, modern models of acoustic systems make it possible to reproduce sound in almost the entire audible frequency range due to the use of a special design of the speaker or loudspeaker housing.

To reproduce low and ultra-low frequencies with high quality, a third unit is added to the two speakers - a subwoofer, installed under the desktop. Such a three-component PC speaker system consists of two so-called satellite speakers that reproduce mid and high frequencies (from approximately 150 Hz to 20 kHz) and a subwoofer that reproduces frequencies below 150 Hz.

A distinctive feature of PC speakers is that they may have their own built-in power amplifier. A speaker with a built-in amplifier is called active; passive speakers have no amplifier.

The main advantage of active speakers is that they can be connected to the line output of a sound card. An active speaker is powered either from batteries (accumulators) or from the mains through a special adapter, made as a separate external unit or as a power module installed in the housing of one of the speakers.

The output power of PC speakers varies widely depending on the specifications of the amplifier and drivers. If the system is intended for computer games, 15-20 W per speaker is sufficient for a medium-sized room. To ensure good audibility during a lecture or presentation in a large auditorium, speakers with a power of up to 30 W per channel can be used. As the power of a speaker increases, so do its overall dimensions and cost.

Modern speaker systems have a headphone jack; when headphones are connected, playback through the speakers stops automatically.

Main characteristics of the speakers:

reproduced frequency band,

sensitivity,

total harmonic distortion,

power.

The reproducible frequency band (Frequency Response) is the amplitude-frequency dependence of sound pressure, i.e. the dependence of sound pressure (sound intensity) on the frequency of the alternating voltage supplied to the speaker coil. The frequency band perceived by the human ear spans 20 to 20,000 Hz. Speakers, as a rule, have a range limited in the low-frequency region to 40-60 Hz. The problem of reproducing low frequencies can be solved by using a subwoofer.

The sensitivity of a speaker (Sensitivity) is characterized by the sound pressure that it creates at a distance of 1 m when an electrical signal with a power of 1 W is applied to its input. In accordance with the requirements of the standards, sensitivity is defined as the average sound pressure in a certain frequency band.

The higher this value, the better the speaker conveys the dynamic range of the music program. The difference between the quietest and loudest sounds of modern phonograms is 90-95 dB or more. Speakers with high sensitivity reproduce both quiet and loud sounds well.

Total Harmonic Distortion (THD) evaluates nonlinear distortion associated with the appearance of new spectral components in the output signal. The harmonic distortion factor is standardized in several frequency ranges. For example, for high-quality Hi-Fi speakers this coefficient should not exceed: 1.5% in the frequency range 250 - 1000 Hz; 1.5% in the frequency range 1000 - 2000 Hz and 1.0% in the frequency range 2000 - 6300 Hz. The lower the harmonic distortion value, the better the speaker quality.

The electrical power (Power Handling) that the speaker can withstand is one of the main characteristics. However, there is no direct relationship between power and sound reproduction quality. The maximum sound pressure depends rather on the sensitivity, and the power of the speaker mainly determines its reliability.

The packaging of PC speakers often states the peak power of the speaker system, which does not always reflect its real power, since peak power can exceed nominal power by a factor of 10. Because of significant differences in the physical processes occurring during speaker tests, the stated electrical power values may differ several-fold. To compare the power of different speakers, you need to know exactly what kind of power the manufacturer indicates and by what test methods it was determined.

Manufacturers of high-quality and expensive speakers include Creative, Yamaha, Sony, and Aiwa. Lower-class speaker systems are produced by Genius, Altec, and JAZZ Hipster.

Some models of Microsoft speakers connect not to the sound card but to the USB port. In this case the sound arrives at the speakers in digital form, and decoding is performed by a small chip installed in the speakers.

METHODS OF COMPRESSING AUDIO INFORMATION

The simplest method of digital signal representation is pulse-code modulation, PCM (Pulse-Code Modulation). The PCM data stream is a sequence of instantaneous values, or samples, in binary code. If the converters used have a linear characteristic (the instantaneous value of the signal voltage is proportional to the code), the modulation is called linear (Linear PCM). With PCM the encoder and decoder do not transform the information, but only pack/unpack bits into bytes and data words. The bit rate is defined as the product of the sampling rate, the bit depth and the number of channels. An Audio CD gives a stream of 44,100 x 16 x 2 = 1,411,200 bps (stereo).
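The bit-rate formula is easy to verify with a few lines of Python (a sketch; the names are illustrative):

    def pcm_bit_rate(sample_rate_hz, bits_per_sample, channels):
        # sampling rate x bit depth x number of channels
        return sample_rate_hz * bits_per_sample * channels

    print(pcm_bit_rate(44_100, 16, 2))  # 1411200 bps - the Audio CD stream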

For real audio signals, linear PCM coding is uneconomical. The data stream can be reduced by a simple compression algorithm used in differential PCM, DPCM (Differential Pulse-Code Modulation). In simplified form the algorithm looks like this: the digital stream carries not the instantaneous samples themselves, but the scaled difference between the real sample and the value that the codec reconstructs from the data stream it has already generated. The difference is transmitted with fewer digits than the samples themselves. In ADPCM (Adaptive Differential Pulse-Code Modulation), the scale of the difference is determined by its history: if the difference grows monotonically, the scale increases, and vice versa.

Of course, the reconstructed signal with this representation will differ more from the original one than with conventional PCM, but a significant reduction in the digital data flow can be achieved. ADPCM has become widely used in digital storage and transmission of audio information (for example, in voice modems). From the point of view of the PC processor, the ADPCM algorithm can be implemented both in software and in hardware using a sound card (modem).
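A minimal, non-adaptive DPCM sketch in Python may clarify the idea; the step size and the difference range below are illustrative assumptions, not values from any real codec:

    STEP = 256   # scale of one difference unit (illustrative)
    LIMIT = 7    # differences are sent as small values in -7..7

    def dpcm_encode(samples):
        predicted, codes = 0, []
        for s in samples:
            diff = max(-LIMIT, min(LIMIT, round((s - predicted) / STEP)))
            codes.append(diff)
            predicted += diff * STEP  # track what the decoder will rebuild
        return codes

    def dpcm_decode(codes):
        predicted, out = 0, []
        for d in codes:
            predicted += d * STEP
            out.append(predicted)
        return out

    signal = [0, 300, 700, 1200, 1500, 1400]
    print(dpcm_decode(dpcm_encode(signal)))  # close to the input, not exact

Note that the encoder predicts from its own output rather than from the input; this keeps encoder and decoder in step, so quantization errors do not accumulate.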

More complex algorithms with a higher degree of compression are used in MPEG audio codecs. In the MPEG-1 encoder the input stream consists of 16-bit samples at 48 kHz (professional audio), 44.1 kHz (consumer equipment) or 32 kHz (used in telecommunications).

The standard defines three compression "layers" - Layer 1, Layer 2 and Layer 3 - each building on the previous one.

The initial compression is based on the psychophysical properties of sound perception. It exploits the masking property of hearing: if the signal contains two tones with similar frequencies that differ significantly in level, the more powerful signal masks the weaker one (it is simply not heard). Masking thresholds depend on the distance between the frequencies.

In MPEG, the entire audio frequency range is divided into 32 sub-bands; in each sub-band, the most powerful spectral components are determined and masking frequency thresholds are calculated for them. The masking effects of several powerful components are cumulative. The effect of masking extends not only to signals present simultaneously with the powerful one, but also to those preceding it for 2-5 ms (premasking) and subsequent ones for up to 100 ms (postmasking). Masked region signals are processed at lower resolution because they have lower signal-to-noise ratio requirements. Due to this “coarsening”, compression occurs. Psychophysical compression is performed by Layer 1.

The next stage (Layer 2) improves the accuracy of the representation and packs the information more efficiently. Here the encoder operates on a "window" of 1152 samples (roughly 24 ms at a 48 kHz sampling rate).

At the last stage (Layer 3), complex filter banks and nonlinear quantization are applied. The highest degree of compression is provided by Layer 3, which achieves a compression ratio of 11:1 with high decoding fidelity.
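Applied to the Audio CD stream computed above, this ratio gives 1,411,200 / 11 ≈ 128,000 bps, i.e. roughly the familiar 128 kbps MP3 stream.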

METHODS OF PROCESSING AUDIO INFORMATION

Digital storage makes it easy to implement many effects that previously required bulky electromechanical or electroacoustic devices or complex analog electronics.


The first of these are artificial reverberation and echo.

It is known that in an enclosed space (for example, a hall), not only direct sound reaches the listener from the source, but also reflected (multiple times) from various surfaces (walls, columns, etc.). Reflected signals arrive relative to the direct signal with various delays and attenuation. This phenomenon is called reverberation. And this phenomenon can be controlled with digital signal processing.
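The simplest delay-based effect, a single echo, can be sketched in a few lines of Python (mono samples; the delay and decay values are illustrative). Reverberation is essentially many such reflections with different delays and attenuations:

    def echo(samples, delay, decay=0.5):
        # Add a delayed, attenuated copy of the signal to itself.
        out = list(samples) + [0] * delay
        for i, s in enumerate(samples):
            out[i + delay] += int(decay * s)
        return out

    # A 0.2 s echo at a 44.1 kHz sampling rate corresponds to delay = 8820.
    print(echo([10000, 0, 0], delay=2))  # [10000, 0, 5000, 0, 0]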

More complex effects can be built by manipulating sample delays. In digital form it is easy to simulate the Doppler effect - the change in frequency when a sound source rapidly approaches or moves away from the listener. Everyone has encountered this effect: the single-tone whistle of an approaching train sounds higher, and that of a departing train lower, than the real tone. In digital playback, an accumulating sample lag makes the tone go down, while a shrinking lag makes it go up.
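A crude sketch of this in Python: reading the sample buffer faster than real time shortens the lag and raises the tone, while reading slower lowers it (the rate values are illustrative):

    def resample(samples, rate):
        out, pos = [], 0.0
        while int(pos) < len(samples):
            out.append(samples[int(pos)])
            pos += rate  # playback position advances by `rate` input samples
        return out

    tone = [0, 7, 10, 7, 0, -7, -10, -7] * 4  # a crude periodic wave
    higher = resample(tone, 1.25)  # approaching source: fewer samples, higher pitch
    lower = resample(tone, 0.8)    # receding source: more samples, lower pitch
    print(len(tone), len(higher), len(lower))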

In addition to tricks with delays, digital filtering can be used - from implementing the simplest tone controls and equalizers to "cutting out" the voice from a song (the "karaoke" effect). Everything is determined by the software and the computing resources of the processor.

DIRECTIONS FOR IMPROVING THE SOUND SYSTEM

Currently Intel, Compaq and Microsoft have proposed a new architecture for the PC sound system. According to this architecture, the audio signal processing modules are moved out of the PC case, away from the electrical noise inside it, and are placed, for example, in the speakers of the acoustic system. Sound signals are then transmitted in digital form, which significantly increases their noise immunity and the quality of sound reproduction. The digital data are carried over the high-speed USB and IEEE 1394 buses.

Another direction in improving the sound system is the creation of surround (spatial) sound, called three-dimensional, or 3D Sound (Three-Dimensional Sound). To obtain surround sound, special phase processing is applied: the phases of the left- and right-channel output signals are shifted relative to the original. This exploits the ability of the human brain to determine the position of a sound source by analyzing the relationship between the amplitudes and phases of the sound perceived by each ear. The user of a sound system equipped with a special 3D sound processing module experiences the effect of a "moving" sound source.

A new area of application for multimedia technologies is the PC-based home theater (PC-Theater), i.e. a variant of the multimedia PC intended for several users at once to play a game or watch an educational program or a DVD movie. A PC-Theater includes a special multi-channel acoustic system that creates surround sound. Surround Sound systems produce various sound effects in a room, with the user feeling that he is at the center of the sound field and the sound sources are around him. Multichannel surround sound systems are used in cinemas and are already beginning to appear as consumer devices.

In multichannel consumer systems, sound is recorded on two tracks of laser video discs or video cassettes using Dolby Surround technology developed by Dolby Laboratories. The most famous developments in this direction include:

Dolby (Surround) Pro Logic - a four-channel sound system containing left and right stereo channels, a center channel for dialogue and a rear channel for effects.

Dolby Surround Digital is a sound system consisting of 5 + 1 channels: left, right, center, left and right rear effects channels and an ultra-low frequency channel. Signals for the system are recorded in the form of a digital optical soundtrack on film.

Some models of speakers, in addition to the standard high/low-frequency, volume and balance controls, have buttons for turning on special effects, for example 3D sound or Dolby Surround.

1.2 Practical part

1.2.1 Block diagram of a transceiver device for wireless signal transmission

As wireless technologies grow in popularity, their scope of application expands as well. This diploma project considers a solution built on the principle of transmitting media data over wireless channels and designed to combine a PC and components of household audio equipment into a single multimedia complex.

From time to time, users of personal computers need to connect the computer to stationary audio equipment, for example to a music center. The simplest option in this case is, of course, a cable connection. However, the vast majority of stationary audio components have their input connectors on the rear panel, which is usually not easy to reach. A second, more serious problem is that many inexpensive radios and music centers lack inputs for external signal sources altogether.

One of the most universal ways to solve such problems is the use of low-power radio transmitters that broadcast an audio signal in the VHF range (the ability to receive programs at these frequencies is implemented in almost all modern models of radios and music centers). It is also worth noting that the signal transmitted in this way can be received by several nearby radio receivers at once.

In the case of interaction of a digital player with analog equipment (radio tape recorders, stereo systems, etc.), the transmission of sound in analog form is the only possible option. If we consider the interaction of two digital devices (for example, a computer and a media center), then in this case it is preferable to use the transmission of audio data over a wireless channel in digital form.

The traditional way to transfer sound from your PC's sound card to your speaker amplifier is through cables. The thesis project examines the wireless transmission of sound via a laser beam over a distance of up to several meters.

Figure 6 shows a block diagram of the audio signal receiver:

Figure 6 - Block diagram of an audio signal receiver

Figure 7 shows a block diagram of the audio signal transmitter:

Figure 7 - Block diagram of the audio signal transmitter

The primary winding is connected directly to the audio signal output. The minus of the battery is connected to one end of the secondary winding, and the plus of the battery directly to the plus of the laser diode.

The second end of the secondary winding is connected through a 15-47 Ohm resistor to the minus of the laser diode.

1.2.2 Selecting the element base for building a device for studying the PC sound system

To assemble the wireless transmission device, the following equipment is required: an audio signal source (personal computer, music center or mobile phone), a mains transformer with a power of 10-15 W, a resistor of 5 to 20 Ohms, and a battery.

Any mains transformer with a power of no more than 20 W and a 6 V or 12 V secondary winding can be used, or the transformer can be wound by hand (primary winding - 15 turns of 0.8 mm wire, secondary winding - 10 turns of 0.8 mm wire).

For an audio signal receiver you will need a photodiode and a low-frequency amplifier.

The LED used is an ordinary one. It can be replaced with a laser (which significantly increases the transmission distance), connected through a 5 Ohm, 0.5 W resistor. The light source can also be supplemented with optics from a DVD drive, concentrating the beam and further increasing the transmission distance. The battery is a Li-Ion (lithium-ion) one from a mobile phone; instead, a stabilized 3.5-4 V power supply with a current of no more than 1 A can be used. Parameters of the solar module: maximum voltage 14 V at a maximum current of 100 mA. The module can be replaced with any other photodetector.

1.2.3 Operating principle of the device for studying the PC sound system

From a low-power sound source (personal computer, mobile phone) the audio signal is fed to the primary winding of the transformer; the signal from the secondary winding is summed with the battery voltage and drives the LED or laser diode. The photodiode, which serves as the audio signal receiver, is connected directly to the input of the power amplifier. Turn on the music and point the beam at the photodetector: the light beam is received by the solar module connected to the amplifier, the power amplifier boosts the weak signal, and the result is reasonably high-quality sound. Instead of a laser, an ordinary LED can be used, but then the transmission range of the sound signal is no more than 30 centimeters; white or ultraviolet LEDs from lighters are preferable. With a laser pointer the audio signal can be transmitted over a distance of up to 15 meters with quite good sound quality. The transmitted sound remains quite powerful at 7 meters; at full volume the amplifier delivered 80 percent of its power to the load.

The quality of the transmitted signal is quite good, no sound distortion is observed.

1.2.4 Application of the device

Such devices have found wide application in science and technology; laser spy microphones are based on exactly this kind of transmitter and receiver.

The device is also an excellent accessory for a computer: music can play on the computer while the power amplifier is not connected to it by any cable. Speech can be transmitted the same way - it is enough to feed a microphone signal (through a preamplifier) to the input of the device - turning it into a wireless telephone, a walkie-talkie, or an effective short-range listening bug.

Chapter 2. Labor protection. Safety measures during maintenance of computer equipment

2.1 Industrial sanitation and occupational hygiene


In accordance with GOST 12.0.002 SSBT "Terms and Definitions", industrial sanitation is a system of organizational, sanitary and hygienic measures, technical means and methods that prevent or reduce the impact of harmful production factors on workers to values not exceeding acceptable ones.

The range of issues addressed within the framework of industrial sanitation and occupational hygiene includes:

Ensuring sanitary and hygienic requirements for the air in the working area;

Providing microclimate parameters at workplaces;

Providing standard natural and artificial lighting;

Protection from noise and vibration in workplaces;

Protection from ionizing radiation and electromagnetic fields;

Providing special food, protective pastes and ointments, special clothing and footwear, and personal protective equipment (gas masks, respirators, etc.);

Providing sanitary facilities, etc., in accordance with standards.

Occupational hygiene, or professional hygiene, is a branch of hygiene that studies the impact of the labor process and the surrounding production environment on the body of workers in order to develop sanitary-hygienic and therapeutic standards and measures aimed at creating more favorable working conditions and ensuring health and a high level of working capacity.

In industrial production conditions, humans are often exposed to low and high air temperatures, strong thermal radiation, dust, harmful chemicals, noise, vibration, electromagnetic waves, as well as a wide variety of combinations of these factors, which can lead to health problems and decreased performance. To prevent and eliminate these adverse effects and their consequences, studies are carried out of the characteristics of production processes, equipment and processed materials (raw, auxiliary, intermediate and by-products, production waste) from the point of view of their impact on the body of workers; of sanitary working conditions (meteorological factors, air pollution with dust and gases, noise, vibration, ultrasound, etc.); and of the nature and organization of labor processes and the changes in physiological functions during work.

Industrial sanitation is a system of organizational, preventive and sanitary-hygienic measures and means aimed at preventing workers from being exposed to harmful production factors.

Work activities can be performed outdoors or indoors.

Industrial premises are enclosed spaces in buildings and structures where people's labor activity in various types of production is carried out constantly or periodically during working hours. A person can work in various rooms of one or more buildings and structures. Under such working conditions, it is appropriate to speak of a workplace or work area.

The production environment of a workspace is determined by a complex of factors. The presence of these factors (hazards) in the work environment can affect not only the state of the body, but also productivity, quality, labor safety, lead to a decrease in performance, cause functional changes in the body and occupational diseases.

In modern conditions of labor automation, a complex of weakly expressed factors acts on the body, and studying the effect of their interaction is extremely difficult; industrial sanitation and occupational hygiene therefore solve the following problems:

taking into account the influence of working environment factors on health and performance;

improving methods for assessing performance and health status;

development of organizational, technological, engineering, socio-economic measures to rationalize the production environment;

development of preventive and health measures;

improvement of teaching methods.

Temperature and humidity in the room are the most important parameters that determine the state of comfort indoors.

Recommended indoor air temperatures according to various standards are in the ranges of 20-22°C and 22-26°C. Another physical parameter of the indoor atmosphere that directly affects the heat exchange of the human body is air humidity, which characterizes its saturation with water vapor. A lack of humidity - less than 20% relative humidity - dries out the mucous membranes and causes coughing, while excessive humidity - more than 65% - impairs heat transfer through sweat evaporation and produces a feeling of suffocation. Therefore temperature must be considered together with the humidity level.

Air speed is measured in the working area of the room, i.e. where people are: in the space from 0.15 m above the floor to a height of 1.8 m, and at least 0.15 m from the walls. The recommended air speed in the working area is 0.13-0.25 m/s. At lower speeds the room feels stuffy or even hot; at higher speeds there is a draft, which makes sense only when the temperature rises above the standard values.

Analysis of working conditions

The assessment of working conditions is carried out using a special methodology, based on an analysis of the levels of harmful and dangerous factors in a given workplace.

To carry out workplace certification, it is also necessary to comprehensively assess working conditions.

Determining the class of working conditions at workplaces is carried out with the aim of:

prioritization of health-improving activities;

creating a data bank on existing working conditions;

determination of payments and compensation for harmful working conditions.

A harmful production factor is an environmental and labor process factor that can cause a decrease in performance, pathology (occupational disease), and lead to impaired health of the offspring.

Harmful factors may include:

physical factors: temperature, humidity and air mobility, non-ionizing and ionizing radiation, noise, vibration, insufficient lighting;

chemical factors: gas and dust levels in the air;

biological factors: pathogens;

labor severity factors: physical static and dynamic load; a large number of stereotypical working movements, a large number of body bends, uncomfortable working posture;

labor stress factors: intellectual, sensory, emotional stress, monotony and duration of work.

A hazardous production factor is an environmental and labor process factor that can cause a sharp deterioration in health, injury, and death.

These include: electricity, fire, heated surfaces, moving parts of equipment, excess pressure, sharp edges of objects, height, etc.


Sound devices are becoming an integral part of every personal computer. Through competition, a universal, widely supported standard for audio software and hardware has emerged, and audio devices have evolved from expensive, exotic add-ons into a familiar part of almost any system configuration.

In modern computers, hardware audio support comes in one of the following forms:

  • audio adapter placed in the PCI or ISA bus connector;
  • a microcircuit on the system board manufactured by Crystal, Analog Devices, Sigmatel, ESS, etc.;
  • audio devices integrated into the motherboard's base chipset; such integration is found in the most recent chipsets from Intel, SiS and VIA Technologies designed for low-cost computers.

In addition to the main audio device, there are many additional audio devices: speaker systems, microphone, etc. This chapter discusses the functionality and operating features of all components of the computer audio system.

The first sound cards appeared in the late 1980s, based on developments by AdLib, Roland and Creative Labs, and were used only for games. In 1989 Creative Labs released the Game Blaster stereo sound card; later the Sound Blaster Pro board appeared.

For stable operation of the board, certain software (MS DOS, Windows) and hardware resources (IRQ, DMA and I/O port addresses) were required.

Because of problems arising with sound cards not compatible with Sound Blaster Pro, in December 1995 Microsoft released a new development, DirectX - a series of application programming interfaces (APIs) for direct interaction with hardware devices.

Today, almost every computer is equipped with a sound adapter of one type or another and a CD-ROM or CD-ROM-compatible drive. After the adoption of the MPC-1-MPC-3 standards, which defined the classification of computers, systems equipped with a sound card and a CD-ROM-compatible drive were named multimedia computers (Multimedia PC). The first standard, MPC-1, was introduced in 1990; the MPC-3 standard, which replaced it in June 1995, defined the following minimum requirements for hardware and software:

  • processor - Pentium, 75 MHz;
  • RAM - 8 MB;
  • hard disk - 540 MB;
  • CD-ROM drive - four-speed (4x);
  • VGA resolution - 640 x 480;
  • color depth - 65,536 colors (16-bit color);
  • minimum operating system - Windows 3.1.

Any computer built after 1996 that contains a sound adapter and a CD-ROM-compatible drive fully meets the requirements of the MPC-3 standard.

Currently, the criteria for a computer to belong to the multimedia class have changed somewhat due to technical advances in this area:

  • processor - Pentium III, Celeron, Athlon, Duron or any other Pentium-class processor, 600 MHz;
  • RAM - 64 MB;
  • hard drive - 3.2 GB;
  • floppy disk - 1.44 MB (3.5" high-density disk);
  • CD-ROM drive - 24-speed (24x);
  • audio sample depth - 16 bits;
  • VGA resolution - 1024 x 768;
  • color depth - 16.8 million colors (24-bit color);
  • input/output devices - parallel, serial, MIDI, game port;
  • minimum operating system - Windows 98 or Windows Me.

Although speakers or headphones are not technically part of the MPC specification or the list above, they are required for sound reproduction. In addition, a microphone is needed to enter voice information when recording audio or speaking to the computer. Systems equipped with an audio adapter usually also include inexpensive passive or active speakers (these can be replaced with headphones providing the required quality and frequency response).

A multimedia computer equipped with speakers and a microphone has a number of capabilities and provides:

  • adding stereo sound to entertainment (game) programs;
  • increasing the effectiveness of educational programs (for young children);
  • adding sound effects to demos and tutorials;
  • creating music using hardware and software MIDI;
  • adding audio comments to files;
  • implementation of audio network conferences;
  • adding sound effects to operating system events;
  • audio reproduction of text;
  • playing audio CDs;
  • playing files in .mp3 format;
  • playing video clips;
  • DVD movie playback;
  • voice control support.

Audio system components. When choosing an audio system, you must take into account the parameters of its components.

Sound card connectors. Most sound cards have the same miniature (1/8") connectors that carry signals from the card to speakers, headphones and stereo inputs; similar connectors accept signals from a microphone, CD player or tape recorder. Figure 5.4 shows the four types of connectors that, at a minimum, must be present on a sound card. The color designations for each type of connector are defined in the PC99 Design Guide, although they vary on different sound adapters.

Figure 5.4

We list the most common connectors:

  • line output of the board. The signal from this connector is sent to external devices - speaker systems, headphones, or the input of a stereo amplifier, which amplifies the signal to the required level;
  • linear input of the board. Used when mixing or recording audio from an external audio system to a hard drive;
  • connector for speaker system and headphones. Not present on all boards. Signals to the speakers are supplied from the same connector (line output) as the input of the stereo amplifier;
  • microphone input, or mono input. Used to connect a microphone. Microphone recording is monophonic. The input signal level is maintained constant and optimal for conversion. For recording, it is best to use an electrodynamic or condenser microphone designed for a load impedance from 600 Ohms to 10 kOhms. Some cheap sound cards connect the microphone to the line input;
  • joystick connector (MIDI port). It is a 15-pin D-shaped connector. Its two pins can be used to control a MIDI device, such as a keyboard synthesizer. In this case, you need to purchase a Y-shaped cable;
  • MIDI connector. Included in the joystick port, has two round 5-pin DIN connectors used to connect MIDI devices, as well as a joystick connector;
  • internal contact connector - a special connector for connecting to an internal CD-ROM drive. Allows you to play audio from CDs through speakers connected to your sound card. This connector differs from the connector for connecting a CD-ROM controller to a sound card, since it does not transfer data to the computer bus.

Additional connectors. Most modern sound adapters support DVD playback, audio processing, etc., and therefore have several additional connectors, the features of which are listed below:

  • MIDI input and output. This connector, which is not combined with the game port, allows you to simultaneously use both the joystick and external MIDI devices;
  • S/PDIF input and output (Sony/Philips Digital Interface). The connector is used to transmit digital audio signals between devices without converting them to analog form. The S/PDIF interface is sometimes labeled Dolby Digital, since it can carry Dolby Digital streams;
  • CD SPDIF. The connector is designed to connect a CD-ROM drive to a sound card using the SPDIF interface;
  • TAD input. Connector for connecting modems with support for Telephone Answering Device to the sound card;
  • digital output DIN. The connector is designed for connecting multi-channel digital speaker systems;
  • Aux input. Provides connection to the sound card from other signal sources, such as a TV tuner;
  • I2S input. Allows you to connect the digital output of external sources, such as DVD, to the sound card.

Additional connectors are usually located directly on the sound card or connected to an external unit or daughter card. For example, Sound Blaster Live! Platinum 5.1 is a device consisting of two parts. The sound adapter itself is connected via a PCI connector, and additional connectors are connected to an external LiveDrive IR switching unit, which is installed in an unused drive bay.

Volume control. Some sound cards provide manual volume control; on more complex boards volume is adjusted in software, using key combinations, directly in Windows or in any application.

Synthesizers. Currently, all boards produced are stereophonic and support the MIDI standard.

Stereo sound cards simultaneously play (and record) multiple signals from two different sources. The more voices an adapter provides, the more natural the sound. A synthesizer chip on the board, most often from Yamaha, provides 11 voices (the YM3812, or OPL2, chip) or more than 20 (the YMF262, or OPL3, chip); one or two frequency-synthesizer chips are installed.

Instead of sounds synthesized by a frequency-modulation chip, wavetable sound cards use digital recordings of real instruments and sound effects. For example, when such an audio adapter plays a trumpet part, you hear an actual trumpet sound, not an imitation of it. The first sound cards supporting this function contained up to 1 MB of sound samples stored in the adapter's memory chips. With the advent of the high-speed PCI bus and the growth of RAM capacity, most current sound cards use the so-called programmable wavetable method, loading 2-8 MB of short sound samples of various musical instruments into the computer's RAM.

Modern computer games rarely use MIDI audio, but the changes made to audio support in DirectX 8 make it a viable option for game soundtracks.

Data compression. On most boards the sound quality matches that of CDs at a sampling rate of 44.1 kHz, at which every minute of sound, even of an ordinary voice recording, consumes about 11 MB of disk space. To reduce the size of audio files, many boards use data compression. For example, the Sound Blaster ASP 16 board compresses audio in real time (directly during recording) with a compression ratio of 2:1, 3:1 or 4:1.
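The 11 MB figure is easy to check: 44,100 samples/s x 2 bytes x 2 channels x 60 s ≈ 10.6 MB per minute of stereo sound.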

Since storing an audio signal requires a large amount of disk space, it is compressed using the Adaptive Differential Pulse Code Modulation (ADPCM) method, which reduces the file size by approximately 50%. However, the sound quality deteriorates.

Multifunctional signal processors. Many sound cards use digital signal processors (DSPs). Thanks to them the boards became more "intelligent", freeing the computer's CPU from time-consuming tasks such as noise reduction and real-time data compression.

Processors are installed in many universal sound cards. For example, the programmable EMU10K1 digital signal processor on the Sound Blaster Live! board compresses data, converts text to speech and synthesizes so-called three-dimensional sound, creating the effects of sound reflection and choral accompaniment. With such a processor the sound card becomes a multifunctional device; in IBM's WindSurfer communications board, for example, the digital processor functions as a modem, a fax machine and a digital answering machine.

Sound card drivers. Most boards come with universal drivers for DOS and Windows applications. The Windows 9x and Windows NT operating systems already include drivers for popular sound cards; drivers for other boards can be purchased separately.

DOS applications usually don't have a wide selection of drivers, but computer games support Sound Blaster Pro adapters.

Recently the requirements for audio devices have increased significantly, which in turn has led to an increase in hardware power. A modern multimedia system is characterized by the following features:

  • realistic surround sound in computer games;
  • high-quality sound in DVD films;
  • speech recognition and voice control;
  • creating and recording audio files in MIDI, MP3, WAV and CD-Audio formats.

Additional hardware and software requirements necessary to achieve the above characteristics are presented in Table 5.3.

Table 5.3. Additional features and properties of sound adapters

Purpose | Required capabilities | Additional hardware | Additional software
3D games | Game port; three-dimensional sound; audio acceleration | Game controller; rear speakers | -
DVD movies | Dolby 5.1 decoding | Speakers with an audio adapter compatible with Dolby 5.1 | MPEG file decoding program
Speech recognition | Software-compatible audio adapter | Microphone | Software that allows you to dictate texts
Creating MIDI files | Audio adapter with MIDI input | MIDI-compatible musical keyboard | Program for creating MIDI files
Creating MP3 files | Digitizing of sound files | CD-R or CD-RW drive | Program for creating MP3 files
Creating WAV files | - | Microphone | Sound recording program
Creating CD-Audio files | - | External audio source | Program to convert WAV or MP3 files to CD-Audio

Minimum requirements for sound cards.

Replacing a previous-generation Sound Blaster Pro ISA audio adapter with a PCI sound card significantly improves system performance, but it is advisable to use all the capabilities of modern sound cards, which in particular include:

  • 3D audio support implemented in the chipset. The expression "3D sound" means that sounds corresponding to what is happening on the screen are heard farther or closer, behind you or off to the side. The Microsoft DirectX 8.0 interface includes support for 3D audio, but it is better to use an audio adapter with hardware 3D audio support built in;
  • use of the DirectX 8.0 interface along with other 3D audio APIs, which include, for example, Creative's EAX, Sensaura's 3D Positional Audio and the now-defunct A3D technology from Aureal;
  • 3D sound acceleration. Sound cards with chipsets that support this feature load the CPU fairly lightly, resulting in an overall increase in gaming speed. For best results, use chipsets that accelerate the largest number of 3D streams; otherwise 3D audio must be processed by the central processor, which ultimately affects the speed of the game;
  • game ports supporting force feedback game controllers.

Today there are many mid-range sound cards that support at least two of these features, with retail prices of audio adapters not exceeding $50-100. New 3D audio chipsets supplied by various manufacturers allow fans of 3D computer games to upgrade their systems according to their wishes.

Movies in DVD format on the computer screen. To watch DVD movies on your computer, you need the following components:

  • Digital disc playback software that supports Dolby Digital 5.1 output. One of the most acceptable options is the PowerDVD program;
  • An audio adapter that accepts the Dolby Digital signal from a DVD drive and outputs it to Dolby Digital 5.1-compatible audio hardware. If the appropriate hardware is not available, the Dolby 5.1 input is configured for four-speaker operation; in addition, an S/PDIF ACS (Dolby Surround) input designed for four-speaker systems can be added;
  • A Dolby Digital 5.1-compatible receiver and speakers. Most high-quality Dolby Digital 5.1 sound cards are paired with a receiver that has a dedicated analog input, but some, such as the Creative Labs Sound Blaster Live! Platinum, also support speakers with a digital input through an additional Digital DIN connector on the board.

Speech recognition. Speech recognition technology is not yet perfect, but today there are programs that allow you to give voice commands to a computer, call up required applications, open files and dialog boxes, and even dictate texts to it that previously would have had to be typed.

For the typical user this kind of application is of little use. Compaq, for example, for some time supplied computers with a microphone and a voice control application, and the application was very cheap. Watching a lot of users in an office talking to their computers was certainly interesting, but productivity did not actually increase, a lot of time was wasted as users experimented with the software, and the office became very noisy.

However, this type of software may be of some interest to users with disabilities, which is why speech recognition technology is constantly evolving.

As mentioned above, there is another type of speech recognition software that allows you to convert speech into text. This is an unusually difficult task, primarily due to the differences in speech patterns between different people, so almost all software, including some voice command applications, includes a step to “train” the technology to recognize the user’s voice. In the process of such training, the user reads text (or words) running on the computer screen. Because the text is programmed, the computer quickly adapts to the speaker's speech pattern.

As a result of the experiments, it turned out that the quality of recognition depends on the individual characteristics of speech. Additionally, some users are able to dictate entire pages of text without touching the keyboard, while others get tired of it.

There are many parameters that affect the quality of speech recognition. We list the main ones:

  • discrete and continuous speech recognition programs. Continuous (or connected) speech, which allows for a more natural “dialogue” with a computer, is currently standard, but, on the other hand, there are a number of so far insoluble problems in achieving acceptable recognition accuracy;
  • trained and non-trained programs. “Training” the program for correct speech recognition gives good results even in those applications that allow you to skip this stage;
  • large active and general dictionaries. Programs with a large active vocabulary respond much faster to oral speech, and programs with a larger general vocabulary allow you to preserve a unique vocabulary;
  • computer hardware performance. Faster processors and larger RAM significantly increase the speed and accuracy of speech recognition programs and allow developers to add new features in new versions of applications;
  • High-quality sound card and microphone: headphones with a built-in microphone are not designed for recording music or sound effects, but specifically for speech recognition.

Sound files. There are two main types of files for storing audio recordings on a personal computer. The first type, regular audio files, use the .wav, .voc, .au and .aiff formats. An audio file contains waveform data, i.e. a recording of analog audio signals in digital form suitable for storage on a computer. The Windows 9x and Windows Me operating systems define three levels of sound recording quality, as well as a level with characteristics of 48 kHz, 16-bit stereo (48,000 x 2 bytes x 2 channels = 192,000 bytes/s, about 188 KB/s). This level is designed to support playback of audio from sources such as DVD and Dolby AC-3.

To achieve a compromise between high sound quality and small file size, you can convert .wav files to .mp3 format.

Audio data compression. There are two main areas in which audio compression is used:

  • use of sound bites on websites;
  • reducing the volume of high-quality music files.

Special programs for editing audio files, in particular RealProducer from Real or Microsoft Windows Media Encoder 7, allow you to reduce the size of audio fragments with minimal loss of quality.

The most popular audio file format is .mp3. These files are close to CDs in sound quality yet much smaller than regular .wav files: a 5-minute CD-quality sound file in .wav format takes about 50 MB, while the same recording in .mp3 format takes about 4 MB.

The only drawback of .mp3 files is the lack of protection against unauthorized use: anyone can freely download such a file from the Internet (there are a great many websites offering these "pirated" recordings). Despite this shortcoming, the format has become very widespread and has led to the mass production of MP3 players.

MIDI files. A MIDI audio file is different from a .wav file in the same way that a vector image is different from a raster image. MIDI files have a .mid or .rmi extension and are completely digital, containing not a recording of the sound, but rather the commands used by the audio equipment to create it. Just as video cards use commands to create images of three-dimensional objects, MIDI sound cards work with MIDI files to synthesize music.

MIDI is a powerful programming language that became popular in the 1980s and was designed specifically for electronic musical instruments. The MIDI standard opened a new chapter in electronic music. With MIDI, you can create, record, edit and play music files on a personal computer or on a MIDI-compatible electronic musical instrument connected to a computer.

MIDI files, unlike other types of audio files, require a relatively small amount of disk space. To record 1 hour of stereo music stored in MIDI format, less than 500 KB are required. Many games use MIDI audio recording rather than sampled analog audio recording.

A MIDI file is actually a digital representation of a musical score, composed of several dedicated channels, each of which represents a separate musical instrument or type of sound. Each channel defines the frequencies and durations of the notes, so a MIDI file for, say, a string quartet contains four channels representing two violins, a viola and a cello.
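The "digital score" idea can be illustrated with a simplified event structure in Python; this sketch illustrates the concept only and is not the actual binary layout of a MIDI file:

    # Each channel holds tuples of (note number, start beat, length in beats).
    string_quartet = {
        1: [(76, 0.0, 1.0), (79, 1.0, 1.0)],  # violin I
        2: [(67, 0.0, 2.0)],                  # violin II
        3: [(60, 0.0, 2.0)],                  # viola
        4: [(48, 0.0, 2.0)],                  # cello
    }

    for channel, notes in string_quartet.items():
        for note, start, length in notes:
            print(f"channel {channel}: note {note} at beat {start}, {length} beat(s)")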

All three MPC specifications, as well as PC9x, provide support for the MIDI format in all sound cards. The General MIDI standard for most sound cards allows for up to 16 channels in a single MIDI file, but this does not necessarily limit the audio to 16 instruments. One channel is capable of representing the sound of a group of instruments; therefore a full orchestra can be synthesized.

Because a MIDI file consists of digital commands, it is much easier to edit than a .wav audio file. The corresponding software allows you to select any MIDI channel, record notes, and add effects. Certain software packages are designed to record music in a MIDI file using standard music notation. As a result, the composer writes the music directly on the computer, edits it as needed, and then prints out the sheet music for the performers. This is very convenient for professional musicians who have to spend a lot of time transcribing notes.

Playing MIDI files. Running a MIDI file on a personal computer does not play back a recording. The computer actually creates the music from the recorded commands: the system reads the MIDI file, and the synthesizer generates sounds for each channel according to the commands in the file, giving the notes the required pitch and duration. To produce the sound of a specific musical instrument, the synthesizer uses a predefined pattern, i.e. a set of commands that creates a sound similar to that of the instrument.

A sound card synthesizer is similar to an electronic keyboard synthesizer, but with limited capabilities. According to the MPC specification, the sound card must have a frequency synthesizer that can simultaneously play at least six melodic notes and two drum notes.

Frequency synthesis. Most sound cards generate sounds using a frequency synthesizer; this technology was developed back in 1976. By using one sine wave to modify another, a frequency synthesizer creates an artificial sound that resembles the sound of a specific instrument. The MIDI standard defines a set of preprogrammed sounds that can be played by most instruments.

Some frequency synthesizers use four waves, and the sounds produced have a normal, if somewhat artificial, sound. For example, the synthesized sound of a trumpet is undoubtedly similar to its sound, but no one will ever recognize it as the sound of a real trumpet.
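Two-operator FM synthesis fits in a few lines of Python; the frequencies and modulation index below are illustrative, not a preset from any real chip:

    import math

    def fm_tone(carrier_hz, modulator_hz, index, n_samples, rate=44_100):
        # One sine (the modulator) varies the phase of another (the carrier).
        out = []
        for n in range(n_samples):
            t = n / rate
            phase = (2 * math.pi * carrier_hz * t
                     + index * math.sin(2 * math.pi * modulator_hz * t))
            out.append(math.sin(phase))
        return out

    # A modulator at the carrier frequency with a moderate index gives a
    # bright, brass-like spectrum.
    print(fm_tone(440, 440, index=3.0, n_samples=5))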

Wavetable synthesis. The drawback of frequency synthesis is that the reproduced sound, even at best, does not completely match the real sound of a musical instrument. An inexpensive technology for more natural sound was developed by Ensoniq Corporation in 1984: the sound of a real instrument (piano, violin, guitar, flute, trumpet, drum, etc.) is recorded and the digitized sound is stored in a special table. The table is written either to ROM chips or to disk, and the sound card extracts the digitized sound of the desired instrument from it.

Using a wavetable synthesizer, you can select an instrument, sound just the note you need and, if necessary, change its frequency (i.e. play the given note in the corresponding octave). Some adapters use several samples of the same instrument to improve reproduction: the highest note of a piano sounds different from the lowest, so for a more natural sound the sample closest in pitch to the note being synthesized is chosen.

Thus, the size of the table largely determines the quality and variety of sounds that the synthesizer is capable of reproducing. The best quality wavetable adapters usually have several megabytes of memory on the board for storing samples. Some of them provide the ability to connect additional cards to install additional memory and record sound samples in a table.
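The playback side of wavetable synthesis is essentially a table-lookup oscillator: one stored cycle is read with a variable step to produce any pitch. A minimal Python sketch follows (the table here is a plain sine; a real card stores recorded instrument samples):

    import math

    TABLE_SIZE = 256
    table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

    def play_note(freq_hz, n_samples, rate=44_100):
        step = freq_hz * TABLE_SIZE / rate  # table positions per output sample
        phase, out = 0.0, []
        for _ in range(n_samples):
            out.append(table[int(phase) % TABLE_SIZE])
            phase += step
        return out

    print(play_note(440, 5))  # the same table read faster yields a higher note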

Connecting other devices to the MIDI connector. The MIDI interface of a sound card is also used to connect electronic instruments, sound generators, drum machines and other MIDI devices to the computer. MIDI files are then played by a high-quality external music synthesizer rather than the sound card's own, and you can create your own MIDI files by playing notes on a dedicated keyboard. The right software lets you compose a symphony on a PC by recording the notes of each instrument into its own channel and then sounding all the channels simultaneously. Many professional musicians and composers use MIDI devices to compose music directly on their computers, without traditional instruments.

There are also high-quality MIDI cards that work bi-directionally, i.e. play back pre-recorded audio tracks while recording a new track to the same MIDI file. Just a few years ago, this could only be done in a studio using professional equipment that cost hundreds of thousands of dollars.

MIDI devices connect to the audio adapter's two round 5-pin DIN connectors, which are used for input (MIDI-IN) and output (MIDI-OUT) signals. Many devices also have a MIDI-THRU port, which passes the device's input signals directly to its output, but sound cards typically do not have such a port. Interestingly, MIDI data is carried by only pins 4 and 5 of the connector; pin 2 carries the cable shield, and pins 1 and 3 are not used.

The main function of the sound card's MIDI interface is to convert a stream of bytes (i.e. 8 bits arriving in parallel) transmitted over the computer's system bus into a serial data stream in MIDI format. MIDI devices have asynchronous serial ports operating at 31.25 kbaud. Each byte is framed by one start and one stop bit, so the serial transmission of 1 byte (10 bits at 31,250 bit/s) takes 320 µs.

According to the MIDI standard, signals are transmitted over a special unshielded twisted-pair cable, which can be up to 15 m long (although most cables sold are 3 or 6 m long). Several MIDI devices can also be connected in a chain to combine their capabilities. The total length of a MIDI device chain is not limited, but each individual cable must not exceed 15 m.

In legacy-free systems there is no game port connector (MIDI port) - all devices are connected to a USB bus.

Software for MIDI devices. The Windows 9x, Windows Me and Windows 2000 operating systems come with a universal player (Media Player) that plays MIDI files. To use all the capabilities of MIDI, it is advisable to purchase specialized software for editing MIDI files (setting the playback tempo, cutting and inserting various pre-recorded fragments).

A number of sound cards come with programs that provide editing capabilities for MIDI files. In addition, many free and shareware tools (programs) are freely distributed over the Internet, but truly powerful software that allows you to create and edit MIDI files must be purchased separately.

Recording. Almost all sound cards have an input connector to which a microphone can be attached to record your voice. Using the Sound Recorder program in Windows, you can play, edit and record sound files in the .wav format.

The following are the main uses of .wav files:

  • signaling certain events in Windows (set with the Sounds option in the Windows Control Panel);
  • adding speech comments to documents of various types using the OLE and ActiveX facilities of Windows;
  • adding accompanying narration to presentations created in PowerPoint, Freelance Graphics, Corel Presentations and other programs.

To reduce their size, and for use on the Internet, .wav files are converted into .mp3 or .wma files.

Audio CDs. Using a CD-ROM drive, you can listen to audio CDs not only through speakers but also through headphones while working with other programs. A number of sound cards come with CD-playing programs, and such programs can often be downloaded free over the Internet. They typically display a control panel that mimics the front panel of a CD player and is operated with the keyboard or mouse.

Sound mixer. If you have several sound sources and only one speaker system, you must use an audio mixer. Most sound cards are equipped with a built-in mixer that blends the signals from the MIDI and WAV sources, the line input and the CD player and plays the sum on a single line output. Mixer software is usually drawn on screen to look like a standard hardware mixer panel, which makes it easy to control the volume of each source.
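As a rough illustration of what the mixer does, the following Python sketch (the function and signal names are ours, not from the text) sums two digitized sources, each scaled by its fader position, and clips the result to the 16-bit output range:

import numpy as np

def mix(sources, gains):
    """Sum equal-length 16-bit signals; gains are fader positions 0.0-1.0."""
    total = np.zeros(len(sources[0]), dtype=np.float64)
    for src, gain in zip(sources, gains):
        total += gain * np.asarray(src, dtype=np.float64)
    # Clip so the sum does not overflow the 16-bit sample range.
    return np.clip(total, -32768, 32767).astype(np.int16)

# Two test tones standing in for, say, the CD and synthesizer channels.
t = np.arange(44_100) / 44_100
cd = (10_000 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)
synth = (10_000 * np.sin(2 * np.pi * 660 * t)).astype(np.int16)
out = mix([cd, synth], gains=[0.8, 0.5])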

Sound cards: basic concepts and terms. To understand what sound cards are, you first need to master the terminology. Sound consists of vibrations (waves) propagating through air or another medium in all directions from the vibration source. When the waves reach the ear, its sensory elements perceive the vibration and sound is heard.

Each sound is characterized by frequency and intensity (loudness).

Frequency is the number of sound vibrations per second; it is measured in hertz (Hz). One cycle (period) is one complete back-and-forth movement of the vibration source. The higher the frequency, the higher the tone.

The human ear perceives only a limited range of frequencies: very few people hear sounds below 16 Hz or above 20 kHz (1 kHz = 1000 Hz). The frequency of the lowest note on a piano is 27 Hz, and of the highest just over 4 kHz. The highest audio frequency that FM broadcast stations can transmit is 15 kHz.

The volume of a sound is determined by the amplitude of the vibrations, which depends primarily on the power of the sound source. For example, a piano string sounds quiet when struck lightly because its range of vibration is small; if the key is struck harder, the amplitude of the string's vibration increases. Sound volume is measured in decibels (dB). The rustling of leaves, for example, is about 20 dB, normal street noise about 70 dB, and a close clap of thunder 120 dB.
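These decibel figures follow from the standard sound-pressure-level formula L = 20·log10(p/p0), where p0 = 20 µPa is the threshold of hearing; a short Python check (the pressure values here are illustrative, not from the text):

import math

P0 = 20e-6  # threshold of hearing, pascals

def spl_db(pressure_pa):
    return 20 * math.log10(pressure_pa / P0)

print(round(spl_db(0.0002)))   # 0.2 mPa -> 20 dB, rustling leaves
print(round(spl_db(0.063)))    # ~70 dB, street noise
print(round(spl_db(20.0)))     # 20 Pa -> 120 dB, close thunderclap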

Assessing the quality of a sound adapter. Three parameters are used to evaluate the quality of a sound adapter:

  • frequency range;
  • nonlinear distortion factor;
  • signal-to-noise ratio.

The frequency response determines the frequency range within which the level of recorded and reproduced amplitudes remains constant. For most sound cards the range is 30 Hz to 20 kHz; the wider this range, the better the board.

The nonlinear distortion factor characterizes the nonlinearity of the sound card, i.e. the deviation of the actual frequency-response curve from an ideal straight line; put simply, it characterizes the purity of sound reproduction. Every nonlinear element introduces distortion. The lower this factor, the higher the sound quality.

A high signal-to-noise ratio (in decibels) corresponds to better-quality sound reproduction.

Sampling. If a sound card is installed in your computer, sound can be recorded in digital (also called discrete) form, with the computer acting as a recording device. The sound card contains a small chip, the analog-to-digital converter (ADC), which during recording converts the analog signal into a digital form the computer can handle. Similarly, during playback a digital-to-analog converter (DAC) converts the digital recording back into sound our ears can perceive.

The process of converting the original audio signal into the digital form in which it is stored for subsequent playback (Figure 5.5) is called sampling, or digitization. The instantaneous values of the sound signal are stored at particular moments in time, called samples.


Figure 5.5 - Converting an audio signal into digital form. The more frequently samples are taken, the more closely the digital copy of the sound matches the original.
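As a minimal sketch of the sampling step in Figure 5.5 (the tone and rate below are arbitrary choices of ours), the ADC simply reads the signal fs times per second:

import numpy as np

fs = 22_050                                 # sampling rate, Hz
t = np.arange(int(fs * 0.01)) / fs          # time axis for 10 ms of audio
samples = np.sin(2 * np.pi * 440 * t)       # a 440 Hz tone standing in for the input

# Doubling fs doubles the number of samples describing the same waveform,
# which is why 22 kHz material sounds cleaner than 11 kHz material.
print(len(samples), "samples for 10 ms of audio")   # -> 220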

The first MPC standard provided for 8-bit audio. Audio bit depth is the number of bits used to represent each sample digitally.

Eight bits define 256 discrete audio signal levels; with 16 bits the number reaches 65,536, and the sound quality improves markedly. An 8-bit representation is sufficient for recording and playing back speech, but music requires 16 bits. Most older boards support only 8-bit audio; all modern boards provide 16 bits or more.
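The level counts above, and the resulting dynamic range, follow directly from the bit depth; a quick Python check (the ~6 dB-per-bit rule is a standard approximation, not stated in the original):

for bits in (8, 16):
    levels = 2 ** bits
    dyn_range_db = 6.02 * bits          # ~6 dB of dynamic range per bit
    print(f"{bits}-bit: {levels} levels, ~{dyn_range_db:.0f} dB dynamic range")
# 8-bit:  256 levels,   ~48 dB
# 16-bit: 65536 levels, ~96 dB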

The quality of recorded and played-back sound, along with resolution, is determined by the sampling rate (the number of samples per second). Theoretically it should be twice the maximum signal frequency (i.e., the upper frequency limit) plus a margin of about 10%. The upper limit of human hearing is about 20 kHz; recording on a CD uses a sampling rate of 44.1 kHz.

Audio sampled at 11 kHz (11,000 samples per second) sounds noticeably fuzzier than audio sampled at 22 kHz. Recording one minute of 16-bit stereo audio at a sampling rate of 44.1 kHz requires about 10.5 MB of disk space. With an 8-bit representation, monaural audio and an 11 kHz sampling rate, the required disk space is reduced by a factor of 16. You can check these figures with the Sound Recorder program: record a sound fragment at different sampling rates and compare the sizes of the resulting files.
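The disk-space figures are easy to reproduce; note that the 10.5 MB value assumes two (stereo) channels. A small Python check:

def pcm_bytes(rate_hz, bits, channels, seconds=60):
    """Uncompressed PCM size: rate x bytes-per-sample x channels x time."""
    return rate_hz * (bits // 8) * channels * seconds

cd_minute = pcm_bytes(44_100, 16, 2)     # 10,584,000 bytes, ~10.5 MB
voice_minute = pcm_bytes(11_025, 8, 1)   #    661,500 bytes
print(cd_minute / voice_minute)          # 16.0 -- the 16x saving noted above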

Three-dimensional sound. One of the most challenging tasks for sound cards in gaming systems is 3D audio processing. Several factors complicate problems of this kind:

  • different sound positioning standards;
  • hardware and software used to process 3D audio;
  • problems related to DirectX interface support.

Positional sound. Sound positioning is a technology common to all 3D sound cards. It involves adjusting parameters such as reverberation and reflection of the sound and equalization (balance), and specifying the "location" of the sound source. Together these components create the illusion of sounds coming from in front of the user, to the right or left, or even from behind. The most important element of positional audio is the HRTF (Head Related Transfer Function), which describes how the perception of sound changes with the shape of the ear and the angle of the listener's head: "realistic" sound is perceived quite differently when the head is turned one way or another. Multi-speaker systems that surround the user on all sides, together with sophisticated algorithms that add controlled reverberation to the reproduced sound, make computer-synthesized sound even more realistic.
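Full HRTF processing filters the signal separately for each ear; the simplest positional ingredient, amplitude panning, can be sketched in a few lines of Python (a toy model of ours, not how real 3D cards implement positioning):

import numpy as np

def pan(mono, azimuth_deg):
    """Constant-power panning: azimuth -90 (full left) .. +90 (full right)."""
    theta = (azimuth_deg + 90) / 180 * (np.pi / 2)
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono          # cos^2 + sin^2 = 1 keeps total power constant
    return np.stack([left, right], axis=1)  # (N, 2) stereo buffer

t = np.arange(22_050) / 22_050
stereo = pan(np.sin(2 * np.pi * 330 * t), azimuth_deg=45)  # source to the listener's right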

Three-dimensional sound processing. An important factor in sound quality is the way a sound card processes 3D audio, in particular:

  • centralized processing (the central processor handles the three-dimensional sound, which reduces overall system performance);
  • processing on the sound card itself (3D acceleration), using a powerful digital signal processor (DSP) mounted directly on the card.

Sound cards that process 3D audio centrally can be a major cause of reduced frame rates (the number of animation frames displayed on screen each second) when the 3D audio feature is used. In sound cards with a built-in audio processor, the frame rate changes little whether 3D audio is on or off.

As practice shows, a realistic computer game needs an average frame rate of at least 30 fps (frames per second). With a fast processor, for example a Pentium III 800 MHz, and any modern 3D sound card, this rate is achieved quite easily. With a slower processor, say a 300 MHz Celeron, a board with centralized 3D audio processing will deliver frame rates well below 30 fps. To see how 3D audio processing affects the speed of computer games, use the frame-rate counter built into most games. Frame rate is directly tied to CPU utilization: raising the processing demands on the CPU lowers the frame rate.

3D audio and 3D video technologies are of greatest interest primarily to computer game developers, but their use in a commercial environment is also not far off.

Connecting a stereo system to a sound card. Connecting a stereo system to a sound card amounts to joining them with a cable. If the sound card has both a speaker/headphone output and a stereo line output, it is better to use the latter for the stereo system: the signal reaches the line output bypassing the amplification circuits and is therefore practically free of distortion, and only the stereo system will amplify it.

Connect this output to the auxiliary input of your stereo system. If your stereo system has no auxiliary input, use another, such as the CD-player input. The stereo amplifier and the computer need not stand next to each other, so the connecting cable can be several meters long.

Some stereos and radios have a rear-panel connector for attaching a tuner, tape deck or CD player. Using this connector, together with the line input and output of the sound card, you can listen through the stereo speakers both to sound coming from the computer and to radio broadcasts.